runtimeverification / evm-semantics
K Semantics of the Ethereum Virtual Machine (EVM)
License: BSD 3-Clause "New" or "Revised" License
When I try to follow README.md, I hit the following error:
$ ./tests/ci/with-k rvk ./Build run tests/VMTests/vmArithmeticTest/add0.json
<snip>
== Using rvk
== kompile: .build/rvk/driver-kompiled/interpreter
org.kframework.utils.errorsystem.KEMException: [Error] Compiler: Could not find main module with name ETHEREUM-SIMULATION in definition. Use --main-module to specify one.
at org.kframework.utils.errorsystem.KEMException.create(KEMException.java:113)
at org.kframework.utils.errorsystem.KEMException.compilerError(KEMException.java:75)
at org.kframework.parser.concrete2kore.ParserUtils.loadDefinition(ParserUtils.java:234)
at org.kframework.parser.concrete2kore.ParserUtils.loadDefinition(ParserUtils.java:213)
at org.kframework.kompile.DefinitionParsing.parseDefinition(DefinitionParsing.java:169)
at org.kframework.kompile.DefinitionParsing.parseDefinitionAndResolveBubbles(DefinitionParsing.java:149)
at org.kframework.kompile.Kompile.parseDefinition(Kompile.java:135)
at org.kframework.kompile.Kompile.run(Kompile.java:121)
at org.kframework.kompile.KompileFrontEnd.run(KompileFrontEnd.java:69)
at org.kframework.main.FrontEnd.main(FrontEnd.java:52)
at org.kframework.main.Main.runApplication(Main.java:113)
at org.kframework.main.Main.runApplication(Main.java:103)
at org.kframework.main.Main.main(Main.java:52)
[Error] Compiler: Could not find main module with name ETHEREUM-SIMULATION in
definition. Use --main-module to specify one.
Makefile:119: recipe for target '.build/rvk/driver-kompiled/interpreter' failed
make: *** [.build/rvk/driver-kompiled/interpreter] Error 113
I'm on evm-semantics 176f6f5.
[zhicheng@taxi] (80)$ make deps
...
...
...
[mlgmp.20150824] http://www-verimag.imag.fr/~monniaux/download/mlgmp_20120224.tar.gz downloaded
=-=- Processing actions -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
#=== ERROR while installing mlgmp.20150824 ====================================#
These patches didn't apply at /u/z/h/zhicheng/.opam/4.03.0+k/build/mlgmp.20150824:
=-=- Error report -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
The following actions were aborted
∗ install astring 0.8.3
∗ install bn128 0.1.2
∗ install configurator v0.9.1
∗ install cryptokit 1.12
∗ install ocaml-protoc 1.2.0
∗ install ocp-ocamlres 0.4
∗ install ppx_base v0.9.0
∗ install ppx_compare v0.9.0
∗ install ppx_core v0.9.0
∗ install ppx_driver v0.9.1
∗ install ppx_enumerate v0.9.0
∗ install ppx_hash v0.9.0
∗ install ppx_js_style v0.9.0
∗ install ppx_metaquot v0.9.0
∗ install ppx_optcomp v0.9.0
∗ install ppx_sexp_conv v0.9.0
∗ install ppx_type_conv v0.9.0
∗ install rlp 0.1
∗ install secp256k1 0.3.2
∗ install stdio v0.9.0
∗ install topkg 0.9.1
∗ install uuidm 0.9.6
∗ install yojson 1.4.1
∗ install zarith 1.7
The following actions failed
∗ install mlgmp 20150824
No changes have been performed
Makefile:65: recipe for target 'ocaml-deps' failed
make: *** [ocaml-deps] Error 4
We intend to set up the EVM semantics so that it uses the blockchain-k-plugin we wrote for IELE in order to build a VM server that can actually mine blocks.
Currently, evm-semantics assumes that a Word is always a positive integer. It would be great if evm-semantics could provide a function like #signed, which takes an integer and converts it to its two's complement representation. This would be useful for implementing ABI encoding for the int128 type, where the value could be either positive or negative. When the value is negative, it should be converted to its two's complement representation first.
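Such a helper would just be the usual two's-complement encode/decode at a fixed bit width; a Python sketch (function names are illustrative, not existing KEVM operators):

```python
def to_signed(x: int, width: int = 256) -> int:
    """Decode an unsigned word as a two's-complement signed value."""
    return x - (1 << width) if x >= (1 << (width - 1)) else x

def to_unsigned(x: int, width: int = 256) -> int:
    """Encode a (possibly negative) integer as its two's-complement word."""
    return x % (1 << width)
```

For int128 this would be used at width 128, e.g. to_unsigned(v, 128) for a negative v (the ABI then sign-extends the value to a full 256-bit word).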
Trying to prove a simple spec file, I got this error. Can someone help me?
error "line 21 column 135: Sorts K and KItem are incompatible"
Here is the spec file.
requires "pow-verification.k"
module POW-OF-TWO-SPEC
imports ETHEREUM-SIMULATION
imports VERIFICATION
//powOfTwo
rule <k> #execute ... </k>
<mode> NORMAL </mode>
<schedule> DEFAULT </schedule>
<callStack> .List </callStack>
<memoryUsed> 0 </memoryUsed>
<localMem> .Map </localMem>
<previousGas> _ => _ </previousGas>
<program> powOfTwo(N) </program>
<pc> 0 => 56 </pc>
<wordStack> WS => 0 : 2 ^Int N : WS </wordStack>
<gas> G => _ </gas>
requires N >=Int 0
andBool N <=Int 340282366920938463463374607431768211455
andBool #sizeWordStack(WS) <Int 1021
rule <k> #execute ... </k>
<mode> NORMAL </mode>
<schedule> DEFAULT </schedule>
<callStack> .List </callStack>
<memoryUsed> 0 </memoryUsed>
<localMem> .Map </localMem>
<previousGas> _ => _ </previousGas>
<program> powOfTwo(N) </program>
<pc> 35 => 56 </pc>
<gas> G => _ </gas>
<wordStack> I : S : WS
=> 0 : S +Int S : WS </wordStack>
requires I >=Int 0
andBool S >=Int 0
andBool S +Int S ==Int 2 ^Int I
andBool S +Int S <=Int pow256
andBool #sizeWordStack(WS) <Int 1021
endmodule
The PR #167 revealed that the Vyper proofs, which do not have INVALID in them, are still ending in EVM_INVALID_INSTRUCTION exceptions.
This is a combination of two things: (1) the proofs are run in DEFAULT mode instead of BYZANTIUM; (2) the current disassembler maps anything that is undefined to INVALID when it should instead map it to UNDEFINED.
These issues need to be fixed before #167 can be merged.
I just want to know how to verify a contract, and what the relationship is between EVM and KEVM.
I'm running gcc 8.10, so I bump into an issue when the OCaml part of running make deps starts. The issue was fixed in this commit in OCaml: ocaml/ocaml@ebcc2f8
Is there a way to install with a newer OCaml version which includes this commit?
Thanks!
When I follow the readme, I see an error message from the ./kevm debug invocation
$ ./kevm debug tests/ethereum-tests/VMTests/vmArithmeticTest/add0.json
== debugging: tests/ethereum-tests/VMTests/vmArithmeticTest/add0.json
[Error] Critical: Multiple tool activations found: --interpret and --debugger
In verification.md, there is a non-critical mistake in the description of the signature extraction:
// for Solidity
rule #asWord(WS) /Int D => #asWord(#take(#sizeWordStack(WS) -Int log256Int(D), WS))
requires D modInt 256 ==Int 0 andBool D >=Int 0
andBool #sizeWordStack(WS) >=Int log256Int(D)
andBool #noOverflow(WS)
The requirement needs to be not just that D is a multiple of 256, but that it is a power of 256 (then log256Int will correspond to the number of bytes to shift by). I am not sure how to express it yet, though.
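The stronger side condition is "D is 256^k for some k >= 0"; a sketch of such a predicate (Python; is_pow256 is a hypothetical helper, not an existing K function):

```python
def is_pow256(d: int) -> bool:
    """True iff d is a power of 256 (256**k for some k >= 0)."""
    if d < 1:
        return False
    while d % 256 == 0:
        d //= 256
    return d == 1
```

Note that the original `D modInt 256 ==Int 0` also accepts values like 512, which are multiples but not powers of 256.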
The semantics of %Int is not modulo reduction for negative numbers, while modInt is.
We need to carefully go through the entire definition and fix improper uses of the modulo operation to make the semantics more robust.
See PR #105.
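Assuming %Int is a truncated remainder (sign follows the dividend, as in C) while modInt is floored (non-negative for a positive divisor), the two only differ on negative operands; a Python illustration:

```python
def rem_int(a: int, b: int) -> int:
    """Truncated remainder, like K's %Int: sign follows the dividend."""
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    return a - b * q

def mod_int(a: int, b: int) -> int:
    """Floored modulo, like K's modInt: result in [0, b) for b > 0.
    Python's % is already floored."""
    return a % b
```

For example, rem_int(-7, 5) is -2 while mod_int(-7, 5) is 3; on non-negative operands the two agree, which is why such bugs are easy to miss.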
Currently we save the "previous" gas in a cell previousGas, which is used when we have a call, because the amount of gas to allocate to the callee depends on the amount of gas from before the gas deduction.
Instead, when calculating the gas deduction, we could also calculate the amount of gas to allocate, and give the cell a more appropriate name, callGas. It is not clear whether this would make the control flow cleaner, but if it works nicely, we could suggest it as a change to the yellow paper.
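For context, the callee allocation computed from the pre-deduction gas follows the "all but one 64th" cap; a sketch of that rule as introduced by EIP-150 (earlier schedules differ):

```python
def call_gas(available: int, requested: int) -> int:
    """Gas handed to a callee: the requested amount, capped at
    all-but-one-64th of the caller's gas before the deduction (EIP-150)."""
    cap = available - available // 64
    return min(requested, cap)
```

Computing this value at deduction time is exactly what storing it in a callGas cell would capture.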
Currently, every opcode is looked up in the program map, which can become quite large. However, it's only ever necessary to actually do a lookup when we are checking whether the destination of a JUMP or JUMPI is indeed a JUMPDEST.
Proposal: add a cell called <currentBlock> which stores all the opcodes up to the next JUMPDEST. The <program> cell will then contain a map from program counters to basic blocks, where a basic block ends right before a JUMPDEST or at the end of the program. Each block is then either an "initial block" (the first chunk of the program) or a "JUMPDEST block" (one beginning with a JUMPDEST).
This paper: http://ts.data61.csiro.au/publications/csiro_full_text/Amani_BSB_18.pdf does a similar thing for the Lem semantics, but also splits basic blocks on JUMP and JUMPI opcodes, which I don't think is necessary.
Example:
<program>
0 |-> PUSH(1,2)
2 |-> PUSH(1, 2)
4 |-> ADD
5 |-> ISZERO
6 |-> PUSH(1, 10)
8 |-> JUMPI
9 |-> STOP
10 |-> JUMPDEST
11 |-> REVERT
</program>
becomes
<program>
0 |-> PUSH(1,2) ; PUSH(1, 2) ; ADD ; ISZERO ; PUSH(1, 10) ; JUMPI ; STOP ; .OpCodes
10 |-> JUMPDEST ; REVERT ; .OpCodes
</program>
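The transformation can be sketched as follows (Python, with a hypothetical representation of the program map as pc -> opcode string; real KEVM opcodes carry more structure):

```python
def to_basic_blocks(program: dict[int, str]) -> dict[int, list[str]]:
    """Split a pc -> opcode map into pc -> block, where each block
    ends right before the next JUMPDEST (or at the end of the program)."""
    blocks: dict[int, list[str]] = {}
    current = None
    for pc in sorted(program):
        op = program[pc]
        if current is None or op == "JUMPDEST":
            current = pc          # start of the initial or a JUMPDEST block
            blocks[current] = []
        blocks[current].append(op)
    return blocks
```

Applied to the example above, this yields exactly two blocks, keyed at 0 and 10.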
Currently, every time the CALL opcode is used, it calls #asMapOpCodes to decode the bytecode. Vyper implements internal function calls using the CALL opcode, and as a result a lot of time is spent decoding the same code again and again. Can we do something about it?
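Whatever fix KEVM adopts may look different, but the idea can be sketched as memoizing the decode on the code blob itself, so repeated CALLs into the same contract reuse the cached result (Python, hypothetical decoder):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def decode_program(code: bytes) -> tuple:
    """Decode bytecode once per distinct code blob; subsequent calls
    with the same bytes return the cached tuple of (pc, name, data)."""
    ops = []
    i = 0
    while i < len(code):
        op = code[i]
        if 0x60 <= op <= 0x7F:            # PUSH1..PUSH32 carry immediate data
            n = op - 0x5F
            ops.append((i, f"PUSH{n}", code[i + 1:i + 1 + n]))
            i += 1 + n
        else:
            ops.append((i, f"OP_{op:02x}", b""))
            i += 1
    return tuple(ops)
```

The cache key being the bytecode means self-calls and repeated calls into the same account pay the decoding cost only once.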
Issues.md states:
Currently in EVM, the *CODECOPY opcodes allow regarding program pieces as data, meaning that a translation back must always be maintained, because in theory a program can modify itself while executing.
The machine state (including memory) is defined in "9.4.1. Machine State" and the execution environment (including program code) is "9.3. Execution Environment."
CODECOPY is defined in the OPCODES table and brings code from the latter to the former.
How are you seeing that a program can modify itself?
Running ps -u jenkins -f on Jenkins while the blockchain tests are running shows lots of Java invocations of k-bin-to-text. This has a negative impact on the performance of our test script; k-bin-to-text should only be run if a test fails. That's what is supposed to happen, but there seems to be a regression.
When I run ./Build test-all
, I see
== Using uiuck
== kompile: .build/uiuck/ethereum-kompiled/extras/timestamp
== running: tests/VMTests/vmArithmeticTest/add0.json
== kompile: .build/uiuck/ethereum-kompiled/extras/timestamp
[Error] Critical: Parser returned a non-zero exit code: 113
Stdout:
Stderr:
[Error] Inner Parser: Parse error: unexpected end of file.
Source(<command line: -e>)
Location(1,8,1,9)
-:1: parser error : Document is empty
^
--- expected
+++ actual
@@ -1,60 +0,0 @@
-<?xml version="1.0"?>
-<generatedTop>
- <k> . </k>
- <exit-code> 0 </exit-code>
- <mode> VMTESTS </mode>
- <schedule> DEFAULT </schedule>
- <analysis> .Map </analysis>
- <ethereum>
- <evm>
- <output> .WordStack </output>
- <memoryUsed> 0 </memoryUsed>
- <callDepth> 0 </callDepth>
- <callStack> .List </callStack>
- <interimStates> .List </interimStates>
- <callLog> .Set </callLog>
- <txExecState>
- <program> .Map </program>
- <id> 0 </id>
- <caller> 0 </caller>
- <callData> .WordStack </callData>
- <callValue> 0 </callValue>
- <wordStack> .WordStack </wordStack>
- <localMem> .Map </localMem>
- <pc> 0 </pc>
- <gas> 0 </gas>
- <previousGas> 0 </previousGas>
- </txExecState>
- <substate>
- <selfDestruct> .Set </selfDestruct>
- <log> .Set </log>
- <refund> 0 </refund>
- </substate>
- <gasPrice> 0 </gasPrice>
- <origin> 0 </origin>
- <previousHash> 0 </previousHash>
- <ommersHash> 0 </ommersHash>
- <coinbase> 0 </coinbase>
- <stateRoot> 0 </stateRoot>
- <transactionsRoot> 0 </transactionsRoot>
- <receiptsRoot> 0 </receiptsRoot>
- <logsBloom> .WordStack </logsBloom>
- <difficulty> 0 </difficulty>
- <number> 0 </number>
- <gasLimit> 0 </gasLimit>
- <gasUsed> 0 </gasUsed>
- <timestamp> 0 </timestamp>
- <extraData> .WordStack </extraData>
- <mixHash> 0 </mixHash>
- <nonce> 0 </nonce>
- <ommerBlockHeaders> [ .JSONList ] </ommerBlockHeaders>
- </evm>
- <network>
- <activeAccounts> .Set </activeAccounts>
- <accounts> .AccountCellBag </accounts>
- <txOrder> .List </txOrder>
- <txPending> .List </txPending>
- <messages> .MessageCellBag </messages>
- </network>
- </ethereum>
-</generatedTop>
== failure: tests/VMTests/vmArithmeticTest/add0.json
== failed: 1 / 1
== running: tests/VMTests/vmArithmeticTest/add1.json
== kompile: .build/uiuck/ethereum-kompiled/extras/timestamp
[Error] Critical: Parser returned a non-zero exit code: 113
followed by more failures.
I was trying to prove a simple algorithm like the sumToN from the KEVM 1.0 technical report, but when I run ./kevm prove I get the error
~/evm-semantics$ ./kevm prove tests/proofs/resources/pow-of-two-spec.k
[Error] Internal: Uncaught exception thrown of type MatchError.
Please rerun your program with the --debug flag to generate a stack trace, and
file a bug report at https://github.com/kframework/k/issues (null)
When I try to run it in debug mode, I get an error in the module.
~/evm-semantics$ ./kevm debug tests/proofs/resources/pow-of-two-spec.k
== debugging: tests/proofs/resources/pow-of-two-spec.k
[Error] Critical: Parser returned a non-zero exit code: 113
Stdout:
Stderr:
[Error] Inner Parser: Scanner error: unexpected character sequence '-'.
Source(/home/ngallego/evm-semantics/./tests/proofs/resources/pow-of-two-spec.k)
Location(3,11,3,12)
module VERIFICATION
imports EDSL
imports LEMMAS
syntax Map ::= "powOfTwo" "(" Int ")" [function]
//----------------------------------------------------
rule powOfTwo(N)
=> #asMapOpCodes( PUSH(1,1); PUSH(N,1) // y = 1, x = N
; JUMPDEST // label:loop
; DUP(1); ISZERO; PUSH(1,55); JUMPI // if n == 0, jump to end
; SWAP(1); PUSH(1,1); SWAP(1); SUB // x = x - 1
; SWAP(1); SWAP(5); DUP(1); ADD // y = y + y
; SWAP(1); SWAP(3); PUSH(1,35); JUMP // jump to loop
; JUMPDEST // label:end
; .OpCodes
) [macro]
endmodule
requires "pow-verification.k"
module POW-OF-TWO-SPEC
imports ETHEREUM-SIMULATION
imports VERIFICATION
//powOfTwo
rule <k> #execute ... </k>
<mode> NORMAL </mode>
<schedule> DEFAULT </schedule>
<callStack> .List </callStack>
<memoryUsed> 0 </memoryUsed>
<localMem> .Map </localMem>
<previousGas> _ => _ </previousGas>
<program> powOfTwo(N) </program>
<pc> 0 => 56 </pc>
<wordStack> WS => 0 : 2 ^Int N : WS </wordStack>
<gas> G => G -Int (52 *Int N +Int 27) </gas>
requires N >=Int 0
andBool N <=Int 340282366920938463463374607431768211455
andBool #sizeWordStack(WS) <Int 1021
andBool G >=Int 55 *Int N +Int 27
rule <k> #execute ... </k>
<mode> NORMAL </mode>
<schedule> DEFAULT </schedule>
<callStack> .List </callStack>
<memoryUsed> 0 </memoryUsed>
<localMem> .Map </localMem>
<previousGas> _ => _ </previousGas>
<program> powOfTwo(N) </program>
<pc> 35 => 56 </pc>
<gas> G => G -Int (52 *Int N +Int 27) </gas>
<wordStack> I : S : WS
=> 0 : S +Int S : WS </wordStack>
requires I >=Int 0
andBool S >=Int 0
andBool S +Int S <=Int pow256
andBool #sizeWordStack(WS) <Int 1021
andBool G >=Int 55 *Int N +Int 27
endmodule
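For reference, the algorithm the opcode sequence in the VERIFICATION module encodes is just repeated doubling; a direct Python rendering (illustrative, not part of the spec):

```python
def pow_of_two(n: int) -> int:
    """Compute 2**n by doubling, mirroring the EVM loop: y starts at 1,
    and each iteration decrements n and doubles y until n reaches 0."""
    y = 1
    while n != 0:
        n -= 1
        y += y  # y = y + y
    return y
```

The loop invariant used in the second spec rule (`S +Int S ==Int 2 ^Int I` before each iteration) is exactly what makes this doubling loop compute a power of two.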
Perhaps we could achieve cleaner separation of all the files by pulling out the subconfiguration for the network into its own file.
We would add an extra cell, networkCommands, which is used to communicate between the two configurations. It would only deal with looking up/retrieving network data. Operators like #newAccount and the like would be defined there, and the various loaders/checkers could be as well.
Long term this would help with factoring out the network semantics from the KEVM, KIELE, and KWASM semantics so that they share a modular blockchain semantics.
After I cloned this repo, I ran make sphinx.
It gave me the following error:
mkdir .build/sphinx-docs \
&& cp -r *.md proofs .build/sphinx-docs/. \
&& cd .build/sphinx-docs \
&& pandoc --from markdown --to rst README.md --output index.rst \
&& sed -i 's/{.k[ a-zA-Z.-]*}/k/g' *.md proofs/*.md \
&& sphinx-build -b dirhtml -d ../.build/sphinx-docs/doctrees . html \
&& sphinx-build -b text -d ../.build/sphinx-docs/doctrees . html/text \
&& echo "[+] HTML generated in .build/sphinx-docs/html, text in .build/sphinx-docs/html/text"
cp: cannot stat 'proofs': No such file or directory
Makefile:287: recipe for target 'sphinx' failed
make: *** [sphinx] Error 1
Thank you.
I've been running into the scenario where I try to prove something which ends up taking a lot of time (>15 minutes) and eating up a lot of memory. Sometimes the process ends with the following output:
./../evm-semantics/kevm prove dappsys/exp-success-spec.k
[Error] Internal: Uncaught exception thrown of type OutOfMemoryError.
Please rerun your program with the --debug flag to generate a stack trace, and
file a bug report at https://github.com/kframework/k/issues (GC overhead limit
exceeded)
[Warning] Critical: missing SMTLib translation for #memoryUsageUpdate (missing
SMTLib translation for #memoryUsageUpdate)
[Warning] Critical: missing SMTLib translation for #rangeAux (missing SMTLib
translation for #rangeAux)
...
From a user's perspective, it's not a very pleasant experience having K consume my computer.
From a prover's perspective, I would like to know how I can construct lemmas to limit the proof search space.
When I follow the instructions in the README on Ubuntu 17.10, after make I see the following error
ocamlfind opt -o interpreter constants.cmx prelude.cmx plugin.cmx parser.cmx lexer.cmx run.cmx interpreter.ml -package gmp -package dynlink -package zarith -package str -package uuidm -package unix -package ethereum-semantics-plugin -linkpkg -inline 20 -nodynlink -O3 -linkall
File "KRYPTO.ml", line 38, characters 20-60:
Error: Unbound module Secp256k1.RecoverableSign
File "_none_", line 1:
Error: Cannot find file KRYPTO.cmx
I notice that the interface of secp256k1 changed a lot at 0.4.0.
Currently these two functions are checking for similar things, and perhaps we could do a small refactor to make them re-use some of the same code-paths.
Problems are: (1) #newAccount is used to create an account, and it's OK to first send funds to an account (making it existent), then create the account later (so #newAccount should not error out); (2) accountEmpty (with function #accountEmpty) existing and having its current signature.
I needed to export the following variables to make the install progress:
export PATH=~/formalMethods/k/bin:$PATH
export K_VERSION=uiuck
export PATH=~/formalMethods/pandoc-tangle/bin:$PATH
https://github.com/ehildenb/pandoc-tangle and libxml2-utils are required and should be listed in README.md as requirements. The above exports should be listed there as well.
I am still having trouble building this. When I installed K, I used the latest release, K 4.0, which does not include the KRYPTO packages: runtimeverification/k#2318
I tried to use the develop version of K but am having trouble building that, which I raised in runtimeverification/k#2326
But I think the README should be more explicit about the version of K that is required.
When I follow the readme, I see
$ ./kevm prove tests/proofs/erc20/hkg/transfer-success-1-spec.k
FATAL: tests/proofs/erc20/hkg/transfer-success-1-spec.k does not exist
It seems like the file has moved or disappeared.
Some profiling shows that roughly 50% of the time is spent on non-opcode semantic rules (including gas calculation), and half of that (i.e., 25% of the total) is spent on the push/popCallStacks. Is it possible to improve this?
Below are log files that show how long each step took (System.nanoTime()) and what the k cell is at each step.
Current install instructions say to install z3 on Ubuntu, but libz3-dev may also be needed.
It's the first time I am using the k-framework. What I tried so far:
When I try to run ./Build I get the following error:
== Using uiuck
WARNING: pandoc-tangle not installed. Ignoring changes in markdown files
== kompile: .build/uiuck/ethereum-kompiled/extras/timestamp
org.kframework.utils.errorsystem.KEMException: [Error] Compiler: Could not find module: STRING-BUFFER
Source(/home/foo/evm-semantics/./.build/uiuck/data.k)
Location(16,5,16,25)
at org.kframework.utils.errorsystem.KExceptionManager.create(KExceptionManager.java:170)
at org.kframework.utils.errorsystem.KExceptionManager.compilerError(KExceptionManager.java:84)
at org.kframework.kore.convertors.KILtoKORE.lambda$apply$152(KILtoKORE.java:140)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.kframework.kore.convertors.KILtoKORE.apply(KILtoKORE.java:141)
at org.kframework.kore.convertors.KILtoKORE.apply(KILtoKORE.java:96)
at org.kframework.parser.concrete2kore.ParserUtils.lambda$loadModules$24(ParserUtils.java:191)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1380)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at org.kframework.parser.concrete2kore.ParserUtils.loadModules(ParserUtils.java:191)
at org.kframework.parser.concrete2kore.ParserUtils.loadDefinition(ParserUtils.java:222)
at org.kframework.parser.concrete2kore.ParserUtils.loadDefinition(ParserUtils.java:206)
at org.kframework.kompile.DefinitionParsing.parseDefinition(DefinitionParsing.java:154)
at org.kframework.kompile.DefinitionParsing.parseDefinitionAndResolveBubbles(DefinitionParsing.java:139)
at org.kframework.kompile.Kompile.parseDefinition(Kompile.java:128)
at org.kframework.kompile.Kompile.run(Kompile.java:114)
at org.kframework.kompile.KompileFrontEnd.run(KompileFrontEnd.java:69)
at org.kframework.main.FrontEnd.main(FrontEnd.java:52)
at org.kframework.main.Main.runApplication(Main.java:110)
at org.kframework.main.Main.runApplication(Main.java:100)
at org.kframework.main.Main.main(Main.java:52)
[Error] Compiler: Could not find module: STRING-BUFFER
Source(/home/foo/evm-semantics/./.build/uiuck/data.k)
Location(16,5,16,25)
Makefile:109: recipe for target '.build/uiuck/ethereum-kompiled/extras/timestamp' failed
make: *** [.build/uiuck/ethereum-kompiled/extras/timestamp] Error 113
I have no idea what I might have missed during the setup. Does the evm-semantics directory belong (in a specific folder) inside the k-framework? Any help is appreciated!
Hi all,
Upon inspecting several of the proofs currently in the evm-semantics repo and attempting to reproduce them with my own token code, I've noticed several minor inconsistencies between what we claim to prove and what we are actually proving.
The inconsistencies are introduced by the current proving methodology, which as described to me involves:
The core problem with the current specification is that, in several of them, the PC value being used is not the correct PC value as specified by the jump table in the contract. This is introduced by a surprising interaction between the above steps. Specifically, it is often the case that the built-in compiler used in step 2 (Remix) is not the same version as the compiler used to generate the bytecode being proved. This can result in PC values being "shifted", and the wrong PC values can be used in the specification. This issue was confirmed: I was able to reproduce several of the wrong PC values used in the specification by compiling with a different version of solc on Remix. Because Remix is dynamically updated, it is possible that such updates were deployed during the development of the spec. If the bytecode was copied over early in development and the PC values were obtained later, this would explain the discrepancies.
In practice, what this means is that many of our proofs start from the middle of a function, skipping some validation steps. What we are proving is not wrong, but it is not exactly what we claim to prove in that it does not include the full code of the function. This "missed code", either at the beginning or end of a function, could potentially violate properties in our proofs. Furthermore, some values in our proof, for example the gas costs, are incorrect in many of the specifications.
Function | Function Signature (ABI) | Jump Table Function Start (BC) | Function End (BC) | Spec Function Start | Spec Function End |
---|---|---|---|---|---|
transfer | 0xa9059cbb | 0x17e / 382 | 0x1d4 / 468 | 0x5fd / 1533 | 0x6ec / 1772 [0x764 / 1892] |
transferFrom | 0x23b872dd | 0xbe / 190 | 0x133 / 307 | 0x332 / 818 | 0x533 / 1331 [0x5ab / 1451] |
balanceOf | 0x70a08231 | 0x134 / 308 | 0x17d / 381 | 0x13c / 316 | 0x17d / 381 |
allowance | 0xdd62ed3e | 0x1d5 / 469 | 0x23d / 573 | 0x1d5 / 469 | 0x23d / 573 |
approve | 0x095ea7b3 | 0x67 / 103 | 0xbd / 189 | 0x23e / 574 | 0x32a / 810 |
The table above summarizes the discrepancies between the program counter values used in the specification and the program counter values obtained by manually decompiling the provided bytecode. The function start (BC) value is the value that the program counter is set to based on the function selector, as specified in the jump table at the beginning of the bytecode. The end value is the result of manually tracing the decompiled bytecode until the RETURN opcode is reached, indicating the end of a function. The specification values are lifted from the PC cell in each corresponding specification. Some functions are covered by multiple specifications, like transfer; we use brackets to distinguish between the PC values of these cases in the specification.
The above shows that only one specification is using proper values, for approve.
The above may seem purely academic, but it has some important practical consequences. I only tested one specification manually to see what was being omitted: the balanceOf specification. Several of the function modifier checks were omitted, including the default check that Solidity adds to every non-payable function to make sure it is not sent any Ether.
The current specification gives <callValue> 0 </callValue>, and executes the balanceOf function assuming the call value (msg.value) is 0. Changing this to <callValue> 50 </callValue> still allows the proof to succeed, and the prover outputs true on the property.
This property is however wrong. Calling the balanceOf function with a non-zero call value will throw in the production contract (and indeed in the bytecode being verified).
It is likely many of these specifications will allow you to prove similarly incorrect results over their target functions with some minor modification. In addition, the concrete gas bounds we derive (which may be considered as part of the property) are again wrong in every case but approve, as the full function code is not executed.
Several remediations are possible. One is ABI-level proving, which would nicely solve this issue. In fact, the properties we've proved here would in most cases fail to verify at the ABI level. This anecdotal example can also serve to motivate ABI-level verification.
Another possible solution is a tool that automatically extracts PC locations from bytecode for a given function / function identifier. Using this rather than Remix in our proofs would likely solve this issue.
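Such a tool could start from the standard Solidity dispatch shape; a minimal sketch, assuming the common `PUSH4 <selector>; EQ; PUSH2 <dest>; JUMPI` pattern (real dispatchers may deviate from this):

```python
def selector_dest(code: bytes, selector: bytes) -> int:
    """Find the jump-table destination for a 4-byte function selector by
    scanning for PUSH4 <selector> EQ PUSH2 <dest> JUMPI, skipping over
    PUSH immediate data. Returns -1 if the pattern is not found."""
    i = 0
    while i < len(code):
        op = code[i]
        if op == 0x63 and code[i + 1:i + 5] == selector:  # PUSH4 <selector>
            j = i + 5
            if (j + 4 < len(code)
                    and code[j] == 0x14        # EQ
                    and code[j + 1] == 0x61    # PUSH2
                    and code[j + 4] == 0x57):  # JUMPI
                return int.from_bytes(code[j + 2:j + 4], "big")
        if 0x60 <= op <= 0x7F:  # PUSH1..PUSH32 carry (op - 0x5F) data bytes
            i += op - 0x5F
        i += 1
    return -1
```

Skipping PUSH immediates matters: a naive byte search can match selector bytes that occur inside push data rather than in the dispatch table.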
Right now there is an extra rule for RETURNDATACOPY which checks for invalid memory access, but this should already be caught by the #memory function.
We should remove the redundant check.
Currently the README lists the Sphinx dependencies between the Ubuntu example and the other examples.
It would be better to move the Sphinx documentation down so that people do not think it's as important.
Currently, when I modify the Makefile to run more blockchain tests,
--- a/Makefile
+++ b/Makefile
@@ -136,6 +136,7 @@ tests/ethereum-tests/VMTests/%.test: tests/ethereum-tests/VMTests/% build
# BlockchainTests
+more_bchain_tests=$(wildcard tests/ethereum-tests/BlockchainTests/bcInvalidHeaderTest/*.json)
bchain_tests=$(wildcard tests/ethereum-tests/BlockchainTests/GeneralStateTests/*/*.json)
slow_bchain_tests=$(wildcard tests/ethereum-tests/BlockchainTests/GeneralStateTests/stQuadraticComplexityTest/*.json) \
$(wildcard tests/ethereum-tests/BlockchainTests/GeneralStateTests/stStaticCall/static_Call50000*.json) \
@@ -145,6 +146,7 @@ slow_bchain_tests=$(wildcard tests/ethereum-tests/BlockchainTests/GeneralStateTe
quick_bchain_tests=$(filter-out $(slow_bchain_tests), $(bchain_tests))
bchain-test-all: $(bchain_tests:=.test)
+more-bchain-test: $(more_bchain_tests:=.test)
bchain-test: $(quick_bchain_tests:=.test)
tests/ethereum-tests/BlockchainTests/%.test: tests/ethereum-tests/BlockchainTests/% build
I get an error
$ make more-bchain-test
./kevm test tests/ethereum-tests/BlockchainTests/bcInvalidHeaderTest/wrongStateRoot.json
--- expected
+++ actual
@@ -1 +1 @@
-`<generatedTop>`(`<k>`(.K),`<exit-code>`(#token("0","Int")),`<mode>`(`SUCCESS_ETHEREUM-SIMULATION`(.KList)),`<schedule>`(`DEFAULT_EVM`(.KList)),`<analysis>`(`.Map`(.KList)),`<ethereum>`(`<evm>`(`<output>`(`.WordStack_EVM-DATA`(.KList)),`<memoryUsed>`(#token("0","Int")),`<callDepth>`(#token("0","Int")),`<callStack>`(`.List`(.KList)),`<interimStates>`(`.List`(.KList)),`<substateStack>`(`.List`(.KList)),`<callLog>`(`.Set`(.KList)),`<txExecState>`(`<program>`(`.Map`(.KList)),`<programBytes>`(`.WordStack_EVM-DATA`(.KList)),`<id>`(#token("0","Int")),`<caller>`(#token("0","Int")),`<callData>`(`.WordStack_EVM-DATA`(.KList)),`<callValue>`(#token("0","Int")),`<wordStack>`(`.WordStack_EVM-DATA`(.KList)),`<localMem>`(`.Map`(.KList)),`<pc>`(#token("0","Int")),`<gas>`(#token("0","Int")),`<previousGas>`(#token("0","Int")),`<static>`(#token("false","Bool"))),`<substate>`(`<selfDestruct>`(`.Set`(.KList)),`<log>`(`.List`(.KList)),`<refund>`(#token("0","Int"))),`<gasPrice>`(#token("0","Int")),`<origin>`(#token("0","Int")),`<previousHash>`(#token("0","Int")),`<ommersHash>`(#token("0","Int")),`<coinbase>`(#token("0","Int")),`<stateRoot>`(#token("0","Int")),`<transactionsRoot>`(#token("0","Int")),`<receiptsRoot>`(#token("0","Int")),`<logsBloom>`(`.WordStack_EVM-DATA`(.KList)),`<difficulty>`(#token("0","Int")),`<number>`(#token("0","Int")),`<gasLimit>`(#token("0","Int")),`<gasUsed>`(#token("0","Int")),`<timestamp>`(#token("0","Int")),`<extraData>`(`.WordStack_EVM-DATA`(.KList)),`<mixHash>`(#token("0","Int")),`<blockNonce>`(#token("0","Int")),`<ommerBlockHeaders>`(`[_]_EVM-DATA`(`.List{"_,__EVM-DATA"}`(.KList))),`<blockhash>`(`.List`(.KList))),`<network>`(`<activeAccounts>`(`.Map`(.KList)),`<accounts>`(`.AccountCellMap`(.KList)),`<txOrder>`(`.List`(.KList)),`<txPending>`(`.List`(.KList)),`<messages>`(`.MessageCellMap`(.KList)))))
+`<generatedTop>`(`<k>`(`run__ETHEREUM-SIMULATION`(`_:__EVM-DATA`(#token("\"wrongStateRoot_Byzantium\"","String"),`{_}_EVM-DATA`(`_,__EVM-DATA`(`_:__EVM-DATA`(#token("\"expectExceptionALL\"","String"),#token("\"InvalidStateRoot\"","String")),`_,__EVM-DATA`(`_:__EVM-DATA`(#token("\"genesisBlockHeader\"","String"),`{_}_EVM-DATA`(`_,__EVM-DATA`(`_:__EVM-DATA`(#token("\"bloom\"","String"),#token("\"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\"","String")),`_,__EVM-DATA`(`_:__EVM-DATA`(#token("\"coinbase\"","String"),#token("\"0x8888f1f195afa192cfee860698584c030f4c9db1\"","String")),`_,__EVM-DATA`(`_:__EVM-DATA`(#token("\"difficulty\"","String"),#token("\"0x020000\"","String")),`_,__EVM-DATA`(`_:__EVM-DATA`(#token("\"extraData\"","String"),#token("\"0x42\"", ...
The problem can be reproduced in https://github.com/pirapira/evm-semantics/tree/bc-invalid-header with make more-bchain-test.
Branch bump-tests-submodule makes the infrastructural changes necessary for this. It looks like we need to add support for the shl opcode.
We'll wait and see if someone volunteers to do this.
Gerard wants us to test the tests that are timing out on Mantis with normal block gas limits in order to see if they will cause problems in practice. This is the issue to track doing this on our end.
Several functions in data.md are probably unused and can be removed.
In addition, the way that signed arithmetic is done could probably be made cleaner by looking at how KWASM does it.
On version dff1378, I see the following error when I try ./Build test ethereum.md.
$ ./Build test ethereum.md
== Using uiuck
== kompile: .build/uiuck/ethereum-kompiled/extras/timestamp
== git submodule: cloning upstreams test repository
Submodule '.build/secp256k1' (https://github.com/bitcoin-core/secp256k1) registered for path '.build/secp256k1'
Submodule 'tests/ci/rv-k' (https://github.com/runtimeverification/k) registered for path 'tests/ci/rv-k'
Submodule 'tests/ci/uiuc-k' (https://github.com/kframework/k/) registered for path 'tests/ci/uiuc-k'
Submodule 'tests/ethereum-tests' (https://github.com/ethereum/tests.git) registered for path 'tests/ethereum-tests'
Cloning into '/home/yh/src/evm-semantics/.build/secp256k1'...
Cloning into '/home/yh/src/evm-semantics/tests/ci/rv-k'...
Cloning into '/home/yh/src/evm-semantics/tests/ci/uiuc-k'...
Cloning into '/home/yh/src/evm-semantics/tests/ethereum-tests'...
Submodule path '.build/secp256k1': checked out 'f532bdc9f77f7bbf7e93faabfbe9c483f0a9f75f'
Submodule path 'tests/ci/rv-k': checked out '576eb068de5b14d6518a08b8fffafbff6171c450'
Submodule path 'tests/ci/uiuc-k': checked out '25829b11810bf1cbc0b5cc936d3e02f7ea1ea2a0'
Submodule path 'tests/ethereum-tests': checked out '3b8e2d94dd63057b5bddc9e1239513f3e63fb45e'
== split: tests/VMTests/vmArithmeticTest/make.timestamp
== split: tests/VMTests/vmBitwiseLogicOperationTest/make.timestamp
== split: tests/VMTests/vmBlockInfoTest/make.timestamp
== split: tests/VMTests/vmEnvironmentalInfoTest/make.timestamp
== split: tests/VMTests/vmIOandFlowOperationsTest/make.timestamp
== split: tests/VMTests/vmLogTest/make.timestamp
== split: tests/VMTests/vmPerformanceTest/make.timestamp
== split: tests/VMTests/vmPushDupSwapTest/make.timestamp
== split: tests/VMTests/vmSha3Test/make.timestamp
== split: tests/VMTests/vmSystemOperationsTest/make.timestamp
== split: tests/VMTests/vmtests/make.timestamp
== split: tests/VMTests/vmInputLimits/make.timestamp
== split: tests/VMTests/vmInputLimitsLight/make.timestamp
== split: tests/BlockchainTests/GeneralStateTests/stCreateTest/CREATE_AcreateB_BSuicide_BStore/make.timestamp
-:1: parser error : Start tag expected, '<' not found
FATAL: Don't know how to set 'krun_opts' for 'ethereum.md'
^
@ehildenb I got the following error while running make after make deps:
== submodule: plugin/make.timestamp
git submodule update --init --recursive -- plugin
touch plugin/make.timestamp
eval $(opam config env) \
&& cd .build/node/driver-kompiled \
&& ocamllex lexer.mll \
&& ocamlyacc parser.mly \
&& ocamlfind opt -O3 -c -g -package gmp -package zarith -package uuidm -safe-string prelude.ml plugin.ml parser.mli parser.ml lexer.ml run.ml -thread \
&& ocamlfind opt -O3 -c -g -w -11-26 -package gmp -package zarith -package uuidm -package ethereum-semantics-plugin-node -safe-string realdef.ml -match-context-rows 2 \
&& ocamlfind opt -O3 -shared -o realdef.cmxs realdef.cmx \
&& ocamlfind opt -O3 -g -o interpreter constants.cmx prelude.cmx plugin.cmx parser.cmx lexer.cmx run.cmx interpreter.ml \
-package gmp -package dynlink -package zarith -package str -package uuidm -package unix -package ethereum-semantics-plugin-node -linkpkg -linkall -thread -safe-string
40 states, 1247 transitions, table size 5228 bytes
/usr/bin/ld: cannot find -lsecp256k1
collect2: error: ld returned 1 exit status
File "caml_startup", line 1:
Error: Error during linking
Makefile:167: recipe for target '.build/node/driver-kompiled/interpreter' failed
make: *** [.build/node/driver-kompiled/interpreter] Error 2
Dwight solved this problem by going to the .build/ocaml/driver-kompiled directory, then running:
➜ driver-kompiled git:(master) export LIBRARY_PATH=/home/yiyiwang/evm-semantics/.build/local/lib
➜ driver-kompiled git:(master) ocamlfind opt -O3 -g -o interpreter constants.cmx prelude.cmx plugin.cmx parser.cmx lexer.cmx run.cmx interpreter.ml \
-package gmp -package dynlink -package zarith -package str -package uuidm -package unix -package ethereum-semantics-plugin-ocaml -linkpkg -linkall -thread -safe-string -verbose
Effective set of compiler predicates: pkg_unix,pkg_threads.posix,pkg_threads,pkg_gmp,pkg_dynlink,pkg_zarith,pkg_str,pkg_bytes,pkg_uuidm,pkg_cryptokit,pkg_bigarray,pkg_secp256k1,pkg_bn128,pkg_ppx_deriving_protobuf.runtime,pkg_ocaml-protoc,pkg_ethereum-semantics-plugin-ocaml,autolink,mt,mt_posix,native
+ ocamlopt.opt -O3 -g -o interpreter -linkall -safe-string -verbose -thread -I /home/yiyiwang/.opam/4.03.0+k/lib/gmp -I /home/yiyiwang/.opam/4.03.0+k/lib/zarith -I /home/yiyiwang/.opam/4.03.0+k/lib/bytes -I /home/yiyiwang/.opam/4.03.0+k/lib/uuidm -I /home/yiyiwang/.opam/4.03.0+k/lib/cryptokit -I /home/yiyiwang/.opam/4.03.0+k/lib/secp256k1 -I /home/yiyiwang/.opam/4.03.0+k/lib/bn128 -I /home/yiyiwang/.opam/4.03.0+k/lib/ppx_deriving_protobuf -I /home/yiyiwang/.opam/4.03.0+k/lib/ocaml-protoc -I /home/yiyiwang/.opam/4.03.0+k/lib/ethereum-semantics-plugin-ocaml /home/yiyiwang/.opam/4.03.0+k/lib/ocaml/unix.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/ocaml/threads/threads.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/gmp/gmp.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/ocaml/dynlink.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/zarith/zarith.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/ocaml/str.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/uuidm/uuidm.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/cryptokit/cryptokit.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/ocaml/bigarray.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/secp256k1/secp256k1.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/bn128/bn128.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/ppx_deriving_protobuf/protobuf.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/ocaml-protoc/pbrt.cmxa /home/yiyiwang/.opam/4.03.0+k/lib/ethereum-semantics-plugin-ocaml/semantics.cmxa constants.cmx prelude.cmx plugin.cmx parser.cmx lexer.cmx run.cmx interpreter.ml
+ as -o 'interpreter.o' '/tmp/camlasmfce520.s'
+ as -o '/tmp/camlstartupa99c5a.o' '/tmp/camlstartup10ca16.s'
+ gcc -o 'interpreter' '-L/home/yiyiwang/.opam/4.03.0+k/lib/gmp' '-L/home/yiyiwang/.opam/4.03.0+k/lib/zarith' '-L/home/yiyiwang/.opam/4.03.0+k/lib/bytes' '-L/home/yiyiwang/.opam/4.03.0+k/lib/uuidm' '-L/home/yiyiwang/.opam/4.03.0+k/lib/cryptokit' '-L/home/yiyiwang/.opam/4.03.0+k/lib/secp256k1' '-L/home/yiyiwang/.opam/4.03.0+k/lib/bn128' '-L/home/yiyiwang/.opam/4.03.0+k/lib/ppx_deriving_protobuf' '-L/home/yiyiwang/.opam/4.03.0+k/lib/ocaml-protoc' '-L/home/yiyiwang/.opam/4.03.0+k/lib/ethereum-semantics-plugin-ocaml' '-L/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/threads' '-L/home/yiyiwang/.opam/4.03.0+k/lib/ocaml' -Wl,-E '/tmp/camlstartupa99c5a.o' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/std_exit.o' 'interpreter.o' 'run.o' 'lexer.o' 'parser.o' 'plugin.o' 'prelude.o' 'constants.o' '/home/yiyiwang/.opam/4.03.0+k/lib/ethereum-semantics-plugin-ocaml/semantics.a' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml-protoc/pbrt.a' '/home/yiyiwang/.opam/4.03.0+k/lib/ppx_deriving_protobuf/protobuf.a' '/home/yiyiwang/.opam/4.03.0+k/lib/bn128/bn128.a' '/home/yiyiwang/.opam/4.03.0+k/lib/secp256k1/secp256k1.a' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/bigarray.a' '/home/yiyiwang/.opam/4.03.0+k/lib/cryptokit/cryptokit.a' '/home/yiyiwang/.opam/4.03.0+k/lib/uuidm/uuidm.a' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/str.a' '/home/yiyiwang/.opam/4.03.0+k/lib/zarith/zarith.a' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/dynlink.a' '/home/yiyiwang/.opam/4.03.0+k/lib/gmp/gmp.a' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/threads/threads.a' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/unix.a' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/stdlib.a' '-lsecp256k1_stubs' '-L/home/yiyiwang/GitHub/evm-semantics/.build/local/lib' '-lsecp256k1' '-lbigarray' '-lcryptokit_stubs' '-lz' '-lcamlstr' '-lzarith' '-lgmp' '-lgmp_stubs' '-L/usr/local/lib' '-lgmp' '-lmpfr' '-lthreadsnat' '-lpthread' '-lunix' '/home/yiyiwang/.opam/4.03.0+k/lib/ocaml/libasmrun.a' -lm -ldl
We are trying to make the following circularity claim: https://github.com/dapphub/verified-smart-contracts/blob/dappsys/dappsys/exp-naive-circ-spec.k
but we get the following error.
Does this error stem from something wrong in our spec? It looks like an error related to the internal workings of K...
These commands succeed, so they are good candidates for make test or make test-all.
./kevm test tests/ethereum-tests/BlockchainTests/bcExploitTest/ShanghaiLove.json
./kevm test tests/ethereum-tests/BlockchainTests/bcExploitTest/DelegateCallSpam.json
./kevm test tests/ethereum-tests/BlockchainTests/bcExploitTest/SuicideIssue.json
./kevm test tests/ethereum-tests/BlockchainTests/bcExploitTest/StrangeContractCreation.json
ShanghaiLove.json takes more than 30 minutes, so it should only be added to test-all.
On version: 9f848a6
$ ./tests/ci/with-k uiuck ./Build tests quick
yields the following exception:
== running: tests/proofs/hkg/approve-spec.k
WARNING: pandoc-tangle not installed. Ignoring changes in markdown files
java.lang.UnsupportedOperationException: missing SMTLib translation for .WordStack
at org.kframework.backend.java.symbolic.KILtoSMTLib.transform(KILtoSMTLib.java:483)
at org.kframework.backend.java.kil.KItem.accept(KItem.java:710)
at org.kframework.backend.java.symbolic.KILtoSMTLib.translate(KILtoSMTLib.java:241)
at org.kframework.backend.java.symbolic.KILtoSMTLib.transform(KILtoSMTLib.java:550)
at org.kframework.backend.java.kil.KItem.accept(KItem.java:710)
at org.kframework.backend.java.symbolic.KILtoSMTLib.translate(KILtoSMTLib.java:241)
at org.kframework.backend.java.symbolic.KILtoSMTLib.transform(KILtoSMTLib.java:550)
at org.kframework.backend.java.kil.KItem.accept(KItem.java:710)
at org.kframework.backend.java.symbolic.KILtoSMTLib.translate(KILtoSMTLib.java:241)
at org.kframework.backend.java.symbolic.KILtoSMTLib.transform(KILtoSMTLib.java:550)
at org.kframework.backend.java.kil.KItem.accept(KItem.java:710)
at org.kframework.backend.java.symbolic.KILtoSMTLib.translate(KILtoSMTLib.java:241)
at org.kframework.backend.java.symbolic.KILtoSMTLib.translateTerm(KILtoSMTLib.java:450)
at org.kframework.backend.java.symbolic.KILtoSMTLib.transform(KILtoSMTLib.java:423)
at org.kframework.backend.java.symbolic.ConjunctiveFormula.accept(ConjunctiveFormula.java:845)
at org.kframework.backend.java.symbolic.KILtoSMTLib.translate(KILtoSMTLib.java:241)
at org.kframework.backend.java.symbolic.KILtoSMTLib.translateImplication(KILtoSMTLib.java:189)
at org.kframework.backend.java.symbolic.SMTOperations.impliesSMT(SMTOperations.java:59)
at org.kframework.backend.java.symbolic.ConjunctiveFormula.impliesSMT(ConjunctiveFormula.java:760)
at org.kframework.backend.java.symbolic.ConjunctiveFormula.implies(ConjunctiveFormula.java:704)
at org.kframework.backend.java.util.RewriteEngineUtils.evaluateConditions(RewriteEngineUtils.java:147)
at org.kframework.backend.java.util.RewriteEngineUtils.lambda$evaluateConditions$0(RewriteEngineUtils.java:185)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.kframework.backend.java.util.RewriteEngineUtils.evaluateConditions(RewriteEngineUtils.java:187)
at org.kframework.backend.java.symbolic.PatternMatcher.match(PatternMatcher.java:118)
at org.kframework.backend.java.kil.KItem$KItemOperations.evaluateFunction(KItem.java:458)
at org.kframework.backend.java.kil.KItem$KItemOperations.resolveFunctionAndAnywhere(KItem.java:331)
at org.kframework.backend.java.kil.KItem.resolveFunctionAndAnywhere(KItem.java:279)
at org.kframework.backend.java.symbolic.SubstituteAndEvaluateTransformer.transform(SubstituteAndEvaluateTransformer.java:104)
at org.kframework.backend.java.kil.KItem.accept(KItem.java:710)
at org.kframework.backend.java.symbolic.CopyOnWriteTransformer.transform(CopyOnWriteTransformer.java:271)
at org.kframework.backend.java.symbolic.SubstituteAndEvaluateTransformer.transform(SubstituteAndEvaluateTransformer.java:112)
at org.kframework.backend.java.kil.KList.accept(KList.java:152)
at org.kframework.backend.java.symbolic.CopyOnWriteTransformer.transform(CopyOnWriteTransformer.java:178)
at org.kframework.backend.java.symbolic.SubstituteAndEvaluateTransformer.transform(SubstituteAndEvaluateTransformer.java:103)
at org.kframework.backend.java.kil.KItem.accept(KItem.java:710)
at org.kframework.backend.java.symbolic.CopyOnWriteTransformer.transform(CopyOnWriteTransformer.java:271)
at org.kframework.backend.java.symbolic.SubstituteAndEvaluateTransformer.transform(SubstituteAndEvaluateTransformer.java:112)
at org.kframework.backend.java.kil.KList.accept(KList.java:152)
at org.kframework.backend.java.symbolic.CopyOnWriteTransformer.transform(CopyOnWriteTransformer.java:178)
at org.kframework.backend.java.symbolic.SubstituteAndEvaluateTransformer.transform(SubstituteAndEvaluateTransformer.java:103)
at org.kframework.backend.java.kil.KItem.accept(KItem.java:710)
at org.kframework.backend.java.symbolic.CopyOnWriteTransformer.transform(CopyOnWriteTransformer.java:271)
at org.kframework.backend.java.symbolic.SubstituteAndEvaluateTransformer.transform(SubstituteAndEvaluateTransformer.java:112)
at org.kframework.backend.java.kil.KList.accept(KList.java:152)
at org.kframework.backend.java.symbolic.CopyOnWriteTransformer.transform(CopyOnWriteTransformer.java:178)
at org.kframework.backend.java.symbolic.SubstituteAndEvaluateTransformer.transform(SubstituteAndEvaluateTransformer.java:103)
at org.kframework.backend.java.kil.KItem.accept(KItem.java:710)
at org.kframework.backend.java.kil.Term.substituteAndEvaluate(Term.java:82)
at org.kframework.backend.java.symbolic.SymbolicRewriter.buildRHS(SymbolicRewriter.java:322)
at org.kframework.backend.java.symbolic.SymbolicRewriter.buildRHS(SymbolicRewriter.java:342)
at org.kframework.backend.java.symbolic.SymbolicRewriter.buildRHS(SymbolicRewriter.java:342)
at org.kframework.backend.java.symbolic.SymbolicRewriter.buildRHS(SymbolicRewriter.java:342)
at org.kframework.backend.java.symbolic.SymbolicRewriter.fastComputeRewriteStep(SymbolicRewriter.java:189)
at org.kframework.backend.java.symbolic.SymbolicRewriter.proveRule(SymbolicRewriter.java:642)
at org.kframework.backend.java.symbolic.InitializeRewriter$SymbolicRewriterGlue.lambda$prove$3(InitializeRewriter.java:210)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.kframework.backend.java.symbolic.InitializeRewriter$SymbolicRewriterGlue.prove(InitializeRewriter.java:213)
at org.kframework.backend.java.symbolic.ProofExecutionMode.execute(ProofExecutionMode.java:161)
at org.kframework.backend.java.symbolic.ProofExecutionMode.execute(ProofExecutionMode.java:52)
at org.kframework.krun.KRun.run(KRun.java:104)
at org.kframework.krun.KRunFrontEnd.run(KRunFrontEnd.java:67)
at org.kframework.main.FrontEnd.main(FrontEnd.java:34)
at org.kframework.main.Main.runApplication(Main.java:415)
at org.kframework.main.Main.runApplication(Main.java:264)
at org.kframework.main.Main.main(Main.java:73)
<same exception repeated multiple times>
--- expected
+++ actual
@@ -1 +1,3 @@
+WARNING: pandoc-tangle not installed. Ignoring changes in markdown files
+
true
== failure: tests/proofs/hkg/approve-spec.k
== failed: 1 / 5
which may or may not be related to the absence of pandoc-tangle.
As discussed in #192, when matching the arguments of a rule, K is in some cases not able to infer when two cases are mutually exclusive.
A workaround is to check the arguments using requires instead, changing code blocks like:
rule W0 /Word 0 => 0
rule W0 /Word W1 => chop( W0 /Int W1 ) requires W1 =/=Int 0
to
rule W0 /Word W1 => 0 requires W1 ==Int 0
rule W0 /Word W1 => chop( W0 /Int W1 ) requires W1 =/=Int 0
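Both forms denote the same function; read operationally, the /Word rules compute something like this Python sketch (function names are illustrative, assuming the usual chop = mod 2^256):

```python
# Illustrative sketch of the /Word semantics: EVM division by zero
# yields 0; otherwise the quotient is truncated to a 256-bit word.
MOD = 2 ** 256

def chop(w: int) -> int:
    return w % MOD

def div_word(w0: int, w1: int) -> int:
    if w1 == 0:            # rule: W0 /Word W1 => 0 requires W1 ==Int 0
        return 0
    return chop(w0 // w1)  # rule: chop(W0 /Int W1) requires W1 =/=Int 0
```

The two branches are guarded by mutually exclusive conditions, which is exactly what the requires-based workaround makes explicit to K.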
Branch balance-underflow-fix contains a test demonstrating that a balance is allowed to underflow when using driver.md to load and execute a test.
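For intuition, the invariant the test checks is the usual one: a transfer must never drive the sender's balance negative. A hedged Python sketch of that invariant (all names are hypothetical, not the driver.md rules):

```python
# Hypothetical sketch of a balance transfer with an underflow guard;
# not the actual driver.md/KEVM rule, just the invariant it should enforce.
class BalanceUnderflow(Exception):
    """Sender cannot cover the transfer value."""

def transfer(balances: dict, sender: str, recipient: str, value: int) -> None:
    """Move value wei from sender to recipient, refusing underflow."""
    if balances.get(sender, 0) < value:
        raise BalanceUnderflow(f"{sender} cannot pay {value}")
    balances[sender] -= value
    balances[recipient] = balances.get(recipient, 0) + value
```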
This can be done by syncing Mantis on the main-chain using KEVM as a backend, and disabling the various rules which depend on previous hard forks.
Similarly, we may be able to find alternate (simpler) descriptions of the EVM which still conform to the existing blockchain by changing KEVM and trying to sync.
One concrete example is the dependence on FRONTIER during account creation: https://github.com/kframework/evm-semantics/blob/master/evm.md#account-creationdeletion
README.md has the following line:
Use ./Build run and ./Build test to run/test a file, respectively.
It was not obvious to me that <file> cannot be ethereum.md, so I tried ./Build test ethereum.md and it did not work. A short description of what <file> can be would be helpful here.
I'm trying to follow README.md and I see:
$ ./tests/ci/with-k uiuck ./Build debug tests/VMTests/vmArithmeticTest/add0.json
== Using uiuck
== debugging: tests/VMTests/vmArithmeticTest/add0.json
org.kframework.utils.errorsystem.KEMException: [Error] Critical: Could not read from file /home/yh/src/evm-semantics/./tests/VMTests/vmArithmeticTest/add0.json
at org.kframework.utils.errorsystem.KEMException.create(KEMException.java:109)
at org.kframework.utils.errorsystem.KEMException.criticalError(KEMException.java:34)
at org.kframework.utils.file.FileUtil.readFromWorkingDirectory(FileUtil.java:311)
at org.kframework.krun.KRun.parse(KRun.java:445)
at org.kframework.krun.KRun.getUserConfigVarsMap(KRun.java:335)
at org.kframework.krun.KRun.parseConfigVars(KRun.java:364)
at org.kframework.krun.KRun.run(KRun.java:93)
at org.kframework.krun.KRunFrontEnd.run(KRunFrontEnd.java:67)
at org.kframework.main.FrontEnd.main(FrontEnd.java:34)
at org.kframework.main.Main.runApplication(Main.java:415)
at org.kframework.main.Main.runApplication(Main.java:264)
at org.kframework.main.Main.main(Main.java:73)
Currently the operator #?_:_#? is used in exactly one place, so it should be replaced with an operator specific to that one use. This would also simplify other parts of the code where #pop/drop{X} is called, allowing us to write that logic once and have the rest of the semantics call the macro operator.
This will take some careful doing.
I kept running into the following issue with pandoc when building:
== tangle: .build/ocaml/driver.k
mkdir -p .build/ocaml/
pandoc --from markdown --to "/Users/.../evm-semantics/.build/pandoc-tangle/tangle.lua" --metadata=code:".k:not(.node),.standalone" driver.md > .build/ocaml/driver.k
pandoc: /users/.../evm-semantics/.build/pandoc-tangle/tangle.lua: openFile: does not exist (No such file or directory)
make: *** [.build/ocaml/driver.k] Error 1
Notice "/Users/" vs. "/users/". I solved this locally by changing the TANGLER variable to refer to a relative path instead of an absolute one.
#TANGLER:=$(PANDOC_TANGLE_SUBMODULE)/tangle.lua
TANGLER=.build/pandoc-tangle/tangle.lua
Trying to prove a simple program, I got this error. Can someone help me?
[Error] Critical: Z3 crashed on input query:
(declare-sort WordStack)
(declare-sort K)
(declare-fun sizeWordStackAux (WordStack Int) Int)
(declare-fun sizeWordStack (WordStack) Int)
(declare-fun smt_keccak (WordStack) Int)
(declare-fun asByteStack (Int WordStack) WordStack)
(declare-fun chop (Int) Int)
(declare-fun smt_nthbyteof (Int Int Int) Int)
(declare-fun asWord (WordStack) Int)
(assert (forall ((|_541| Int)) (= (chop |_541|) (mod |_541|
115792089237316195423570985008687907853269984665640564039457584007913129639936))))
(assert (forall ((|_354| Int)(|_353| WordStack)) (= (>= (sizeWordStackAux
|_353| |_354|) 0) true)))
(declare-fun |R__208| () K)
(declare-fun |_1143| () Int)
(declare-fun |_1341| () Int)
(declare-fun |_1144| () WordStack)
(declare-fun |_2312| () Bool)
(declare-fun |_2313| () K)
(declare-fun |_2311| () K)
(declare-fun |_2310| () K)
(assert (and (= _2310 _2311) (= (>= |_1143| 0) true) (= _2312 false) (= (>=
|_1341| (+ (* 55 |_1143|) 27)) true) (= (<= |_1143|
340282366920938463463374607431768211455) true) (= _2313 (smt_seq_elem
|R__208|)) (= (< (sizeWordStackAux |_1144| 0) 1021) true)))
result:
(error "line 20 column 206: unknown function/constant smt_seq_elem")
[Warning] Critical: missing SMTLib translation for #KSequence (missing SMTLib
translation for #KSequence)
[Warning] Critical: missing SMTLib translation for BALANCE_EVM (missing SMTLib
translation for BALANCE_EVM)
[Warning] Critical: missing SMTLib translation for EXTCODESIZE_EVM (missing
SMTLib translation for EXTCODESIZE_EVM)
[Warning] Critical: missing SMTLib translation for isInvalidOp (missing SMTLib
translation for isInvalidOp)
When K is not installed, Build -help throws:
./Build: line 34: type: kompile: not found
FATAL: kompile not in $PATH
Whenever I try to run, test or prove any of the examples, I get the following error:
➜ evm-semantics git:(master) ./Build run tests/VMTests/vmArithmeticTest/add0.json
== Using uiuck
WARNING: pandoc-tangle not installed. Ignoring changes in markdown files
== kompile: .build/uiuck/ethereum-kompiled/extras/timestamp
== running: tests/VMTests/vmArithmeticTest/add0.json
[Error] Critical: Parser returned a non-zero exit code: 113
Stdout:
Stderr:
[Error] Inner Parser: Parse error: unexpected end of file.
Source(<command line: -e>)
Location(1,8,1,9)
-:1: parser error : Document is empty
^
I'm on macOS 10.12.6, but I get the same error on Ubuntu 17.04. Some additional version info that might be of use:
➜ ~ java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
➜ ~ javac -version
javac 1.8.0_102
➜ ~ xmllint --version
xmllint: using libxml version 20904
compiled with: Threads Tree Output Push Reader Patterns Writer SAXv1 FTP HTTP DTDValid HTML Legacy C14N Catalog XPath XPointer XInclude ICU ISO8859X Unicode Regexps Automata Expr Schemas Schematron Modules Debug Zlib
➜ ~ krun --version
K framework version 4.0.0
Git revision: d310c7a
Git branch: v4.0.0
Build date: Thu Jul 28 04:10:26 CEST 2016
Additionally, running with-k (as well as manually attempting to build the most recent version of UIUC-K from source) gives me the following error:
➜ evm-semantics git:(master) ./tests/ci/with-k uiuck ./Build run tests/VMTests/vmArithmeticTest/add0.json
[ERROR] Failed to execute goal on project kernel: Could not resolve dependencies for project org.kframework.k:kernel:jar:4.0.1-SNAPSHOT: Failed to collect dependencies at org.kframework.dependencies:jcommander:jar:1.35-custom: Failed to read artifact descriptor for org.kframework.dependencies:jcommander:jar:1.35-custom: Could not transfer artifact org.kframework.dependencies:jcommander:pom:1.35-custom from/to runtime-verification (http://office.runtimeverification.com:8888/repository/internal): Connect to office.runtimeverification.com:8888 [office.runtimeverification.com/76.191.23.163] failed: Connection refused -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :kernel