uw-pluverse / perses
language-agnostic program reducer.
License: GNU General Public License v3.0
It is possible that Perses is killed while it is writing the best-result file. We need a better way to make sure the write operation is atomic; in other words, it must either fully succeed or fail, never leave a partially written file. For example, we can write to a temp file first, and then rename it to the best-result file.
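The temp-file-then-rename idea can be sketched in plain JDK code. This is a minimal sketch, not Perses' actual implementation; the file names are illustrative, and ATOMIC_MOVE support depends on the underlying file system:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public final class AtomicWrite {
  // Write content to a sibling temp file, then atomically rename it over the
  // target, so a reader never observes a half-written best-result file.
  static void writeAtomically(Path target, byte[] content) throws IOException {
    Path dir = target.toAbsolutePath().getParent();
    Path tmp = Files.createTempFile(dir, "best-", ".tmp");
    Files.write(tmp, content);
    Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE,
        StandardCopyOption.REPLACE_EXISTING);
  }

  public static void main(String[] args) throws IOException {
    Path target = Paths.get("best_result.txt");
    writeAtomically(target, "reduced program".getBytes());
    System.out.println(new String(Files.readAllBytes(target)));
  }
}
```

The temp file must live in the same directory as the target, because a rename is only atomic within one file system.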
The following target is also affected:
//test/org/perses/benchmark_toys/parentheses:reduction_golden_test_progress_test
When I build with Bazel, I get several error messages like "Error while fetching artifact with coursier: Timed out". The top-level error message is:
"INFO: Repository maven instantiated at:
/my/home/path/projects/perses/WORKSPACE:16:14: in
/bazel/tmp/dir_path/fca3721dec91925a5c679fcc769838a8/external/rules_jvm_external/defs.bzl:83:19: in maven_install
Repository rule coursier_fetch defined at:
/bazel/tmp/dir_path/fca3721dec91925a5c679fcc769838a8/external/rules_jvm_external/coursier.bzl:852:33: in
INFO: Repository 'maven' used the following cache hits instead of downloading the corresponding file.
* Hash '2311054a07f5e8d83766155744e139e7a4b2ee1e77d47a636839687c4233a465' for https://jcenter.bintray.com/io/get-coursier/coursier-cli_2.12/1.1.0-M14-4/coursier-cli_2.12-1.1.0-M14-4-standalone.jar
If the definition of 'maven' was updated, verify that the hashes were also updated.
ERROR: An error occurred during the fetch of repository 'maven':
Traceback (most recent call last):
File "/bazel/tmp/dir_path/fca3721dec91925a5c679fcc769838a8/external/rules_jvm_external/coursier.bzl", line 660, column 13, in _coursier_fetch_impl
fail("Error while fetching artifact with coursier: " + exec_result.stderr)
Error in fail: Error while fetching artifact with coursier: Timed out".
I have no idea how to solve these problems.
This issue is very similar to #23; the difference is that in my scenario I was trying to use Perses on an SQLite file.
[17:44:32] [INFO ] Tree Building: Start building spar-tree from input file SourceFile{file=FileWithContent{file=logs/database7-cur.sqlite}, lang=LanguageSQLite{name=sqlite, extensions=[sqlite], defaultCodeFormatControl=SINGLE_TOKEN_PER_LINE}}
[17:44:32] [INFO ] Tree Building: Step 1: Antlr parsing.
Exception in thread "main" org.perses.grammar.AntlrFailureException: Error in parsing file: <in memory>
Details: recognizer: org.perses.grammar.sql.sqlite.PnfSQLiteParser@9dada78
offendingSymbol: [@84,352:368='Xffffffffe7e53170',<184>,10:35]
line: 10
column: 35
msg: no viable alternative at input 'VALUES (0Xffffffffe7e53170'
at org.perses.grammar.FailOnErrorAntlrErrorListener.syntaxError(FailOnErrorAntlrErrorListener.kt:43)
at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)
at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)
at org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)
at org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:136)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.altnt_block__insert_stmt_18(PnfSQLiteParser.java:14205)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.aux_rule__insert_stmt_22(PnfSQLiteParser.java:15717)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.insert_stmt(PnfSQLiteParser.java:2019)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.altnt_block__sql_stmt_5(PnfSQLiteParser.java:13497)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.sql_stmt(PnfSQLiteParser.java:549)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.sql_stmt_list(PnfSQLiteParser.java:508)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.kleene_star__parse_1(PnfSQLiteParser.java:3621)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.parse(PnfSQLiteParser.java:463)
at org.perses.grammar.sql.sqlite.SQLiteParserFacade.startParsing(SQLiteParserFacade.java:39)
at org.perses.grammar.sql.sqlite.SQLiteParserFacade.startParsing(SQLiteParserFacade.java:13)
at org.perses.grammar.AbstractDefaultParserFacade$parseReader$3.apply(AbstractDefaultParserFacade.kt:70)
at org.perses.grammar.AbstractDefaultParserFacade$parseReader$3.apply(AbstractDefaultParserFacade.kt:65)
at org.perses.grammar.AbstractDefaultParserFacade$Companion.parseReader(AbstractDefaultParserFacade.kt:151)
at org.perses.grammar.AbstractDefaultParserFacade.parseReader(AbstractDefaultParserFacade.kt:65)
at org.perses.grammar.AbstractParserFacade.parseString(AbstractParserFacade.kt:97)
at org.perses.grammar.AbstractParserFacade.parseString$default(AbstractParserFacade.kt:95)
at org.perses.reduction.AbstractProgramReductionDriver$Companion.createSparTree(AbstractProgramReductionDriver.kt:640)
at org.perses.reduction.RegularProgramReductionDriver$Companion.create(RegularProgramReductionDriver.kt:121)
at org.perses.Main.createReductionDriver(Main.kt:64)
at org.perses.AbstractMain.internalRun(AbstractMain.kt:41)
at org.perses.util.cmd.AbstractMain.run(AbstractMain.kt:53)
at org.perses.Main$Companion.main(Main.kt:70)
at org.perses.Main.main(Main.kt)
Caused by: org.antlr.v4.runtime.NoViableAltException
at org.antlr.v4.runtime.atn.ParserATNSimulator.noViableAlt(ParserATNSimulator.java:2014)
at org.antlr.v4.runtime.atn.ParserATNSimulator.execATN(ParserATNSimulator.java:445)
at org.antlr.v4.runtime.atn.ParserATNSimulator.adaptivePredict(ParserATNSimulator.java:371)
at org.perses.grammar.sql.sqlite.PnfSQLiteParser.altnt_block__insert_stmt_18(PnfSQLiteParser.java:14186)
... 22 more
The test script I was using has the following content:
#!/bin/sh
sqlite3.27 < database7-cur.sqlite
if [ $? -eq 139 ]; then
  exit 0
else
  exit 1
fi
The attached database7-cur.sqlite.txt is 145K in size and has the following md5sum: 690c32477d12119a0d62c464adc8f20a
https://github.com/antlr/grammars-v4/tree/master/verilog/verilog
I do not plan to directly incorporate this into Perses; instead, I will design a plugin system to allow users to specify an ANTLR grammar. It would be better to let a user specify the ANTLR grammar file with a command-line flag.
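One minimal shape for such a plugin hook is loading a user-named parser-facade class reflectively. The sketch below is hypothetical, not Perses' actual mechanism, and uses java.lang.StringBuilder only as a stand-in for a real facade class:

```java
public final class FacadeLoader {
  // Instantiate a parser facade by its fully qualified class name,
  // as supplied on the command line by the user.
  static Object load(String className) throws ReflectiveOperationException {
    Class<?> clazz = Class.forName(className);
    return clazz.getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    // Any public no-arg class works for the sketch; real usage would name
    // a class generated from the user's ANTLR grammar.
    Object o = load("java.lang.StringBuilder");
    System.out.println(o.getClass().getName());
  }
}
```

A jar containing the user's classes could be added to the classpath at startup, after which a single flag naming the entry class is enough.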
Currently, we might print something else after the reduction is done. We should make sure that the reduction statistics are ALWAYS printed at the end of a reduction run.
[16:33:46] [INFO ] concurrent_token_slicer ended at 16:33:46 2020/11/29. #old=12, #new=12
[16:33:46] [INFO ] Reduction ended at 16:33:46 2020/11/29
[16:33:46] [INFO ] Elapsed time is 8 seconds
[16:33:46] [INFO ] Removed 503 tokens. reduction ratio is 12/515=2.33%
[16:33:46] [INFO ] Test script execution count: 69
[16:33:47] [INFO ] Formatting the reduction result perses_node_priority_with_dfs_delta_reduced_mutant.rs with rustfmt
The download instructions still show v1.4.
wget https://github.com/perses-project/perses/releases/download/v1.4/perses_deploy.jar
java -jar perses_deploy.jar [options]? --test-script <test-script.sh> --input-file <program file>
A quick sed command can be added to reducer/perses/scripts/release.py to bump the version in the README.
This will be a good feature to have, especially since Perses has different flags than C-Reduce. For example, when a user has typed
perses-trunk --tes
and then presses TAB, we should list the available flag --test-script for them.
It seems unnecessary to have the Apache Commons Exec library at all, and I think removing it may give us some runtime performance gain.
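For reference, the JDK's built-in ProcessBuilder already covers the common "run a script and wait for its exit code" case, so a replacement might look like this sketch (assuming a POSIX environment where `true` is on the PATH; this is not the actual Perses code):

```java
import java.io.IOException;

public final class RunScript {
  // Run a command, forwarding its stdout/stderr to ours, and return its
  // exit code. This is the core of what an exec library is used for.
  static int run(String... command) throws IOException, InterruptedException {
    Process p = new ProcessBuilder(command)
        .inheritIO()
        .start();
    return p.waitFor();
  }

  public static void main(String[] args) throws Exception {
    // /bin/true always exits with code 0.
    System.out.println(run("true"));
  }
}
```

Dropping the extra dependency also shrinks the deploy jar slightly.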
This bug is currently tracked by internal bug #92.
When Perses encounters a token-paste operator (##) in a C program, it throws an exception and then hangs. Probably the same issue as #23.
❯ java -jar perses_deploy.jar --version
perses version 1.6
Git Version: 782478b56624fa61a87f4f4fab044604ff986803
Git Branch: master
Git Status: Modified
Built on 2023-04-10T01:18:27.404399-04:00[America/Toronto]
perses_test.c:
#define TEST(x) x##1
int main(void) {
return TEST(0);
}
perses_test.sh
#!/bin/bash
gcc perses_test.c -o perses_test
Error:
❯ java -jar perses_deploy.jar --test-script perses_test.sh --input-file perses_test.c
[18:53:56] [INFO ] Tree Building: Start building spar-tree from input file SourceFile{file=FileWithContent{file=/home/rkchang/dev/omp/perses_test.c}, lang=LanguageC{name=c, extensions=[c], defaultCodeFormatControl=SINGLE_TOKEN_PER_LINE}}
[18:53:56] [INFO ] Tree Building: Step 1: Antlr parsing.
Exception in thread "main" org.perses.grammar.AntlrFailureException: Error in parsing file: <in memory>
Details: recognizer: org.perses.grammar.c.PnfCLexer@5bd1ceca
offendingSymbol: null
line: 1
column: 17
msg: token recognition error at: '##'
Caused by: LexerNoViableAltException('#')
at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309)
at org.antlr.v4.runtime.atn.LexerATNSimulator.execATN(LexerATNSimulator.java:230)
at org.antlr.v4.runtime.atn.LexerATNSimulator.match(LexerATNSimulator.java:114)
at org.antlr.v4.runtime.Lexer.nextToken(Lexer.java:141)
at org.antlr.v4.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:169)
at org.antlr.v4.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:152)
at org.antlr.v4.runtime.BufferedTokenStream.setup(BufferedTokenStream.java:254)
at org.antlr.v4.runtime.BufferedTokenStream.lazyInit(BufferedTokenStream.java:249)
at org.antlr.v4.runtime.CommonTokenStream.LT(CommonTokenStream.java:92)
at org.antlr.v4.runtime.Parser.enterRule(Parser.java:628)
at org.perses.grammar.c.PnfCParser.translationUnit(PnfCParser.java:7061)
at org.perses.grammar.c.PnfCParserFacade.startParsing(PnfCParserFacade.java:48)
at org.perses.grammar.c.PnfCParserFacade.startParsing(PnfCParserFacade.java:25)
at org.perses.grammar.AbstractDefaultParserFacade$parseReader$3.apply(AbstractDefaultParserFacade.kt:70)
at org.perses.grammar.AbstractDefaultParserFacade$parseReader$3.apply(AbstractDefaultParserFacade.kt:65)
at org.perses.grammar.AbstractDefaultParserFacade$Companion.parseReader(AbstractDefaultParserFacade.kt:151)
at org.perses.grammar.AbstractDefaultParserFacade.parseReader(AbstractDefaultParserFacade.kt:65)
at org.perses.grammar.AbstractParserFacade.parseString(AbstractParserFacade.kt:86)
at org.perses.grammar.AbstractParserFacade.parseString$default(AbstractParserFacade.kt:84)
at org.perses.reduction.AbstractProgramReductionDriver$Companion.createSparTree(AbstractProgramReductionDriver.kt:607)
at org.perses.reduction.RegularProgramReductionDriver$Companion.create(RegularProgramReductionDriver.kt:116)
at org.perses.Main.createReductionDriver(Main.kt:81)
at org.perses.AbstractMain.internalRun(AbstractMain.kt:33)
at org.perses.util.cmd.AbstractMain.run(AbstractMain.kt:53)
at org.perses.Main$Companion.main(Main.kt:87)
at org.perses.Main.main(Main.kt)
Create a script at scripts/presubmit.sh that runs SpotBugs. If new warnings are found, fix them. Ideally, we should get no warnings from SpotBugs.
The time component in the following log message is not right.
[14:11:44] [WARNING] One script execution took too much time: 00:38:06 1970/01/01
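The 1970/01/01 in the message suggests an elapsed duration in milliseconds is being formatted as a calendar date near the Unix epoch. A sketch of formatting it as a duration instead (the method name is illustrative, not the actual Perses code):

```java
import java.time.Duration;

public final class FormatElapsed {
  // Format elapsed milliseconds as HH:MM:SS, instead of interpreting the
  // millisecond count as an epoch timestamp (which yields 1970/01/01).
  static String format(long elapsedMillis) {
    Duration d = Duration.ofMillis(elapsedMillis);
    return String.format("%02d:%02d:%02d",
        d.toHours(), d.toMinutes() % 60, d.getSeconds() % 60);
  }

  public static void main(String[] args) {
    // 38 minutes 6 seconds, as in the log message above.
    System.out.println(format((38 * 60 + 6) * 1000L));
  }
}
```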
The current command-line parsing library, JCommander, is fine but not great. A better option might be picocli.
Hi! I am now using Perses on my buggy test scripts, and I really appreciate your effort because Perses did help me cut down my test scripts and eventually find the bug. However, for several complex scripts I found that Perses may hang. Here is an example:
class Test {
  static long instanceCount = 218L;

  static void vMeth1(byte by, long l, int i4) {
    int i5;
    float f;
    for (i5=14; i5<284; ++i5) {
      switch ((i5%5)*5+99) {
        case 104: {
          f=1;
          while (++f<6) instanceCount+=f;
        }
      }
    }
  }

  void vMeth(int i2, int i3) {
    byte by2=54;
    vMeth1(by2, instanceCount, i3);
  }

  void mainTest(String[] strArr1) {
    int i, i17=11, i19=-16903, i20=-55683;
    byte by3=23;
    for (i=24; i<390; i++) {
      vMeth(i, i);
      i20+=(i*i17+i17)-instanceCount;
    }
    System.out.println("i19 i20 by3 = "+i19+","+i20+","+by3);
  }

  public static void main(String[] strArr) {
    Test _instance=new Test();
    for (int i=0; i<10; i++)
      _instance.mainTest(strArr);
  }
}
Perses may hang with this piece of code. My guess is that Perses removes some tokens and produces a variant whose loop never terminates, so the test-script execution never returns. I also find it useful to set a timeout limit in org.perses.reduction.FutureExecutionResultInfo, like this:
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException

class FutureExecutionResultInfo(
  val edit: AbstractSparTreeEdit<*>,
  val program: AbstractCacheRetrievalResult.CacheMiss,
  // TODO: make future private.
  val future: TestScriptExecutorService.FutureTestScriptExecutionTask
) {
  val result: PropertyTestResult
    get() {
      try {
        // TIME_LIMIT is a user-chosen constant, in seconds.
        return future.get(TIME_LIMIT, TimeUnit.SECONDS)
      } catch (e: TimeoutException) {
        return PropertyTestResult.NON_INTERESTING_RESULT
      }
    }

  fun cancel() {
    future.cancel(true)
  }
}
At least with this modification to my local code, Perses no longer hangs. I hope my advice is helpful.
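The semantics of the suggested change can be seen in a plain java.util.concurrent sketch: a bounded `Future.get` turns a hung script into a non-interesting result. Here exit code 1 merely stands in for NON_INTERESTING_RESULT; none of this is the actual Perses code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public final class TimeoutDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    // Simulate a test script that never terminates.
    Future<Integer> hung = pool.submit(() -> { Thread.sleep(60_000); return 0; });
    int exitCode;
    try {
      exitCode = hung.get(1, TimeUnit.SECONDS);   // bounded wait
    } catch (TimeoutException e) {
      exitCode = 1;                               // treat as non-interesting
      hung.cancel(true);                          // interrupt the hung task
    }
    System.out.println(exitCode);
    pool.shutdownNow();
  }
}
```

Cancelling the future after a timeout is important; otherwise the worker thread stays blocked on the hung script.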
Hello,
I would like to run the benchmark as it is described in the README.md:
cd benchmark
./start_docker.sh
(inside docker) cd perses
(inside docker) bazel test benchmark/...
After these commands, I got the following output:
Executed 19 out of 56 tests: 13 tests pass, 6 fail locally and 37 were skipped.
FAILED: Build did NOT complete successfully
After this test, I tried the steps described in benchmark/README.md and ran benchmark.py:
(inside docker) ./benchmark.py --benchmark-all
With this command, the following tests returned an error. The test cases failed with this error message:
[INFO ] Reduction algorithm is org.perses.reduction.reducer.PersesNodePrioritizedDfsReducer
[SEVERE ] The initial sanity check failed. Folder: /tmp/...
[SEVERE ] The files have been saved to /tmp/...
Exception in thread "main" java.lang.IllegalStateException
at org.perses.reduction.ReductionDriver.sanityCheck(ReductionDriver.kt:277)
at org.perses.reduction.ReductionDriver.reduce(ReductionDriver.kt:107)
at org.perses.Main.main(Main.java:65)
The clang-27137 benchmark failed with an out-of-memory error (16 GB of RAM).
test command:
./dragonwell-11.0.19.15+7-GA/bin/java -jar perses_deploy.jar --threads 1 --test-script $PWD/reduce.sh --input-file test.c -o result
result:
[11:12:40] [INFO ] Tree Building: Start building spar-tree from input file SourceFile{file=FileWithContent{file=/root/test.c}, lang=LanguageC{name=c, extensions=[c], defaultCodeFormatControl=SINGLE
[11:12:40] [INFO ] Tree Building: Step 1: Antlr parsing.
Exception in thread "main" org.perses.grammar.AntlrFailureException: Error in parsing file: <in memory>
Details: recognizer: org.perses.grammar.c.PnfCLexer@23e3f5cd
offendingSymbol: null
line: 430
column: 0
msg: token recognition error at: '#u'
Caused by: LexerNoViableAltException('#')
at org.antlr.v4.runtime.atn.LexerATNSimulator.failOrAccept(LexerATNSimulator.java:309)
at org.antlr.v4.runtime.atn.LexerATNSimulator.execATN(LexerATNSimulator.java:230)
at org.antlr.v4.runtime.atn.LexerATNSimulator.match(LexerATNSimulator.java:114)
at org.antlr.v4.runtime.Lexer.nextToken(Lexer.java:141)
at org.antlr.v4.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:169)
at org.antlr.v4.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:152)
at org.antlr.v4.runtime.BufferedTokenStream.consume(BufferedTokenStream.java:136)
at org.antlr.v4.runtime.Parser.consume(Parser.java:571)
at org.antlr.v4.runtime.Parser.match(Parser.java:205)
at org.perses.grammar.c.PnfCParser.compoundStatement(PnfCParser.java:2378)
at org.perses.grammar.c.PnfCParser.functionDefinition(PnfCParser.java:2535)
at org.perses.grammar.c.PnfCParser.aux_rule__translationUnit_2(PnfCParser.java:9468)
at org.perses.grammar.c.PnfCParser.kleene_plus__translationUnit_3(PnfCParser.java:7316)
at org.perses.grammar.c.PnfCParser.translationUnit(PnfCParser.java:7066)
at org.perses.grammar.c.PnfCParserFacade.startParsing(PnfCParserFacade.java:48)
at org.perses.grammar.c.PnfCParserFacade.startParsing(PnfCParserFacade.java:25)
at org.perses.grammar.AbstractDefaultParserFacade$parseReader$3.apply(AbstractDefaultParserFacade.kt:70)
at org.perses.grammar.AbstractDefaultParserFacade$parseReader$3.apply(AbstractDefaultParserFacade.kt:65)
at org.perses.grammar.AbstractDefaultParserFacade$Companion.parseReader(AbstractDefaultParserFacade.kt:151)
at org.perses.grammar.AbstractDefaultParserFacade.parseReader(AbstractDefaultParserFacade.kt:65)
at org.perses.grammar.AbstractParserFacade.parseString(AbstractParserFacade.kt:86)
at org.perses.grammar.AbstractParserFacade.parseString$default(AbstractParserFacade.kt:84)
at org.perses.reduction.AbstractProgramReductionDriver$Companion.createSparTree(AbstractProgramReductionDriver.kt:607)
at org.perses.reduction.RegularProgramReductionDriver$Companion.create(RegularProgramReductionDriver.kt:116)
at org.perses.Main.createReductionDriver(Main.kt:81)
at org.perses.AbstractMain.internalRun(AbstractMain.kt:33)
at org.perses.util.cmd.AbstractMain.run(AbstractMain.kt:53)
at org.perses.Main$Companion.main(Main.kt:87)
at org.perses.Main.main(Main.kt)
test.c.txt, generated from syzkaller
I just tried executing Perses on a file with a SQL extension using the command line below:
java -jar perses_deploy.jar --test-script oracle.py --input-file test.sql
This resulted in the following exception:
Exception in thread "main" kotlin.KotlinNullPointerException
at org.perses.reduction.ReductionDriver$Companion.createConfiguration(ReductionDriver.kt:540)
at org.perses.reduction.ReductionDriver.<init>(ReductionDriver.kt:65)
at org.perses.Main.run(Main.java:91)
at org.perses.Main.main(Main.java:47)
While this might be expected, I think it would be useful to explicitly check for supported extensions and throw a more informative exception, or print an error message, when the extension is not supported.
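A hedged sketch of such a check; the extension list, class name, and error text below are illustrative, not Perses' actual API:

```java
import java.util.Set;

public final class ExtensionCheck {
  // Hypothetical set of extensions the bundled grammars support.
  static final Set<String> SUPPORTED = Set.of("c", "java", "rs", "js", "go");

  // Fail fast with an informative message instead of a late NPE.
  static void checkSupported(String fileName) {
    int dot = fileName.lastIndexOf('.');
    String ext = dot < 0 ? "" : fileName.substring(dot + 1);
    if (!SUPPORTED.contains(ext)) {
      throw new IllegalArgumentException(
          "Unsupported file extension '" + ext + "' for input file "
              + fileName + ". Supported extensions: " + SUPPORTED);
    }
  }

  public static void main(String[] args) {
    checkSupported("test.c");          // supported: passes silently
    try {
      checkSupported("test.sql");      // unsupported: informative error
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

Doing this validation right after command-line parsing would turn the KotlinNullPointerException into an actionable message.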
When I performed a reduction on the following Java code, I encountered some issues.
#!/bin/bash
set -o pipefail
set -o nounset
rm T.class
javac T.java \
&& timeout -s 9 5 java T | grep -q '4'
public class T {
  private static int counter() {
    return 100;
  }

  public static void main(String[] args) {
    for (int i = 0; i < counter(); ++i) {
      System.out.println(1<<2);
    }
  }
}
During the reduction, Perses generated the following error message.
$ java -jar perses_deploy.jar --test-script r.sh --input-file T.java
[10:13:12] [INFO ] Tree Building: Start building spar-tree from input file SourceFile{file=/home/qeryu/others/perses_java_issue/T.java, lang=org.perses.grammar.java.LanguageJava@5e3a8624}
[10:13:12] [INFO ] Tree Building: Step 1: Antlr parsing.
[10:13:12] [INFO ] Tree Building: Step 2: Converting parse tree to spar-tree
[10:13:12] [INFO ] Tree Building: Step 3: Simplifying spar-tree
[10:13:12] [INFO ] Tree Building: Finished in TimeSpan{startMillis=1705457592389, endMillis=1705457592904, formatted=0 seconds}
[10:13:12] [INFO ] The reduction process started at 10:13:12 2024/01/17
[10:13:12] [INFO ] Cache setting: query-caching=enabled, edit-caching=enabled
[10:13:12] [INFO ] Reduction algorithm is org.perses.reduction.reducer.PersesNodePrioritizedDfsReducer
[10:13:13] [SEVERE ] ***** ***** ***** ***** ***** ***** ***** ***** ***** ***** ***** ***** *****
[10:13:13] [SEVERE ] *
[10:13:13] [SEVERE ] * The initial sanity check failed.
[10:13:13] [SEVERE ] * The files have been saved, and you can check them at:
[10:13:13] [SEVERE ] * /tmp/1705457593470-0
[10:13:13] [SEVERE ] *
[10:13:13] [SEVERE ] ***** ***** ***** ***** ***** ***** ***** ***** ***** ***** ***** ***** *****
Exception in thread "main" java.lang.IllegalStateException
at org.perses.reduction.ReductionDriver.sanityCheck(ReductionDriver.kt:343)
at org.perses.reduction.ReductionDriver.reduce(ReductionDriver.kt:114)
at org.perses.Main.run(Main.java:92)
at org.perses.Main.main(Main.java:47)
Upon checking /tmp/1705457593470-0, I found that the Java code was parsed as follows:
...
{
System
.
out
.
println
(
1
<
<
2
)
;
}
...
In lines 11-12 of the dump, the left-shift operator was split into two less-than signs, which I believe is an incorrect parse. When reducing C code that uses shift operators, they are parsed correctly.
Could this parsing error be due to an issue with how I'm using Perses?
I tried using Perses to reduce JavaScript code, but no matter how many lines of code there are in the JavaScript file, the final result of the reduction is empty. I don't know which step is wrong. Can you help me?
Executed command:
java -jar perses_deploy.jar -t test.sh -i test.js
test.sh:
#!/bin/bash
/home/quickjs/quickjs-2021-03-27/qjs test.js
test.js:
print("a");
the result:
[19:10:14] [INFO ] The command-line options are: "--alg": "perses_node_priority_with_dfs_delta", "--append-to-progress-dump-file": "false", "--call-creduce": "false", "--call-formatter": "false", "--creduce-cmd": "creduce", "--default-delta-debugger-for-kleene": "DFS", "--designated-parser-facade-class-name": "", "--edit-caching": "true", "--enable-lightweight-refreshing": "false", "--enable-line-slicer": "false", "--enable-token-slicer": "false", "--enable-tree-slicer": "false", "--enable-vulcan": "false", "--fixpoint": "true", "--format-cmd": "", "--help, -h": "false", "--input-file, --input, -i": "test.js", "--language-ext-jars": "[]", "--list-algs": "false", "--list-langs": "false", "--list-parser-facades": "false", "--list-verbosity-levels": "false", "--max-bfs-depth-for-regular-rule-node": "5", "--max-edit-count-for-regular-rule-node": "100", "--non-deletion-iteration-limit": "10", "--pass-level-caching": "false", "--profile": "false", "--query-cache-refresh-threshold": "0", "--query-cache-type": "COMPACT_QUERY_CACHE", "--query-caching": "AUTO", "--reparse-each-iteration": "true", "--script-execution-keep-waiting-after-timeout": "true", "--script-execution-timeout-in-seconds": "600", "--stop-at-first-compatible-child-for-regular-rule-node": "false", "--test-script, --test, -t": "test.sh", "--threads": "auto", "--verbosity": "INFO", "--version": "false", "--vulcan-fixpoint": "false", "--window-size": "4"
[19:10:14] [INFO ] Tree Building: Start building spar-tree from input file SourceFile{file=FileWithContent{file=/home/reduce/test.js}, lang=LanguageJavaScript{name=javascript, extensions=[javascript, js], defaultCodeFormatControl=COMPACT_ORIG_FORMAT}}
[19:10:14] [INFO ] Tree Building: Step 1: Antlr parsing.
[19:10:15] [INFO ] Tree Building: Step 2: Converting parse tree to spar-tree
[19:10:15] [INFO ] Tree Building: Step 3: Simplifying spar-tree
[19:10:15] [INFO ] Tree Building: Finished in TimeSpan{startMillis=1704712214979, endMillis=1704712215071, formatted=0 seconds}
[19:10:15] [INFO ] The reduction process started at 19:10:15 2024/01/08
[19:10:15] [INFO ] Cache setting: query-caching=disabled, edit-caching=enabled, query-cache=compact_query_cache
[19:10:15] [INFO ] Reduction algorithm is perses_node_priority_with_dfs_delta
[19:10:15] [INFO ] The number of lexemes in the token factory is is 5
[19:10:15] [INFO ] Reduction Started at 19:10:15 2024/01/08
[19:10:15] [INFO ] perses_node_priority_with_dfs_delta started at 19:10:15 2024/01/08. #tokens=4
[19:10:15] [INFO ] Fixpoint[1]:Reducer[perses_node_priority_with_dfs_delta]: New fixpoint iteration started. #Tokens=4
[19:10:15] [INFO ] Fixpoint[1]:Reducer[perses_node_priority_with_dfs_delta]: Reduce node (#leaves=4)
[19:10:15] [INFO ] perses_node_priority_with_dfs_delta ended at 19:10:15 2024/01/08. #old=4, #new=0
[19:10:15] [INFO ] Fixpoint[1]:Reducer[perses_node_priority_with_dfs_delta]: New minimal, delete 4 tokens, ratio=0/4=0.00%
[19:10:15] [INFO ] Fixpoint[1]:Reducer[perses_node_priority_with_dfs_delta]: Reduce node (#leaves=0): queue=0, delete 4 tokens, ratio=0/4=0.00%
[19:10:15] [INFO ] Fixpoint[1]:Reducer[perses_node_priority_with_dfs_delta]: Fixpoint iteration finished, delete 4 tokens, ratio=0/4=0.00%
[19:10:15] [INFO ] Rebuilding spar-tree.
[19:10:15] [INFO ] perses_node_priority_with_dfs_delta started at 19:10:15 2024/01/08. #tokens=0
[19:10:15] [INFO ] Fixpoint[2]:Reducer[perses_node_priority_with_dfs_delta]: New fixpoint iteration started. #Tokens=0
[19:10:15] [INFO ] perses_node_priority_with_dfs_delta ended at 19:10:15 2024/01/08. #old=0, #new=0
[19:10:15] [INFO ] Fixpoint[2]:Reducer[perses_node_priority_with_dfs_delta]: Fixpoint iteration finished, delete 0 tokens, ratio=0/4=0.00%
[19:10:15] [INFO ] Reduction ended at 19:10:15 2024/01/08
[19:10:15] [INFO ] Elapsed time is 0 seconds
[19:10:15] [INFO ] Removed 4 token(s). Reduction ratio is 0/4=0.00%
[19:10:15] [INFO ] Test script execution count: 2
It would be great to add SQL support to Perses (e.g., I noticed that several SQL dialects are available at https://github.com/antlr/grammars-v4/tree/master/sql). What would be the estimated effort for adding a dialect?
I followed the guidance but failed to build the project. I got error messages like this: "download error: Caught javax.net.ssl.SSLException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty (Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty) while downloading https://repo1.maven.org/maven2/com/google/guava/guava/28.1-jre/guava-28.1-jre.pom". Then I opened one of the URLs and found that the website returned a 404 error.
<dependency>
  <groupId>com.guardsquare</groupId>
  <artifactId>proguard-base</artifactId>
  <version>7.0.0</version>
  <type>pom</type>
</dependency>