SPADE: Support for Provenance Auditing in Distributed Environments
License: GNU General Public License v3.0
I tried to add a ProcMon reporter like this:
-> add reporter ProcMon ./bin/test.CSV
Adding reporter ProcMon... failed
but it failed.
Here is the log:
java.lang.NullPointerException
at spade.reporter.ProcMon.launch(ProcMon.java:120)
at spade.core.Kernel.addReporterCommand(Kernel.java:1028)
at spade.core.Kernel.addCommand(Kernel.java:1192)
at spade.core.Kernel.executeCommand(Kernel.java:621)
at spade.core.Kernel$LocalControlConnection.run(Kernel.java:2059)
at java.base/java.lang.Thread.run(Thread.java:835)
May 23, 2021 9:18:30 PM spade.core.Kernel addReporterCommand Error: Unable to launch reporter
How can I solve this problem?
Hello.
I am trying to add the ProcMon reporter (on Linux).
I got the log file (from ProcMon installed on my Windows machine) in CSV format.
However, this file does not have all the columns that are specified in ProcMon.java. The following are the only columns in my file: Time of Day, PID, Operation, Process Name, Detail, Result.
Can anyone guide me as to what I should do? Am I exporting the right log file?
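A sanity check that may help here and in the previous report: validate the CSV header before launching the reporter. This is a minimal, self-contained sketch (not SPADE's actual ProcMon code), and the REQUIRED column list is a hypothetical example based on the columns mentioned in this thread; adjust it to whatever ProcMon.java actually expects.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ProcMonHeaderCheck {
    // Hypothetical required columns; compare against the list in ProcMon.java.
    static final Set<String> REQUIRED = new HashSet<>(Arrays.asList(
            "Time of Day", "Process Name", "PID", "Operation", "Path", "Result", "Detail"));

    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
            String header = reader.readLine();
            if (header == null) {
                System.err.println("Empty file: nothing for the reporter to parse.");
                return;
            }
            Set<String> present = new HashSet<>();
            for (String column : header.split(",")) {
                present.add(column.replace("\"", "").trim()); // strip quotes ProcMon adds
            }
            Set<String> missing = new HashSet<>(REQUIRED);
            missing.removeAll(present);
            if (missing.isEmpty()) {
                System.out.println("Header looks complete.");
            } else {
                System.err.println("Missing columns: " + missing
                        + " -- re-export from Process Monitor with these columns selected.");
            }
        }
    }
}

If columns are missing, the fix is on the Windows side: add the columns in Process Monitor before exporting the CSV.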
In /home/wajih/projects/SPADE/src/spade/core/Kernel.java
there are several synchronized collections at lines 311-320:
// Basic initialization
reporters = Collections.synchronizedSet(new HashSet<AbstractReporter>());
storages = Collections.synchronizedSet(new HashSet<AbstractStorage>());
removereporters = Collections.synchronizedSet(new HashSet<AbstractReporter>());
removestorages = Collections.synchronizedSet(new HashSet<AbstractStorage>());
transformers = Collections.synchronizedList(new LinkedList<AbstractTransformer>());
filters = Collections.synchronizedList(new LinkedList<AbstractFilter>());
sketches = Collections.synchronizedSet(new HashSet<AbstractSketch>());
remoteSketches = Collections.synchronizedMap(new HashMap<String, AbstractSketch>());
serverSockets = Collections.synchronizedList(new LinkedList<ServerSocket>());
According to the Oracle Java 7 API specification, these collections must be iterated inside a synchronized block; failing to do so can result in non-deterministic behavior.
For example, at line 312 a synchronizedSet is created for reporters,
but it is iterated in an unsynchronized manner at line 666:
for (AbstractReporter reporter : reporters) {
String arguments = reporter.arguments;
configWriter.write("add reporter " + reporter.getClass().getName().split("\\.")[2]);
if (arguments != null) {
configWriter.write(" " + arguments);
}
configWriter.write("\n");
}
It should be
synchronized(reporters){
for (AbstractReporter reporter : reporters) {
String arguments = reporter.arguments;
configWriter.write("add reporter " + reporter.getClass().getName().split("\\.")[2]);
if (arguments != null) {
configWriter.write(" " + arguments);
}
configWriter.write("\n");
}
}
The same should be done for every synchronized collection that is iterated. I will be happy to submit a pull request with these fixes if you agree.
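An alternative worth considering, instead of adding synchronized blocks at every iteration site: replace the wrappers with java.util.concurrent collections, whose iterators are safe without external locking. A sketch of what the initialization block could look like under that approach (assuming the fields are declared as the Set/List/Map interfaces; this is not the current Kernel code):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CopyOnWriteArraySet;

// Basic initialization, without wrapper synchronization.
// Copy-on-write iterators are snapshot-based: they never throw
// ConcurrentModificationException and need no synchronized block. Writes copy
// the backing array, which is acceptable here because these collections are
// mutated rarely (add/remove commands) but iterated often (e.g. config save).
reporters = new CopyOnWriteArraySet<AbstractReporter>();
storages = new CopyOnWriteArraySet<AbstractStorage>();
removereporters = new CopyOnWriteArraySet<AbstractReporter>();
removestorages = new CopyOnWriteArraySet<AbstractStorage>();
transformers = new CopyOnWriteArrayList<AbstractTransformer>();
filters = new CopyOnWriteArrayList<AbstractFilter>();
sketches = new CopyOnWriteArraySet<AbstractSketch>();
remoteSketches = new ConcurrentHashMap<String, AbstractSketch>();
serverSockets = new CopyOnWriteArrayList<ServerSocket>();

Either approach fixes the iteration race; the synchronized-block version keeps the current types, while this one trades slightly costlier writes for simpler call sites.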
The Makefile has the following line:
@rm -rf src/spade/reporter/*.h lib/libLinuxFUSE.* lib/libMacFUSE.*
which removes the uthash.h file that is necessary for compilation:
src/spade/reporter/uthash.h
This leads to a compilation error:
src/spade/reporter/spadeSocketBridge.c:237:20: fatal error: uthash.h: No such file or directory
#include "uthash.h"
^
compilation terminated.
make: *** [lib/spadeSocketBridge] Error 1
make --no-print-directory -f src/spade/reporter/audit/kernel-modules/Makefile
make -C /lib/modules/3.10.0-957.el7.x86_64/build M=/home/jack/code/spade/SPADE-tc-e5/src/spade/reporter/audit/kernel-modules modules
make: *** /lib/modules/3.10.0-957.el7.x86_64/build: No such file or directory. Stop.
make[1]: *** [all] Error 2
make: *** [audit-kernel-module] Error
I recently set up an environment to run the SPADE project for testing purposes, but had no luck: the Audit reporter didn't collect any logs from Linux Audit, and no errors were reported.
I then tried to run lib/spadeSocketBridge manually, and even when I executed it with root privileges, the program seemed unable to connect to the AF_UNIX socket of audispd. The error message was:
./lib/spadeSocketBridge: Unable to connect to the socket. Error: Permission denied
To clarify, I have followed the instructions to set the permissions of auditctl and lib/spadeSocketBridge, and have also set active = yes in /etc/audisp/plugins.d/af_unix.conf. My testing environment is Ubuntu 14.04 built with Vagrant (ubuntu/trusty64).
Thank you.
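For anyone debugging the same failure, a standalone connectivity check can separate a permissions problem from a missing or dead socket without involving SPADE. This is a minimal sketch assuming JDK 16+ (which added Unix-domain socket support) and the default audispd socket path; neither assumption comes from the report above.

import java.io.IOException;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class AudispdSocketCheck {
    public static void main(String[] args) {
        // Default path used by the audisp af_unix plugin; pass another path to override.
        Path socketPath = Path.of(args.length > 0 ? args[0] : "/var/run/audispd_events");
        if (!Files.exists(socketPath)) {
            System.err.println("Socket file does not exist: is auditd running with the "
                    + "af_unix plugin active?");
            return;
        }
        try (SocketChannel channel = SocketChannel.open(StandardProtocolFamily.UNIX)) {
            channel.connect(UnixDomainSocketAddress.of(socketPath));
            System.out.println("Connected: socket and permissions look fine.");
        } catch (IOException e) {
            // "Permission denied" points at the socket file's mode/ownership;
            // "Connection refused" points at no listener behind the socket file.
            System.err.println("Connect failed: " + e.getMessage());
        }
    }
}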
In "Available Filters" section of Wiki, change heading form "OPM2PROV" -> "OPM2Prov". Otherwise the user will assume that filter name is OPM2PROV
and try to add filter using the command add filter OPM2PROV position=1
which is certainly wrong because class name is OPM2Prov
.
https://github.com/ashish-gehani/SPADE/wiki/Available%20filters#opm2prov
Can my Ubuntu desktop and my Windows 10 virtual machine use the same Neo4j database?
Or do I have to set up the database in each of the two OSes separately?
Can someone please explain why this error appears:
"spade.core.Kernel addCommand SEVERE: Unable to initialize storage!"
when I add Graphviz Storage with the following command:
add storage Graphviz outputFilePath=/tmp/provenance.dot
SPADE doesn't let the Kernel and JVM exit gracefully when SPADE is stopped through the bin/spade script (hooks are executed). We should rename "bin/spade stop" to "bin/spade kill" and write a "bin/spade stop" that sends SIGTERM.
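For context on why SIGTERM matters here: the JVM runs registered shutdown hooks on SIGTERM but not on SIGKILL, so a stop command that sends SIGTERM gives the Kernel a chance to flush buffers and close storages. A minimal illustration (not SPADE's code):

public class GracefulStopDemo {
    public static void main(String[] args) throws InterruptedException {
        // Runs on normal exit and on SIGTERM ("kill <pid>"), but NOT on SIGKILL
        // ("kill -9 <pid>"), which is why a kill-based stop skips cleanup.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.println("Shutting down: flush buffers, close storages...")));
        System.out.println("Running; send SIGTERM to this process to see the hook fire.");
        Thread.sleep(60_000);
    }
}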
Whenever a Graphviz storage is removed, the numbers of edges and vertices printed at the controller are precisely double the numbers of vertices and edges in the corresponding dot file. I have not tested this with other storages.
I got an "Error: Failed to execute query: Sanitization'" problem when using the Sanitization transformer to do the query. The steps to reproduce the problem are as follows.
Overall, I was using the VM generated by SPADE's Vagrant box, with the control panel opened by "./bin/spade control"
and the query panel opened by "./bin/spade query".
The "/home/vagrant/ls.log" is available through this link:
ls.log
And the corresponding SPADE error log is here:
SPADE_03.08.2021-14.21.23.log
I ask this question because I don't see any config file for BerkeleyDB in /cfg. I need to use the provenance stored in BerkeleyDB to do some intrusion detection work.
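In case it helps while the config question is open: if the storage is Berkeley DB Java Edition (an assumption; check which binding SPADE actually uses), the environment directory can be inspected directly with the com.sleepycat.je API. A read-only sketch that just lists the databases in an environment directory:

import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import java.io.File;

public class InspectBerkeleyDB {
    public static void main(String[] args) {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setReadOnly(true); // inspect only; never write to live provenance data
        Environment env = new Environment(new File(args[0]), envConfig);
        try {
            // List the databases created inside this environment directory.
            for (String name : env.getDatabaseNames()) {
                System.out.println("database: " + name);
            }
        } finally {
            env.close();
        }
    }
}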
When I try to run the command "add storage Neo4j /tmp/spade.graph_db" as described in the documentation, I get the error "Adding storage Neo4j... Unable to find/load class".
I can successfully add other storages such as Graphviz, and looking into the code has not helped. Any suggestions on how I might be able to fix this?
Hello,
I have a question: is SPADE able to record the provenance of a distributed system?
For example, a microservice-oriented system? I also want to record the provenance of BitTorrent and Bitcoin test networks. Is that possible? Do you have documentation on this?
Thanks.
This will require updates to the build code that uses it at installation time.
I am working on CamFlow/SPADE integration and trying to build SPADE on Fedora 27, but it fails during the reporter build stage with the following error:
default: --- Built Reporters ---
default: gcc -o lib/spadeAuditBridge src/spade/reporter/spadeAuditBridge.c
default: src/spade/reporter/spadeAuditBridge.c: In function ‘command_line_option’:
default: src/spade/reporter/spadeAuditBridge.c:225:12: warning: implicit declaration of function ‘strptime’; did you mean ‘strftime’? [-Wimplicit-function-declaration]
default: if(strptime(dirTimeBuf, "%Y-%m-%d:%H:%M:%S", &temp_tm) == 0) {
default: ^~~~~~~~
default: strftime
default: src/spade/reporter/spadeAuditBridge.c: In function ‘main’:
default: src/spade/reporter/spadeAuditBridge.c:520:13: warning: implicit declaration of function ‘get_max_pid’; did you mean ‘getppid’? [-Wimplicit-function-declaration]
default: max_pid = get_max_pid() + 1;
default: ^~~~~~~~~~~
default: getppid
default: make -f src/spade/reporter/audit/kernel-modules/Makefile
default: make[1]: Entering directory '/home/vagrant/workspace/SPADE'
default: make -C /lib/modules/4.13.12-300.fc27.x86_64/build M=/home/vagrant/workspace/SPADE/src/spade/reporter/audit/kernel-modules modules
default: make[2]: Entering directory '/usr/src/kernels/4.13.12-300.fc27.x86_64'
default: CC [M] /home/vagrant/workspace/SPADE/src/spade/reporter/audit/kernel-modules/netio.o
default: /home/vagrant/workspace/SPADE/src/spade/reporter/audit/kernel-modules/netio.c: In function ‘copy_uint32_t_from_user’:
default: /home/vagrant/workspace/SPADE/src/spade/reporter/audit/kernel-modules/netio.c:261:9: error: implicit declaration of function ‘copy_from_user’; did you mean ‘copy_from_iter’? [-Werror=implicit-function-declaration]
default: return copy_from_user(dst, src, sizeof(uint32_t));
default: ^~~~~~~~~~~~~~
default: copy_from_iter
default: /home/vagrant/workspace/SPADE/src/spade/reporter/audit/kernel-modules/netio.c: In function ‘find_sys_call_table’:
default: /home/vagrant/workspace/SPADE/src/spade/reporter/audit/kernel-modules/netio.c:238:17: warning: ignoring return value of ‘kstrtoul’, declared with attribute warn_unused_result [-Wunused-result]
default: kstrtoul(sys_string, 16, &syscall_table_address);
default: ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
default: cc1: some warnings being treated as errors
default: make[3]: *** [scripts/Makefile.build:309: /home/vagrant/workspace/SPADE/src/spade/reporter/audit/kernel-modules/netio.o] Error 1
default: make[2]: Leaving directory '/usr/src/kernels/4.13.12-300.fc27.x86_64'
default: make[2]: *** [Makefile:1516: _module_/home/vagrant/workspace/SPADE/src/spade/reporter/audit/kernel-modules] Error 2
default: make[1]: Leaving directory '/home/vagrant/workspace/SPADE'
default: make[1]: *** [src/spade/reporter/audit/kernel-modules/Makefile:4: all] Error 2
default: make: *** [Makefile:128: audit-kernel-module] Error 2
I created a Vagrantfile to reproduce the issue; the provisioning steps for SPADE are:
git clone https://github.com/ashish-gehani/SPADE.git
sudo dnf -y -v install audit fuse-devel fuse-libs git iptables kernel-devel-`uname -r` lsof uthash-devel
sudo dnf -y -v install wget
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.rpm"
sudo dnf -y -v install ./jdk-8u161-linux-x64.rpm
cd SPADE
./configure
Hello everyone,
I am new to SPADE. I have recorded a CSV log file with ProcMon, and I am trying to open it on an Ubuntu Linux box using the SPADE ProcMon reporter, but I am having trouble doing that:
~/SPADE/bin$ ./spade control
SPADE 3.0 Control Client
Available commands:
add reporter|storage <class name> <initialization arguments>
add analyzer|sketch <class name>
add filter|transformer <class name> position=<number> <initialization arguments>
remove reporter|analyzer|storage|sketch <class name>
remove filter|transformer <position number>
list reporters|storages|analyzers|filters|sketches|transformers|all
config load|save <filename>
exit
-> add reporter ProcMon /home/user/Desktop/Logfile.CSV
Adding reporter ProcMon... failed
Thanks in advance!
I ran SPADE once without any problem, but on subsequent runs SPADE gives out this:
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
YAJSW: yajsw-stable-11.06
OS : Mac OS X/10.7.5/x86_64
JVM : Oracle Corporation/1.7.0_25//Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home/jre/64
Aug 29, 2013 10:31:20 PM org.apache.commons.vfs2.VfsLog info
INFO: Using "/var/folders/y6/p9sj6_q14dj1r3ppd2mvgvdr000915/T/vfs_cache" as temporary files store.
WARNING|wrapper|SPADE|13-08-29 22:31:20|YAJSW: yajsw-stable-11.06
WARNING|wrapper|SPADE|13-08-29 22:31:20|OS : Mac OS X/10.7.5/x86_64
WARNING|wrapper|SPADE|13-08-29 22:31:20|JVM : Oracle Corporation/1.7.0_25//Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home/jre/64
INFO|wrapper|SPADE|13-08-29 22:31:22|working dir /Users/qureshi/workspace/SPADE/bin/.
INFO|wrapper|SPADE|13-08-29 22:31:22|starting
YAJSW: yajsw-stable-11.06
OS : Mac OS X/10.7.5/x86_64
JVM : Oracle Corporation/1.7.0_25//Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home/jre/64
createRWfile /tmp/out_-5782407137304340766$1377840682522
createRWfile /tmp/err_-5782407137304340766$1377840682522
INFO|wrapper|SPADE|13-08-29 22:31:23|started process 9833
INFO|wrapper|SPADE|13-08-29 22:31:23|started process with pid 9833
createRWfile /tmp/in_-5782407137304340766$1377840682522
INFO|9833/0|SPADE|13-08-29 22:31:24|java.io.IOException: Device not configured
INFO|9833/0|SPADE|13-08-29 22:31:24|    at java.io.FileInputStream.readBytes(Native Method)
INFO|9833/0|SPADE|13-08-29 22:31:24|    at java.io.FileInputStream.read(FileInputStream.java:242)
INFO|9833/0|SPADE|13-08-29 22:31:24|    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
INFO|9833/0|SPADE|13-08-29 22:31:24|    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
INFO|9833/0|SPADE|13-08-29 22:31:24|    at org.rzo.yajsw.io.TeeInputStream$Source.run(TeeInputStream.java:221)
INFO|9833/0|SPADE|13-08-29 22:31:24|    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
INFO|9833/0|SPADE|13-08-29 22:31:24|    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
INFO|9833/0|SPADE|13-08-29 22:31:24|    at java.lang.Thread.run(Thread.java:724)
INFO|9833/0|SPADE|13-08-29 22:31:24|[INFO] StandardFileSystemManager - Using "/var/folders/y6/p9sj6_q14dj1r3ppd2mvgvdr000915/T/vfs_cache" as temporary files store.
There is nothing in SPADE logs.
Original issue reported on code.google.com by [email protected] on 30 Aug 2013 at 5:36
Hi,
The following is my scenario.
A collector generates streaming data and sends it to Kafka (a message queue).
I want to create a SPADE reporter that reads data from Kafka and leverages SPADE's built-in Neo4j storage plugin to store the data in Neo4j. Because the data is streaming, the reporter can always obtain data from Kafka and thus will not stop. My goal is to query Neo4j while the reporter is still working.
My questions:
While reading data from Kafka, will SPADE store data to Neo4j in parallel, or will it start storing data to Neo4j only after no data is left in Kafka?
Is it possible to query Neo4j while SPADE is storing data to Neo4j?
Thanks.
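On the first question, as far as I understand SPADE's design (worth confirming with the maintainers): a reporter only places vertices and edges into a buffer, and the Kernel drains that buffer into filters and storages continuously, so writing to Neo4j proceeds in parallel with reading from Kafka rather than waiting for the topic to empty. A rough sketch of such a reporter, assuming the AbstractReporter API (launch/shutdown plus buffered putVertex/putEdge) and hypothetical broker and topic names:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import spade.core.AbstractReporter;

public class KafkaReporter extends AbstractReporter {
    private volatile boolean running;

    @Override
    public boolean launch(String arguments) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("group.id", "spade-kafka-reporter");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        running = true;
        new Thread(() -> {
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("provenance")); // hypothetical topic
                while (running) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // Parse record.value() into vertices and edges, then hand them
                        // to the Kernel's buffer, which storages drain concurrently:
                        // putVertex(...); putEdge(...);
                    }
                }
            }
        }).start();
        return true;
    }

    @Override
    public boolean shutdown() {
        running = false;
        return true;
    }
}

On the second question: whether Neo4j can be queried concurrently depends on how it is opened; an embedded store directory can only be open in one process at a time, so concurrent queries would have to go through SPADE's own query interface or a Neo4j server rather than a second embedded connection.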
By default REORDERING_WINDOW is 10000, and the resulting latency is too long (around 10-30 minutes). If we change it to 100, we wonder whether it will affect traceability.
Also, to accommodate different client systems, this value may need to be dynamic.
#define REORDERING_WINDOW 10000
// if we have enough events in the buffer..
while(HASH_COUNT(event_buf) > REORDERING_WINDOW)
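To make the trade-off concrete: the bridge buffers events and only releases the oldest once more than REORDERING_WINDOW newer events have arrived, so a larger window tolerates more out-of-order delivery at the cost of latency. Below is the same mechanism sketched in Java with the window as a constructor parameter, as the dynamic-value suggestion above implies (an illustration, not the bridge's actual code):

import java.util.TreeMap;
import java.util.function.Consumer;

// Buffers events keyed by sequence number and emits the oldest whenever more
// than `window` events are buffered: a bigger window absorbs more reordering
// but delays every emission proportionally.
public class ReorderingBuffer<E> {
    private final TreeMap<Long, E> buffer = new TreeMap<>();
    private final int window;
    private final Consumer<E> emit;

    public ReorderingBuffer(int window, Consumer<E> emit) {
        this.window = window;
        this.emit = emit;
    }

    public void add(long sequence, E event) {
        buffer.put(sequence, event);
        while (buffer.size() > window) {
            emit.accept(buffer.pollFirstEntry().getValue());
        }
    }

    public void flush() { // drain whatever remains at end of stream
        while (!buffer.isEmpty()) {
            emit.accept(buffer.pollFirstEntry().getValue());
        }
    }
}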
If we install the latest Neo4j version, whether from source or from a repository, Neo4j is unable to read the SPADE database.
Currently, SPADE writes the graph database in Neo4j record format v3.4.4; however, the latest format version is 4.0.0.
If we want to read the SPADE graph database in the latest Neo4j, we can uncomment the auto-upgrade option. However, if we do that, SPADE won't be able to write to the database again.
for(cursor=0; cursor < strlen(buf); cursor++) {
Replace it with the code below, so that strlen(buf) is computed once instead of on every loop iteration:
size_t buf_len = strlen(buf);
for(cursor=0; cursor < buf_len; cursor++) {
When used in VirtualBox, the Linux Audit infrastructure misses events (even when set to lossless; evidently a known issue in some circles). Thus, SPADE on VirtualBox is not a good setup for obtaining complete traces. This fact should probably be documented somewhere.
Hi, I am new to SPADE. I have compiled it successfully on Ubuntu 16.04 (64-bit).
I tried it as the following steps:
(1) start SPADE
./spade start
nohup: redirecting stderr to stdout
Running SPADE with PID = 9427
(2) start the SPADE control client
./spade control
SPADE 3.0 Control Client
Available commands:
add reporter|storage <class name> <initialization arguments>
add analyzer|sketch <class name>
add filter|transformer <class name> position=<number> <initialization arguments>
set storage <class name>
remove reporter|analyzer|storage|sketch <class name>
remove filter|transformer <position number>
list reporters|storages|analyzers|filters|sketches|transformers|all
config load|save <filename>
exit
(3) add storage Neo4j
-> add storage Neo4j database=/tmp/graph.db
Adding storage Neo4j... done
(4) run the Neo4j server using the bin/neo4j start command.
The Neo4j config file is lib/neo4j-community-3.4.4/conf/neo4j.conf; I have edited it and set dbms.active_database to /tmp/graph.db.
Then, start the Neo4j server as follows:
./neo4j start
Active database: /tmp/graph.db
Directories in use:
home: /xx/SPADE-master/lib/neo4j-community-3.4.4
config: /xx/SPADE-master/lib/neo4j-community-3.4.4/conf
logs: /xx/SPADE-master/lib/neo4j-community-3.4.4/logs
plugins: /xx/SPADE-master/lib/neo4j-community-3.4.4/plugins
import: /xx/SPADE-master/lib/neo4j-community-3.4.4/import
data: /xx/SPADE-master/lib/neo4j-community-3.4.4/data
certificates: /xx/SPADE-master/lib/neo4j-community-3.4.4/certificates
run: /xx/SPADE-master/lib/neo4j-community-3.4.4/run
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
Started neo4j (pid 11598). It is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /xx/SPADE-master/lib/neo4j-community-3.4.4/logs/neo4j.log for current status.
(5) configure the JSON reporter following the link:
add reporter JSON /tmp/provenance.json
Adding reporter JSON... done
(6) However, when I try to query the database, there is nothing!
(6.1) When executing MATCH (n:VERTEX) RETURN n, nothing is returned.
Could someone give some advice? Thanks.
By the way, the pictures in the Wiki documents are unavailable; it would be nice if someone could update them!
In spade query, trying to dump $base fails with an export limit. The suggestion given is to "Please use 'force dump ...' to force the print". This fails; dump force succeeds, so I presume the error message should be changed.
(Also, it would be good to document on the query page that dump $base is the way to dump the whole database, which is useful for debugging.)
-> dump $base
Error: Error evaluating QuickGrail command:
------------------------------------------------------------
Dump export limit set at '4096'. Total vertices and edges requested '14632'.Please use 'force dump ...' to force the print.
------------------------------------------------------------
-> export > /tmp/query_graph_base
Output export path set to '/tmp/query_graph_base' for next query.
-> force dump $base
Error: Error evaluating QuickGrail command:
------------------------------------------------------------
Unsupported command "force" at line 1, column 0
------------------------------------------------------------
-> dump force $base
[succeeds]
Reporter JSON cannot be added:
-> add reporter JSON /tmp/provenance.json
Adding reporter JSON... failed
The Graphviz reporter has a parsing issue. Upon feeding the Graphviz reporter's output to a Graphviz storage, the output should be identical, but it purges a lot of data.
Reproducibility:
1. Start SPADE
2. Start a Graphviz storage and provide an output file
3. Start a Graphviz reporter with the attached file
Expected output:
The generated dot file should be the same as the input file
Output received:
A purged file with only a couple of lines
Original issue reported on code.google.com by [email protected] on 8 Jul 2013 at 11:48
In the document "Storing provenance in a Neo4j graph database", the database is added at /tmp/spade.graph_db. This creates a folder in the /tmp directory of the host machine.
In the document on viewing provenance in a graph database, dbms.active_database is set to the location of your Neo4j database, which is probably wrong. When we set dbms.active_database to /tmp/spade.graph_db, Neo4j searches for that database in the data/databases directory rather than at /tmp/spade.graph_db.
Ideally, the database should be created in the location where Neo4j databases are stored, such as /home/vagrant/SPADE/lib/neo4j-community-3.4.4/data/databases, with spade.graph_db inside it, so the full location would be /home/vagrant/SPADE/lib/neo4j-community-3.4.4/data/databases/spade.graph_db. If Neo4j is installed system-wide, the location would be /var/lib/neo4j/data/databases, again with spade.graph_db inside it.
By default, we should specify only the name of the active database, such as dbms.active_database=spade.graph_db, and Neo4j automatically locates the database in the data/databases folder.
If we add the Neo4j storage with this particular path and set dbms.active_database=spade.graph_db, Neo4j will directly locate and load the database.
Setting up a new development environment requires manual work. We don't have dependency information anywhere except in the wiki, from which the user has to dig it out.
The AutoConf script takes some care of this, as it at least warns the user about missing dependencies. But beyond this, the user has to take care of everything manually, by looking up the docs and issuing commands by hand.
In the long run it'll be desirable to use some automatic software configuration
management to set up a new development environment quickly in an automated
fashion.
Going through various configuration management tools, I've found that Ansible (http://www.ansibleworks.com/configuration-management/) has a low learning curve while being powerful and flexible. It also has support for automatically downloading and installing the Oracle JDK and JRE.
Any thoughts on this?
Original issue reported on code.google.com by [email protected] on 10 Jul 2013 at 8:41
Hi, I installed SPADE on CentOS 7 and stored the result in Neo4j. I added the reporter with Audit fileIO=true. Then I used vim to create a new file F, wrote several lines, and saved the file.
After that, I checked the provenance graph in the Neo4j database. However, I cannot find the correlation between the vim process and the file F. Specifically, I started from the file F node and traversed the graph using BFS without regard to edge direction. The result of the BFS is that I cannot find the vim process. Did I do something wrong?
Additionally, node F's attributes contain a "permissions" value of '0000'. Is that normal?
Thanks.
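For reference, this is the kind of traversal described above: a BFS that ignores edge direction over a plain adjacency map (a hypothetical graph representation, not SPADE's API). One possible explanation for the missing link, worth checking, is that vim may replace the file (write a temporary copy and rename) rather than writing F in place, depending on its backupcopy setting, so the vertex reached from F may not be the one vim wrote.

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class UndirectedReachability {
    // Returns every node reachable from `start` when edges are followed both ways.
    // `adjacency` must already contain both directions: for each edge u -> v,
    // list v under u and u under v.
    static Set<String> reachable(Map<String, List<String>> adjacency, String start) {
        Set<String> visited = new HashSet<>();
        Queue<String> queue = new ArrayDeque<>();
        visited.add(start);
        queue.add(start);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            for (String neighbor : adjacency.getOrDefault(node, List.of())) {
                if (visited.add(neighbor)) { // add() returns false if already visited
                    queue.add(neighbor);
                }
            }
        }
        return visited;
    }
}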
Hello! I'm following the instructions here to instrument a C program to generate provenance. I have a main.c file and run the following commands.
$ LLVM_COMPILER=clang wllvm main.c -o main
$ PATH="$PATH:/usr/lib/llvm-3.6/bin" ./SPADE/bin/llvm/llvmTrace.sh main -monitor-all output
Function name : main
function was retrieved
function was retrieved
Function name : setAtExit
Function name : bufferString
Function name : flushStrings
$ ./output.bc
LLVM ERROR: Program used external function 'LLVMReporter_getThreadId' which could not be resolved!
Unfortunately, I get the error shown above. LLVMReporter_getThreadId appears to be a function within the SPADE source code. Does anyone know what's going wrong here and how to fix it?
I'm not sure if it's relevant, but I'm running Ubuntu 14.04 and installed clang and LLVM with the following commands:
sudo apt-get install -y clang-3.6 clang-3.6-dev llvm-3.6 llvm-3.6-dev llvm-3.6-runtime
and I built the LLVM tracing in SPADE with
make LLVM_INCLUDE_PATH="/usr/lib/llvm-3.6/include" build-linux-llvm
The latest SPADE requires CDM20; where can I find the CDM20 dataset?
No provenance data is being recorded in the given directories, and the "make android-start" and "make android-stop" commands do not work well.
Currently, SPADE allows adding the same reporter multiple times. If two identical reporters are added, you cannot specify which instance of the reporter to remove. (Hint: create a coupling between a reporter and its arguments, as sketched below.)
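The hint could look something like this: key each live reporter by its class name plus its launch arguments so that "remove reporter" can name a specific instance. A sketch against a hypothetical registry (not the current Kernel code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import spade.core.AbstractReporter;

public class ReporterRegistry {
    private final Map<String, AbstractReporter> reporters = new ConcurrentHashMap<>();

    // Couple the reporter with its arguments so identical classes stay distinguishable.
    private static String key(String className, String arguments) {
        return className + "|" + (arguments == null ? "" : arguments);
    }

    public boolean add(String className, String arguments, AbstractReporter reporter) {
        // putIfAbsent also rejects adding the exact same reporter+arguments twice.
        return reporters.putIfAbsent(key(className, arguments), reporter) == null;
    }

    public AbstractReporter remove(String className, String arguments) {
        return reporters.remove(key(className, arguments));
    }
}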
On this wiki page the audit config path is /etc/audisp/plugins.d/af_unix.conf. That should probably be /etc/audit/plugins.d/af_unix.conf.
Hi, I am running Ubuntu Bionic 18.04 and trying to install SPADE. I followed the documentation exactly, installing the requirements and packages needed to run SPADE. Everything works fine, from requirements to downloading, starting, and collecting on Linux, until I try to start real-time collection. The error I get in the log file is listed below:
bin/spadeAuditBridge: Unable to connect to the socket: /var/run/audispd_events. Error: Connection refused
I have tried to run spadeAuditBridge separately and faced the same issue. I have tried reinstalling the entire package, but it still leaves me with the same error. I tried to create the socket manually for the package to connect to, but this did not work either.
In spade\reporter\spadeAuditBridge.c, there is some program logic based on syscall numbers such as 62. Obviously, a syscall number has a different meaning on x86 and x86_64, so is SPADE not compatible with 64-bit systems?
Moreover, what OS distributions and kernel versions have been tested with SPADE?
Problem:
Building the SPADE reporter fails with the following error on Ubuntu 16.04:
--- Built Reporters ---
make -C /lib/modules/4.4.0-203-generic/build M=/root/SPADE/src/spade/reporter/audit/kernel-modules modules
CC [M] /root/SPADE/src/spade/reporter/audit/kernel-modules/netio.o
/root/SPADE/src/spade/reporter/audit/kernel-modules/netio.c: In function ‘nf_spade_log_to_audit’:
/root/SPADE/src/spade/reporter/audit/kernel-modules/netio.c:1596:16: error: implicit declaration of function ‘ipv6_chk_addr’ [-Werror=implicit-function-declaration]
found = ipv6_chk_addr(net, &selected_addr, NULL, 0);
^
cc1: some warnings being treated as errors
scripts/Makefile.build:291: recipe for target '/root/SPADE/src/spade/reporter/audit/kernel-modules/netio.o' failed
make[3]: *** [/root/SPADE/src/spade/reporter/audit/kernel-modules/netio.o] Error 1
Makefile:1471: recipe for target 'module/root/SPADE/src/spade/reporter/audit/kernel-modules' failed
make[2]: *** [module/root/SPADE/src/spade/reporter/audit/kernel-modules] Error 2
src/spade/reporter/audit/kernel-modules/Makefile:19: recipe for target 'all' failed
make[1]: *** [all] Error 2
Makefile:163: recipe for target 'audit-kernel-module' failed
make: *** [audit-kernel-module] Error 2
Suggestion:
Adding the following include to netio.c made it work:
#include </usr/src/linux-headers-4.4.0-31/include/net/addrconf.h>
(The portable form would presumably be #include <net/addrconf.h>, which the kernel build's include path should resolve.)
I tried querying with the Graphviz and H2 storage engines. Neither, apparently, supports querying, judging by the error messages received when trying to configure them for it.
I could not find any mention of this in the wiki, so may I suggest documenting in "Querying SPADE" which backends support querying (or which ones do not).
SPADE produces invalid TTL files. For example, there are lots of closing brackets that have no matching opening bracket:
data:f25181be9786b40c9d3a880b3b9223816a5bd9eefc9861dafea8458710efb883 prov data:type "WasTriggeredBy";
]; .
I've not tested PROV-N, but it might have the same issue.
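A quick way to confirm the imbalance mechanically: scan the generated file and track bracket depth, reporting any close that has no matching open. This is a generic checker sketch; it is unaware of Turtle string literals and comments, so treat its output as a hint rather than a validation.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BracketBalanceCheck {
    public static void main(String[] args) throws IOException {
        int depth = 0;
        int lineNumber = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lineNumber++;
                for (char c : line.toCharArray()) {
                    if (c == '[') depth++;
                    if (c == ']' && --depth < 0) {
                        System.out.println("Unmatched ']' at line " + lineNumber);
                        depth = 0; // reset so later imbalances are still reported
                    }
                }
            }
        }
        if (depth > 0) System.out.println(depth + " '[' bracket(s) never closed.");
    }
}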
Hello,
Is SPADE still available to use through Cytobank to analyze flow cytometry data? I have not found the interactive online platform.
And, for using SPADE through R, are there any vignettes other than the one that exists on SPADE's Bioconductor page?
Thank you
After a fresh pull, ./configure, and make, I see:
$ sudo make install
/bin/sh: 1: pkg-config: not found
test -d /usr/local || mkdir /usr/local
cp -R bin /usr/local
cp -R lib /usr/local
cp -R cfg /usr/local
cp -R log /usr/local
cp: cannot stat ‘log’: No such file or directory
Perhaps the Makefile should include mkdir -p log, or the git repo should include a dummy file in the log directory to ensure it exists (since git can't track the existence of empty directories).
After the build, I tried spade start and spade control. After spade control, it shows this:
$ ./spade control
spade.client.Control Exception when communicating with SPADE Kernel! java.net.ConnectException: Connection refused (Connection refused)