opennars / opennars-for-applications

General reasoning component for applications based on NARS theory.

Home Page: https://cis.temple.edu/~pwang/NARS-Intro.html

License: MIT License

Shell 0.63% Python 29.00% C 50.44% C++ 0.91% CMake 0.81% Jupyter Notebook 18.15% Dockerfile 0.06%
reasoning reasoner nal nars ai artificial-intelligence non-axiomatic-logic non-axiomatic-reasoning procedu

opennars-for-applications's Introduction

Open-NARS is the open-source version of NARS, a general-purpose AI system designed in the framework of a reasoning system. This project is an evolution of the v1.5 system. The mailing list discusses both its theory and implementation.


How to build OpenNARS

Using mvn:

For each of the following projects:

https://github.com/opennars/opennars-parent.git     
https://github.com/opennars/opennars.git     
https://github.com/opennars/opennars-lab.git     
https://github.com/opennars/opennars-applications.git     
https://github.com/opennars/opennars-gui.git

git clone 'project'
cd 'project_dir'
mvn clean install 

Optionally append -Dmaven.javadoc.skip=true to skip Javadoc generation

cd 'project_dir'
mvn exec:java

Alternatively, using the IntelliJ IDEA IDE:

Install git https://git-scm.com/downloads

Install OpenJDK 11 https://jdk.java.net/11/

Install IntelliJ IDEA Community Edition https://www.jetbrains.com/idea/download/

Checkout https://github.com/opennars/opennars.git

Checkout https://github.com/opennars/opennars-lab.git

Checkout https://github.com/opennars/opennars-applications.git

You can either check out within IntelliJ or use GitHub Desktop (available from the GitHub clone button in the repo)

Build opennars

If this is a fresh install you will be prompted to enter the JDK path (where you installed it above). You may also be prompted to update Maven dependencies; do so if prompted.

Build opennars-lab

Select org.opennars.lab.launcher.Launcher as the main entry point

Build opennars-applications

Select org.opennars.applications.Launcher

Application Launchers

The launchers are the easiest way to run the various apps

opennars-lab

Main GUI - Main user interface for NARS

Test Chamber - Simulation environment for testing behaviours

Micro world	- Behaviour learning by simple insect like creature

NAR Pong - The classic pong game

Language Lab - For experimenting with parts of speech (POS) and grammar learning

Perception Test - Pattern matching experiment

Prediction Test - Predicts a waveform - Can be run directly from IntelliJ (current issue with running via the launcher)

Vision - Vision experiment - Can be run directly from IntelliJ (current issue with running via the launcher)

opennars-applications

Main GUI - A simple MIT-licensed GUI

Crossing - A smart city traffic intersection simulation

Identity mapping - An experimental setup for testing aspects of Relational Frame Theory (RFT)

The OpenNARS core is run directly by the Lab and Applications launchers.

Example Narsese files

Here is a link to some Narsese examples, including:

Toothbrush example - how to use a toothbrush to undo a screw?

Detective example - who is the criminal?

https://github.com/opennars/opennars/tree/master/src/main/resources/nal/application

Theory Overview

The Non-Axiomatic Reasoning System (NARS) processes tasks imposed by its environment, which may include human users or other computer systems. Tasks can arrive at any time, and there is no restriction on their contents as long as they can be expressed in Narsese, the I/O language of NARS.

There are several types of tasks:

  • Judgment - To process it means to accept it as the system's belief, as well as to derive new beliefs and to revise old beliefs accordingly.
  • Question - To process it means to find the best answer to it according to current beliefs.
  • Goal - To process it means to carry out some system operations to realize it.
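
In Narsese the task type is marked by the end punctuation. A few illustrative statements, drawn from examples that appear elsewhere in this document:

```
<a --> b>.                      // judgment: accept as a belief
<coffee --> [bad]>?             // question: find the best answer
<{(make * icecreme)} --> cmd>!  // goal: realize via operations
```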

As a reasoning system, the architecture of NARS consists of a memory, an inference engine, and a control mechanism.

The memory contains a collection of concepts, a list of operators, and a buffer for new tasks. Each concept is identified by a term, and contains tasks and beliefs directly on the term, as well as links to related tasks and terms.

The inference engine carries out various types of inference, according to a set of built-in rules. Each inference rule derives certain new tasks from a given task and a belief that are related to the same concept.

The control mechanism repeatedly carries out the working cycle of the system, generally consisting of the following steps:

  1. Select tasks in the buffer to insert into the corresponding concepts, which may include the creation of new concepts and beliefs, as well as direct processing on the tasks.
  2. Select a concept from the memory, then select a task and a belief from the concept.
  3. Feed the task and the belief to the inference engine to produce derived tasks.
  4. Add the derived tasks into the task buffer, and send a report to the environment if a task provides a best-so-far answer to an input question, or indicates the realization of an input goal.
  5. Return the processed belief, task, and concept back to memory with feedback.

All the selections in steps 1 and 2 are probabilistic, in the sense that all the items (tasks, beliefs, or concepts) within the scope of the selection have priority values attached, and the probability for each of them to be selected at the current moment is proportional to its priority value. When a new item is produced, its priority value is determined according to its parent items, as well as the type of mechanism that produces it. At step 5, the priority values of all the involved items are adjusted, according to the immediate feedback of the current cycle.
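The priority-proportional selection described above can be sketched as follows (a minimal illustration in Python with hypothetical items, not the actual memory data structures of NARS):

```python
import random

def select(items):
    """Pick one item with probability proportional to its priority."""
    total = sum(priority for _, priority in items)
    r = random.uniform(0, total)
    acc = 0.0
    for item, priority in items:
        acc += priority
        if r <= acc:
            return item
    return items[-1][0]  # guard against floating-point rounding

# Hypothetical concepts with attached priority values
concepts = [("birdlike", 0.8), ("feathered", 0.15), ("fly", 0.05)]
picked = select(concepts)  # "birdlike" is selected ~80% of the time
```

Over many cycles the high-priority items dominate the selections, while low-priority items are still occasionally chosen, which is what makes the control strategy explorative rather than greedy.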

At the current time, the most comprehensive descriptions of NARS are the books Rigid Flexibility: The Logic of Intelligence and Non-Axiomatic Logic: A Model of Intelligent Reasoning. Various aspects of the system are introduced and discussed in many papers, most of which are available here.

Beginners can start at the following online materials:

Contents

  • core - reasoning engine
  • nal - examples/unit tests

The core is derived from the code of Pei Wang.

Run Requirements

  • Java 8+ (OpenJDK 10 recommended)

Example Files

For an overview of reasoning features, see working examples (tests) in the nal folder, also explained in SingleStepTestingCases and MultiStepExamples.

Development Requirements

  • Maven

Links

opennars-for-applications's People

Contributors

0xc1c4da, arcj137442, ccrock4t, hmlatapie, iciccio, jorisbontje, ntoxeg, patham9, pisaev1, ptrman, ptrojahn, robert-johansson, tonylo1, yuhongsun96


opennars-for-applications's Issues

v0.8.5 portability and ROS module

  • Portability for Android and macOS (and potentially Windows via Cygwin and WSL) has to be tested and ensured again, as well as the x64 instruction target and the TCC and Clang compilers; so far, x86 Linux with GCC has been tested successfully.
  • A Robot Operating System (ROS) module will be added after testing, with a solution that maps YOLO (darknet) detections to Narsese, QR code detection (visp_auto_tracker), potentially some visual SLAM approach (OpenVSLAM or ORB-SLAM) that adds detections to the map with potential ^goto operation support, depth estimation, etc.

Key benefit of the new version: More than 2x speed in event processing, plus a first solution viable for many robotic experiments.

forgetting of high exp items

<(b * cycle) --> TOPICWORD>. {1.0 0.299999999916316}
<(b * energy) --> TOPICWORD>. {1.0 0.2999999662394476}
<(b * cno) --> TOPICWORD>. {1.0 0.29999990822930384}
<(b * mev) --> TOPICWORD>. {1.0 0.2999997505413843}
<(b * sun) --> TOPICWORD>. {1.0 0.299962977058774}
<(b * bethe) --> TOPICWORD>. {1.0 0.29972643541033367}
<(b * cycles) --> TOPICWORD>. {1.0 0.29972643541033367}
<(b * e) --> TOPICWORD>. {1.0 0.29972643541033367}
<(b * two) --> TOPICWORD>. {1.0 0.2992563743470001}
<(b * stars) --> TOPICWORD>. {1.0 0.2992563743470001}
<(b * reaction) --> TOPICWORD>. {1.0 0.2992563743470001}
<(b * cno-i) --> TOPICWORD>. {1.0 0.2992563743470001}
<(b * one) --> TOPICWORD>. {1.0 0.29797861590027436}
<(b * neutrinos) --> TOPICWORD>. {1.0 0.29797861590027436}
<(b * first) --> TOPICWORD>. {1.0 0.29797861590027436}
<(b * emitted) --> TOPICWORD>. {1.0 0.29797861590027436}
<(b * weizsäcker) --> TOPICWORD>. {1.0 0.29450530833337973}
<(b * hydrogen) --> TOPICWORD>. {1.0 0.29450530833337973}
<(b * helium) --> TOPICWORD>. {1.0 0.29450530833337973}
<(b * nucleus) --> TOPICWORD>. {1.0 0.29450530833337973}
<(b * k) --> TOPICWORD>. {1.0 0.29450530833337973}
<(b * decay) --> TOPICWORD>. {1.0 0.29450530833337973}
<(b * cn-cycle) --> TOPICWORD>. {1.0 0.29450530833337973}
<(b * neutrino) --> TOPICWORD>. {1.0 0.29450530833337973}
<(b * reactions) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * chain) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * temperature) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * dominant) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * catalytic) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * nitrogen) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * oxygen) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * positrons) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * away) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * around) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * source) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * approximately) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * produced) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * proposed) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * von) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * also) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * cold) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * years) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * total) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * momentum) --> TOPICWORD>. {1.0 0.2850638794896408}
<(b * called) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * bethe–weizsäcker) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * known) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * fusion) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * convert) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * proton–proton) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * core) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * carbon) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * isotopes) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * catalysts) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * step) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * stable) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * electron) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * involved) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * annihilate) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * electrons) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * transformations) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * loop) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * mass) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * less) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * starts) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * temperatures) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * output) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * much) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * nuclei) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * process) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * carl) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * hans) --> TOPICWORD>. {1.0 0.2593994150290162}
<(b * experimental) --> TOPICWORD>. {1.0 0.2593994150290162}

50

<?x --> electron>?
<electron --> ?x>?
<?x <-> electron>?
10

The questions aren't answered; a question is answered only if the relevant judgement is given.

UDPNAR terminated by signal SIGBUS (Misaligned address error)

UDPNAR crashes when I try to send some beliefs to it; so far it always happens after sending the second line. It was compiled without OpenMP support with LLVM Clang. The exact crash line says

'./NAR UDPNAR 127.0.0.1 50000...' terminated by signal SIGBUS (Misaligned address error)

I send data via a Python socket object from the socket module.
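For reference, a minimal sketch of such a client (host and port taken from the crash line above; the Narsese lines are illustrative):

```python
import socket

# UDPNAR was started as: ./NAR UDPNAR 127.0.0.1 50000
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_narsese(line, host="127.0.0.1", port=50000):
    """Send one line of Narsese to the UDPNAR instance; returns bytes sent."""
    return sock.sendto(line.encode("utf-8"), (host, port))

send_narsese("<(a &/ ^left) =/> g>.")
send_narsese("a. :|:")  # per the report, the crash occurs after the second line
```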

Safe iteration for temporal compounding

Inducing temporal patterns should only select concepts which existed at the cycle start, and should make sure all of them are selected. (This is currently not always the case in master when new concepts are formed in the process.)

FIFO for compound ops removal, investigation

Can compound ops also be directly formed in memory by lifting the restriction that operations can't form concepts?
If so, could the FIFO structure be removed altogether, and what would be the impact of that?

Problem with UDPNAR on macOS

I'm having some trouble with UDPNAR, but there might just be something I'm missing. The UDPNAR test seems to just stop (is it expecting a client?). And when I try toothbrush_demo.py, it stops, seemingly expecting an answer from ONA that it isn't getting. Running on macOS.

Here's where it stops in the UDPNAR test:

>>UDPNAR test start
//UDPNAR started!
Input: <(a &/ ^left) =/> g>. Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
Input: a. :|: occurrenceTime=3 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
Input: g! :|: occurrenceTime=4 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
decision expectation 0.791600 impTruth=(1.000000, 0.900000): future=0 <(a &/ ^left) =/> g>
Input: ^left. :|: occurrenceTime=4 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000

Compiling on BSD

I couldn't compile on a BSD system (DragonFly BSD) because of:

src/NetworkNAR/UDP.c:41:19: error: 'PF_INET' undeclared (first use in this function); did you mean 'AF_INET'?
return socket(PF_INET, SOCK_DGRAM, 0);
^~~~~~~
AF_INET

If you change 'PF_INET' to 'AF_INET' it compiles.

In the Linux kernel these are the same anyway:

/* Protocol families, same as address families. */
#define PF_INET     AF_INET

Overlap issue

[r0b3@toshi OpenNARS-for-Applications]$ ./NAR shell
<{(b * n)} --> x>.
Input: <{(b * n)} --> x>. Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
<{(a * n)} --> x>.
Input: <{(a * n)} --> x>. Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
Derived: <{(b * n)} --> {(a * n)}>. Priority=0.181698 Truth: frequency=1.000000, confidence=0.447514
Derived: <{(a * n)} --> {(b * n)}>. Priority=0.181698 Truth: frequency=1.000000, confidence=0.447514
Derived: <{(b * n)} <-> {(a * n)}>. Priority=0.181698 Truth: frequency=1.000000, confidence=0.447514
Derived: <{(a * n)} <-> {(b * n)}>. Priority=0.181698 Truth: frequency=1.000000, confidence=0.447514
Derived: <{(b * n) (a * n)} --> x>. Priority=0.245189 Truth: frequency=1.000000, confidence=0.810000
Derived: <{(a * n) (b * n)} --> x>. Priority=0.245189 Truth: frequency=1.000000, confidence=0.810000
Derived: <({(b * n)} & {(a * n)}) --> x>. Priority=0.213928 Truth: frequency=1.000000, confidence=0.810000
Derived: <({(a * n)} & {(b * n)}) --> x>. Priority=0.213928 Truth: frequency=1.000000, confidence=0.810000
Derived: <({(b * n)} ~ {(a * n)}) --> x>. Priority=0.022457 Truth: frequency=0.000000, confidence=0.810000
Derived: <({(a * n)} ~ {(b * n)}) --> x>. Priority=0.022457 Truth: frequency=0.000000, confidence=0.810000
Derived: <<{(b * n)} --> $1> =/> <{(a * n)} --> $1>>. Priority=0.162845 Truth: frequency=1.000000, confidence=0.447514
1
performing 1 inference steps:
done with 1 additional inference steps.

5
performing 5 inference steps:
Derived: <{(b * n)} <-> {(a * n)}>. Priority=0.035531 Truth: frequency=1.000000, confidence=0.402762
Derived: <{(a * n)} --> {(a * n)}>. Priority=0.030402 Truth: frequency=1.000000, confidence=0.200269
Derived: <{(a * n)} <-> {(b * n)}>. Priority=0.035527 Truth: frequency=1.000000, confidence=0.402762
Derived: <{(a * n)} <-> {(b * n)}>. Priority=0.035524 Truth: frequency=1.000000, confidence=0.402762
Derived: <(b * n) <-> (a * n)>. Priority=0.042450 Truth: frequency=1.000000, confidence=0.402762
done with 5 additional inference steps.
5
performing 5 inference steps:
Derived: <{(b * n)} <-> {(a * n)}>. Priority=0.035520 Truth: frequency=1.000000, confidence=0.402762
Derived: <(a * n) <-> (b * n)>. Priority=0.042446 Truth: frequency=1.000000, confidence=0.402762
Derived: <(a * n) <-> (b * n)>. Priority=0.009638 Truth: frequency=1.000000, confidence=0.362486
Derived: <(a * n) <-> (b * n)>. Priority=0.009638 Truth: frequency=1.000000, confidence=0.362486
Derived: <(b * n) <-> (a * n)>. Priority=0.009637 Truth: frequency=1.000000, confidence=0.362486
Derived: <(b * n) <-> (a * n)>. Priority=0.009637 Truth: frequency=1.000000, confidence=0.362486
Derived: <{(a * n)} <-> {(a * n)}>. Priority=0.005001 Truth: frequency=1.000000, confidence=0.180242
done with 5 additional inference steps.
5
performing 5 inference steps:
Derived: <{(a * n)} <-> {(a * n)}>. Priority=0.000810 Truth: frequency=1.000000, confidence=0.162218
Derived: <(a * n) <-> (a * n)>. Priority=0.000968 Truth: frequency=1.000000, confidence=0.162218
Derived: <(a * n) <-> (a * n)>. Priority=0.000185 Truth: frequency=1.000000, confidence=0.145996
Derived: <(a * n) <-> (a * n)>. Priority=0.000185 Truth: frequency=1.000000, confidence=0.145996
done with 5 additional inference steps.

It derives an axiom/tautology, Derived: <(a * n) <-> (a * n)>. Priority=0.000968 Truth: frequency=1.000000, confidence=0.162218, which a NAR shouldn't be able to do according to the NARS papers/books.

Operator renaming issue

*setopname index name
causes trouble when an operator with the same name is already in the system.

Also, maybe the operator index can be cleaned up to only use the atom index.

Allow operations to return a Substitution map

Reason: compound ops sometimes want to pass an argument from one op to the next, and using internal state for this isn't elegant.
In Decision_Execute:
First execute the op, and apply the substitution map to the feedback before feeding it as an event.
Also apply the substitution to the next step of the compound op (to become the new input args with variables substituted) and repeat.

Missing Inference rule for NAL 3 decomposition

Both variants of the de-compositional form of this rule are missing:

R2( (M --> P), (M --> S), |-, (M --> (P | S)), Truth_Union )

Here is an example that highlights it:
*volume=100
(({$1} --> [birdlike]) ==> ({$1} --> ([feathered] | [fly]))).
({Tweety} --> [birdlike]).
10
({Tweety} --> [feathered])?

VMEntry crash

Derived: <((((1_4 * 1_4) | (0_3 * 1_5)) ~ (0_3 * 0_3)) | (0_3 * 0_3)) --> (1_3Derived: <( (( (1_31_4) >. :|: occurrenceTime=16603 Priority=0.000000 1_4Truth: frequency=0.000000, confidence=0.024336
) | (0_3 * 1_5)) ~ (0_3 * 0_3)) | (0_3 * 0_3)) <-> Derived: (<(1_3((( *1_4 1_3 * )1_4>. :|: occurrenceTime=16603 Priority=0.000000 Truth: frequency=0.000000, confidence=0.024336
) | (0_3 * 1_5)) ~ (0_3 * 0_3)) | (0_3 * VMEntry stack underflow
0_3Test failed.

Happens with a not-yet-released SymVision test.

delaybeforesend default 50ms wait in Pexpect investigation

Check out what the "timer-wait by default" issue in pexpect/pexpect#307 is all about.
Then try setting it to None as recommended and test on multiple platforms, but also check whether pexpect contains other issues; if so, fall back to Popen, which might suffice anyway.

In the meantime this can be done on the usage side:

import NAR
NAR.delaybeforesend=None

If ONA was slow when used in Python apps, this was the major reason: a 50 ms default delay for each input, without warning pexpect users. Really annoying.

static inline is not necessary and a crime

static inline double or(double a, double b)

is not necessary, the function gets called once every 5 days in CPU time. No need for BS from the 70s, we are living in the far future.

Finishing v0.8.7

  • Memory export and import of the beliefs contained in the k most useful concepts, in a format which will also easily be adoptable for OpenNARS.
  • Allow exporting concepts & knowledge into a networkX graph, for which tools exist for visualization and analysis.
  • Explicit questions about the future and past with :/: and :\: are now supported.

Allow events to imply contingencies (with var intro)

The representation
<A ==> <(B &/ Op) =/> D>>
is extremely useful. While some of it has been demonstrated in v0.9.0, whereby A is an inheritance statement,
this will now be extended to the case where A is a "result sequence" such as ((a &/ ^op) &/ result), also making sure that it can be learned effectively.
Several examples indicate that this is the key to acquiring key parts of natural language (not just for grounding relational statements into sensorimotor experience).

Reason behind choosing c for implementation and limitations of OpenNARS

Hello,
I am new to AGI and hope we will finally create the next Skynet.
I truly want to understand this system and its capabilities.
Basically, my questions are:

  1. Why choose C and not a more "user friendly" language like Python? I believe the goal is to attract researchers, and Python would be a more suitable way to achieve that.
  2. I am currently watching the tutorials, but I want to understand: what are the limitations of NARS? For example, if modeled correctly, can it beat Montezuma's Revenge? Can it beat chess?

Thank you for your answers

Abduction/induction swapped in NAL.h

It looks like definitions for NAL-1 abduction/induction are swapped in NAL.h

(screenshot of NAL.h)

But the truth-value functions are swapped too, and since they are symmetric this probably calculates the inference correctly.

(screenshot of Truth.c)

But this discrepancy can potentially cause problems depending on how it's used.
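The symmetry the report relies on can be sketched in Python. The truth functions below follow the NAL literature and the OpenNARS truth functions (weight-to-confidence conversion w2c with evidential horizon k=1); this is a sketch, not ONA's actual Truth.c code:

```python
def w2c(w, k=1.0):
    """Convert evidence weight to confidence (evidential horizon k)."""
    return w / (w + k)

def abduction(f1, c1, f2, c2):
    # Frequency comes from the first premise; confidence comes from the
    # weight of evidence contributed by both premises.
    return f1, w2c(f2 * c1 * c2)

def induction(f1, c1, f2, c2):
    # Induction is abduction with the premises swapped, which is why
    # swapping both the rule labels and the truth functions cancels out.
    return abduction(f2, c2, f1, c1)

# Two default-truth (1.0, 0.9) premises yield confidence 0.81/1.81 = 0.4475...,
# matching the confidence=0.447514 seen in derivations in this document.
f, c = abduction(1.0, 0.9, 1.0, 0.9)
```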

Allow lists and sequence compression into lists

Allow events like
<A --> [hear]>. :|:
<B --> [hear]>. :|:
to be compressed into
<(A . B) --> [hear]>. :|: instead of (<A --> [hear]> &/ <B --> [hear]>). :|:
via the new list copula Pei suggested, which decreases the syntactic complexity within a modality and makes the representation more natural.

The system should also be able to learn the symmetry and transitivity of this relationship, as it can with product representations. This way product-based relation representations become even more optional, as sensorimotor sequences themselves can encode arbitrary relationships; they just previously weren't elegant to work with.

Inducing and executing compound operations

From
a. :|:
^left. :|:
^forward. :|:
b. :|:
the system should be able to induce
<((a &/ ^left) &/ ^forward) =/> b>.

Now next time when
a. :|: happens
and
b! :|: is the goal,
the system can realize that a. :|: is fulfilled, and can execute ^left and then ^forward in that order.

Previously it needed to chain two separate contingencies to do that, e.g.
<(a &/ ^left) =/> c>.
<(c &/ ^forward) =/> b>.
but this needs more inference steps, and sometimes intermediate feedback isn't necessary.

Anticipation for compound ops

<((a &/ ^op1) &/ ^op2) =/> b>.
should not get negative evidence in the case of
a. :|:
^op2. :|:
b. :|:

Issue is related to #181
and also actually related to #183
Once this "joint issue" is resolved, v0.9.1 will be a piece of cake to finish!

Documentation

I'm trying to understand the code, but it has been hard so far due to a lack of documentation.
The wiki doesn't really explain any of the code, and the code itself doesn't either. Even after looking at the original OpenNARS wiki, a paper about the original OpenNARS, and the ANSNA wiki, it is still difficult.

I believe some documentation about the code, either in the wiki or in the code itself, which explains what the important structures are used for, what they do, how they do it, which parts they use, etc. would help a lot. Especially for people like myself who are new to the OpenNARS project in general.

Compiling to DLL for use in a C# application?

Hi! I've been reading about the various NARS implementations. I primarily work in C# and noticed ALANN, but ONA seems to be the current development focus, correct? Is it possible to compile ONA down to a DLL for embedding within a C# application?

v0.9.0 adjustments

Fixed:

  • v0.9.0 revision printing

Changes:

  • more aggressive control: don't add an event to the cycling events queue if it has the same term as an existing belief with the same stamp and a lower-or-equal truth value (it had to be equal in v0.9.0)
  • TODO: don't derive implications and equivalences if a conclusion statement is also in the preconditions

motor babble prevents correct exec

*motorbabbling=true

<a --> A>. :|:
<(<a --> A> &/ ^left ) =/> <x --> Z> >.

<(<x --> Z> &/ ^pick ) =/> <{(make * icecreme)} --> cmd> >.
<{(make * icecreme)} --> cmd>! :|:
20
<{(make * icecreme)} --> cmd>! :|:
100

gives

Input: <a --> A>. :|: occurrenceTime=2 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
Input: <(<a --> A> &/ ^left) =/> <x --> Z>>. Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
Input: <(<x --> Z> &/ ^pick) =/> <{(make * icecreme)} --> cmd>>. Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
Input: <{(make * icecreme)} --> cmd>! :|: occurrenceTime=6 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
^go executed with args
Input: ^go. :|: occurrenceTime=6 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
performing 20 inference steps:
done with 20 additional inference steps.
Input: <{(make * icecreme)} --> cmd>! :|: occurrenceTime=28 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
Derived: <{(make * icecreme)} --> cmd>! :|: occurrenceTime=28 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
Derived: <x --> Z>! :|: occurrenceTime=28 Priority=1.000000 Truth: frequency=1.000000, confidence=0.810000
Derived: <a --> A>! :|: occurrenceTime=28 Priority=1.000000 Truth: frequency=1.000000, confidence=0.729000
performing 100 inference steps:
done with 100 additional inference steps.
Statistics:
countConceptsMatchedTotal: 1
countConceptsMatchedMax: 1
countConceptsMatchedAverage: 0
currentTime: 129
total concepts: 4
Maximum chain length in concept hashtable = 1

while I expected it to call ^left at some point, which happens as expected when

*motorbabbling=false
is set.

Evidential overlap in structural transformation preventing rule application

Following example fails to answer question

<(coffee * juice) --> opposite>.
<([bad] * [good]) --> opposite>.
<juice --> [good]>.
//check for derivation
<[good] --> (opposite /2 coffee)>? //Answer: <[good] --> (opposite /2 coffee)>.
10
<coffee --> [bad]>? //Answer: None

The above intermediate answer is: Answer: <coffee --> [bad]>. creationTime=8 Truth: frequency=1.000000, confidence=0.373020

Following does answer question but requires a manual entry to create a non-overlapping stamp

<(coffee * juice) --> opposite>.
<([bad] * [good]) --> opposite>.
<juice --> [good]>.
//force new stamp by manual entry
<[good] --> (opposite /2 coffee)>. {1.0 0.3}
100
<coffee --> [bad]>?

Answer: <coffee --> [bad]>. creationTime=41 Truth: frequency=1.000000, confidence=0.323546

A low-confidence truth value was used to confirm that this is not related to low truth values.

There is an additional issue related to term reduction - the related term reduction ( <-> ) rules were disabled for the above test cases.

NAR-shell Eternal goals are not supported

I just tested the commands from here and when I enter:
(NARS --> chatty)! the shell exits with:
Eternal goals are not supported
Test failed.
I tried it with different frequency / confidence values, but with the same outcome.

Working on Ubuntu 20.04 / x86_64 / Kernel 5.4.0-48-generic

show err "unknown option: --gc-sections" when building on macOS

Running build.sh shows the error message:

First stage done, generating RuleTable.c now, and finishing compilation.
ld: unknown option: --gc-sections
collect2: error: ld returned 1 exit status

system:
macos 10.15.2 (19C57)

Searching Google gives some info: clang disguises itself as gcc.
I need to change clang to real gcc, but that is too hard; I failed.

Change compound op representation

From the contingency form (((((a b) c) op1) op2) op3) =/> d to (((a b) c) (op1 op2 op3)) =/> d.
This is both more natural and uses the term tree in a more balanced way, allowing more complex compound ops to be formed.

Broken video link

"OpenNARS for Applications: Architecture and Control, on underline.io" in the wiki is broken

Possible issues with `Table_Add` function?

I've just started learning. Reading the Table_SanityCheck function, it seems that all Terms occurring in
a Table should be different. However, the Table_Add function does not enforce this rule. It neither checks further whether the term occurs as a later item when same_term is false, nor updates the existing item when same_term is true. It just adds the implication as a new item, which may cause it to fail the "sanity check", I think? Am I missing something here?

By the way, I wonder if there's a Discord or Zulip or something for discussions.
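A minimal Python sketch of the invariant in question (hypothetical names and structures, not the actual C Table_Add): an add that keeps terms unique has to either update the existing item in place or scan the whole table before appending, e.g.:

```python
def table_add(table, term, implication):
    """Keep at most one entry per term: revise in place if present."""
    for i, (t, _) in enumerate(table):
        if t == term:
            table[i] = (term, implication)  # replace/revise the existing item
            return
    table.append((term, implication))  # term not present: add as a new item

def table_sanity_check(table):
    """All terms in the table must be distinct."""
    terms = [t for t, _ in table]
    return len(terms) == len(set(terms))

table = []
table_add(table, "<(a &/ ^op) =/> b>", "imp1")
table_add(table, "<(a &/ ^op) =/> b>", "imp2")  # same term: replaced, not duplicated
```

If the add path instead appends unconditionally whenever the first comparison fails, duplicates can accumulate and the sanity check above would fail, which is exactly the concern the report raises.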

TODO for upcoming v0.9.0 release

Reasoner complete.
TODO:

  • Update Transbot driver to include a fully-autonomous test mission.
  • Add and integrate transbot_lidar.py to make NARS also aware of obstacles which haven't been visually detected but were detected by Lidar (instead of using Lidar only for navigation)
  • Atomic negation (-- a) should also parse in old syntax, not just new syntax (! a) and negations of statements like (-- <a --> B>)
  • Check input print when revision is happening with/without volume=100

github topics

Every good project should have topics such as:

  • reasoner
  • nal
  • nars
  • ai
    etc.

Generalized conditioning with derived events

Essentially, move the FIFO after the PQ, to form sequences of derived events, and allow outcomes to correlate with derived events as well.
We have tried this for many years, and now it's almost ready.

#174

Doesn't compile on Cygwin anymore

R0B3@DESKTOP /cygdrive/c/Users/R0B3/dir/github/OpenNARS-for-Applications
$ ./build.sh
rm: cannot remove 'NAR': No such file or directory
rm: cannot remove 'src/RuleTable.c': No such file or directory
src/Concept.c src/Cycle.c src/Decision.c src/Event.c src/FIFO.c src/Globals.c src/HashTable.c src/Implication.c src/Inference.c src/main.c src/Memory.c src/NAL.c src/NAR.c src/Narsese.c src/NetworkNAR/Metric.c src/NetworkNAR/UDP.c src/NetworkNAR/UDPNAR.c src/PriorityQueue.c src/Shell.c src/Stack.c src/Stamp.c src/Stats.c src/Table.c src/Term.c src/Truth.c src/Usage.c src/Variable.c
Compilation started:

R0B3@DESKTOP /cygdrive/c/Users/R0B3/dir/github/OpenNARS-for-Applications
$ ls

it doesn't emit a NAR.exe

How to replicate results for "Reinforcement Learning and Planning via Non-Axiomatic Reasoning"

Dear reviewer!

To replicate the results, any POSIX-compatible OS with a GCC or Clang C compiler and a Python 3 implementation will work.

How to replicate results of the ONA vs. Q-Learning comparison

  1. Please download the following zip file and extract it:
    comparison_QL_ONA_v0.8.7.zip

  2. Run python3 comparison.py master QLearner SkipFolderSetup in the evaluation_PongNonMarkovian_SpaceInvaders folder in order to obtain results of non-Markovian Pong and Space Invaders. Then run python3 plot.py in the folder to obtain both example comparison plots across 10 runs.

  3. Then run python3 comparison.py master QLearner SkipFolderSetup in the evaluation_PongMarkovian_Robot folder to obtain the results for Markovian Pong and Grid Robot. Then run python3 plot.py in the same folder to obtain both example comparison plots across 10 runs.

These steps will take about 30 minutes in total, though of course this varies with processor speed.
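The two runs above can also be scripted. The following sketch is not part of the repository; the folder and script names are taken verbatim from the steps above, and actually executing it requires the extracted contents of comparison_QL_ONA_v0.8.7.zip:

```python
import subprocess

# Evaluation folders from the replication steps above.
FOLDERS = [
    "evaluation_PongNonMarkovian_SpaceInvaders",  # non-Markovian Pong + Space Invaders
    "evaluation_PongMarkovian_Robot",             # Markovian Pong + Grid Robot
]

def replication_commands(folders=FOLDERS):
    # Build the (folder, command) pairs without running anything.
    cmds = []
    for folder in folders:
        cmds.append((folder, ["python3", "comparison.py", "master", "QLearner", "SkipFolderSetup"]))
        cmds.append((folder, ["python3", "plot.py"]))
    return cmds

def run_all(folders=FOLDERS):
    # Execute each command inside its evaluation folder.
    for folder, cmd in replication_commands(folders):
        subprocess.run(cmd, cwd=folder, check=True)
```

Calling run_all() then performs both evaluations in sequence.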

How to replicate the bottle collect mission case study

Here, more setup is necessary; for instance, the Lego NXT hardware is required. Then:

  1. Build the robot according to the building instructions file ( https://github.com/patham9/SpongeBot ) which can be opened with Lego Digital Designer ( https://www.lego.com/en-us/ldd ) and insert batteries.

  2. Attach a smartphone on the robot's smartphone mount point.

  3. Place an isolated bottle the robot can lift, and a few more bottles at a different location.

  4. Install Darknet and the YOLOv4 weights to /home/tc/Dateien/Visionchannel/AlexeyAB_darknet/darknet/

  5. Adjust the BlueSock address in line 40 of /OpenNARS-for-Applications/misc/Python/robot_collect_mission.py; it has to correspond to the Bluetooth device address of the NXT brick.

  6. Adjust line 85 of /OpenNARS-for-Applications/misc/Python/vision_to_narsese.py to use the IP camera of the smartphone attached to the robot, and start the IP camera.

  7. Run python3 robot_collect_mission.py in the folder /OpenNARS-for-Applications/misc/Python/

The robot should now perform the mission.

Best regards,
Patrick Hammer

Parser fails when a comment is at the end of the last line

Parser fails

<a --> b>. // Comment

Parses correctly

<a --> b>.
// Comment

After further investigation, the exact failure condition is not clear, as the following parses correctly:

<(coffee * juice) --> opposite>.
<([bad] * [good]) --> opposite>.
<juice --> [good]>.
//check for derivation
<[good] --> (opposite /2 coffee)>? //Answer: <[good] --> (opposite /2 coffee)>.
10
<coffee --> [bad]>?
// Comment

but the example below does not:

<a --> b>.
//First comment
<b --> c>. //Second comment
<a --> c>?
//Third comment
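As a workaround until the parser is fixed, trailing comments can be stripped before lines are fed to the parser. This is a hypothetical preprocessing helper, not ONA code; it assumes a double slash always starts a comment (Narsese itself only uses single slashes, as in /2 above):

```python
def strip_comment(line: str) -> str:
    """Drop a trailing '//' comment from one line of Narsese input."""
    idx = line.find("//")
    if idx == -1:
        # No comment marker: return the line unchanged (minus trailing spaces).
        return line.rstrip()
    return line[:idx].rstrip()
```

Applying it to each input line makes the four-line example above parse like its comment-free equivalent.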

Setting operation names for UDPNARS

I can't seem to perform *setopname properly when using UDPNARS. Even when it is sent immediately after startup, the system just complains that this can only be done at the beginning or after a reset, and shuts down. I also tried sending *reset just before *setopname, but it still didn't work. I've noticed that the Python examples also don't show sending such input; e.g. the toothbrush example just skips those commands (does it even work when op names are not set?).

Is there a way for me to set operation names with UDPNARS and if so, what is it?
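For reference, input reaches a UDPNAR instance as one UDP datagram per line; a minimal sender looks like the sketch below. The host, port, and op name are assumptions for illustration, and whether *setopname is accepted at that point is exactly the open question here:

```python
import socket

def send_to_udpnar(message: str, host: str = "127.0.0.1", port: int = 50000) -> None:
    """Send one line of input (Narsese or a command such as *setopname) as a UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(message.encode("utf-8"), (host, port))

# Hypothetical usage:
# send_to_udpnar("*setopname 1 ^left")
# send_to_udpnar("<a --> b>. :|:")
```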

Revision printing in v0.9.0

Revision results aren't printed in v0.9.0, and eternal inputs are sometimes printed twice instead of the revised version being printed.
No test cases are affected, since they don't depend on the printing of inputs, derivations, or revisions.
This has been fixed in master.

Allow conditioning on derived events

We have wanted this capability ever since we started implementing NAL-7.
It's hard to get right, as there are usually far more derived events than input events, so the control system needs to handle a much larger flood of constantly formed implications.
Now we have a first working implementation which does not compromise the current / other capabilities of ONA: #170
