ladder99 / ladder99
Ladder99 - connects factory devices to database and visualizations using MTConnect, an open standard.
Home Page: https://mriiot.com
License: Apache License 2.0
We need to clearly define our Github workflow (issues, projects, milestones, branches, versioning …).
This issue should be considered a meta-issue, and sub-issues should be created as needed for individual parts of this issue.
Related GitHub docs:
Questions to ask:
What is the lifecycle of a problem/task?
What will be our milestones? Release milestones? Feature milestones? Both?
How exactly we will use projects?
What versioning do we use? SemVer? Could we use the same version for all apps and libs?
# `version` value in `package.json` of each app (microservice)
services/meter/package.json : 0.1.0
services/recorder/package.json : 0.1.0
services/adapter/package.json : 0.10.1
services/relay/package.json : 0.10.1
services/simulator/package.json : 0.1.0
services/compiler/package.json : 0.1.0
Use GitHub Releases to publish our releases.
we're currently using a v2 ARM-only image - need cross-platform
I suggest using EditorConfig.
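For reference, a minimal `.editorconfig` might look like this (the specific values are suggestions, not decided):

```ini
# .editorconfig (sketch)
root = true

[*]
charset = utf-8
indent_style = space
indent_size = 2
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
```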
Many configuration files need to be touched to add/edit a device.
This process needs to be scripted to touch:
Enable mosquitto to listen on multiple protocols and on any available interface. This is important when containers share the host network and inter-container DNS resolution is not available.
Example config:
allow_anonymous true
message_size_limit 10485760
retain_available true
listener 1883 0.0.0.0
max_connections 100
listener 9001 0.0.0.0
protocol websockets
persistence true
persistence_location /mosquitto/data/
log_dest stdout
log_dest file /mosquitto/log/mosquitto.log
Since we have control over setup.yaml, the device.name in agent.xml should follow the agent.cfg device lookup.
Device - The name of the device that corresponds to the name of the device in the Devices file. Each adapter can map to one device. Specifying a "*" will map to the default device.
Default: The name of the block for this adapter or if that is not found the default device if only one device is specified in the devices file.
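As a sketch, the Adapters block in agent.cfg maps each adapter to a device by name (the block name, host, and port here are illustrative):

```
Adapters {
  Mazak1 {
    Host = 192.168.0.10
    Port = 7878
    Device = Mazak1   # must match a device name in devices.xml
  }
}
```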
Coordinates drivers, cache, agent connections - includes compiler, docs, publications
endian: little, default to most common
How to create an adapter for a scale?
bb: The example setup does watch one device in the mazak agent
ob: Yeah, it is in the config, however, when I open Grafana, it is blank (no machines). Maybe I missed some steps …
ladder99/models/ccs-pa/model.yaml
Line 20 in d09e819
Top-level data items are visible in the raw probe XML, but not in the XSLT-styled probe.
<DataItems>
<DataItem category="EVENT" id="ccs-connection" type="AVAILABILITY"/>
<DataItem category="EVENT" discrete="true" id="ccs-asset_changed" type="ASSET_CHANGED"/>
<DataItem category="EVENT" id="ccs-asset_removed" type="ASSET_REMOVED"/>
<DataItem category="CONDITION" id="ccs-dev_cond" type="SYSTEM"/>
<DataItem category="EVENT" id="ccs-dev_msg" type="MESSAGE"/>
<DataItem category="EVENT" id="ccs-fw_ver" subType="VERSION" type="FIRMWARE"/>
<DataItem category="EVENT" id="ccs-func_mode" type="FUNCTIONAL_MODE"/>
</DataItems>
Consider using Nest framework.
This needs to be discussed, as it is quite a big change in how we program l99 and related apps/libs.
how do we develop locally?
i.e. without using the console
e.g. debug in VS Code or another IDE?
I use Jest all over the place. I think we should use it, however, I don’t care about the tool used (Jest, Mocha, …), but we need to create robust tests.
We already have some tests, but they are quite messy and not thorough.
As for the L99_HOME variable, it looks like it's hardly used, so we could remove it eventually - I had been in the midst of translating from './l99 foo' usage to 'l99 foo' -
remove L99_SETUP var
Where can I find what the L99_SETUP env var is used for? What should its value be? In shell/install/cli, it is only initialised, but no value is defined.
I think that's not actually used anymore - it stores the current setup in a .l99_setup file instead. Adding to list.
Handle all representations and shdr outputs
I believe that using commitlint
is a nice tool to create short, descriptive commit messages. Moreover, we could later create (release) changelogs from them, once we learn to follow the rules.
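A typical commitlint setup extends the Conventional Commits preset; a minimal commitlint.config.js would be:

```javascript
// commitlint.config.js - enforce Conventional Commits
// (e.g. "feat(relay): add autoprune scheduler")
module.exports = {
  extends: ['@commitlint/config-conventional'],
}
```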
These are the possible codes for state:
0: run mode / execution active
1: ready to run / execution ready
2: not used
3: faulted / execution interrupted
And the fault and warning message would each be the same as the "90-series" display message number.
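A minimal sketch of mapping these panel state codes to MTConnect Execution values on the adapter side (the function name and the choice for the unused code are assumptions):

```javascript
// Map the panel 'state' code to an MTConnect Execution value.
// Code 2 is unused per the list above, so it maps to UNAVAILABLE here (an assumption).
const EXECUTION_BY_STATE = {
  0: 'ACTIVE',       // run mode / execution active
  1: 'READY',        // ready to run / execution ready
  2: 'UNAVAILABLE',  // not used
  3: 'INTERRUPTED',  // faulted / execution interrupted
}

function executionFor(stateCode) {
  return EXECUTION_BY_STATE[stateCode] ?? 'UNAVAILABLE'
}
```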
The list of messages that are triggered by the ELTP panel processor to be displayed by the alphanumeric HMI.
eltp.txt
(renamed from eltp.msg
to eltp.txt
, as GitHub does not allow uploading unknown file extensions) can be opened with the editor program PCED40
or viewed somewhat with a text editor. It contains the list of messages.
So, for example if the processor is triggering message “FAULT REFLECTED LIGHT L1” then it will show 120.
incl clickup tasks
We might want to add something like workshift management or availability management or something like that.
By workshift management I mean that the user (company/client) could define default workshifts used weekly (like a weekly time schedule), including the break times. Then they could define workshift overrides, like company/plant-wide holidays, cancelled shifts, overtimes, additional shifts, changed shift start/end times, … This might be useful if we want to calculate the exact availability.
Another related thing might be maintenance (time) management, where the user could define the planned downtime during which a particular machine or the entire plant will be under maintenance. This, however, might be moved into a separate issue if we want it.
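To make the idea concrete, a hypothetical shape for such a schedule (all field names here are invented for illustration):

```javascript
// Hypothetical workshift schedule: a weekly default plus dated overrides.
const schedule = {
  weekly: [
    // dayOfWeek: 1 = Monday
    { dayOfWeek: 1, start: '06:00', end: '14:00', breaks: [{ start: '10:00', end: '10:30' }] },
    { dayOfWeek: 2, start: '06:00', end: '14:00', breaks: [{ start: '10:00', end: '10:30' }] },
  ],
  overrides: [
    { date: '2022-12-24', type: 'holiday' },                                // plant-wide holiday
    { date: '2022-11-05', type: 'overtime', start: '06:00', end: '18:00' }, // extended shift
  ],
}

// Availability calculations would use the override for a date when present,
// falling back to the weekly default; holidays yield no shift at all.
function shiftFor(date, dayOfWeek) {
  const override = schedule.overrides.find(o => o.date === date)
  if (override) return override.type === 'holiday' ? null : override
  return schedule.weekly.find(s => s.dayOfWeek === dayOfWeek) ?? null
}
```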
As for Docker containers: do all ports (of all containers) need to be open to the host? Ports that are used only by other containers should not be bound to the host, IMO. That said, ATM I can't say if there are some ports open which are used only within Docker; however, it seems to me there are some ports bound that are not used (I might be wrong though).
Yes, this could/should be improved.
I am not a security guru, but I try to leave open as little as needed/possible. Everything that is needed only internally, should not be exposed to the world.
Only Grafana, Agent, and Dozzle should have ports exposed outside of the Docker network.
see also #206
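As a compose-file sketch (the service names and port numbers here are illustrative, not the actual stack), only the outward-facing services would use `ports:`, while internal-only services rely on the Docker network:

```yaml
services:
  grafana:
    ports:
      - "80:3000"      # bound to the host - reachable from outside
  agent:
    ports:
      - "5000:5000"
  postgres:
    expose:
      - "5432"         # reachable only by other containers on the same network
```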
The Adapter container (and any other container) must not depend on any OS-level installed libraries.
Line 216 in 041e370
You seem to have multiple packages in a single repository, but the repository structure is a bit confusing. Below is a suggested structure:
apps/: services/ seem to be separate apps, therefore I suggest renaming that folder to apps/, probably using Nx; shell/ is just another app (or a collection of apps?), thus it (they all?) should be moved into separate folder(s) within apps/;
docs/: design/ seems to be part of documentation, therefore it should be moved (or merged) into docs/ (see #215);
libs/:
example/: setups/ should be part of the codebase, maybe we should move it to docs/, and the example/ folder might not be needed at all;
volumes/: not sure what volumes/ is used for, however, it seems to be just some Grafana config, thus consider moving it into the app (in apps/) that needs it.
If we want to use Nx, it has a distinction between apps and libraries. AFAIK, most of the apps (like drivers) we have are actually libraries, therefore those would reside in the libs/ folder under Nx. Other apps (or microservices) would live under apps/.
I really like the Nx monorepo. I understand that this change is quite fundamental and @bburns would need to learn a new tool (actually, a few of them: ESLint, Nx, potentially Nest); however, I really think it would ease our work in the long term. Currently, Brian does all the hard work, but why should we build something that has already been created for us? Moreover, this tooling helps us generate boilerplate and helps with testing, structuring, and validating the code, building CRUD methods, APIs, etc. Currently, it is a bit of a mess (structurally and code-wise too). On the other hand, Brian seems to do well, which means he's an experienced programmer. 😉
Probably a low priority item here but figured I'd submit it to get the concept logged.
I think it would make sense to have a <Description>
element for the Host element. Attributes could be something like this:
<Device id="host" name="Host" uuid="someUUID">
<Description manufacturer="MrIIoT" model="model" serialNumber="L99-{WAN MAC}" ></Description>
</Device>
This seems a little more elegant than including the serial in the "sender", which I think I suggested previously, though maybe that's appropriate too, not sure.
We should create some NPM scripts (in package.json) for some stuff like:
build, release, lint, lint:check, prettier, prettier:check, start, start:dev
Make 9000 the default external port.
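A sketch of what such scripts could look like in package.json (the commands themselves are illustrative, not decided):

```json
{
  "scripts": {
    "build": "node scripts/build.js",
    "lint": "eslint --fix .",
    "lint:check": "eslint .",
    "prettier": "prettier --write .",
    "prettier:check": "prettier --check .",
    "start": "node src/index.js",
    "start:dev": "nodemon src/index.js"
  }
}
```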
Compile setup.yaml and model yamls to devices.xml, agent.cfg, pipeline-overrides.yaml etc
Develop an intermediate model to assist in auto-generation of the YAML model for the Adapter. The model will be fanuc-driver.Collector dependent, meaning that data harvested at the root, path, and axis/spindle levels will be known ahead of time. However, the number of axes, spindles, and paths will not be known ahead of time.
I would expect all values from /evt/query to be evaluated and sent to Agent on startup. For example, printer_start_print_duration remains UNAVAILABLE on startup.
ladder99/models/ccs-pa/inputs.yaml
Line 91 in d09e819
Inter-container communication is not available when containers share host network.
Underlying issue preventing docker bridge networking might have something to do with how br-lan interface is used within OpenWRT.
ladder99/setups/ccs-pa/setup.yaml
Line 25 in 041e370
ladder99/setups/ccs-pa/setup.yaml
Line 33 in 041e370
After configuring the setup.yaml and adding relay aliases, the relay's logs roll through several items (some warnings on missing aliases for devices, and finally all the endpoints, with Autoprune start job scheduler for { hour: 0, minute: 0, dayOfWeek: 0, tz:'America/Chicago' } Autoprune scheduled for 2022-10-02T05:00:00.000Z). It finally chokes on an error that looks like the following:
db signal uncaughtException received - shutting down...
TypeError: Cannot read properties of undefined (reading 'Header')
at Probe.read (file:///usr/app/src/data.js:46:38)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Probe.read (file:///usr/app/src/dataProbe.js:18:5)
at async AgentReader.start (file:///usr/app/src/agentReader.js:48:7)
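The trace points at reading `.Header` from an undefined object, which suggests the parsed probe response lacked the expected MTConnectDevices root (e.g. an error page or empty body from the Agent). A defensive sketch (the function name and parse shape here are assumptions, not the relay's actual code):

```javascript
// Guard against a probe response that isn't the expected MTConnectDevices document.
function readProbeHeader(parsed) {
  const header = parsed?.MTConnectDevices?.Header
  if (!header) {
    throw new Error('Unexpected probe response - check the Agent URL and that the Agent is reachable')
  }
  return header
}
```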
Create generic Fanuc model based on inputs from https://github.com/Ladder99/fanuc-driver.
A Fanuc CNC controller can contain multiple paths. As each path is iterated, some functions are path dependent. Meaning that each path can contain its own unique set of data (e.g. parameters, diagnostics, etc). Paths then break down into axes and spindles. Each execution path of a Fanuc controller operates its own set of axes and spindles. It depends on the controller model and its implementation. Each path may contain a X, Y, and Z axis. In the MTConnect taxonomy this is represented as Controller.Path1.Axes.Linear.X, Controller.Path2.Axes.Linear.X. The number of paths and axes is unknown ahead of time. It is unrealistic to expect individual models to be created for every machine. Fanuc-driver can output an intermediate representation of an individual machine implementation. This data can be helpful in auto-generating a YAML model on the fly.
Initially, a simple three axis model with a single spindle will be created and mapped to MTConnect output as the basis for further development.
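A hypothetical shape for that intermediate representation (the structure is invented for illustration; the real fanuc-driver output may differ):

```yaml
# sketch: per-machine intermediate representation emitted by the driver,
# from which Controller.Path1.Axes.Linear.X etc. could be generated
machine:
  type: fanuc
  paths:
    - id: 1
      axes: [X, Y, Z]
      spindles: [S]
    - id: 2
      axes: [X, Z]
      spindles: [S]
```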
We are missing a LICENSE.md file.
In most (6 of 8) apps under services/, we have license in package.json set to Apache 2.0; however, there is no global license file in the repo root.
eg --offline doesn't even work when run with agent -
#3 [internal] load metadata for docker.io/library/ubuntu:22.10
#3 ERROR: failed to do request: Head "https://registry-1.docker.io/v2/library/ubuntu/manifests/22.10": dial tcp: lookup registry-1.docker.io on 192.168.65.5:53: no such host
------
> [internal] load metadata for docker.io/library/ubuntu:22.10:
------
failed to solve: ubuntu:22.10: failed to do request: Head "https://registry-1.docker.io/v2/library/ubuntu/manifests/22.10": dial tcp: lookup registry-1.docker.io on 192.168.65.5:53: no such host
ESLint is used to define some style rules, and it also checks for some common code mistakes. ESLint statically analyses the code to quickly find problems.
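A minimal starting point (the rule choices are placeholders to be agreed on):

```json
{
  "root": true,
  "env": { "node": true, "es2022": true },
  "extends": ["eslint:recommended"],
  "parserOptions": { "ecmaVersion": "latest", "sourceType": "module" }
}
```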
If setup.yaml calls out multiple Agents, does Relay block on startup if one of the Agents is not reachable?
ladder99/services/relay/src/endpoint.js
Lines 35 to 66 in 8877156
As seen in Agent output NORMAL (msg here).
ladder99/services/relay/Dockerfile
Lines 30 to 32 in 5f6fe8a
ladder99/services/relay/Dockerfile
Lines 62 to 64 in d9d2b67
could be
COPY package.json .
within the context of WORKDIR.
ladder99/services/relay/Dockerfile
Lines 40 to 49 in d9d2b67
could be combined into a single RUN command to reduce layers.
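For example (the package names are illustrative), consecutive RUN steps can be merged so the image has one layer instead of several:

```dockerfile
# One RUN, one layer; clean the apt cache in the same layer so it isn't baked in
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && rm -rf /var/lib/apt/lists/*
```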
availability = active / available time
performance (efficiency) = rate/optimal rate [get every minute]
quality = good/total parts
oee = availability * performance * quality
For first pass simplicity we could have a fixed optimal rate and available times, eg specify them in setup.yaml?
cm: Yes, 200 ppm, and 8-5
use bins for calculations?
extend to performance and quality?
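The formulas above as a sketch in code, using cm's fixed values (200 ppm optimal rate, an 8-5 available window) as illustrative inputs:

```javascript
// OEE from the formulas above. Inputs must be in consistent units
// (minutes for times, parts/minute for rates).
function oee({ activeMinutes, availableMinutes, rate, optimalRate, goodParts, totalParts }) {
  const availability = activeMinutes / availableMinutes
  const performance = rate / optimalRate
  const quality = goodParts / totalParts
  return { availability, performance, quality, oee: availability * performance * quality }
}

// Example: 8-5 window = 540 minutes available, machine active 432 of them,
// running 150 ppm against the 200 ppm optimal, 950 good parts of 1000 total.
const r = oee({
  activeMinutes: 432, availableMinutes: 540,  // availability = 0.80
  rate: 150, optimalRate: 200,                // performance  = 0.75
  goodParts: 950, totalParts: 1000,           // quality      = 0.95
})
// r.oee ≈ 0.57
```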
MTC EVENT MESSAGE does not parse into MTC Agent model.
TCP sending string 2021-07-09T13:11:53.428Z|ccs-dev_msg|13: printer service req ...
2021-07-09T13:38:35.443169Z | Message | | | ccs-dev_msg | 46 | UNAVAILABLE
Is this related to the Agent's ShdrVersion?
ShdrVersion - Specifies the SHDR protocol version used by the adapter. When greater than one (1), allows multiple complex observations, like Condition and Message, on the same line. If it equals one (1), then any observation requiring more than a key/value pair needs to be on separate lines. This is the default for all adapters.
Default: 1
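If so, enabling the multi-observation behavior would be a one-line change in agent.cfg (worth verifying against the Agent version in use):

```
ShdrVersion = 2
```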
I can't figure out why the Agent doesn't output SHDR in debug log. I thought this was a feature before.
logger_config
{
logging_level = debug
output = cout
}
Hi there! First off, this is a really neat project and I sincerely appreciate you all putting it out there into the world. It seems like this is being very actively developed and I’m eager to see it progress.
I was just acquainting myself a bit with the code and architecture and ran across this comment:
Coincidentally, I read about this exact case earlier today in a MTConnect PowerPoint presentation that I’m now struggling to find. Anyways: Apparently the correct move here is to key Condition off of ID + native code. You can have more than one condition value for an ID, but only one for an ID + native code.
Say there are 3 conditions present at a machine tool:
logic_cond warning code=1
logic_cond warning code=2
logic_cond fault code=3
All 3 should be broadcast via SHDR. If the fault resolves, you broadcast:
logic_cond normal code=3
but the warnings for code 1/2 remain. When all warning conditions resolve, you can broadcast normal without reference to a native code.
At least that was my understanding as of earlier today! Please feel free to close this issue/manage it how you will, but I thought I’d share in case it was helpful and because the coincidence was just too delightful.
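A sketch of that bookkeeping, keyed by data item ID plus native code (the class and method names are invented for illustration, not the project's actual API):

```javascript
// Track active conditions keyed by `${dataItemId}:${nativeCode}`, so an ID can
// carry several conditions at once, but only one per ID + native code.
class ConditionStore {
  constructor() {
    this.active = new Map()
  }
  // e.g. report('logic_cond', 'warning', 1); 'normal' with a code clears that code only
  report(id, level, nativeCode) {
    const key = `${id}:${nativeCode}`
    if (level === 'normal') this.active.delete(key)
    else this.active.set(key, level)
  }
  // 'normal' with no native code clears every condition for that id
  clearAll(id) {
    for (const key of this.active.keys()) {
      if (key.startsWith(`${id}:`)) this.active.delete(key)
    }
  }
  count(id) {
    let n = 0
    for (const key of this.active.keys()) if (key.startsWith(`${id}:`)) n++
    return n
  }
}
```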
finish testing build2 script, replace the build script
the command is 'l99 build ' - see https://github.com/Ladder99/ladder99/blob/main/shell/commands/build. that approach builds all images at once though - the next approach i wanted to try was to loop over them as in https://github.com/Ladder99/ladder99/blob/main/shell/commands/build2 - that way if one of the platforms fails, you haven't wasted all that time trying to build the others.
ob: l99 status is docker ps and it lists all running containers, even those that are not related to l99. IMO it should output only the state of l99 containers. It should output at least whether l99 is running or not, ideally whether each container is running or not (even if it is not running), optionally in JSON format.
bb: Yeah, I had included non-ladder99 containers as there were things like fanuc_driver which are currently run separately, but yeah maybe better to filter those out. Adding to list.
cm: L99 status should be limited to containers in the l99 stack. I would not worry about fanuc-driver. We can make it part of the stack eventually.
ob: Why would fanuc-driver be a non-l99 container? IMO if fanuc-driver is used by l99, why wouldn't it start with --project ladder99 in docker-compose up? Shouldn't it be set up during l99 setup (l99 start)?
We already have some issue labels (mostly GitHub defaults) defined; however, we could do better.
Also, we could improve our usage of labels. @bburns currently uses the first part of issue titles as labels, which is not the best practice. We should remove the 'labels' from issue titles and try to create issue titles that could eventually be used as commit messages (i.e. they should be short and descriptive); however, that is a separate issue.
Just a sidenote: on GitHub, I miss scoped labels that I love on GitLab (which I prefer over GitHub).
We should use labels to define:
- bug: #d73a4a;
- enhancement: #a2eeef;
- dependencies: #4b4b4c; it should not be assigned when a new feature is added that requires a dependency to be added or updated;
- application:$appName: #5b0e6f; appName: adapter, compiler, meter, recorder, relay, simulator;
- library:$libName: #e99695; libName: …;
- infrastructure: #86504a; l99 …, not directly related to the code (apps/libs) itself;
- tests: #1b263d; it should not be assigned when a new feature is added (in that case the tests addition should be done implicitly);
- documentation: #0075ca; docs/ …; it should not be assigned when a new feature is added (in that case the docs modification should be done implicitly);
- localization: #f11add;
- component:$componentName: #c5def5; componentName: …;
- duplicate: #cfd3d7;
- good first issue: #7057ff;
- wontfix: …;
- help wanted: #008672;
- invalid: #e4e669.
Label assignment rules:
- application:$appName and library:$libName should be mutually exclusive labels. Whenever a particular issue is related to a particular app/library, use of such a label is required.
TODO and questions:
- Besides application:$appName and library:$libName, do we need some other scoped labels? For example, component:$name, where $name could be a driver type for adapter. It could be a module name (in Nest/Angular terminology; e.g. a monitoring feature). (Currently, the possible names of the component:$name are adapter, agent, relay, shdr.) However, the question is whether we need such in-depth labelling for issues. IMHO it is too much for now, therefore I suggest removing the component:$name labels.
- … (e.g. a bug label is added, an enhancement label removed); however, that is not in the scope of this issue (at least right now).
- Some libraries (e.g. for adapter) are part of the app (microservices). Currently, I am not sure if those libraries (or plugins, as @bburns calls them) should be considered as separate parts, or if they should later be moved out of the app into a separate library (like into the libs/ folder) or not.
- Consider use case and question labels; enhancement?