ladder99 / ladder99

Ladder99 - connects factory devices to database and visualizations using MTConnect, an open standard.

Home Page: https://mriiot.com

License: Apache License 2.0

JavaScript 86.54% Shell 3.95% Dockerfile 3.62% TSQL 1.50% PLpgSQL 3.92% GAP 0.47%
iiot mtconnect

ladder99's Introduction

Ladder99


Ladder99 is a free and open source software pipeline that transfers data from factory devices to a database and end-user visualizations using MTConnect, an open standard.

MTConnect standardizes factory device data flow and vocabulary - it was started by UC Berkeley, Georgia Institute of Technology, and Sun Microsystems in 2008, and continues under active development.

Ladder99 is developed by MRIIOT, your agile digital transformation partners.

screenshot

Quick Start

Install Docker, then in a terminal (or use Git Bash for Windows),

shell/install/cli
source ~/.bashrc
l99 start

Then go to http://localhost:5000/current for the MTConnect Agent and http://localhost for the Grafana dashboard.

Folders

  • docs - website walkthrough and documentation
  • services - source code for different sections of the pipeline - adapter, relay, etc.
  • setups - configuration settings
  • shell - shell scripts
  • volumes - data for the services

Adapter plugins are defined in services/adapter/src/drivers.

Links

For the Ladder99 documentation, see https://docs.ladder99.com.

For more on MTConnect, see https://www.mtconnect.org.

For more on MRIIOT and what we offer, see https://mriiot.com.

License

Open source Apache 2.0

ladder99's People

Contributors

bburns, dependabot[bot], deudymriiot, mriiot, ottobolyos, tukusejssirs


ladder99's Issues

Adapter: Handling multiple condition values

Hi there! First off, this is a really neat project and I sincerely appreciate you all putting it out there into the world. It seems like this is being very actively developed and I’m eager to see it progress.

I was just acquainting myself a bit with the code and architecture and ran across this comment:

https://github.com/Ladder99/ladder99-ce/blob/9a583d7c255a1c2edb7c2472ab9352056b6fb733/services/adapter/src/cache.js#L144

Coincidentally, I read about this exact case earlier today in an MTConnect PowerPoint presentation that I’m now struggling to find. Anyway: apparently the correct move here is to key Condition off of ID + native code. You can have more than one condition value for an ID, but only one for an ID + native code.

Say there are 3 conditions present at a machine tool:

logic_cond warning code=1
logic_cond warning code=2
logic_cond fault code=3

All 3 should be broadcast via SHDR. If the fault resolves, you broadcast:

logic_cond normal code=3

but the warnings for code 1/2 remain. When all warning conditions resolve, you can broadcast normal without reference to a native code.

At least that was my understanding as of earlier today! Please feel free to close this issue/manage it how you will, but I thought I’d share in case it was helpful and because the coincidence was just too delightful.
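The keying rule described above can be sketched as a small cache keyed by ID + native code. This is a hypothetical illustration, not Ladder99's actual cache.js API:

```javascript
// Hypothetical condition cache keyed by id + native code.
// A "normal" with no native code clears every condition for that id;
// a "normal" with a native code clears only that one.
class ConditionCache {
  constructor() {
    this.conditions = new Map() // key `${id}|${nativeCode}` -> level
  }
  set(id, level, nativeCode = '') {
    if (level === 'normal') {
      if (nativeCode === '') {
        for (const key of [...this.conditions.keys()]) {
          if (key.startsWith(id + '|')) this.conditions.delete(key)
        }
      } else {
        this.conditions.delete(id + '|' + nativeCode)
      }
    } else {
      this.conditions.set(id + '|' + nativeCode, level)
    }
  }
  active(id) {
    return [...this.conditions.keys()].filter(k => k.startsWith(id + '|'))
  }
}

// the scenario above: two warnings and a fault, then the fault resolves
const cache = new ConditionCache()
cache.set('logic_cond', 'warning', '1')
cache.set('logic_cond', 'warning', '2')
cache.set('logic_cond', 'fault', '3')
cache.set('logic_cond', 'normal', '3') // warnings 1 and 2 remain
```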

Adapter

Coordinates drivers, cache, agent connections - includes compiler, docs, publications

Calculate and display OEE and other shift metrics

availability = active / available time
performance (efficiency) = rate/optimal rate [get every minute]
quality = good/total parts

oee = availability * performance * quality

For first pass simplicity we could have a fixed optimal rate and available times, eg specify them in setup.yaml?

cm: Yes , 200ppm, and 8-5

use bins for calculations?
extend to performance and quality?
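The formulas above can be sketched directly. The fixed 200 ppm optimal rate and 8-5 shift come from the comment above; everything else here is illustrative:

```javascript
// OEE = availability * performance * quality, per the definitions above.
function oee({ activeMinutes, availableMinutes, ratePpm, optimalRatePpm, goodParts, totalParts }) {
  const availability = activeMinutes / availableMinutes
  const performance = ratePpm / optimalRatePpm // sampled every minute
  const quality = goodParts / totalParts
  return { availability, performance, quality, oee: availability * performance * quality }
}

// e.g. an 8-5 shift is 540 available minutes, optimal rate 200 ppm
const metrics = oee({
  activeMinutes: 432,
  availableMinutes: 540,
  ratePpm: 180,
  optimalRatePpm: 200,
  goodParts: 950,
  totalParts: 1000,
}) // availability 0.8, performance 0.9, quality 0.95, oee ~0.684
```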

Modify mosquitto configs

Enable mosquitto to listen on multiple protocols and any available interface. This is important when containers share the host network and inter-container DNS resolution is not available.

  • tcp 1883 listen on 0.0.0.0
  • websocket 9001 listen on 0.0.0.0

example config.

allow_anonymous true
message_size_limit 10485760
retain_available true
listener 1883 0.0.0.0
max_connections 100
listener 9001 0.0.0.0
protocol websockets
persistence true
persistence_location /mosquitto/data/
log_dest stdout
log_dest file /mosquitto/log/mosquitto.log

Cannot read properties of undefined (reading 'Header')

After configuring setup.yaml and adding relay aliases, the relay's logs roll through several items (some warning about missing aliases for devices), and finally all the endpoints log Autoprune start job scheduler for { hour: 0, minute: 0, dayOfWeek: 0, tz: 'America/Chicago' } and Autoprune scheduled for 2022-10-02T05:00:00.000Z. It finally chokes on an error that looks like the following:

db signal uncaughtException received - shutting down...
TypeError: Cannot read properties of undefined (reading 'Header')
    at Probe.read (file:///usr/app/src/data.js:46:38)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Probe.read (file:///usr/app/src/dataProbe.js:18:5)
    at async AgentReader.start (file:///usr/app/src/agentReader.js:48:7)
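Assuming the crash comes from the agent returning a document without the expected root element (e.g. an MTConnectError, or an empty response), a defensive read could skip and retry instead of throwing. This is a sketch only, not the actual data.js code:

```javascript
// Guarded header lookup: returns null instead of throwing when the
// agent response has no MTConnectDevices/MTConnectStreams root.
function readHeader(jsTree) {
  const header =
    jsTree?.MTConnectDevices?.Header ?? jsTree?.MTConnectStreams?.Header
  if (!header) {
    // agent may have returned MTConnectError or a partial document
    return null
  }
  return header
}
```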

Add Sencon state messages

These are the possible codes for state:

  • 0: run mode/execution active;
  • 1: ready to run/execution ready;
  • 2: not used;
  • 3: faulted/execution interrupted.

And the fault and warning message would each be the same as the “90-series” display message number.

The list of messages that are triggered by the ELTP panel processor to be displayed by the alphanumeric HMI.

eltp.txt (renamed from eltp.msg to eltp.txt, as GitHub does not allow uploading unknown file extensions) can be opened with the editor program PCED40 or viewed somewhat with a text editor. It contains the list of messages.

So, for example if the processor is triggering message “FAULT REFLECTED LIGHT L1” then it will show 120.
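The four state codes above could be mapped to MTConnect Execution values along these lines. The chosen target values are assumptions, not confirmed mappings:

```javascript
// Assumed mapping of Sencon state codes to MTConnect Execution values.
const senconExecution = {
  0: 'ACTIVE',      // run mode / execution active
  1: 'READY',       // ready to run / execution ready
  2: 'UNAVAILABLE', // not used
  3: 'INTERRUPTED', // faulted / execution interrupted
}

function toExecution(stateCode) {
  return senconExecution[stateCode] ?? 'UNAVAILABLE'
}
```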

image

Agent - top level DataItems not visible in styled probe

Top-level data items are visible in the raw probe XML, but not in the XSLT-styled probe.

<DataItems>
        <DataItem category="EVENT" id="ccs-connection" type="AVAILABILITY"/>
        <DataItem category="EVENT" discrete="true" id="ccs-asset_changed" type="ASSET_CHANGED"/>
        <DataItem category="EVENT" id="ccs-asset_removed" type="ASSET_REMOVED"/>
        <DataItem category="CONDITION" id="ccs-dev_cond" type="SYSTEM"/>
        <DataItem category="EVENT" id="ccs-dev_msg" type="MESSAGE"/>
        <DataItem category="EVENT" id="ccs-fw_ver" subType="VERSION" type="FIRMWARE"/>
        <DataItem category="EVENT" id="ccs-func_mode" type="FUNCTIONAL_MODE"/>
      </DataItems>

Filter to just `l99` services in `l99 status` command

ob: l99 status is docker ps and it lists all running containers, even those that are not related to l99. IMO it should output only the state of l99 containers. It should output at least whether l99 is running or not, ideally whether each container is running or not (even if it is not running), optionally in JSON format.
bb: Yeah, I had included non-ladder99 containers as there were things like fanuc_driver which are currently run separately, but yeah maybe better to filter those out. Adding to list.
cm: L99 status should be limited to containers in the l99 stack. I would not worry about fanuc-driver. We can make it part of the stack eventually.
ob: Why fanuc-driver would be a non-l99 container? IMO if fanuc-driver is used by l99, why wouldn’t it start with --project ladder99 in docker-compose up? Shouldn’t it be set up during l99 setup (l99 start)?

Improve tests

I use Jest all over the place and I think we should use it. However, I don’t care about the tool used (Jest, Mocha, …), as long as we create robust tests.

We already have some tests, but they are quite messy and not thorough.

Automate deployment/editing of devices.

Many configuration files need to be touched to add/edit a device.

This process needs to be scripted to touch:

  • setup.yaml
    • add/edit agent endpoint
  • agent.xml
    • add/edit device model
    • make host's name as sender
  • agent.cfg
    • add/edit device
    • modify adapter endpoint
  • extension: fanuc-driver artifacts
    • merge device model into agent.xml
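As a sketch of one scripted step, a generator for the agent.cfg adapter entry could look like this. The block layout follows the usual cppagent Adapters convention, and all names and values here are illustrative:

```javascript
// Hypothetical generator for an agent.cfg Adapters entry.
function adapterBlock({ device, host, port }) {
  return [
    'Adapters {',
    `  ${device} {`,
    `    Host = ${host}`,
    `    Port = ${port}`,
    '  }',
    '}',
  ].join('\n')
}

// e.g. adapterBlock({ device: 'Mill-1', host: 'adapter', port: 7878 })
```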

Cache

Handle all representations and shdr outputs

Docker inter-container communication and host network on multi interface hardware

Inter-container communication is not available when containers share host network.

Underlying issue preventing docker bridge networking might have something to do with how br-lan interface is used within OpenWRT.

url: mqtt://broker:1883

host: adapter

Add workshift management

We might want to add something like workshift management or availability management or something like that.

By workshift management I mean that the user (company/client) could define default workshifts used weekly (like a weekly time schedule), including the break times. Then they could define workshift overrides, like company/plant-wide holidays, cancelled shifts, overtimes, additional shifts, changed shift start/end times, … This might be useful if we want to calculate the exact availability.

Another related thing might be maintenance (time) management, where the user could define the planned downtime during which a particular machine or entire plant will be under maintenance. This, however, might be moved into separate issue if we want it.
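One possible shape for such a schedule in setup.yaml; every key name here is hypothetical, just to make the idea concrete:

```yaml
# hypothetical workshift section - all key names are illustrative
workshifts:
  default:
    - days: [mon, tue, wed, thu, fri]
      start: '08:00'
      end: '17:00'
      breaks:
        - { start: '12:00', end: '12:30' }
  overrides:
    - date: 2022-12-26
      type: holiday            # cancels the shift entirely
    - date: 2022-11-05
      type: overtime
      start: '08:00'
      end: '20:00'
```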

Consistency in identifying a device across configuration files.

Since we have control over setup.yaml, the device.name in agent.xml should follow the agent.cfg device lookup.

Device - The name of the device that corresponds to the name of the device in the Devices file. Each adapter can map to one device. Specifying a "*" will map to the default device.

Default: The name of the block for this adapter or if that is not found the default device if only one device is specified in the devices file.

image

Revise the repository structure

You seem to have multiple packages in a single repository, but the repository structure is a bit confusing. Below is a suggested structure:

  • apps/:
    • services/ seems to contain separate apps, therefore I suggest renaming that folder to apps/, probably using Nx;
    • all apps (and microservices) should go in here;
    • shell/ is just another app (or a collection of apps?), thus it (they all?) should be moved into separate folder(s) within apps/:
      • consider moving from BASH to JS/TS; consider using Commander.js;
  • docs/:
    • design/ seems to be part of documentation, therefore it should be moved (or merged) into docs/ (see #215);
  • libs/:
    • all libraries should go in here;
  • example/:
    • I am not sure if setups/ should be part of the codebase; maybe we should move it to docs/, and then there might be no need for an example/ folder at all;
    • I think we should remove/move this folder;
  • volumes/:
    • I am not sure what is volumes/ used for, however, it seems to be just some Grafana config, thus consider moving it in that app (in apps/) that needs this.
    • I think we should remove/move this folder;

If we want to use Nx, it has a distinction between apps and libraries. AFAIK, most of the apps (like drivers) we have, are libraries actually, therefore those would reside in libs/ folder under Nx. Other apps (or microservices) would live under apps/.

I really like the Nx monorepo. I understand that this change is quite fundamental and @bburns would need to learn a new tool (actually, a few of them: ESLint, Nx, potentially Nest), however, I really think it would ease our work in the long term. Currently, Brian does all the hard work, but why should we build something that is already created for us? Moreover, this tooling helps us generate some boilerplate, and helps with testing, structuring, validating the code, building CRUD methods, APIs, etc. Currently, it is a bit of a mess (structurally and code-wise too). On the other hand, Brian seems to do well, which means he’s an experienced programmer. 😉

Remove `L99_HOME` and `L99_SETUP` env vars

As for the L99_HOME variable, it looks like it's hardly used, so we could remove it eventually. I had been in the midst of translating from './l99 foo' usage to 'l99 foo'.

remove L99_SETUP var
Where can I find what the L99_SETUP env var is used for? What should be its value? In shell/install/cli, it is only initialised, but no value is defined.
I think that's not actually used anymore - it stores the current setup in a .l99_setup file instead. Adding to list.

Models - Fanuc CNC intermediate model

#5

Develop an intermediate model to assist in auto-generation of a YAML model for the Adapter. The model will be fanuc-driver.Collector dependent, meaning that data harvested at the root, path, and axis/spindle levels will be known ahead of time. However, the number of axes, spindles, and paths will not be known ahead of time.

See Ladder99/fanuc-driver#12.

EVENT MESSAGE not populating model

MTC EVENT MESSAGE does not parse into MTC Agent model.

TCP sending string 2021-07-09T13:11:53.428Z|ccs-dev_msg|13: printer service req ...

2021-07-09T13:38:35.443169Z | Message |   |   | ccs-dev_msg | 46 | UNAVAILABLE

Is this related to the Agent's ShdrVersion?

ShdrVersion - Specifies the SHDR protocol version used by the adapter. When greater than one (1), it allows multiple complex observations, like Condition and Message, on the same line. If it equals one (1), then any observation requiring more than a key/value pair needs to be on a separate line. This is the default for all adapters.

Default: 1

I can't figure out why the Agent doesn't output SHDR in the debug log. I thought this was a feature before.

logger_config
{
    logging_level = debug
    output = cout
}

Add some scripts to `package.json`

We should create some npm scripts (in package.json) for things like:

  • build,
  • release,
  • lint,
  • lint:check,
  • prettier,
  • prettier:check,
  • start,
  • start:dev,
  • ….
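A possible shape for the scripts section; the commands are placeholders to be adapted to whatever tooling we settle on:

```json
{
  "scripts": {
    "build": "nx build",
    "release": "nx release",
    "lint": "eslint --fix .",
    "lint:check": "eslint .",
    "prettier": "prettier --write .",
    "prettier:check": "prettier --check .",
    "start": "node src/index.js",
    "start:dev": "nodemon src/index.js"
  }
}
```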

Handle offline mode better in `l99 start` command

e.g. --offline doesn't even work when run with the agent:

#3 [internal] load metadata for docker.io/library/ubuntu:22.10
#3 ERROR: failed to do request: Head "https://registry-1.docker.io/v2/library/ubuntu/manifests/22.10": dial tcp: lookup registry-1.docker.io on 192.168.65.5:53: no such host
------
 > [internal] load metadata for docker.io/library/ubuntu:22.10:
------
failed to solve: ubuntu:22.10: failed to do request: Head "https://registry-1.docker.io/v2/library/ubuntu/manifests/22.10": dial tcp: lookup registry-1.docker.io on 192.168.65.5:53: no such host

Use `commitlint`

I believe commitlint is a nice tool for creating short, descriptive commit messages. Moreover, we could later create (release) changelogs from them, once we learn to follow the rules.

Add <Description> for Host device

Probably a low priority item here but figured I'd submit it to get the concept logged.

I think it would make sense to have a <Description> element for the Host element. Attributes could be something like this:

<Device id="host" name="Host" uuid="someUUID">
    <Description manufacturer="MrIIoT" model="model" serialNumber="L99-{WAN MAC}" ></Description>
</Device>

This seems a little more elegant than including the serial in the "sender", which I think I suggested previously, though maybe that's appropriate too, not sure.

Add a license file

We are missing LICENSE.md file.

In most (6 of 8) apps under services/, we have in package.json license set to Apache 2.0, however, there is no global license file in the repo root.

Does Relay startup block if one Agent is not reachable?

If setup.yaml calls out multiple Agents, does Relay block on startup if one of the Agents is not reachable?

async fetchJsTree(type, from, count) {
  const url = this.getUrl(type, from, count)
  let jsTree
  do {
    try {
      const response = await fetch(url)
      const xml = await response.text() // try to parse response as xml
      jsTree = convert.xml2js(xml, convertOptions) // try to convert to js tree
      // make sure we have a valid agent response
      if (
        !jsTree.MTConnectDevices &&
        !jsTree.MTConnectStreams &&
        !jsTree.MTConnectError
      ) {
        throw new Error(
          'Invalid agent response - should have MTConnectDevices, MTConnectStreams, or MTConnectError.'
        )
      }
    } catch (error) {
      // error.code could be ENOTFOUND, ECONNREFUSED, EHOSTUNREACH, etc
      // console.error('Relay error', error)
      console.log(`Relay error reading`, url)
      console.log(`Relay error -`, error.message)
      console.log(`Relay jsTree:`)
      console.dir(jsTree, { depth: 3 })
      console.log(`Relay error - retrying in ${waitTryAgain} ms...`)
      await lib.sleep(waitTryAgain)
      jsTree = null // so loop again
    }
  } while (!jsTree)
  return jsTree
}
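Since fetchJsTree retries forever in its do/while loop, awaiting the agents one after another would indeed block on the first unreachable one. One way to avoid that is to start each reader concurrently; this is only a sketch, with makeReader standing in for however AgentReader instances are actually constructed:

```javascript
// Start one reader per agent concurrently, so a single unreachable
// agent can't block the others from starting.
async function startAll(agents, makeReader) {
  const results = await Promise.allSettled(
    agents.map(agent => makeReader(agent).start())
  )
  for (const [i, result] of results.entries()) {
    if (result.status === 'rejected') {
      console.log(`Relay - agent ${agents[i].name} failed:`, result.reason.message)
    }
  }
}
```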

Use ESLint

ESLint is used to define some style rules, as well as it checks some common code mistakes. ESLint statically analyses the code to quickly find problems.

Define issue label list

We already have some issue labels (mostly GitHub defaults) defined, however, we could do better.

Also we could improve our usage of labels. @bburns currently uses the first part of issue titles, which is not the best practice. We should remove the ‘labels’ from issue titles and try to create such issue titles that could be eventually used as commit messages (i.e. they should be short and descriptive), however, that is a separate issue.

Just a sidenote: on GitHub, I miss scoped labels that I love on GitLab (which I prefer over GitHub).

We should use labels to define:

  • [required: exactly one] a top-level issue category labels:
    • bug:
      • colour: #d73a4a;
      • description: Something isn’t working;
      • rules: whenever an issue is a report of a bug anywhere in the code (apps, libs, scripts) or bug (incorrect explanation or other major mistake) in the app documentation (in docs/);
    • enhancement:
      • colour: #a2eeef;
      • description: New feature or enhancement of existing feature;
      • rules: whenever an issue is dealing with a new feature or enhancement of an existing feature, including feature requests; improvements to the documentation should also be labelled with this label;
    • dependencies:
      • colour: #4b4b4c;
      • description: Pull requests that update a dependency file;
      • rules: whenever we update the dependencies of l99; it should not be assigned when a new feature is added that requires a dependency to be added or updated;
  • [required unless library label is used or issue is not related to any particular app] application name:
    • application:$appName:
      • colour: #5b0e6f;
      • description: Anything related to a specific application;
      • rules: whenever the issue is related to a specific application whose name is part of the label after the colon;
      • current possible values of appName:
        • adapter;
        • compiler;
        • meter;
        • recorder;
        • relay;
        • simulator;
  • [required unless application label is used or issue is not related to any particular library] library name:
    • library:$libName:
      • colour: #e99695;
      • description: Anything related to a specific library;
      • rules: whenever the issue is related to a specific library whose name is part of the label after the colon;
      • currently, there is no possible value of libName;
  • [required unless issue is not related to infrastructure/tooling] infrastructure:
    • infrastructure:
      • colour: #86504a;
      • description: Anything related to the surrounding tools and infrastructure;
      • rules: whenever we update, improve or add some tools (e.g. Docker, Nx, Nest, ESLint, Prettier, etc) used by l99 not directly related to the code (apps/libs) itself; it should not be assigned when a new feature is added (in that case the tooling changes should be done implicitly);
  • [optional: at most one] ‘additional’ labels:
    • documentation:
      • colour: #0075ca;
      • description: Improvements or additions to documentation;
      • rules: whenever an issue modifies the documentation in docs/; it should not be assigned when a new feature is added (in that case the docs modification should be done implicitly);
    • tests:
      • colour: #1b263d;
      • description: Test improvements or additions;
      • rules: whenever we update, improve or add some tests; it should not be assigned when a new feature is added (in that case the tests addition should be done implicitly);
    • localization:
      • colour: #f11add;
      • description: Anything related to localization, internationalization and locales;
      • rules: whenever we update, improve or add some stuff regarding l10n, i18n and locales; it should not be assigned when a new feature is added (in that case the localization addition/changes should be done implicitly if they are required);
  • [optional] other labels used as remarks (‘optional’ labels):
    • component:$componentName:
      • colour: #c5def5;
      • description: Anything related to a specific component;
      • rules: whenever the issue is related to a specific component whose name is part of the label after the colon;
      • TODO: current possible values of componentName;
    • duplicate:
      • colour: #cfd3d7;
      • description: This issue or pull request already exists;
      • rules: whenever an issue deals with the same or similar stuff as another issue;
    • good first issue:
      • colour: #7057ff;
      • description: Good for newcomers;
    • wontfix:
      • description: This will not be worked on;
    • help wanted:
      • colour: #008672;
      • description: Extra attention is needed;
    • invalid:
      • colour: #e4e669;
      • description: This doesn't seem right;

Label assignment rules:

  1. Question issues should be converted to GitHub Discussions (there is a button for that in the issues). Likewise, when a discussion is really an issue, we should convert it to an issue.
  2. Each issue should have exactly one top-level issue category label.
  3. application:$appName and library:$libName should be mutually exclusive labels. Whenever a particular issue is related to particular app/library, use of such label is required.
  4. We can use at most one ‘additional’ label.
  5. Whenever we add/remove/rename an app/library, we should reflect this in the application:$appName and library:$libName labels.

TODO and questions:

  • Are label colours, descriptions and rules okay with you all?
  • Besides application:$appName and library:$libName, do we need some other scoped labels? For example, component:$name where $name could be a driver type for adapter. It could be a module name (in Nest/Angular terminology; e.g. a monitoring feature). (Currently, possible names of the component:$name are adapter, agent, relay, shdr.) However, the question is whether we need such in-depth labelling for issues. IMHO it is too much for now, therefore I suggest to remove the component:$name labels.
  • We could use GitHub Actions to force some rules (like when bug label is added, enhancement label removed), however, that is not in the scope of this issue (at least right now).
  • As for libraries, I haven’t added a label for them, as currently, the libraries (e.g. drivers in adapter) are part of the app (microservices). Currently, I am not sure if those libraries (or plugins, as @bburns calls them) should be considered as separate parts, or if they should be later moved out of the app into separate library (like into libs/ folder) or not.
  • I removed use case and question labels;
  • Should we create separate labels for new features and enhancements of existing features instead of enhancement?

adapter - make modbus driver

  • handle uint16 and uint32 values
  • handle different endiannesses with a property, e.g. endian: little, defaulting to the most common
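A sketch of the register decoding such a driver would need. Modbus holding registers are 16-bit words, so uint32 values span two registers and the word order varies by device, hence the endian property suggested above. Function names here are illustrative:

```javascript
// Decode uint16/uint32 values from an array of 16-bit modbus registers.
function readUint16(registers, index) {
  return registers[index] & 0xffff
}

function readUint32(registers, index, endian = 'big') {
  const [hi, lo] =
    endian === 'big'
      ? [registers[index], registers[index + 1]] // high word first
      : [registers[index + 1], registers[index]] // low word first
  return (hi * 0x10000 + lo) >>> 0
}
```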

Models - create single-path, three-axis Fanuc CNC model

Create generic Fanuc model based on inputs from https://github.com/Ladder99/fanuc-driver.

A Fanuc CNC controller can contain multiple paths. As each path is iterated, some functions are path dependent. Meaning that each path can contain its own unique set of data (e.g. parameters, diagnostics, etc). Paths then break down into axes and spindles. Each execution path of a Fanuc controller operates its own set of axes and spindles. It depends on the controller model and its implementation. Each path may contain a X, Y, and Z axis. In the MTConnect taxonomy this is represented as Controller.Path1.Axes.Linear.X, Controller.Path2.Axes.Linear.X. The number of paths and axes is unknown ahead of time. It is unrealistic to expect individual models to be created for every machine. Fanuc-driver can output an intermediate representation of an individual machine implementation. This data can be helpful in auto-generating a YAML model on the fly.

Initially, a simple three axis model with a single spindle will be created and mapped to MTConnect output as the basis for further development.

Compiler

Compile setup.yaml and model yamls to devices.xml, agent.cfg, pipeline-overrides.yaml etc

Improve `adapter`, `meter` and `relay` dockerfiles

WORKDIR /usr/app
ENV NODE_ENV production
COPY package.json /usr/app/

COPY --chown=node:node package.json /usr/app/package.json
COPY --chown=node:node --from=build /usr/app/node_modules /usr/app/node_modules
COPY --chown=node:node src /usr/app/src

could be

COPY package.json .

within the context of WORKDIR.

RUN npm install --package-lock-only --only=production
# fix vulnerabilities where able and update package-lock.json.
# normally audit fix runs a full install under the hood, but we don't want that yet.
RUN npm audit fix --package-lock-only --only=production
# install deterministically from package-lock.json to node_modules.
# audit=false means don't submit packages to registry.
# see https://docs.npmjs.com/cli/v8/commands/npm-ci
RUN npm clean-install --audit=false --only=production

could be combined into a single RUN command to reduce layers.
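For example, the three RUN steps above could become one layer (a sketch; flags kept as in the original):

```dockerfile
RUN npm install --package-lock-only --only=production \
 && npm audit fix --package-lock-only --only=production \
 && npm clean-install --audit=false --only=production
```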

Keep open only required ports in Docker containers

As for Docker containers: Do all ports (of all containers) need to be open to the host? Ports that are used only by other containers should not be bound to the host IMO. That said, ATM I can’t say if there are some ports open which are used only within Docker, however, it seems to me there are some ports bound that are not used (I might be wrong though).
Yes, this could/should be improved.
I am not a security guru, but I try to leave open as little as needed/possible. Everything that is needed only internally, should not be exposed to the world.
Grafana, Agent and Dozzle should only have ports exposed outside of docker network.

see also #206
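In docker-compose terms, the distinction is between expose (reachable only on the docker network) and ports (published to the host). A sketch with illustrative service names and port numbers:

```yaml
services:
  broker:
    expose:
      - '1883'        # other containers only, not bound to the host
  grafana:
    ports:
      - '80:3000'     # published to the host
  agent:
    ports:
      - '5000:5000'   # published to the host
```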

Define a GitHub workflow

We need to clearly define our Github workflow (issues, projects, milestones, branches, versioning …).

This issue should be considered a meta-issue, and sub-issues should be created when needed for individual parts of this issue.

Related GitHub docs:

Questions to ask:

  • What is the lifecycle of a problem/task?

    1. Create an issue where we describe everything related, and let everyone comment on it. Use task lists in the issue description if the issue is a bit larger, however, strive to make one issue a single problem. In case an issue is a meta-issue, make the tasks in the task list link to other issues where we should discuss the details about a particular task.
    2. Don’t forget to assign labels, milestones, projects, assignees.
    3. Make a decision on how to implement it. The decision could change multiple times during implementation and the issue description should change accordingly.
    4. Implement it in a fork, ideally in a feature branch.
    5. Create a pull request.
    6. GitHub Actions should run some checks and tests.
    7. Let others review our changes, test the code, comment on the code in the PR.
    8. Let others approve your change if it is looking good.
    9. Merge it to the default branch.
  • What will be our milestones? Release milestones? Feature milestones? Both?

  • How exactly we will use projects?

  • What versioning do we use? SemVer? Could we use the same version for all apps and libs?

    • Currently, each microservice seems to be versioned separately:
    # `version` value in `package.json` of each app (microservice)
    services/meter/package.json     : 0.1.0
    services/recorder/package.json  : 0.1.0
    services/adapter/package.json   : 0.10.1
    services/relay/package.json     : 0.10.1
    services/simulator/package.json : 0.1.0
    services/compiler/package.json  : 0.1.0
  • Use GitHub Releases to publish our releases.
