This project forked from particle-iot/spark-server


An API compatible open source server for interacting with devices speaking the spark-protocol

Home Page: https://www.particle.io/

License: GNU Affero General Public License v3.0


spark-server's Introduction

spark-server

These instructions have been modified for the Brewskey clone of the local cloud :)

An API compatible open source server for interacting with devices speaking the spark-protocol

We support

  • OTA Updates - user application as well as system updates
  • Product API - fleet management by grouping devices.
  • Firmware compilation
   __  __            __                 __        __                ____
  / /_/ /_  ___     / /___  _________ _/ /  _____/ /___  __  ______/ / /
 / __/ __ \/ _ \   / / __ \/ ___/ __ `/ /  / ___/ / __ \/ / / / __  / /
/ /_/ / / /  __/  / / /_/ / /__/ /_/ / /  / /__/ / /_/ / /_/ / /_/ /_/  
\__/_/ /_/\___/  /_/\____/\___/\__,_/_/   \___/_/\____/\__,_/\__,_(_)   

Quick Install

git clone https://github.com/Brewskey/spark-server.git
cd spark-server/

In order to download the firmware files for OTA updates, you'll need to create a .env file in the server root.

You'll need to create an OAuth token under your GitHub settings with the public_repo permission.

The .env file needs the following:

GITHUB_AUTH_TOKEN=<github-token>

Then run from the CLI:

npm install

If the install didn't fetch everything, you may need to run npx update-firmware a few times to download all the binaries. GitHub limits you to 5,000 API requests per hour, so you may need to wait an hour and run it again.

At this point we will set up the server. You should change the default username and password in ./dist/settings.js.

The Babel build step pre-processes everything in src/ so that modern Node syntax can be used on older versions of Node. The code that actually runs lives in dist/. If you change anything in src/, you'll need to rerun npm run build for the changes to take effect.
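
For example, a typical edit-and-deploy cycle looks like this (a minimal sketch using only the commands described above):

npm run build        # transpile src/ into dist/
npm run start:prod   # run the transpiled server from dist/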

Raspberry Pi Quick Install

How do I get started?

  1. Run the server. With Babel (useful for local development):
npm start

For production (uses the transpiled files from Babel):

npm run start:prod
  2. Watch for your IP address; you'll see something like:
Your server IP address is: 192.168.1.10
  3. We will now create a new server profile on Particle-CLI using the command:
particle config profile_name apiUrl  "http://DOMAIN_OR_IP"

For the local cloud, the port number 8080 needs to be appended: http://domain_or_ip:8080. It is important to include the http://, otherwise it won't work.

This creates a new profile pointing at your server. Switching back to the Spark cloud is simply particle config particle, and switching to any other profile is particle config profile_name (see the example below).
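
As a concrete sketch, using the example IP address from step 2 above and a hypothetical profile name local:

particle config local apiUrl "http://192.168.1.10:8080"   # create the profile
particle config local                                      # switch the CLI to it
particle config particle                                   # switch back to the Particle cloud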

  4. We will now point over to the local cloud using
particle config profile_name
  5. In a separate terminal (CMD) from the one running the server, type
particle login --username __admin__ --password adminPassword

The default username is __admin__ and password is adminPassword.

This will create an account on the local cloud and write a config file in the %userprofile%/.particle folder.

Press CTRL + C once you are logged in and the Particle-CLI starts asking you to send Wi-Fi credentials, etc.

  6. Put your core into listening mode, and run
particle identify

to get your core ID. You'll need this ID later.

  7. The next steps will generate a bunch of keys for your device. I recommend mkdir ..\temp and cd ..\temp.

  8. Put your device in DFU mode.

  9. Change the server keys to the local cloud key and IP address:

particle keys server ..\spark-server\data\default_key.pub.pem --host IP_ADDRESS --protocol tcp

Note: You can go back to using the Particle cloud by downloading the public key here. You'll need to run particle config particle, particle keys server cloud_public.der, and particle keys doctor your_core_id while your device is in DFU mode.

  10. Create and provision access on your local cloud with the keys doctor:
   particle keys doctor your_core_id

Note: For Electrons, and probably all newer hardware, you need to run these commands. There is either a bug in the CLI, or Particle always expects these newer devices to use UDP.

Put your device in DFU mode and then:

particle keys new test_key --protocol tcp
particle keys load test_key.der
particle keys send XXXXXXXXXXXXXXXXXXXXXXXX test_key.pub.pem

At this point you should be able to run normal cloud commands and flash binaries. You can add any webhooks you need, call functions, or get variable values.
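
For example, once the device shows up you can exercise it with the usual CLI commands (a rough sketch; the device ID, binary, function, and variable names below are placeholders):

particle list
particle flash YOUR_DEVICE_ID firmware.bin
particle call YOUR_DEVICE_ID myFunction
particle get YOUR_DEVICE_ID myVariable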

Configuring Spark Server

In most cases you'll want to connect to a MongoDB instance as it's going to be a lot more performant than using the built-in NeDB implementation. You may also want to change the default admin password or have your server use SSL certificates.

This can all be accomplished by creating a settings.json file in the root of the project.

  1. cd /my-root-where-i-have-the-repo
  2. cp settings.example.json settings.json
  3. Edit the JSON with any of the keys that are set in settings.js
  4. Run npm install if you changed any of the binary directories (I'd do it anyway, just to be safe)
  5. npm run start:prod
  6. You should see the JSON changes you made logged in the console, and the server should run with your changes.
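
As an illustration, a minimal settings.json override could look like the snippet below. The coreRequestTimeout setting is the one mentioned later in this document; any other keys (database connection, SSL certificates, admin credentials) must use the exact names defined in settings.js, so treat this as a sketch rather than a reference:

{
  "coreRequestTimeout": 9000
}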

Electron Support

Yes, this supports the Electron, but only over TCP. TCP will drastically increase the amount of data used, so watch out.

In order to set up your Electron to work on the server you need to run the following while the device is in DFU-mode:

particle keys server default_key.pub.pem IP_ADDRESS 5683 --protocol tcp
// Sometimes you need to run the next line as well
particle keys protocol tcp

What's different from the original spark-server?

The way the system stores data has changed.

On first run the server creates the following directories for storing data about your local cloud:

  • data/
      The cloud keys default_key.pem and default_key.pub.pem go directly in here. Previously these keys lived in the main directory.
  • data/deviceKeys/
      Device keys (.pub.pem) and information (.json) for each device live in here. Previously these were found in core_keys/
  • data/users/
      User account data (.json) for each user lives in here. Previously stored in users/
  • data/knownApps/
      ???
  • data/webhooks/
      ???

What kind of project is this?

This is a refactored clone of the original spark-server because this is what the awesome guys at Particle.io said:

We're open sourcing our core Spark-protocol code to help you build awesome stuff with your Spark Core, and with your other projects. We're also open sourcing an example server with the same easy to use Rest API as the Spark Cloud, so your local apps can easily run against both the Spark Cloud, and your local cloud. We're really excited about this, and we have tried to build an open, stable, and powerful platform, and hand it over to you, the community.

This project is our way of making sure you can change and see into every aspect of your Spark Core, yay! We'll keep improving and adding features, and we hope you'll want to join in too!

What features are currently present

This feature list is not up to date with respect to the Brewskey clone - there's much more!

The spark-server module aims to provide an HTTP REST interface that is API-compatible with the main Spark Cloud. Ideally, any programs you write to run against the Spark Cloud should also work on the Local Cloud. Some features aren't here yet, but may be coming down the road; right now the endpoints exposed are:

Claim Core

POST /v1/devices

Release Core

DELETE /v1/devices/:coreid

Provision Core and save Core's keys.

POST /v1/provisioning/:coreID

List devices

GET /v1/devices

Call function

POST /v1/devices/:coreid/:func

Get variable

GET /v1/devices/:coreid/:var

Get Core attributes

GET /v1/devices/:coreid

Set Core attributes (and flash a core)

PUT /v1/devices/:coreid

Get all Events

 GET /v1/events
 GET /v1/events/:event_name

Get all my Events

 GET /v1/devices/events
 GET /v1/devices/events/:event_name

Get all my Core's Events

 GET /v1/devices/:coreid/events
 GET /v1/devices/:coreid/events/:event_name

Publish an event

POST /v1/devices/events
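
As a quick sanity check, the endpoints above can be exercised with plain HTTP once you have an access token (for example, from the config file that particle login writes). The examples below assume the standard Particle-style access_token query parameter; the host, port, token, device ID, and variable name are placeholders:

curl "http://DOMAIN_OR_IP:8080/v1/devices?access_token=YOUR_ACCESS_TOKEN"
curl "http://DOMAIN_OR_IP:8080/v1/devices/YOUR_CORE_ID/myVariable?access_token=YOUR_ACCESS_TOKEN"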

What API features are missing

  • the build IDE is not part of this release, but may be released separately later
  • massive enterprise magic super horizontal scaling powers

Known Limitations

We worked hard to make our cloud services scalable and awesome, but that also presents a burden for new users. This release was designed to be easy to use, to understand, and to maintain, and not to discourage anyone who is new to running a server. This means some of the fancy stuff isn't here yet, but don't despair, we'll keep improving this project, and we hope you'll use it to build awesome things.

spark-server's People

Contributors

antonpuko, dmiddlecamp, durielz, flaz83, haeferer, jlkalberer, kennethlimcp, lbt, straccio, suhajdab


spark-server's Issues

Move `~/src/data` to `~/data`

I've thought this over again and I really do not like that the data we are creating is being saved inside the src folder. I also think it will cause confusion as more people start using this repo.

Don't override `_id` in MongoDB

The _id is an implementation detail of MongoDB. Since we are overriding it for devices, I am running into issues with my test script.

I think it's fine to use _id for webhooks/users but for devices the _id should be handled by Mongo. The device ID comes from the device itself and isn't a trustworthy source for generating an ID. In my case, I'm getting errors when I send the public key + device ID.

Add api/build_targets URL

When doing a local compile you can now select which version to compile against.

::ffff:192.168.0.250 - - [29/Mar/2017:02:33:02 +0000] "GET /v1/build_targets?featured=true HTTP/1.1" 404 9 "-" "node-superagent/2.3.0"

I'm not sure what the actual data looks like but you should be able to call the public cloud to see it.

OTA not working

So we've been doing OTA updates pretty routinely over the past few weeks, and everything has been working great. Today, OTA stopped working (it times out) and I don't see a reason why that would have happened overnight. We haven't pulled the latest update for the db yet. I've restarted the server, which didn't help. I also tried flashing an older version of the firmware, but that did not work either.

Here's a snippet from the error logs

2017-05-21 21:32 +00:00: (node:23436) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 4): Error: Request timed out
2017-05-21 21:34 +00:00: This client has an exclusive lock { cache_key: '_3',
  deviceID: '25001c001547343231323438',
  messageName: undefined }
2017-05-21 21:34 +00:00: (node:23436) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 6): Error: Request timed out
2017-05-21 21:36 +00:00: This client has an exclusive lock { cache_key: '_5',
  deviceID: '25001c001547343231323438',
  messageName: undefined }
2017-05-21 21:36 +00:00: (node:23436) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 8): Error: Request timed out
2017-05-21 21:48 +00:00: This client has an exclusive lock { cache_key: '_2',
  deviceID: '25001c001547343231323438',
  messageName: undefined }
2017-05-21 21:48 +00:00: You have triggered an unhandledRejection, you may have forgotten to catch a Promise rejection:
Error: Request timed out
    at Timeout._onTimeout (/root/node_modules/spark-server/node_modules/spark-protocol/dist/clients/Device.js:494:28)
    at ontimeout (timers.js:365:14)
    at tryOnTimeout (timers.js:237:5)
    at Timer.listOnTimeout (timers.js:207:5)
2017-05-21 21:51 +00:00: (node:1340) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 3): Error: Request timed out
2017-05-21 21:54 +00:00: (node:1340) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 4): Error: Request timed out
2017-05-21 21:56 +00:00: (node:1340) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 5): Error: Request timed out
2017-05-21 22:02 +00:00: This client has an exclusive lock { cache_key: '_3',
  deviceID: '25001c001547343231323438',
  messageName: 'PingAck' }
2017-05-21 22:02 +00:00: This client has an exclusive lock.
2017-05-21 22:03 +00:00: (node:1340) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 6): Error: Request timed out
2017-05-21 22:05 +00:00: (node:1340) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 7): Error: Request timed out
2017-05-21 22:07 +00:00: (node:1340) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 8): Error: Request timed out
2017-05-21 22:09 +00:00: This client has an exclusive lock { cache_key: '_10',
  deviceID: '25001c001547343231323438',
  messageName: undefined }
2017-05-21 22:09 +00:00: (node:1340) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 10): Error: Request timed out

Transfer device ownership administratively

To ease automation for some use-cases, it would be useful to be able to take ownership of a device as an "administrator" without requiring the original owner to release it first.

How to install on a Raspberry Pi 2 or another platform

Hi,

I have a project at my company, and we need to use a local cloud for Particle Photons, because we need to be able to exchange messages between some Photons at all times.

None of the how-tos on the internet for installing on a Raspberry Pi (2) work for Spark Server.

I need a simple but functional how-to for installing on a Raspberry Pi 2 or on any other (recommended) platform.
I also need to know whether just the Spark Server needs to be installed, or Spark Protocol (Particle) as well.
Some errors I have encountered are about deprecated Node functions. None of the guides state the exact Node version required for the current Spark Server.

Thanks
Adio

Webhooks

Implement webhooks to match the Particle spec.

Make sure that if I add/remove a webhook from the CLI, it removes the subscription from the event listeners.
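
For reference, the CLI side of this looks roughly like the following, where hook.json is a webhook definition such as the ones shown elsewhere in this document and WEBHOOK_ID is a placeholder:

particle webhook create hook.json
particle webhook list
particle webhook delete WEBHOOK_ID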

particle subscribe fail

particle subscribe fails when we're trying to subscribe to event names containing /.
We get a 404 error from the API.

particle subscribe foo/bar sends a request to http://localhost:8080/foo/bar but the controller only handles http://localhost:8080/:eventPrefix

Server hangs if URL for new webhook does not respond

If a webhook is created and the selected URL times out, the server hangs completely with the following error:

uncaughtException { message: 'connect ECONNREFUSED 67.161.118.40:3000', stack: 'Error: connect ECONNREFUSED 67.161.118.40:3000\n at Object.exports._errnoException (util.js:1022:11)\n at exports._exceptionWithHostPort (util.js:1045:20)\n at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1087:14)' } [nodemon] app crashed - waiting for file changes before starting...

Startup Parameters

When you call npm start you should be able to pass in --verbose.

There should be a single setting (there are many right now) that gets set to true in order to log a lot more data. We should read this parameter for both spark-protocol and spark-server. I know there are some checks in SparkCore.js that have if (shouldLog) that could use this.

Use DI to easily allow swapping out repositories/services?

@AntonPuko @wdimmit @durielz

I'm thinking of using an IoC framework in order to easily swap out the different repositories used here. This change will cascade down into the protocol layer.

I think the change will make swapping out certain implementations much easier but I want some feedback before I start as this is a super opinionated change.

An example of why we should do this is:
Right now we have concrete implementations that save device attributes and server keys to local file storage. Yes, you can change the path via settings, but what if you want to fetch these via HTTP, or use some storage API (Azure, or whatever the AWS storage is)?

By moving over to a framework like this we would be able to swap out implementations by adding a single line and the configuration would cascade down.

Here is one I'm looking at because it's fairly simple and light-weight.

socket timeout error

If my device disconnects from the network I get:
Caught exception: TypeError: Cannot read property 'message' of undefined TypeError: Cannot read property 'message' of undefined at Socket.<anonymous> (/node_modules/spark-protocol/lib/clients/Device.js:175:66)

If I change line 134 of client/Device.js in the protocol from

error: Error): void => this.disconnect(`socket timeout ${error.message}`),

to

error: Error): void => this.disconnect(`socket timeout ${error}`),

then I get the correct disconnect message.

https

Hi, I'm trying to set up spark server over a secure connection. I tried changing the port from 8080 to 443 in the settings.js file but that seems to have broken everything - tried sending commands to https://ipaddress:443 (+various permutations) but no luck. I think I've set up the SSL certificates properly but am not completely sure.

How to share /data between different servers?

I've set up a load balancer with two spark-servers attached. Commands to the devices are being routed through the load balancer. I'm running into a problem where the device itself is connected to server1, but external requests are being split between server1 and server2. If the request goes into server2, it thinks the device is not connected.

I was thinking that a solution would be have server2 point to server1's data directory - is there any way to do this?

EDIT:
Thinking about this some more... I don't think simply sharing the data folder would work, right? I was playing around with the clustering options here https://github.com/Unitech/pm2 and was able to get multiple workers going. But the same problem: if I do a GET /v1/devices, the device will only report back as connected if the GET request was sent to the same fork that the device is really 'connected' to. Is the app not stateless?

Webhook sending incorrect payload

We have a webhook that looks like this:

{
    "event": "power",
    "url": "https://ariocloud.azurewebsites.net/v1/integrations/particle",
    "requestType": "POST",
    "query": {
      "apikey": "notarealkey"
    },
    "noDefaults": false,
    "rejectUnauthorized": true,
  "headers": {
    "Content-Type":"application/x-www-form-urlencoded"
    }
}

Payload:
"0,btn"
The server is getting a payload like:
{"0,btn": ""}

Question

Would you say this would work well for a production product?
What would be a reasonable quantity of interactions from devices that this could handle on a modern computer?

building error

You guys are doing a great job!

I was trying to build using npm run build && node ./build/main.js and I got this error:

.../node_modules/spark-protocol/lib/server/DeviceServer_v2.js:485 var _ref5 = _asyncToGenerator(regeneratorRuntime.mark(function _callee6(eventName, data, deviceID) { ReferenceError: regeneratorRuntime is not defined

Finish writing tests for all endpoints

I don't think that we've covered them all. We need to finish some in DeviceController as well.

I also wonder if we should be using a real mocking framework instead of hacking together some mocked objects?
We could write better tests with the mocked objects, like checking whether functions have been called, or getting rid of the serialized functions -- we can make repositories return fake data.

Express Clustering

Use clustering to improve speed. We will want to do this at the HTTP server and allow it through settings on the COAP server (spark-protocol). We'll need a singleton way of passing events so you'll need to update EventPublisher (or whatever it is) to send data between the forks.

This should be really easy to implement on spark-server since we're stateless - https://github.com/Flipboard/express-cluster

get a variable value

Hi again. When I try to get a variable value by requesting /v1/devices/:deviceID/:varName/ the response is:

{ "result": "[object Object]" }

Error: Cannot find module '../../third-party/specifications'

Is the dev branch currently broken? I just wanted to give the server a quick try after reading about it on community.particle.io, but I'm getting the following error message (see below). Am I doing something wrong?

npm start

> [email protected] start /Users/Test/particle.io/spark-server
> nodemon --exec babel-node ./src/main.js --watch src --watch ../spark-protocol/dist --ignore data

[nodemon] 1.11.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: /Users/Test/particle.io/spark-server/src/**/* ../spark-protocol/dist
[nodemon] starting `babel-node ./src/main.js`
module.js:472
    throw err;
    ^

Error: Cannot find module '../../third-party/specifications'
    at Function.Module._resolveFilename (module.js:470:15)
    at Function.Module._load (module.js:418:25)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/Users/Test/particle.io/spark-server/node_modules/spark-protocol/dist/lib/FirmwareManager.js:67:23)
    at Module._compile (module.js:571:32)
    at Module._extensions..js (module.js:580:10)
    at Object.require.extensions.(anonymous function) [as .js] (/Users/Test/particle.io/spark-server/node_modules/babel-register/lib/node.js:152:7)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/Users/Test/particle.io/spark-server/node_modules/spark-protocol/dist/server/DeviceServer.js:63:24)
    at Module._compile (module.js:571:32)
    at Module._extensions..js (module.js:580:10)
    at Object.require.extensions.(anonymous function) [as .js] (/Users/Test/particle.io/spark-server/node_modules/babel-register/lib/node.js:152:7)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/Users/Test/particle.io/spark-server/node_modules/spark-protocol/dist/index.js:24:21)
    at Module._compile (module.js:571:32)
    at Module._extensions..js (module.js:580:10)
    at Object.require.extensions.(anonymous function) [as .js] (/Users/Test/particle.io/spark-server/node_modules/babel-register/lib/node.js:152:7)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/Users/Test/particle.io/spark-server/src/defaultBindings.js:5:1)
    at Module._compile (module.js:571:32)
    at loader (/Users/Test/particle.io/spark-server/node_modules/babel-register/lib/node.js:144:5)
    at Object.require.extensions.(anonymous function) [as .js] (/Users/Test/particle.io/spark-server/node_modules/babel-register/lib/node.js:154:7)
[nodemon] app crashed - waiting for file changes before starting...

Update Express (and other packages)

I think updating express will make things much cleaner and allow us to use better loggers.

It might be a pain in the ass to do it but we can look at this branch to see what changes they had to make to add it - straccio@dfe7984

Removing device after _ensureWeHaveIntrospectionData error delays device from registering offline

If I disconnect a device within a few seconds of seeing _ensureWeHaveIntrospectionData error: Error: can't get describe data, the server will take several minutes to register the device as offline. In the interim, requests to read variables from the device error out with a 400, but only after 15 seconds. This is with the "coreRequestTimeout" setting set to 9000 (changing it to 9000 from the default 30000 made no difference in this case).

(Devices have 0.6.0 FW)

Reading value of variable fails after some number of requests

If I repeat a get request to /v1/devices/{deviceid}/{varname} every 1 second, the server stops returning the variable's value and starts returning {"error":"\"value\" argument is out of bounds","ok":false} instead in under 5 minutes.

This error only impacts the device that was being polled, not any other devices on the server. Leaving the device idle for several minutes does not fix the issue. In current testing, the issue remains until the spark-server process is restarted.

I have not noticed any related events in the server console yet.

Please let me know if I can provide any additional information or testing to help diagnose this.

webhook exception: Error: write after end

event:

event: sensors
data: {"coreid":"20003b000d47343432313031","data":"{\"t1\":0.000000, \"t2\":0.000000, \"t3\":0.000000, \"t4\":0.000000, \"flow1\":0.000000, \"lux\":19.095816  }","published_at":"2017-01-14T11:10:09.470Z","ttl":60}

webhook:

{
  "event": "sensors",
  "url": "https://api.pushover.net/1/messages.json",
  "requestType": "POST",
  "form": {
    "user": "XXX",
    "token": "YYY",
    "title": "TEST",
    "message": "{{ SPARK_CORE_ID }} lux: {{ lux }}"
  },
  "ownerID": "b20fd7ddd647fcb84d0f859c803e01e2",
  "created_at": "2017-01-14T08:41:01.106Z",
  "id": "df1bd069-d87b-4209-8294-8e9246d74a0a"
}

It also happens with plain data, not only with the JSON format.

about events

Hi guys, I was playing with events and I would like to share with you how I think they should work:

  1. GET /v1/events should return all public events (from all devices) and private events from devices owned by me
  2. GET /v1/devices/events should return all public and private events from my devices only
  3. GET /v1/devices/:deviceId/events should return all public and private events for a specific device if the device is owned by me otherwise only the public events

Is that the way you are going to implement them?

In your version I have:

  1. GET /v1/events returns all public and private events for all devices
  2. GET /v1/devices/events doesn't return special private events like status / flash
  3. GET /v1/devices/:deviceId/events returns only events from devices I own

Device connection problems: "pkcs decoding error" and "data too large for modulus"

I've gotten myself a simple Docker image up and am trying to run this clone of spark-server inside it.

It's running, but I get the errors pasted below (note there are 2 devices here; I think one is a Core and one is a Photon - I'll isolate them later):

I have been running these devices quite happily using this single tweak to the old spark-server : https://github.com/lbt/spark-server/commits/lbt-working-master

So, from an identical copy of that working setup (with the same core_keys/ and users/ files), I essentially checked out and ran the new code.
It starts with:

$ docker run -p 8080:8080 -p 5683:5683 spark-server 
Your device server IP address is: 172.17.0.2
Server started on port: 5683
express server started on port 8080

but then:

Connection from: ::ffff:10.0.0.102 - Connection ID: 189
Handshake failed:  error:0407109F:rsa routines:RSA_padding_check_PKCS1_type_2:pkcs decoding error { cache_key: '_189', deviceID: null, ip: '::ffff:10.0.0.102' }
1 : Device disconnected: Error: error:0407109F:rsa routines:RSA_padding_check_PKCS1_type_2:pkcs decoding error { cache_key: '_189', deviceID: '', duration: undefined }
Device startup failed: error:0407109F:rsa routines:RSA_padding_check_PKCS1_type_2:pkcs decoding error
Connection from: ::ffff:10.0.0.125 - Connection ID: 190
Handshake failed:  error:04065084:rsa routines:RSA_EAY_PRIVATE_DECRYPT:data too large for modulus { cache_key: '_190', deviceID: null, ip: '::ffff:10.0.0.125' }
1 : Device disconnected: Error: error:04065084:rsa routines:RSA_EAY_PRIVATE_DECRYPT:data too large for modulus { cache_key: '_190', deviceID: '', duration: undefined }
Device startup failed: error:04065084:rsa routines:RSA_EAY_PRIVATE_DECRYPT:data too large for modulus

I'm not an expert at nodejs so please let me know what you need to know.
Below is my annotated Dockerfile which defines the container - it should give you all the server info you need :)

I'm pretty sure the devices are on stock v0.6.0 firmware from a local build.

# Use the official nodejs v6.10.0 LTS container as a starting point
FROM node:boron

# Create an 'app' directory somewhere in the container
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies by copying in the package and using the preinstalled
# npm
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source from outside the container to inside
COPY . /usr/src/app
# Do the babel thing to handle new features
RUN ./node_modules/.bin/babel src/ -d lib

# The access ports will be:
EXPOSE 8080 5683

# Now run the babel'ised main.js (is this the right command?)
CMD [ "node", "lib/main.js" ]

Better failure for calling device functions

When you call a function from the cli and the function doesn't exist it should fail.

The CLI is expecting a different response than what we are returning. We need to check the CLI code and figure out what response it is expecting. I get this error when I call a nonexistent function:

Function call failed Cannot read property 'args' of undefined

Error loading user from TingoDB.

I'm getting this error on every request.
{"error":"server_error","error_description":"Server error: accessTokenExpiresAt must be a Date instance"}

Use DB instead of files

Start with this - https://github.com/sergeyksv/tingodb
For multi-server environments we'll simply need to switch to MongoDB (same APIs) which makes things easier :)


I chose this as it seemed like the easiest way for people to scale up their systems without having to rewrite a ton of repositories.

Webhook GET request with querystring

I'm having trouble migrating a webhook from the particle server to the spark server; getting a return with "data": "Bad Request". This is a GET request with a query string (I have another webhook with a POST request with form data that is working fine on spark server).

Originally I had put the query string the URL like:
"url": "http://api.com/endpoint?id={{id}}&value={{value}}"

I also tried
"url": "http://api.com/endpoint"
"qs": "id={{id}}&value={{value}}"

But am still getting a bad request error each time. Any idea?

Webhook variables not allowed in headers?

On the Particle cloud, I am able to pass variables to webhooks (requestType and headers). When pointing to my local server and trying to add the webhook below, I get "Error wrong requestType". This is simple enough to change, and it should be no problem for me to just hard-code the request type. However, I need to be able to send a variable Authorization token.

Is this something that's changing with your project or am I just missing something? The webhook I'm trying to set up worked on the particle cloud before.

spark-server responds with:

webhookError: Error: Unauthorized

webhook:

{
  "event": "myevent",
  "url": "http://myapi.com/v1/{{path}}",
  "requestType": "{{request}}",
  "headers":
    {
        "Content-Type": "application/x-www-form-urlencoded",
        "Authorization": "{{api_key}}"
    },
  "responseTopic": "hook-response/api_{{SPARK_CORE_ID}}",
  "responseTemplate": "{{error}}:{{message}}",
  "form":
    {
        "var1": "{{var1}}",
        "value": "{{value}}"
    },
  "mydevices": true
}

Webhook Logger

When you trigger a webhook request to another server, I want to log all data we can get about the request as well as the success/failure response.

We should rely on some setting which tells us if we want to console.log these requests. It should also use constructor injection like all the other repositories so we can override the implementation.


When we use this for our server, I will want to log all the requests to a file so that if any errors occur, I can basically do "playback" on the file and get my database in sync.

Move oauthClients.json to DB

These should be in the DB so we can start implementing the OAuth controller.

I think we need a repository for this which should populate the DB with defaults -- authClients.json

I'm not 100% sure how we will want to implement that controller because it should have a higher level of permissions than just allowing any user to create the tokens.

Keys in place but handshake fails

Sent particle keys doctor [devid] and keys appear to have sent successfully. The photon restarts but blinks a very fast cyan with a red blink every so often. On the local server:

::ffff:10.10.10.107 - - [29/Jan/2017:21:31:34 +0000] "POST /v1/provisioning/[devid] HTTP/1.1" 200 128 "-" "-"

Handshake failed: error:0407109F:rsa routines:RSA_padding_check_PKCS1_type_2:pkcs decoding error { cache_key: '_64', deviceID: null, ip: '::ffff:10.10.10.117' }
1 : Core disconnected: Error: error:0407109F:rsa routines:RSA_padding_check_PKCS1_type_2:pkcs decoding error { cache_key: '_64', deviceID: '', duration: undefined }
Connection from: ::ffff:10.10.10.117 - Connection ID: 64

Connection ID increments about once every 2 seconds.

I can revert keys back to the default particle public keys, and the photon seems to work as normal (breathing cyan and connects to particle server just fine). Not sure if this is related, but I also noticed that if I do a particle identify when config'd to the particle server, I get the normal device ID and firmware response, but if I do the same particle identify when config'd to the local server, I get device ID and

Serial err: Error: Opening COM3: Access denied
Serial problems, please reconnect the device.
Unable to determine system firmware version

Update This Readme

Update this readme to use the correct particle API. Also include a link to set up a windows machine.

Please remove the openssl line because all it did was break things for me.

Please format the commands so they are always on their own line. There were a few inline commands that I read over.
