jhuckaby / cronicle

A simple, distributed task scheduler and runner with a web based UI.

Home Page: http://cronicle.net

License: Other

Languages: JavaScript 96.80%, Shell 1.00%, HTML 0.70%, CSS 1.51%
Topics: cron, crontab, scheduler, multiserver

cronicle's People

Contributors

attie, dependabot[bot], dol-leodagan, ftaiolivista, jhuckaby, lukejbullard, moonsoftsrl, mprasil, spascareli


cronicle's Issues

[Feature Request] More granular user permissions

Summary

We would like to be able to configure more granular permissions with Cronicle to limit the "damage" a user can do. Example:

  • User can edit/run/abort a specific event or a specific "Category"
  • User can edit/run/abort the events they "own"

Web Hook URL?

Please see:
https://dingtalk.taobao.com/docs/doc.htm?spm=a219a.7629140.0.0.MYZwcw&treeId=257&articleId=105735&docType=1

There is no problem on the command line, as shown below:

curl 'https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxx' \
  -H 'Content-Type: application/json' \
  -d '{"msgtype": "text", "text": {"content": "Cronicle message"}}'

But when I put https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxx into Cronicle's Web Hook URL field, no message is received.
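For reference, Cronicle's Web Hook feature POSTs its own JSON payload describing the job, which the DingTalk robot API rejects because it only accepts its own msgtype format. A workaround sketch (the token and message are placeholders): skip the Web Hook URL field and send the notification from the job's shell script itself, or from a chained event:

#!/bin/sh
# Post a DingTalk-formatted message from inside the job, instead of relying
# on Cronicle's generic web hook payload (which DingTalk cannot parse).
curl -s 'https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxx' \
  -H 'Content-Type: application/json' \
  -d '{"msgtype": "text", "text": {"content": "Cronicle job finished"}}'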

No password option for sending mail

Hello:
Thank you very much for writing Cronicle.
But why is there no password parameter for sending mail? Anonymous (unauthenticated) SMTP is sometimes not accepted.

Weird issue with job logs

Summary

First of all thanks for creating this amazing tool. We have been using it for a few weeks and absolutely love it. Great work !!!

In our QA setup everything was working fine, but when we started rolling out to production, we started seeing some issues with job logs. The only difference in our setup is the Node.js version; wondering if it is related to that?

When Running on Cronicle Slave

When I schedule the job to run on the Cronicle slave server, the job runs fine, but gets stuck at the end (I think during job log file cleanup):

[screenshot]

Navigating back to "Complete" jobs tab, I see "(No log file found.)" and clicking on "Download Log" or "View Full Log" shows the same.

[screenshot]

When Running on Cronicle Master

When scheduling the same job on the Cronicle master, the job finishes, but the "Job Event Log" section of the "Job Details" page is blank. Clicking "Download Log" or "View Full Log" shows the complete job log correctly.

[screenshot]

Steps to reproduce the problem

  1. Create a new test event using Shell plugin with:

# Enter your shell script code here
echo "Starting"
echo $CRONICLE
date
sleep 20
echo "Done."
  1. Use "Run now" feature to schedule this on master or slave server and see the above mentioned issues with job logs.

Your Setup

We have 2 separate setups for Cronicle, QA and Prod. The only difference is the Node.js version; everything else is the same.

Operating system and version?

Ubuntu 16.04.2 LTS

Node.js version?

Prod: v6.10.2
QA: v7.8.0

Cronicle software version?

0.6.11

Are you using a multi-server setup, or just a single server?

1 master + 1 slave

Are you using the filesystem as back-end storage, or S3/Couchbase?

Filesystem

Can you reproduce the crash consistently?

Yes

Log Excerpts

Error.Log:
Failed to fetch job log file: https://172.17.2.143:3012/api/app/fetch_delete_job_log?path=%2Fhome%2Fdeployer%2Fcronicle%2FCronicle-0.6.11%2Flogs%2Fjobs%2Fjj26j9mdn02.log&auth=removed: Error: Error: Hostname/IP doesn't match certificate's altnames: "IP: 172.17.2.143 is not in the cert's list: "

Wrong URL when viewing the log of a running event

Here's one minor bug: if I have a slave host myhost.example.com (which Cronicle reports as just 'myhost' for all purposes), when trying to "view the full log" of an event running on it, it tries to access the following URL:

http://myhost:3012/api/app/get_live_job_log?id=jjafcdwz212&download=1

which is invalid because the domain is dropped.

(For logs of completed events, it just accesses the master node which seems to be fine.)

Cronicle login page does not load up behind a load balancer

Summary

I installed Cronicle on an AWS EC2 instance (in private subnet) and put it behind a load balancer (ALB).

The Cronicle login page does not load completely. Chrome developer tools show a WebSocket request being made to the private DNS name of the Cronicle master (which is inaccessible, since the instance is behind a load balancer).

How can I make Cronicle work with a load balancer?

Steps to reproduce the problem

Brand new Cronicle setup behind a load balancer. Run setup and start Cronicle.

Your Setup

I installed Cronicle on an AWS EC2 instance (in private subnet) and put it behind a load balancer (ALB).

Operating system and version?

Ubuntu 16.04

Node.js version?

v4.2.6

Cronicle software version?

0.7.5

Are you using a multi-server setup, or just a single server?

Single server

Are you using the filesystem as back-end storage, or S3/Couchbase?

Filesystem

Can you reproduce the crash consistently?

Yes

Log Excerpts

WebSocket connection to 'wss://cronicle-master-1.local:3013/socket.io/?EIO=3&transport=websocket' failed: WebSocket is closed before the connection is established.

VM100:164 WebSocket connection to 'wss://cronicle-master-1.local:3013/socket.io/?EIO=3&transport=websocket' failed: Error in connection establishment: net::ERR_NAME_NOT_RESOLVED
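A note on the likely cause: the UI opens its WebSocket to the hostname the master reports, not to the URL in the browser. Two config.json settings are worth checking, as a sketch rather than a verified fix: point base_app_url at the ALB's public URL, and, if your Cronicle version has it (the setting name and its availability in this version are assumptions to verify against the docs), set web_direct_connect to false so the UI sockets connect back to whatever host is in the browser's address bar:

"base_app_url": "https://cronicle.example.com",
"web_direct_connect": false,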

Account is locked out

[screenshot]
Is this because the admin user is not associated with this email address?

Any other way to reset the admin account's password?

Chain Reaction

Select an event to run automatically after this event completes.

For workflows, I would like to be able to select multiple jobs and connect them with AND/OR logic. If the first job fails or times out, the upstream and downstream jobs should also report failure.

Server with multiple network interfaces: Cronicle might not work depending on IP auto-selected

If a server has multiple network interfaces, and (the first) one happens to be assigned a 169.254.x.x link-local address, the Cronicle web UI does not start up (it stays blank). I found out that this is caused by Cronicle auto-selecting the first IP address, which does not work in some situations.

An option to manually override the IP address selected by Cronicle might be helpful here. My workaround was to disable the (first) interface, then run setup again on a fresh installation.

A suggestion to make the already nice Cronicle code more robust: create a socket programmatically during setup/startup and obtain the assigned IP address from it (though that might require an internet connection).
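On that programmatic-socket suggestion: on Linux the routing table can be asked which source address an outbound connection would use, without sending any traffic. A minimal sketch, assuming the iproute2 tools and that 8.8.8.8 is routable:

# Print the source IP the kernel would pick for an outbound connection.
ip route get 8.8.8.8 | awk '{ for (i = 1; i < NF; i++) if ($i == "src") print $(i+1) }'

An address obtained this way could feed a manual override option at setup time.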

Cronicle Crash

Hey Joseph,
I was using the delete_event API to delete an event, and as soon as I use it, Cronicle crashes. The event that was deleted was supposed to run every 5 minutes.
Please find the crash.log contents below:

Fri Apr 28 2017 15:31:00 GMT+0530 (IST)
TypeError: Cannot read property 'minutes' of undefined
at constructor.checkEventTimingMoment (/opt/cronicle/lib/scheduler.js:200:13)
at /opt/cronicle/lib/scheduler.js:107:17
at /opt/cronicle/node_modules/async/lib/async.js:1213:16
at /opt/cronicle/node_modules/async/lib/async.js:166:37
at Object.async.whilst (/opt/cronicle/node_modules/async/lib/async.js:792:13)
at /opt/cronicle/lib/scheduler.js:99:12
at /opt/cronicle/node_modules/async/lib/async.js:1213:16
at /opt/cronicle/node_modules/async/lib/async.js:166:37
at /opt/cronicle/node_modules/async/lib/async.js:181:20
at iterate (/opt/cronicle/node_modules/async/lib/async.js:262:13)
Please help me with this crash.

After deleting the event, when I restart Cronicle, the COMPLETED tab still shows all the job IDs, but the Event Name column reads "(None)". Below is the screenshot:
[screenshot]

Is there any way to show the name of the event a job was running before the event got deleted?

Thanks
Pallavi
([email protected])

Setup step hangs while installing on Ubuntu 14.04

I'm trying to install on Ubuntu 14.04 with Couchbase as the backing store.

It says it finishes successfully, but never returns to the prompt. I see this:

root@ip-10-242-20-17:/opt/cronicle# /opt/cronicle/bin/control.sh setup
[ERROR] Failed to fetch key: global/users: The key does not exist on the server
[ERROR] Failed to fetch key: global/users: The key does not exist on the server
[ERROR] Failed to fetch key: global/plugins: The key does not exist on the server
[ERROR] Failed to fetch key: global/categories: The key does not exist on the server
[ERROR] Failed to fetch key: global/server_groups: The key does not exist on the server
[ERROR] Failed to fetch key: global/servers: The key does not exist on the server
[ERROR] Failed to fetch key: global/schedule: The key does not exist on the server
[ERROR] Failed to fetch key: global/api_keys: The key does not exist on the server
Setup completed successfully!
This server (ip-10-242-20-17) has been added as the single primary master server.
An administrator account has been created with username 'admin' and password 'admin'.
You should now be able to start the service by typing: '/opt/cronicle/bin/control.sh start'
Then, the web interface should be available at: http://ip-10-242-20-17:3012/
Please allow for up to 60 seconds for the server to become master.

And it just sits there.

Any ideas?

I'm using Node 7.5.0

Make Cronicle a bit more automation-friendly

All of our infrastructure is being built with Packer & Terraform, so we're trying to do as much as we can in an automated fashion. Right now, we bring up a 3 node Cronicle cluster automatically, but there are still some manual tasks that we have to run to have it in a fully working state. Here are things that would help us to fully automate the install process:

  • Allow us to specify the master group regex at setup time. Maybe this should be a parameter in config.json? Essentially, we know the IPs of the 3 machines that will be running Cronicle and we just want a way to cluster them automatically. Are there other ways that we could do this with the existing code?
  • Allow us to create API keys via the API (yes, I know this is a turtles-all-the-way-down scenario). Basically, we need a programmatic way of generating API keys so that we can stuff them into Consul and our services can talk to Cronicle (see the API sketch after this list).
  • Allow us to at least change the administrator password via the API. We don't need a full user API, just something to secure the installation. Can this be done in setup.json?

Thanks for creating such an awesome tool.
-Barry
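For the parts of the API that do exist, curl can drive them once an API key is available; a rough sketch of creating an event over the JSON API (the host, key, and field values are illustrative, and the key itself currently has to be created in the UI first, which is exactly the chicken-and-egg problem above):

curl -s "http://cronicle-master:3012/api/app/create_event/v1" \
  -H 'Content-Type: application/json' \
  -d '{"api_key": "YOUR_API_KEY",
       "title": "Nightly Backup",
       "enabled": 1,
       "category": "general",
       "plugin": "shellplug",
       "target": "allgrps",
       "timing": { "hours": [4], "minutes": [0] },
       "params": { "script": "#!/bin/sh\n/usr/local/bin/backup.sh" }}'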

Linux Service

I'm using a shell script to restart a Linux service, and the job never finishes. How can I configure the job to execute and finish?

example: service myservice restart

The job never finishes, and the only way to get a status is to abort the job; when I do that, the job result always shows as an error.
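This usually happens because service ... restart leaves the daemon's children attached to the job's stdout/stderr pipes, so the Shell Plugin never sees the streams close even though the command itself returned. A minimal sketch of the usual fix, detaching the service's descriptors from the job:

#!/bin/sh
# Redirect stdin/stdout/stderr away from the job's pipes so the job can end
# as soon as the restart command returns.
service myservice restart </dev/null >/dev/null 2>&1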

coredump occurs around 4 o'clock on master

We have 5 servers in our Cronicle cluster:
[screenshot]

and over 3.08 million completed jobs so far (maybe 40,000+ per day):
[screenshot]

For the last few days, the master server has dumped core around 4:00 am every day, but no crash.log file appears. We are very worried about this situation.

From the core stack, the coredump is caused by OOM:

(gdb) bt
#0 0x00007f4a300325e5 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x00007f4a30033dc5 in abort () at abort.c:92
#2 0x000000000125f101 in node::Abort() ()
#3 0x000000000125f13c in node::OnFatalError(char const*, char const*) ()
#4 0x0000000000a5f412 in v8::Utils::ReportOOMFailure(char const*, bool) ()
#5 0x0000000000a5f61b in v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) ()
#6 0x0000000000e1a9a1 in v8::internal::Factory::NewFixedArray(int, v8::internal::PretenureFlag) ()
#7 0x0000000000f8f481 in v8::internal::HashTable<v8::internal::StringTable, v8::internal::StringTableShape, v8::internal::HashTableKey*>::New(v8::internal::Isolate*, int, v8::internal::MinimumCapacity, v8::internal::PretenureFlag) ()
#8 0x0000000000f902f9 in v8::internal::HashTable<v8::internal::StringTable, v8::internal::StringTableShape, v8::internal::HashTableKey*>::EnsureCapacity(v8::internal::Handle<v8::internal::StringTable>, int, v8::internal::HashTableKey*, v8::internal::PretenureFlag) ()
#9 0x0000000000f90880 in v8::internal::StringTable::LookupString(v8::internal::Isolate*, v8::internal::Handle<v8::internal::String>) ()
#10 0x00000000010b1aaf in v8::internal::Runtime_ObjectHasOwnProperty(int, v8::internal::Object**, v8::internal::Isolate*) ()
#11 0x00000dad3c3063a7 in ?? ()
#12 0x0000000000000000 in ?? ()
(gdb) q

From Storage.log, it seems that a lot of "completed log" items were being loaded into memory at that time (maybe during the maintenance operation):

[1487276212.593][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13280][]
[1487276212.595][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13279][]
[1487276212.596][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13278][]
[1487276212.598][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13277][]
[1487276212.599][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13276][]
[1487276212.601][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13275][]
[1487276212.602][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13274][]
[1487276212.604][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13273][]
[1487276212.606][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13272][]
[1487276212.607][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13271][]
[1487276212.609][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13270][]
[1487276212.61][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13269][]
[1487276212.612][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13268][]
[1487276212.614][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13267][]
[1487276212.617][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13266][]
[1487276212.618][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13265][]
[1487276212.62][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13264][]
[1487276212.622][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13263][]
[1487276212.624][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13262][]
[1487276212.626][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13261][]
[1487276212.628][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13260][]
[1487276212.629][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13259][]
[1487276212.631][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13258][]
[1487276212.634][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13257][]
[1487276212.635][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13256][]
[1487276212.637][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13255][]
[1487276212.638][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13254][]
[1487276212.64][2017-02-17 04:16:52][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13253][]
[1487276220.862][2017-02-17 04:17:00][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13252][]
[1487276220.872][2017-02-17 04:17:00][CronicleMaster][Storage][debug][9][Fetching 0 items at position 0 from list: global/schedule][]
[1487276220.872][2017-02-17 04:17:00][CronicleMaster][Storage][debug][9][Loading list: global/schedule][]
[1487276229.266][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.27][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/schedule/-19][]
[1487276229.271][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/schedule/-18][]
[1487276229.271][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/schedule/-17][]
[1487276229.271][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/schedule/-16][]
[1487276229.276][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Locating item in list: global/server_groups][{"id":"giyfkrp0u01"}]
[1487276229.276][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list: global/server_groups][]
[1487276229.276][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/server_groups/0][]
[1487276229.281][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Locating item in list: global/plugins][{"id":"shellplug"}]
[1487276229.281][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list: global/plugins][]
[1487276229.281][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/plugins/0][]
[1487276229.286][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Locating item in list: global/categories][{"id":"civm2b4vm02"}]
[1487276229.286][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list: global/categories][]
[1487276229.286][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/categories/0][]
[1487276229.289][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13251][]
[1487276229.292][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.293][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.296][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Setting expiration on: jobs/jiz8tuhnp6o/log.txt.gz to 1502828229][]
[1487276229.296][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"expire_set","key":"jobs/jiz8tuhnp6o/log.txt.gz","expiration":1502828229,"force":false}]
[1487276229.297][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.297][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Setting expiration on: jobs/jiz8tuhnp6o to 1502828229][]
[1487276229.297][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"expire_set","key":"jobs/jiz8tuhnp6o","expiration":1502828229,"force":false}]
[1487276229.297][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.297][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.297][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.3][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13250][]
[1487276229.306][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13249][]
[1487276229.308][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"expire_set","key":"jobs/jiz8tuhoy6v/log.txt.gz","expiration":1502828229,"force":false}]
[1487276229.308][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Setting expiration on: jobs/jiz8tuhoy6v/log.txt.gz to 1502828229][]
[1487276229.308][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.308][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Setting expiration on: jobs/jiz8tuhoy6v to 1502828229][]
[1487276229.308][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"expire_set","key":"jobs/jiz8tuhoy6v","expiration":1502828229,"force":false}]
[1487276229.308][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.308][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.308][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.314][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13248][]
[1487276229.315][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.318][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Locating item in list: global/server_groups][{"id":"giyfkrp0u01"}]
[1487276229.318][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list: global/server_groups][]
[1487276229.318][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/server_groups/0][]
[1487276229.32][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Locating item in list: global/plugins][{"id":"shellplug"}]
[1487276229.32][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list: global/plugins][]
[1487276229.321][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/plugins/0][]
[1487276229.321][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13247][]
[1487276229.322][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Locating item in list: global/categories][{"id":"civm2b4vm02"}]
[1487276229.322][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list: global/categories][]
[1487276229.322][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: global/categories/0][]
[1487276229.323][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.326][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Setting expiration on: jobs/jiz8tuhmo6l/log.txt.gz to 1502828229][]
[1487276229.326][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"expire_set","key":"jobs/jiz8tuhmo6l/log.txt.gz","expiration":1502828229,"force":false}]
[1487276229.326][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.326][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Setting expiration on: jobs/jiz8tuhmo6l to 1502828229][]
[1487276229.326][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"expire_set","key":"jobs/jiz8tuhmo6l","expiration":1502828229,"force":false}]
[1487276229.326][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.326][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.326][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276229.33][2017-02-17 04:17:09][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13246][]
[1487276281.359][2017-02-17 04:18:01][CronicleMaster][Storage][debug][9][Fetching 0 items at position 0 from list: global/schedule][]
[1487276281.359][2017-02-17 04:18:01][CronicleMaster][Storage][debug][9][Loading list: global/schedule][]
[1487276281.361][2017-02-17 04:18:01][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276281.367][2017-02-17 04:18:01][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276285.708][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276285.717][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list page: global/schedule/-19][]
[1487276285.717][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list page: global/schedule/-18][]
[1487276285.718][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list page: global/schedule/-17][]
[1487276285.718][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list page: global/schedule/-16][]
[1487276285.719][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Locating item in list: global/server_groups][{"id":"giyfkrp0u01"}]
[1487276285.719][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list: global/server_groups][]
[1487276285.72][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list page: global/server_groups/0][]
[1487276285.732][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Locating item in list: global/plugins][{"id":"shellplug"}]
[1487276285.732][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list: global/plugins][]
[1487276285.732][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list page: global/plugins/0][]
[1487276285.735][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Locating item in list: global/categories][{"id":"civm2b4vm02"}]
[1487276285.735][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list: global/categories][]
[1487276285.735][2017-02-17 04:18:05][CronicleMaster][Storage][debug][9][Loading list page: global/categories/0][]
[1487276309.519][2017-02-17 04:18:29][CronicleMaster][Storage][debug][9][Loading list page: logs/completed/-13245][]
[1487276324.101][2017-02-17 04:18:44][CronicleMaster][Storage][debug][9][Enqueuing async task][{"action":"custom"}]
[1487276369.48][2017-02-17 04:19:29][CronicleMaster][Storage][debug][9][Fetching 0 items at position 0 from list: global/schedule][]
[1487276369.48][2017-02-17 04:19:29][CronicleMaster][Storage][debug][9][Loading list: global/schedule][]
[1487276457.089][2017-02-17 04:20:57][CronicleMaster][Storage][debug][2][Setting up storage system][]
[1487276457.112][2017-02-17 04:20:57][CronicleMaster][Storage][debug][9][Fetching 0 items at position 0 from list: global/servers][]
[1487276457.113][2017-02-17 04:20:57][CronicleMaster][Storage][debug][9][Loading list: global/servers][]

If there is anything else I can provide, please feel free to ask.

Thank you!
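Not a fix for the underlying maintenance load, but if the master simply holds more data than V8's default heap allows, one stopgap is raising the old-space ceiling before starting the daemon. This is a standard Node.js flag, not an official Cronicle setting (NODE_OPTIONS needs Node 8+; on older versions the flag must be passed to the node invocation directly):

# Allow a 4 GB heap; adjust to taste.
export NODE_OPTIONS="--max-old-space-size=4096"
/opt/cronicle/bin/control.sh start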

[Feature request] support ldap account login

LDAP services are widely used in enterprises for internal user account management and resource management. Supporting LDAP login would help system administrators quickly integrate Cronicle with the company's management systems, automatically provision and deactivate user accounts, and spare users from having to remember and maintain a separate password.

We could retain the existing account system, but support verifying identity through LDAP during the authentication phase.

Cronicle version: querying and upgrading

Two minor suggestions that could make Cronicle easier to manage via deployment tools (like Ansible):

  • Would be nice to have control.sh version that would output e.g. v0.7.4 (matching the git release). This would allow checking automatically whether an upgrade is needed on a managed node (see the workaround sketch below).
  • Currently, supplying an invalid version to control.sh upgrade does two things: restarts the daemon and returns 0 (while printing the error to console); I would probably naturally expect it to just return 1 instead and take no further action.

Thanks!
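Until such a command exists, a workaround sketch: the installed version lives in Cronicle's package.json, so a deployment tool can read it directly (assumes the default install path):

node -e 'console.log("v" + require("/opt/cronicle/package.json").version)'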

SMTP and NODE_TLS_REJECT_UNAUTHORIZED

In order to send notification e-mails when the SSL certificate is self-signed, I suppose NODE_TLS_REJECT_UNAUTHORIZED has to be set; otherwise the connection is rejected with an error. There is a web_hook_ssl_cert_bypass option for web hooks, but no such option for e-mails, which would definitely be nice to have.

For completeness, here's what is logged in /opt/cronicle/Error.log:

[1510480800.66][2017-11-12 10:00:00][scheduler][Error][error][mail]
[Failed to send e-mail for job: jj9wl9orw0l: ...@...: Error: self signed certificate]
[...]
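For the record, the blunt workaround is exporting the variable before starting the daemon. Note this disables certificate verification for the whole process, so it is a stopgap rather than a substitute for a mail-specific bypass option:

NODE_TLS_REJECT_UNAUTHORIZED=0 /opt/cronicle/bin/control.sh start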

[Help / Feature Request] Building a unique job queue

Hi there,
I've been using Cronicle for a few days now; if I simply missed an existing feature, please close this, sorry.

I know that I can work around this problem, but it is a bit difficult.

I want to build a job queue that executes multiple commands one after another. I know that you can chain commands together (also mentioned here).
My problem is building this queue, because Cronicle is built around the idea of rescheduling jobs and executing them multiple times. I want to execute a job only once; after successful execution it should be deleted/deactivated so there is no chance it could be executed again.

For better understanding, here is my use case.
I have a long list of URLs I want to download files from.
I build a job for each URL (a shell script running wget).
Now I want to hit a start button so the queue gets executed (or resumes after the master server restarts), with each job running after the previous one. It would be great to have a multiplier like "Event Concurrency" (I think that setting is not meant for this?) controlling how many jobs can be active per server, since sometimes the host you download from is slower than your connection, so multiple different wget commands could run in parallel (never the same one!).
With multiple slave servers, each one should pick the next job from the master (never executing the same job twice).
It would also be great if I could add new jobs to the running queue as I find new things (see the sketch below).

I hope this is understandable and I didn't overlook an existing feature.
Thanks for your help
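On the "add new jobs to the running queue" point, the existing external API can at least trigger events on demand; a sketch using run_event (the host, key, and event ID are placeholders):

curl -s "http://cronicle-master:3012/api/app/run_event/v1" \
  -H 'Content-Type: application/json' \
  -d '{"api_key": "YOUR_API_KEY", "id": "ejexample123"}'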

Cannot read property 'child' of undefined

Summary

It was working without a problem and now this...

Steps to reproduce the problem

I just try to start cronicle: /opt/cronicle/bin/control.sh start

Your Setup

3 x 3 Magento crons

Operating system and version?

Debian Linux 9 Linux 4.9.0-5-amd64 on x86_64

Node.js version?

v9.3.0

Cronicle software version?

Latest v0.7.6

Are you using a multi-server setup, or just a single server?

Single Server

Are you using the filesystem as back-end storage, or S3/Couchbase?

back-end storage

Log Excerpts

TypeError: Cannot read property 'child' of undefined
at constructor.abortLocalJob (/opt/cronicle/lib/job.js:869:14)
at constructor.abortJob (/opt/cronicle/lib/job.js:795:9)
at constructor.monitorAllActiveJobs (/opt/cronicle/lib/job.js:1486:10)
at /opt/cronicle/lib/engine.js:256:10
at /opt/cronicle/lib/job.js:1668:18
at ChildProcess.exithandler (child_process.js:264:7)
at ChildProcess.emit (events.js:159:13)
at maybeClose (internal/child_process.js:943:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:220:5)

Cronicle local sendmail not working

Summary

When configuring Cronicle to use the local sendmail configuration, the error log shows it is still trying to connect to 127.0.0.1:25.

Steps to reproduce the problem

Ensure configuration block exists:

"mail_options": { "sendmail": true, "newline": "unix", "path": "/usr/bin/sendmail" },

Remove both smtp_hostname and smtp_port from config.json. Run any job with a failure email notification.

Operating system and version?

Arch Linux

Node.js version?

9.3.0

Cronicle software version?

0.7.5

Are you using a multi-server setup, or just a single server?

Single

Are you using the filesystem as back-end storage, or S3/Couchbase?

File system

Can you reproduce the crash consistently?

Yes

Log Excerpts

[1513474254.788][2017-12-17 11:30:54][bots][Error][error][mail][Failed to send e-mail for job: jjba3hsu701: [email protected]: Error: connect ECONNREFUSED 127.0.0.1:25][{"text":"To: [email protected]\nFrom: test@example\nSubject: ⚠️ Cronicle Job Failed: Test\n\nDate/Time: 2017/12/17 11:30:54 (GMT+10)\nEvent Title: Test\nCategory: General\nServer Target: All Servers\nPlugin: Shell Script\n\nJob ID: jjba3hsu701\nHostname: example\nPID: 1161\nElapsed Time: 0 seconds\nPerformance Metrics: (No metrics provided)\nAvg. Memory Usage: (Unknown)\nAvg. CPU Usage: (Unknown)\nError Code: 1\n\nError Description:\nScript exited with code: 1\n\nJob Details:\nhttp://10.0.0.32:3012/#JobDetails?id=jjba3hsu701\n\nJob Debug Log (199 bytes):\nhttp://10.0.0.32:3012/api/app/get_job_log?id=jjba3hsu701\n\nEdit Event:\nhttp://10.0.0.32:3012/#Schedule?sub=edit_event&id=ejba26dra05\n\nEvent Notes:\n(None)\n\nRegards,\nThe Cronicle Team\n"}]

Sorry @jhuckaby, after the previous issue it still looks like it won't create the correct transport.

Node v8 + daemon failure ("cwd" must be a string)

It looks like it may be affected by this bug: indexzero/daemon.node#41

Basically, installing the latest release of Cronicle on Node v8.9.1, setting it up and starting it yields this error:

# /opt/cronicle/bin/control.sh start
/opt/cronicle/bin/control.sh start: Starting up Cronicle Daemon...
child_process.js:403
    throw new TypeError('"cwd" must be a string');
    ^

TypeError: "cwd" must be a string
    at normalizeSpawnArguments (child_process.js:403:11)
    at Object.exports.spawn (child_process.js:496:38)
    at Function.module.exports.daemon (/opt/cronicle/node_modules/daemon/index.js:50:31)
    at module.exports (/opt/cronicle/node_modules/daemon/index.js:25:20)
    at __construct.__init (/opt/cronicle/node_modules/pixl-server/server.js:123:21)
    at __construct.startup (/opt/cronicle/node_modules/pixl-server/server.js:161:8)
    at Object.<anonymous> (/opt/cronicle/lib/main.js:29:8)
    at Module._compile (module.js:635:30)
    at Object.Module._extensions..js (module.js:646:10)
    at Module.load (module.js:554:32)
/opt/cronicle/bin/control.sh start: Cronicle Daemon could not be started

Slave doesn't connect to master

Summary

I am trying to connect a slave to the master server in an AWS environment.

Steps to reproduce the problem

The master is running with the websocket and servercom options set to 1, and both servers have the same app URL and secret key.

Your Setup

2 servers on AWS

Operating system and version?

Node.js version?

latest

Cronicle software version?

0.7.6

Are you using a multi-server setup, or just a single server?

single server with slaves

Are you using the filesystem as back-end storage, or S3/Couchbase?

A MongoDB engine we have customized

Can you reproduce the crash consistently?

Log Excerpts

[1514568357.042][2017-12-29 17:25:57][ip-172-31-24-196][WebServer][debug][2][pixl-server-web v1.0.25 starting up][]
[1514568357.043][2017-12-29 17:25:57][ip-172-31-24-196][WebServer][debug][2][Starting HTTP server on port: 9012][]
[1514568357.047][2017-12-29 17:25:57][ip-172-31-24-196][WebServer][debug][2][Starting HTTPS (SSL) server on port: 9013][]
[1514568357.055][2017-12-29 17:25:57][ip-172-31-24-196][Cronicle][debug][3][Starting component: API][]
[1514568357.056][2017-12-29 17:25:57][ip-172-31-24-196][API][debug][3][API service listening for base URI: /api][]
[1514568357.056][2017-12-29 17:25:57][ip-172-31-24-196][WebServer][debug][3][Adding custom URI handler: //api/(\w+)/: API][]
[1514568357.056][2017-12-29 17:25:57][ip-172-31-24-196][Cronicle][debug][3][Starting component: User][]
[1514568357.056][2017-12-29 17:25:57][ip-172-31-24-196][User][debug][3][User Manager starting up][]
[1514568357.057][2017-12-29 17:25:57][ip-172-31-24-196][API][debug][3][Adding API namespace: user][]
[1514568357.057][2017-12-29 17:25:57][ip-172-31-24-196][Cronicle][debug][3][Starting component: Cronicle][]
[1514568357.057][2017-12-29 17:25:57][ip-172-31-24-196][Cronicle][debug][3][Cronicle engine starting up][["/usr/bin/node","/opt/cronicle/lib/main.js","--debug","--echo"]]
[1514568357.058][2017-12-29 17:25:57][ip-172-31-24-196][API][debug][3][Adding API namespace: app][]
[1514568357.058][2017-12-29 17:25:57][ip-172-31-24-196][WebServer][debug][3][Adding custom request method handler: OPTIONS: CORS Preflight][]
[1514568357.064][2017-12-29 17:25:57][ip-172-31-24-196][Cronicle][debug][4][Using broadcast IP: 172.31.31.255][]
[1514568357.064][2017-12-29 17:25:57][ip-172-31-24-196][Cronicle][debug][4][Starting UDP server on port: 3014][]
[1514568357.065][2017-12-29 17:25:57][ip-172-31-24-196][Storage][debug][9][Fetching 0 items at position 0 from list: global/servers][]
[1514568357.066][2017-12-29 17:25:57][ip-172-31-24-196][Storage][debug][9][Loading list: global/servers][]
[1514568357.067][2017-12-29 17:25:57][ip-172-31-24-196][Filesystem][debug][9][Fetching Object: global/servers][data/global/73/f2/06/73f2061c54ebbd19ba9bbddd70299297.json]
[1514568357.068][2017-12-29 17:25:57][ip-172-31-24-196][Storage][debug][9][List could not be loaded: global/servers: Error: Failed to fetch key: global/servers: File not found][]
[1514568357.069][2017-12-29 17:25:57][ip-172-31-24-196][Cronicle][debug][4][Server not found in cluster -- waiting for a master server to contact us][]
[1514568357.07][2017-12-29 17:25:57][ip-172-31-24-196][Cronicle][debug][2][Startup complete, entering main loop][]

If I manually connect the slave to the master, it does get connected.

Cronicle deployment failure!

Hello @jhuckaby,
the install succeeded, but visiting the page shows a blank screen!

I used an nginx proxy, but Chrome shows:

WebSocket connection to 'ws://192.168.1.179:3012/socket.io/?EIO=3&transport=websocket' failed: WebSocket is closed before the connection is established.

How can I make it listen only on IPv4 ports?
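If the underlying pixl-server-web component in your build supports a bind-address setting (I believe it exposes http_bind_address, but treat the key name as an assumption and check that component's docs), forcing the IPv4 wildcard in the WebServer section of config.json might do it:

"WebServer": {
    "http_bind_address": "0.0.0.0",
    "http_port": 3012
}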

Suggestion: option to run a shell command (or plugin) on job failure?

First, thanks @jhuckaby for a wonderful library and reacting so fast to the issues :)

I wanted to suggest an extremely useful feature which seems to be reasonably simple to implement as well - basically, an option to run a shell command (or any 'Plugin' action that can currently be scheduled, for that matter) on job failure. (Of course, it's kind of possible right now, by baking error handling directly into each task's script, but it might be neater if it was separate.)

This would be immensely useful, e.g. in the case of error reporting systems like sentry.io that provide command line interfaces to report errors to a central tracker. However, it could also be some user-made shell script that reacts to an error. In this case, it would also help if the details were exported to the environment (even if it's just one JSON blob dumped to a variable, since you can always split it with jq; but it could also be separate variables), so you could do stuff like this and get it logged centrally:

{ read -r CATEGORY; read -r EVENT_ID; } < <(echo "$CRONICLE_EVENT" | jq -r '.category, .id')
sentry-cli --api-key 1234 --auth-token 5678 -e category:$CATEGORY -e event_id:$EVENT_ID

By the way, speaking of environment variables, it might help to always have something like CRONICLE_EVENT exported to the environment so that jobs can extract scheduler data from it (even if they haven't failed) - for example for logging purposes. If the task log is available as a file somewhere, the filename could potentially be shared in an environment variable as well.

And finally, it may also make sense to have a 'Plugin' action when a task is completed as well (whether it failed or not, unconditionally) -- for example, you may want to upload the task details or log (if it's available) somewhere.

Thanks! (and apologies if I missed some already existing functionality from the docs)

Stuck in "Waiting for master server..."

Summary

I'm trying to rearrange some of our infrastructure and put our Cronicle instance in a separate part of our VPC. I'm using Terraform to bring up the new instance, and when Cronicle starts, visiting the server's url results in this:

[screenshot]

Steps to reproduce the problem

I'm not doing anything fancy. I'm installing it using the same steps we used with the previous server. Both are looking at the same Couchbase backend, but the old server is currently off.

I'm just looking for a way to troubleshoot why it can't find itself and promote itself as master. I don't think this is a bug with Cronicle, but a bug with how I'm trying to bring it up.

Your Setup

Operating system and version?

Centos 7

Node.js version?

6.11.3

Cronicle software version?

v0.7.1

Are you using a multi-server setup, or just a single server?

Single

Are you using the filesystem as back-end storage, or S3/Couchbase?

Couchbase

Can you reproduce the crash consistently?

Log Excerpts

Send e-mail: Error: Connection timeout

Failed to send e-mail for job: jj1c2352702: liuxx@xx: Error: Connection timeout

config:

"base_app_url": "http://192.168.200.149:3012",
"email_from": "xxxx@xxx",
"smtp_hostname": "smtpcom.263xmail.com",
"smtp_port": 25,
"mail_options": {
    "host": "smtpcom.263xmail.com",
    "port": 25,
    "secure": "flase",
    "auth": {
        "user": "xxxx@xxx",
        "pass": "xxxxx"
    }
}

control.sh throws exception when run inside the bin dir

Like this:

[root@host bin]# sh control.sh start
control.sh start: Starting up Cronicle Daemon...
module.js:474
throw err;
^

Error: Cannot find module '/opt/cronicle/bin/lib/main.js'
at Function.Module._resolveFilename (module.js:472:15)
at Function.Module._load (module.js:420:25)
at Module.runMain (module.js:607:10)
at run (bootstrap_node.js:382:7)
at startup (bootstrap_node.js:137:9)
at bootstrap_node.js:497:3
control.sh start: Cronicle Daemon could not be started
[root@host bin]# pwd
/opt/cronicle/bin
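The daemon resolves lib/main.js relative to the current working directory, which is why it looks for /opt/cronicle/bin/lib/main.js here. Running it from the install root works:

cd /opt/cronicle && bin/control.sh start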

fix: #9

Cronicle Master tries to communicate with the cronicle slave on Private IP and effectively communication fails

We have set up a Cronicle master and slave server.
Both are in different datacenters, communicating with each other over public IP addresses.
The slave status is active.
But when we run a job on this slave (slave1-ord), the job fails because the master uses "172.16.110.194" rather than the name (slave1-ord):
----------------------------------------------------------
[1496730549.317][2017-06-06 06:29:09][cronicle1.chargepoint.com][Error][error][job][Failed to fetch job log file: http://172.16.110.194:3013/api/app/fetch_delete_job_log?path=%2Fhome%2Fdeployer%2Flogs%2Fcronicle%2Fsystem%2Fjobs%2Fjj3l6pf0rfn.log&auth=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: Error: Socket Timeout (30000 ms)][]
---------------------------------------------------------------------------
If we could tell Cronicle to use the name slave1-ord, it would fix the issue.
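Cronicle's configuration includes a server_comm_use_hostnames setting intended for exactly this case; when enabled, servers connect to each other by hostname instead of LAN IP (verify the key against your version's docs):

"server_comm_use_hostnames": 1,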

Possible crash when choosing certain server selection algorithms

I received the following bug report via e-mail:

We have been using it for quite a few months, but we recently hit a crash when we created a new server group and tried running a job with that group as the target.

Can you please help us in this regard? We are also trying to look into this. Find the logs below:

Stack trace (in crash.log):
Tue Apr 11 2017 12:30:00 GMT+0000 (UTC)
TypeError: Cannot read property 'hostname' of null
    at constructor.chooseServer (/home/deployer/cronicle/Cronicle-0.6.11/lib/job.js:338:45)
    at /home/deployer/cronicle/Cronicle-0.6.11/lib/job.js:118:25
    at /home/deployer/cronicle/Cronicle-0.6.11/node_modules/async/lib/async.js:726:13
    at /home/deployer/cronicle/Cronicle-0.6.11/node_modules/async/lib/async.js:52:16
    at /home/deployer/cronicle/Cronicle-0.6.11/node_modules/async/lib/async.js:269:32
    at /home/deployer/cronicle/Cronicle-0.6.11/node_modules/async/lib/async.js:44:16
    at /home/deployer/cronicle/Cronicle-0.6.11/node_modules/async/lib/async.js:723:17
    at /home/deployer/cronicle/Cronicle-0.6.11/node_modules/async/lib/async.js:167:37
    at /home/deployer/cronicle/Cronicle-0.6.11/lib/job.js:54:6
    at /home/deployer/cronicle/Cronicle-0.6.11/node_modules/pixl-server-storage/list.js:775:6

Log excerpt from Cronicle.log:
[1491913800.582][2017-04-11 12:30:00][qa-**********com][Cronicle][debug][4][Scheduler Minute Tick: Advancing time up to: 2017/04/11 12:30:00][]
[1491913800.585][2017-04-11 12:30:00][ qa-**********com][Cronicle][debug][4][Auto-launching scheduled item: ej17okjd60x (etl_charging_sessions) for timestamp: Tue, Apr 11, 2017 12:30 PM UTC][]

Although I never got a reply from the reporter when I asked which server selection algorithm they used, I suspect it may be a bug in the Least CPU / Least Memory code. I found a possible bug in both.

Commit ca42be4 should fix this.

- Joe

Windows Support

I'm a big fan of Cronicle and would like to use it on my current project. However, the customer is requiring that deployment be done on Windows.

How difficult do you think it would be for a novice Cronicle user to add Windows support (which of course would be contributed back)?

Cronicle crashes when trying to add a new server

Summary

Cronicle crashes when trying to add a new server.

Steps to reproduce the problem

  1. New setup of Cronicle on 2 hosts
  2. Changed the "Master Group" regex to allow for both hosts to be master eligible
  3. Start Cronicle on both hosts
  4. When the UI loads click "Add Server..." in the "Admin -> Servers" section

Your Setup

I was trying a multi-master setup of Cronicle:

  1. Installed Cronicle on 2 EC2 hosts that are publicly accessible
  2. Using shared NFS storage to store Cronicle data

Operating system and version?

Ubuntu 16.04

Node.js version?

v4.2.6

Cronicle software version?

0.7.5

Are you using a multi-server setup, or just a single server?

Trying to setup a multi-master setup

Are you using the filesystem as back-end storage, or S3/Couchbase?

Filesystem

Can you reproduce the crash consistently?

Yes, see stack trace below.

Log Excerpts

Errors shown in the Chrome dev tools console:

POST https://eu-qa-cronicle-master-1.ev-chargepoint.com:3013/api/app/add_server net::ERR_EMPTY_RESPONSE

WebSocket connection to 'wss://eu-qa-cronicle-master-1.xyz.com:3013/socket.io/?EIO=3&transport=websocket' failed: Error in connection establishment: net::ERR_CONNECTION_CLOSE

Contents of logs/crash.log:

Thu Nov 30 2017 20:20:00 GMT+0000 (UTC)
TypeError: this is not a typed array.
    at Function.from (native)
    at module.exports.Class.create.post (/opt/cronicle/node_modules/pixl-request/request.js:231:41)
    at module.exports.Class.create.json (/opt/cronicle/node_modules/pixl-request/request.js:112:15)
    at /opt/cronicle/lib/api/admin.js:78:17
    at /opt/cronicle/lib/api.js:354:6
    at /opt/cronicle/node_modules/pixl-server-storage/storage.js:216:4
    at /opt/cronicle/node_modules/pixl-server-storage/engines/Filesystem.js:273:4
    at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:380:3)

Thu Nov 30 2017 20:37:34 GMT+0000 (UTC)
TypeError: this is not a typed array.
    at Function.from (native)
    at module.exports.Class.create.post (/opt/cronicle/node_modules/pixl-request/request.js:231:41)
    at module.exports.Class.create.json (/opt/cronicle/node_modules/pixl-request/request.js:112:15)
    at /opt/cronicle/lib/api/admin.js:78:17
    at /opt/cronicle/lib/api.js:354:6
    at /opt/cronicle/node_modules/pixl-server-storage/storage.js:216:4
    at /opt/cronicle/node_modules/pixl-server-storage/engines/Filesystem.js:273:4
    at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:380:3)

Suggestion: prefix ("root") for storage backend?

It would be nice if it was possible to specify a root for Cronicle to store all its data in, so that global/server_groups could be for example my_cronicle_folder/global/server_groups.

This would help greatly, for example, if you have to use a shared S3 bucket (or DigitalOcean Space) for some reason.
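For what it's worth, the S3 engine in pixl-server-storage appears to accept a keyPrefix property, which would cover the shared-bucket case; a sketch of the Storage block, with keyPrefix support treated as an assumption to verify against your version:

"Storage": {
    "engine": "S3",
    "AWS": { "accessKeyId": "YOUR_KEY", "secretAccessKey": "YOUR_SECRET", "region": "us-west-1" },
    "S3": { "keyPrefix": "my_cronicle_folder", "params": { "Bucket": "my-shared-bucket" } }
}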

Node v4 Server Requirements

Please note, if you want to run Cronicle with Node.js v4 or higher, you will need an OS with GCC 4.8+, or Clang 3.5+. This is a requirement of some of the C++ npm dependencies.

Reference: https://docs.travis-ci.com/user/languages/javascript-with-nodejs#Node.js-v4-(or-io.js-v3)-compiler-requirements

To compile native modules for io.js v3 or Node.js v4, a C++11 standard-compliant compiler is required. More specifically, either gcc 4.8 (or later) or clang 3.5 (or later) works.

Specifically, CentOS 6.x does not satisfy these requirements. You'll need CentOS 7 or later. Amazon's AWS EC2 Linux flavor seems fine, as does OS X 10.8 and up.

Provide “Read Only” Privilege

At present, the Cronicle system is managed by our system administrator. For safety reasons, we do not allow R&D to make any changes to tasks (add/edit/delete/start/stop), but if a scheduled task fails, R&D need to view the event history and log output to determine what's wrong with the task.

So, can Cronicle provide a "Read Only" privilege? It would make it more convenient for us to work with R&D.

[screenshot]

Minor suggestion: a button to remove event result from the history

Apologies in advance for keeping spamming with suggestions, but here's another small one :)

One thing I've realized would be nice to have is a simple option to manually clear results from history. It's currently possible to delete events but not their results. E.g., when playing around with Cronicle to see what works and what doesn't, especially when it comes to more complicated things like web hooks, chain reactions and the API - it would be great if those runs could be cleared manually from the history, so it's not littered with temporary stuff.

It could probably be located next to the 'Run Again' button in 'Completed Job Details' (speaking of this button, by the way, it seems to overlay the underlying table text on small-to-medium sized screens; is that intentional?).

(Could also potentially ask the user whether to clear the event history when deleting the event.)

Thanks!

Deprecation warning from node in event log

This is on node 7.10.1 and v0.7.3 of Cronicle. At the end of Shell Script task log, there's this:

(node:20574) DeprecationWarning: Calling an asynchronous function without callback is deprecated.

Support HTTPS Proper

Cronicle has various issues when HTTPS mode is enabled on the underlying WebServer component, especially when https_force is also enabled. This forces server-to-server requests to use HTTPS, which fails because they also use IP addresses. See Issue #26 for at least one case. Example error (copying logs between servers):

Failed to fetch job log file: https://172.17.2.143:3012/api/app/fetch_delete_job_log?path=%2Fhome%2Fdeployer%2Fcronicle%2FCronicle-0.6.11%2Flogs%2Fjobs%2Fjj26j9mdn02.log&auth=removed: Error: Error: Hostname/IP doesn't match certificate's altnames: "IP: 172.17.2.143 is not in the cert's list: "

We need to fully test HTTPS mode, especially in a multi-server environment, and dig out all the possible issues that may arise.

[feature request]support parameter “H” like which in Jenkins

For example, using 0 0 * * * for a dozen daily jobs will cause a large spike at midnight. In contrast, using H H * * * would still execute each job once a day, but not all at the same time, better using limited resources.

The H symbol can be thought of as a random value over a range, but it actually is a hash of the job name, not a random function, so that the value remains stable for any given project.

To allow periodically scheduled tasks to produce even load on the system, the symbol H (for “hash”) should be used wherever possible.

[screenshot]
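To illustrate the idea: the minute need not be random, just a stable function of the job name. A toy shell sketch mapping an event title to a fixed minute of the hour:

# Same title always yields the same minute; different titles spread out.
TITLE="nightly-report"
MINUTE=$(( $(printf '%s' "$TITLE" | cksum | cut -d' ' -f1) % 60 ))
echo "$TITLE -> minute $MINUTE"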

Failed to delete job: key not found

I've tried deleting a few jobs in 0.7.5, and it generally seems to work, but also fails sometimes. Here's a piece of the log when trying to delete an old job whose event doesn't even exist anymore:

==> /opt/cronicle/logs/S3.log <==
[1511742214.349][2017-11-27 00:23:34][scheduler][S3][debug][9][Fetching S3 Object: logs/events/ej9vv63tu01][]

==> /opt/cronicle/logs/Storage.log <==
[1511742214.358][2017-11-27 00:23:34][scheduler][Storage][debug][9][List could not be loaded: logs/events/ej9vv63tu01: Error: Failed to fetch key: logs/events/ej9vv63tu01: Not found][]
[1511742214.358][2017-11-27 00:23:34][scheduler][Storage][debug][9][Unlocking key: ||logs/events/ej9vv63tu01][]

==> /opt/cronicle/logs/User.log <==
[1511742214.358][2017-11-27 00:23:34][scheduler][User][error][job][Failed to delete job: Error: Failed to fetch key: logs/events/ej9vv63tu01: Not found][]

(I'm not sure of the exact reason; is it that the event no longer exists, or is it something else?)

Cronicle crash after a week

Summary

Steps to reproduce the problem

Your Setup

Operating system and version?

Debian GNU/Linux 8.8 (jessie)

Node.js version?

v7.10.0

Cronicle software version?

0.6.15

Are you using a multi-server setup, or just a single server?

Single Server

Are you using the filesystem as back-end storage, or S3/Couchbase?

No

Can you reproduce the crash consistently?

Nope! It crashes after a week or two.

Log Excerpts

Sun Jun 04 2017 17:19:00 GMT+0200 (CEST)
TypeError: Cannot read property 'title' of undefined
at /opt/cronicle/lib/scheduler.js:277:61
at /opt/cronicle/lib/job.js:70:13
at /opt/cronicle/node_modules/async/lib/async.js:726:13
at /opt/cronicle/node_modules/async/lib/async.js:52:16
at /opt/cronicle/node_modules/async/lib/async.js:269:32
at /opt/cronicle/node_modules/async/lib/async.js:44:16
at /opt/cronicle/node_modules/async/lib/async.js:723:17
at /opt/cronicle/node_modules/async/lib/async.js:167:37
at /opt/cronicle/lib/job.js:54:6
at /opt/cronicle/node_modules/pixl-server-storage/list.js:775:6

Some bugs in "get_schedule" API

Conicle's "get_schedule" API output sometimes does not meet expectations, sometimes incomplete, I test it both by curl(Linux) and Chrome(Windows), it seems the bug is more likely came from the server side.

OS: CentOS 6.5 x64
Node: 7.0.0
Cronicle: 0.6.5

  1. [RIGHT] Want 2, returns 2:
    [screenshot]
    (note: 8012 is the cronicle server port)

  2. [WRONG] Want 2 but returns 16:
    [screenshot]

  3. [WRONG] Truncated output:
    (/api/app/get_schedule/v1?offset=16&limit=2)
    [screenshot]
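For anyone trying to reproduce, the paging call looks like this (8012 matches the reporter's port; append an api_key parameter if your setup requires one):

curl -s "http://localhost:8012/api/app/get_schedule/v1?offset=0&limit=2"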

Shell plugin FAQ

Summary

@jhuckaby your app is wonderful, and thanks for your quick responses.

I am trying to set up jobs using the Shell Plugin, running simple shell jobs.

  1. I am changing the working directory from the default ($HOME/ubuntu or /home/ubuntu) and am then unable to run the jobs.

  2. I want the default memory usage and CPU usage display for live and past jobs.

Steps to reproduce the problem

Your Setup

Operating system and version?

Node.js version?

latest

Cronicle software version?

Are you using a multi-server setup, or just a single server?

Are you using the filesystem as back-end storage, or S3/Couchbase?

Can you reproduce the crash consistently?

Log Excerpts

[screenshot]

[screenshot]

Please advise

Question: Is there an easy way to upgrade all Cronicle hosts in a cluster?

We have a single-master, multi-slave Cronicle setup. Does Cronicle detect version mismatches between the various Cronicle master(s)/slave(s)? I'd imagine weird bugs in case master and slaves go out of sync?

  • Is there an easy way to upgrade all hosts in the cluster? (See the sketch after this list.)
  • It would be nice if "Server Cluster" table on "Admin -> Servers" page showed current Cronicle version for each host.
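Until something built-in exists, a blunt sketch for the first bullet, assuming SSH access to every host and the documented control.sh upgrade command:

# Hostnames are placeholders; run the upgrade on each node in turn.
for host in cron-master cron-slave1 cron-slave2; do
    ssh "$host" '/opt/cronicle/bin/control.sh upgrade'
done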
