
dshearer / jobber


An alternative to cron, with sophisticated status-reporting and error-handling

Home Page: https://dshearer.github.io/jobber/

License: MIT License

Languages: Makefile 6.60%, Go 68.14%, Shell 2.70%, RobotFramework 8.68%, Python 10.26%, Yacc 3.09%, Dockerfile 0.52%
Topics: cron, go, schedule, unix, linux, jobber

jobber's People

Contributors

carlo-colombo, dshearer, gadget, ipcjk, ironsand, jedirandy, nathanleclaire, rkrdeano, rthrfrd


jobber's Issues

Panics when run within a Docker container

Here's the Dockerfile:

# vim: set ts=4 sw=4 sts=4 colorcolumn=80 expandtab:
FROM ubuntu:trusty
MAINTAINER Jason Meredith <[email protected]>

RUN DEBIAN_FRONTEND="noninteractive" apt-get update \
    && apt-get install -y --no-install-recommends \
       golang build-essential git mercurial ca-certificates \
    && mkdir -p /usr/local/bin/go/bin \
    && export GOPATH="/usr/local/go" \
    && export PATH=$PATH:$GOPATH/bin \
    && go get github.com/dshearer/jobber \
    && useradd --home / -M --system --shell /sbin/nologin jobber_client \
    && cd /usr/local/go/src/github.com/dshearer/jobber \
    && make -C . build \
    && install -d /usr/bin \
    && install -T -o jobber_client -g root -m 4775 -p /usr/local/go/bin/jobber /usr/bin/jobber \
    && install -d /usr/sbin \
    && install -T -o jobber_client -g root -m 4775 -p /usr/local/go/bin/jobberd /usr/sbin/jobberd \
    && apt-get clean \
    && rm -rf /tmp/* /var/tmp/* \
    && rm -rf /var/lib/apt/lists/* \
    && rm -rf /var/log/apt/* \
    && rm -rf /var/cache/apt/* \
    && rm -rf /var/log/dpkg.log \
    && rm -rf /var/log/alternatives.log \
    && rm -rf /usr/share/doc/* \
    && rm -rf /usr/share/man/*

#COPY <user>.jobber /path/to/<user-home>/.jobber

CMD ["/usr/sbin/jobberd"]

Here's the output:

root@4115772183fd:/# /usr/sbin/jobberd
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x4b5d8b]

goroutine 1 [running]:
runtime.panic(0x6ddd60, 0xa85148)
    /usr/lib/go/src/pkg/runtime/panic.c:266 +0xb6
log.(*Logger).Output(0x0, 0x2, 0xc210069bc0, 0x3f, 0x0, ...)
    /usr/lib/go/src/pkg/log/log.go:134 +0x46b
log.(*Logger).Printf(0x0, 0x74c830, 0xa, 0x7f5ad1febf08, 0x1, ...)
    /usr/lib/go/src/pkg/log/log.go:160 +0x7a
main.main()
    /usr/local/go/src/github.com/dshearer/jobber/jobberd/jobberd.go:35 +0xfb

goroutine 3 [syscall]:
os/signal.loop()
    /usr/lib/go/src/pkg/os/signal/signal_unix.go:21 +0x1e
created by os/signal.init·1
    /usr/lib/go/src/pkg/os/signal/signal_unix.go:27 +0x31
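
The trace shows log.(*Logger).Printf being called on a nil *log.Logger from jobberd.go:35, which suggests the daemon's syslog-backed logger failed to initialize (minimal containers often have no syslog socket) and the error went unchecked. A small defensive sketch, assuming the logger comes from log/syslog; newDaemonLogger is a hypothetical helper, not jobber's actual code:

package main

import (
    "log"
    "log/syslog"
    "os"
)

// newDaemonLogger is a hypothetical helper: prefer syslog, but fall back to
// stderr instead of leaving a nil *log.Logger when no syslog daemon exists
// (a common situation inside minimal Docker images).
func newDaemonLogger() *log.Logger {
    sysLogger, err := syslog.NewLogger(syslog.LOG_NOTICE|syslog.LOG_DAEMON, 0)
    if err != nil {
        return log.New(os.Stderr, "jobberd: ", log.LstdFlags)
    }
    return sysLogger
}

func main() {
    logger := newDaemonLogger()
    logger.Printf("Launching.")
}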

Add 'pause' and 'resume' commands

'jobber pause' should stop the current user's jobs. 'jobber resume' should resume them.

The pause state should persist across restarts of jobberd.
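
A sketch of one way the pause state could be persisted across restarts of jobberd, assuming a simple marker file; the path and function names are hypothetical, not jobber's actual layout, and a per-user pause would simply use a per-user path:

package main

import (
    "fmt"
    "io/ioutil"
    "os"
)

// pauseFile is a hypothetical location; the real daemon would use a path it
// already owns (e.g. next to its socket or other state files).
const pauseFile = "/var/lib/jobber/paused"

// Pause records that the user's jobs must not be scheduled. Because the flag
// lives on disk, it survives restarts of jobberd.
func Pause() error { return ioutil.WriteFile(pauseFile, []byte{}, 0644) }

// Resume clears the flag; a missing file already means "not paused".
func Resume() error {
    if err := os.Remove(pauseFile); err != nil && !os.IsNotExist(err) {
        return err
    }
    return nil
}

// Paused would be consulted by the scheduler before dispatching any job.
func Paused() bool {
    _, err := os.Stat(pauseFile)
    return err == nil
}

func main() {
    fmt.Println("paused:", Paused())
}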

Configuration file format support

Is there a reason you can't support both YAML and JSON? A lot of people (myself included) would rather use JSON instead of YAML. Given this module appears to have once supported JSON, it seems odd that support for it would be completely removed instead of just supporting both formats.
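
For what it's worth, both formats could be accepted by trying one parser and falling back to the other; a minimal sketch, assuming gopkg.in/yaml.v2 for the YAML side and the standard encoding/json package. The Job struct here is a simplified stand-in, not jobber's real type:

package main

import (
    "encoding/json"
    "fmt"

    yaml "gopkg.in/yaml.v2"
)

// Job is a simplified stand-in for a jobfile entry.
type Job struct {
    Name string `json:"name" yaml:"name"`
    Cmd  string `json:"cmd" yaml:"cmd"`
    Time string `json:"time" yaml:"time"`
}

// parseJobs accepts either format: try JSON first (it is stricter), then
// fall back to YAML.
func parseJobs(data []byte) ([]Job, error) {
    var jobs []Job
    if err := json.Unmarshal(data, &jobs); err == nil {
        return jobs, nil
    }
    if err := yaml.Unmarshal(data, &jobs); err != nil {
        return nil, fmt.Errorf("jobfile is neither valid JSON nor valid YAML: %v", err)
    }
    return jobs, nil
}

func main() {
    jobs, err := parseJobs([]byte(`[{"name": "TestEcho", "cmd": "echo hi", "time": "0 * * * * *"}]`))
    fmt.Println(jobs, err)
}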

Time config like "0-5" and "10,20" fails

I got an error when I tried to set the day of week to 0-5:

Invalid 'weekday' value:strconv.ParseInt: parsing "0-5": invalid syntax

And for sec set to 10,20:

Invalid 'sec' value:strconv.ParseInt: parsing "10,20": invalid syntax

I checked the source code and found that these expressions are implemented:
https://github.com/dshearer/jobber/blob/master/jobberd/job_file.go#L480

I have no idea why I got this error. I'm using CentOS release 6.7 (Final).

How can I solve this problem?

Thanks
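
For reference, here is a minimal sketch of expanding a field such as "0-5" or "10,20" into a set of values; it illustrates the general technique rather than jobber's actual parser in job_file.go:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// parseField expands a single cron-style field ("*", "5", "0-5", "10,20",
// or a mix like "1-3,7") into the matching values between min and max.
func parseField(spec string, min, max int) ([]int, error) {
    if spec == "*" {
        vals := make([]int, 0, max-min+1)
        for v := min; v <= max; v++ {
            vals = append(vals, v)
        }
        return vals, nil
    }
    var vals []int
    for _, part := range strings.Split(spec, ",") {
        bounds := strings.SplitN(part, "-", 2)
        lo, err := strconv.Atoi(bounds[0])
        if err != nil {
            return nil, fmt.Errorf("invalid value %q: %v", part, err)
        }
        hi := lo
        if len(bounds) == 2 {
            if hi, err = strconv.Atoi(bounds[1]); err != nil {
                return nil, fmt.Errorf("invalid range %q: %v", part, err)
            }
        }
        if lo < min || hi > max || lo > hi {
            return nil, fmt.Errorf("value %q out of range %d-%d", part, min, max)
        }
        for v := lo; v <= hi; v++ {
            vals = append(vals, v)
        }
    }
    return vals, nil
}

func main() {
    fmt.Println(parseField("0-5", 0, 6))    // weekday range
    fmt.Println(parseField("10,20", 0, 59)) // list of seconds
}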

Mail Notifications Inside Docker

Docker best practice is a single-process approach inside an image. I have no mail daemon running inside my Jobber image (blacklabelops/jobber), so Jobber's built-in mail notification does not work.

My use-case:

  1. A job fails.
  2. A notification is sent via the smtp.sendgrid.com server with a custom username and password.
  3. Advanced: a TLS-secured connection to smtp.sendgrid.com with server certificate verification.

Example Docker Environment Parameters:
-e "SMTP_USER_NAME=mypersonaluser"
-e "SMTP_USER_PASS=swordfish"
-e "SMTP_HOST=smtp.sendgrid.com"
-e "SMTP_PORT=587" \

I need either a command-line tool that does not need a local daemon and can be configured with the settings above, or direct Jobber support (e.g. a mail API and the appropriate configuration section).
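
A minimal sketch of what direct support could look like: sending the notification straight to an external relay with Go's net/smtp, configured from the environment variables above. This is an illustration under those assumptions, not an existing jobber feature, and the sender and recipient addresses are hypothetical:

package main

import (
    "fmt"
    "net/smtp"
    "os"
)

// notifyByMail sends a plain-text message through an external relay such as
// smtp.sendgrid.com, using SMTP AUTH.
func notifyByMail(to, subject, body string) error {
    host := os.Getenv("SMTP_HOST") // e.g. smtp.sendgrid.com
    port := os.Getenv("SMTP_PORT") // e.g. 587
    user := os.Getenv("SMTP_USER_NAME")
    pass := os.Getenv("SMTP_USER_PASS")
    from := "jobber@example.com" // hypothetical sender address

    msg := fmt.Sprintf("From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n%s\r\n",
        from, to, subject, body)
    // smtp.SendMail upgrades the connection via STARTTLS when the server
    // offers it, which covers the "secured TLS" case on port 587.
    auth := smtp.PlainAuth("", user, pass, host)
    return smtp.SendMail(host+":"+port, auth, from, []string{to}, []byte(msg))
}

func main() {
    if err := notifyByMail("ops@example.com", "Job failed", "See job output."); err != nil {
        fmt.Fprintln(os.Stderr, "mail error:", err)
    }
}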

Timezone for jobs

It would be useful if you could specify which timezone the 'time' field refers to.

So in the configuration you could add a field:

  • Timezone: US/Eastern

if you need to specify which timezone the job should run in; by default, if it is not defined, you could have it use the local time.
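
A minimal sketch of how such a field could be honored, assuming Go's time.LoadLocation and IANA zone names such as US/Eastern, with a fall-back to the machine's local zone when the field is absent:

package main

import (
    "fmt"
    "time"
)

// jobLocation resolves an optional per-job timezone field. An empty value
// keeps today's behaviour: schedule against the machine's local time.
func jobLocation(tz string) (*time.Location, error) {
    if tz == "" {
        return time.Local, nil
    }
    return time.LoadLocation(tz) // e.g. "US/Eastern"
}

func main() {
    loc, err := jobLocation("US/Eastern")
    if err != nil {
        fmt.Println("bad timezone:", err)
        return
    }
    // The scheduler would compare the job's time spec against the wall
    // clock in this location rather than in time.Local.
    fmt.Println("now in job's zone:", time.Now().In(loc))
}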

cmd/reload: errors are silently disposed of

I accidentally created the .jobber file as root in my user's home dir. Jobber did not load the file (which is okay) but silently ignored it (not good). Consider adding a log entry or printing something to stdout?
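
A minimal sketch of the requested behaviour; loadJobFile here is a simplified stand-in for jobber's real jobfile loader, the point being only that the error gets logged instead of silently dropped:

package main

import (
    "io/ioutil"
    "log"
    "os"
)

// Job is a minimal stand-in for a jobfile entry.
type Job struct{ Name string }

// loadJobFile is a hypothetical stand-in for jobber's real parser; here it
// only checks that the file is readable so the sketch is self-contained.
func loadJobFile(path string) ([]Job, error) {
    if _, err := ioutil.ReadFile(path); err != nil {
        return nil, err
    }
    return []Job{}, nil
}

// loadJobsForUser surfaces the load error instead of discarding it.
func loadJobsForUser(logger *log.Logger, username, path string) []Job {
    jobs, err := loadJobFile(path)
    if err != nil {
        logger.Printf("Failed to load jobs for %s (%s): %v", username, path, err)
        return nil
    }
    return jobs
}

func main() {
    logger := log.New(os.Stdout, "jobberd: ", log.LstdFlags)
    loadJobsForUser(logger, "someuser", "/home/someuser/.jobber")
}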

Thanks for this project! It just needs a REST API to make it perfect :]

jobberd exits with code 1

Hey. I've tried compiling jobber with go 1.4.3 and go 1.5 on both Debian (wheezy) and OS X (10.11).

In every case, both binaries compile without error and the jobber client binary seems to work (it can output help, etc.), but running jobberd always results in the process exiting with code 1 and no output.

Sorry I can't offer more help for debugging, but there is no error message.

Jobber Binaries Release

I am currently pursuing minimal Docker images. Could you release prebuilt binaries? That way I wouldn't have to install the toolchain needed to compile the sources.

I am using the amd64 platform.

Error parsing config file

I get the following error:

Failed to load jobs for yaml: line 3: did not find expected alphabetic or numeric character: jobber

with the following config file:


---
- name: TestEcho
  cmd: trigger
  time: * * * * * *
  onError: Stop
  notifyOnError: true
  notifyOnFailure: true

Full log:

Launching.
Loaded 0 new jobs for root.
Loaded 0 new jobs for bin.
Loaded 0 new jobs for daemon.
Loaded 0 new jobs for adm.
Loaded 0 new jobs for lp.
Loaded 0 new jobs for sync.
Loaded 0 new jobs for shutdown.
Loaded 0 new jobs for halt.
Loaded 0 new jobs for mail.
Loaded 0 new jobs for operator.
Loaded 0 new jobs for games.
Loaded 0 new jobs for ftp.
Loaded 0 new jobs for nobody.
Loaded 0 new jobs for systemd-bus-proxy.
Loaded 0 new jobs for systemd-network.
Loaded 0 new jobs for dbus.
Failed to load jobs for open /dev/null/.jobber: not a directory: tss
Failed to load jobs for yaml: line 3: did not find expected alphabetic or numeric character: jobber
Loaded 0 new jobs for jobber_client.
totalJobs: -2; len(m.jobs): 0
New context: 0.0
Main thread context: 0.0
New context: 0.0.0
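
The parse error comes from the unquoted time value: in YAML a scalar beginning with * introduces an alias, so time: * * * * * * is not a plain string and the document fails to load. Quoting the value (time: "* * * * * *") fixes it. A small demonstration, assuming gopkg.in/yaml.v2:

package main

import (
    "fmt"

    yaml "gopkg.in/yaml.v2"
)

type job struct {
    Name string `yaml:"name"`
    Cmd  string `yaml:"cmd"`
    Time string `yaml:"time"`
}

func main() {
    var jobs []job

    // Unquoted: "*" starts a YAML alias, so parsing fails with an error
    // much like the one in the log above.
    bad := "- name: TestEcho\n  cmd: trigger\n  time: * * * * * *\n"
    fmt.Println(yaml.Unmarshal([]byte(bad), &jobs))

    // Quoted: the time spec is an ordinary string and the file loads.
    good := "- name: TestEcho\n  cmd: trigger\n  time: \"* * * * * *\"\n"
    fmt.Println(yaml.Unmarshal([]byte(good), &jobs), jobs)
}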

Explanations of error handling are switched

I think the explanations of "job error" and "job failure" are switched.

When discussing jobs, by “job error” we mean the failure of a particular run of a job, whereas by “job failure” we mean a job that jobber has decided not to schedule anymore due to one or more recent job errors.

from how to use Jobber

Thanks

Attaching tty

Is it possible to attach a tty to jobs started with jobber?

We're using jobber to start containers (using conplicity) and this requires a tty.

Ubuntu 14.04 cloud image - sudo dpkg -i jobber_1.1-1_amd64.deb fails

nutanix@NTNX-AVM:~$ sudo dpkg -i jobber_1.1-1_amd64.deb
Selecting previously unselected package jobber.
(Reading database ... 64029 files and directories currently installed.)
Preparing to unpack jobber_1.1-1_amd64.deb ...
Unpacking jobber (1.1-1) ...
dpkg: dependency problems prevent configuration of jobber:
 jobber depends on init-system-helpers (>= 1.18~); however:
  Version of init-system-helpers on system is 1.14.

dpkg: error processing package jobber (--install):
 dependency problems - leaving unconfigured
Processing triggers for ureadahead (0.100.0-16) ...
Errors were encountered while processing:
 jobber

I used --force-all to install it. It seems to work, but is this the right approach?

Jobber Job Execution Does Not Work On Alpine

When I execute a job, Jobber can't run it because of an unknown option passed to the su command on Alpine.

It looks like Jobber builds an execution command that does not work with the su included in this distribution. Is there a workaround, or does it have to be adjusted?

Log:
bash-4.3# jobber list
NAME STATUS SEC/MIN/HR/MDAY/MTH/WDAY NEXT RUN TIME NOTIFY ON ERR NOTIFY ON FAIL ERR HANDLER
Hourly Good 0 20 */4 * * * Feb 17 16:20:00 2016 false false Continue
Daily Good 0 30 1 * * * Feb 18 01:30:00 2016 false false Continue
Weekly Good 0 40 2 * * 5 Feb 19 02:40:00 2016 false false Continue
Monthly Good 0 50 4 1 * * Mar 1 04:50:00 2016 false false Continue
bash-4.3# jobber test Hourly
Running job "Hourly" for user "root"...
Job "Hourly" failed.
New status: Good.

Stdout:

Stderr:
su: unrecognized option: login
BusyBox v1.24.1 (2015-12-16 08:00:02 GMT) multi-call binary.

Usage: su [OPTIONS] [-] [USER]

Run shell under USER (by default, root)

-,-l    Clear environment, run shell as login shell
-p,-m   Do not set new $HOME, $SHELL, $USER, $LOGNAME
-c CMD  Command to pass to 'sh -c'
-s SH   Shell to use instead of user's default

bash-4.3# su --help
BusyBox v1.24.1 (2015-12-16 08:00:02 GMT) multi-call binary.

Usage: su [OPTIONS] [-] [USER]

Run shell under USER (by default, root)

-,-l    Clear environment, run shell as login shell
-p,-m   Do not set new $HOME, $SHELL, $USER, $LOGNAME
-c CMD  Command to pass to 'sh -c'
-s SH   Shell to use instead of user's default
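
The underlying problem is that BusyBox su (used by Alpine) does not implement the --login long option, only the short forms listed above. A minimal sketch of one possible workaround, building the command with short options that both util-linux su and BusyBox su accept; this is not jobber's actual implementation:

package main

import (
    "fmt"
    "os/exec"
)

// runAsUser runs a job command as the given user using only short su
// options: -s picks the shell and -c passes the command string. Both
// util-linux su and BusyBox su understand these, unlike --login.
func runAsUser(username, jobCmd string) ([]byte, error) {
    cmd := exec.Command("su", "-s", "/bin/sh", "-c", jobCmd, username)
    return cmd.CombinedOutput()
}

func main() {
    out, err := runAsUser("root", "echo hello from a job")
    fmt.Printf("%s(err: %v)\n", out, err)
}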

Crashes due to memory exhaustion when there is plenty available

The OS: CentOS 7, 64-bit
Memory installed: 32 GB

The bug: Jobberd crashes occasionally on my servers. If I run "jobber status" I get:

Couldn't connect to daemon: dial unix /var/jobber_daemon.sock: connect: connection refused

After running "systemctl start jobber" (since it's shown as dead) I get this:
Starting jobber (via systemctl): Job for jobber.service failed because the control process exited with error code. See "systemctl status jobber.service" and "journalctl -xe" for details.

Then, running that command, I get this:

● jobber.service - LSB: jobber
   Loaded: loaded (/etc/rc.d/init.d/jobber)
   Active: failed (Result: exit-code) since Tue 2016-10-11 01:19:47 EDT; 10s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 12236 ExecStop=/etc/rc.d/init.d/jobber stop (code=exited, status=0/SUCCESS)
  Process: 12233 ExecStart=/etc/rc.d/init.d/jobber start (code=exited, status=1/FAILURE)

Oct 11 01:19:47 localhost.localdomain systemd[1]: Starting LSB: jobber...
Oct 11 01:19:47 localhost.localdomain jobber[12233]: Starting jobberd: /bin/bash: fork: Cannot allocate memory
Oct 11 01:19:47 localhost.localdomain jobber[12233]: [FAILED]
Oct 11 01:19:47 localhost.localdomain systemd[1]: jobber.service: control process exited, code=exited status=1
Oct 11 01:19:47 localhost.localdomain systemd[1]: Failed to start LSB: jobber.
Oct 11 01:19:47 localhost.localdomain systemd[1]: Unit jobber.service entered failed state.
Oct 11 01:19:47 localhost.localdomain systemd[1]: jobber.service failed.

There is no issue with memory, as running free -g returns:

          total        used        free      shared  buff/cache   available
Mem:         31           9          17           0           4          17
Swap:         7           0           7

After doing a stop, restart, stop, and then start, I get this:

● jobber.service - LSB: jobber
   Loaded: loaded (/etc/rc.d/init.d/jobber)
   Active: active (exited) since Tue 2016-10-11 01:21:14 EDT; 4s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 12236 ExecStop=/etc/rc.d/init.d/jobber stop (code=exited, status=0/SUCCESS)
  Process: 1850 ExecStart=/etc/rc.d/init.d/jobber start (code=exited, status=0/SUCCESS)

Oct 11 01:21:14 localhost.localdomain systemd[1]: Starting LSB: jobber...
Oct 11 01:21:14 localhost.localdomain systemd[1]: Started LSB: jobber.

Then running jobber status:

jobberd isn't running.

After doing some more stops and starts, I managed to get it to fail when running jobber:

runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7f877dbe55f7 m=0

goroutine 0 [idle]:

goroutine 1 [running, locked to thread]:
runtime.systemstack_switch()
        /usr/lib/golang/src/runtime/asm_amd64.s:216 fp=0xc82004fe90 sp=0xc82004fe88
runtime.newproc(0x0, 0x89d720)
        /usr/lib/golang/src/runtime/proc1.go:2213 +0x62 fp=0xc82004fed8 sp=0xc82004fe90
runtime.init.4()
        /usr/lib/golang/src/runtime/proc.go:141 +0x2b fp=0xc82004fef0 sp=0xc82004fed8
runtime.init()
        /usr/lib/golang/src/runtime/zversion.go:9 +0x378 fp=0xc82004ff50 sp=0xc82004fef0
runtime.main()
        /usr/lib/golang/src/runtime/proc.go:63 +0x103 fp=0xc82004ffa0 sp=0xc82004ff50
runtime.goexit()
        /usr/lib/golang/src/runtime/asm_amd64.s:1696 +0x1 fp=0xc82004ffa8 sp=0xc82004ffa0

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
        /usr/lib/golang/src/runtime/asm_amd64.s:1696 +0x1

Reproducing: I cannot reliably reproduce the problem. It is deployed on 10 servers, and they crash at random.

Expected behavior: Not crashing

Extra info: I have a single jobber task running every 20 minutes (0 */20 * * * *) which runs many commands. The output of those commands would not eat up much memory, maybe 2 MB at most. The commands are mostly network related, plus two PHP scripts.
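
For what it's worth, pthread_create failing with "Resource temporarily unavailable" usually points at the per-user process/thread limit (ulimit -u, RLIMIT_NPROC) or a cgroup task limit rather than at RAM, which would be consistent with free showing plenty of memory available. A small diagnostic sketch, assuming Linux and the golang.org/x/sys/unix package:

package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

// main prints the per-user process limit that pthread_create runs into when
// it returns EAGAIN; compare it with the number of processes/threads the
// jobber user already has running.
func main() {
    var lim unix.Rlimit
    if err := unix.Getrlimit(unix.RLIMIT_NPROC, &lim); err != nil {
        fmt.Println("getrlimit:", err)
        return
    }
    fmt.Printf("RLIMIT_NPROC: soft=%d hard=%d\n", lim.Cur, lim.Max)
}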

Not able to start on RedHat 7

Hi,

I have two instances on AWS: one is RH6 and the other is RH7. Jobber works perfectly on RH6 but fails on RH7. Could anyone please take a look? Thanks!

Error Output:
$ systemctl status jobber.service -l
● jobber.service - LSB: jobber
Loaded: loaded (/etc/rc.d/init.d/jobber)
Active: failed (Result: exit-code) since Wed 2016-08-03 01:42:55 EDT; 24s ago
Docs: man:systemd-sysv-generator(8)

Aug 03 01:42:55 systemd[1]: Starting LSB: jobber...
Aug 03 01:42:55 systemd[1]: jobber.service: control process exited, code=exited status=203
Aug 03 01:42:55 systemd[1]: Failed to start LSB: jobber.
Aug 03 01:42:55 systemd[1]: Unit jobber.service entered failed state.
Aug 03 01:42:55 systemd[1]: jobber.service failed.

FreeBSD error

carlos@src:~ % jobber test "Clear logs"
Running job "Clear logs" for user "carlos"...
Job "Clear logs" failed.
New status: Good.

Stdout:

Stderr:
su: unknown login: /bin/sh


carlos@src:~ % cat .jobber

- name: Clear
  cmd: |
    cd /usr/game/
  time: "0 0 */3 * * *"
  onError: Stop
  notifyOnError: true
  notifyOnFailure: true

Run jobber side by side with existing cron daemon

From what I can tell, this can run equally well regardless of whether you have an existing cron daemon installed and running, and the two won't conflict with each other.

Is this true? If so, I feel like the documentation should be updated to reflect this. I think it would make it easier for people to try it out before committing to making the switch.

Environment Variables for Jobs

How do I set environment variables for the Jobber daemon? Do I put them inside .bashrc, or where do I have to define them globally so that all jobs get access?

I am currently preparing cron tasks for AWS. Example CLI command: aws s3 ls

The credentials are supplied through environment variables, e.g. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Now I need a good mechanism to pass all Docker environment variables to the daemon, or in the worst case only the required ones.

What I don't want: I do not want to specify the required environment variables inside the Jobber config or inside scripts.

My current workaround: calling this inside the Docker entrypoint works:
export >> /etc/profile.d/jobber.sh
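
A minimal sketch of what daemon-side support might look like, assuming the daemon starts job commands through os/exec and simply forwards selected variables from its own environment (the ones Docker injected with -e); this is an illustration, not an existing jobber option:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

// runJob starts a job command with the daemon's own PATH plus every variable
// whose name begins with one of the given prefixes (e.g. "AWS_"), so
// credentials passed to the container via `docker run -e ...` reach the job
// without being written into the jobfile or a script.
func runJob(jobCmd string, prefixes ...string) ([]byte, error) {
    env := []string{"PATH=" + os.Getenv("PATH")}
    for _, kv := range os.Environ() {
        for _, p := range prefixes {
            if strings.HasPrefix(kv, p) {
                env = append(env, kv)
                break
            }
        }
    }
    cmd := exec.Command("/bin/sh", "-c", jobCmd)
    cmd.Env = env
    return cmd.CombinedOutput()
}

func main() {
    out, err := runJob("aws s3 ls", "AWS_")
    fmt.Printf("%s(err: %v)\n", out, err)
}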

Unable to install

When installing on both OS X and Alpine, I get the following error when running go get github.com/dshearer/jobber:

Error:
can't load package: package github.com/dshearer/jobber: no buildable Go source files in {GOPATH}/src/github.com/dshearer/jobber

I'm able to install other packages using go get.

The contents of {GOPATH}/src/github.com/dshearer/jobber:
-rw-r--r-- 1 sebastiaan staff 3732 Nov 11 13:20 DEBIAN_UBUNTU.md
drwxr-xr-x 5 sebastiaan staff 170 Nov 11 13:20 Godeps
-rw-r--r-- 1 sebastiaan staff 1088 Nov 11 13:20 LICENSE
-rw-r--r-- 1 sebastiaan staff 4725 Nov 11 13:20 Makefile
-rw-r--r-- 1 sebastiaan staff 1476 Nov 11 13:20 README.md
drwxr-xr-x 4 sebastiaan staff 136 Nov 11 13:20 buildtools
drwxr-xr-x 12 sebastiaan staff 408 Nov 11 13:20 common
drwxr-xr-x 4 sebastiaan staff 136 Nov 11 13:20 jobber
drwxr-xr-x 11 sebastiaan staff 374 Nov 11 13:20 jobberd
drwxr-xr-x 8 sebastiaan staff 272 Nov 11 13:20 jobfile
drwxr-xr-x 8 sebastiaan staff 272 Nov 11 13:20 packaging
drwxr-xr-x 9 sebastiaan staff 306 Nov 11 13:20 platform_tests
-rw-r--r-- 1 sebastiaan staff 4 Nov 11 13:20 version

Configuration loader only loads configuration for users with home directories in /home

Currently, the code that is responsible for loading the job configuration will only load configuration for users with home directories in /home. This would prevent me from using this to run cron jobs as (for example) the webserver user, whose home directory is usually set to something like /srv/http or /var/www.

Instead of simply opening up every directory in /home (some of which may not even be home directories), wouldn't it be better to just parse /etc/passwd? This would also remove the need to handle the root user specially (the code for which is also somewhat broken, as assuming /root is the root user's home directory is not always correct).
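
A minimal sketch of enumerating users and their real home directories from /etc/passwd (field layout name:passwd:uid:gid:gecos:home:shell) instead of walking /home; note that it would still miss accounts coming only from NSS sources such as LDAP:

package main

import (
    "bufio"
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// homeDirs returns a map of username -> home directory taken from
// /etc/passwd, so users like a webserver account with a home under
// /srv/http or /var/www are found, and root needs no special case.
func homeDirs() (map[string]string, error) {
    f, err := os.Open("/etc/passwd")
    if err != nil {
        return nil, err
    }
    defer f.Close()

    homes := make(map[string]string)
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        fields := strings.Split(scanner.Text(), ":")
        if len(fields) < 7 {
            continue // malformed line
        }
        homes[fields[0]] = fields[5]
    }
    return homes, scanner.Err()
}

func main() {
    homes, err := homeDirs()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    for user, home := range homes {
        fmt.Println(user, filepath.Join(home, ".jobber"))
    }
}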

Job always failing but runs when I do a test

I have a job set to run every 10 minutes, but it always fails. If I run jobber test Filebot, the job runs. I don't see any error in the log.

NAME     STATUS  SEC/MIN/HR/MDAY/MTH/WDAY  NEXT RUN TIME         NOTIFY ON ERR  NOTIFY ON FAIL  ERR HANDLER
Filebot  Failed  0 */10 * * * *            Oct  2 22:40:00 2016  false          false           Stop

---
- name: Filebot
  cmd: /mnt/sde1/Scripts/Filebot/MediaMgmt.sh
  time: "0 */10 * * * *"
  onError: Stop
  notifyOnError: false
  notifyOnFailure: false

Run with:

sudo -H -u svc-saitoh bash -c 'jobber test Filebot'

Jobber Installer Not Working On Alpine

The installer is not working on Alpine because the commands to add and remove users are different there:

Log:
make: Entering directory '/opt/jobber/lib/src/github.com/dshearer/jobber'
bash update-version.sh
go install github.com/dshearer/jobber
go install github.com/dshearer/jobber/jobber
go install github.com/dshearer/jobber/jobberd
TMPDIR="/opt/jobber/lib" go test github.com/dshearer/jobber/jobberd
ok github.com/dshearer/jobber/jobberd 2.678s
userdel jobber_client
Makefile:50: recipe for target '/opt/jobber/bin/jobber' failed
useradd --home / -M --system --shell /sbin/nologin "jobber_client"
make: userdel: Command not found
make: [/opt/jobber/bin/jobber] Error 127 (ignored)
/bin/sh: useradd: not found
make: *** [/opt/jobber/bin/jobber] Error 127
Makefile:50: recipe for target '/opt/jobber/bin/jobber' failed
make: Leaving directory '/opt/jobber/lib/src/github.com/dshearer/jobber'

Working command example for Alpine:
adduser -u 1000 -G jenkins -h /usr/bin/logrotate.d -s /bin/bash -S jenkins

Fires off job multiple times until crash

Hi, here is my jobber script:


---
- name: PlexEmail
  cmd: /usr/bin/python /opt/plexEmail/scripts/plexEmail.py -t
  time: "* 51 13 * * *"
  onError: Stop
  notifyOnError: false
  notifyOnFailure: false

When I use the test command, it does the job, stops right after, and I receive my email. But when the task is triggered from the schedule, it is as if jobber launches the task 30+ times, which results in me getting 30+ emails of the same thing until the task crashes because it runs out of memory. I don't understand why it doesn't behave like it does when I run the test.
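
A likely explanation: in the six-field SEC/MIN/HR/MDAY/MTH/WDAY format, the spec * 51 13 * * * has * in the seconds field, so it matches every second of 13:51 and the job is launched up to 60 times; a fixed value in the seconds field, e.g. "0 51 13 * * *", runs it once per day. A small sketch counting the matches under that reading:

package main

import "fmt"

// matchesPerDay counts how many seconds of a day satisfy a sec/min/hour spec
// where each field is either "*" or a single fixed value; it mirrors the
// difference between "* 51 13 * * *" and "0 51 13 * * *".
func matchesPerDay(sec, min, hour string) int {
    count := 0
    for h := 0; h < 24; h++ {
        for m := 0; m < 60; m++ {
            for s := 0; s < 60; s++ {
                if match(sec, s) && match(min, m) && match(hour, h) {
                    count++
                }
            }
        }
    }
    return count
}

func match(field string, v int) bool {
    return field == "*" || field == fmt.Sprint(v)
}

func main() {
    fmt.Println(matchesPerDay("*", "51", "13")) // 60 launches around 13:51
    fmt.Println(matchesPerDay("0", "51", "13")) // 1 launch per day
}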
