rfk / django-supervisor
easy integration between django and supervisord
License: MIT License
When deploying for the first time, I can simply do
python manage.py supervisor -d
If I deploy a second time, I get an
Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.
So I have to do
python manage.py supervisor shutdown
python manage.py supervisor -d
But then if this is deployed to a new instance then I get
ERROR: http://127.0.0.1:9544 refused connection (already shut down?)
Is there a command that starts the processes only when they are not already running, without returning an error message?
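Not built in, as far as I know, but the guard is easy to script. A minimal shell sketch, assuming (unverified) that manage.py supervisor forwards supervisorctl-style commands such as status and that status exits non-zero when the daemon is unreachable:

```shell
# Hypothetical guard: run the start command only when the status probe fails.
# Both commands are parameters, so nothing django-specific is assumed here.
start_if_down() {
    if $1 >/dev/null 2>&1; then
        echo "already running"
    else
        $2
    fi
}

# Assumed usage (unverified):
# start_if_down "python manage.py supervisor status" "python manage.py supervisor -d"
```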
django-supervisor 0.4.0 and supervisor 3.3.1 raise KeyError: 'ctl-command'
when trying to launch / daemonize supervisor via ./manage.py supervisor --daemonize
Currently running django version 1.5
Please let me know what info you require to debug.
Subj. Would it be helpful?
If I use py2app to package a Django application with django-supervisor, it will not run, as it looks for version.txt within the site-packages.zip directory. I can work around this by hard-coding the version number in the lines mentioned below, but I don't believe this is a great solution. I will work on figuring out a patch.
Traceback (most recent call last):
File "/Users/kevinlondon/dropbox/programming/python/django/sample_app/dist/sample_app.app/Contents/Resources/lib/python2.7/manage.py", line 14, in <module>
execute_from_command_line(sys.argv)
File "/Users/kevinlondon/dropbox/programming/python/django/sample_app/dist/sample_app.app/Contents/Resources/lib/python2.7/django/core/management/__init__.py", line 453, in execute_from_command_line
utility.execute()
File "/Users/kevinlondon/dropbox/programming/python/django/sample_app/dist/sample_app.app/Contents/Resources/lib/python2.7/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/kevinlondon/dropbox/programming/python/django/sample_app/dist/sample_app.app/Contents/Resources/lib/python2.7/django/core/management/__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/Users/kevinlondon/dropbox/programming/python/django/sample_app/dist/sample_app.app/Contents/Resources/lib/python2.7/django/core/management/__init__.py", line 77, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/Users/kevinlondon/dropbox/programming/python/django/sample_app/dist/sample_app.app/Contents/Resources/lib/python2.7/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/kevinlondon/dropbox/programming/python/django/sample_app/dist/sample_app.app/Contents/Resources/lib/python2.7/djsupervisor/management/commands/supervisor.py", line 39, in <module>
from supervisor import supervisord, supervisorctl
File "supervisor/supervisord.pyc", line 41, in <module>
File "supervisor/options.pyc", line 56, in <module>
IOError: [Errno 20] Not a directory: '/Users/kevinlondon/dropbox/programming/python/django/sample_app/dist/sample_app.app/Contents/Resources/lib/python2.7/site-packages.zip/supervisor/version.txt'
Django supports Python 2 and 3 since version 1.5. Supervisor does not yet have a release version that supports Python 3, but there is a branch that supports Python 2 and 3 that is planned to be the next major version. It is not complete yet but the tests pass and it works for the most part.
According to the classifiers in setup.py, django-supervisor only supports Python 2. Now is a great time to think about a version of django-supervisor that supports Python 2 and 3.
When running ./manage.py supervisor with an empty supervisord.conf, only the autoreload program is running. When I save a file and the reloader process reloads, the OS X crash reporter pops up saying 'Python quit unexpectedly'. The stack trace is not super helpful. In the short term, I can work around this by disabling the OS X problem reporter.
Thread 0 Crashed:
0 com.apple.CoreFoundation 0x00007fff845878e6 CFRunLoopWakeUp + 150
1 _watchdog_fsevents.so 0x00000001013ea915 watchdog_stop + 37
2 org.python.python 0x00000001000a3ed1 PyEval_EvalFrameEx + 6657
3 org.python.python 0x00000001000a2479 PyEval_EvalCodeEx + 2201
4 org.python.python 0x00000001000aa2b9 fast_function + 313
5 org.python.python 0x00000001000a417e PyEval_EvalFrameEx + 7342
6 org.python.python 0x00000001000aa243 fast_function + 195
System setup:
OS X 10.7
Python 2.7.3 (installed with Homebrew as a framework: brew install python --framework)
Django==1.4
django-supervisor==0.3.0
watchdog==0.6.0
Mainly I'd be interested if anybody else can reproduce this bug, and it might actually be a watchdog bug.
Something in django-supervisor's configuration is preventing the tail -f command from working in a supervisorctl shell:
$ ./manage.py supervisor shell
autoreload RUNNING pid 30869, uptime 0:00:03
runserver RUNNING pid 30870, uptime 0:00:03
supervisor> tail runserver
supervisor> tail -f runserver
==> Press Ctrl-C to exit <==
http://127.0.0.1:9110/logtail/runserver/stdout Cannot read, status code 401
It really is the configuration itself doing it:
$ ./manage.py supervisor getconfig > config.conf
$ supervisord -c config.conf
$ supervisorctl -c config.conf tail -f runserver
==> Press Ctrl-C to exit <==
http://127.0.0.1:9110/logtail/runserver/stdout Cannot read, status code 401
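A 401 on /logtail usually means the HTTP request did not carry the credentials that the generated inet_http_server section requires. A hedged sketch of the kind of stanzas to compare (port and credentials below are placeholders, not django-supervisor's actual generated values):

```ini
; Placeholder values -- the point is that [supervisorctl] must present the
; same username/password that [inet_http_server] demands, or logtail gets 401.
[inet_http_server]
port = 127.0.0.1:9110
username = someuser
password = somepass

[supervisorctl]
serverurl = http://127.0.0.1:9110
username = someuser
password = somepass
```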
Hello
Are there any plans to support Python 3?
Thank you
django.utils.importlib has been deprecated and removed in Django 1.9, since Python 2.6 compatibility is no longer supported.
Full error:
InvalidTemplateLibrary: Invalid template library specified. ImportError raised when trying to load 'djsupervisor.templatetags.djsupervisor_tags': No module named importlib
Should djsupervisor follow the support schedule of django and remove these incompatibilities?
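A sketch of the usual compatibility shim, assuming the failing import is django.utils.importlib (the stdlib module of the same name provides the same import_module entry point):

```python
# Compatibility shim: Django 1.9 removed django.utils.importlib, but the
# stdlib importlib module exposes the import_module() used here.
try:
    from django.utils import importlib  # works on Django < 1.9
except ImportError:
    import importlib                    # stdlib fallback on Django >= 1.9

# import_module behaves the same either way:
json_mod = importlib.import_module("json")
```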
When I run it with a simple config file
[program:webserver]
command=./manage.py runserver 8000
it repeatedly starts and exits the webserver. When I try to have it run the webserver specifically, I get this error message
$ python manage.py supervisor start webserver
http://127.0.0.1:9668 refused connection
Also, when I edit the config file it doesn't seem to be picking up the changes. I previously had celery, rabbitmq and other processes in the config file, and even after taking them out, it still tries to run them. Is it outputting the results of the template somewhere? I get this output
$ python manage.py supervisor
2011-06-11 09:30:51,583 INFO Increased RLIMIT_NOFILE limit to 1024
2011-06-11 09:30:51,630 INFO RPC interface 'supervisor' initialized
2011-06-11 09:30:51,633 INFO supervisord started with pid 67114
2011-06-11 09:30:52,636 INFO spawned: 'celeryd' with pid 67117
2011-06-11 09:30:52,638 INFO spawned: 'autoreload' with pid 67118
2011-06-11 09:30:52,641 INFO spawned: 'webserver' with pid 67119
2011-06-11 09:30:52,643 INFO spawned: 'celerybeat' with pid 67120
2011-06-11 09:30:52,646 INFO spawned: 'runserver' with pid 67121
2011-06-11 09:30:53,696 INFO success: autoreload entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2011-06-11 09:30:53,696 INFO success: webserver entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2011-06-11 09:30:53,696 INFO success: runserver entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2011-06-11 09:30:53,842 INFO exited: webserver (exit status 1; not expected)
2011-06-11 09:30:54,846 INFO spawned: 'webserver' with pid 67140
2011-06-11 09:30:55,806 INFO exited: webserver (exit status 1; not expected)
2011-06-11 09:30:56,809 INFO spawned: 'webserver' with pid 67144
2011-06-11 09:30:57,787 INFO success: celeryd entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2011-06-11 09:30:57,787 INFO success: celerybeat entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2011-06-11 09:30:57,816 INFO success: webserver entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2011-06-11 09:30:57,963 INFO exited: webserver (exit status 1; not expected)
^C2011-06-11 09:30:58,249 INFO spawned: 'webserver' with pid 67151
2011-06-11 09:30:58,250 WARN received SIGINT indicating exit request
2011-06-11 09:30:58,250 INFO waiting for autoreload, celeryd, webserver, celerybeat, runserver to die
2011-06-11 09:30:58,253 INFO stopped: runserver (terminated by SIGTERM)
2011-06-11 09:30:58,278 INFO stopped: celerybeat (exit status 0)
2011-06-11 09:30:58,280 INFO stopped: webserver (terminated by SIGTERM)
2011-06-11 09:30:58,283 INFO stopped: autoreload (terminated by SIGTERM)
.....
When I use sudo python manage.py supervisor, the log tells me that webserver exited.
2012-03-08 12:06:34,223 CRIT Set uid to user 0
2012-03-08 12:06:34,265 INFO RPC interface 'supervisor' initialized
2012-03-08 12:06:34,266 INFO supervisord started with pid 19530
2012-03-08 12:06:35,270 INFO spawned: 'webserver' with pid 19539
2012-03-08 12:06:36,177 DEBG fd 7 closed, stopped monitoring <POutputDispatcher at 39415392 for <Subprocess at 38594752 with name webserver in state STARTING> (stdout)>
2012-03-08 12:06:36,177 INFO exited: webserver (exit status 0; not expected)
2012-03-08 12:06:36,177 DEBG received SIGCLD indicating a child quit
2012-03-08 12:06:37,182 INFO spawned: 'webserver' with pid 19553
2012-03-08 12:06:38,074 DEBG fd 7 closed, stopped monitoring <POutputDispatcher at 39415392 for <Subprocess at 38594752 with name webserver in state STARTING> (stdout)>
2012-03-08 12:06:38,074 INFO exited: webserver (exit status 0; not expected)
2012-03-08 12:06:38,074 DEBG received SIGCLD indicating a child quit
2012-03-08 12:06:40,080 INFO spawned: 'webserver' with pid 19563
2012-03-08 12:06:40,987 DEBG fd 7 closed, stopped monitoring <POutputDispatcher at 39415392 for <Subprocess at 38594752 with name webserver in state STARTING> (stdout)>
2012-03-08 12:06:40,987 INFO exited: webserver (exit status 0; not expected)
2012-03-08 12:06:40,987 DEBG received SIGCLD indicating a child quit
2012-03-08 12:06:43,994 INFO spawned: 'webserver' with pid 19572
2012-03-08 12:06:44,900 DEBG fd 7 closed, stopped monitoring <POutputDispatcher at 39415392 for <Subprocess at 38594752 with name webserver in state STARTING> (stdout)>
2012-03-08 12:06:44,901 INFO exited: webserver (exit status 0; not expected)
2012-03-08 12:06:44,901 DEBG received SIGCLD indicating a child quit
2012-03-08 12:06:45,902 INFO gave up: webserver entered FATAL state, too many start retries too quickly
In fact, the webserver process starts successfully and ps -ef | grep fcgi can find it. However, running sudo python manage.py supervisor shell displays:
webserver FATAL Exited too quickly (process log may have details)
So, can anybody help me? Here is my supervisord.conf:
{% if not settings.DBUG %}
[program:runserver]
exclude=true

[program:webserver]
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py runfcgi host=127.0.0.1 port=8000
{% endif %}

[program:celerybeat]
autoreload = true
numprocs = 1
redirect_stderr = true
startsecs = 5
priority = 999
command = {{ PYTHON }} {{ PROJECT_DIR }}/manage.py celerybeat --pidfile=/var/run/celerybeat.pid --logfile=/var/log/celerybeat.log
user = root
directory = {{ PROJECT_DIR }}
exclude=true

[program:celeryd]
autoreload = true
numprocs = 1
redirect_stderr = true
stopwaitsecs = 600
startsecs = 5
priority = 998
command = {{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd --pidfile=/var/run/celeryd.pid --logfile=/var/log/celeryd.log
user = root
directory = {{ PROJECT_DIR }}
exclude=true

[supervisord]
logfile=/var/log/supervisor/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=debug ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid
nodaemon=true
minfds=1024
minprocs=200
user=root

[program:overrides]
user=root
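One common cause of the "exited (exit status 0; not expected)" pattern, where the process is nonetheless visible in ps, is the child daemonizing itself so that supervisord loses track of it; runfcgi backgrounds itself unless told otherwise. A hedged sketch of the usual fix (untested against this particular setup):

```ini
[program:webserver]
; Keep runfcgi in the foreground so supervisord can track the process.
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py runfcgi daemonize=false host=127.0.0.1 port=8000
```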
Supervisor provides a supervisorctl tail -f command which is handy -- and which djsupervisor rejects from the command line, since -f is an unrecognized option:
$ ./manage.py supervisor tail -f runserver
Usage: ./manage.py supervisor [options] [<command> [<process>, ...]]
Manage processes with supervisord.
With no arguments, this spawns the configured background processes.
With a command argument it lets you control the running processes.
Available commands include:
supervisor shell
supervisor start <progname>
supervisor stop <progname>
supervisor restart <progname>
./manage.py: error: no such option: -f
Splitting apart the supervisord / supervisorctl / djsupervisor commands (as discussed in #3) might make this an easier fix: any manage.py supervisorctl command could just pass all its arguments directly to supervisorctl and be done with it.
Hi,
When I run django-supervisor with the default settings through the management command, it auto-spawns runserver and autoreload as well as the processes I've defined in supervisord.conf. This is OK, but I noticed that autoreload takes up around 50-70% of my CPU when enabled. After running './manage.py supervisor shell' and then 'stop autoreload', my CPU dropped back to the level it was at before I started supervisor (around 6% for my idle desktop applications). I'm wondering whether other people have this issue, whether it's specific to my environment, or whether it's a known limitation and this overhead is expected in the development environment. Thanks for your consideration.
Here is my config:
vim $MY_DJANGO_PROJECT_DIR/supervisord.conf
[program:celeryd]
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -E -B
[program:celerycam]
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celerycam
[program:runserver]
exclude=false
[program:autoreload]
exclude=true
PS: Django-supervisor is a pretty awesome wrapper to an already awesome project. Thank you for creating it!
I feel a bit silly asking, but how do people typically run python manage.py supervisor in production environments? Naturally, you'd want a solution that starts django-supervisor after boot. Is running django-supervisor with supervisor the common pattern? Or is it more common to use an upstart script? I do like that I can have all of my configuration cleanly managed through my Django project, but something about running django-supervisor from supervisor sounds a bit odd (although I can't really think of any reason why I shouldn't).
It's not a supervisor config file, it's a supervisor config file template.
When you run supervisorctl, it will use the supervisor.conf file from the current working directory if there is one. We use a system-level supervisor to start our apps, which use django-supervisor.
This means that if we try to control the system-level supervisor while in the app directory, it fails with errors about our bogus configuration, because it's trying to read the Django template.
This should at least be configurable, so we can name the file something like supervisor.conf.tmpl rather than supervisor.conf, so we won't be forever typing cd .. before sudo supervisorctl ... :)
Ideally though, it should probably just try to read supervisor.conf.tmpl first, and fall back to supervisor.conf (with a deprecation warning)?
Hi,
I have a suggestion for a nice feature that would make using django-supervisor in multiple environments much easier.
Currently, the main config file is hardcoded to be supervisord.conf. The templated config approach makes it really easy to add things like {% if settings.DEBUG %} to change the behaviour based on the environment. However, there's only so far you can take this. For example, I might want to specify different logging directories on staging and production servers, and I'm finding myself writing things like {% if "production" in settings.SETTINGS_MODULE %}, which feels very ugly.
What do you think about the idea of adding a -c/--config flag to manage.py supervisor to allow me to maintain a completely different config file for each environment? {{ PROJECT_DIR }}/supervisord.conf would still be the default, and app-specific configs would still live in <app directory>/management/supervisord.conf.
It might even be possible to use template inheritance on your configs.. but maybe that's taking things a bit far ;)
What do you think? I'm happy to have a go at implementing it, if you think it's a good idea.
Cheers
When I start supervisor on my project, and it executes Django's webserver, there's no way to see the output in the terminal.
Is there a way to have it intercept the output of that process (or any given process in the supervisor config file) and echo it?
This would be very helpful during development.
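One workaround, using plain supervisor options rather than any django-supervisor feature (a sketch that assumes supervisord is running in the foreground on a Unix-like system):

```ini
[program:webserver]
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py runserver 8000
; /dev/fd/1 is supervisord's own stdout; maxbytes=0 disables log rotation,
; which would otherwise fail on a non-seekable file.
redirect_stderr=true
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
```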
It would be nice if there was a way to know when a process dies via say a Django signal or something.
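There is no built-in Django signal for this, but plain supervisor offers event listeners that are notified of state changes such as PROCESS_STATE_EXITED. A minimal sketch of the listener protocol (the section name and script path below are made up for illustration):

```python
# Minimal supervisor event-listener sketch. Hook it up with something like:
#   [eventlistener:deathwatch]          ; hypothetical section
#   command=python deathwatch.py
#   events=PROCESS_STATE_EXITED
import sys

def parse_kv(text):
    # supervisor sends space-separated "key:value" pairs
    return dict(item.split(":", 1) for item in text.split())

def serve_forever(stdin=sys.stdin, stdout=sys.stdout):
    while True:
        stdout.write("READY\n")          # tell supervisord we want an event
        stdout.flush()
        headers = parse_kv(stdin.readline())
        payload = parse_kv(stdin.read(int(headers["len"])))
        # payload["processname"] is the process that changed state; from here
        # you could fire a Django signal, send mail, etc.
        stdout.write("RESULT 2\nOK")     # acknowledge the event
        stdout.flush()

# Call serve_forever() when running under supervisord; left out here so the
# sketch can be imported without blocking.
```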
When I tried to install django-supervisor with pip, it failed with the following error:
>pip install django-supervisor
Downloading/unpacking django-supervisor
Downloading django-supervisor-0.3.0.tar.gz
Running setup.py egg_info for package django-supervisor
[...]
reading manifest file 'pip-egg-info\django_supervisor.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
Traceback (most recent call last):
File "<string>", line 14, in <module>
[...]
File "C:\Python27\Lib\distutils\util.py", line 203, in convert_path
raise ValueError, "path '%s' cannot end with '/'" % pathname
ValueError: path 'djsupervisor/' cannot end with '/'
I'm using Python 2.7.1 on Windows 7.
My traceback was similar to the one in this distutils bug report. Apparently distutils changed this behavior to raise a warning instead of an error in September 2011, but that means only Python 2.7.3 and later versions of Python 3 have this distutils fix. (For reference: 2.7.3 was released in March 2012; 2.7.2 in July 2011.)
I managed to install django-supervisor the following way: I downloaded the django-supervisor package from GitHub, unzipped it, modified MANIFEST.in to not have a trailing slash after djsupervisor on the recursive-include line, zipped it back up, and then installed it by specifying the filepath of the zip. So I've confirmed that removing the trailing slash makes it work for me.
I'm not sure if removing the trailing slash would break installation on other OSes/Pythons, but from looking at various other Django apps, it seems a lot of them don't have trailing slashes in their MANIFEST.in recursive-include lines.
Might I suggest removing the trailing slash in MANIFEST.in?
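For reference, a sketch of what the fixed line might look like (the actual pattern in the project's MANIFEST.in may differ):

```
# MANIFEST.in -- no trailing slash after the directory name
recursive-include djsupervisor *
```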
It seems like you removed support for per-app config files recently. Unfortunately this quite broke things for us. If you don't want this to be default behavior, it would be nice if there were at least a config setting we could put in supervisord.conf or on the command line to turn it back on. We had no problem at all with this setup, and were quite surprised when it vanished without warning.
Alternatively, at least bump the version number when this happens, so our requirements.txt files can be set to require the older version (unfortunately there are now two versions of 0.3.0 in the wild, one with support for per-app configs, one without), and possibly loop through all the apps to see if any have config files in them and print a warning that this is no longer supported. (That would have saved us half a day of troubleshooting to figure out why two machines with the exact same version (0.3.0) had wildly different behaviors.)
Whenever I try to shut down supervisor, it endlessly prints a message like this:
'2011-06-24 19:10:04,916 INFO waiting for celeryd to die'
Any ideas what could be causing celeryd to fail to exit or how to diagnose the problem? I have to kill the processes manually in order for supervisor to quit.
Lee
Hi,
I use django-supervisor 0.3.3 and have this issue while trying to upgrade from django 1.8 to django 1.9:
InvalidTemplateLibrary: Invalid template library specified. ImportError raised when trying to load 'djsupervisor.templatetags.djsupervisor_tags': cannot import name djsupervisor_tags
After investigating a bit, it's seem that's because of a cyclic import:
>>> import djsupervisor.templatetags.djsupervisor_tags
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/phil/venvs/mca-5c2e50c/local/lib/python2.7/site-packages/djsupervisor/templatetags/djsupervisor_tags.py", line 17, in <module>
import djsupervisor.config
File "/home/phil/venvs/mca-5c2e50c/local/lib/python2.7/site-packages/djsupervisor/config.py", line 28, in <module>
from djsupervisor.templatetags import djsupervisor_tags
ImportError: cannot import name djsupervisor_tags
So djsupervisor.templatetags.djsupervisor_tags imports djsupervisor.config, which in turn imports djsupervisor.templatetags.djsupervisor_tags.
I can't explain why this bug doesn't occur with django 1.8...
Thanks.
With no autoreload programs, the autoreload process results in:
2014-03-24 03:39:04,887 DEBG 'autoreload' stdout output:
Error: restart requires a process name
restart <name> Restart a process
restart <gname>:* Restart all processes in a group
restart <name> <name> Restart multiple processes or groups
restart all Restart all processes
Note: restart does not reread config files. For that, see reread and update.
If _get_autoreload_programs returns an empty list, a warning/info message should get logged (maybe only once) and no restart should be initiated.
I was not sure how to implement this, since django-supervisor does not use any logging currently. Should this be done via print(), or should a logger = logging.getLogger(__name__)-style logger be used?
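A sketch of the proposed guard (function names below are illustrative, not djsupervisor's actual internals, apart from the _get_autoreload_programs name quoted above):

```python
# Guard sketch: log once and skip the restart when nothing is autoreloadable.
import logging

logger = logging.getLogger("djsupervisor.autoreload")

def restart_autoreload_programs(programs, restart):
    """Call restart(name) for each program; warn instead when the list is empty."""
    if not programs:
        logger.warning("autoreload: no programs configured for restart")
        return []
    return [restart(name) for name in programs]
```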
For example, the celery multi command is:
celery multi start 4 -c 10 -A digger -Q analyse_report_queue -P eventlet -l info --pidfile="/tmp/celery/analyse_report_%n.pid" --logfile="var/log/celery/analyse_report_%n.log"
How can I express the above celery multi command in the django-supervisor supervisord.conf?
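celery multi daemonizes its workers, which supervisord cannot track; the usual translation is to let supervisord spawn the workers itself via numprocs. A hedged, untested sketch (the %(process_num)d expansion is plain supervisor syntax):

```ini
[program:analyse_report]
command=celery worker -c 10 -A digger -Q analyse_report_queue -P eventlet -l info --logfile=/var/log/celery/analyse_report_%(process_num)d.log
process_name=%(program_name)s_%(process_num)d
numprocs=4
; no --pidfile needed: supervisord tracks the foreground processes itself
```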
Hello, thanks for the great module! The templating is awesome in supervisord.conf, and I've got other files (nginx.conf and such) used by processes managed by supervisor, that need to know {{ PROJECT_DIR }}.
It would be cool if I could create an nginx.conf.template, and tell django-supervisor somehow that it should template that file and output it without the .template extension, before running the file.
I wonder if this could be accomplished with a custom template filter, such that you can do {{ "appdir/management/nginx.conf.template"|template }}, which renders nginx.conf.template, saves it to nginx.conf, and returns the new path.
Thoughts?
Every time I use the --pidfile parameter as follows, I get the following error:
(givity)givity@Web:~/givity/src$ python manage.py supervisor --pidfile tasks.pid
Traceback (most recent call last):
File "manage.py", line 14, in
execute_manager(settings)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 459, in execute_manager
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/givity/.virtualenvs/givity/local/lib/python2.7/site-packages/djsupervisor/management/commands/supervisor.py", line 166, in run_from_argv
return super(Command,self).run_from_argv(argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/home/givity/.virtualenvs/givity/local/lib/python2.7/site-packages/djsupervisor/management/commands/supervisor.py", line 181, in handle
assert args[0].isalnum()
AssertionError
Watchdog fails if the process doesn't have permission on any of the sub-directories it is asked to watch - see gorakhargosh/watchdog#75.
Currently, the _find_live_code_dirs function will find the parent directory containing manage.py - i.e. the root directory of the project. My project directory contains a .vagrant subdirectory which contains virtual machine files, and they are not readable by the process running django-supervisor.
Consequently, the autoreload process fails to start:
./manage.py supervisor autoreload
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/djsupervisor/management/commands/supervisor.py", line 163, in run_from_argv
return super(Command,self).run_from_argv(argv)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/djsupervisor/management/commands/supervisor.py", line 186, in handle
return method(cfg_file,*args[1:],**options)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/djsupervisor/management/commands/supervisor.py", line 264, in _handle_autoreload
observer.start()
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/watchdog/observers/api.py", line 255, in start
emitter.start()
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/watchdog/utils/__init__.py", line 111, in start
self.on_thread_start()
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/watchdog/observers/inotify.py", line 121, in on_thread_start
self._inotify = InotifyBuffer(path, self.watch.is_recursive)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/watchdog/observers/inotify_buffer.py", line 35, in __init__
self._inotify = Inotify(path, recursive)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/watchdog/observers/inotify_c.py", line 187, in __init__
self._add_dir_watch(path, recursive, event_mask)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/watchdog/observers/inotify_c.py", line 371, in _add_dir_watch
self._add_watch(full_path, mask)
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/watchdog/observers/inotify_c.py", line 385, in _add_watch
Inotify._raise_error()
File "/home/roger/.virtualenvs/fdw/local/lib/python2.7/site-packages/watchdog/observers/inotify_c.py", line 406, in _raise_error
raise OSError(os.strerror(err))
OSError: Permission denied
In the absence of a fix upstream in Watchdog that ignores permissions errors and keeps on running (which given that bug has been open since 2011, and could be desirable functionality in many scenarios, seems unlikely) we need some way to manage it in django-supervisor.
An obvious approach would be to allow the directories to be watched to be specified in the autoreload section of the conf file, either by allowing specific directories to be included, in which case the search wouldn't occur; or by allowing directories to be excluded, in which case the search would ignore them. Ignoring directories beginning with . would also work for me.
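A sketch of the exclusion idea, pruning hidden and unreadable subdirectories before they ever reach watchdog (function name and policy are illustrative, not djsupervisor's actual code):

```python
# Prune unreadable or hidden subdirectories before handing paths to watchdog,
# working around inotify "Permission denied" on e.g. .vagrant directories.
import os

def readable_dirs(root):
    """Yield root and its subdirectories, skipping hidden or unreadable ones."""
    for dirpath, dirnames, _ in os.walk(root):
        # prune in place so os.walk does not descend into skipped directories
        dirnames[:] = [d for d in dirnames
                       if not d.startswith(".")
                       and os.access(os.path.join(dirpath, d), os.R_OK | os.X_OK)]
        yield dirpath
```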
Originally reported by @likesxuqiang in Supervisor/supervisor#537
Traceback (most recent call last):
File "manage.py", line 9, in
execute_from_command_line(sys.argv)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/djsupervisor/management/commands/supervisor.py", line 163, in run_from_argv
return super(Command,self).run_from_argv(argv)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/djsupervisor/management/commands/supervisor.py", line 175, in handle
return supervisord.main(("-c",cfg_file))
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/supervisor-4.0.0_dev-py3.4.egg/supervisor/supervisord.py", line 346, in main
options.realize(args, doc=doc)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/supervisor-4.0.0_dev-py3.4.egg/supervisor/options.py", line 471, in realize
Options.realize(self, arg, *kw)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/supervisor-4.0.0_dev-py3.4.egg/supervisor/options.py", line 318, in realize
self.process_config()
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/supervisor-4.0.0_dev-py3.4.egg/supervisor/options.py", line 531, in process_config
Options.process_config(self, do_usage=do_usage)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/supervisor-4.0.0_dev-py3.4.egg/supervisor/options.py", line 326, in process_config
self.process_config_file(do_usage)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/supervisor-4.0.0_dev-py3.4.egg/supervisor/options.py", line 357, in process_config_file
self.read_config(self.configfile)
File "/home/ubuntu/autolib/venv/lib/python3.4/site-packages/supervisor-4.0.0_dev-py3.4.egg/supervisor/options.py", line 558, in read_config
parser.read_file(fp)
File "/usr/lib/python3.4/configparser.py", line 691, in read_file
self._read(f, source)
File "/usr/lib/python3.4/configparser.py", line 993, in _read
for lineno, line in enumerate(fp, start=1):
TypeError: 'OnDemandStringIO' object is not iterable
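A sketch of the missing piece: under Python 3, configparser.read_file() iterates over the file object line by line, so the lazy config wrapper needs an __iter__ method. The class below is a stand-in for djsupervisor's OnDemandStringIO, not its actual implementation:

```python
# Stand-in for djsupervisor's lazy config object, made iterable so that
# Python 3's configparser.read_file() can consume it.
class OnDemandStringIO(object):
    def __init__(self, build):
        self._build = build      # callable that renders the config text
        self._lines = None

    def __iter__(self):
        if self._lines is None:
            # render lazily on first use, keeping line endings for configparser
            self._lines = self._build().splitlines(True)
        return iter(self._lines)
```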
Hello folks,
I'm currently trying to launch multiple gunicorn instances via django-supervisor, each with a custom settings file so that they use different databases. Everything works quite well except that the gunicorn calls, which I encapsulated in different programs, seem to ignore the separate settings file I specified... Any hints?
[program:webserver_first]
command=gunicorn -w 2 myApp.wsgi:application --settings myApp.settings_prod -b 127.0.0.1:8027
[program:webserver_second]
command=gunicorn -w 2 myApp.wsgi:application --settings myApp.settings_prod_second -b 127.0.0.1:4020
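As far as I know, gunicorn itself has no --settings option, which would explain the flag being ignored; Django picks the settings module from the DJANGO_SETTINGS_MODULE environment variable, which supervisord can set per program. A hedged sketch:

```ini
[program:webserver_first]
command=gunicorn -w 2 myApp.wsgi:application -b 127.0.0.1:8027
environment=DJANGO_SETTINGS_MODULE="myApp.settings_prod"

[program:webserver_second]
command=gunicorn -w 2 myApp.wsgi:application -b 127.0.0.1:4020
environment=DJANGO_SETTINGS_MODULE="myApp.settings_prod_second"
```

This works because a wsgi.py generated by startproject calls os.environ.setdefault(...), so a value already present in the environment takes precedence.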
python manage.py supervisor --daemonize runs fine. Stop all the processes by issuing the stop all command in the shell, then try to run python manage.py supervisor, and it fails with:
Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.
Is it possible to completely ignore the django-supervisor provided configs for celerybeat, runserver, etc.? They conflict with my own configs for those programs, and I can't find an easy way of ignoring them without adding explicit excludes for each one. I am really only interested in using django-supervisor's templating and command-line features - not its configs. Thanks!
Here's how I set up django-supervisor to execute when my VPS boots up. This file goes in /etc/init/django-supervisor.conf
. Any output or errors from this file itself will appear in /var/log/test.debug
since Upstart is apparently bad about outputting errors.
description "start django-supervisor"
start on runlevel [2345]
stop on runlevel [!2345]
env RUN_AS_USER=root
env VIRTUALENV=myvirtualenv
env PROJECT_DIR=/home/www/myprojectdir
env CMD="source /opt/virtualenvs/$VIRTUALENV/bin/activate; /usr/bin/env python $PROJECT_DIR/manage.py supervisor"
respawn
script
exec >/var/log/test.debug 2>&1
echo $CMD
cd $PROJECT_DIR
su -c "$CMD" $RUN_AS_USER
end script
If you'd like I can add this to the README or other docs.
When using the following in apps/supervisord.conf, which is used by django-supervisor, the referenced include file is silently ignored:
[include]
files = ../supervisord.conf
I guess that this is caused by the config being passed as a string:
cfg_file = OnDemandStringIO(get_merged_config, **options)
# With no arguments, we launch the processes under supervisord.
if not args:
return supervisord.main(("-c",cfg_file))
And supervisor appears to silently ignore any non-existent "files" in the "include" section.
Using an absolute path works around this.
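For reference, the workaround looks like this (the path is illustrative). Presumably the relative path fails because the merged config arrives as an in-memory string, so there is no on-disk config file location to resolve ".." against:

```ini
[include]
; relative paths have no anchor when the config is passed as a string,
; so spell the included file out absolutely
files = /home/www/myproject/supervisord.conf
```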
My supervisor configuration runs celeryd, solr, runserver, autoreload and node. The autoreload option is set to false for autoreload, solr and node, i.e. celeryd and runserver get restarted every time I change some code. However, at every code change the number of autoreload processes increases (they seem to be child processes of the main autoreload process), and those processes are not killed when supervisord is stopped.
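A possible mitigation (untested against this issue, but stopasgroup and killasgroup are standard supervisord program options) is to have supervisord signal the whole process group, so forked children die with their parent:

```ini
[program:autoreload]
; send SIGTERM (and SIGKILL on escalation) to the entire process group,
; so child processes of the autoreload program are not orphaned
stopasgroup=true
killasgroup=true
```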
I am trying to use django-supervisor to start and stop celery daemons in a virtualenv to run scheduled tasks. I am using Django 1.5.5, with the following libraries:
django-supervisor==0.3.2
supervisor==3.1.0
celery==3.1.13
django-celery==3.1.10
This is the supervisor.conf file:
[program:celeryd]
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -E -B --loglevel=DEBUG
[program:celerycam]
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celerycam
When running it, everything works properly in my development computer and my scheduled tasks are running:
$ ./manage.py supervisor
2014-08-07 14:39:17,863 INFO RPC interface 'supervisor' initialized
2014-08-07 14:39:17,863 INFO supervisord started with pid 3471
2014-08-07 14:39:18,867 INFO spawned: 'autoreload' with pid 3483
2014-08-07 14:39:18,869 INFO spawned: 'celeryd' with pid 3484
2014-08-07 14:39:18,871 INFO spawned: 'celerycam' with pid 3485
2014-08-07 14:39:19,447 DEBG 'celerycam' stdout output:
Stale pidfile exists. Removing it.
2014-08-07 14:39:19,447 DEBG 'celerycam' stdout output:
2014-08-07 14:39:19,467 DEBG 'celerycam' stdout output:
[2014-08-07 14:39:19,466: INFO/MainProcess] Connected to amqp://guest:*@127.0.0.1:5672//
....[CLIPPED]....
2014-08-07 14:39:19,537 DEBG 'celeryd' stdout output:
[2014-08-07 14:39:19,537: DEBUG/Beat] DatabaseScheduler: Fetching database schedule
2014-08-07 14:39:19,552 DEBG 'celeryd' stdout output:
[2014-08-07 14:39:19,552: DEBUG/MainProcess] ^-- substep ok
2014-08-07 14:39:19,553 DEBG 'celeryd' stdout output:
[2014-08-07 14:39:19,552: DEBUG/MainProcess] | Consumer: Starting Mingle
[2014-08-07 14:39:19,552: INFO/MainProcess] mingle: searching for neighbors
2014-08-07 14:39:19,578 DEBG 'celeryd' stdout output:
[2014-08-07 14:39:19,578: DEBUG/Beat] Current schedule:
<ModelEntry: myprj.myapp.tasks.remove_orphaned_images myprj.myapp.tasks.remove_orphaned_images([], **{}) {4}>
<ModelEntry: celery.backend_cleanup celery.backend_cleanup([], **{}) {4}>
<ModelEntry: Remove orphaned images myprj.myapp.tasks.remove_orphaned_images([], **{}) {4}>
...
But when I run this in the production server, it doesn't seem to work:
$ ./manage.py supervisor
2014-08-07 14:44:27,605 INFO RPC interface 'supervisor' initialized
2014-08-07 14:44:27,606 INFO supervisord started with pid 26165
2014-08-07 14:44:28,609 INFO spawned: 'celeryd' with pid 26418
2014-08-07 14:44:28,610 INFO spawned: 'celerycam' with pid 26419
2014-08-07 14:44:30,543 INFO success: celeryd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2014-08-07 14:44:30,543 INFO success: celerycam entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
...[it stops here]...
If I run the commands (celeryd and celerycam) without supervisor, they work properly in the production server's virtualenv.
Any idea what it could be?
Thanks in advance.
Need to revisit batching of change notifications, per comments here: 440f616#commitcomment-1143608
When I run the supervisord process and want to reload the whole configuration without manually shutting down the supervisord process, I'd like to use supervisor's reload command.
So here's what I do:
$ ./manage.py supervisor -d
# much chatter
$ ./manage.py supervisor reload
# shuts everything down then tries to start again..
Error: .ini file does not include supervisord section
For help, use /Users/jezdez/.virtualenvs/src/acme/acme/manage.py -h
I expected instead a process restart including loading the supervisor config and the app code.
Don't hesitate to let me know if you need any more description or logs.
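Until reload works, a workaround sketch is an explicit stop/start cycle, which re-renders the merged config on the fresh start. The manage.py invocation is passed in as a parameter so the helper stays generic; adapt it to your project:

```shell
# Stop supervisord (and all its programs), then start it again daemonized.
# "$1" is the manage.py invocation, e.g. "python manage.py".
restart_supervisor() {
    MANAGE="$1"
    $MANAGE supervisor shutdown      # stop supervisord and its programs
    $MANAGE supervisor --daemonize   # start again with a freshly rendered config
}

# Usage: restart_supervisor "python manage.py"
```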