
praiskup / resalloc

Allocator and manager for (expensive) resources

License: GNU General Public License v2.0

Python 71.26% Mako 0.34% Makefile 1.48% Shell 17.99% Smarty 6.90% CSS 0.02% HTML 2.02%

resalloc's People

Contributors

ficap, frostyx, praiskup, schlupov, xsuchy

resalloc's Issues

taken tickets should trigger spawner

Even though I have this:

aws_x86_64_normal:
    max: 15
    max_starting: 5
    max_prealloc: 2
    tags:
    - copr_builder
    - arch_x86_64
    - arch_x86_64_native
    cmd_new: "/var/lib/resallocserver/resalloc_provision/vm-aws-new --arch=x86_64"
    cmd_delete: "/var/lib/resallocserver/resalloc_provision/vm-aws-delete"

and several (5+) tag-matching tickets are taken, only two builders are being started
at the same time.

RFE: Connect to a resource

It would be nice to have something like

resalloc-maint resource-ssh <resource-id>

Alternatively resource-shell, resource-connect, or just resalloc-maint ssh, ...
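
Until such a subcommand exists, a rough wrapper sketch is below. It assumes the resource's IP address can be grepped out of the per-resource hook logs under /var/log/resallocserver/hooks/ and that root login works; both are assumptions, not something resalloc guarantees.

    #!/bin/sh
    # hypothetical "resource-ssh" wrapper; usage: resource-ssh <resource-id>
    id=$1
    # hook log names appear to be zero-padded to six digits (e.g. 007134_terminate)
    ip=$(grep -Eoh '([0-9]{1,3}\.){3}[0-9]{1,3}' \
            "/var/log/resallocserver/hooks/$(printf '%06d' "$id")"_* 2>/dev/null | head -n 1)
    test -n "$ip" && exec ssh "root@$ip"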

Configurable command to introspect the non-tracked resources, and automatic removal

For various reasons, we sometimes end up with still-running resources (or resources at least wasting the quota) while resalloc considers them non-existent (the SSH connection fails, the resource is seemingly deleted, etc.).

It would be good to have a new option like cmd_introspect. This would be a shell command that lists all the instances in the pool regardless of the current state of the database (e.g. using virsh list --all for hypervisors). Using this list, we could execute cmd_delete the normal way for every existing resource that is not in the database; a sketch of such a hook is below the list. This should happen:

  1. as a first action right after the resalloc service starts (to clean up the leftovers as soon as possible)
  2. periodically, say every 15 minutes
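
A minimal sketch of what such a cmd_introspect hook could look like for a libvirt-based pool follows; it prints one instance name per line. The connection URI and the name prefix used to recognize this pool's instances are assumptions, not anything resalloc defines today.

    #!/bin/sh
    # hypothetical cmd_introspect hook: list instances regardless of the database state
    conn='qemu+ssh://user@hypervisor.example.com/system'   # assumed connection URI
    prefix='aarch64_02_prod_'                              # assumed pool name prefix

    # `virsh list --all --name` prints every defined domain, running or not
    virsh -c "$conn" list --all --name | grep "^$prefix"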

config option to disable spawning / evacuate

If we want to (temporarily) disable spawning new resources, we need to edit /etc/resallocserver/pools.yaml and set max: 0 for each pool section. This is too cumbersome, as we have 10 pools in Copr.

I would like to propose a boolean option in server.yaml, or in some main section of pools.yaml, that would supersede the max values and temporarily disable spawning new resources.
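
Until such an option exists, a crude workaround is to zero out every max value in one pass. The sketch below assumes the pools.yaml layout shown in the first issue above (an indented max: key per pool); max_starting and max_prealloc are left untouched because the pattern requires the literal key max:.

    # set max: 0 for every pool, keeping a .bak backup of the original file
    sed -i.bak 's/^\( *max:\) *[0-9][0-9]*/\1 0/' /etc/resallocserver/pools.yaml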

My log file is empty

I am not sure how resalloc logging works, but I am using the default configuration and my /var/log/resallocserver/main.log got created but is empty. Even though spawning didn't work because of missing cmd_new and cmd_check scripts, I didn't get any error information.

My /var/log/resallocserver/hooks/ directory contains specific resource logs and they are not empty.

Show resource DATA (e.g. IP address if we talk about VMs)

I think it would be useful if resalloc-maint resource-list also printed an IP address for each resource. Alternatively, if we are afraid the output would get too verbose or complicated (which, thanks to tags, it IMHO already is), it would be great to introduce something like resalloc-maint resource-info <id> for displaying everything we know about the resource.

on demand resources

Sometimes it would be useful to have a pool that doesn't preallocate resources in advance, because they wouldn't be used for a long time. It would be better to start those resources on demand, when a ticket requesting them arrives.

sqlite database is locked

Random test failure against sqlite:

 * 3. case: count limit
Traceback (most recent call last):
  File "/builddir/build/BUILD/resalloc-3.1/test-tooling/../bin/resalloc", line 113, in <module>
    sys.exit(main())
  File "/builddir/build/BUILD/resalloc-3.1/test-tooling/../bin/resalloc", line 98, in main
    output = ticket.wait()
  File "/builddir/build/BUILD/resalloc-3.1/resalloc/client.py", line 90, in wait
    return str(self._connection.call("waitTicket", self.id))
  File "/builddir/build/BUILD/resalloc-3.1/resalloc/client.py", line 40, in call
    return fcall(*args)
  File "/usr/lib64/python3.7/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.7/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.7/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.7/xmlrpc/client.py", line 1170, in single_request
    return self.parse_response(resp)
  File "/usr/lib64/python3.7/xmlrpc/client.py", line 1342, in parse_response
    return u.close()
  File "/usr/lib64/python3.7/xmlrpc/client.py", line 656, in close
    raise Fault(**self._stack[0])
xmlrpc.client.Fault: <Fault 1: "<class 'sqlalchemy.exc.OperationalError'>:(sqlite3.OperationalError) database is locked\n[SQL: SELECT tickets.id AS tickets_id, tickets.resource_id AS tickets_resource_id, tickets.state AS tickets_state, tickets.tid AS tickets_tid, tickets.sandbox AS tickets_sandbox \nFROM tickets \nWHERE tickets.id = ?]\n[parameters: ('9',)]\n(Background on this error at: http://sqlalche.me/e/e3q8)">

Command for resource logs

Something like

resalloc-maint resource-tail <resource-id>

(or alternatively resource-logs, etc.) that would tail -f the logs from /var/log/resallocserver/hooks/<ID>_*
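
For reference, the manual equivalent today is roughly the following sketch; the six-digit zero-padding of the hook log names is inferred from the 007134_terminate example in the next issue.

    id=7134   # example resource ID
    tail -f /var/log/resallocserver/hooks/"$(printf '%06d' "$id")"_*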

weird RESALLOC_NAME value in terminate script

$ cat 007134_terminate 
+ test -z aarch64_02_prod_00005317_20190608_000817
+ case "$RESALLOC_POOL_ID" in
+ conn=qemu+ssh://[email protected]/system
+ repeat virsh -c qemu+ssh://[email protected]/system destroy aarch64_02_prod_00005317_20190608_000817
+ counter=0
+ :
++ echo '1 + 0'
++ bc
+ counter=1
+ virsh -c qemu+ssh://[email protected]/system destroy aarch64_02_prod_00005317_20190608_000817
Domain aarch64_02_prod_00005317_20190608_000817 destroyed

+ test 0 -eq 0
+ break
+ repeat virsh -c qemu+ssh://[email protected]/system undefine aarch64_02_prod_00005317_20190608_000817 --remove-all-storage --nvram
+ counter=0
+ :
++ echo '1 + 0'
++ bc
+ counter=1
+ virsh -c qemu+ssh://[email protected]/system undefine aarch64_02_prod_00005317_20190608_000817 --remove-all-storage --nvram
Domain aarch64_02_prod_00005317_20190608_000817 has been undefined
Volume 'vda'(/var/lib/libvirt/images/aarch64_02_prod_00005317_20190608_000817_root) removed.
Volume 'vdb'(/var/lib/libvirt/images/aarch64_02_prod_00005317_20190608_000817_config) removed.
Volume 'vdc'(/dev/rhel/aarch64_02_prod_00005317_20190608_000817_swap) removed.

+ test 0 -eq 0
+ break

Delete all resources

I find myself occasionally deleting all resources on copr-be-dev. You typically don't want to do this on a production server, but on dev it happens that you have broken all the builders and they need to be respawned. In such a scenario, I do

[root@copr-be-dev ~][STG]# su resalloc
bash-5.0$ resalloc-maint resource-list
7282 - aws_x86_64_normal_dev_00007282_20201002_195313 ...
7283 - aws_x86_64_normal_dev_00007283_20201002_195314 ...
7278 - aws_aarch64_normal_dev_00007278_20201002_195012 ...
7284 - aws_x86_64_normal_dev_00007284_20201002_195334 ...
7281 - aws_x86_64_normal_dev_00007281_20201002_195312 ...
7279 - aws_x86_64_normal_dev_00007279_20201002_195309 ...
7280 - aws_x86_64_normal_dev_00007280_20201002_195310 ...
7270 - aws_aarch64_normal_dev_00007270_20201002_193005 ...

bash-5.0$ resalloc-maint resource-delete \
    7282 7283 7278 7284 7281 7279 7280 7270

I could probably write a small bash script doing this (a rough sketch is at the end of this issue), but I think it would be nicer to be able to do something like this

resalloc-maint resource-delete all

and maybe even

resalloc-maint resource-delete unused

Or whatever syntax you would prefer. Another possibility would be to add --tag, --pool, and --status options to delete resources by those values, which would IMHO be really cool, but probably overkill.

What do you think?
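
For completeness, the small bash script mentioned above could look roughly like this; it assumes the numeric resource ID is the first column of the resalloc-maint resource-list output, as in the listing above, and should only ever run on a dev instance.

    #!/bin/bash
    # delete every resource currently known to resalloc; run as the resalloc user
    resalloc-maint resource-list \
        | awk '{print $1}' \
        | xargs -r resalloc-maint resource-delete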

Released resources are calculated as "ready"

So when there are dozens of released resources (already assigned to a sandbox), the manager is not spawning other resources in the background according to the max_prealloc quota.

This significantly decreases Copr throughput today. I had to increase the max_prealloc quota as a workaround.

Execute asynchronous actions in daemons

AllocWorker, TerminationWorker, etc. are run in the background using threads. This means that a restart of the whole resalloc server terminates the background threads too, so the database can be inconsistent after a resalloc server restart.

Typically, some resources then stay in the STARTING or DELETING state even though nothing is happening with them.

Web User Interface

It would be nice to have a list of tickets/resources available on a web page, and as an "admin" it would be nice to be able to kill them.

Config option hostname does nothing, it should be host

The default config/server.yaml suggests changing the hostname variable to make the XML-RPC server listen on some other address. This config option does nothing, though; resalloc reads host from the config instead.

I would send a PR but I am not sure whether we want to fix the documentation or the code.

termination scripts stay around, even though the resource is killed

   CGroup: /system.slice/resalloc.service
           ├─  436 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─  756 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 1102 /usr/bin/python3 -s /usr/bin/resalloc-server
           ├─ 1333 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 2052 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 2375 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 2935 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 3365 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 4246 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 5497 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 5844 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 5849 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 6170 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 7952 ssh: /var/lib/resallocserver/.ansible/cp/018a180a6c [mux]
           ├─ 8227 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 8487 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 9254 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─ 9653 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─10469 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─10953 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─11528 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─12437 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─12848 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─12893 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─12999 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─13889 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─13936 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─14337 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─14740 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─14741 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─14870 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─15811 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─16235 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─16244 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─17413 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─17632 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─17675 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─17706 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─17847 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─18068 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─18169 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─18374 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─18415 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─19920 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─20384 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─21139 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─21858 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─22505 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─22984 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─23342 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─24433 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─24852 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─25625 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─25751 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─26146 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─26209 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─26210 sleep 15
           ├─26219 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─26224 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─26288 sleep 15
           ├─26385 sleep 15
           ├─26386 sleep 15
           ├─26387 sleep 15
           ├─26389 sleep 15
           ├─26390 sleep 15
           ├─26391 sleep 15
           ├─26392 sleep 15
           ├─26393 sleep 15
           ├─26416 sleep 15
           ├─26420 sleep 15
           ├─26436 sleep 15
           ├─26441 sleep 15
           ├─26446 sleep 15
           ├─26448 sleep 15
           ├─26449 sleep 15
           ├─26501 sleep 15
           ├─26534 sleep 15
           ├─26535 sleep 15
           ├─26536 sleep 15
           ├─26537 sleep 15
           ├─26538 sleep 15
           ├─26613 sleep 15
           ├─26614 sleep 15
           ├─26637 sleep 15
           ├─26639 sleep 15
           ├─26643 sleep 15
           ├─26645 sleep 15
           ├─26750 sleep 15
           ├─26751 sleep 15
           ├─26773 sleep 15
           ├─26774 sleep 15
           ├─26776 sleep 15
           ├─26861 sleep 15
           ├─26862 sleep 15
           ├─26863 sleep 15
           ├─26864 sleep 15
           ├─26868 sleep 15
           ├─26869 sleep 15
           ├─26870 sleep 15
           ├─26871 sleep 15
           ├─26966 sleep 15
           ├─26967 sleep 15
           ├─26969 sleep 15
           ├─27078 sleep 15
           ├─27079 sleep 15
           ├─27080 sleep 15
           ├─27081 sleep 15
           ├─27082 sleep 15
           ├─27085 sleep 15
           ├─27086 sleep 15
           ├─27088 sleep 15
           ├─27089 sleep 15
           ├─27090 sleep 15
           ├─27092 sleep 15
           ├─27102 sleep 15
           ├─27245 sleep 15
           ├─27246 sleep 15
           ├─27247 sleep 15
           ├─27248 sleep 15
           ├─27251 sleep 15
           ├─27254 sleep 15
           ├─27255 sleep 15
           ├─27256 sleep 15
           ├─27257 sleep 15
           ├─27258 sleep 15
           ├─27259 sleep 15
           ├─27261 sleep 15
           ├─27266 sleep 15
           ├─27272 virsh -c qemu+ssh://[email protected]/system destroy aarch64_02_prod_00010442_20190925_113044
           ├─27279 virsh -c qemu+ssh://[email protected]/system destroy aarch64_02_prod_00010823_20190929_000248
           ├─27280 virsh -c qemu+ssh://[email protected]/system destroy aarch64_02_prod_00009274_20190908_001005
           ├─27283 ssh -l copr -- virthost-aarch64-os02.fedorainfracloud.org sh -c 'if 'nc' -q 2>&1 | grep "requires an argument" >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U /var/run/libvirt/libvirt-sock'
           ├─27287 ssh -l copr -- virthost-aarch64-os02.fedorainfracloud.org sh -c 'if 'nc' -q 2>&1 | grep "requires an argument" >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U /var/run/libvirt/libvirt-sock'
           ├─27288 ssh -l copr -- virthost-aarch64-os02.fedorainfracloud.org sh -c 'if 'nc' -q 2>&1 | grep "requires an argument" >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U /var/run/libvirt/libvirt-sock'
           ├─27297 virsh -c qemu+ssh://[email protected]/system destroy aarch64_02_prod_00010823_20190929_000248
           ├─27299 virsh -c qemu+ssh://[email protected]/system destroy aarch64_02_prod_00009274_20190908_001005
           ├─27304 virsh -c qemu+ssh://[email protected]/system destroy aarch64_02_prod_00011818_20191010_104750
           ├─27309 ssh -l copr -- virthost-aarch64-os02.fedorainfracloud.org sh -c 'if 'nc' -q 2>&1 | grep "requires an argument" >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U /var/run/libvirt/libvirt-sock'
           ├─27310 ssh -l copr -- virthost-aarch64-os02.fedorainfracloud.org sh -c 'if 'nc' -q 2>&1 | grep "requires an argument" >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U /var/run/libvirt/libvirt-sock'
           ├─27709 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─27715 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─27782 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─28891 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─28974 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29177 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29226 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29254 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29617 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29753 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29771 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29777 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29779 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29816 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─29840 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─30330 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─30873 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─31201 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           ├─31756 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete
           └─31822 /bin/sh -x /var/lib/resallocserver/resalloc_provision/vm-aarch64-delete

traceback with sqlite under load

Jun 05 03:02:04 copr-be.cloud.fedoraproject.org resalloc-server[19997]: 127.0.0.1 - - [05/Jun/2019 03:02:03] "POST /RPC2 HTTP/1.1" 200 -
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]: Traceback (most recent call last):
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", line 1236, in _execute_context
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     cursor, statement, parameters, context
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/default.py", line 536, in do_execute
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     cursor.execute(statement, parameters)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]: sqlite3.OperationalError: database is locked
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]: The above exception was the direct cause of the following exception:
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]: Traceback (most recent call last):
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/bin/resalloc-server", line 136, in <module>
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     Manager(sync).run()
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib/python3.6/site-packages/resallocserver/manager.py", line 483, in run
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     self._loop()
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib/python3.6/site-packages/resallocserver/manager.py", line 464, in _loop
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     self._assign_tickets()
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib/python3.6/site-packages/resallocserver/manager.py", line 440, in _assign_tickets
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     tickets = [x.id for x in qticket.new().order_by(models.Ticket.id).all()]
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2925, in all
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     return list(self)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 3081, in __iter__
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     return self._execute_and_instances(context)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 3106, in _execute_and_instances
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     result = conn.execute(querycontext.statement, self._params)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", line 980, in execute
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     return meth(self, multiparams, params)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/sql/elements.py", line 273, in _execute_on_connection
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     return connection._execute_clauseelement(self, multiparams, params)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", line 1099, in _execute_clauseelement
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     distilled_params,
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", line 1240, in _execute_context
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     e, statement, parameters, cursor, context
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", line 1458, in _handle_dbapi_exception
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     util.raise_from_cause(sqlalchemy_exception, exc_info)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/util/compat.py", line 296, in raise_from_cause
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     reraise(type(exception), exception, tb=exc_tb, cause=cause)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/util/compat.py", line 276, in reraise
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     raise value.with_traceback(tb)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", line 1236, in _execute_context
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     cursor, statement, parameters, context
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:   File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/default.py", line 536, in do_execute
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]:     cursor.execute(statement, parameters)
Jun 05 03:02:14 copr-be.cloud.fedoraproject.org resalloc-server[19997]: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) database is locked [SQL: 'SELECT tickets.id AS tickets_id, tickets.resource_id AS tickets_resource_id, t>
