Deploying the charm on Google Kubernetes Engine (GKE) doesn't work: the charm ends up in an unknown/idle status. The mongod service fails to start (it exits with code 100), and the mongod-pebble-ready hook fails repeatedly. Unit logs:
unit-mongodb-0: 16:45:18 INFO juju.worker.uniter.operation ran "leader-elected" hook (via hook dispatching script: dispatch)
unit-mongodb-0: 16:45:18 ERROR unit.mongodb/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
File "/var/lib/juju/agents/unit-mongodb-0/charm/./src/charm.py", line 467, in <module>
main(MongoDBCharm)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/main.py", line 435, in main
_emit_charm_event(charm, dispatcher.event_name)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/main.py", line 144, in _emit_charm_event
event_to_emit.emit(*args, **kwargs)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/framework.py", line 355, in emit
framework._emit(event) # noqa
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/framework.py", line 824, in _emit
self._reemit(event_path)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/framework.py", line 899, in _reemit
custom_handler(event)
File "/var/lib/juju/agents/unit-mongodb-0/charm/./src/charm.py", line 110, in on_mongod_pebble_ready
container.replan()
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/model.py", line 1828, in replan
self._pebble.replan_services()
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/pebble.py", line 1564, in replan_services
return self._services_action('replan', [], timeout, delay)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/pebble.py", line 1646, in _services_action
raise ChangeError(change.err, change)
ops.pebble.ChangeError: cannot perform the following tasks:
- Start service "mongod" (cannot start service: exited quickly with code 100)
----- Logs from task 0 -----
2023-02-15T21:45:18Z INFO Most recent service output:
(...)
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"CONTROL", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"REPL", "id":4784907, "ctx":"initandlisten","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"ASIO", "id":22582, "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"COMMAND", "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"CONTROL", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2023-02-15T21:45:18.813+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2023-02-15T21:45:18.814+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
2023-02-15T21:45:18Z ERROR cannot start service: exited quickly with code 100
-----
unit-mongodb-0: 16:45:19 ERROR juju.worker.uniter.operation hook "mongod-pebble-ready" (via hook dispatching script: dispatch) failed: exit status 1
unit-mongodb-0: 16:45:19 ERROR juju.worker.uniter pebble poll failed for container "mongod": failed to send pebble-ready event: hook failed
unit-mongodb-0: 16:45:19 ERROR unit.mongodb/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
File "/var/lib/juju/agents/unit-mongodb-0/charm/./src/charm.py", line 467, in <module>
main(MongoDBCharm)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/main.py", line 435, in main
_emit_charm_event(charm, dispatcher.event_name)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/main.py", line 144, in _emit_charm_event
event_to_emit.emit(*args, **kwargs)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/framework.py", line 355, in emit
framework._emit(event) # noqa
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/framework.py", line 824, in _emit
self._reemit(event_path)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/framework.py", line 899, in _reemit
custom_handler(event)
File "/var/lib/juju/agents/unit-mongodb-0/charm/./src/charm.py", line 110, in on_mongod_pebble_ready
container.replan()
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/model.py", line 1828, in replan
self._pebble.replan_services()
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/pebble.py", line 1564, in replan_services
return self._services_action('replan', [], timeout, delay)
File "/var/lib/juju/agents/unit-mongodb-0/charm/venv/ops/pebble.py", line 1646, in _services_action
raise ChangeError(change.err, change)
ops.pebble.ChangeError: cannot perform the following tasks:
- Start service "mongod" (cannot start service: exited quickly with code 100)
----- Logs from task 0 -----
2023-02-15T21:45:19Z INFO Most recent service output:
(...)
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"CONTROL", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"REPL", "id":4784907, "ctx":"initandlisten","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"ASIO", "id":22582, "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-02-15T21:45:19.882+00:00"},"s":"I", "c":"COMMAND", "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"}
{"t":{"$date":"2023-02-15T21:45:19.883+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2023-02-15T21:45:19.883+00:00"},"s":"I", "c":"CONTROL", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2023-02-15T21:45:19.883+00:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2023-02-15T21:45:19.883+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2023-02-15T21:45:19.883+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2023-02-15T21:45:19.883+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2023-02-15T21:45:19.883+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2023-02-15T21:45:19.883+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
2023-02-15T21:45:19Z ERROR cannot start service: exited quickly with code 100
-----
unit-mongodb-0: 16:45:20 ERROR juju.worker.uniter.operation hook "mongod-pebble-ready" (via hook dispatching script: dispatch) failed: exit status 1
unit-mongodb-0: 16:45:20 ERROR juju.worker.uniter pebble poll failed for container "mongod": failed to send pebble-ready event: hook failed
unit-mongodb-0: 16:45:21 INFO juju.worker.uniter.operation ran "db-storage-attached" hook (via hook dispatching script: dispatch)
unit-mongodb-0: 16:45:21 INFO juju.worker.uniter.operation ran "config-changed" hook (via hook dispatching script: dispatch)
unit-mongodb-0: 16:45:21 INFO juju.worker.uniter found queued "start" hook
unit-mongodb-0: 16:45:22 INFO unit.mongodb/0.juju-log Running legacy hooks/start.
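For reference, the exit code reported by mongod can be pulled out of the structured (JSON) log lines above with a small stdlib-only helper. This is just a sketch for filtering long Pebble task output; it relies only on fields visible in the logs above (log `id` 23138 is the "Shutting down" entry that carries `exitCode`):

```python
import json

def last_exit_code(log_lines):
    """Scan MongoDB structured (JSON) log lines and return the last
    reported exit code, or None if no shutdown entry is present."""
    code = None
    for line in log_lines:
        try:
            entry = json.loads(line)
        except ValueError:
            continue  # skip non-JSON lines (Pebble timestamps, "(...)", etc.)
        # Log id 23138 is the "Shutting down" entry carrying attr.exitCode.
        if entry.get("id") == 23138:
            code = entry.get("attr", {}).get("exitCode")
    return code

sample = [
    '2023-02-15T21:45:18Z INFO Most recent service output:',
    '{"t":{"$date":"2023-02-15T21:45:18.814+00:00"},"s":"I", "c":"CONTROL",'
    ' "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}',
]
print(last_exit_code(sample))  # → 100
```

Running it against the task output above confirms both failed start attempts ended the same way, with exit code 100.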