mariadb-operator / mariadb-operator
🦭 Run and operate MariaDB in a cloud native way
License: MIT License
The official image of MariaDB now supports passing passwords as a hash; check the environment variables documentation:
This implies that the user would provide a hash instead of the actual password in the Secret, so they would need to run this command beforehand in another MariaDB instance:
SELECT PASSWORD('thepassword')
This could be a security enhancement, since Kubernetes Secrets are just base64 encoded. However, it also requires extra work from the user, so we should support both methods (the way we do now + hash).
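As a sketch of what this could look like (assuming the Secret keeps its current shape and only the value changes), the user would store the output of PASSWORD() instead of the plaintext; the hash value below is a placeholder, not a real hash:

```yaml
# Hypothetical example: the Secret holds the hash returned by
# SELECT PASSWORD('thepassword') on another MariaDB instance,
# instead of the plaintext password.
apiVersion: v1
kind: Secret
metadata:
  name: passwords
stringData:
  password: "*<41-character hash returned by SELECT PASSWORD(...)>"
```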
Hi,
Seems like the helm repo charts.mmontes-dev.duckdns.org is down; it times out after a while.
Cheers,
Is your feature request related to a problem? Please describe.
I wanted to experiment with the operator using AWS EFS volumes, but the folders in this type of volume are owned by user 1000.
When MariaDB starts, it chowns /var/lib/mysql to its own user, but it doesn't have permission to do so, since the folders are owned by user 1000 (chown: changing ownership of '/var/lib/mysql/': Operation not permitted).
Running the pod with
securityContext:
runAsUser: 1000
would solve this issue. Unfortunately, while it's possible to define the securityContext for the controller pod and the controller container in the values.yaml
file, it doesn't seem like it's possible for the MariaDB pod.
Describe the solution you'd like
The possibility to define the securityContext of the MariaDB pod in the values.yaml
file.
Describe alternatives you've considered
Exploring the template files of the Helm chart to hardcode the securityContext, without success.
Additional context
None.
Environment details:
Describe the solution you'd like
Connection healthiness can be determined by checking the Endpoints state related to the spec.ServiceName field. If no endpoints are available, no connection should be made to MariaDB and the Connection can be marked as unhealthy.
Additional context
This could potentially save a lot of connections to MariaDB and fix race conditions when creating Connections that depend on Services.
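A minimal sketch of the readiness check, with simplified stand-ins for the corev1 Endpoints types (the real operator would fetch these through the controller-runtime client; type and function names here are illustrative, not the operator's actual API):

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 Endpoints types.
type EndpointAddress struct{ IP string }
type EndpointSubset struct {
	Addresses []EndpointAddress // ready addresses only
}
type Endpoints struct {
	Subsets []EndpointSubset
}

// connectionHealthy returns true only if at least one ready address backs
// the Service referenced by spec.ServiceName, so we avoid dialing MariaDB
// when nothing can answer.
func connectionHealthy(ep *Endpoints) bool {
	if ep == nil {
		return false
	}
	for _, s := range ep.Subsets {
		if len(s.Addresses) > 0 {
			return true
		}
	}
	return false
}

func main() {
	empty := &Endpoints{}
	ready := &Endpoints{Subsets: []EndpointSubset{{Addresses: []EndpointAddress{{IP: "10.0.0.1"}}}}}
	fmt.Println(connectionHealthy(empty), connectionHealthy(ready))
}
```

The Connection controller would call such a check before opening a client connection, marking the Connection unhealthy without ever dialing when the check fails.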
At the moment, kubebuilder does not support Ginkgo v2, but there are plans to migrate, see:
Ginkgo v2 supports table tests that can be leveraged to test immutable webhooks in a more readable way. See the current approach.
Is your feature request related to a problem? Please describe.
Currently the name of the user in the database will be the name of the CR. I have multiple MariaDB instances running in the same namespace and I want some users in the databases to have the same name without sharing the same CR (since a resource name is unique within its namespace).
Describe the solution you'd like
Add an (optional?) "username"/"name" field in the spec of the CRD that will define the name of the user in MariaDB.
Describe alternatives you've considered
Splitting the MariaDB CRs across multiple namespaces; not possible on my end.
Additional context
Example: If I create a "foo" user in the database A, I cannot create a user "foo" in the database B since there is already a User CRD named "foo" in the namespace.
Environment details:
AFAICT it does not need the data dir.
It does indeed need something. Unfortunately I did not notice straight away because the job completed successfully but the log said
💾 Taking physical backup
[00] 2023-01-16 12:35:50 Connecting to MariaDB server host: mariadb, user: root, password: set, port: 3306, socket: /run/mysqld/mysqld.sock
[00] 2023-01-16 12:35:50 Using server version 10.10.2-MariaDB-1:10.10.2+maria~ubu2204
mariabackup based on MariaDB server 10.10.2-MariaDB debian-linux-gnu (x86_64)
[00] 2023-01-16 12:35:50 uses posix_fadvise().
[00] 2023-01-16 12:35:50 cd to /var/lib/mysql/
[00] 2023-01-16 12:35:50 open files limit requested 0, set to 1048576
[00] 2023-01-16 12:35:50 mariabackup: using the following InnoDB configuration:
[00] 2023-01-16 12:35:50 innodb_data_home_dir =
[00] 2023-01-16 12:35:50 innodb_data_file_path = ibdata1:12M:autoextend
[00] 2023-01-16 12:35:50 innodb_log_group_home_dir = ./
[00] 2023-01-16 12:35:50 InnoDB: Using liburing
2023-01-16 12:35:50 0 [Note] InnoDB: Number of transaction pools: 1
mariabackup: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
2023-01-16 12:35:50 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
2023-01-16 12:35:50 0 [ERROR] InnoDB: File ./ib_logfile0 was not found
[00] 2023-01-16 12:35:50 Error: cannot read redo log header
🧹 Cleaning up old backups
📜 Backup history
This is not as successful as I hoped.
They are included in the helm chart (in mariadb-operator/crds/crds.yaml) and I am not using --skip-crds. On this cluster I installed the operator before, so maybe the old CRDs are in conflict. But deleting the CRDs and trying to reinstall via helm did not change anything.
Describe the solution you'd like
Ability to specify affinity, nodeSelector and tolerations in Backup and Restore Jobs.
Additional context
We have supported HA features recently in the MariaDB CRD; we should support this in Backup and Restore so we can schedule the Pods on the same node as MariaDB. This is important when it comes to doing physical backups, as we need access to the MariaDB PVC, something that won't be possible if the Backup Pod is scheduled on a different node.
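A hypothetical shape for this (the affinity/nodeSelector/tolerations field names are assumed here, mirroring the core Pod spec; the label selector is illustrative) could pin the Backup Job to the node running MariaDB via pod affinity:

```yaml
# Hypothetical: scheduling fields on the Backup spec so the backup Job
# lands on the same node as the MariaDB Pod holding the PVC.
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: mariadb
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "database"
      effect: "NoSchedule"
```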
Describe the bug
After releasing a helm chart, it doesn't automatically trigger a build for OLM.
Expected behaviour
The Bundle workflow is dispatched in mariadb-operator-helm.
Steps to reproduce the bug
Bump version in the helm chart and push.
Additional context
Example run:
Hello,
I'm new to Kubernetes operators, and I'm not sure I understand how to configure this.
I want to configure the username, database name and user password, but I'm not sure how to edit config/*.yml, and/or if I even have to edit the samples.
Thank you in advance.
Some fields of the StatefulSet are immutable; for example, upgrading the volumeClaimTemplates will result in the following error:
# statefulsets.apps "mariadb" was not valid:
# * spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy' and 'minReadySeconds' are forbidden
As an alternative, the operator could perform a blue-green deployment by provisioning the new StatefulSet first and decommissioning the old one afterwards.
At the moment, the controller provides limited observability by:
- Setting status.conditions and mapping them to printer columns, allowing the user and the developer to know the internal state of the CRDs

This could be notably improved by:
- Using structured logging with kubebuilder, following the Kubernetes guidelines for logging. An example with kubebuilder can be found here.
- Using the built-in exponential backoffs provided by controller-runtime instead of static requeues where possible, as static requeues can overwhelm the Kubernetes API and in some cases also the MariaDB instance.
Describe the solution you'd like
Return an error instead of ctrl.Result{RequeueAfter: <time>}
Additional context
We use static requeues in multiple places:
The README.md is a bit lacking and the examples aren't enough; we should create a documentation site using this:
Some sources of inspiration:
Topics to be documented:
Hello
It would be nice to have out-of-the-box support for loading SQL scripts during initialization of a MariaDB instance.
AFAIK you could achieve this simply with ConfigMaps if the scripts loaded are < 1MB, or by implementing a container that clones a git repository and mounts the scripts in a volume, but it seems quite cumbersome.
thank you
Describe the bug
mariadb-operator fails to create a user due to Error 1130: Host '10.244.1.39' is not allowed to connect to this MariaDB server. E.g.:
{"level":"error","ts":1682341296.5456517,"msg":"Reconciler error","controller":"user","controllerGroup":"mariadb.mmontes.io","controllerKind":"User","user":{"name":"photoprism","namespace":"mariadb"},"namespace":"mariadb","name":"photoprism","reconcileID":"9854f729-1653-428c-9f38-c431fd603f65","error":"error reconciling in TemplateReconciler: error creating MariaDB client: 1 error occurred:\n\t* Error 1130: Host '10.244.1.39' is not allowed to connect to this MariaDB server\n\n","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234"}
I can presumably log in to MariaDB to fix this manually.
Expected behaviour
mariadb-operator creates a user.
Steps to reproduce the bug
Install the 0.11.0 helm chart with no customisations, then apply the yaml below.
Additional context
Deployed CRs:
apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
name: mariadb
spec:
rootPasswordSecretKeyRef:
name: passwords
key: root-password
image:
repository: mariadb
tag: "10.7.4" # "10.11.2"
pullPolicy: IfNotPresent
port: 3306
volumeClaimTemplate:
resources:
requests:
storage: 10Gi
storageClassName: rook-ceph-block
accessModes:
- ReadWriteOnce
env:
- name: TZ
value: UTC
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: User
metadata:
name: photoprism
spec:
mariaDbRef:
name: mariadb
passwordSecretKeyRef:
name: passwords
key: photoprism
# This field is immutable and defaults to 10
maxUserConnections: 20
---
apiVersion: v1
kind: Secret
metadata:
name: passwords
stringData:
photoprism: photoprismsecret
root-password: supersecret
Logged by MariaDB:
2023-04-24 12:23:05 2134 [Warning] Aborted connection 2134 to db: 'unconnected' user: 'unauthenticated' host: '10.244.1.39' (This connection closed normally without authentication)
Environment details:
Creating a MariaDB resource with the minimal metrics configuration allowed by the CRD causes an error.
Metrics configuration:
metrics:
exporter:
image:
repository: prom/mysqld-exporter
tag: v0.14.0
serviceMonitor:
prometheusRelease: kube-prometheus-stack
Error:
{
"level": "error",
"ts": 1675436926.6985223,
"msg": "Reconciler error",
"controller": "mariadb",
"controllerGroup": "mariadb.mmontes.io",
"controllerKind": "MariaDB",
"mariaDB": {
"name": "mariadb",
"namespace": "mariadb-test"
},
"namespace": "mariadb-test",
"name": "mariadb",
"reconcileID": "b27c0a29-a642-42ab-aa4c-90778e24f3bd",
"error": "error creating ServiceMonitor: 1 error occurred:\n\t* error creating Service Monitor: ServiceMonitor.monitoring.coreos.com \"mariadb\" is invalid: [spec.endpoints[0].scrapeTimeout: Invalid value: \"'10s'\": spec.endpoints[0].scrapeTimeout in body should match '^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$', spec.endpoints[0].interval: Invalid value: \"'10s'\": spec.endpoints[0].interval in body should match '^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$']\n\n",
"stacktrace": "sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234"
}
I think the problem has to do with quoting, as the resource looks like this after the failed reconciliation:
metrics:
exporter:
image:
pullPolicy: IfNotPresent
repository: prom/mysqld-exporter
tag: v0.14.0
serviceMonitor:
interval: '''10s'''
prometheusRelease: kube-prometheus-stack
scrapeTimeout: '''10s'''
The version used to reproduce this error was v0.0.6.
Instead of creating commands by manipulating strings in the backupcmd pkg, create a CLI to better abstract them without needing to rely on bash at all.
Requirements:
Hi
I don't see any way to use a custom my.cnf file or inject custom configuration for MariaDB.
Can anyone tell me how to do this with this operator?
Support for HA via MariaDB Galera following the same approach as this PoC.
A possible solution might be:
- When reconciling MariaDB resources, create the StatefulSet and Pods with the mariadb.mmontes.io/galera: enabled annotation.
- Introduce a GaleraReconciler that will watch StatefulSets and Pods with the mariadb.mmontes.io/galera: enabled annotation.
- Each Pod will have an extra sidecar container, the Galera reloader, which has an API to reload the /etc/mysql/mariadb.conf.d/galera.conf file and gracefully restart the mariadbd process by sending a signal.
- When a Pod goes down, the GaleraReconciler queries the StatefulSet to get the total number of pods, and based on their availability makes calls to the Galera reloaders to update the configuration. It will use the StatefulSet FQDN to selectively talk to specific instances (mariadb-0.mariadb.default.svc.cluster.local).
Related to:
Additional resources:
Further improvements:
Describe the solution you'd like
I'd love to scale Galera with the use of a Galera Arbitrator. This makes it so we're able to run, for example, 2 Galera nodes with an Arbitrator for the 3rd vote.
Describe alternatives you've considered
Running 3 Galera nodes instead; capacity-wise this consumes more resources.
Additional context
The Cluster Loses its Primary State Due to Split Brain section of the Crash Recovery documentation.
Migrate the operator helm chart to a public OCI registry. DockerHub has support for OCI artifacts already:
We should keep our old chartmuseum instance anyway so we can serve the chart for people using helm < 3.8
Describe the solution you'd like
Docker image based on distroless:
Describe alternatives you've considered
Alpine:
Additional context
Security report shows CVEs related to alpine:
It might be useful to support both these cases from the MariaDB documentation:
This would allow us to:
Seamlessly migrate large datasets to a new MariaDB cluster managed by the operator.
This use case is really useful in migrations. For example, you could set up replication from the old cluster to the new cluster (where the new cluster is managed by the operator). In this case the dataset is constantly synced, so you're able to switch seamlessly to the new cluster.
Spin up additional read-only nodes that never get promoted to master.
In our setup, developers have access to a slave MariaDB server. This ensures that they're not able to impact production workloads by doing heavy queries or queries that apply certain locks.
If you'd like to split this up in different issues, please let me know.
Hi !
First of all, thanks for your work. It's really a nice solution for databases in OpenShift!
We are going to migrate from old physical infrastructure to openshift.
We currently use circular replication (Master -> Slave in both direction / ring replication) between multiple-site.
I would like to use the same type of replication between different Openshift cluster.
I already use 'submariner' to link my cluster with service (export service through VPN)
As of now, if I understand it well, replication will work on a local cluster with a special connection name?
Would it be possible to use the same concept with external resources?
Best regards
We are creating database connections on the fly in the TemplateReconciler
whenever a resource needs to be reconciled, see:
sql.Open returns a sql.DB that maintains its own pool of connections, but the problem here is that we might have multiple MariaDB instances with different connection details, so we cannot reuse the connection.
The idea would be introducing an LRU of sql.DB objects, keeping only the most recently used ones. It would also need to be indexable by the MariaDB's types.NamespacedName so we can efficiently get the right instance on each reconciliation cycle.
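A minimal sketch of such an LRU, assuming entries are keyed by the MariaDB's "namespace/name" string; in the operator the value would be a *sql.DB and the eviction callback would call Close() on it. Names here are illustrative, not the operator's actual code:

```go
package main

import (
	"container/list"
	"fmt"
)

type lruEntry struct {
	key   string
	value any
}

// LRU keeps at most capacity entries, evicting the least recently used one.
type LRU struct {
	capacity int
	order    *list.List // front = most recently used
	items    map[string]*list.Element
	onEvict  func(key string, value any)
}

func NewLRU(capacity int, onEvict func(string, any)) *LRU {
	return &LRU{capacity: capacity, order: list.New(), items: map[string]*list.Element{}, onEvict: onEvict}
}

// Get returns the cached value and marks it as most recently used.
func (l *LRU) Get(key string) (any, bool) {
	el, ok := l.items[key]
	if !ok {
		return nil, false
	}
	l.order.MoveToFront(el)
	return el.Value.(*lruEntry).value, true
}

// Put inserts or refreshes an entry, evicting the oldest one when full.
func (l *LRU) Put(key string, value any) {
	if el, ok := l.items[key]; ok {
		el.Value.(*lruEntry).value = value
		l.order.MoveToFront(el)
		return
	}
	l.items[key] = l.order.PushFront(&lruEntry{key, value})
	if l.order.Len() > l.capacity {
		oldest := l.order.Back()
		l.order.Remove(oldest)
		e := oldest.Value.(*lruEntry)
		delete(l.items, e.key)
		if l.onEvict != nil {
			l.onEvict(e.key, e.value) // e.g. db.Close() in the operator
		}
	}
}

func main() {
	cache := NewLRU(2, func(k string, _ any) { fmt.Println("evicted", k) })
	cache.Put("default/mariadb-a", "db-a")
	cache.Put("default/mariadb-b", "db-b")
	cache.Get("default/mariadb-a")         // touch a, so b becomes the oldest
	cache.Put("default/mariadb-c", "db-c") // evicts default/mariadb-b
}
```

On each reconciliation the controller would do a Get with the MariaDB's NamespacedName string and only open a new sql.DB on a miss.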
Describe the bug
The quickstart process results in pods being launched that get killed off. The pod's events suggest that the service does not run:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 42s default-scheduler Successfully assigned mariadb/mariadb-0 to k3s-n3
Warning Unhealthy 22s (x3 over 32s) kubelet Liveness probe failed: ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)
Normal Killing 22s kubelet Container mariadb failed liveness probe, will be restarted
Warning Unhealthy 2s (x8 over 32s) kubelet Readiness probe failed: ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)
Normal Pulled 1s (x2 over 42s) kubelet Container image "mariadb:10.7.4" already present on machine
Normal Created 1s (x2 over 42s) kubelet Created container mariadb
Normal Started 1s (x2 over 42s) kubelet Started container mariadb
Expected behaviour
A working MariaDB should be created, in a running state.
Steps to reproduce the bug
cert-manager and prometheus are both installed in the cluster.
helm install -n mariadb mariadb-operator mariadb-operator/mariadb-operator -f values.yaml
nameOverride: mariadb
metrics:
enabled: true
ha:
enabled: false
kubectl -n mariadb apply -f samples/config
kubectl -n mariadb apply -f samples/mariadb_v1alpha1_mariadb.yaml
I've modified mariadb_v1alpha1_mariadb.yaml
to include a 'standard' storage class:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
namespace: mariadb
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mariadb-mariadb
labels:
app: mariadb-mariadb
spec:
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
# storageClassName: "mariadb-mariadb"
# storageClassName: local-path
storageClassName: standard
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/cephfs/apps/mariadb/mariadb"
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
name: mariadb
spec:
rootPasswordSecretKeyRef:
name: mariadb
key: root-password
database: mariadb
username: mariadb
passwordSecretKeyRef:
name: mariadb
key: password
image:
repository: mariadb
tag: "10.7.4"
pullPolicy: IfNotPresent
port: 3306
volumeClaimTemplate:
resources:
requests:
storage: 100Mi
storageClassName: standard
accessModes:
- ReadWriteOnce
myCnf: |
[mysqld]
bind-address=0.0.0.0
default_storage_engine=InnoDB
binlog_format=row
innodb_autoinc_lock_mode=2
max_allowed_packet=256M
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 300m
memory: 512Mi
env:
- name: TZ
value: SYSTEM
envFrom:
- configMapRef:
name: mariadb
podSecurityContext:
runAsUser: 0
securityContext:
allowPrivilegeEscalation: false
Additional context
Environment details:
Describe the bug
Given the following CRs
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Database
metadata:
name: ak-test
spec:
mariaDbRef:
name: mariadb
characterSet: utf8
collate: utf8_general_ci
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: User
metadata:
name: ak-test-user
spec:
mariaDbRef:
name: mariadb
passwordSecretKeyRef:
name: ak-test-user
key: password
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Grant
metadata:
name: ak-test-grant
spec:
mariaDbRef:
name: mariadb
privileges:
- "ALL"
database: "ak-test"
table: "*"
username: ak-test-user
grantOption: false
---
apiVersion: v1
kind: Secret
metadata:
name: ak-test-user
data:
password: NjRiYzJlZWYyYzgyN2ZlY2JmYjFiYzMzOWIyMTFkZTQ=
the controller fails to add the grant with the following error:
error reconciling in TemplateReconciler: error creating ak-test-grant: 1 error occurred:\n\t* error granting privileges in MariaDB: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '-test.* TO 'ak-test-user'@'%' WITH GRANT OPTION'
Expected behaviour
I expect the controller to successfully assign the grant
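The error suggests the hyphenated database name is interpolated unquoted into the GRANT statement. A sketch of the likely fix (not the operator's actual code; function names are illustrative): wrap identifiers in backticks, doubling any embedded backtick per MariaDB's quoting rules:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdent backtick-quotes a MariaDB identifier so names containing
// characters like '-' survive, escaping embedded backticks by doubling them.
func quoteIdent(name string) string {
	return "`" + strings.ReplaceAll(name, "`", "``") + "`"
}

// grantStmt builds a GRANT statement with a safely quoted database name.
func grantStmt(privileges []string, database, table, username string) string {
	target := quoteIdent(database) + "." + table // table may be the wildcard *
	return fmt.Sprintf("GRANT %s ON %s TO '%s'@'%%'",
		strings.Join(privileges, ", "), target, username)
}

func main() {
	fmt.Println(grantStmt([]string{"ALL"}, "ak-test", "*", "ak-test-user"))
}
```

With quoting in place, the statement for the CRs above becomes GRANT ALL ON `ak-test`.* TO 'ak-test-user'@'%', which is valid syntax.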
Environment details:
Describe the solution you'd like
Ability to have multiple replicas of MariaDB that are asynchronously replicated. We could potentially support multiple topologies and multiple degrees of asynchronism:
Describe alternatives you've considered
HA via Galera, but it might be worth having a simpler alternative for HA.
Additional context
$ kubectl apply -f config/samples/mariadb_v1alpha1_mariadb.yaml
Error from server (InternalError): error when creating "config/samples/mariadb_v1alpha1_mariadb.yaml": Internal error occurred: failed calling webhook "vmariadb.kb.io": failed to call webhook: Post "https://mariadb-operator-webhook.mariadb-operator.svc:443/validate-mariadb-mmontes-io-v1alpha1-mariadb?timeout=10s": Address is not allowed
It should be possible to specify a cron expression in a new schedule field in order to take backups periodically. The operator should reconcile a CronJob instead of a Job in this case.
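A sketch of how the new field could look (the schedule field is the proposal here, not an existing API; other field names follow the CRs shown in this repo's issues):

```yaml
# Hypothetical: when schedule is set, the operator reconciles a CronJob
# instead of a one-shot Job.
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Backup
metadata:
  name: backup-nightly
spec:
  mariaDbRef:
    name: mariadb
  schedule: "0 3 * * *"   # standard cron expression: every day at 03:00
```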
The generated MariaDB pod is configured with a ConfigMap different from what I put in myCnfConfigMapKeyRef.name.
My configuration
myCnfConfigMapKeyRef:
  name: mariadb-my-cnf
  key: my.cnf
Resulting pod manifest:
- configMap:
    defaultMode: 420
    items:
      - key: my.cnf
        path: my.cnf
    name: config-mariadb
  name: config
I think it comes from here:
❯ rg config-
controllers/mariadb_controller.go
401: Name: fmt.Sprintf("config-%s", mariadb.Name),
Expected behaviour
The name of the ConfigMap should be exactly the one provided as myCnfConfigMapKeyRef.name, or the documentation should explain that the ConfigMap name is built using the "config-" prefix and the name of the MariaDB object.
Steps to reproduce the bug
Configure MariaDB object as described above.
Environment details:
Is your feature request related to a problem? Please describe.
Our infrastructure relies on some k8s labels where we expect specific values:
Describe the solution you'd like
We'd like to be able to override the labels set by mariadb-operator with custom labels.
One way to do this would be to inherit labels from the MariaDB manifest inside child objects (StatefulSet, Pods, ...).
Something like:
apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
name: mariadb
labels:
app.kubernetes.io/component: mycomponent
app.kubernetes.io/instance: myinstance
app.kubernetes.io/name: myname
app.kubernetes.io/version: 1.0.0
would create a StatefulSet and the associated Pods with the same labels.
Potentially, a configuration option to specify which labels should be inherited could allow backward compatibility.
Something like the configuration done by zalando with their postgres operator:
inherited_labels list of label keys that can be inherited from the cluster manifest, and added to each child objects (Deployment, StatefulSet, Pod, PVCs, PDB, Service, Endpoints and Secrets) created by the operator. Typical use case is to dynamically pass labels that are specific to a given Postgres cluster, in order to implement NetworkPolicy. The default is empty.
https://opensource.zalando.com/postgres-operator/docs/reference/operator_parameters.html
Be able to take and restore backups using AWS S3 as storage.
Thank you for your operator. It works quite well! (Although the helm repo https://charts.mmontes-dev.duckdns.org/ is often not available.)
MariaDB version tags are immutable.
What is the recommended upgrade path?
Is your feature request related to a problem? Please describe.
Many operators out there, such as Zalando's Postgres Operator, Stackgres and MinIO have a UI to create and manage deployments. I like this for monitoring purposes, because it enables a better overview of the available options and deployments.
Describe the solution you'd like
Some kind of management UI, to create and view deployments, change any mutable options, restart deployments to apply those configurations, and anything else that makes sense to have in an operator UI. Additionally, it could integrate with OIDC or Kubernetes RBAC for authentication.
Describe alternatives you've considered
Well, using kubectl
and the CRDs, but sometimes a UI is just simpler.
Additional context
I'd love to help out with this, if there's a vision for this project then collaborating with its creators would make sense. I have skills in both Go and React, so building a dashboard and management UI shouldn't be too hard. Would love to hear if this is on the roadmap, and how I can help!
Hi,
The Grant example shows how to grant basic rights to a user, but how do I list them all? What should I declare to grant all rights on a database to a user?
Thanks in advance,
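Not an authoritative answer, but based on the Grant resource shown in another issue on this page, granting "ALL" over every table of a database should give a user full rights on it (the database and user names below are illustrative):

```yaml
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Grant
metadata:
  name: grant-all
spec:
  mariaDbRef:
    name: mariadb
  privileges:
    - "ALL"        # all privileges on the target
  database: "mydb" # illustrative database name
  table: "*"       # every table in the database
  username: myuser # illustrative user name
  grantOption: false
```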
Is your feature request related to a problem? Please describe.
When performing backups with large databases we would like the ability to not lock tables.
Before using operator we were creating backups using: mysqldump --single-transaction
Describe the solution you'd like
The ability to customize logical backup options such as locking tables, using --single-transaction, and other flags.
An alternative would be running a different backup process using a standalone cron job.
Environment details:
Is your feature request related to a problem? Please describe.
I would like to have mariadb running with root fs readonly, like all containers should.
Describe the solution you'd like
Either more direct support for it, or at least the ability to define additional custom volumes for the pod (e.g. /tmp, /var/lib).
Describe alternatives you've considered
There does not seem to be any option today to achieve it. The read-only root fs mode can be set via the securityContext, but there seems to be no way to provide the additional (emptyDir) volumes to the MariaDB pod.
Get "Error creating ConfigMap" when using myCnfConfigMapKeyRef
Reproduce steps:
kubectl apply -f config/samples/config
kubectl apply -f config/samples/mariadb_v1alpha1_mariadb_config.yaml
mariadb-operator: v0.11.0 installed via Helm
$ kubectl get mariadb
NAME READY STATUS AGE
mariadb False Error creating ConfigMap 6s
$ kubectl describe mariadb
Name: mariadb
Namespace: default
Labels: <none>
Annotations: <none>
API Version: mariadb.mmontes.io/v1alpha1
Kind: MariaDB
Metadata:
Creation Timestamp: 2023-03-21T03:52:39Z
Generation: 1
Managed Fields:
API Version: mariadb.mmontes.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:image:
.:
f:pullPolicy:
f:repository:
f:tag:
f:myCnfConfigMapKeyRef:
f:port:
f:rootPasswordSecretKeyRef:
f:volumeClaimTemplate:
.:
f:accessModes:
f:resources:
.:
f:requests:
.:
f:storage:
f:storageClassName:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2023-03-21T03:52:39Z
API Version: mariadb.mmontes.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
Manager: mariadb-operator
Operation: Update
Subresource: status
Time: 2023-03-21T03:52:39Z
Resource Version: 5080193
UID: a22ab51e-7353-48d6-b25a-98709cce1e46
Spec:
Image:
Pull Policy: IfNotPresent
Repository: mariadb
Tag: 10.7.4
My Cnf Config Map Key Ref:
Key: my.cnf
Name: mariadb-my-cnf
Port: 3306
Root Password Secret Key Ref:
Key: root-password
Name: mariadb
Volume Claim Template:
Access Modes:
ReadWriteOnce
Resources:
Requests:
Storage: 100Mi
Storage Class Name: standard
Status:
Conditions:
Last Transition Time: 2023-03-21T03:52:39Z
Message: Error creating ConfigMap
Reason: Failed
Status: False
Type: Ready
Events: <none>
Be able to provide a Service template in the MariaDB CRD to customize how it is exposed in the cluster. For example, if we wanted to expose it using MetalLB:
apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
name: mariadb
spec:
....
service:
type: LoadBalancer
annotations:
metallb.universe.tf/address-pool: sandbox
It should default to type ClusterIP and no annotations.
Kubernetes version: microk8s v1.26.1
mariadb-operator version: latest
Install method: helm
Install flavour: minimal
kubectl apply -f config/samples/mariadb_v1alpha1_mariadb_minimal.yaml
kubectl describe pod/mariadb-0
Events:
Type Reason Age From Message
Normal Scheduled 40s default-scheduler Successfully assigned mariadb-operator/mariadb-0 to kvmub02
Normal SuccessfulAttachVolume 39s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-06c18fac-12b5-4808-8b0d-9c4bee6849b9"
Normal Pulled 11s (x3 over 31s) kubelet Container image "mariadb:10.7.4" already present on machine
Normal Created 11s (x3 over 31s) kubelet Created container mariadb
Normal Started 11s (x3 over 31s) kubelet Started container mariadb
Warning BackOff 7s (x7 over 29s) kubelet Back-off restarting failed container mariadb in pod mariadb-0_mariadb-operator(96449cb0-be28-4855-89d3-4363d0217ece)
kubectl logs pod/mariadb-0
2023-02-28 15:29:32+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.7.4+maria~focal started.
2023-02-28 15:29:32+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2023-02-28 15:29:32+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.7.4+maria~focal started.
2023-02-28 15:29:32+00:00 [Note] [Entrypoint]: Initializing database files
2023-02-28 15:29:32 0 [ERROR] InnoDB: The Auto-extending data file './ibdata1' is of a different size 767 pages than specified by innodb_data_file_path
2023-02-28 15:29:32 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2023-02-28 15:29:33 0 [ERROR] Plugin 'InnoDB' init function returned error.
2023-02-28 15:29:33 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2023-02-28 15:29:33 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2023-02-28 15:29:33 0 [ERROR] Aborting
Installation of system tables failed! Examine the logs in
/var/lib/mysql/ for more information.
The problem could be conflicting information in an external
my.cnf files. You can ignore these by doing:
shell> /usr/bin/mariadb-install-db --defaults-file=~/.my.cnf
You can also try to start the mysqld daemon with:
shell> /usr/sbin/mariadbd --skip-grant-tables --general-log &
and use the command line tool /usr/bin/mariadb
to connect to the mysql database and look at the grant tables:
shell> /usr/bin/mysql -u root mysql
mysql> show tables;
Try 'mysqld --help' if you have problems with paths. Using
--general-log gives you a log in /var/lib/mysql/ that may be helpful.
The latest information about mysql_install_db is available at
https://mariadb.com/kb/en/installing-system-tables-mysql_install_db
You can find the latest source at https://downloads.mariadb.org and
the maria-discuss email list at https://launchpad.net/~maria-discuss
Please check all of the above before submitting a bug report
at https://mariadb.org/jira
MariaDB has its own backup command, mariabackup, to perform backups; we should migrate to it.
Usage can be found here under the Creating backups with Mariabackup section:
docker run --user mysql -v some-mariadb-socket:/var/run/mysqld -v some-mariadb-backup:/backup -v /my/own/datadir:/var/lib/mysql --rm mariadb:latest mariabackup --backup --target-dir=/backup
To considerably reduce the size of backups, it would be nice to support incremental backups via mariabackup:
Provide HA by adding support of multiple topologies by using https://github.com/openark/orchestrator
Make sure the operator behaves as expected in OCP, for example by running the automated tests, and then publish the operator in OperatorHub:
More detailed instructions can be found in this comment.
It's worth mentioning that OpenShift might require a different set of RBAC permissions:
1.6759603221844468e+09 ERROR Reconciler error {"controller": "restoremariadb", "controllerGroup": "database.mmontes.io", "controllerKind": "RestoreMariaDB", "restoreMariaDB": {"name":"restore","namesp: 1 error occurred:\n\t* error reconciling batch: error creating Job: jobs.batch "restore" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: ,
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234
Looks like the fix is (I'm still on the old CRD naming convention):
Make use of an initContainer to wait until the MariaDB StatefulSet is ready in the BackupMariaDb and RestoreMariaDb jobs.
Kubernetes version: microk8s v1.26.1
mariadb-operator version: latest
Install method: helm
Install flavour: minimal
Hi, when I connect an application (fireflyiii) sometimes the warning "Aborted connection .. to db: '...' user: '...' host: '10.1.156.63' (Got an error reading communication packets)" appears. The application does not receive the expected data in that case.
Thank you for any help.