How to Backup Kubernetes MySQL Operator Clusters

New Configuration (How To)

Situation

Oracle’s MySQL Operator for Kubernetes is a convenient way to automate MySQL database provisioning within your cluster. One of the operator’s headline features is integrated hands-off backup support that increases your resiliency. Backups copy your database to external storage on a recurring schedule.

This article will walk you through setting up backups to an Amazon S3-compatible object storage service. You’ll also see how to store backups in Oracle Cloud Infrastructure (OCI) storage or local persistent volumes inside your cluster.

Solution

Preparing a Database Cluster

Install the MySQL operator in your Kubernetes cluster, then create a simple database instance for testing purposes.
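
If the operator isn't installed yet, the upstream Helm chart is a common way to set it up. The commands below are a minimal sketch; the repository URL, chart name, and namespace are the project's defaults at the time of writing, so check the MySQL Operator documentation for your version:

$ helm repo add mysql-operator https://mysql.github.io/mysql-operator/
$ helm repo update
$ helm install mysql-operator mysql-operator/mysql-operator \
    --namespace mysql-operator --create-namespace

With the operator running, copy the YAML below and save it to mysql.yaml: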

apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-user
stringData:
  rootHost: "%"
  rootUser: "root"
  rootPassword: "P@$$w0rd"
 
---

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1

Use Kubectl to apply the manifest:

$ kubectl apply -f mysql.yaml

Wait a few minutes while the MySQL operator provisions your Pods. Use Kubectl’s get pods command to check on the progress. You should see four running Pods: one MySQL router instance and three MySQL server replicas.

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
mysql-cluster-0                         2/2     Running   0          2m
mysql-cluster-1                         2/2     Running   0          2m
mysql-cluster-2                         2/2     Running   0          2m
mysql-cluster-router-6b68f9b5cb-wbqm5   1/1     Running   0          2m

Defining a Backup Schedule

The MySQL operator requires two components to successfully create a backup:

  • A backup schedule, which defines when the backup will run.
  • A backup profile, which configures the storage location and MySQL export options.

Schedules and profiles are created independently of each other. This lets you run multiple backups on different schedules using the same profile.

Each schedule and profile is associated with a specific database cluster. They’re created as nested resources within your InnoDBCluster objects. Each database you create with the MySQL operator needs its own backup configuration.

Backup schedules are defined by your database’s spec.backupSchedules field. Each item requires a schedule field that specifies when to run the backup using a cron expression. Here’s an example that starts a backup every hour:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup

The backupProfileName field references the backup profile to use. You’ll create this in the next step.
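
Because schedules reference profiles only by name, several schedules can share one profile. As a sketch, here's how a second daily run could sit alongside the hourly one; the daily schedule's name and cron expression are purely illustrative:

  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
    - name: daily
      enabled: true
      schedule: "0 3 * * *"
      backupProfileName: hourly-backup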

Creating Backup Profiles

Profiles are defined in the spec.backupProfiles field. Each profile should have a name and a dumpInstance property that configures the backup operation.

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          # ...

Backup storage is configured on a per-profile basis in the dumpInstance.storage field. The properties you need to supply depend on the type of storage you’re using.

S3 Storage

The MySQL operator can upload your backups straight to S3-compatible object storage providers. To use this method, you must first create a Kubernetes secret containing an AWS CLI-style credentials file with your access keys.

Add the following content to s3-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
stringData:
  credentials: |
    [default]
    aws_access_key_id = YOUR_S3_ACCESS_KEY
    aws_secret_access_key = YOUR_S3_SECRET_KEY

Substitute in your own S3 access and secret keys, then use Kubectl to create the secret:

$ kubectl apply -f s3-secret.yaml
secret/s3-secret created

Next add the following fields to your backup profile’s storage.s3 section:

  • bucketName – The name of the S3 bucket to upload your backups to.
  • prefix – Set this to apply a prefix to your uploaded files, such as /my-app/mysql. The prefix allows you to create folder trees within your bucket.
  • endpoint – Set this to your service provider’s URL when you’re using third-party S3-compatible storage. You can omit this field if you’re using Amazon S3.
  • config – The name of the secret containing your credentials file.
  • profile – The name of the config profile to use within the credentials file. This was set to default in the example above.

Here’s a complete example:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          s3:
            bucketName: backups
            prefix: /mysql
            config: s3-secret
            profile: default

Applying this manifest will activate hourly database backups to your S3 account.
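
After you apply the updated manifest, you can confirm the schedule was registered. In recent operator releases each enabled schedule is run through a Kubernetes CronJob; the exact object names vary by version, so treat this as a quick sanity check rather than a guaranteed output:

$ kubectl apply -f mysql.yaml
$ kubectl get cronjobs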

OCI Storage

The operator supports Oracle Cloud Infrastructure (OCI) object storage as an alternative to S3. It’s configured in a similar way. First create a secret for your OCI credentials:

apiVersion: v1
kind: Secret
metadata:
  name: oci-secret
stringData:
  fingerprint: YOUR_OCI_FINGERPRINT
  passphrase: YOUR_OCI_PASSPHRASE
  privatekey: YOUR_OCI_RSA_PRIVATE_KEY
  region: us-ashburn-1
  tenancy: YOUR_OCI_TENANCY
  user: YOUR_OCI_USER
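
Assuming you saved this manifest as oci-secret.yaml, apply it with Kubectl just as you did for the S3 secret:

$ kubectl apply -f oci-secret.yaml
secret/oci-secret created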

Next configure the backup profile with a storage.ociObjectStorage stanza:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          ociObjectStorage:
            bucketName: backups
            prefix: /mysql
            credentials: oci-secret

Modify the bucketName and prefix fields to set the upload location in your OCI account. The credentials field must reference the secret that contains your OCI credentials.

Kubernetes Volume Storage

Local persistent volumes are a third storage option. This is less robust because your backup data still resides inside your Kubernetes cluster. However, it can be useful for one-off backups and testing purposes.

First create a persistent volume and accompanying claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: backup-pv
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp
 
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

This example manifest is not suitable for production use. You should select an appropriate storage class and volume mounting mode for your Kubernetes distribution.
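
Assuming you saved the manifest above as backup-volume.yaml, apply it before wiring the claim into your backup profile:

$ kubectl apply -f backup-volume.yaml
persistentvolume/backup-pv created
persistentvolumeclaim/backup-pvc created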

Next configure your backup profile to use your persistent volume by adding a storage.persistentVolumeClaim field:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          persistentVolumeClaim:
            claimName: backup-pvc

The persistent volume claim created earlier is referenced by the claimName field. The MySQL operator will now deposit backup data into the volume.
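
To inspect what the operator has written, you can mount the claim into a throwaway Pod and list its contents. This is only a sketch; the Pod name, the busybox image, and the backup-inspector.yaml filename are arbitrary choices:

apiVersion: v1
kind: Pod
metadata:
  name: backup-inspector
spec:
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: backups
          mountPath: /backups
  volumes:
    - name: backups
      persistentVolumeClaim:
        claimName: backup-pvc

$ kubectl apply -f backup-inspector.yaml
$ kubectl exec -it backup-inspector -- ls /backups

Delete the Pod once you've finished browsing the dump folders.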

Setting Backup Options

Backups are created using the MySQL Shell’s dumpInstance utility. This defaults to exporting a complete dump of your server. The format writes structure and chunked data files for each table. The output is compressed with zstd.

You can pass options through to dumpInstance via the dumpOptions field in a MySQL operator backup profile:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  # ...
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        dumpOptions:
          chunking: false
          compression: gzip
        storage:
          # ...

This example disables chunked output, creating one data file per table, and switches to gzip compression instead of zstd. You can find a complete reference for available options in the MySQL documentation.

Restoring a Backup

The MySQL operator can initialize new database clusters using previously created files from dumpInstance. This allows you to restore your backups straight into your Kubernetes cluster. It’s useful in recovery situations or when you’re migrating an existing database to Kubernetes.

Database initialization is controlled by the spec.initDB field on your InnoDBCluster objects. Within this stanza, use the dump.storage object to reference the backup location you used earlier. The format matches the equivalent dumpInstance.storage field in backup profile objects.

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
stringData:
  credentials: |
    [default]
    aws_access_key_id = YOUR_S3_ACCESS_KEY
    aws_secret_access_key = YOUR_S3_SECRET_KEY

---

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster-recovered
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  initDB:
    dump:
      storage:
        s3:
          bucketName: backups
          prefix: /mysql/mysql20221031220000
          config: s3-secret
          profile: default

Applying this YAML file will create a new database cluster that’s initialized with the dumpInstance output in the specified S3 bucket. The prefix field must contain the full path to the dump files within the bucket. Backups created by the operator will automatically be stored in timestamped folders; you’ll need to indicate which one to recover by setting the prefix. If you’re restoring from a persistent volume, use the path field instead of prefix.
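
For reference, here's a sketch of the persistent volume variant using the claim from the earlier example. The timestamped folder name is illustrative, and the placement of path (shown here as a sibling of storage under dump) is an assumption; verify the exact field layout against the InnoDBCluster CRD for your operator version:

  initDB:
    dump:
      path: /mysql20221031220000
      storage:
        persistentVolumeClaim:
          claimName: backup-pvc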

Oracle’s MySQL operator automates MySQL database management within Kubernetes clusters. In this article you’ve learned how to configure the operator’s backup system to store complete database dumps in a persistent volume or object storage bucket.

Using Kubernetes to horizontally scale MySQL adds resiliency, but external backups are still vital in case your cluster is compromised or data is accidentally deleted. The MySQL operator can restore a new database instance from your backup if you ever need to, simplifying the post-disaster recovery procedure.

Solution type

Permanent
