December 4, 2022



Oracle’s MySQL Operator for Kubernetes is a convenient way to automate MySQL database provisioning within your cluster. One of the operator’s headline features is built-in hands-off backup support that increases your resiliency. Backups copy your database to external storage on a recurring schedule.

This article will walk you through setting up backups to an Amazon S3-compatible object storage service. You’ll also see how to store backups in Oracle Cloud Infrastructure (OCI) storage or local persistent volumes inside your cluster.

Preparing a Database Cluster

Install the MySQL operator in your Kubernetes cluster and create a simple database instance for testing purposes. Copy the YAML below and save it to mysql.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-user
stringData:
  rootHost: "%"
  rootUser: "root"
  rootPassword: "P@$$w0rd"

---

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1

Use kubectl to apply the manifest:

$ kubectl apply -f mysql.yaml

Wait a few minutes while the MySQL operator provisions your Pods. Use kubectl’s get pods command to check on the progress. You should see four running Pods: one MySQL router instance and three MySQL server replicas.

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
mysql-cluster-0                         2/2     Running   0          2m
mysql-cluster-1                         2/2     Running   0          2m
mysql-cluster-2                         2/2     Running   0          2m
mysql-cluster-router-6b68f9b5cb-wbqm5   1/1     Running   0          2m

Defining a Backup Schedule

The MySQL operator requires two components to successfully create a backup:

  • A backup schedule, which defines when the backup will run.
  • A backup profile, which configures the storage location and MySQL export options.

Schedules and profiles are created independently of each other. This lets you run multiple backups on different schedules using the same profile.

Each schedule and profile is associated with a specific database cluster. They’re created as nested resources inside your InnoDBCluster objects. Each database you create with the MySQL operator needs its own backup configuration.
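For example, the fragment below (with illustrative names) pairs two schedules with one shared profile:

```yaml
# Illustrative fragment: two schedules reusing a single backup profile.
spec:
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"         # every hour, on the hour
      backupProfileName: s3-backup
    - name: nightly
      enabled: true
      schedule: "30 2 * * *"        # 02:30 every day
      backupProfileName: s3-backup  # same profile, different cadence
```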

Backup schedules are defined by your database’s spec.backupSchedules field. Each item requires a schedule field that specifies when to run the backup using a cron expression. Here’s an example that starts a backup every hour:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup

The backupProfileName field references the backup profile to use. You’ll create this in the next step.
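As a quick sanity check on cron expressions like the one above, here’s a minimal, illustrative matcher for the five-field format. It only handles `*` and literal numbers, not ranges or steps, so treat it as a sketch rather than a full cron implementation:

```python
from datetime import datetime

def matches_cron(expr: str, dt: datetime) -> bool:
    """Check whether dt matches a 5-field cron expression.

    Supports only '*' and literal numbers in each field:
    minute, hour, day-of-month, month, day-of-week (0 = Sunday).
    """
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

# "0 * * * *" fires at the top of every hour, and nowhere else.
print(matches_cron("0 * * * *", datetime(2022, 10, 31, 22, 0)))   # True
print(matches_cron("0 * * * *", datetime(2022, 10, 31, 22, 30)))  # False
```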

Creating Backup Profiles

Profiles are defined in the spec.backupProfiles field. Each profile should have a name and a dumpInstance property that configures the backup operation.

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          # ...

Backup storage is configured on a per-profile basis in the dumpInstance.storage field. The properties you need to supply depend on the type of storage you’re using.

S3 Storage

The MySQL operator can upload your backups directly to S3-compatible object storage providers. To use this method, you must create a Kubernetes secret that contains an aws CLI config file with your credentials.

Add the following content to s3-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
stringData:
  credentials: |
    [default]
    aws_access_key_id = YOUR_S3_ACCESS_KEY
    aws_secret_access_key = YOUR_S3_SECRET_KEY

Substitute in your own S3 access and secret keys, then use kubectl to create the secret:

$ kubectl apply -f s3-secret.yaml
secret/s3-secret created
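A note on stringData: Kubernetes stores secret values base64-encoded under the data field; stringData is just a write-time convenience. A short illustration of the encoding you’d see in the stored secret:

```python
import base64

# The plaintext credentials file supplied via stringData above.
credentials = (
    "[default]\n"
    "aws_access_key_id = YOUR_S3_ACCESS_KEY\n"
    "aws_secret_access_key = YOUR_S3_SECRET_KEY\n"
)

# Kubernetes base64-encodes stringData values into the data field.
encoded = base64.b64encode(credentials.encode()).decode()

# Decoding recovers the original file exactly.
decoded = base64.b64decode(encoded).decode()
assert decoded == credentials
print(encoded[:24])
```

Running `kubectl get secret s3-secret -o yaml` would show the base64 form under data rather than the plaintext you wrote.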

Next, add the following fields to your backup profile’s storage.s3 section:

  • bucketName – The name of the S3 bucket to upload your backups to.
  • prefix – Set this to apply a prefix to your uploaded files, such as /my-app/mysql. The prefix allows you to create folder trees within your bucket.
  • endpoint – Set this to your service provider’s URL when you’re using third-party S3-compatible storage. You can omit this field if you’re using Amazon S3.
  • config – The name of the secret containing your credentials file.
  • profile – The name of the config profile to use within the credentials file. This was set to default in the example above.
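When targeting a third-party provider such as MinIO, the endpoint field comes into play. An illustrative fragment (the endpoint URL is a placeholder):

```yaml
backupProfiles:
  - name: hourly-backup
    dumpInstance:
      storage:
        s3:
          bucketName: backups
          prefix: /mysql
          endpoint: https://minio.example.com  # placeholder third-party S3 endpoint
          config: s3-secret
          profile: default
```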

Here’s a complete example:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          s3:
            bucketName: backups
            prefix: /mysql
            config: s3-secret
            profile: default

Applying this manifest will activate hourly database backups to your S3 account.

OCI Storage

The operator supports Oracle Cloud Infrastructure (OCI) object storage as an alternative to S3. It’s configured in a similar way. First create a secret with your OCI credentials:

apiVersion: v1
kind: Secret
metadata:
  name: oci-secret
stringData:
  fingerprint: YOUR_OCI_FINGERPRINT
  passphrase: YOUR_OCI_PASSPHRASE
  privatekey: YOUR_OCI_RSA_PRIVATE_KEY
  region: us-ashburn-1
  tenancy: YOUR_OCI_TENANCY
  user: YOUR_OCI_USER

Next, configure the backup profile with a storage.ociObjectStorage stanza:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          ociObjectStorage:
            bucketName: backups
            prefix: /mysql
            credentials: oci-secret

Adjust the bucketName and prefix fields to set the upload location in your OCI account. The credentials field must reference the secret that contains your OCI credentials.

Kubernetes Volume Storage

Local persistent volumes are a third storage option. This is less robust, as your backup data will still reside inside your Kubernetes cluster. However, it can be useful for one-off backups and testing purposes.

First create a persistent volume and an accompanying claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: backup-pv
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

This example manifest is not suitable for production use. You should select an appropriate storage class and volume mounting mode for your Kubernetes distribution.

Next, configure your backup profile to use your persistent volume by adding a storage.persistentVolumeClaim field:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  backupSchedules:
    - name: hourly
      enabled: true
      schedule: "0 * * * *"
      backupProfileName: hourly-backup
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        storage:
          persistentVolumeClaim:
            claimName: backup-pvc

The persistent volume claim created earlier is referenced by the claimName field. The MySQL operator will now deposit backup data into the volume.
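For the one-off backups mentioned above, the operator also accepts a standalone MySQLBackup resource that triggers a single run of an existing profile. A hedged sketch; check the CRDs shipped with your operator version for the exact schema:

```yaml
apiVersion: mysql.oracle.com/v2
kind: MySQLBackup
metadata:
  name: manual-backup
spec:
  clusterName: mysql-cluster        # the InnoDBCluster to back up
  backupProfileName: hourly-backup  # reuse a profile defined on the cluster
```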

Setting Backup Options

Backups are created using the MySQL Shell’s dumpInstance utility. This defaults to exporting a complete dump of your server. The format writes structure and chunked data files for each table. The output is compressed with zstd.

You can pass options through to dumpInstance via the dumpOptions field in a MySQL operator backup profile:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster
spec:
  # ...
  backupProfiles:
    - name: hourly-backup
      dumpInstance:
        dumpOptions:
          chunking: false
          compression: gzip
        storage:
          # ...

This example disables chunked output, creating one data file per table, and switches to gzip compression instead of zstd. You can find a complete reference for the available options in the MySQL documentation.
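If you ever need to confirm which compression a dump file actually uses, the two container formats are easy to tell apart by their leading magic bytes. A small sketch:

```python
import gzip

def detect_compression(data: bytes) -> str:
    """Identify gzip or zstd data by its leading magic bytes."""
    if data[:2] == b"\x1f\x8b":          # gzip magic number
        return "gzip"
    if data[:4] == b"\x28\xb5\x2f\xfd":  # zstd frame magic number
        return "zstd"
    return "unknown"

# A gzip-compressed sample is recognized by its first two bytes.
sample = gzip.compress(b"CREATE TABLE t (id INT);")
print(detect_compression(sample))  # gzip
```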

Restoring a Backup

The MySQL operator can initialize new database clusters using previously created dumpInstance files. This allows you to restore your backups directly into your Kubernetes cluster. It’s useful in recovery situations or when you’re migrating an existing database to Kubernetes.

Database initialization is managed by the spec.initDB field in your InnoDBCluster objects. Within this stanza, use the dump.storage object to reference the backup location you used earlier. The format matches the equivalent dumpInstance.storage field in backup profile objects.

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
stringData:
  credentials: |
    [default]
    aws_access_key_id = YOUR_S3_ACCESS_KEY
    aws_secret_access_key = YOUR_S3_SECRET_KEY

---

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql-cluster-recovered
spec:
  secretName: mysql-root-user
  instances: 3
  tlsUseSelfSigned: true
  router:
    instances: 1
  initDB:
    dump:
      storage:
        s3:
          bucketName: backups
          prefix: /mysql/mysql20221031220000
          config: s3-secret
          profile: default

Applying this YAML file will create a new database cluster that’s initialized with the dumpInstance output in the specified S3 bucket. The prefix field must contain the full path to the dump files within the bucket. Backups created by the operator are automatically saved in timestamped folders; you’ll need to indicate which one to recover by setting the prefix. If you’re restoring from a persistent volume, use the path field instead of prefix.
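The timestamped folder in the example above follows a name-plus-UTC-timestamp pattern (mysql20221031220000). Assuming that convention holds for your operator version, a restore prefix can be assembled like this (the function name and the convention itself are assumptions, not part of the operator’s API):

```python
from datetime import datetime

def restore_prefix(base: str, name: str, taken_at: datetime) -> str:
    """Build an S3 prefix for a dump folder, assuming the
    <name><YYYYMMDDHHMMSS> naming pattern seen in the example above."""
    return f"{base}/{name}{taken_at.strftime('%Y%m%d%H%M%S')}"

print(restore_prefix("/mysql", "mysql", datetime(2022, 10, 31, 22, 0, 0)))
# /mysql/mysql20221031220000
```

Listing the bucket under your base prefix shows which timestamped folders are available to restore from.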

Summary

Oracle’s MySQL operator automates MySQL database management inside Kubernetes clusters. In this article you’ve learned how to configure the operator’s backup system to store full database dumps in a persistent volume or an object storage bucket.

Using Kubernetes to horizontally scale MySQL adds resiliency, but external backups are still essential in case your cluster is compromised or data is accidentally deleted. The MySQL operator can restore a new database instance from your backup whenever you need to, simplifying post-disaster recovery.


