In the previous post in this series, we got Veeam Kasten for Kubernetes up and running on our local development cluster. If you’re picking up from there, you should have a dashboard that looks like this:
In this post I will walk you through this dashboard, highlight some key features and functionalities, and show you how to get started.
Dashboard Overview
The top of the Veeam Kasten dashboard displays a list of applications, the policies that exist in your system, and a summary of your cluster’s backup data footprint.
In the “Applications” box, you will see the number of applications discovered and the following three categories:
- Unmanaged: No protection policies currently cover this object.
- Non-compliant: A policy applies to this object, but the actions associated with that policy are failing (e.g., due to underlying storage slowness, configuration problems, etc.) or the actions haven’t been invoked yet (e.g., right after policy creation).
- Compliant: One or more policies apply to this object, and the policy service level agreements (SLAs) are successfully being met.
Not much is happening in our cluster, so if you click on “Applications”, you should see something like the following:
From here, we can hit the three dots next to the application and perform several activities:
Location Configuration
To create a backup, we are going to want to send our data outside of the cluster. For us to do that, we need to create a location profile. On the left ribbon, select “Location” under “Profiles”.
On this new screen, select “New Profile”.
As you can see below, there are various options to choose from when it comes to sending and storing our backup data in an external location.
Amazon S3, Azure Storage, and Google Cloud Storage provide us with an external Object Storage repository that we can send our backups to. With Amazon S3 and Azure Storage, we can use immutability as well.
NFS FileStore: You may be required to store your backups locally and have an NFS server available. This NFS server must be reachable from the nodes in your cluster, and the NFS share must be exported and mountable on all nodes.
S3 Compatible: Much like Amazon S3, this option lets you leverage another cloud-based object storage provider or your own on-premises object storage. It can also use immutability.
Veeam Repository: Much like the NFS export option, you may have already invested in a Veeam Repository for your other platform backups. This option supports backups from Kubernetes clusters that use the vSphere CSI driver. Only PVC data is sent to the Veeam Repository; metadata must be sent to an NFS or object storage location profile.
For our scenario, I am going to use a Wasabi S3-compatible location profile, but the steps are the same for other object storage options.
Here, we will select the S3 Compatible radio button and the wizard will appear. Provide a name, then add your access key and secret key. You will also need to enter your endpoint, region, and bucket name. The bucket in this example was created previously in the Wasabi console.
The final checkbox is for immutability. We won’t cover this now, but we will later on in this blog series.
When complete, select “Save Profile”. If all details are correct, you should see something like this in “Location Profiles”:
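Behind the scenes, Kasten stores each location profile as a Profile custom resource. As a quick sanity check from the command line (assuming Kasten is installed in its default kasten-io namespace), you can list them with:
kubectl get profiles.config.kio.kasten.io -n kasten-io
The new profile should appear in the list.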
Storage and Application Deployment
At this point, we have Kasten up and running, and we have a location profile to store our backups in. We now need an application and some data to protect. For this, we must ensure that the storage in our development cluster is ready, and we can confirm this by running a primer script.
NOTE: This primer script is useful to run before Kasten installation to confirm you have a valid cluster configuration.
Open a terminal that has access to your Kubernetes cluster and is connected to the correct context, then run this command against your cluster.
For context, I am using the same minikube cluster that I used in our first post in this series.
curl https://docs.kasten.io/tools/k10_primer.sh | bash
From the above, you can see that we do not have any Container Storage Interface (CSI) capabilities on this cluster. The CSI is what we will generally use across the board when it comes to protecting persistent data within a Kubernetes cluster. I will cover CSI in more detail in our next blog.
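If you want to check directly which CSI drivers (if any) are registered on your cluster, kubectl can list them:
kubectl get csidrivers
On a cluster without CSI support, this returns no resources.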
For us to continue, we need to make sure we have relevant storage available in our cluster. Over the next few steps, we will:
- Install the Hostpath CSI driver and a VolumeSnapshotClass.
- Annotate the VolumeSnapshotClass for Kasten.
- Change the default StorageClass.
- Deploy our mission-critical application (i.e., K10app) into our cluster.
This may seem daunting, especially if this is your first endeavour into Kubernetes. Just think of this process as adding a datastore into your vSphere cluster so that your virtual machines (VMs) run on your most performant and efficient storage.
The following command block will install the external-snapshotter and the csi-driver-host-path driver. It is worth noting that this approach is not going to provide enterprise performance; only test and learning content should be stored with this storage option.
git clone https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter/
kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
cd ..
git clone https://github.com/kubernetes-csi/csi-driver-host-path.git
cd csi-driver-host-path/
./deploy/kubernetes-latest/deploy.sh
kubectl create -f ./examples/csi-storageclass.yaml
If you then run the following commands, you will see that we have a new StorageClass and a new VolumeSnapshotClass:
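kubectl get storageclass
kubectl get volumesnapshotclass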
We then need to annotate the newly created VolumeSnapshotClass with the following command:
kubectl annotate volumesnapshotclass csi-hostpath-snapclass \
k10.kasten.io/is-snapshot-class=true
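To confirm the annotation took effect, you can inspect the VolumeSnapshotClass and look for k10.kasten.io/is-snapshot-class: "true" in its metadata:
kubectl get volumesnapshotclass csi-hostpath-snapclass -o yaml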
Finally, before deploying our application, we need to change our default StorageClass where new applications will store their persistent data.
kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
At this stage, you could run the primer script again to see the CSI capabilities now available on the cluster.
We can now deploy our application. Using Helm, we first add the demo application’s chart repository with the following command:
helm repo add k10app https://k10app.github.io/k10app/
Following this, we can use the command below to install the application into our cluster:
helm install k10app --namespace k10app k10app/k10app --create-namespace
Once deployed, and once all pods are in a ready state, we can explore the application and start protecting its data in case of emergency.
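To watch the pods come up until everything reports Running, you can use:
kubectl get pods -n k10app -w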
First, we can check the persistent volume claims we have for our application with the following command:
kubectl get pvc -n k10app
As you can see from the above, we have three different databases that are being used in this application. If you want to find out more about this demo application, look here: https://github.com/k10app/k10app
Protecting Applications
We now have our mission-critical application up and running, and we’re at the stage where we want to create a backup policy before we move on to the subject of our next blog. When we head back to the dashboard, you can see that a new application shows up. You will also notice in the box below that we have all the objects associated with that application as well.
We can now create a policy using the drop-down menu:
First, we will create a snapshot schedule to protect our application. Think of snapshots as very fast recovery points. Even in Kubernetes, though, backups should adhere to the 3-2-1 Rule, so we will also get a copy of our data offsite and out of the cluster.
If we then scroll down a little, we can see “Enable Backups via Snapshot Exports”. Turn this on and select the frequency at which you want to send snapshots off into your object storage location. In my case, I’m using Wasabi as my location profile.
At this point, we can create the policy. All further information will be covered in blog posts to come! These posts will include Kanister, which is how Kasten speaks to the data services specifically in order to take application-consistent backups.
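For reference, everything this wizard builds is stored by Kasten as a Policy custom resource in the kasten-io namespace. The sketch below is a rough approximation of the policy described above, not a copy of what the wizard generates; the policy name, retention counts, and the wasabi profile name are placeholder values from this walkthrough, so confirm field names against the Kasten API documentation for your version:
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: k10app-backup
  namespace: kasten-io
spec:
  comment: Hourly snapshots of k10app, exported to object storage
  frequency: '@hourly'
  retention:
    hourly: 24
    daily: 7
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: '@hourly'
        profile:
          name: wasabi
          namespace: kasten-io
        exportData:
          enabled: true
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: k10app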
Once you hit “Create Policy”, you should see a screen like this:
At this stage, we have defined a backup policy and we can leave this to run on the schedule. You can also hit “Run Once” to start a new backup. If you want to do this, you can head back to the dashboard to check the progress of your backup until it completes. You will also notice that our data usage has gone up and we now have one policy and two applications. We have more to come on these topics later in the series.
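Incidentally, “Run Once” also has a command-line equivalent: creating a RunAction resource that targets the policy. A minimal sketch, assuming the policy from the previous step is named k10app-backup:
cat <<EOF | kubectl create -f -
apiVersion: actions.kio.kasten.io/v1alpha1
kind: RunAction
metadata:
  generateName: run-k10app-backup-
spec:
  subject:
    kind: Policy
    name: k10app-backup
    namespace: kasten-io
EOF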
Don’t forget you can try Veeam Kasten for free!