ClusterAPI for vSphere, now with CNS support

If you want to learn about the basics and key concepts of ClusterAPI, check out my post on the Alpha back in June here - it covers the high-level concepts and troubleshooting of ClusterAPI, as well as what it offers to you as a user who wants to set up Kubernetes. This blog is a look at what has changed and how you can use ClusterAPI to deploy K8s clusters on vSphere that use CNS and the CSI plugin for storage, which was introduced as part of vSphere 6....
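For context, consuming CNS from one of these clusters ultimately comes down to a StorageClass backed by the vSphere CSI driver. A minimal sketch follows; the class name and storage policy are illustrative placeholders, not taken from the post:

```yaml
# Sketch of a StorageClass backed by the vSphere CSI driver (CNS).
# The class name and storage policy name are illustrative placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-default
provisioner: csi.vsphere.vmware.com               # vSphere CSI driver
parameters:
  storagepolicyname: "vSAN Default Storage Policy" # SPBM policy backing the volumes
```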

October 10, 2019 · Myles Gray

Using Velero for K8s Backup and Restore of CSI Volumes

We’ve covered prepping and installing K8s on this blog a few different ways: with VM templates built manually, with cloud-init, and with ClusterAPI for vSphere. Let’s say you’ve grown attached to some of the workloads you’re running on one of your clusters, naturally. It would be nice to back up and restore those should something go wrong - or even, as was my case, when I deployed a distro of K8s on my Raspberry Pi cluster that I wasn’t wild about and wanted to move to another - so how do you migrate those workloads?...
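To frame what the post walks through, a backup in Velero is just a Backup object scoped to the namespaces you care about; a rough sketch, with placeholder names:

```yaml
# Hypothetical Velero Backup limited to a single namespace.
# The backup and namespace names are placeholders, not from the post.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: pi-cluster-apps
  namespace: velero           # Velero's own namespace
spec:
  includedNamespaces:
    - my-app                  # workloads to back up
```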

October 4, 2019 · Myles Gray

First-look: Automated K8s lifecycle with ClusterAPI

K8s lifecycle is something people are still struggling with. Despite amazing tools out there like kubeadm, which take care of the K8s setup itself, we are still lacking something fundamental - the day-0 setup. Who or what actually creates the VMs and installs the packages on them so we can get to the stage where we can use kubeadm? Typically it’s up to the user, and as such can vary wildly - so how can that experience be improved, and even better - totally automated and declarative?...

June 26, 2019 · Myles Gray

Using cloud-init for VM templating on vSphere

This isn’t necessarily a follow-on from the other three blogs so far in this series, but more of an alternative to parts one and two. Following on from those, I felt that the process could be much more automated, and less “ssh into every box and change things manually”. After all, the fewer changes we make iteratively and imperatively ↗, and the more that is programmed or declarative, the better....
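As a flavour of what that declarative approach looks like, the customisation lives in cloud-init user-data; a minimal, illustrative #cloud-config, where the hostname, SSH key, and package list are all placeholders:

```yaml
#cloud-config
# Illustrative user-data only; hostname, SSH key, and packages are placeholders.
hostname: k8s-node-01
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
package_update: true
packages:
  - open-vm-tools
```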

June 9, 2019 · Myles Gray

Using the vSphere Cloud Provider for K8s to dynamically deploy volumes

As of the last part in the series, we have a fully up-and-running K8s cluster with the vSphere Cloud Provider installed! Let’s make sure it works and is provisioning storage for us by deploying a StorageClass and a test app. I am using macOS, so will be using the brew package manager to install and manage my tools; if you are using Linux or Windows, use the appropriate install guide for each tool, according to your OS....
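For reference, the StorageClass in question uses the in-tree vSphere provisioner; a sketch along these lines, where the class name and disk format are illustrative rather than necessarily what the post uses:

```yaml
# Sketch of a StorageClass using the in-tree vSphere Cloud Provider provisioner.
# Name and parameters are illustrative placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin            # thin, zeroedthick, or eagerzeroedthick
```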

February 8, 2019 · Myles Gray

Setting up K8s and the vSphere Cloud Provider using kubeadm

In the last installment we created an Ubuntu 18.04 LTS image to clone VMs from for spinning up our K8s nodes; we then cloned out four VMs, one as the master and three to be used as workers. This time we are going to step through installing the necessary K8s components on each of the nodes (kubeadm, kubectl, and kubelet) and the container runtime (Docker), then configuring the vSphere Cloud Provider for Kubernetes, using kubeadm to bootstrap the cluster....
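To give a rough idea of the cloud provider piece, enabling it at bootstrap means passing extra arguments through a kubeadm configuration; a partial, hedged sketch, where the vsphere.conf path is a placeholder and the full setup involves more than this:

```yaml
# Partial, illustrative kubeadm ClusterConfiguration enabling the vSphere cloud provider.
# The vsphere.conf path is a placeholder; the post covers the rest of the setup.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: vsphere
    cloud-config: /etc/kubernetes/vsphere.conf
controllerManager:
  extraArgs:
    cloud-provider: vsphere
    cloud-config: /etc/kubernetes/vsphere.conf
```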

January 28, 2019 · Myles Gray

Creating an Ubuntu 18.04 LTS cloud image for cloning on VMware

I have been experimenting a lot over the past 18 months with containers and, in particular, Kubernetes, and one of the core things I always seemed to get hung up on was part-zero - creating the VMs to actually run K8s. I wanted a CLI-only way to build a VM template for the OS and then deploy that to the cluster. It turns out that with Ubuntu 18.04 LTS (in particular the cloud image OVA) there are a few things that need to be changed from the base install (namely cloud-init) in order to make them play nice with OS Guest Customisation in vCenter....

January 27, 2019 · Myles Gray

Migrating vSAN vmkernel ports to a new subnet

After deploying a vSAN cluster, the need sometimes arises to make changes to its network configuration, such as migrating the cluster's vmkernel network to a new subnet. This requirement may appear, for example, when changing the network in which the vSAN cluster is running, or in a more complex scenario, such as when a standalone vSAN cluster needs to be converted to a stretched cluster. In these sorts of situations, complications may be encountered if the subnet in use for the vSAN vmkernel ports cannot be routed to the network as a whole because it is in use elsewhere in the organization and is currently isolated in an L2 segment....

November 3, 2017 · Myles Gray

vSphere 6.5 Host Resources Deep Dive

Over the last 6-9 months, I have been reviewing the vast majority of a new book just released to print by Frank Denneman ↗ and Niels Hagoort ↗ - The vSphere 6.5 Host Resources Deep Dive ↗. This book is, without a doubt, the most in-depth look at host design I have ever read; we are not talking about standard best practices here, though those are in there too. Rather, it offers a low-level understanding of why best practices exist, even challenging some existing perceptions and paradigms about why technologies should be used and, more importantly, how they should be utilised....

June 21, 2017 · Myles Gray

Configuring Auto Deploy Stateless Caching in vSphere 6.0

My previous post on configuring custom ESXi images for PXE deployment piqued my interest in Auto Deploy again; now that I have a lab large enough (enough physical failure domains) to justify Auto Deploy, I figured I'd give it another go. I have chosen to implement stateless caching, as it allows the hosts to boot from the last-used ESXi image if the PXE/Auto Deploy server goes down; then, when it comes back up, they will pull the new version. This accounts for a total infrastructure outage and still allows the hosts to be bootable....

August 19, 2016 · Myles Gray