It’s time to add KubeVirt to my lab environment where my blog and other miscellaneous projects like accolli.it reside. Here are some specifications of the physical node I’m using.


[root@node1 opt]# cat /etc/redhat-release
Rocky Linux release 9.4 (Blue Onyx)

[root@node1 opt]# free -m
               total        used        free      shared  buff/cache   available
Mem:           31781        4160       20486           5        7594       27620
Swap:              0           0           0

[root@node1 opt]# lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  8
  On-line CPU(s) list:   0-7
Vendor ID:               GenuineIntel
  BIOS Vendor ID:        Intel(R) Corporation
  Model name:            Intel(R) Xeon(R) CPU E3-1230 v6 @ 3.50GHz
    BIOS Model name:     Intel(R) Xeon(R) CPU E3-1230 v6 @ 3.50GHz
    CPU family:          6
    Model:               158
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           1
    Stepping:            9
    CPU(s) scaling MHz:  93%
    CPU max MHz:         3900.0000
    CPU min MHz:         800.0000
    BogoMIPS:            6999.82
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_ts
                         c art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pd
                         cm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_sh
                         adow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dt
                         herm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities

First, I’m installing Kubernetes through Kubespray (single node), with kube-ovn and Multus as the SDN components. Here are the relevant settings and the inventory.

[root@ns3137793 kubespray]# cat inventory/local/group_vars/k8s_cluster/k8s-cluster.yml | grep -i kube_netwo
kube_network_plugin: kube-ovn
kube_network_plugin_multus: true

[root@ns3137793 kubespray]# cat inventory/local/hosts.ini
node1 ansible_connection=local local_release_dir={{ansible_env.HOME}}/releases

[kube_control_plane]
node1

[etcd]
node1

[kube_node]
node1

I’m going straight to Podman to get a Python and Ansible environment ready for Kubespray. I like using an image that already bundles everything needed for a given Kubespray version.


podman run --rm -it -v /opt/kubespray/inventory/local:/kubespray/inventory quay.io/kubespray/kubespray:v2.25.0 ansible-playbook -i /kubespray/inventory/hosts.ini -b /kubespray/upgrade-cluster.yml

[root@node1 ~]# kubectl get pods -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS      AGE
kube-system   coredns-d665d669-g7hpq                 1/1     Running   0             16s
kube-system   dns-autoscaler-597dccb9b9-wwmf2        1/1     Running   0             31h
kube-system   kube-apiserver-node1                   1/1     Running   1             31h
kube-system   kube-controller-manager-node1          1/1     Running   2             31h
kube-system   kube-multus-ds-amd64-gmbs9             1/1     Running   2 (31h ago)   31h
kube-system   kube-ovn-cni-2p26b                     1/1     Running   0             31h
kube-system   kube-ovn-controller-6bd964f544-7hghj   1/1     Running   0             31h
kube-system   kube-ovn-monitor-6974f7f95f-djlmv      1/1     Running   0             31h
kube-system   kube-ovn-pinger-jwpkw                  1/1     Running   0             31h
kube-system   kube-proxy-5rpf5                       1/1     Running   0             31h
kube-system   kube-scheduler-node1                   1/1     Running   1             31h
kube-system   nodelocaldns-g6w2m                     1/1     Running   0             31h
kube-system   ovn-central-799c6555b6-xxvsg           1/1     Running   0             31h
kube-system   ovs-ovn-dwwb9                          1/1     Running   0             31h
[root@node1 ~]# kubectl version
Client Version: v1.31.3
Kustomize Version: v5.4.2
Server Version: v1.31.3

To start, let’s check out https://kubevirt.io/user-guide/cluster_admin/installation/ for a step-by-step guide. Just to be safe, I’ll also bring the Kubespray cluster up to date.

[root@k8s1 kubespray]#  podman run --rm -it -v /opt/kubespray/inventory/kimsufi:/kubespray/inventory quay.io/kubespray/kubespray:v2.14.0 ansible-playbook -i /kubespray/inventory/inventory.ini   -b /kubespray/upgrade-cluster.yml

Trying to pull quay.io/kubespray/kubespray:v2.14.0...
...
.....

I’m validating that the API server has the --allow-privileged=true parameter enabled.

[root@node1 ~]# ps -ef | grep allow-privileged
root        1661    1473 12 Dec02 ?        06:30:06 kube-apiserver --advertise-address=10.10.10.4 --allow-privileged=true
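
Since Kubespray deploys the API server as a static pod, the flag can also be checked directly in the kubeadm-style manifest. A quick sketch, assuming the default manifest path:

# The static pod manifest should carry the same flag
grep -- '--allow-privileged' /etc/kubernetes/manifests/kube-apiserver.yaml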

KubeVirt recommends cri-o or containerd as the container runtime. I’m pretty sure it’s fine here, but let’s double-check with ‘systemctl list-unit-files | grep containerd’.

[root@node ~]# systemctl list-unit-files | grep containerd.service
containerd.service                                                                    enabled         disabled
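
Another way to confirm which runtime the kubelet is actually talking to is to read it from the node object. A small sketch:

# Prints something like "node1   containerd://1.7.x" for each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'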

I’ll run virt-host-validate qemu to check for hardware virtualization. The tool isn’t present on my server, so I’ll install the libvirt client package to get it.
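
A sketch of the install; on Rocky 9 the virt-host-validate binary should come from the libvirt-client package (the package name differs across distributions, e.g. libvirt-clients on Debian/Ubuntu):

# Assuming Rocky 9 packaging
dnf install -y libvirt-client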

[root@node1 ~]#  virt-host-validate qemu
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)
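
The IOMMU warning isn’t blocking for what I need, but if I wanted to clear it, something along these lines should add the flag to the kernel command line (a sketch, to be verified before rebooting):

# Append intel_iommu=on to all installed kernels, then reboot for it to take effect
grubby --update-kernel=ALL --args="intel_iommu=on"
reboot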

Next, I install container-selinux:

dnf install container-selinux -y

Now I’m ready to install KubeVirt. I’ll follow the steps from https://kubevirt.io/user-guide/cluster_admin/installation/#installing-kubevirt-on-kubernetes, wrapped in a small bash script for convenience.

# Point at latest release
export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
# Deploy the KubeVirt operator
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
# Create the KubeVirt CR (instance deployment request) which triggers the actual installation
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
# wait until all KubeVirt components are up
kubectl -n kubevirt wait kv kubevirt --for condition=Available
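
Besides the wait condition, the KubeVirt CR also exposes a phase that should read Deployed once the operator has finished rolling everything out. A quick check:

# Expect "Deployed" when the installation is complete
kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.status.phase}'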

I’m verifying that KubeVirt has started

[root@node1 opt]# kubectl get pods -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS      AGE
kube-system   coredns-d665d669-g7hpq                 1/1     Running   0             13m
kube-system   dns-autoscaler-597dccb9b9-wwmf2        1/1     Running   0             31h
kube-system   kube-apiserver-node1                   1/1     Running   1             31h
kube-system   kube-controller-manager-node1          1/1     Running   2             31h
kube-system   kube-multus-ds-amd64-gmbs9             1/1     Running   2 (31h ago)   31h
kube-system   kube-ovn-cni-2p26b                     1/1     Running   0             31h
kube-system   kube-ovn-controller-6bd964f544-7hghj   1/1     Running   0             31h
kube-system   kube-ovn-monitor-6974f7f95f-djlmv      1/1     Running   0             31h
kube-system   kube-ovn-pinger-jwpkw                  1/1     Running   0             31h
kube-system   kube-proxy-5rpf5                       1/1     Running   0             31h
kube-system   kube-scheduler-node1                   1/1     Running   1             31h
kube-system   nodelocaldns-g6w2m                     1/1     Running   0             31h
kube-system   ovn-central-799c6555b6-xxvsg           1/1     Running   0             31h
kube-system   ovs-ovn-dwwb9                          1/1     Running   0             31h
kubevirt      virt-api-5f9f64fd4d-bwcqc              1/1     Running   0             114s
kubevirt      virt-controller-7c8d6577b6-6p26s       1/1     Running   0             88s
kubevirt      virt-controller-7c8d6577b6-fm4tc       1/1     Running   0             88s
kubevirt      virt-handler-746wz                     1/1     Running   0             88s
kubevirt      virt-operator-79456d8689-tddjr         1/1     Running   0             2m23s
kubevirt      virt-operator-79456d8689-v9nnh         1/1     Running   0             2m23s

I’ve noticed that it runs extra replicas by default. That’s great for production setups, but I don’t need high availability and my resources are limited, so I’m going to scale everything down to the minimum.

The first step is to adjust the replicas parameter in kubevirt-operator.yaml and then scale the existing deployments down to a single replica.

[root@k8s1 opt]# for i in $(kubectl get deployment -n kubevirt | grep -v NAME | awk '{ print $1}'); do kubectl scale deployment $i --replicas=1 -n kubevirt; done
deployment.apps/virt-api scaled
deployment.apps/virt-controller scaled
deployment.apps/virt-operator scaled
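
To see the operator reconciling them, a quick watch (sketch) shows virt-api and virt-controller bouncing back to 2 replicas within a few seconds:

# Watch the deployments; the scaled-down ones get restored by virt-operator
kubectl -n kubevirt get deployment -w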

The operator, however, keeps reconciling the replicas for virt-controller and virt-api back to 2. Digging into the KubeVirt code, specifically the virt-operator part, it turns out the replica count can’t go lower because of a hardcoded minReplicas value. I’ll revisit this later 😀

$ > cat resource/apply/apps.go | grep -i minReplicas
	const minReplicas = 2
	if replicas < minReplicas {
		replicas = minReplicas
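
As a possible follow-up, newer KubeVirt releases should expose a spec.infra.replicas field on the KubeVirt CR that the operator honours for virt-api and virt-controller; this is an assumption to verify against the installed CRD (and against that minReplicas floor) before relying on it:

# Hypothetical: ask the operator itself to run a single infra replica
kubectl -n kubevirt patch kubevirt kubevirt --type merge -p '{"spec":{"infra":{"replicas":1}}}'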

Alright, it’s time to create my first VM. I’m going to install virtctl (https://kubevirt.io/user-guide/user_workloads/virtctl_client_tool/) to do so.
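
A sketch of how the binary can be fetched, matching the client to the deployed server version (release asset naming as published on the KubeVirt GitHub releases, assuming linux/amd64):

# Grab the virtctl release that matches the running KubeVirt version
VERSION=$(kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.status.observedKubeVirtVersion}')
curl -L -o virtctl "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64"
chmod +x virtctl
install virtctl /usr/local/bin/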

[root@node1 ~]# virtctl version
Client Version: version.Info{GitVersion:"v1.4.0", GitCommit:"9b9b3d4e7b681af96ca1b6b6a5cea75e2f05ce5b", GitTreeState:"clean", BuildDate:"2024-11-13T08:23:36Z", GoVersion:"go1.22.8 X:nocoverageredesign", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{GitVersion:"v1.4.0", GitCommit:"9b9b3d4e7b681af96ca1b6b6a5cea75e2f05ce5b", GitTreeState:"clean", BuildDate:"2024-11-13T09:51:17Z", GoVersion:"go1.22.8 X:nocoverageredesign", Compiler:"gc", Platform:"linux/amd64"}

I found this Git repo https://github.com/Tedezed/kubevirt-images-generator and decided to use it as a starting point for applying my first virtual machine manifest!

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-bionic
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: ubuntu-bionic
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
            - name: containervolume
              disk:
                bus: virtio
            - name: cloudinitvolume
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containervolume
          containerDisk:
            image: tedezed/ubuntu-container-disk:20.0
        - name: cloudinitvolume
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              chpasswd:
                list: |
                  ubuntu:ubuntu
                  root:toor
                expire: False

[root@node1 ~]# kubectl create -f myvm.yml
virtualmachine.kubevirt.io/ubuntu-bionic created
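
The VirtualMachine object itself can be checked as well; with runStrategy: Always the controller immediately spins up a matching VMI:

# The VM object holds the desired state, the VMI is the running instance
kubectl get vm ubuntu-bionic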

State is Scheduling...

[root@node1 ~]# kubectl get vmis -A -o wide
NAMESPACE   NAME            AGE   PHASE        IP    NODENAME   READY   LIVE-MIGRATABLE   PAUSED
default     ubuntu-bionic   9s    Scheduling                    False

A few seconds later, listing the VMIs again shows it Running:

[root@node1 ~]# kubectl get vmis -A -o wide
NAMESPACE   NAME            AGE   PHASE     IP             NODENAME   READY   LIVE-MIGRATABLE   PAUSED
default     ubuntu-bionic   36s   Running   10.233.64.20   node1      True    True
[root@node1 ~]# ping 10.233.64.20
PING 10.233.64.20 (10.233.64.20) 56(84) bytes of data.
64 bytes from 10.233.64.20: icmp_seq=1 ttl=62 time=0.750 ms
64 bytes from 10.233.64.20: icmp_seq=2 ttl=62 time=0.537 ms

Now that the VM is pingable, it’s time to SSH into it.

By default, this image doesn’t allow SSH password logins (password authentication on the SSH daemon is probably disabled), so I connect via the console to change PasswordAuthentication in /etc/ssh/sshd_config.
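
A sketch of the change I make inside the guest (the directive may already be present, commented out):

# Enable password authentication and restart the SSH daemon
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd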

[root@node1 ~]# kubectl get vmis
NAME            AGE   PHASE     IP             NODENAME   READY
ubuntu-bionic   10m   Running   10.233.64.20   node1      True

[root@node1 ~]# virtctl console 10.233.64.20
virtualmachineinstance.kubevirt.io "10.233.64.20" not found
[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]# kubectl get vmis
NAME            AGE   PHASE     IP             NODENAME   READY
ubuntu-bionic   10m   Running   10.233.64.20   node1      True
[root@node1 ~]# virtctl console ubuntu-bionic
Successfully connected to ubuntu-bionic console. The escape sequence is ^]

Password:

Login incorrect
ubuntu-bionic login: ubuntu
Password:
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-73-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sat Dec  7 18:08:04 UTC 2024

  System load:  0.0               Processes:               105
  Usage of /:   63.9% of 1.96GB   Users logged in:         0
  Memory usage: 9%                IPv4 address for enp1s0: 10.0.2.2
  Swap usage:   0%


1 update can be applied immediately.
To see these additional updates run: apt list --upgradable


The list of available updates is more than a week old.
To check for new updates run: sudo apt update
Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings


Last login: Sat Dec  7 18:01:04 UTC 2024 on ttyS0
ubuntu@ubuntu-bionic:~$ sudo su -

root@ubuntu-bionic:~# systemctl restart sshd
root@ubuntu-bionic:~#
root@ubuntu-bionic:~# cat /etc/ssh/sshd_config | grep -i password
#PermitRootLogin prohibit-password
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes

I can try using SSH now

[root@node1 ~]# kubectl get vmis
NAME            AGE   PHASE     IP             NODENAME   READY
ubuntu-bionic   12m   Running   10.233.64.20   node1      True
[root@node1 ~]# ssh ubuntu@10.233.64.20
ubuntu@10.233.64.20's password:
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-73-generic x86_64)
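
For the record, virtctl also has a built-in ssh helper that tunnels the connection through the cluster API instead of relying on the pod network being reachable from the host (a quick sketch):

# ssh to the VM by name via virtctl
virtctl ssh ubuntu@vm/ubuntu-bionic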

Thanks for reading this far. I’ll continue in part 2!