Virtual PCA on KVM

KVM domain definition XML

Below is a sample libvirt KVM domain definition XML file that can be used to create a virtual PCA KVM domain sized for up to 250 monitoring points. Adjust the <vcpu>, <cpu>, and <memory> values downward for a trial deployment (less memory) or upward to support more monitoring points (more vCPUs/CPU cores and memory). Please also note:

  1. The <source file=.../> attribute of each disk device configuration needs to be an absolute path to a .qcow2 disk image file on the hypervisor
  2. The <source bridge=.../> attribute of the bridge interface device needs to refer to an actual bridge interface on the hypervisor
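
For example, the data disk images can be created and the bridge verified ahead of time. This is a sketch only: the sizes are placeholders, pca-base.qcow2 is assumed to be the disk image supplied with the virtual PCA distribution, and the assumption that the remaining disks start out empty should be confirmed against your deployment guide.

# Create empty data disks (sizes are illustrative, not official sizing guidance)
qemu-img create -f qcow2 /data/kvm_images/myvpca/pca-data.qcow2 200G
qemu-img create -f qcow2 /data/kvm_images/myvpca/pca-backup.qcow2 200G
qemu-img create -f qcow2 /data/kvm_images/myvpca/pca-flow-data.qcow2 200G

# Verify the bridge interface referenced by <source bridge=.../> exists
ip link show extbr0

The sample domain definition follows.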
<domain type='kvm'>
  <name>myvpca</name>
  <description>kvm vpca</description>
  <os>
    <type>hvm</type>
  </os>
  <vcpu placement='static'>4</vcpu>
  <memory unit='G'>16</memory>
  <cpu sockets='2' cores='2' threads='1'/>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/data/kvm_images/myvpca/pca-base.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/data/kvm_images/myvpca/pca-data.qcow2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/data/kvm_images/myvpca/pca-backup.qcow2'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/data/kvm_images/myvpca/pca-flow-data.qcow2'/>
      <target dev='vdd' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='extbr0'/>
      <model type='virtio'/>
    </interface>
    <graphics type='vnc' autoport='yes' multiUser='yes'/>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
  </devices>
</domain>
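
For example, a lower-footprint trial deployment might reduce the values called out above. The figures below are illustrative only, not official sizing guidance; consult AppNeta for supported configurations.

<vcpu placement='static'>2</vcpu>
<memory unit='G'>8</memory>
<cpu sockets='1' cores='2' threads='1'/>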

Create the virtual PCA KVM domain

Launch a new KVM virtual PCA using the following commands:

virsh define <KVM domain definition XML>
virsh autostart <KVM domain name>
virsh start <KVM domain name>
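
For example, assuming the sample definition above was saved on the hypervisor as myvpca.xml:

virsh define myvpca.xml
virsh autostart myvpca
virsh start myvpca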

Verify that the KVM virtual PCA domain is persistent and will start automatically when the KVM host reboots:

virsh list --autostart --persistent
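
The new domain should appear in the output, similar to the following (illustrative):

 Id   Name     State
--------------------------
 1    myvpca   running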

Access the KVM virtual PCA console

The virt-manager program can be used to access the KVM virtual PCA’s console. You will need console access to complete the virtual PCA configuration wizard.

  1. Run virt-manager and connect to your KVM hypervisor host
  2. Find your virtual PCA in the list and either double-click it or highlight it and click the “Show the virtual machine console and details” button
  3. Proceed with completing the virtual PCA configuration wizard
    vPCA-virt-manager-0.png
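
Alternatively, because the sample domain XML above defines a serial console, you may be able to attach to the text console directly from the hypervisor shell (press Ctrl+] to detach):

virsh console myvpca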

Initial configuration wizard walk-through

At initial startup, the virtual PCA will run a configuration wizard on the virtual machine console. First, you will be prompted for an initial virtual PCA organization and organization administrator username and password. Second, you will be prompted for hostname and network settings.

  1. On the initial screen of the configuration wizard, you supply the organization name along with the name, email address, and credentials of the person who will administer the virtual PCA. This information will be used to create the administrator login account on the virtual PCA.
    vPCA-Setup-01.png

  2. After entering the above information, arrow-down to “Next” and press Enter.
    vPCA-Setup-02.png

  3. On the second screen, you supply the virtual PCA’s hostname, domain and network configuration.
    vPCA-Setup-Host-01.png

  4. Enter a hostname and domain for accessing your virtual PCA on your network.
    vPCA-Setup-Host-02.png

  5. If you want the network configuration to be set by DHCP, arrow down to the “Preferred NTP Servers” section. If you need to set the IP address statically, arrow down to “Static,” press Enter, and follow the next step.

  6. For static IP address configuration, you will be prompted for the host IP address (IPv4 only), a netmask in dot-decimal notation (e.g., 255.255.255.0), the default gateway/next-hop IP, and the primary and secondary DNS server IP addresses. After setting these values, arrow down to the “Preferred NTP Servers” section.
    vPCA-Setup-Network-Static-02.png

  7. The default NTP server your vPCA will connect to is pool.ntp.org. If this meets your needs, arrow down to “Finish” and press Enter. Alternatively, follow the next step.
    vPCA-Setup-NTP-01.png

  8. You can choose the NTP servers your vPCA connects to by editing the “Preferred NTP Servers” field and, optionally, the “Additional NTP Servers” field. These fields accept comma-separated hostnames and IP addresses (e.g., 0.pool.ntp.org,1.pool.ntp.org,192.0.2.123). Your vPCA will prefer synchronizing with any of the “Preferred NTP Servers”. When you’ve finished editing these fields, arrow down to “Finish” and press Enter.
    vPCA-Setup-NTP-02.png

  9. Please wait while the configuration wizard settings are applied. The virtual PCA services need to restart for the configuration to take effect.
    vPCA-Setup-Complete-01.png

  10. Once the configuration has been applied, a “Ready” message appears showing the URL you can open in a browser to log in to the new virtual PCA web interface.
    vPCA-Setup-Ready.png

  11. Use the username and password you entered on the first screen of the configuration wizard to log in.
    vPCA-Webui-Login.png

  12. At this point, you will need to contact AppNeta Support for your licenses to be installed.

Virtual PCA configuration via API

The virtual PCA hostname and network configuration can be changed through calls to a REST API. There is also an endpoint to enable or disable the maintenance service tunnel. The API can be explored interactively via the virtual PCA’s Swagger UI:

      https://<virtual PCA hostname>:9000/swagger
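
For example, the Swagger UI can be opened in a browser at the URL above, or fetched from the command line. This is a sketch: the hostname is a placeholder, and the -k flag (skip certificate verification) is only needed if the virtual PCA presents a self-signed certificate, which is an assumption here.

      curl -k https://vpca.example.com:9000/swagger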