- Virtual PCA on KVM
- Initial configuration wizard walk-through
- Virtual PCA configuration via API
Virtual PCA on KVM
KVM domain definition XML
Below is a sample libvirt KVM domain definition XML file that can be used to create a virtual PCA KVM domain supporting up to 250 monitoring points. Adjust the <vcpu> and <memory> values for a trial deployment (less memory) or for higher monitoring point counts (more vCPUs and memory). Please also note:
- The <source file=.../> attribute of each disk device configuration needs to be an absolute path to a .qcow2 disk image file on the hypervisor
- The <source bridge=.../> attribute of the bridge interface device needs to refer to an actual bridge interface on the hypervisor
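The full sample file is not reproduced here; as a rough sketch of the structure it describes, a minimal domain definition covering the points above might look like the following (the domain name, memory and vCPU sizing, disk image path, and bridge name are all placeholder assumptions to adjust for your deployment):

```xml
<domain type="kvm">
  <name>vpca</name>
  <!-- Placeholder sizing; adjust memory/vCPUs for your monitoring point tier -->
  <memory unit="GiB">8</memory>
  <vcpu>4</vcpu>
  <os>
    <type arch="x86_64">hvm</type>
  </os>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <!-- Must be an absolute path to the .qcow2 disk image on the hypervisor -->
      <source file="/var/lib/libvirt/images/vpca.qcow2"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="bridge">
      <!-- Must name an actual bridge interface on the hypervisor -->
      <source bridge="br0"/>
      <model type="virtio"/>
    </interface>
  </devices>
</domain>
```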
Create the virtual PCA KVM domain
Launch a new KVM virtual PCA using the following commands:
virsh define <KVM domain definition XML>
virsh autostart <KVM domain name>
virsh start <KVM domain name>
Verify the KVM virtual PCA domain is persistent and will auto-restart if the KVM host is rebooted:
virsh list --autostart --persistent
Access the virtual PCA KVM domain console
- Launch virt-manager and connect to your KVM hypervisor host
- Find your virtual PCA in the list and either double-click it or highlight it and click the “Show the virtual machine console and details” button
- Proceed with completing the virtual PCA configuration wizard
Initial configuration wizard walk-through
At initial startup, the virtual PCA will run a configuration wizard on the virtual machine console. First, you will be prompted for an initial virtual PCA organization and organization administrator username and password. Second, you will be prompted for hostname and network settings.
On the initial screen of the configuration wizard, you supply the name, email address, organization name and credentials of the person who will administer the virtual PCA. This information is used to create the administrator login account on the virtual PCA.
After entering the above information, arrow-down to “Next” and press Enter.
On the second screen, you supply the virtual PCA’s hostname, domain and network configuration.
Enter a hostname and domain for accessing your virtual PCA on your network.
If you want the network configuration to be set by DHCP, arrow down to the “Preferred NTP Servers” section. If you need to set the IP address statically, arrow down to “Static,” press Enter and follow the next step.
For static IP address configuration, you will be prompted for the host IP address (IPv4 only), a netmask (in dot-decimal notation), the default gateway/next-hop IP and primary and secondary DNS server IP addresses. After setting these values, arrow down to the “Preferred NTP Servers” section.
The default NTP server your vPCA will connect to is pool.ntp.org. If this meets your needs, arrow down to “Finish” and press Enter. Alternatively, follow the next step.
You can choose the NTP servers your vPCA connects to by editing the “Preferred NTP Servers” field, and optionally the “Additional NTP Servers” field. These fields accept comma-separated hostnames and IP addresses. Your vPCA will prefer synchronizing with any of the “Preferred NTP Servers”. When you’ve finished editing these fields, arrow down to “Finish” and press Enter.
Please wait while the configuration wizard values are applied; the virtual PCA services need to restart for the configuration to take effect.
Once the configuration has been applied, a “Ready” message will appear along with the URL of the new virtual PCA web interface, which you can open in a browser to log in.
Use the username and password you entered on the first screen of the configuration wizard to log in.
At this point, you will need to contact AppNeta Support for your licenses to be installed.
Virtual PCA configuration via API
The virtual PCA hostname and network configuration can be changed by calls to a REST API. There is also an endpoint to enable or disable the maintenance service tunnel. The API can be explored interactively via the virtual PCA’s Swagger UI:
https://<virtual PCA hostname>:9000/swagger
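As a sketch of how such a call could be scripted, the snippet below builds an HTTPS request against the API using only the Python standard library. The endpoint path (/api/network) and payload field names are illustrative assumptions, not the actual resource names; consult the Swagger UI above for the real schema.

```python
import json
import urllib.request

def build_config_request(host: str, payload: dict) -> urllib.request.Request:
    """Build a PUT request against a hypothetical network-config endpoint.

    The "/api/network" path is an assumption for illustration; look up the
    actual endpoint in the virtual PCA's Swagger UI.
    """
    url = f"https://{host}:9000/api/network"
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# Example: a static IPv4 configuration payload (field names are assumptions).
req = build_config_request(
    "vpca.example.com",
    {"ip": "192.0.2.10", "netmask": "255.255.255.0", "gateway": "192.0.2.1"},
)
# The request can then be sent with urllib.request.urlopen(req).
```

Keeping the request construction separate from sending it makes the call easy to inspect (or unit test) before pointing it at a live appliance.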