- About virtual PCA
- Physical vs. virtual PCA
- Supported hypervisors
- Guest system requirements
- Obtaining a virtual PCA image
- Virtual PCA on KVM
- Initial configuration wizard walk-through
- Virtual PCA configuration via API
About virtual PCA
The AppNeta Private Cloud Appliance can be deployed as a virtual machine. All of the features of the existing physical private cloud appliance are available in the virtual PCA.
Physical vs. virtual PCA
There is feature parity between the physical and virtual private cloud appliances. There are only minor differences in the initial setup and networking configuration:
- Initial network and administrator user setup is handled with a new setup wizard
- Subsequent virtual PCA network and hostname configuration is available via an API rather than the LCD display on physical PCAs
- The FlowView API feature is available on the virtual PCA
Production virtual PCAs can be deployed on Linux KVM, or on VMware vSphere 5.5 or ESXi 5.5. For a VMware-based virtual PCA, AppNeta provides an Open Virtual Appliance (OVA) virtual PCA image. For a Linux KVM-based virtual PCA, AppNeta provides a compressed tarfile of the required .qcow2 disk image files.
Guest system requirements
The virtual PCA image will create a guest machine with the parameters listed below. The host system must be able to provide at least these resources.
| Component | Virtual PCA image value configured |
| --- | --- |
| Hard Disk 1 (pca-base) | 40 GB (SSD performance required) |
| Hard Disk 2 (pca-data) | 750 GB (SSD performance required) |
| Hard Disk 3 (pca-backup) | 2000 GB |
| Hard Disk 4 (pca-flow-data) | 326 GB (SSD performance required) |
| Network Adapter | 1 x 1 GigE |
| Video Card | 4 MB |
Compute resources (vCPUs and memory) should be sized based on planned usage: a trial deployment, up to 250 appliances, or up to 1000 appliances.
Storage resources can be thin-provisioned. Actual storage usage may vary depending on the number of monitored paths and the actual performance of monitored networks and applications.
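On a KVM host, you can check how much physical space a thin-provisioned disk image is currently consuming with qemu-img; the image path below is a placeholder. The reported "virtual size" is the full provisioned size, while "disk size" is the space actually used on the host:

qemu-img info /var/lib/libvirt/images/pca-data.qcow2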
Obtaining a virtual PCA image
AppNeta Support will provide a link to download the virtual PCA OVA image file or the compressed tarfile of the .qcow2 disk image files.
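For the KVM image, the tarfile can be unpacked directly on the hypervisor host. The archive name and destination directory below are placeholders, and the -z flag assumes a gzip-compressed archive:

tar -xzf appneta-virtual-pca.tar.gz -C /var/lib/libvirt/images/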
Virtual PCA on KVM
KVM domain definition XML
Below is a sample libvirt KVM domain definition XML file that can be used to create the virtual PCA KVM domain sized for up to 250 appliances. Adjust the <memory> value downward for a trial deployment, or increase the vCPU count and memory to support more appliances. Please also note:
- The <source file=.../> attribute of each disk device configuration needs to be an absolute path to a .qcow2 disk image file on the hypervisor
- The <source bridge=.../> attribute of the bridge interface device needs to refer to an actual bridge interface on the hypervisor
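A minimal domain definition has the following general shape; the domain name, memory and vCPU values, disk image paths, and bridge name below are placeholders that must be adjusted to your environment before use:

<domain type='kvm'>
  <!-- Domain name, memory, and vCPU count are placeholders; size per planned usage -->
  <name>virtual-pca</name>
  <memory unit='GiB'>16</memory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <!-- One disk per provided .qcow2 image; <source file> paths are placeholders -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/pca-base.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/pca-data.qcow2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/pca-backup.qcow2'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/pca-flow-data.qcow2'/>
      <target dev='vdd' bus='virtio'/>
    </disk>
    <!-- <source bridge> must name an existing bridge interface on the hypervisor -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <video>
      <model type='cirrus' vram='4096' heads='1'/>
    </video>
  </devices>
</domain>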
Create the virtual PCA KVM domain
Launch a new KVM virtual PCA using the following commands:
virsh define <KVM domain definition XML>
virsh autostart <KVM domain name>
virsh start <KVM domain name>
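For example, if the domain definition was saved as /opt/pca/virtual-pca.xml and defines a domain named virtual-pca (both names are placeholders):

virsh define /opt/pca/virtual-pca.xml
virsh autostart virtual-pca
virsh start virtual-pca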
Verify the KVM virtual PCA domain is persistent and will auto-restart if the KVM host is rebooted:
virsh list --autostart --persistent
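The output should include the virtual PCA domain; the exact column layout varies by virsh version, and the domain name below is a placeholder:

 Id   Name          State
------------------------------
 1    virtual-pca   running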
Access the virtual PCA KVM domain console
- Launch virt-manager and connect to your KVM hypervisor host
- Find your virtual PCA in the list and either double-click it or highlight it and click the “Show the virtual machine console and details” button
- Proceed with completing the virtual PCA configuration wizard
Initial configuration wizard walk-through
At initial startup, the virtual PCA will run a configuration wizard on the virtual machine console. First, you will be prompted for an initial virtual PCA organization and organization administrator username and password. Second, you will be prompted for hostname and network settings.
On the initial screen of the configuration wizard, you supply the name, email address, organization name and credentials of the person who is to be the administrator of the virtual PCA. This information will be used to create the administrator login account on the virtual PCA.
After entering the above information, arrow-down to “Next” and press Enter.
On the second screen, you supply the virtual PCA’s hostname, domain and network configuration.
Enter a hostname and domain for accessing your virtual PCA on your network.
If you choose the network configuration to be set by DHCP, arrow down to the “Preferred NTP Servers” section. If you need to set the IP address statically, arrow down to “Static,” press Enter and follow the next step.
For static IP address configuration, you will be prompted for the host IP address (IPv4 only), a netmask (in dot-decimal notation), the default gateway/next-hop IP and primary and secondary DNS server IP addresses. After setting these values, arrow down to the “Preferred NTP Servers” section.
The default NTP server your vPCA will connect to is pool.ntp.org. If this meets your needs, arrow down to “Finish” and press Enter. Alternatively, follow the next step.
You can choose which NTP servers your vPCA connects to by editing the “Preferred NTP Servers” field, and optionally the “Additional NTP Servers” field. These fields accept comma-separated hostnames and IP addresses. Your vPCA will prefer synchronizing with any of the “Preferred NTP Servers”. When you’ve finished editing these fields, arrow down to “Finish” and press Enter.
Wait while the configuration wizard values are applied; the virtual PCA services need to restart for all configuration to take effect.
Once the configuration wizard settings have been applied, a “Ready” message will appear with the URL at which you can reach the new virtual PCA web interface in a browser.
Use the username and password you entered on the first screen of the configuration wizard to log in.
At this point, you will need to contact AppNeta Support for your licenses to be installed.
Virtual PCA configuration via API
The virtual PCA hostname and network configuration can be changed by calls to a REST API. There is also an endpoint to enable or disable the maintenance service tunnel. The API can be accessed interactively by going to the virtual PCA’s swagger UI:
https://<virtual PCA hostname>:9000/swagger
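The same endpoints can also be called from a script with any HTTP client. The example below is only a sketch: the endpoint path and JSON fields are hypothetical, and the -k flag assumes the virtual PCA is using a self-signed certificate; consult the swagger UI for the actual endpoint names and request schemas.

# Hypothetical endpoint and payload; see the swagger UI for the real API
curl -k -X PUT "https://<virtual PCA hostname>:9000/api/hostname" \
     -H "Content-Type: application/json" \
     -d '{"hostname": "pca01", "domain": "example.com"}'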