LiveNX components can be deployed in three ways: Virtual, Physical, and Cloud. The Virtual Deployment Specifications, as well as the Cisco and Multi-Vendor Device Support lists, are provided below.
If you are interested in deploying LiveNX in a Physical, Cloud (Azure, AWS, or Google Cloud), Hyper-V, or KVM environment, please contact your LiveAction SE or sales representative, or technical support (email@example.com), for the specifications appropriate to those environments and your needs.
Virtual Deployment Specifications
The LiveNX Server is primarily deployed in a VMware vCenter environment and is fully operational right out of the box. The server operating system runs on a Linux (TinyCore or Ubuntu) platform.
Server Platform Specifications:
- VMware ESXi v5.0 or higher – VMware Hardware Version 8 (vmx-8)
- Network Hardware – at least two physical NICs on ESXi
  - Supports up to 10 Gbps
  - Virtual NICs in the OVA use the E1000 adapter
Client Platform Specifications:
- Windows 10 or Mac OS X (64-bit OS)
- 4 Cores
- 8 GB RAM
- Web browser: IE11 and higher, Firefox, Chrome, and Safari
NOTE: The client application can be launched via Web Start directly from the LiveNX Web Server, or it can be installed as a 64-bit client application for Windows or Mac. For large-scale deployments, the installed client application is recommended, as it scales and performs to a higher capacity than the Web Start version.
Cisco Device Support – SNMP & Flow
Multi-Vendor Device Support – Flow
LiveSP Deployment Requirements

| Component | Requirement |
|---|---|
| Hardware | LiveSP can be deployed on a single server or on a distributed infrastructure. I/O is optimized for random data access; data storage is implemented on the physical machine with SSDs. The other components can be virtualized. |
| Operating System | The currently supported and validated Linux distributions are Debian, Red Hat, and Ubuntu, with a kernel version greater than 3.10. Kernel version 3.16 onwards is recommended for higher-performance data access. |
| Browser | Service providers, administrators, operations teams, and end customers access LiveSP through supported web browsers: IE 11, Mozilla Firefox (latest), Google Chrome (latest), and Safari (latest). |
LiveSP Sizing Guide
| Component | Sizing Tool * |
|---|---|
| Link (Bandwidth) | Bandwidth = Average flow size × Flow count at this max traffic × Predicted max aggregated traffic. A typical enterprise network with 10,000 live interfaces and a static template ≈ 200 Mbps. |
| Hardware (Storage) | Storage = Client profiles × Data retention rule × Predicted average aggregated traffic |
* The LiveSP sizing tool is designed to help size the link and storage. It is based on observations of large networks, but results may vary with the traffic profile. Please contact LiveSP support for a detailed analysis.
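The two sizing rules are plain multiplications and can be sketched directly in code. In this minimal Python sketch the function names, parameter names, and units are illustrative assumptions; the actual values and units come from the LiveSP sizing tool:

```python
# Sketch of the two LiveSP sizing rules from the table above.
# Parameter names and units are illustrative assumptions; real
# values and units come from the LiveSP sizing tool.

def link_bandwidth(avg_flow_size, flow_count_at_peak, predicted_max_traffic):
    """Bandwidth = average flow size * flow count at max traffic
    * predicted max aggregated traffic."""
    return avg_flow_size * flow_count_at_peak * predicted_max_traffic

def storage(client_profiles, data_retention, predicted_avg_traffic):
    """Storage = client profiles * data retention rule
    * predicted average aggregated traffic."""
    return client_profiles * data_retention * predicted_avg_traffic
```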
Flexible NetFlow (FNF) v9: (IPv4/IPv6 compatible) Version 9 introduced the FNF capability, which makes NetFlow a highly versatile protocol. Its flexibility makes it particularly relevant for complex reporting and heterogeneous data:
- Flexible key-field aggregation
- Variable number of data fields
- Unidirectional or bidirectional flows
- Sampled or unsampled
- Multi-vendor (430 standardized fields, thousands of vendor-specific fields)
- Exports that can be aggregated and/or synchronized
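As an illustration of the unidirectional/bidirectional point above, a collector can fold the two directions of a conversation into one record by normalizing the flow key. This is only a sketch of the idea, not LiveNX or LiveSP internals; all names are hypothetical:

```python
from collections import defaultdict

def biflow_key(src, sport, dst, dport, proto):
    # Order the two endpoints so that (A -> B) and (B -> A)
    # map to the same bidirectional key.
    a, b = (src, sport), (dst, dport)
    return (min(a, b), max(a, b), proto)

def merge_biflows(records):
    """Fold unidirectional flow records (dicts with src, sport, dst,
    dport, proto, bytes) into per-conversation byte totals."""
    conversations = defaultdict(int)
    for r in records:
        key = biflow_key(r["src"], r["sport"], r["dst"], r["dport"], r["proto"])
        conversations[key] += r["bytes"]
    return dict(conversations)
```

Two records for the same TCP conversation, one per direction, collapse into a single entry whose byte count is the sum of both directions.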
IPFIX: (“IP Flow Information eXport”) Also referred to as NFv10, IPFIX is the industry-standardized version of NetFlow. It builds on NFv9 for most features and adds flexibility (variable-length fields, sub-application extracted fields, options data).
Note: NetFlow version 9 and IPFIX are the export protocols of choice for AVC, because they can accommodate the flexible record format and multiple records required by the Flexible NetFlow infrastructure. IPFIX is recommended.
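The variable-length fields mentioned above are encoded as defined in RFC 7011: the first octet of the field carries its length, and the escape value 255 indicates that the real length follows in the next two octets in network byte order. A minimal decoder sketch (the function name is an assumption, not a LiveSP or LiveNX API):

```python
import struct

def read_varlen_field(buf, offset=0):
    """Decode one IPFIX variable-length field (RFC 7011, section 7).

    Returns (value_bytes, next_offset).
    """
    length = buf[offset]
    offset += 1
    if length == 255:
        # Long form: the actual length is carried in the next
        # two octets, network byte order.
        (length,) = struct.unpack_from("!H", buf, offset)
        offset += 2
    return buf[offset:offset + length], offset + length
```

For example, `read_varlen_field(b"\x03abc")` returns `(b"abc", 4)`.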
If service providers choose a centralized collection, they must size the collection link properly. The link sizing recommendation depends on:
- IWAN features enabled: more features mean more data to export.
- Bandwidth repartition per site: a headquarters with 500 employees generates more traffic variety (and therefore more exports) than a small office with 20 employees.
- Traffic distribution over time: the CPEs do not reach their maximum traffic at the same time.
A typical 10,000-CPE enterprise SP IWAN network requires about 200 Mbps of bandwidth:
Collection Link Max Speed = Average Flow Size * Predicted Max Aggregated Traffic * Flow Count at this Max Traffic
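A back-of-the-envelope check of that formula in Python; every input number below is an illustrative assumption chosen to reproduce the 200 Mbps figure, not a measured value:

```python
# All inputs are illustrative assumptions, not measured values.
avg_flow_record_bits = 100 * 8        # assume a ~100-byte export record
flows_per_sec_per_gbps = 2500         # assumed flow-record rate per Gbps of user traffic
peak_aggregated_traffic_gbps = 100    # assumed peak traffic aggregated across all CPEs

link_bps = avg_flow_record_bits * flows_per_sec_per_gbps * peak_aggregated_traffic_gbps
print(link_bps / 1e6, "Mbps")  # -> 200.0 Mbps
```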