What is Network Simulation? – Part Two

January 31, 2019 by The Scalable Team

Last week, we introduced our new series on Network Simulation, taking a look at what network simulation means and how it can help organizations of all kinds optimize performance and network management. This week, we continue with Part Two of the series by exploring how network simulation works, as well as the requirements it must satisfy to be a useful tool for operators.

In network simulation, a software replica of the network, or ‘network model’, is used to analyze how the different components of the network (network devices such as routers and access points, smartphones, radios, satellites, wireless channels, etc.) interact to provide end-to-end delivery of traffic from the supported applications. For instance, the network model of a Wi-Fi network replicates the networked environment, including all hardware, and simulates traffic as it moves through the network, accounting for the interfaces between components as well as mobility, terrain, and interference.

Network simulation provides tools to analyze application, network, or device performance: real-time visualization and statistics display while the simulation takes place, and post-simulation analysis of statistical data collected during the simulation (for example, the number of calls dropped and the average end-to-end latency and throughput of text/video data). Such analyses may validate requirements or help identify potential problems and subsequently evaluate the effectiveness of alternative solutions. For example, a battlefield network planner may use network simulations to assess whether a given laydown of airborne and ground-based communication assets will provide the Quality of Service needed to support both high-priority ‘call for fire’ messages and the periodic updates needed to maintain Situational Awareness (SA) for the successful completion of the mission. As another example, a network planner for a power distribution grid may run simulations to predict the behavior of the networked environment under various operational scenarios, including cyber attacks.
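As a toy illustration of the kind of post-simulation statistics described above, the sketch below models a single bottleneck link as a FIFO queue with a bounded buffer and reports average end-to-end latency and the drop rate. All parameter values (buffer size, service time, arrival rate) are made-up illustrative numbers, not those of any real simulator.

```python
import bisect
import random

random.seed(1)

SERVICE_TIME = 1.0    # assumed fixed transmission time per packet (s)
QUEUE_CAPACITY = 5    # assumed maximum number of packets waiting

def simulate(num_packets, arrival_rate):
    """Single FIFO link: returns (average latency, drop rate)."""
    departures = []   # departure times of accepted packets (sorted)
    latencies = []
    dropped = 0
    clock = 0.0
    for _ in range(num_packets):
        clock += random.expovariate(arrival_rate)   # next arrival
        # packets still in the system when this one arrives
        backlog = len(departures) - bisect.bisect_right(departures, clock)
        if backlog > QUEUE_CAPACITY:
            dropped += 1                            # buffer full: drop
            continue
        start = max(clock, departures[-1] if departures else 0.0)
        finish = start + SERVICE_TIME
        departures.append(finish)
        latencies.append(finish - clock)            # queueing + service
    return sum(latencies) / len(latencies), dropped / num_packets

avg, drop = simulate(10_000, arrival_rate=0.8)
print(f"average end-to-end latency: {avg:.2f}s, drop rate: {drop:.1%}")
```

Even this toy model shows the interplay the text describes: raising the arrival rate toward the link's service rate drives both latency and drops up sharply, which is exactly the kind of trade-off a planner probes with simulation.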

Network simulation can also be used as a testbed by application developers. The simulator can interface with the application being tested while it runs on external devices, and the application’s performance can then be analyzed under different operational conditions by modifying the network model to represent those conditions. In many cases, running comprehensive experiments to thoroughly test an application using live equipment is very difficult and costly. For example, testing an SA application over a network consisting of thousands of ground- and air-based entities as well as satellites, under different weather conditions, would be very difficult with live equipment, but can be done in a fast, cost-effective and convenient way using a simulation testbed.

For a network simulation tool to be of practical use to network operators, planners, designers, testers and analysts, it must satisfy the following requirements:

    • Fidelity: To have confidence in the simulation results, the network model must accurately represent the characteristics of the network. Among the factors that determine network dynamics, and hence network performance, are the protocols used in the network, terrain and environmental factors, the movement of communicating devices, and the bursty nature of network traffic with its potentially different quality-of-service requirements. The network model should have enough ‘fidelity’, or accuracy, to adequately capture the effects of these factors. As an example, when determining the effective communication bandwidth for hand-held radios in a mountainous region, if the path loss calculations do not take terrain effects into account, the results can be very inaccurate. Similarly, if a simple, uniform-rate model is used to represent traffic that is in reality highly variable and bursty, the simulation results may lead a network planner to incorrectly deduce that a lower-capacity link is sufficient to support the traffic demands.
    • Scalability: The network model should accurately mirror the size of the target network. This is critical because large networks behave very differently from small networks and exhibit network dynamics that are either not present or immaterial in smaller networks. Hence, it is often not possible to directly extrapolate the performance of large networks from simulations of small networks; only ‘at-scale’ models can provide reliable results. For example, for on-the-move networks, the effective throughput and end-to-end message completion rates for ad hoc routing protocols are similar for small networks but can differ by over 400% as the network size, connectivity, or traffic intensity changes. Another example is the effect of network size on end-to-end delay in TCP/IP networks: as network size increases, the number of hops, and hence the end-to-end delay, from source to destination also increases, but in a non-linear fashion. These effects cannot be extrapolated from simulations of smaller networks.
    • Speed: In addition to the fidelity and scalability requirements, it is important that the simulations run fast for the results to be useful. For at-scale models of large networks, speed becomes even more critical because simulation execution time can grow as a quadratic or even exponential function of network size. If the simulation takes too long, the user may need to resort to abstractions that mask significant network dynamics, or rely on simulations of smaller networks; as discussed above, both of these lead to erroneous conclusions. Legacy network simulators relied on sequential model execution, which often ran 50x slower than real time, i.e., it would take 50 hours to simulate one hour of operation of the target network being modeled. In contrast, network simulators that use parallel discrete-event simulation (PDES) can exploit multi-core processors and parallel computing technologies to ensure that these models run at or faster than real time.
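The fidelity point about terrain can be made concrete with a rough sketch: below, the loss on a hand-held radio link is computed first with a pure free-space model and then with a single ridge between the radios added via the standard single knife-edge diffraction approximation (ITU-R P.526). The link geometry, frequency, and ridge height are made-up illustrative values.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def knife_edge_loss_db(h_m, d1_m, d2_m, freq_hz):
    """Extra diffraction loss (dB) from one knife-edge obstacle of
    effective height h above the line of sight, using the ITU-R P.526
    approximation (valid for diffraction parameter v > -0.78)."""
    wavelength = C / freq_hz
    v = h_m * math.sqrt(2 * (d1_m + d2_m) / (wavelength * d1_m * d2_m))
    if v <= -0.78:
        return 0.0
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

# Illustrative 10 km hand-held radio link at 400 MHz, with a 30 m ridge
# 4 km from one radio and 6 km from the other (assumed values)
freq = 400e6
d1, d2 = 4_000.0, 6_000.0
flat = fspl_db(d1 + d2, freq)
ridge = flat + knife_edge_loss_db(30.0, d1, d2, freq)
print(f"free-space: {flat:.1f} dB, with ridge: {ridge:.1f} dB")
```

The ridge adds on the order of 10 dB of loss on this link, which can turn a usable channel into an unusable one; a model that ignores terrain would miss this entirely, as the fidelity requirement warns.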
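At its core, the sequential execution model mentioned under Speed is a single time-ordered event loop; PDES engines partition that loop across cores and synchronize only when an event on one partition schedules an event on another. The sketch below is a stripped-down, hypothetical sequential core, not the API of any real simulator.

```python
import heapq

class Simulator:
    """Minimal sequential discrete-event engine (illustrative only).
    A PDES engine splits the event list across cores and synchronizes
    whenever an event on one core schedules an event on another."""

    def __init__(self):
        self.now = 0.0
        self._events = []   # heap of (time, seq, handler, payload)
        self._seq = 0       # tie-breaker for events at the same time

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._events,
                       (self.now + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self, until):
        # pop events in timestamp order until the horizon is reached
        while self._events and self._events[0][0] <= until:
            self.now, _, handler, payload = heapq.heappop(self._events)
            handler(self, payload)

# Usage: a packet crossing 3 links, each with 0.5 ms latency (assumed)
log = []
def hop(sim, hops_left):
    log.append(sim.now)                 # record arrival time at this node
    if hops_left:
        sim.schedule(0.0005, hop, hops_left - 1)

sim = Simulator()
sim.schedule(0.0, hop, 3)
sim.run(until=1.0)
print(log)  # event times at each hop
```

Because every event must be processed in global timestamp order, a sequential loop like this is inherently serial; the payoff of PDES is precisely that independent regions of the network can advance their local clocks concurrently.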

SCALABLE Network Technologies is the leading provider of live/virtual/constructive communications/networking modeling and simulation tools across all domains (undersea-to-space). We deliver virtualization technology for development, analysis, evaluation and training to military, governmental, commercial, and academic institutions. Our high fidelity, real-time simulation platform incorporates physics-based models of military and commercial satellite, tactical, acoustic and optical networks along with emulation interfaces for live/virtual/constructive integration. Our cyber behavior models provide a vulnerability analysis framework with configurable cyber attack and defense models for IP networks, weapon systems, as well as cyber-physical networks. SCALABLE’s solutions are used by our customers to assess the performance and cyber resiliency of networked communications environments, and support system lifecycle management and operator training.

Continue to follow us here for the continuation of the SCALABLE Tech Talk Blog Series. In upcoming weeks we will continue to explore the benefits and functionality of network simulation.