Network Simulation and Predictable Communications - Part Two

The next installment in our Network Simulation series, today's blog continues the discussion of Predictable Communications, focusing on support for diverse application traffic.

Heterogeneity, scalability, and simultaneous support for diverse application traffic are hallmarks of contemporary computer and communication networks. Why is communication quality hard to predict for such networks? Let's take a simple example: Voice-over-IP (VOIP). According to the phone.com website, one call needs a minimum bandwidth of 100 Kbps, with a recommended target of 3 Mbps for optimal quality; to support 10 concurrent calls, the minimum and recommended bandwidths are 1 Mbps and 5-10 Mbps. So, given a reasonable expectation of available bandwidth and expected call volume, it is easy to ballpark bandwidth needs with a spreadsheet model for different call volumes and different types of networks.
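As a rough illustration, here is a minimal, spreadsheet-style sizing sketch in Python. The 100 Kbps per-call minimum follows the phone.com figure quoted above; the 20% headroom factor for signaling and protocol overhead is an illustrative assumption, not a measured value.

```python
# Spreadsheet-style ballpark sizing for VOIP bandwidth needs.
MIN_KBPS_PER_CALL = 100   # phone.com minimum per concurrent call
OVERHEAD = 1.2            # assumed headroom for signaling/protocol overhead

def min_bandwidth_mbps(concurrent_calls: int) -> float:
    """Ballpark minimum bandwidth (Mbps) for a given call volume."""
    return concurrent_calls * MIN_KBPS_PER_CALL * OVERHEAD / 1000

for calls in (1, 10, 50, 100):
    print(f"{calls:>4} calls -> ~{min_bandwidth_mbps(calls):.1f} Mbps minimum")
```

But consider the problem of estimating VOIP quality as we bring in other factors that are integral to contemporary networks: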

  1. First, in addition to VOIP, the network will be provisioning other types of traffic – email, transaction processing, web traffic, situation awareness data, etc.
  2. Second, the end-to-end VOIP traffic might flow over diverse networks that include wireless, wired, and satellite links, often with very different operating characteristics. 
  3. A third aspect is user mobility, with a user moving from an indoor area to urban terrain or through jungle foliage. While a VOIP provider might optimize voice quality across many of these operating regimes, how does an organization predict voice quality under common and stressed operating conditions and determine its adequacy? This is especially hard when some conditions demand superior video quality at the expense of voice, while others can fall back on 'chat lines' if voice quality degrades substantially.
  4. Last, consider the impact of jamming, where the strength or location of the interfering signal changes dynamically. 

Each of the above factors (network, protocol, and application heterogeneity; network and/or traffic scalability; and wireless signal propagation) introduces dynamics in the behavior of the network that substantially impact its end-to-end performance, and hence the quality of service it provides to the different services it provisions. Network simulations that accurately capture such dynamics can be a powerful tool for meeting the objective of predictable communications for critical military and enterprise applications.

In contrast, simulators that use aggregate or static data (e.g., an average voice call requires 100 Kbps, traverses 3 hops, and can tolerate an average jitter of 25 ms) can characterize the steady-state performance of a network. That may be useful for sizing exercises (e.g., under normal operating conditions, my wireless LAN can support 100 simultaneous VOIP calls), but it will likely yield grossly inaccurate predictions of network, and hence application, performance under realistic (i.e., dynamic) operating conditions. High-fidelity network simulations are critical to accurately capture the impact of realistic operating conditions on network and application performance.
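To make the contrast concrete, here is a deliberately toy numerical sketch (plain Python, not a network simulator): a wireless link whose average capacity comfortably exceeds the offered VOIP load still falls short of that load a meaningful fraction of the time once capacity fluctuates with interference and mobility. All numbers here are hypothetical.

```python
import random

random.seed(1)

OFFERED_LOAD_MBPS = 1.0    # e.g., 10 calls x 100 Kbps, per the sizing above
MEAN_CAPACITY_MBPS = 2.0   # static model sees 2x headroom: "no problem"

# Hypothetical dynamics: capacity swings between 10% and 190% of the mean
# as interference and mobility change the link conditions.
STEPS = 10_000
shortfall = sum(
    1 for _ in range(STEPS)
    if MEAN_CAPACITY_MBPS * random.uniform(0.1, 1.9) < OFFERED_LOAD_MBPS
)

print(f"Static model predicts 0% shortfall; "
      f"toy dynamic model sees {100 * shortfall / STEPS:.1f}%")
```

This toy model only gestures at such dynamics; a high-fidelity network simulation captures them at the level of packets, protocol stacks, and signal propagation.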

Written by Dr. Rajive Bagrodia. As founder and CEO of SCALABLE Network Technologies (SCALABLE), Dr. Bagrodia is a thought leader in the field of modeling and simulation, test and analysis, and assessment of the resiliency and impact of cyber threats on large scale networks.
