The Application Usage Monitoring Point Details charts include columns showing network and application latency. The calculations for these values are based on concepts described in this ntop article by Luca Deri.

Network latency

Network latency is the time it takes a packet to travel between a client and a server. It is calculated as follows:

network latency = (server network delay + client network delay) / 2

where server network delay and client network delay can be seen here:

usage-network-latency.png
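
As a rough illustration, here is a minimal sketch (Python, with hypothetical type and field names) that derives the two delays from handshake timestamps observed at the monitoring point and halves their sum:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HandshakeTimestamps:
        """Times (in seconds) at which the monitoring point saw each handshake packet."""
        syn: float      # client -> server SYN
        syn_ack: float  # server -> client SYN-ACK
        ack: float      # client -> server ACK

    def network_latency(hs: Optional[HandshakeTimestamps]) -> Optional[float]:
        """(server network delay + client network delay) / 2, or None if no handshake was seen."""
        if hs is None:
            return None
        server_network_delay = hs.syn_ack - hs.syn   # monitoring point <-> server round trip
        client_network_delay = hs.ack - hs.syn_ack   # monitoring point <-> client round trip
        return (server_network_delay + client_network_delay) / 2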

Network latency is calculated using the TCP handshake because these packets are handled entirely by the client and server network stacks, without application processing, so host delay can be assumed to be negligible. Also, network latency is only available for TCP flows in which the handshake is seen. There are two cases in which the handshake will not be present:

  • Long-lived sessions (like SSL) - the flow is terminated every five minutes and a new one is created. The handshake is only present in the initial flow.
  • Mid-stream TOS/DSCP changes - these are also interpreted as new flows. Again, the handshake is only present in the initial flow.

In situations where network latency can’t be calculated, you will see a ‘—’ in the charts.
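
For example, chart-rendering code might fall back to a dash whenever no latency value is available; the helper below is purely illustrative:

    from typing import Optional

    def format_latency(latency_ms: Optional[float]) -> str:
        """Render a latency value for the charts, or '—' when it could not be calculated."""
        return f"{latency_ms:.1f} ms" if latency_ms is not None else "—"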

Note that ‘network latency’ in Usage monitoring is different from ‘network response’ in Experience monitoring. Experience monitoring maps the activities involved in a page load to ‘server’, ‘network’, or ‘browser’. As a result, ‘network response’ covers the time from the TCP handshake to the first byte of the response, while Usage monitoring’s ‘network latency’ considers only the TCP handshake.

Application latency

Application latency is the time the server takes to respond to a request from the client. It is calculated as follows:

application latency = application delay - server network delay

where application delay can be seen here:

usage-application-latency.png

In practice, application delay is the time from the last request byte sent by the client to the first response byte received by the client. Because this interval includes the server network delay, that delay must be subtracted to obtain the application latency. Since server network delay is part of the calculation, two things follow (illustrated in the sketch after this list):

  • The TCP handshake must be seen so that server network delay can be calculated.
  • For any flow where network latency is much greater than application latency, application latency becomes inconsequential. In this case, Usage monitoring takes application latency to be zero.
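
A minimal sketch of this calculation, assuming per-flow timestamps observed at the monitoring point and a hypothetical dominance threshold (the real cut-off is not documented here):

    from typing import Optional

    DOMINANCE_RATIO = 10.0  # hypothetical: how much larger network latency must be to zero out the result

    def application_latency(
        last_request_byte_ts: float,     # when the client's last request byte was seen
        first_response_byte_ts: float,   # when the first response byte was seen
        server_network_delay: Optional[float],
        network_latency: Optional[float],
    ) -> Optional[float]:
        """Application delay minus server network delay; None if the handshake was not seen."""
        if server_network_delay is None or network_latency is None:
            return None  # handshake not seen, so server network delay is unknown
        application_delay = first_response_byte_ts - last_request_byte_ts
        latency = application_delay - server_network_delay
        # If the network component dominates, the application component is taken to be zero.
        if latency <= 0 or network_latency > latency * DOMINANCE_RATIO:
            return 0.0
        return latency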

The Application Usage Monitoring Point Details charts provide two application latency values: the average of all matching flows, and the maximum of all matching flows. This is possible because latency is calculated on every client-server exchange that occurs within the flow. Whenever flow history is summarized or collated for presentation, the average of the averages and the maximum of the maximums are taken. The calculated application latency applies equally to the inbound and outbound directions, so you’ll see the same values in each direction.
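
A simple sketch of that two-level summarization (function names are hypothetical):

    from statistics import mean

    def flow_latency_stats(exchange_latencies: list[float]) -> tuple[float, float]:
        """Per-flow (average, maximum) over every client-server exchange in the flow."""
        return mean(exchange_latencies), max(exchange_latencies)

    def summarize(flow_stats: list[tuple[float, float]]) -> tuple[float, float]:
        """When flow history is collated: average of the averages, maximum of the maxima."""
        return mean(avg for avg, _ in flow_stats), max(mx for _, mx in flow_stats)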

Note that ‘application latency’ in Usage monitoring and ‘server response’ in Experience monitoring are comparable but not equivalent. Both are calculated in the same way: the total response time (GET to first byte in Usage monitoring, SYN to first byte in Experience monitoring) minus the network component. So in theory, if measured simultaneously against the same web service, ‘application latency’ and ‘server response’ would be the same. In practice, however, Usage monitoring provides an average and a maximum, while Experience monitoring takes a single measurement periodically. So, over time, these measurements should be comparable but not equivalent.