The Usage monitoring application and network latency calculations are based on concepts in this nTop article by Luca Deri.
Network latency is essentially a packet’s travel time between client and server. It is calculated from the TCP handshake because handshake packets are processed by the client and server only up to Layer 3, so we can assume negligible processing delay is introduced during the back-and-forth. In practice, Usage monitoring takes one-half the total time between the client sending the SYN and the server receiving the ACK.
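The handshake-based calculation can be sketched as follows. This is a minimal illustration, not the product’s actual implementation; the function name and timestamps are hypothetical, and we assume a probe that records when it sees the client’s SYN and the final ACK of the three-way handshake.

```python
def network_latency(syn_ts: float, ack_ts: float) -> float:
    """One-way network latency, estimated as half the time between
    the client's SYN and the final ACK of the TCP three-way handshake
    (i.e. half of one observed round trip). Timestamps are in seconds."""
    return (ack_ts - syn_ts) / 2.0

# Hypothetical capture: SYN seen at t=0.000s, handshake ACK at t=0.084s
latency = network_latency(0.000, 0.084)
print(f"{latency * 1000:.1f} ms")  # prints 42.0 ms
```

Because the handshake involves no application-layer work on either side, halving this round trip is a reasonable proxy for one-way travel time.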
Network latency is only available for TCP flows in which the handshake is seen. There are two cases in which the handshake will not be present in the flow:
- For long-lived sessions such as SSL, the existing flow is terminated and a new one created every 5 minutes. Only the initial flow contains the handshake and would therefore have latency calculations.
- Mid-stream TOS/DSCP changes are also interpreted as new flows. Again, only the initial flow contains the handshake and would therefore have latency calculations. Where latency couldn’t be calculated, you’ll see a ‘—’.
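The practical consequence of these two cases can be sketched in a few lines: flows that never saw a handshake simply carry no latency value, and the display falls back to a dash. This is an illustrative sketch only; the function name and rendering are assumptions, not the product’s code.

```python
from typing import Optional

def display_latency(latency: Optional[float]) -> str:
    """Render a flow's network latency for display.

    Continuation flows (5-minute SSL cuts, mid-stream TOS/DSCP changes)
    never see the TCP handshake, so their latency is None and is shown
    as an em dash rather than a misleading zero."""
    if latency is None:
        return "—"
    return f"{latency * 1000:.0f} ms"

print(display_latency(0.042))  # prints 42 ms
print(display_latency(None))   # prints —
```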
What’s the difference between Usage monitoring’s ‘network latency’ and Experience monitoring’s ‘network response’? Experience monitoring is designed to illustrate and help improve end-user experience, while Usage monitoring is essentially acontextual, so their respective ‘network’ calculations are not meant to be compared 1:1. Experience monitoring maps the activities involved in a page load to ‘server’, ‘network’, or ‘browser’. Specifically, this design results in ‘network response’ covering the time from the TCP handshake to the first byte of the response, while Usage monitoring’s ‘network latency’ considers only the TCP handshake.
Application latency is the time the server takes to field a request from the client. In practice, it is measured from the last request byte sent by the client to the first response byte received by the client. That interval naturally includes network latency, so network latency is subtracted out. This has two notable consequences:
- The TCP handshake must be seen (else network latency cannot be calculated).
- For any flow where network latency is much greater than application latency, application latency is inconsequential; in this case Usage monitoring takes application latency to be zero.
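The calculation described above, including both consequences, can be sketched as follows. The function name and parameters are hypothetical; the clamp to zero reflects the behavior described for network-dominated flows.

```python
from typing import Optional

def application_latency(last_req_byte_ts: float,
                        first_resp_byte_ts: float,
                        network_latency: Optional[float]) -> Optional[float]:
    """Application latency = (first response byte - last request byte)
    minus network latency. Timestamps are in seconds.

    If the handshake was never seen, network latency is unknown and
    application latency cannot be calculated either (returns None).
    When network latency dominates, the result is clamped to zero."""
    if network_latency is None:
        return None
    raw = first_resp_byte_ts - last_req_byte_ts
    return max(raw - network_latency, 0.0)

# Request finished at t=1.0s, first response byte at t=1.5s,
# measured network latency 0.1s -> 0.4s of server-side work.
print(application_latency(1.0, 1.5, 0.1))
```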
Usage monitoring provides two application latency values: the average of all matching flows and the maximum of all matching flows. This is possible because latency is calculated on every client-server exchange that occurs within the flow. Whenever flow history is summarized or collated for presentation, the average of the averages and the maximum of the maximums are taken. The calculated application latency applies equally to the inbound and outbound directions, so you’ll see the same values in each direction.
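The summarization step can be sketched like this: each flow contributes a per-flow (average, maximum) pair, and collation takes the average of the averages and the maximum of the maximums. The data shape and function name are assumptions for illustration.

```python
from typing import List, Tuple

def collate(flow_summaries: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Collate per-flow (avg_latency, max_latency) pairs for presentation.

    Returns (average of the averages, maximum of the maximums)."""
    avgs = [avg for avg, _ in flow_summaries]
    maxes = [mx for _, mx in flow_summaries]
    return sum(avgs) / len(avgs), max(maxes)

# Two flows: one averaging 10 ms (peak 40 ms), one averaging 20 ms (peak 60 ms)
print(collate([(10.0, 40.0), (20.0, 60.0)]))  # prints (15.0, 60.0)
```

Note that averaging averages weights every flow equally regardless of how many exchanges it contained, which is a common simplification in rolled-up summaries.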
What’s the difference between Usage monitoring’s ‘application latency’ and Experience monitoring’s ‘server response’? ‘Application latency’ and ‘server response’ are calculated the same way: take the total response time (SYN to first byte in Experience monitoring, and GET to first byte in Usage monitoring) and subtract the network component. So in theory, if measured simultaneously against the same web service, ‘application latency’ and ‘server response’ would be the same. In practice, however, Usage monitoring provides an average and a maximum, while Experience monitoring takes one single measurement periodically. Over time these measurements should be comparable but not equivalent.