Just to review, you define a web app in AppView in terms of: an endpoint, a series of scripted actions, and a location from which the endpoint is accessed and those actions are performed. You can specify one or more of each of those components for each of your web apps. AppView will then generate all of the possible unique combinations and set up monitoring for each. For example, two locations, three targets, and one script yield six combinations. Each of those combinations is a ‘web monitor’.
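The combination logic amounts to a cartesian product of the components. A minimal sketch; the location, target, and script names below are made up for illustration:

```python
from itertools import product

# Hypothetical component names: a web app expands into one web
# monitor per unique (location, target, script) combination.
locations = ["Boston", "Frankfurt"]
targets = ["api.example.com", "www.example.com", "cdn.example.com"]
scripts = ["checkout-flow"]

monitors = list(product(locations, targets, scripts))
print(len(monitors))  # 2 locations x 3 targets x 1 script = 6 monitors
```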

So a web app is essentially a container for one or more web monitors, which collectively represent all of your service instances and the geographic distribution of your user base. AppView collects data at the web monitor level, then aggregates the data for all the web monitors in your app to give you an app-level view of performance. App-level performance can be viewed from the web applications page or from your web dashboard, while monitor-level and test-level data can be viewed from the web monitors page. Remember that each web monitor collects data from a specific geographic perspective. It’s important to keep an eye on this level to make sure that the end user experience is positive for that segment of your user base, even if your app-level metrics are within tolerance.

Drill down into web monitors

The icon in the first column represents the latest results. Click a row to see the timeline for a target. The AppView test timeline page continuously plots web monitoring results; scrub across the time range chart to zoom in on a particular time period.

OK
HTTP status 200 was returned for all milestones in the script. If an alert profile is applied, no violations occurred.
Error/Violated
An HTTP error status code was returned for at least one milestone in the script. If an alert profile is applied, at least one milestone violated a condition.
Failed
HTTP status of all milestones is ‘failed’.
Disabled
Web monitoring is disabled.
Unlicensed
Web monitor source or target is unlicensed.

End user experience

The end user experience chart shows how much of the total time needed to complete the user action script was due to the server, the network, and the browser. The pie chart illustrates the average over the filtered time period. Changes in monitoring configuration are marked by a thin grey line. Hover over the timeline and click a blue pip to see the results of a script execution. A marker is placed when results violate or clear a condition in the web alert profile. Hover for event details.

Milestone breakdown

The milestone breakdown chart shows how much of the total time needed to complete the user action script was due to each milestone. The pie chart illustrates the average over the filtered time period. Changes in monitoring configuration are marked by a thin grey line. Hover over the timeline and click a blue pip to see the results of a script execution. A marker is placed when results violate or clear a condition in the web alert profile. Hover for event details.

HTTP throughput

The pie chart illustrates the average over the filtered time period.

Total capacity
Taken from the PathView analysis, the maximum throughput for the path.
HTTP throughput
From the time you receive the first byte, how fast did you get the rest of the resource?
HTTP performance (overall)
Compares HTTP throughput to total capacity.
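As a rough sketch with made-up numbers, the overall performance figure is simply the ratio of measured HTTP throughput to the path's total capacity:

```python
# Hypothetical figures: overall HTTP performance compares measured
# HTTP throughput against the path's total capacity from PathView.
total_capacity_mbps = 100.0   # maximum throughput for the path
http_throughput_mbps = 42.0   # measured after the first byte arrives

performance_pct = http_throughput_mbps / total_capacity_mbps * 100
print(f"{performance_pct:.0f}%")  # 42%
```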

Response time

The response time chart shows you how much of the response is due to the network and how much is due to the server.

Total response
The amount of time that it takes to establish a TCP connection and receive the first byte (i.e., SYN to first byte).
Server response
Total response time minus network response time.
Network response
Assuming no server delay, the minimum amount of time it takes to establish a TCP connection and receive the first byte (i.e., time on the wire). For HTTP, AppView calculates this as 2*RTT; for HTTPS, it’s 4*RTT. Why? RTT is the round-trip time from A to B and back to A. So, from the TCP handshake to the first byte: SYN to SYN-ACK takes one RTT, and ACK to first byte takes one more. In the case of HTTPS, a couple more round trips are necessary for the SSL handshake.
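The arithmetic above can be sketched as follows; the RTT and total-response figures are hypothetical:

```python
def network_response_ms(rtt_ms, https=False):
    """Minimum wire time from TCP SYN to first byte.
    HTTP:  1 RTT for SYN/SYN-ACK + 1 RTT for ACK-to-first-byte = 2*RTT.
    HTTPS: roughly two extra round trips for the SSL handshake = 4*RTT."""
    return (4 if https else 2) * rtt_ms

def server_response_ms(total_ms, rtt_ms, https=False):
    """Server response = total response minus network response."""
    return total_ms - network_response_ms(rtt_ms, https)

# Hypothetical numbers: 30 ms RTT, 150 ms from SYN to first byte.
print(network_response_ms(30))       # 60
print(server_response_ms(150, 30))   # 90
```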

Compare web monitors

When you have multiple web monitors that share a common attribute, e.g. they have the same source or target, or use the same set of user actions, AppView can show you performance results for all those web monitors in one place.

Compare by group

  1. Change the group by setting as desired.
  2. Click the compare button for your group.
  3. Your new view does not persist until you save it. Click > save.
  4. Saved comparison views are shown in AppView > comparison views.

Custom comparison

  1. Click bulk action.
  2. Select the web monitors you want to compare.
  3. From the bulk action menu, select compare.
  4. Your new view does not persist until you save it. Click > save.
  5. Saved comparison views are shown in AppView > comparison views.

Drill down into web tests

  1. Scrub across the time range chart to zoom in on a particular time period. Click a blue pip in user experience or milestone breakdown to go to the drill-down page, where you can see complete results for the script execution. An alert banner will appear on this page if the web monitor is targeting a web server that does not respond to ICMP, and you’ll be prompted at that time to select a supplementary target.
  2. Move along the timeline to see results from a different execution. See how the server-network-browser breakdown works in transaction performance.
  3. Each piece of a webpage—images, stylesheets, javascript, etc.—has to be requested separately. Each row in this panel shows you the name, type, and domain of a requested resource, along with a bar graph showing how the download time was spent. The bar graphs are aligned into a waterfall chart so you can see how each request corresponds to the progress of the entire page.
Blocking
Time spent in a browser queue waiting for a network connection.
DNS lookup
Time spent resolving the domain name in the URL.
Connecting
Time spent establishing a TCP connection with the web server.
Waiting
Time spent waiting for the first http response from the web server.
Receiving
The time it took for the server to send the entire response starting from the first byte received.
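The phases above can be sketched as differences between per-resource event timestamps; the field names and values below are hypothetical, not AppView's actual data model:

```python
# Hypothetical event timestamps for one requested resource,
# in ms since the start of the page load.
t = {
    "queued": 0, "dns_start": 5, "connect_start": 25,
    "request_sent": 40, "first_byte": 120, "last_byte": 180,
}

phases = {
    "blocking":   t["dns_start"] - t["queued"],           # queued, waiting for a connection
    "dns_lookup": t["connect_start"] - t["dns_start"],    # resolving the URL's domain
    "connecting": t["request_sent"] - t["connect_start"], # TCP handshake
    "waiting":    t["first_byte"] - t["request_sent"],    # until the first response byte
    "receiving":  t["last_byte"] - t["first_byte"],       # first byte to last byte
}
print(phases)
```

Stacking these phases per resource, aligned by start time, produces the waterfall chart described above.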

Transaction performance

The transaction performance panel shows a breakdown of the milestone time by server, network, and browser. But how does AppView derive these times? Most page loads are complex interactions that involve downloading a lot of elements—images, javascript libraries, stylesheets, etc.— from multiple servers. When there are performance fluctuations, it’s important to be able to tell at a glance when and why they are occurring. AppView gives you this insight by apportioning the total time taken to complete a scripted transaction or an individual milestone into three categories: server, network, and browser.

Network time
Consists of two parts: the time it takes to establish a connection and send the resource request to the server, and the time between receiving the first and last byte of the response. Modern browsers download many resources in parallel; so, while network time is calculated for each resource in the milestone and then totaled, concurrent times are counted only once.
Server time
The time between the server receiving the request and the browser receiving the first byte of the response, adjusted for network latency. As with network times, server activity may be happening in parallel for many resources, so we count concurrent times only once.
Browser time
The time it takes for the AppView internal browser to render the page after all the content is received; what’s important to understand is that browser rendering begins after the first byte is received, and occurs in parallel to all of the other activities you see in the milestone performance waterfall chart.
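The "concurrent times are counted only once" rule for network and server time amounts to taking the union of per-resource time intervals rather than their sum. A minimal sketch of that idea (not AppView's actual implementation):

```python
def active_time(intervals):
    """Total time covered by any interval, counting overlap once.
    intervals: list of (start, end) pairs in ms."""
    total, covered_end = 0, float("-inf")
    for start, end in sorted(intervals):
        if start > covered_end:        # gap: a new covered span begins
            total += end - start
            covered_end = end
        elif end > covered_end:        # overlap: extend the current span
            total += end - covered_end
            covered_end = end
    return total

# Hypothetical per-resource network intervals: two downloads overlap
# between 50 and 100 ms, so that stretch is counted only once.
print(active_time([(0, 100), (50, 150), (200, 250)]))  # 200
```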

So how do you take advantage of the server, network, browser breakdown? Look at each time as a percentage of the total page load time. Find the biggest color block on the bar graph and speed up that element, because chances are it’s the one slowing things down; when you do, you’ll see that color block get smaller and the others get bigger. If performance is still slow, repeat. This is how the transaction breakdown is useful: it keeps you focused on the elements that most impact performance, so you can resolve issues faster. Keep in mind that, out of context, the individual times won’t tell you much about an element’s impact on page load: arbitrarily shaving off server time might not result in a noticeably faster page load. This is especially true for browser time, since it runs concurrently with the other activities.