“Domains” Report
The “Domains” report shows the speed status of the measured domains. It is the most important report for tracking how the speed of the entire site develops over time.
It answers all these questions:
- How fast is our site right now?
- What is the trend of the metrics?
- Have optimizations succeeded?
- Did changes to the site cause a speed degradation?
- What are the priorities for optimization?
In addition to the main Core Web Vitals (LCP, INP, CLS), the “Domains” report also includes other data from the Chrome UX Report. This allows you to see website speed in a broader context – for example, what types of devices visitors use, how far they are from the server (RTT), which content source determines the LCP, or what exactly is delaying image loading.
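If you ever want to spot-check the numbers in this report against the source, the same user data can be queried from the public Chrome UX Report API. Below is a minimal, illustrative sketch; the API key placeholder and the example origin are ours, not part of the report:

```ts
// Minimal sketch: querying the public CrUX API for one origin.
// Replace CRUX_API_KEY with your own Google API key (Chrome UX Report API enabled).
const CRUX_API_KEY = "YOUR_API_KEY";

async function fetchCruxRecord(origin: string, formFactor: "PHONE" | "DESKTOP" | "TABLET") {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        origin,
        formFactor,
        metrics: [
          "largest_contentful_paint",
          "interaction_to_next_paint",
          "cumulative_layout_shift",
        ],
      }),
    }
  );
  if (!response.ok) throw new Error(`CrUX API error: ${response.status}`);
  return response.json();
}

// Example: log the 75th percentile LCP for an origin's mobile traffic.
fetchCruxRecord("https://www.example.com", "PHONE").then((data) => {
  console.log(data.record.metrics.largest_contentful_paint.percentiles.p75);
});
```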
Before diving further, make sure you're clear on the following:
- All data in this report come from Google (Chrome UX Report), so it's essential to know the differences between the various types of speed measurement (synthetic tests, CrUX, RUM).
- You should also know how we measure website speed in our monitoring.
- It is highly recommended to properly set the measured domains. You might include domains for your language versions or even competitors.
Ready? Let’s proceed.
Differences Between Free and PLUS Tests
The differences in the “Domains” report are as follows:
- In free tests, you can measure only 3 domains per test. In PLUS tests, you can measure 5 domains, or more for an additional fee.
- In free tests, you only see monthly or quarterly history (if logged in). In PLUS tests, you see up to a year of measurement history.
- The PLUS tests also include the Navigation Types report.
For professional speed measurement, we recommend PLUS tests.
Relationship Between “Watchdog” and “Domains” Reports
You might already know that the “Watchdog” report is the primary tool for tracking potential improvements or deteriorations in metrics, and thus in speed itself.
However, we consider the “Domains” report more important. The main benefit of the Domains report is that it tracks user data (CrUX). Watchdog only collects synthetic data from measured URLs.
Same metrics, different results. Why?
Watchdog has the advantage of quickly detecting metric improvements or deteriorations. However, the actual status needs to be verified in the “Domains” report, which displays CrUX user data. These are only shown cumulatively over the last 28 days, making them unsuitable for daily notifications sent from Watchdog.
Thus, we need both the fast Watchdog and the precise Domains report.
Specific Graphs in the “Domains” Report and How to Use Them
Let's now look at the individual graphs, their content, and meaning:
User Measurements for Domains
The first graph displays the current state of the metrics for a given device type (mobile or desktop), along with any recent movement of each metric for better or worse:
Here you see Google's user data from the Chrome UX Report.
It includes both Core Web Vitals (LCP, INP, CLS) and auxiliary metrics – TTFB and FCP.
In the table, you can see how your metrics compare to competitors or other domains of yours.
Red or green triangles next to numbers indicate significant changes compared to the state a month ago. Metrics marked this way deserve more attention.
User Metric Trends Over Time
The next graph shows metric trends over time:
Two different views of the user data: the metric value trend (75th percentile) and the distribution across three value categories.
As with other graphs in speed monitoring, you can choose how to display the data here:
- 75th Percentile – Google uses this value for evaluating entire domains in the Core Web Vitals metrics. On the sides of the graph, you see whether the value is within recommended limits (green), requires improvement (orange), or is poor (red).
- Distribution – You'll see the percentage share of different metric values for all users. Again, you see what portion of users meet the metric (green), need improvement (orange), or record it as poor (red).
And note, in both cases, it's cumulative data for the last 28 days. What does “cumulative” mean? The value for the current day does not show today’s status but the 75th percentile of values collected over 28 days.
Larger metric changes therefore won't be visible immediately; they work their way into the graph gradually over almost a month. Conversely, even a small visible movement can indicate a more significant shift in the underlying metric value.
The 75th percentile value may appear stable, but if the distribution changes, it's likely that it will sooner or later affect the metric value.
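To make the “cumulative” behavior concrete, here is a small self-contained sketch. The daily samples and helper names are made up for the example; it simply shows why a regression deployed today only works its way into a 28-day 75th percentile gradually:

```ts
// Illustrative sketch of how a cumulative 28-day 75th percentile behaves.
// Each day contributes a spread of LCP samples (ms); the data is invented for the example.
function daySamples(center: number): number[] {
  // 50 samples evenly spread ±500 ms around the day's typical LCP
  return Array.from({ length: 50 }, (_, i) => center - 500 + i * (1000 / 49));
}

const days: number[][] = [
  ...Array.from({ length: 28 }, () => daySamples(2000)), // before a regression: ~2.0 s
  ...Array.from({ length: 28 }, () => daySamples(3000)), // after the regression: ~3.0 s
];

function percentile75(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
}

// The value plotted for a given day is the p75 of all samples from the last 28 days,
// so the 2.0 s -> 3.0 s regression only appears in the chart over roughly four weeks.
for (let day = 27; day < days.length; day++) {
  const pooled = days.slice(day - 27, day + 1).flat();
  console.log(`day ${day + 1}: cumulative p75 = ${Math.round(percentile75(pooled))} ms`);
}
```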
How to use this graph? In our PageSpeed.ONE consulting team, this is the graph we watch most closely in monitoring. When optimizations are deployed, it can show positive changes within a few days. We primarily watch the metric value (75th percentile), but if it's stable, we also look at the distribution trend.
Domain Speed by Months
Another set of graphs again shows metric values, but by months rather than by days:
The monthly domain speed graph is full of colors. The more green, the faster the site.
In these graphs, you can decide whether to see the metric value (75th percentile) or simply the distribution of green, orange, and red values.
You'll see the following:
- Trends of the Core Web Vitals (LCP, CLS, INP) and the auxiliary metrics (FCP, TTFB).
- Trends in navigation types (see also the graph below).
Data here provide a long-term view and come from the CrUX database in BigQuery, where the values are stored in cleaned form.
We regard this graph as a managerial view over a longer period, showing whether your speed optimizations are succeeding or not.
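If you want to pull a similar long-term series yourself without touching BigQuery, the public CrUX History API returns several months of weekly 75th percentile values per origin. A minimal sketch follows; the API key is a placeholder and the response field names are taken from the public API documentation as we recall them, so verify them before relying on this:

```ts
// Sketch: fetching a long-term weekly p75 series for an origin from the CrUX History API.
// CRUX_API_KEY is a placeholder; the metric choice is illustrative.
const CRUX_API_KEY = "YOUR_API_KEY";

async function fetchCruxHistory(origin: string) {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryHistoryRecord?key=${CRUX_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, formFactor: "PHONE", metrics: ["largest_contentful_paint"] }),
    }
  );
  if (!response.ok) throw new Error(`CrUX History API error: ${response.status}`);
  const data = await response.json();
  // One p75 value per collection period (roughly one per week).
  return data.record.metrics.largest_contentful_paint.percentilesTimeseries.p75s;
}

fetchCruxHistory("https://www.example.com").then((p75s) => console.log(p75s));
```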
Graphs Available in PLUS Tests
All data come from the Chrome UX Report (CrUX), so these are data collected by Google from real Chrome users.
🔐 These reports are available only for PLUS test users.
Device Types (Form Factor)
The “Device Types” graph shows from which devices your users are accessing the web. Data are divided by User-Agent into three categories: mobile, tablet, and desktop.
Is it necessary to optimize more for mobile or desktop? And what about tablets?
How to Read the Graph?
- The X-axis displays the timeline in individual days.
- The Y-axis shows the percentage representation of each device.
- Each color represents one device: mobile, tablet, or desktop.
- Hover over a specific date to see the exact share of visits from that device.
How Can the Device Distribution Graph Help You?
- Easily determine which type of device is most important to your users.
- If most visitors come from mobile, it makes sense to focus optimizations there.
- Small device shares (such as tablets) often don't need to be prioritized unless they have significant speed issues.
- Monitor whether the device ratio changes over time. For example, a growing share of mobile devices means higher demands on the speed and simplicity of the mobile version of the site.
Server-to-User Delay (Round Trip Time, RTT)
The Server-to-User Delay graph shows how “far” or “close” your users are from a network perspective. The Round Trip Time (RTT) metric expresses the time it takes for a request to travel back and forth between the user and the server.
How to Read the Graph
- The Y-axis shows the RTT value in milliseconds.
- The X-axis represents the timeline in individual days.
- Two display options are available:
- 75th Percentile – the value surpassed only by the worst quarter of connections.
- Distribution – percentage share of different RTT values among users.
RTT values within the distribution are divided into three categories:
| Network Latency | From | To |
|---|---|---|
| Low | 0 ms | < 75 ms |
| Medium | 75 ms | < 275 ms |
| High | ≥ 275 ms | ∞ |
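The thresholds in the table translate directly into code. Here is a tiny sketch for classifying your own RTT measurements; the function and type names are just for this example:

```ts
// Classify a p75 RTT value using the thresholds from the table above.
type RttCategory = "low" | "medium" | "high";

function classifyRtt(rttMs: number): RttCategory {
  if (rttMs < 75) return "low";     // 0 ms to < 75 ms
  if (rttMs < 275) return "medium"; // 75 ms to < 275 ms
  return "high";                    // >= 275 ms
}

console.log(classifyRtt(40));  // "low"
console.log(classifyRtt(180)); // "medium"
console.log(classifyRtt(320)); // "high"
```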
How Can the RTT Graph Help You?
- If RTT is consistently high (well over 150 ms), consider using a CDN like Cloudflare or bringing content closer to users.
- Stable low RTT values mean that speed is not limited by the network.
- A sudden RTT increase may indicate network or infrastructure issues.
Navigation Types Distribution
The Navigation Types Distribution graph shows how users reached the pages of your domain. The data distinguishes several navigation types, such as a standard load, loading from the cache, or an instant load from the back/forward cache.
Navigation Types show the technical methods by which people access specific pages of a particular domain.
Let's illustrate how to think about this graph with an example:
The trend in the share of different navigation types for pages on www.mall.cz.
How to Read the Graph?
- The X-axis displays the timeline in individual days.
- The Y-axis shows the percentage share of each loading type.
- Each color represents one loading type, such as:
- navigate – standard page load
- reload – page reload
- back_forward – back or forward in browser history
- back_forward_cache – instant load from back/forward cache
- prerender – preloaded using Speculation Rules API
- navigate_cache – loading from HTTP cache
How Can the Navigation Types Graph Help You?
- See how often pages load from the cache or the bfcache, providing an almost instant experience for users.
- If you have a high share of standard loads (navigate), look for opportunities to make better use of the cache or of prerendering (see the sketch after this list).
- A rise in reloads might indicate users are dissatisfied and reloading the page.
- The ideal state is when most navigations occur instantly (bfcache, navigate_cache, prerender).
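If you want to experiment with shifting more navigations into the instant categories, a document-level Speculation Rules script is a common starting point. A minimal sketch with illustrative URLs that registers the rules from JavaScript and checks browser support first:

```ts
// Minimal sketch: registering Speculation Rules from script so the browser
// may prerender likely next pages. The URLs are illustrative placeholders.
if (HTMLScriptElement.supports?.("speculationrules")) {
  const rules = {
    prerender: [{ source: "list", urls: ["/next-page", "/category/bestsellers"] }],
  };
  const script = document.createElement("script");
  script.type = "speculationrules";
  script.textContent = JSON.stringify(rules);
  document.head.append(script);
}
```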
LCP Resource Type
The Largest Contentful Paint (LCP) Resource Type graph shows whether the element that determined the metric value (the LCP element) is an image or a text element (such as a heading or a block of text).
How to Read the Graph?
- The X-axis displays the timeline in individual days.
- The Y-axis shows the percentage share of each source.
- Each color represents a different element type – image or text.
- Hover over a specific date to see the exact ratio of the two sources.
How Can LCP Resources Help in Tuning This Metric?
- If most LCP elements come from images, focus on their optimization – formats (WebP, AVIF), compression, and loading.
- If text elements prevail, it makes sense to address webfonts and text rendering.
- This graph quickly helps you understand what to focus on to improve LCP.
We also have tips on LCP optimization.
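You can check the same thing in your own lab or RUM data with a PerformanceObserver: for LCP entries, a non-empty url generally means the LCP element was an image, while an empty url points to a text element. A minimal sketch, with an illustrative inline type and logging:

```ts
// Sketch: observing LCP entries and noting whether the LCP element is an image or text.
type LcpEntry = PerformanceEntry & { url: string; element: Element | null };

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LcpEntry[]) {
    const isImage = entry.url !== ""; // image-based LCP entries carry a resource URL
    console.log(
      `LCP candidate at ${Math.round(entry.startTime)} ms:`,
      isImage ? `image (${entry.url})` : "text element",
      entry.element
    );
  }
});
observer.observe({ type: "largest-contentful-paint", buffered: true });
```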
Reasons for LCP Image Delays (LCP Image Subparts)
The Reasons for LCP Image Delays graph dissects the image loading time into several parts. This way, you see which phase of image loading most affects the final Largest Contentful Paint (LCP) metric.
What's spoiling LCP image loading here?
How to Read the Graph
- The graph consists of multiple colored parts corresponding to individual phases:
- Server Response – time before the server starts sending the image.
- Download Delay – waiting before the image download actually starts.
- Download Duration – how long data transfer takes.
- Rendering Delay – time between downloading and displaying the image in the browser.
- The height of each part shows how much time it takes.
- Hover over a specific date to see exact values in milliseconds.
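The same four phases can be approximated in your own RUM data by combining the LCP entry with navigation and resource timing. The sketch below follows the commonly used breakdown (TTFB, load delay, load duration, render delay); mapping those onto this report's phase names is our assumption, not this tool's exact formula:

```ts
// Sketch: approximating the four LCP image phases from browser timing APIs.
type LcpEntry = PerformanceEntry & { url: string };

new PerformanceObserver((list) => {
  const entries = list.getEntries() as LcpEntry[];
  const lcp = entries[entries.length - 1];
  if (!lcp || !lcp.url) return; // text-based LCP: no image phases to break down

  const nav = performance.getEntriesByType("navigation")[0] as PerformanceNavigationTiming;
  const resource = performance
    .getEntriesByName(lcp.url)
    .find((e) => e.entryType === "resource") as PerformanceResourceTiming | undefined;
  if (!nav || !resource) return;

  // Note: requestStart can be 0 for cross-origin images without Timing-Allow-Origin.
  const requestStart = resource.requestStart || resource.startTime;
  const serverResponse = nav.responseStart;                    // "Server Response"
  const downloadDelay = requestStart - nav.responseStart;      // "Download Delay"
  const downloadDuration = resource.responseEnd - requestStart; // "Download Duration"
  const renderingDelay = lcp.startTime - resource.responseEnd; // "Rendering Delay"

  console.table({ serverResponse, downloadDelay, downloadDuration, renderingDelay });
}).observe({ type: "largest-contentful-paint", buffered: true });
```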
How Can This Help You?
- If download time dominates, optimize images (compression, modern formats, CDN).
- If server response is high, the solution is accelerating the backend or deploying a CDN.
- If rendering delay is significant, it might be an issue with JavaScript or how images are embedded into the page.
- This graph allows you to focus precisely on the loading part that most hinders LCP.
Summary
Let’s summarize the information about the “Domains” report:
- This report is one of the most important in our monitoring. It allows you to track speed metric values from the Chrome UX Report for all measured domains.
- Besides the immediate status, you see daily trends, which are excellent for tracking the impact of specific changes on the site, and monthly trends, which provide feedback on whether your speed optimizations are succeeding.
- The Navigation Types report shows the potential for utilizing instant navigation types.
Detailed information on the performance of specific URLs is provided by the “Pages” report, while the “Watchdog” report gives feedback on the daily development of metrics for your URLs.
