Best Practices - Performance and Lighthouse Score with Arc XP
Search engines (Google in particular) rank each indexed article with performance scores, and the better the score, the more prominent the article appears in search results. Therefore, the better the score, the more users find their way to your site.
Everyone strives to have the best performance score possible to increase traffic to their site. To achieve a good score, you must adjust your code to minimize the impact of the performance factors that Google emphasizes. Google adjusts its algorithm at least once a year, and you may have to respond by adjusting your code to focus on what has the most impact according to Google.
Google provides a Lighthouse Score Calculator that helps you understand the weight of each of the different scores and lets you decide which score to focus on improving. Some scores cannot be worked on individually, as one change often impacts multiple items, and some scores are easier to improve than others.
You can find the releases with a recap of the changes in Google Chrome’s GitHub.
To test your individual page performance, see Measuring and monitoring your page performance.
Recommendation Summary
This section provides best practices and recommendations to help you navigate the possible performance improvements on Arc XP within your code. If you are interested in How to win at SEO with Arc XP, that is another great article to get you set up for success.
These recommendations do not apply to Themes Blocks and are targeted toward custom code only.
Everyone has different requirements and not all of the following recommendations are applicable to all clients. Some points might be only partially possible, but it is still important to keep them all in mind when developing on Arc XP.
Each recommendation is accompanied by relevant Lighthouse metrics. These metrics include:
First Contentful Paint (FCP)
Speed Index (SI)
Largest Contentful Paint (LCP)
Total Blocking Time (TBT)
Cumulative Layout Shift (CLS)
Interaction to Next Paint (INP)
Mobile First
Related: CLS
Google’s scoring is always executed with the mobile setting, so you can largely set aside the desktop values and focus on mobile. One mindset and design approach that specifically helps with content jumping (CLS) is Mobile First: when content should display differently on mobile and desktop, the default should be the mobile version.
Therefore, you should always configure the server-side render (Rehydration versus Server-Side versus Client-Side (SPA) Rendering), which is done independently of client devices, to expect a Mobile output in order to optimize the performance score.
A content jump on desktop has no impact on scoring from an SEO point of view, but you should still minimize it so it does not hurt the user experience. CSS media queries are the best tool for smoothing breakpoint changes, because they react to the device size immediately and minimize the visual impact for users.
Ads
Related: CLS TBT INP
For most clients, ads are among the most important items on a page because they generate revenue, which makes them one of the first elements loaded on the client side. Because they load so early, they often appear before the page load has finished, and this impacts performance scoring in multiple ways.
Because advertising is such an integral part of most clients’ revenue, it is usually a non-negotiable item. To minimize the impact on performance scoring, we recommend two important setups (a hedged sketch follows the notes below):
Provide the ad container with your ad sizes and a minimum height. These values might differ per breakpoint and per ad slot, but preventing ads from pushing the content down during page load has a positive impact. With Lighthouse v10 giving more weight to Cumulative Layout Shift, minimizing your layout shift should be a priority.
Note
Large banner ads that push down the entire DOM during performance scoring have a heavy impact on performance. They are not recommended, or they should load after scoring, once the page load has finished.
Add async or defer to your ad script to prevent the browser from waiting for the script to fully load before continuing. This allows other critical resources, including scripts that are necessary for user interaction, to load without waiting for the deferred scripts. See async and defer scripts to know which flag to apply for your use case.
Note
As clients in the EU must follow GDPR and those that qualify must follow CCPA requirements, the ad script loads conditionally based on user consent. You should still apply async or defer, provided it does not conflict with your Consent Management Platform.
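A minimal, hedged sketch of both recommendations. The AdSlot feature, its reserved sizes, and the ad script URL are placeholders for your ad provider's setup, not Arc XP's implementation:

```jsx
import React from 'react'

// Hypothetical ad slot feature: reserving the ad's mobile size up front keeps
// the ad from pushing content down when it loads late (helps CLS). Override
// the reserved size per breakpoint in CSS.
const AdSlot = ({ customFields = {} }) => {
  const { slotId = 'ad-slot-1' } = customFields
  return <div id={slotId} className="ad-slot" style={{ minHeight: 250, minWidth: 300 }} />
}

export default AdSlot
```

The ad loader itself would then be flagged in the outputType, for example (placeholder URL):

```jsx
<script async src="https://ads.example.com/ad-loader.js" />
```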
Largest Content as text
Related: LCP
The largest content item above the fold carries a high weight in performance scoring. Usually, the largest content item above the fold is the Lead Art image (or video/gallery/HTML), which is also a DOM element that browsers load with the lowest priority.
A good way of improving the largest content score (specifically for mobile, because desktop can be disregarded; see Mobile First) is to make the largest content a text element rather than a media element. Because browsers deliver text with the DOM, the text is present at the first client-side render, which should improve the timing compared to a media element. To do this, ensure that the DOM elements above the image are large enough to push the image far enough down that it is not registered as the largest content, and ensure that there is enough text to be selected as the largest content.
Preload Lead Art Image
Related: LCP SI
Staying with mobile-first, and assuming the largest content cannot be text, you still want to improve the load time of the Lead Art image as much as possible. Preloading the image speeds it up: images are normally triggered as some of the last elements to load, but a preloaded resource is available quickly.
You should only preload the lead art at its mobile image size, because the preload has to be added to the head, and the outputType has no client-side render, so the same preload is served to all devices. Preloading the mobile size could technically have a minor impact on desktop performance, but it should not be noticeable to end users or even in performance tests.
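A hedged sketch of a small component rendered inside the outputType's head; leadArtMobileUrl is a placeholder value you would derive from globalContent (for example, via your resizer):

```jsx
import React from 'react'

// Preload only the mobile-sized lead art so the browser fetches it early.
const LeadArtPreload = ({ leadArtMobileUrl }) =>
  leadArtMobileUrl ? <link rel="preload" as="image" href={leadArtMobileUrl} /> : null

export default LeadArtPreload
```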
Preconnect
Related: LCP SI INP
You must be careful not to overuse preconnect. A browser can keep only a limited number of connections open at the same time. By preconnecting to the crucial domains, you pay a one-time cost that benefits every request to those domains made within the next 10 seconds. If, however, you open connections that you don’t use, you hoard connection slots the browser needs elsewhere and delay the load instead.
Good example
To load a critical resource from another domain in the head, one that cannot be deferred or async, use preconnect to shorten the blocking time of that resource by pre-establishing the connection. This can positively impact the Interaction to Next Paint (INP) metric by improving the webpage's responsiveness to user interactions.
Bad example
Preconnecting to a video player script that is lazy loaded on the page and does not start downloading within 10 seconds of the initial page load is a poor use of preconnect. This scenario ties up a connection that other resources need.
The PageSpeed Insights tool identifies bad preconnects, which you can then fix. You can have different preconnects per page: if the globalContent inside the outputType can determine the preconnect, you should be able to distinguish between Article pages, Section pages, and so on.
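A hedged example of a sparing preconnect, rendered inside the outputType's head; the domain is a placeholder for a host that serves a critical, non-deferrable resource:

```jsx
import React from 'react'

// Pre-establish the connection only for a critical cross-origin resource.
const CriticalPreconnect = () => (
  <link rel="preconnect" href="https://static.example-cdn.com" crossOrigin="anonymous" />
)

export default CriticalPreconnect
```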
async and defer scripts
Related: TBT SI INP
While async and defer are two flags with different implications for when the script runs, they solve the same issue: preventing the resource from becoming a render blocker.
Flag any script that is not crucial to the initial render, but that should still run as soon as possible and may affect the render, as async. Scripts with this flag can delay the page load event and increase Cumulative Layout Shift if they change the DOM: the download is asynchronous, but the execution still delays the render. Example: the script for a Consent Management Platform.
Flag any script that is not crucial to the initial render and can be executed after page load as defer. For all performance scores, defer is the best choice, but it isn’t always possible. Example: the script for a video player.
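Hedged examples of both flags as they could be rendered in the outputType; both URLs are placeholders:

```jsx
import React from 'react'

const ThirdPartyScripts = () => (
  <>
    {/* async: download in parallel, execute as soon as it arrives
        (for example, a Consent Management Platform script). */}
    <script async src="https://cmp.example.com/consent.js" />

    {/* defer: download in parallel, execute only after the document is parsed
        (for example, a video player script). */}
    <script defer src="https://player.example.com/player.js" />
  </>
)

export default ThirdPartyScripts
```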
Script dependencies
Related: FCP SI CLS INP
When including third-party libraries, you should always use defer and async as mentioned previously, but another way to improve the performance around third-party libraries is to switch to their minified scripts. Minified scripts are significantly smaller with the same functionality.
Alternatively, lighter versions of common libraries created by other third parties often exist and make the integration less heavy. For example, for embedded YouTube videos, you can use the YouTube Lite JS package instead. JavaScript-heavy components or scripts can introduce delays in responsiveness and interactivity.
Style dependencies
Related: TBT FCP SI
Stylesheets are often overlooked when considering performance, but most browsers treat them as a render-blocking resource, which gives them significant weight when you are trying to improve performance.
Most clients in Arc XP are multi-site clients, and you should try to segregate the CSS to include only what is relevant for each site. You can easily do that with the PageBuilder Engine Multi-site styling support, which allows you to build separate style sheets for each site while also having common resources they all use.
Robust SSR
Related: FCP SI INP
With Arc XP’s engine rendering isomorphically (Rehydration versus Server-Side versus Client-Side (SPA) Rendering) it is crucial that the render on the server side (How does rendering work in PageBuilder Engine) is successful. While critical errors during Global Content fetch or in the OutputType can completely break your page and not render at all, that is not what we are concerned about for the Lighthouse score performance.
Instead, what is crucial for performance is that, at the end of the second server-side render cycle, as much of the rendered DOM as possible is complete, because that is what is delivered for the first client-side render. The more complete it is, the less work the client-side render and re-render have to do. Reducing the client-side processing required to display the initial content can lead to faster perceived load times and potentially better INP scores.
If the server-side render is broken, the first client-side render is empty (a white page), and then all rendering occurs on the client side, which includes many client-side content fetches followed by renders and potential content jumping, resulting in larger scoring for most metrics.
Move code into useEffect
Related: SI TBT
The useEffect hook is the earliest hook called after the initial render of a Component. If you have code that needs to run client side but is not required for the render, you should consider moving it into useEffect.
Additionally, useEffect fires at a stage where all scoring has potentially already finished, and because it is triggered asynchronously, it does not delay the render cycle or count toward render blocking.
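A minimal sketch; the analytics call is a hypothetical example of non-render-critical client-side work:

```jsx
import React, { useEffect } from 'react'

const Headline = ({ customFields = {} }) => {
  // Runs client side only, after the initial render, without blocking it.
  useEffect(() => {
    window.exampleAnalytics?.track('headline-viewed')
  }, [])

  return <h2>{customFields.text}</h2>
}

export default Headline
```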
Bundle Size
Related: SI FCP INP
One of the largest resources the user downloads is the bundle that Engine creates. The bundle contains all the custom code created in the Feature Pack's /components folder, its imported resources, and installed dependencies. See How to optimize your PageBuilder Engine bundle size for better page load and render performance. The following sections provide additional notes for each step to help you understand when to use it and which pitfalls to look out for.
Using .static as much as possible
A great way of decreasing the bundle size and also reducing the client-side render time is to mark custom Components as .static = true. By doing so, you remove the code for the Component from the bundle. The only transferred data for that Component is the rendered HTML in the DOM.
Components marked as static can only render server side, so they cannot be lazy loaded or otherwise be loaded asynchronously. You can use the static flag on Features, Chains and Layouts, not on individual resources. Marking a Chain or Layout as static also renders all Components inside static without excluding their code from the bundle. Converting non-interactive elements to static can improve your INP score by minimizing JavaScript execution and reducing client-side re-renders. See Optimizing Interaction to Next Paint (INP) metric on your pages for more details.
Components marked as static cannot include any dynamic elements unless provided as inline JS. The Component does not re-render on the client side, and code that usually fires client side, like useEffect, is not executed.
If rendered code exists in your dynamic Component that does not require a client-side re-render and could be treated as static, use the Static Component from Engine for those parts of the Component. Rendered code wrapped in <Static /> is treated as static, but the code is not excluded from the bundle. While this isn't an improvement for the bundle, optimizing the client-side render still impacts performance positively.
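Two hedged sketches: a fully static feature, and a dynamic feature that wraps only part of its markup in the Static Component (assuming the Engine import path fusion:static; verify against your Engine version):

```jsx
import React from 'react'

// Fully static feature: its code is excluded from the bundle and it never
// re-renders on the client.
const Copyright = ({ customFields = {} }) => (
  <footer>{customFields.copyrightText}</footer>
)
Copyright.static = true

export default Copyright
```

```jsx
import React from 'react'
import Static from 'fusion:static'

// Only the wrapped markup is treated as static; the rest of the feature still
// hydrates and can re-render client side. The code stays in the bundle.
const Card = ({ customFields = {}, children }) => (
  <div className="card">
    <Static id={`card-headline-${customFields.id}`}>
      <h3>{customFields.headline}</h3>
    </Static>
    {children}
  </div>
)

export default Card
```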
Code split for niche Blocks/Components
Usually, clients use most blocks repeatedly across a site. For example, Homepage uses the same blocks as Sections, which are also used on Articles. Exceptions exist, which are blocks that are used only on one specific Page or Template, or are only time-based, like for elections. As this code is irrelevant for most of the site-traffic, excluding it from the bundle is beneficial to performance.
Engine provides a way to code split features, chains, and even layouts. See How to do code splitting. While the flag is called .lazy, it does not automatically lazy load the content. Instead, it splits the related code into a separate “chunk” (a mini bundle that Webpack creates to load as needed). If you place the block on a Page and it is part of the server-side render, the chunk is loaded along with the bundle on page load. If you wrap the block in a lazy-loading component, the chunk is loaded only when that block loads.
Chunks are not grouped; each .lazy component becomes its own chunk. If applied to too many blocks in a single render, the number of chunk files could turn into render-blocking resources. Chunks also include all the imports required to complete the render, which can result in code being loaded multiple times if the same dependency is used in multiple chunks and/or the bundle.
Note
If not done correctly, using Code Splitting can decrease performance and therefore should be tested thoroughly.
If you are trying to specifically increase performance on one page type, you could focus on code splitting larger components from the main bundle. So long as you don’t run into any render-blocking on the other pages, this could be beneficial.
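A hedged sketch of flagging a niche feature for code splitting; the elections widget is a hypothetical example:

```jsx
import React from 'react'

// Rarely used feature: flagging it with .lazy moves its code into a separate
// chunk that loads only on pages (or lazy-loading wrappers) that render it.
const ElectionResults = ({ customFields = {} }) => (
  <section className="election-results">
    {/* ...heavy, page-specific markup and logic... */}
  </section>
)
ElectionResults.lazy = true

export default ElectionResults
```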
Custom Code
Be aware that any code imported in the Webpack-targeted files is included in the bundle unless it can be excluded. Keep your code as concise as possible by following common best coding practices.
Code split NPM Packages and Dynamic import
NPM packages can be a source of much hidden code. Because they are installed and stored in a separate folder, larger packages can go unnoticed. A good way to find out whether the Feature Pack contains large packages is to look at the locally generated Webpack stats file, which shows all code that affects your Feature Pack. Not all of it is necessarily included in the bundle the user downloads, but it all could be, so each client must analyze this file for their own setup.
One way of downsizing those dependencies is to look for alternative packages that are smaller. If that is not an option or not desired, you can exclude the packages from the bundle with Code Splitting and Dynamic import, so long as they are not essential to the site. The packages are then loaded only on the client side and only when needed. This can make the integration slightly slower to appear for the user, but it should be an overall improvement to performance.
While originally designed for NPM Packages, you can also use this strategy to dynamically load icon-files or other static content within the Feature Pack, so long as they are not imported anywhere else.
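A hedged sketch of a dynamic import; example-chart-library is a placeholder for a heavy NPM package that is not essential to the initial render:

```jsx
import React, { useEffect, useState } from 'react'

const Chart = ({ data }) => {
  const [ChartLib, setChartLib] = useState(null)

  useEffect(() => {
    // Webpack splits this import into its own chunk, fetched on the client
    // only when this component actually mounts.
    import('example-chart-library').then((mod) => setChartLib(() => mod.default))
  }, [])

  if (!ChartLib) return <div className="chart-placeholder" />
  return <ChartLib data={data} />
}

export default Chart
```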
NPM Package imports
NPM packages are often large libraries that are not used completely in your Feature Pack. If only a subset of an NPM package's functionality is used, investigate whether you can import just the parts you are using.
Some packages can be installed as smaller standalone subsets; lodash, for example, publishes individual packages such as lodash.get. Other libraries offer path imports instead; Material UI, for example, allows you to import individual modules:
import Button from '@mui/material/Button';
Webpack then does not import the whole library, but only the path-imported modules. This strategy must be consistent in the whole Feature Pack because one wrong import statement results in the whole NPM Package being included.
DOM size matters
Related: SI FCP LCP INP
When a page is requested, what is returned to the browser is the rendered HTML as the payload of the response. This means that all HTML from the server-side render is downloaded directly by the user, so the size (content_length) matters in terms of how much data has to be downloaded.
The DOM size is largely in your control: it tends to be larger for clients with a lot of static content and smaller for clients that lazy load content instead.
Be aware of the DOM size while writing code; when the code is split out into many small Components, it is easy to lose sight of how large the end result is.
This directly relates to Downsizing response data, which you should implement, as this data is included as globalContent or contentCache (for non-static Components) in the DOM.
The DOM size influences the efficiency of the browser in updating a page. When interactions modify the DOM, it triggers resource-intensive layout processes that can impede the page's ability to respond promptly. Consequently, updating a large DOM may lead to decreased responsiveness and adversely affect the Interaction to Next Paint (INP) metric.
Lazyload large client-side implementations
Related: SI FCP LCP TBT INP
When a page includes an inline third-party integration, which is very popular during larger events such as elections or global sporting events, the resources are commonly added as a raw HTML block. You must always render HTML blocks server side if they include any JavaScript or script tags; raw HTML blocks do not work if they are included only in the client-side render or outside a Static Component context.
The downside is that the external scripts and resources are usually very large and require a lot of JavaScript runtime to complete. These resources delay the page load event and can block other resources. Most of the time, these resources are below the fold, do not need to be available immediately, and are not relevant to crawlers.
Instead, you should wrap these resources in a lazy-loading component and adjust the embed code for a delayed client-side render based on scroll proximity (a hedged sketch follows). This requires either a custom component in the Feature Pack or a specific block of code in the raw HTML block. Either way, the additional component or inline JavaScript should be worth the performance boost.
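A hedged sketch of such a wrapper using IntersectionObserver; the scriptSrc prop and rootMargin value are placeholders you would tune for your embed:

```jsx
import React, { useEffect, useRef, useState } from 'react'

// Defer a heavy third-party embed until the user scrolls close to it, so its
// script never delays the initial page load.
const LazyEmbed = ({ scriptSrc, children }) => {
  const containerRef = useRef(null)
  const [nearViewport, setNearViewport] = useState(false)

  useEffect(() => {
    const observer = new IntersectionObserver(
      ([entry]) => entry.isIntersecting && setNearViewport(true),
      { rootMargin: '600px' } // start loading ~600px before it becomes visible
    )
    if (containerRef.current) observer.observe(containerRef.current)
    return () => observer.disconnect()
  }, [])

  useEffect(() => {
    if (!nearViewport) return
    // Scripts inside dangerouslySetInnerHTML do not execute, so append the
    // embed script through the DOM instead.
    const script = document.createElement('script')
    script.src = scriptSrc
    script.async = true
    containerRef.current.appendChild(script)
  }, [nearViewport, scriptSrc])

  return <div ref={containerRef}>{children}</div>
}

export default LazyEmbed
```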
This could be extended to inline videos. The OOTB Video Player from Arc XP (PoWa) takes up a resource on page load. You can mitigate this by bringing the player behind a Facade and only loading the whole Player, including the script, upon user interaction.
Note
While Arc XP provides the PoWa Player, adding a Facade is a custom implementation. You must write custom code for handling multiple instances of videos on a single page so the page does not load the same resources multiple times.
Images
Related: SI LCP FCP
Images are crucial to telling compelling stories and creating reader interest. Images are often collectively one of the largest resources on a page and are usually prioritized as one of the last sets of resources to load. See Preload Lead Art Image.
Be aware of the following items so that a page can be performant and contain as many images as a story requires.
Loading flag
In recent years, most browsers have added support for the loading attribute on <img /> tags. When used incorrectly, loading has a negative impact on performance, and the PageSpeed test flags it as an issue.
When to use:
lazy: Any image below the fold should receive this and only load if necessary. The browser manages the individual loading of images for you based on user interaction.
eager: Any image above the fold should receive this (especially the LCP item, if it is an image). You want these images to load immediately without delay. A hedged example of both values follows the note below.
Note
Determining what is below or above the fold can be difficult and usually requires either client-side JavaScript code, which has a negative impact on performance, or a setup in your custom layouts or components.
For example: create a designated section in your layout in which you place only above-the-fold items. Then, in your code, create a React Context around those items that all components can check against, or add a customField to your blocks to designate each instance as above or below the fold.
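A minimal sketch with placeholder props; decide eager versus lazy based on the image's position:

```jsx
import React from 'react'

const StoryImages = ({ leadArt, inlineImage }) => (
  <>
    {/* Above the fold (for example, the Lead Art image): fetch immediately. */}
    <img src={leadArt.url} alt={leadArt.alt} loading="eager" />

    {/* Below the fold: the browser fetches it only when it is about to be needed. */}
    <img src={inlineImage.url} alt={inlineImage.alt} loading="lazy" />
  </>
)

export default StoryImages
```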
Responsive images
Arc XP offers a Resizer to everyone that uses PageBuilder Engine as their rendering engine (and to OBF clients) to generate a uniquely sized image for each use case and breakpoint. Because we are mainly focusing on performance with Mobile First, pay special attention to the mobile image sizes and provide alternative sizes in the source for larger breakpoints.
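A hedged sketch of a mobile-first responsive image; resize() is a placeholder for your own helper that builds Resizer URLs at a given width:

```jsx
import React from 'react'

// The mobile (420px) candidate is the default src; wider candidates are only
// downloaded by larger viewports, based on the sizes attribute.
const ResponsiveImage = ({ imageUrl, altText, resize }) => (
  <img
    src={resize(imageUrl, 420)}
    srcSet={[
      `${resize(imageUrl, 420)} 420w`,
      `${resize(imageUrl, 800)} 800w`,
      `${resize(imageUrl, 1200)} 1200w`,
    ].join(', ')}
    sizes="(max-width: 767px) 100vw, 800px"
    alt={altText}
  />
)

export default ResponsiveImage
```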
Content Sources, rendering, and caching
Related: SI LCP FCP
Making your Content Sources and rendering more efficient is relevant for performance in only two cases:
When no page cache exists on the CDN/origin and a request has to go through Engine to render
When client-side renders/fetches are made. This can happen due to an invalid contentCache, a component whose render breaks during server-side render and therefore only renders client side, or content that is fetched client side only because of a delayed render (lazy load or other conditional use)
Specifically for high-traffic pages, this should not be an issue, as CDN is pre-fetching the rendered pages from Engine before the cache invalidates. But the performance tests also run on older articles, which is where you have the most opportunity for impact.
Both the CDN and Engine have caches that try to streamline the Content Sources as much as possible to improve performance. Content Source calls during the server-side render have access only to the Engine cache.
The following resources explain caching within Arc XP:
Content Source TTLs
As a general recommendation, you should give your Content Sources the largest TTL that you are comfortable with to improve performance and reduce the instances of invalid caches. Large TTLs cache content longer both within Engine and on CDN, which results in faster page responses and renders.
Endpoints also benefit from larger TTLs, as they reduce the number of calls made by the Content Source. Within Arc XP, that directly translates to the API rate limits, where it is one of the steps recommended in Content API Best Practices: Avoid Rate Limiting and Improve Performance.
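A hedged sketch of a content source with a generous TTL; the endpoint shape, parameters, and TTL unit (seconds) reflect common PageBuilder Engine usage but should be verified against your own setup:

```js
export default {
  // Resolve to a Content API endpoint; the URL shape is a placeholder.
  resolve: ({ _id, website }) => `/content/v4/?_id=${_id}&website=${website}`,
  params: { _id: 'text', website: 'text' },
  // Cache this content source for an hour instead of a short default TTL.
  ttl: 3600,
}
```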
Partial Caching
It is not uncommon for a Content Source to make more than one call within its fetch function, or for multiple Content Sources to make the same call. A very common example is Resizer calling the Signing Service Content Source (Resizer V2).
The same image is often returned through various Content Sources, yet each still has to run an identical call to receive the unique token that lets the image display securely. That is where Partial Caching in PageBuilder Engine can improve performance.
Long-living content especially can benefit from Partial Caching, as it allows individual parts of a Content Source to be cached separately from the Content Source while the result remains part of the Content Source's returned data. You can also cache a whole Content Source individually so it can be used inside other Content Sources. This allows more granular caching of your data and has a particularly positive impact on un-cached page renders.
Transform into fetch
The Content Source cache within Engine is not applied to the final result of a Content Source; it is applied after resolve or fetch. This allows Engine to return the same cached data to different transforms and filters. But if a Content Source runs a large amount of code within its transform function, that code executes on every call of the Content Source, regardless of the caching status within Engine.
Note
Client-side executed calls through CDN are not affected by this.
Instead, if large amounts of code have to be executed for a Content Source, the transform code should be moved into the fetch so that its result is cached.
This is also specifically relevant to the 1MB limit for the Content Source cache: if manual filtering occurs in the transform, it is not taken into account for the data that is cached. See more in Content Sources, rendering, and caching.
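A hedged sketch of the "after" state; the endpoint and trimming logic are placeholders, and axios is assumed to be available in the Feature Pack:

```js
import axios from 'axios'

// The trimming happens inside fetch, so the already-small result is what
// Engine caches; a transform doing the same work would run on every call.
export default {
  fetch: async ({ keywords }) => {
    const { data } = await axios.get(
      `https://api.example.com/search?q=${encodeURIComponent(keywords)}`
    )
    return {
      ...data,
      content_elements: (data.content_elements || []).slice(0, 10),
    }
  },
  params: { keywords: 'text' },
  ttl: 300,
}
```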
Downsizing response data
As a general rule, the less data transferred during a page's render, the better the page performance. Any response data from Content Sources fetched during the server-side render for non-static Components is included in contentCache, and the global Content Sources are added as globalContent to the DOM. As outlined in DOM size matters, you should keep the DOM as small as possible. To do that, you have the option of Content Filtering in PageBuilder Engine.
While filtering solves the DOM aspect, cutting down the data fetched from the APIs makes Content Sources even more performant. Most Content API endpoints allow filtering in the request by adding included_fields, _sourceInclude, or _sourceExclude with a list of ANS fields. Downsizing the data coming from Content API is not only beneficial for performance; it also lowers the risk of hitting the 1MB cache size limit in Engine listed in PageBuilder Engine Limits and Requirements. If that limit is exceeded, the data is not cached in Engine, and performance decreases significantly because responses of up to 6MB still pass through.
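A hedged sketch of trimming the response at the API level; the endpoint shape and field list are placeholders for what your feature actually renders:

```js
export default {
  resolve: ({ _id, website }) =>
    `/content/v4/?_id=${_id}&website=${website}` +
    '&included_fields=headlines,description,promo_items,display_date,canonical_url',
  params: { _id: 'text', website: 'text' },
  ttl: 300,
}
```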
Extended TTL for page cache
Related: SI
In addition to the Content Source TTLs and the caching TTLs listed in Caching at Arc XP, an additional TTL exists that you can apply to page cache in Arc XP Web Delivery.
Adding the Extended TTL extends the page cache from two minutes to a maximum of one week. Pages with this functionality return cached content immediately on any request within the set TTL timeframe, and fetch fresh content in the background if the regular two-minute page cache has turned stale. As a result, low-traffic pages serve stale content for longer, but high-traffic pages are not impacted.
The Extended TTL for page cache can be an individual TTL for any group of pages targetable by regex in Web Delivery.
Note
For non-US clients, this can be particularly beneficial, as all Lighthouse tests are executed from California, US. Extended TTLs result in faster response times and significantly lower TTFB.
Resolvers
Related: SI LCP FCP
It is uncommon to find performance issues in the resolvers, but they can be a hidden source of delays on uncached pages. When looking at resolvers in PageBuilder Editor, it is important to understand that all resolver regexes are run against an incoming URL until one of them matches; if there are no matches, the URL results in a 404 error. If multiple regexes could match, the first one triggered is the one that parses that URL.
Regex can range from very simple to very complex and costly at runtime. The timeout for page requests is 10 seconds, and that includes the Resolver Lambda. Bad regex can cause multiple seconds of delay, timeouts, and other server errors.
You can test an individual regex at https://regex101.com/ using an example URL and the regex formula. When the regex is run against the URL in this tool, the complexity result in the top right corner shows how many steps and milliseconds it took to complete. The milliseconds of all regexes that run before the matching one (those resolvers with a higher priority) add up to the total time the Resolver Lambda needs to find the correct match. Examples of bad regex: catastrophic backtracking, lookaheads/lookbehinds, nested capture groups, etc.
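A hedged illustration of the difference; the URL pattern is made up, but the nested-quantifier shape is the classic catastrophic-backtracking case:

```js
// Nested quantifiers: on a non-matching URL with many path segments, the
// engine tries exponentially many ways to split the segments before failing.
const risky = /^\/(([a-z-]+\/)+)+article\/.*$/

// Same intent with a single, linear repetition and no nesting.
const safer = /^\/([a-z-]+\/)+article\//

// Check the step count at https://regex101.com/ before adding it to a resolver.
console.log(safer.test('/sports/local/article/some-story-slug/')) // true
```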
Supporting Tools
To help you reach your performance goals, Arc XP provides tools for monitoring performance. It is important to pay constant attention to performance, especially after deployments. Arc XP currently has one performance dashboard you can use to monitor your performance.
Note
We encourage you to have continuous performance monitoring outside of Arc XP.
PageBuilder Performance Dashboard
The PageBuilder Performance Dashboard examines the results of the Site Performance Dashboard. The metrics in this dashboard are internal metrics you otherwise would not have access to. This dashboard gives specific insight into PageBuilder performance, which is directly related to the Lighthouse/CWV scores.
One very useful feature is that deployments are highlighted in all metrics and you can immediately connect any impact from code changes to certain deployments, even weeks later. For more information, see Reviewing PageBuilder performance.
Conclusion
Improving and maintaining performance scores requires constant mindfulness of both new and old code with regard to Google's algorithm and overall SEO performance. It is not a one-time investment, and the tools that Arc XP provides can greatly help you achieve your performance goals.