[Request for feedback] Adding shared performance insights to Lighthouse #16462
-
Well... first feedback, I guess :D My site fully supports HTTP/2 and HTTP/3 via Cloudflare, but I get the modern HTTP error and it seems all page elements are reported as loaded over HTTP/1.1. So it's either a Lighthouse bug with the new metrics or some weird Cloudflare issue. But testing the webpage gives it a green on HTTP/2 :/
-
Thanks for the heads-up. This issue is limited to PageSpeed Insights - we actually have some infrastructure limitations regarding accurate reporting of h1/h2/h3 for PSI. We had worked around that by disabling the original HTTP/2 audit, but didn't remember to do that for the new one. So: you can ignore this insight when coming from PSI. I'll try to push a change soon to disable the insight audit in PSI until we can resolve the underlying issue. EDIT: this audit should now be hidden in PSI.
-
Note: the workaround to disable the modern HTTP test in PageSpeed Insights rolled out a while ago, so you shouldn't see any false positives for that now.
-
So I'm kind of a noob developing on my own. I got Lighthouse running on a cloud phone, but I'm not sure what it has accomplished.
-
Wow! This is a really great update. The only issue is that I'm seeing the HTTP error even though my site fully supports HTTP/2 and HTTP/3.
-
Same here, served through Cloudflare.
-
Thanks! FYI I responded to the HTTP thing here.
-
Note: the workaround to disable this particular test in PageSpeed Insights rolled out a while ago, so you shouldn't see any false positives for that now.
-
Some of the language used in the new audits is much more developer-centric than before. Your tool is used by normal website owners, not just developers, so please don't alienate the average website owner even more.
PageSpeed Insights is already hard for normal people to understand - please don't make it worse.
-
Thanks for the feedback! Hopefully the documentation on these insights that we're working on will also help address this. To specifically address the two examples you gave:
We will have a think about whether the audit name can be improved. However, a key point is that "document request latency" is about more than just the "server response time". It also includes whether redirects were used and whether compression was missing. Both of these can also result in a slower download of that crucial initial HTML, in a very similar manner to the server being slow. Many people consider the "server response time" but don't think about those other parts, and they can often be the cause of the real issue, even if the server response time itself is reasonably fast. Unlike individual audits, the insights attempt to group multiple similar pieces together, and this is a perfect example of that!
The "Layout shift culprits" audit still exists and is a separate thing. But it seems we could do a better job of explaining this one to the more general audience who may not be familiar with that term.
Do let us know if you have other examples like those two where you feel this is making it harder. We are aiming to make it easier by grouping things together, rather than having a huge, long list of audits even when some are very related. I think those are good examples where we can make it clearer still, but that's the aim.
-
I think the new audit is unfair. I have a homepage that currently passes with flying colours (100% across the board). With the new insights, issues arise. One issue it reports is that I'm serving an oversized image in a small slot. What I did was combine a couple of pictures together as one picture to minimize requests and used it as a CSS sprite. And despite it measuring my sprite with the correct dimensions, it's claiming it can be optimized due to its "large size". I did this deliberately because I'm using an HTTP/1.1-only server and I'm trying to minimize connection requests for pictures. For reference, here's the sprite: https://buy.ontariospeeddating.ca/r0/1/allvenues.jpg and here's the website the sprite is loaded into: https://buy.ontariospeeddating.ca/ If PageSpeed continues this way, then the people (think baby boomers and gen X) who don't have new computers and want websites that serve their needs won't be able to see them, because Google would hide those sites over these silly new "optimizations".
-
I'm not seeing an issue with images when testing this site with PageSpeed Insights. Do you still see the issue? Did you change something? Or perhaps we fixed this false alert on the Lighthouse side.
-
Dumbing the UI down is never good; my initial server response time went from 1.2s to just "server responded slowly"...? I like numbers.
-
I can't tell if it's Lighthouse or if it's the servers, because if a server is doing poorly then the numbers would naturally be unfavorable regardless of whether you use Lighthouse or webpagetest.org.
-
@dymoo I like numbers too. This info will certainly be added back here, thanks for bringing it up.
-
I literally logged in just to speak to this! I don't just like numbers, I need numbers. Whether it passes or fails, I really need the exact response time measured, like the audits currently provide. This is a HUGE deal for sites with this issue. In order to tell whether one is making headway on the issue, or to test whether a new server performs the same as the old ones, details are a must. Otherwise, I wouldn't know if I made any headway on the issue. Getting 3.2s versus later getting 1.8s is a big deal, but I can't see that in this new Insights mode. No dumbing the UI down, please!
-
Lighthouse 12.6.1 fixes this. It will roll out to PSI soon.
-
I think the HTTP/2 suggestion can be improved. Looking through the insights code, it appears a page is flagged if it has 6 or more static resources served via HTTP/1.1, but it does not factor in how they are loaded. Picture this: a page with several images where all but the first are lazy-loaded below the fold. That shouldn't be flagged, because only 2 resources are immediately needed: the HTML page itself and the 1 picture above the fold. It would take the user scrolling to load the remaining pictures.
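As a rough sketch of the scenario being described (image file names are hypothetical), a page like this references the HTML document plus six images, but only the document and the first image are needed for the initial render; the rest are marked `loading="lazy"`:

```html
<!-- Above the fold: fetched immediately. -->
<img src="/hero.jpg" width="800" height="400" alt="Hero image">

<!-- Below the fold: only requested once the user scrolls near them. -->
<img src="/gallery-1.jpg" loading="lazy" width="800" height="400" alt="Gallery photo 1">
<img src="/gallery-2.jpg" loading="lazy" width="800" height="400" alt="Gallery photo 2">
<img src="/gallery-3.jpg" loading="lazy" width="800" height="400" alt="Gallery photo 3">
<img src="/gallery-4.jpg" loading="lazy" width="800" height="400" alt="Gallery photo 4">
<img src="/gallery-5.jpg" loading="lazy" width="800" height="400" alt="Gallery photo 5">
```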
-
I think his point is that we have a nuance where we don't flag pages with fewer than 6 resources (since HTTP/2 wouldn't really benefit the site in that case), and maybe that nuance should be expanded to also include pages that don't fetch more than 6 resources at once. I don't think a change is necessary here as, as far as I know, Lighthouse only looks at in-flight resources for page load, so it already excludes the lazy-loaded case. Plus, if you're dancing that close to the edge, then I'm happy for Lighthouse to be a bit more proactive in suggesting HTTP/2. Even in the lazy-loading example, scrolling down quickly (or deep-linking down the page) could still hit the user with the issue.
-
Tunetheweb understands my idea, but this: "Lighthouse only looks at in-flight resources for page load. So already excludes the lazy-loaded issue." isn't accurate. On one of my pages, I have 6 images loaded from the main HTML, of which at least 1 is marked as lazy-loaded (loading=lazy set in the IMG tag). So that means 5 should be factored in when determining the need for HTTP/2, because the 6th one wouldn't load until the user scrolls to it.
-
How far offscreen is the image? If it's within the threshold, then it may be downloaded anyway. Especially on throttled connections, where the thresholds are even higher.
-
Some images are at least 1500 CSS pixels (DPR 1) away from the top. Think regular desktop PCs.
-
That could easily be in scope for the mobile Lighthouse test, which uses Slow 4G emulation as per the thresholds I linked above. It may even be that this triggers the HTTP warning for the mobile test but not for desktop. Anyway, since this has not changed between the old audits and the new insights-based audits, I think this is getting a little off topic. I'd suggest you raise an issue if you want to discuss this further, but if it's OK with you we'd like to keep this discussion on the move between the old audits and the insights.
-
A bit more emphasis on, and perhaps more accurate tracking of, page size (network activity can sometimes be a lot more) would be great. We are currently seeing a month-on-month increase in page size despite a month-on-month improvement in page loading speed. An excessive focus on speed may cause people to be cavalier about something that can be a source of instability and poor user experience. Related: I'm starting to wonder if Lighthouse needs to adapt its metrics a little. The real-life user experience can vary enormously between a site that scores 93-100-100-100 and one that scores 100-100-100-100. Meanwhile, the difference between something in the 70s and the high 80s or even low 90s is often not so tangible. I love the general direction of these insights. It gives people things to improve even when they are getting top results. It's also good to focus more on standards and quality in light of the general internet bloat. I can see this applying good, subtle pressure on organisations to up their game a bit. Nice work.
-
If your page size is large, that's more data that needs to be downloaded. More data to download would naturally result in reduced page loading speed, provided that you're using the exact same 100% reliable (preferably wired) internet connection setup every time.
-
I'm using pagespeed.web.dev, so it's a controlled test environment using Google's servers, not based on my connection or device. What I'm saying is that page size can keep creeping up even while loading speeds improve. You can optimise for speed, sure, with lazy loading, better server response, compression, but still end up with a bloated site. Perhaps because sometimes things really are being optimised for the tests and not user experience. My own sites are tiny, so I rarely see much variation on PageSpeed/Lighthouse. It's always 100-100-100-100. The bigger concern is how this plays out at scale - larger, heavier pages can still impact user experience, especially on mobile, and contribute to the general problem of internet bloat and sustainability. The common solution of throwing more power at our connections, along every part of the chain, just isn't very efficient.
-
Once v13 (~October 2025) releases, will it show exactly what PageSpeed Insights shows today when toggled in Insight Mode, e.g. all the new insight audits? When reading the blog posts I thought only the new insights would remain, but after reading the code I understand that, for example, some existing audits will remain.
-
Hey Juan, yes, there are some audits that will remain even after the switch to Insights, as per the post:
In most cases these are audits that make less sense in the Performance panel, as they are already covered there in a different way. For the main part, however, we're aiming to keep these to a minimum and have as much parity as possible between Lighthouse and the Performance panel.
-
Hello Lighthouse, and thank you for this exchange. I have a recurring issue with image optimization. For example, I had an image that was originally 88 KB. PageSpeed suggested compressing it to 60 KB. Great, I did that. But after retesting, it suggested compressing it further, to around 35 KB. I compressed it again, but it didn't stop there; the next suggestion was 9 KB. Every time I optimize, it keeps flagging the image and asking for more compression, without any clear "endpoint" (the same problem for every image on the website). The problem is that each time I compress further, the image loses more quality (especially if it's the photo of the CEO), and I don't know when I've done enough to satisfy the tool. We can't just keep squeezing quality forever. Speed is important, but so is the look of the site, especially when the site is already fast. It would be great if there were clearer guidelines, or a point where PageSpeed acknowledges "good enough".

Second, I'm using a WordPress block theme, and I noticed PageSpeed always flags the navigation and cover blocks (which are default core blocks) under render-blocking requests and network dependency tree issues. To test, I completely removed those blocks and just used my own custom min.css, which is super lightweight - only 4.7 KB minified - and it's the only CSS file I'm using for the whole website now, to stay as far away from the "Network dependency tree" flag as I can. Even with that, it still flags it under both categories. The other day it even flagged a file of 0.9 KB! Honestly, I'm wondering: do you want us to stop using CSS completely? Because I don't see how I can possibly get lighter than that without turning the whole site into plain HTML.

Lastly, for the "LCP by phase" flag ("each phase has specific improvement strategies…"), it always flags the logo (an SVG in my case) as the LCP element. Maybe you could exclude typical logos from this check, since I think 99% of websites have their first image as a logo. If you fix the first two issues at least, I will thank you forever. Anyway, I'm thanking you in advance for reading this comment. Good luck, and have mercy on us developers; I spend more hours fixing Google PageSpeed Insights scores than building other websites.
-
Are you preloading your CSS? That stops it from being a render-blocking resource. <link rel="preload" as="style" href="path/to/css.css" media="screen">
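Worth noting: a bare rel="preload" only fetches the stylesheet, it doesn't apply it, so if the regular stylesheet link is removed the page renders unstyled (which may be related to the "it broke the website" report below). A commonly used sketch of the full async-CSS pattern, reusing the hypothetical path from the comment above:

```html
<!-- Fetch the CSS without blocking rendering, then apply it once it has loaded. -->
<link rel="preload" as="style" href="path/to/css.css"
      onload="this.onload=null; this.rel='stylesheet'">
<!-- Fallback so the styles still apply when JavaScript is disabled. -->
<noscript><link rel="stylesheet" href="path/to/css.css"></noscript>
```

CSS loaded this way can cause a flash of unstyled content, which is why critical styles are often inlined and only the rest deferred.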
-
@monkeysdocode Yes, I tried that already, but it broke the website unfortunately.
-
Can you provide the site so we can check this out? It shouldn't be doing that.
-
Hey @FahemCH! With all due respect, how does your issue relate to this topic (i.e., the Performance Insights beta)? I would create a separate topic to prevent further confusion. Note: to solve your issue, use FlyingPress for caching, Perfmatters for performance optimization, and the recently released 5-star plugin CompressX for image optimization and delivery. Our team uses all three. Our PSI and other performance tool scores are 98 (Mobile) and 100 (Desktop). You're welcome :)
-
@Generosus Hello, thank you for your reply. To be honest, I left everything as it is, because it consumed a lot of my time. Thanks again for the solutions you provided (I'll try them out in my future projects).
-
A great addition, but there may be a small issue with the tool recognising all types of responsive images. The corresponding HTML uses picture -> source elements with media queries. For a Moto G4 size screen, the browser should choose the appropriate image size based on the 'sizes' attribute and image widths listed in the 'srcset' attribute, whilst accounting for the display pixel density. It seems that the Insights tool is only accounting for the displayed dimensions (399x300) and not screen pixel density. The existing Lighthouse audit does seem to get this right - or at least it never throws a warning for mobile image sizes. Link to the site in question: https://creativetouchrotherham.co.uk/
-
Thanks - I filed an internal issue and will provide an update here when we have one.
-
I am having very inaccurate and inconsistent performance results. I am working on www.catalogoenlinea.shop and I have made several changes to the site, even removing images that were marked as the LCP (a small WebP image of 20 KB), and the metrics are always the same; they never change.
-
A lot of your JavaScript is third-party and blocking. If that is delivered slowly during a test, it's going to have a big impact on performance scores. So it's quite possible to see large variations in pagespeed.web.dev results.

For your real user experience: is your server located close to your visitors? What are the network connection speeds like for your visitors? Do your visitors tend to opt for budget phones or high-end phones? Are you on shared hosting with potentially quite variable server response times? These are the kinds of factors that can differ dramatically between sites, and it's not something that testing with Lighthouse is going to give you a clear picture of. For the UK-based site I linked to above, ~90% of UK visitors are using mobile phones. It has very, very few Chrome + Windows desktop visitors from the UK contributing to CrUX data. The majority of its desktop users are from the US, Australia, and New Zealand. This is making the desktop CrUX data look far worse than mobile, whereas you would expect it to be the other way around. That's not something that Lighthouse testing could pre-empt.

For your site, try to make the CSS non-blocking. Google Tag Manager's JavaScript is not doing the site's performance any favours either. Fix those two things, and you should see quite an improvement.
-
I've identified the issue, and it should be resolved in the next Lighthouse release (12.7). That should be in PSI by the end of the month.
-
Apologies for ignoring you. I agree the Insight seems to be overestimating the savings here (which is why it's failing this audit). We'll investigate why.
-
Thanks for the reply @tunetheweb
-
Hey Sean, sorry for not replying earlier. I think this may be resolved? We had some issues with network sizes being wrong (we accidentally were ignoring compression), which would cause an issue like what you saw. Do you still have the URL that showed this issue?
-
Not the same URL, but here is a new one using the same domain for the test: https://pagespeed.web.dev/analysis/https-fylgja-dev/8ei91stgqf?form_factor=mobile
-
I am still seeing an error, instead of a warning, for render-blocking CSS.
-
It complains about
-
The image is preloaded, but without a fetchpriority hint: <link rel="preload" as="image" href="https://d1csarkz8obe9u.cloudfront.net/posterpreviews/blank-wanted-poster-template-design-37a9a451cb794713d8bba77bd3c0ca6f_screen.jpg?ts=1698190633"> So it initially starts at low priority. The Insight could do a better job of making this clearer, so I raised https://issues.chromium.org/issues/419817756 to track this.
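For reference, a hedged sketch of what a high-priority preload for that image could look like; fetchpriority is the standard attribute for hinting request priority on a preload, though whether it is the right fix here depends on what the insight is actually flagging:

```html
<!-- Ask the browser to start this preloaded LCP candidate at high priority. -->
<link rel="preload" as="image" fetchpriority="high"
      href="https://d1csarkz8obe9u.cloudfront.net/posterpreviews/blank-wanted-poster-template-design-37a9a451cb794713d8bba77bd3c0ca6f_screen.jpg?ts=1698190633">
```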
-
I use the https://pagespeed.web.dev/ website to test websites and pages, and most of the time the numbers I can achieve are in the green. But I do have a life outside creating websites, and it would be good if there were more explanations and links. When there is a problem, I struggle to understand what I should do to improve the score. Please think of those who are interested in creating better websites but are not experts in every topic.
-
Most of the audits have links to more information, and we're also working on documenting the insights. However, if there is a particular section that you think could benefit from more links, then let us know. There is going to be some point where this does get technical, unfortunately, no matter how much we try to explain, and non-technical people may have to defer to their platform or other experts for a lot of this. However, the fact that you're able to tell at a glance if there is a problem is a great first step! Even if the next step of how to actually fix it is a little harder...
-
I got an LCP request discovery issue where I think I shouldn't. On my website, what generates the LCP is a background image generated inline from a data URI, and that generation is in a JavaScript tag right after the HTML body tag. Your PageSpeed Insights suggests I should add "fetchpriority=high", but how does that work with data URIs, and why would that be needed if the image can be pulled from the HTML itself? Also, the data URI is within the first 20 KB of code (uncompressed).
-
Are you able to share your URL?
-
I also have a Link preload in my HTTP header output. You'd think I'd get brownie points for putting the Link in the HTTP header instead of in the compressed HTML code, because no decompression is needed to determine which image needs to be urgently loaded.
-
I'm not seeing an LCP request discovery issue for that site on PageSpeed Insights?
-
Sometimes it shows an LCP issue but other times it doesn't. Your test shows it doesn't.
-
That's weird! We shouldn't do that. I created an issue here.
-
I ran my site through PageSpeed Insights again and the stats are awfully confusing. What I show here is the desktop results. It states the LCP element loaded in 0.3 seconds (the same time as FCP). It also states the Speed Index is 0.9 seconds. However, the new "Document request latency" stat goes red, claiming the response was served slowly. How is that claim true when pretty much everything loaded in 0.3 seconds and the rest of the indicators are green? I also managed to get the reported TTFB value through the PageSpeed API, which is roughly 160 ms (below the 800 ms threshold). So now I'm completely lost.
-
Yes, we've noticed false positives with this "Server responded slowly" in some cases. We're investigating why this happens...
-
uh oh
-
Re: the "Server responded slowly" issue, I shared this in another thread on this page, but FYI: we've since included the actual server response time here, and we fixed a bug that caused us to use the wrong time. EDIT: today (Oct 1) I found that there was still a bug with how the server response time was being reported for the new document latency insight audit. I've resolved it for Lighthouse v13.
-
Overall assessment: Insights, if implemented, is going to permanently disrupt the internet ecosystem, annoy many people, and steer the majority towards using website performance tools other than PageSpeed Insights. Performance Audits works well for everybody. It took countless years to learn and adapt to it. "If it ain't broke, don't fix it." To assist with your decision-making process, why not release a poll to gather the necessary go/no-go votes? Do it "Elon Musk" style. It works well. Thanks for reading!
-
Do you have any more specific feedback?
-
The name of the new version is more difficult for me to understand than the old version. At least I knew the cause and solution in the old version, but the new version leaves me at a loss.
-
The name of what?
-
I seem to be having issues with "Improve image delivery". In mobile mode, it's highlighting picture elements with sources and a base img tag that are deliberately double width / double height for 2x DPI screens, claiming the image is (exactly double) the dimensions. See https://share.muckypuddle.com/6qup655A and the corresponding element.
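As a rough, hypothetical sketch of the pattern being described (file names and breakpoints invented), the candidates are deliberately twice the displayed 399x300 CSS size so that 2x-DPI screens get a sharp image:

```html
<picture>
  <!-- The 798w candidate exists for DPR 2 screens, even though the slot is only 399 CSS pixels wide. -->
  <source media="(max-width: 480px)"
          srcset="photo-399.jpg 399w, photo-798.jpg 798w"
          sizes="399px">
  <img src="photo-798.jpg" width="399" height="300" alt="Example photo"
       srcset="photo-399.jpg 399w, photo-798.jpg 798w"
       sizes="399px">
</picture>
```

If the tool compares the image's intrinsic size against the 399x300 layout size without multiplying by the device pixel ratio, markup like this gets flagged even though it is behaving as intended.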
-
This is resolved for the next Lighthouse release. https://crbug.com/416580500
-
I would love to get more insight regarding what resources are causing the render delay phase of LCP. We regularly get very high LCP values that have 80-90% of their duration in the render delay phase. That number doesn't necessarily correspond in any meaningful way to the estimated time of the render-blocking requests. Sometimes it's significantly higher, sometimes significantly lower. I have tried using Chrome DevTools to give me a waterfall graph so I can diagnose the issue, but even when I set the CPU and network throttling to match Lighthouse, I can't ever seem to recreate the long LCP times. I regularly get about half of the LCP in Chrome. Getting either a waterfall graph or just a list of resources causing the render delay would be incredibly helpful!
-
Hey Eric, so yeah, there's nothing directly in the "LCP breakdown" insight that highlights what could be the cause of a high render delay. There are a few things it could be other than render-blocking requests; long tasks blocking the main thread is probably the most common. I created an issue for us to explore how this insight can better highlight where to look to further debug a high render delay (without resorting to docs). Would you happen to still have a URL that presents a high render delay that you had trouble determining the cause of?
-
Hey Connor, thanks for the reply! I've been doing a ton of testing over the last month and we're getting somewhat different results now, but I'm happy to provide some examples. The main change we've made on this site is to eliminate all render-blocking resources by inlining critical CSS and deferring everything else. We eliminated render-blocking JS a couple of years ago. Here are a couple of tests run in relatively close proximity:
Subsequent tests can vary anywhere from the low 60s to the upper 90s. This is the case for most of our sites, even when removing all render-blocking resources. A couple of noteworthy observations:
My best theory for why we might be getting a long load or render delay is that I'm using some kind of really expensive CSS property, or something that ends up causing long or repeated layout calculations, but I haven't really been able to find a way to suss that out.
-
Will this have an impact on how the numeric performance score is calculated, or just the suggestions and feedback provided?
-
No impact on the performance score. That's always determined from just the metric results.
-
Thanks for the clarification!
-
Audits is easy to understand, and analysing a page's Core Web Vitals using the metrics looks a bit simpler.
-
Hello. Please take a look at this: the network dependency tree insight flagged my fonts, so I preloaded all of those fonts. To me, this is misleading, because I did something to fix a "red" warning but I made my LCP worse. I work with a performance plugin, and we know customers will see that red warning and think they need to fix it. But in doing so, they will make other things worse and they won't understand why.
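For context, preloading fonts generally means adding hints like the following (font URLs are hypothetical). Each preload is fetched early and competes for bandwidth with other early resources such as the LCP image, which is one way this kind of "fix" can end up hurting LCP:

```html
<!-- Each preloaded font is requested up front; crossorigin is required for font preloads. -->
<link rel="preload" as="font" type="font/woff2" href="/fonts/heading.woff2" crossorigin>
<link rel="preload" as="font" type="font/woff2" href="/fonts/body.woff2" crossorigin>
<link rel="preload" as="font" type="font/woff2" href="/fonts/body-italic.woff2" crossorigin>
```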
-
@connorjclark or @tunetheweb, any chance of getting your feedback on this conflict between the network dependency warning and the LCP metric? Thanks!
-
I kinda agree. Ultimately we definitely don't want to encourage things that make the page load slower! WDYT @connorjclark?
-
I think, but don't remember the details, we might be able to detect how the font is being displayed (IIRC in other insights we do this). I'm not sure if it would tell us exactly if the font is display: swap, but we might be able to detect cases like this where the font is not critical and exclude them. But I will defer to @connorjclark for how feasible this is or not.
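For readers unfamiliar with the term, "display: swap" refers to the font-display descriptor in @font-face; a minimal sketch (font name and file path hypothetical):

```html
<style>
  /* With font-display: swap, text renders immediately in a fallback font and is swapped in
     once the web font arrives, so the font file is not critical for first render. */
  @font-face {
    font-family: "Body";
    src: url("/fonts/body.woff2") format("woff2");
    font-display: swap;
  }
</style>
```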
-
What is it you're needing?
β¦On Sat, Aug 23, 2025, 8:13 p.m. Lucy Beer ***@***.***> wrote:
@connorjclark <https://github.com/connorjclark> any chance of getting
your feedback on this conflict between network dependency warning vs LCP
metric? Thanks! βοΈ
β
Reply to this email directly, view it on GitHub
<#16462 (reply in thread)>,
or unsubscribe
<https://github.com/notifications/unsubscribe-auth/BUWCVJEVAU7BS6QX6Z5U5D33PENVTAVCNFSM6AAAAAB3ZRCFPKVHI2DSMVQWIX3LMV43URDJONRXK43TNFXW4Q3PNVWWK3TUHMYTIMJZHE3TKMA>
.
You are receiving this because you are subscribed to this thread.Message
ID: <GoogleChrome/lighthouse/repo-discussions/16462/comments/14199750@
github.com>
|
-
As this discussion is only about the Performance insights, could you raise a separate issue for that?
-
Done. Thank you.
-
Not sure if anyone posted this (I read through quite a bit here but maybe I still missed it): WHY not just LEAVE both??? I just don't understand the point. Everyone here complaining about not seeing numbers, and how the "new" insights are basically more complicated for the average user, I agree with 100%. What GOOD reason is there for removing something that works and is fine? Just leave it. Add your "new" hard-to-understand version if you want, but PLEASE KEEP THE OLD VERSION! I'll go to GTmetrix (and will pay them) if the numbers/good version disappears and only this new, more complicated, dumbed-down version is left. I hope everyone goes to GTmetrix in that case. It just makes no sense at all. Leave both. Make everyone happy. Simple.
-
LCP detection likely inaccurate on mobile PSI.
Site: https://www.schmidtkramer.com/
What PSI reports: the claimed LCP element is roughly 1600px from the top of the mobile viewport - far outside the visible area on initial load. It shouldn't qualify as LCP under a normal interpretation of "largest above-the-fold content".
What is likely the real LCP: the hero section's background image is the first large visual element loaded and rendered. I plan to apply resource hints to it.
This seems like a case of false-positive LCP detection. Please correct me if I'm wrong.
-
This does seem to be a false positive, and I can't repeat it when debugging locally (including in Lighthouse locally). However, it happens in both the old-style audit and the new-style Insights, so it is not related to the change being discussed here. Can you raise a separate issue for this at https://github.com/GoogleChrome/lighthouse/issues ?
-
PageSpeed reports that I should use WebP/AVIF images when the image is already in .avif format and compressed. How do you explain this? Thanks for your attention.
-
The issue is the images are not serving a Content-Type header. Best practice is to always serve a correct Content-Type (MIME type) for images. @connorjclark do you think we could make this clearer in the Insight? Or perhaps we should check the file extension when no Content-Type is given?
-
Thanks, I'll follow the discussion you mentioned.
-
Yes, of course, and I can access my htaccess file. What rule should I add?
-
I have this rule in my htaccess file, do I need to add AVIF? <IfModule mod_headers.c>
-
There should be a mime.conf or mime.types file where this should be set up. Alternatively, an equivalent rule might work in the .htaccess file.
-
I tried, but the results got worse. I'd better leave it alone, as I'm not an expert. I'm waiting for you to resolve the PageSpeed issue. Thanks for your attention.
-
Update: I tried this rule and it seems to work.
-
EDIT: Oct 10, 2025: https://developer.chrome.com/blog/lighthouse-13-0
A more detailed blog post about these upcoming changes: Lighthouse is moving to performance insight audits
The Performance panel in Chrome DevTools recently added insights to the trace view. These performance insights are powered by a trace analysis library that was designed to also run in Lighthouse. The performance insights are analogous to existing Lighthouse performance audits, but with some tweaks and consolidation. The goal is to offer the same performance advice across all our performance tools - Chrome DevTools, Lighthouse and PageSpeed Insights.
In Lighthouse, the set of performance audits that have been replaced by equivalent insight audits will be removed.
If you have any questions or feedback, please add a comment here.
Related issue: #16323