Talking Drupal #415 - Front End Performance

September 11, 2023
Today we are talking about Front End Performance, Common Front End Issues, and Ways to test and fix said issues with guest Andy Blum. We’ll also cover WebP Fallback Image as our module of the week.


  • How do we break down front end performance
  • How do we measure front end performance
  • What are web vitals
    • Standard, objective measurements
    • First/Largest contentful paint
    • Cumulative layout shift
    • Time to Interactive/First Input Delay/Time To Next Paint/Total Blocking Time
  • What are some common client side performance problems
    • “Flickering”
    • “Slow loading”
    • Image size/resolution issues
    • Render-blocking resources
    • Screen jitters
    • Memory leaks
    • Memory Bloat
  • How do tracking scripts affect performance
  • Tools to help identify and resolve
  • Drupal front end performance


  • Brief description:
    • Do you want your Drupal site to generate WebP images in the most optimal way? There are a number of modules for that, today we’re going to talk about…
  • Brief history
    • How old: created in June 2022 by pedrop
    • Versions available: 1.0.0 and 1.1.0 versions available, both of which support Drupal 8, 9, and 10
  • Maintainership
    • Actively maintained
  • Number of open issues
    • 3, 2 of which are bugs
  • Has test coverage
  • Usage stats:
    • Almost 252 sites
  • Maintainer(s):
    • Most recent release is by dj1999
  • Module features and usage
    • Anyone using testing tools like Lighthouse will have seen suggestions to use modern image formats like WebP, and with good reason. They allow for much smaller image files at the same quality, which means a better user experience and less bandwidth used by both the server and the visitor. WebP is a natural choice because it enjoys over 95% browser support, but many sites still care about that other 5%
    • Drupal core added its own support for WebP in 9.2, but without a fallback image, so browsers that don’t have WebP support have been out of luck
    • Contrib modules have allowed for generating a WebP image and a JPEG fallback, to allow for universal support. Typically they have worked by creating the WebP variant from the output of a core image style, so after an image has been saved as something like a JPEG. That means the resulting WebP can’t compress as well, and can show compression artifacts
    • WebP Fallback Image is different because it allows Drupal core to generate the WebP image from the source file, and then creates the JPEG fallback
    • Also worth noting that this module only creates the JPEG fallback when it’s requested, so it doesn’t add to the storage of your website unless it’s needed
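The "WebP plus JPEG fallback" pattern described above is usually expressed in markup with a `<picture>` element: the browser takes the first `<source>` it supports and otherwise falls back to the plain `<img>`. A minimal sketch of that markup as a template helper; the file paths here are hypothetical placeholders, not paths this module actually generates:

```javascript
// Build a <picture> element offering a WebP variant with a JPEG fallback.
// Browsers that understand image/webp use the <source>; others use <img>.
function pictureMarkup(webpUrl, jpegUrl, alt) {
  return [
    '<picture>',
    `  <source srcset="${webpUrl}" type="image/webp">`,
    `  <img src="${jpegUrl}" alt="${alt}">`,
    '</picture>',
  ].join('\n');
}

console.log(pictureMarkup('/files/styles/hero/cat.webp',
                          '/files/styles/hero/cat.jpg',
                          'A cat'));
```

Note that WebP Fallback Image itself works at the image-derivative level rather than requiring you to hand-write this markup; the snippet just illustrates the fallback idea.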




 This is Talking Drupal, where we nerd out about web design and development from a group of people with one thing in common-- we love Drupal.

 This is episode 415, Front End Performance.

 On today's show, we're talking about front end performance, common front end issues, and ways to test and fix said issues with guest Andy Blum.

 We'll also cover WebP fallback image as our module of the week.

 Welcome to Talking Drupal. Our guest today is Andy Blum. Andy is a front end developer and Acquia certified front end specialist. As a former science teacher, he loves finding simple and elegant solutions to complex problems. His passion for learning keeps him active in the Drupal issue queues and offering help in Drupal Slack channels, particularly the Twig channel.

 He is currently a subsystem maintainer of the Olivero theme and has been involved in Drupal since 2016.

 Andy, welcome to the show, and thanks for joining us for the last four weeks. It's been a lot of fun. Thanks for having me.

 I'm John Picozzi, solutions architect at EPAM, and today my co-host is Nic Laflin, founder of Enlightened Development.

 Good morning. Happy to be here.

 Or afternoon or evening.

 Whenever you're listening, good whenever you're listening.

 The New England Drupal community invites you to join us at NedCamp, the New England Drupal camp, celebrating its 10th year on November 17 and 18 in Providence, Rhode Island. Friday features trainings as well as an all day higher education summit, and Saturday is a day full of sessions.

 Speaking of sessions, NedCamp is accepting session and training submissions until September 25. Visit the NedCamp website to submit your session or training topic, purchase tickets, and learn more about the camp. The New England Drupal community and Talking Drupal team hope to see you in November at NedCamp. Thanks, Mike, for telling us all about NedCamp. Nick, Stephen, and myself will all be there at NedCamp. We look forward to talking to our Talking Drupal listeners in real life. And yeah, NedCamp is going to be a pretty awesome event. So hopefully, a lot of our listeners can attend.

 I would also like to note, for those looking at the video, you may notice that Andy and I are both wearing NedCamp t-shirts. So the NedCamp-- 0% planned. Yes, the Ned-- yeah, we did not coordinate our outfits. The NedCamp is strong with this episode. So awesome.

 All right, let's turn it over to Martin Anderson-Clutz, a senior solutions engineer at Acquia and a maintainer of a number of Drupal modules of his own, to tell us about this week's module of the week.

 Thanks, John. Do you want your Drupal site to generate WebP images in the most optimal way? There are a number of modules for that. But today, we're going to talk about WebP Fallback Image. It's a newer module that was created in June of 2022 by pedrop. It has 1.0.0 and 1.1.0 versions available, both of which support Drupal 8, 9, and 10. It is actively maintained and has three open issues, two of which are bugs. The module is currently in use by 252 sites. And the maintainer of the most recent release is dj1999.

 Now, anyone using testing tools like Lighthouse will have seen suggestions to use modern image formats like WebP and with good reason. They allow for much smaller image files at the same quality, which means a better user experience and less bandwidth used by both the server and the visitor. WebP is a natural choice because it enjoys over 95% browser support. But many sites still care about that other 5%.

 Drupal Core added its own support for WebP in version 9.2, but without a fallback image. So browsers that don't have WebP support have been out of luck. Now, contrib modules have allowed for generating a WebP image and a JPEG fallback to allow for universal support. Typically, they have worked by creating the WebP variant from the output of a core image style, so after an image has been saved as something like a JPEG. That means the resulting WebP can't compress as well and can show visual compression artifacts. WebP fallback image is different because it allows Drupal Core to generate the WebP image from the source file and then creates the JPEG fallback. It's also worth noting that this module only creates the JPEG fallback when it's requested, so it doesn't add to the storage of your website unless it's needed. So let's talk about WebP image fallback or fallback image.

 So this is the opposite of the WebP module, right? So this is generating--

 so is this when you're starting with a WebP image or something? Or I don't--

 Right. So I believe the way it works is in your image style, you basically set it to convert to WebP, and then this module will also generate the JPEG fallback image.

 From the original source JPEG? Or like that--

 That's a good question. I think the module's assuming you're uploading a WebP, right? So like you would have a WebP image, or is that not-- I think that's what this is doing. I think it's doing the inverse, because most of the time you're uploading a PNG or JPEG, right? And all the WebP suite of modules are set up to kind of generate the WebP from that. So then those modules, the fallback has already been uploaded.

 So my understanding is this-- so Drupal core, like even Drupal, since 9.2, can take a JPEG or a PNG file. And in your image style, you can basically say, convert that to a WebP.

 The downfall of that is that it won't have that fallback JPEG. And so all this is doing is saying, use the WebP conversion in Drupal core, and then this module will also add that fallback as like a JPEG.

 So it fixes the issue I see. So it fixes it so you can actually use the core WebP generator. Exactly. Like you're saying. So basically, it adds that extra layer, where like, hey, I uploaded a PNG, convert it to WebP, and then if WebP can't be used, take that source PNG and apply the same rules to it so that it fits into whatever WebP was supposed to be in. I gotcha. Yep. Again, I don't understand why they don't just use the source image itself unless it-- The source image might not have-- so it's a link between like, if you had the source image and you still needed to do some steps to resize that image, like you'd have to use another image style for that. This way, you can use the same image style and get the two derivatives, also linking them together in the instance where WebP isn't supported.


 Yeah, it feels like a kind of niche, but it's also going to allow better, very-- like, you can't really use this core WebP converter right now because it doesn't provide that. So it sounds like this kind of bridges that gap. So that's a useful utility module. I'll have to check it out.

 Interesting. WebP, that's a very interesting extension. I'm sure it means something, but like, I don't know. Every time I see it, I'm just kind of like, hee hee, little kid.


 You can't say that and not find out what it actually means and say it. I was looking for it. I think it's like web photo is what I would get. I was hoping one of you guys that was smarter than me knew what it meant and could chime in there.

 Yeah, I'm not familiar with the origin of the name, but I do know it was a format that was originally sort of put forward by Google and sort of popularized. It was originally a format that was supported in Chrome, but not as much in sort of Safari and iOS browsers. But in recent years, it's really gotten to be near universal support. I've done a Google search, and it stands for Web Picture Format. There you go. So close, Andy. All right. Martin, thank you again for a wonderful module of the week. And let's move on to our primary topic. See you next week.

 OK, so, Andy, we are talking about front end performance, which is like a pretty massive topic and can go down a couple of different paths. So I think to frame today's conversation,

 we're going to talk a little bit about how people kind of measure front end performance, the tools that they can use to do that and to help them achieve better performance, and then some common problems that we have with front end performance in general. But before we get started there, I was reading something that you used, I think, for training. And you brought up this really interesting point of breaking down performance into two thought areas. Can you talk a little bit about that?

 So when we talk about performance, there's kind of two ways to do it. When you're talking with developers, we'll frequently talk about specific metrics, and we'll talk about some of those later. But there's actual metric data objective.

 How long did this take? How big was this thing? What's this mathematical formulation for how this was different? And then there's the perceived performance, which is, in my experience, typically what you're going to hear from your stakeholders on sites is the site feels slow, or the site has this weird flicker issue. And so you'll get these--

 even if you don't actually change the numbers of how long a site takes to load, if you're able to make it feel faster or if you're able to eliminate that flicker, those performance issues can be resolved without actually changing any numbers just by changing how that loading process is perceived.

 It's interesting, because when I read that, I was like, in my head, I converted it to machine performance and human performance, where machine performance is objective measurements, where we're saying, hey, Google's looking at your page load speed, and it's above three seconds. And it needs to be lower than three seconds. And then the perceived aspect is like your human user, where your user's like, ah, why isn't this page loading? It's been taking forever, versus like, oh, this page loaded really fast, and I'm able to do what I need to do on it.

 Well, one of the things to point out, too, though, is some of the perceived stuff is subjective, but some of it can be measured, too. So a good example of the two-- when I think about this, it's kind of two big use cases.

 So the objective performance is, how long does it take from when it starts till when everything is finished? And you can imagine, you click on a web page, the page is white, and when it comes into focus, everything is there and loaded. So you're just kind of waiting for everything to load before you show anything to the user. And let's say that takes four seconds,

 and a perceived performance situation might be the situation where you click on it, something loads. You can see the title. You can see some boxes that maybe not filled in. Maybe you can't interact with it yet, but maybe that extra request actually makes it take longer. It takes five seconds. The user will say, hey, the first one where you saw stuff sooner, maybe it takes one second to show the title in the boxes, and it takes another four seconds to finish loading. Even though it took five seconds, the user is going to be like, oh, that was faster. Another common case is with Ajax. If you have a button where somebody clicked something, and then it takes two seconds for something to happen, people are going to think that's an eternity. If they have a button, they click it, and then they get a little spinner. Even if it takes a little bit longer, the fact that something happened, there's some feedback, gives them the perception of better performance.
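The spinner example above boils down to one rule: give feedback synchronously, before the slow work starts, so the user perceives a response even when total time is unchanged. A rough sketch of that pattern; in a browser the two callbacks would toggle a spinner element in the DOM, while here they just record ordering:

```javascript
// Run slow work while guaranteeing immediate feedback to the user.
// showSpinner() fires synchronously, before the first await, so the user
// sees a response instantly; hideSpinner() always runs when work ends.
async function withFeedback(slowWork, showSpinner, hideSpinner) {
  showSpinner();               // immediate feedback, no waiting
  try {
    return await slowWork();   // e.g. the "two second" Ajax request
  } finally {
    hideSpinner();             // clean up even if the request fails
  }
}

// Usage: a simulated 50ms Ajax call, with callbacks that log ordering.
const events = [];
withFeedback(
  () => new Promise((resolve) => setTimeout(() => resolve('data'), 50)),
  () => events.push('spinner shown'),
  () => events.push('spinner hidden'),
).then((result) => events.push(result));
```

The design point is that perceived performance improves even though `slowWork` takes exactly as long as before.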

 Now, you want to reduce both of those, right? You don't want a page that's just blank, but you also don't want tons of stuff that you can't interact with for a long time either.

 But that's kind of where the line lies. Like, when can you interact with it? When does something change? When can you see something?

 Yeah, and the other thing to keep in mind is there's a very clear, like, when do we start the clock? Right? When somebody clicks a link or navigates your page. And whenever you're starting that navigation process, there's a very definitive start. But depending on how your site is built, the technology you're using, you could have any number of, when do we stop the clock? Do we stop the clock when I get the main page content? Or if I'm on a Drupal site with BigPipe, do I want to wait till all of the less cacheable stuff also makes it to the page? Am I waiting for the DOM content loaded event to fire? Am I waiting for all of the JavaScript that's come down on the page to stop running? There's a number of different ways you could decide when to stop that clock.
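The "when do we stop the clock" choices Andy lists map onto fields of the browser's `PerformanceNavigationTiming` entry (available via `performance.getEntriesByType('navigation')[0]`). A sketch that derives the common candidates from such an entry; the sample values below are made-up data, not real measurements:

```javascript
// Derive common "stop the clock" candidates from a navigation timing entry.
// Field names match the real PerformanceNavigationTiming interface.
function stopTheClockCandidates(nav) {
  return {
    ttfb: nav.responseStart - nav.startTime,                       // first byte arrives
    domContentLoaded: nav.domContentLoadedEventEnd - nav.startTime, // DOM parsed, DOMContentLoaded fired
    fullLoad: nav.loadEventEnd - nav.startTime,                     // images, CSS, subresources done
  };
}

// Hypothetical sample entry (milliseconds since navigation start).
const sample = { startTime: 0, responseStart: 180,
                 domContentLoadedEventEnd: 900, loadEventEnd: 2400 };
console.log(stopTheClockCandidates(sample));
// → { ttfb: 180, domContentLoaded: 900, fullLoad: 2400 }
```

None of these captures BigPipe's late-arriving fragments or post-load JavaScript, which is exactly why there are several defensible end points.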

 And so that's-- when we get into the metrics, that's kind of where those are is standardize start to point x, point y, point z. Yeah. And you can do things to really kind of shift around these different perceptions too. Like if you have an infinite scroll situation, right?

 You might have a really poorly performing Ajax load that takes five seconds. But if you set up your code in a way where it loads that before a user ever gets that on screen, it doesn't matter that it's poor performance because it's going to seem performant because they're going to be able to just keep scrolling forever and never see a load. Right.

 So there's a lot of different things you can do to kind of move the actual performance kind of your site between the two categories. And the discussion, especially when you're talking about front end performance, the discussion becomes a lot of times, how do you--

 what bucket do you put your time into, right? Are you trying to load the page fast and then just make additional page requests to do Ajax?

 From a performance perspective, you also want to make sure you're not just loading hundreds of things you're never going to use just to make it seem a little bit more performant when somebody clicks on something, right? Because if you're hitting the server for that,

 you're going to have to have a much bigger server to handle that. So it's always a discussion with stakeholders and users to kind of find out where that line is. Where do you put time into the actual performance of the hardware versus when do you move stuff to the client side maybe and make it a little bit faster or at least seem faster?

 And some of those decisions will be based on the website side. If we're building something, what do we reasonably think is something that our server can handle to serve? But on the other side, like you said, do we want to download a bunch of extra stuff? What kind of site are we providing? One of the sites I like to go to for reading news is one that's just a text-only version. If I'm in an area with really low cell reception, it's just text. It doesn't bring along any analytics tracking. It doesn't bring along any images. It doesn't bring along any JavaScript or auto playing video or any of that. So depending on the site you're building, who your users are that you're trying to serve, that might also factor into what kinds of things do we need to do to make this more performant or as performant as possible for those users. We don't really want to be taking advantage of somebody who doesn't have a great mobile connection, because it's going to take forever to download

 and that's going to reduce their performance, even if it's something we added in to try to make it feel better.

 So you've mentioned that most performance discussions kind of have a definitive start point and kind of a nebulous end point. But what kinds of tools are out there to measure for net performance?

 So there's a handful-- I guess there's two different kind of buckets you can put measuring performance into. There's synthetic testing and there's real user monitoring testing. Synthetic testing is going to be anything you're doing in some kind of a lab environment. And I think that's what most people are probably the most familiar with is synthetic testing.

 With synthetic testing, you're going to a testing tool and you're going to say, emulate this device with this amount of CPU and this amount of memory on this browser with this screen size. And just run this page three or four or however many times and give me the average of how much time did it take to load?

 What kind of resources did it download? Give me the network request list and all of those timelines and everything.

 And so that's one way to do it is those kinds of tools where you're emulating a device in as standard an environment as you can. And then on the other hand, you have real user monitoring.

 And it's more difficult to do. And it may sound like you're bringing people in like you'd be doing focus group testing, but it's not that. What you're just doing is you're adding a little extra JavaScript on your site that is watching for specific metrics and then reporting that back in some way.

 So we'll talk about some of those metrics in a minute from the core web vitals, which is a Google initiative.

 But they have a web vitals JavaScript library that you can use. And it just says, hey, on metric A report back, on metric B report back. And then you just give it the JavaScript function of how do we report this back? Are we reporting back to Google Tag Manager, Google Analytics? Do you have your own custom endpoint you want to report this data to?
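The reporting callback Andy describes is the heart of the web-vitals library's API: it hands each metric object (`{name, value, id, ...}`) to a function you supply, and you choose the endpoint. A minimal sketch; the `/analytics` endpoint and the `page` field are hypothetical, and the browser wiring is shown only in comments since it can't run outside a page:

```javascript
// Format a web-vitals metric object into an analytics beacon payload.
function formatMetricPayload(metric) {
  return JSON.stringify({
    name: metric.name,        // e.g. 'LCP', 'CLS', 'INP'
    value: metric.value,      // the measured value for this metric
    id: metric.id,            // unique per page load, for deduplication
    page: metric.page || '/', // any extra context you want to attach
  });
}

// In a browser, the wiring would look roughly like (not runnable here):
//   import { onLCP, onCLS } from 'web-vitals';
//   const send = (m) => navigator.sendBeacon('/analytics', formatMetricPayload(m));
//   onLCP(send);
//   onCLS(send);

console.log(formatMetricPayload({ name: 'LCP', value: 1830, id: 'v3-123' }));
```

Because all measurement happens on the client, this is how you end up with real-user numbers rather than lab numbers.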

 And so in that sense, you get a more accurate picture of your user base: when my users visit my site, what do their performance metrics look like? What are their objective measurements?

 And so you can use synthetic testing and real user monitoring, and they'll just provide kind of different insights.

 - So it's kind of like, I imagine it's the Heisenberg's uncertainty principle type of situation.

 By adding that JavaScript, you are affecting performance because they're downloading something else and you're doing a checkpoint. I imagine it's fairly small, but it's not something that you want on 100% of the time.

 They are measuring different things though. So for example, we don't want to get, I don't think we want to give the impression that synthetic testing is not useful because synthetic testing will give you a lot of the stuff that if you have a large image on a page or something, it can catch that. Or if you have some sort of blocking script that you added inadvertently or something, it will generally catch that kind of thing. But what the real world testing will do is, there's a few things that are just hard to test in a lab, like logged in users.

 Usually the lab testing will be an anonymous user. If you have a block that when you're logged in takes eight seconds to load, but it's not there for an anonymous user, well, you'll get great results in synthetic, which users will be like, hey, this page takes forever. And the real world reports will kind of show you that kind of thing.

 Also, you get more realistic network environments with real user monitoring than with simulated ones, because people are on their cell phones or on their wifi. So it's good to do both. - Just out of curiosity there, Andy, have you ever used a tool for specifically the user testing aspect of it? So obviously there are plenty of tools out there for synthetic user testing where you're sending 50 users through this thing to test kind of like your front end performance there. But like, have you ever used a tool

 where you're getting real people and you're saying, hey, we need 50 people in this demographic to test these front end, these types of front end things?

 - So the real user testing, it's less about looking for, at least from my experience. No, I have not used real user monitoring. It's not something I have a personal experience with. But I would say you're not trying to find a group that fits a demographic and then have them sit down and you're watching them. You're literally, it's like A-B testing, or you could just put it out to everybody. And that is, when this page loads and you hit these specific, how are they defining those functions to report those metrics?

 Then you're just defining a custom callback to report those metrics.

 And so then all of the measuring and all of the watching and all of the observation happens on the user client.

 And then it reports it back in some way. - I see. So it's less about defining the group of people and more about just getting those metrics from real users as opposed to synthetic computerized users. - Yeah, it's less like political polling and more like an actual election. - There you go. That's a great analogy. All right, so we kind of skirted around web vitals and let's dive in. What are they and how can they help us define better front end performance?

 - Yeah, so web vitals are an initiative put forward by Google. And so they're about as close to industry standard, I think, as you can really get without being an industry standard from the W3C. But they are standard, objective, quantitative measurements, defined in some predictable pattern so that you can compare: how's my site now? How is it after this PR? This is my site on this device. How is it on this device? This is site A versus site B, making it a true apples to apples comparison as much as possible between two things. And so there's a handful of them in there. Do you wanna run through them? - Yeah, I do. But I wanna ask a question first because you said Google is involved, right?

 I'm wondering, do you have any knowledge or insight as to how these may be used in analytics and those sorts of things? Or has it been noted that they're completely separate from those sorts of tracking and analytical sorts of things? - So I think they do play into Google page rankings. I don't know how, I don't know how much they're weighted.

 Google's rankings are always a black box and a moving target. - The algorithm, we all know about that. - Sure, sure, sure. Yeah, all hail the algorithm. But the idea is, the theory behind it is if you're making these metrics defined in a way that are user centric and that are measurable on the client device and we factor those into the page rankings, then SEO page rankings should be providing the best experience for a user. And so then you're also, not only are you boosting good sites, but you're punishing sites that are gonna use dark patterns, scammy tactics that are doing things that you don't wanna do or that are just really, truly awful experiences.

 The one that always stands out to me, and I don't know if it's this way anymore because I haven't visited the website in a long time, but like the CNN website for a while, if you open an article, it would load in and then you would get ads loading in and then you would get an auto-playing video that juts out and if you close that, it jumps in and then the text drops down. And so you just have a bad user experience on that page.

 And Google has decided that that bad experience should weigh into how pages are ranked. It's not just about what content is on the page or what keywords are on the page anymore. The ability of your page to display well on mobile,

 is your page served over HTTPS?

 Does your page load quickly and does it not jump the content around a whole bunch? So those kinds of things do factor into rankings.

 - Yeah, and I wanna point out something too that may be obvious to our listeners, but it's probably worth pointing out. Like Google's a business, this probably isn't altruistic, but it doesn't mean it's not helpful to the web as a whole. I think Google benefits directly from having people with good core web vitals for two reasons. I think one of the big ones is, if the user has a good experience on the site, they're likely gonna be more appreciative of Google giving them that site. So that's one. The second thing is, a lot of the core web vitals are related to performance and standardization. The more standard and performant the site is, the less time Google has to spend actually indexing individual sites, right? So CNN, for example, if the Google bot has to wait 45 seconds to index a particular page on CNN, that's 45 seconds of CPU time versus another site that has similar content that takes two seconds, right? So they benefit on that side, but--

 - I would say they also benefit as a hardware manufacturer now with Google Chrome, or not Chrome, with Android. Well, Chromebooks and Android phones. Android devices tend to have a longer tail out in society than iPhones do, so if you have an older, lower powered device, you wanna make sure that the web is an experience that is tailored well to your devices that are out there.

 - Absolutely.

 - It's funny, the other aspect here, and I think you guys are all kind of hitting on this too, is user adoption, user growth, right? Like if they can get into markets where people need pages to load faster or be more performant, right? That increases their user pool as well. And a lot of those people may be using some of those older devices or devices that require more performant websites. Anyway, Nick, finish your thought. 
- Yeah, and there's a counter example, I think, of something similar that Google was doing, I think, in the performance sphere, but was much more directly beneficial to Google than the users and the websites, and that was Google AMP. I mean, Google AMP still exists, but AMP keeps people off of the actual website, so those websites don't even see the engagement. It's really good for Google because it keeps everybody on Google

 and they never leave and it loads quickly. But thankfully, over the last couple of years, Google AMP really has dropped off because even as a user, like if I see an AMP link, I never go to it because it kind of just hijacks the environment and you don't get to the site, you can't navigate around and do other things. So Google has been kind of pushing things in this space around SEO and performance for a while. I think Web Vitals is a much better initiative. I think they really hit the nail on the head here. And it's something that you're seeing even outside of Google, people are starting to use it as a way to kind of measure their sites.

 So it's a lot more collaborative, I think. - So crawling back out of the rabbit hole we just went down, Andy, what are some of the vital Web Vitals that we should be looking at?

 - So there are currently four,

 yeah, so there are currently four core Web Vitals that Google has. One of them is currently in an experimental mode, but what the Web Vitals, the core Web Vitals and other Web Vitals and other metrics all tend to fall into is how quickly does the page look ready and how quickly does the page feel ready? So what I mean by that is how quickly does stuff get painted on the screen? How quickly does it look good? Does it look like it's ready to go? If you didn't have a keyboard or mouse, how quickly would a user think the page is ready for them? And then the second bucket is how quickly can a web application feel ready? As a user wants to click buttons, open accordions, scroll down the page.

 So the metrics for how do they look ready include the largest contentful paint, LCP you might see.

 The largest contentful paint is the biggest piece of actual content being painted on the screen. That'll be text, that'll be images, and that'll be elements that have a background image that's not a gradient.

 So all three of those could be considered contentful paints. And the largest one, whichever one shows up on the screen and takes up the biggest chunk of the screen, if it loads in in different bits, that's what's considered the largest contentful paint.

 And so what you're measuring there is from the start of navigation to when you get the biggest chunk of content on the screen. That is the largest contentful paint. Previously there was a first contentful paint.

 That is still a useful metric, but it's over there with like time to first byte, which is how quickly do we get anything on the screen? And especially when you start looking at maybe not Drupal sites, but things that are like React apps or Angular apps or Vue or whatever JavaScript flavor of the week it is.

 If you put up a splash screen or a loading icon, or even just say loading, immediately you get a very quick first contentful paint, but it's not actually ready yet. And so you wind up cheating the metric by saying we have a very quick first contentful paint, but then you have a very long largest contentful paint.
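The candidate-selection idea behind LCP can be sketched in a few lines. Browsers surface LCP candidates as a stream of entries (via a `PerformanceObserver` watching `'largest-contentful-paint'` entry types), each with a size and a time; the biggest one wins. The entries below are made-up sample data, not real browser output:

```javascript
// Pick the LCP from a list of contentful-paint candidates: the entry
// occupying the largest area of the viewport.
function largestContentfulPaint(entries) {
  return entries.reduce((largest, e) => (e.size > largest.size ? e : largest));
}

// Hypothetical candidates: the headline paints early, the hero image later.
// LCP is reported at the time the *largest* candidate painted, so a quick
// splash screen or "loading" text cannot cheat this metric the way it can
// cheat first contentful paint.
const candidates = [
  { element: 'h1',  size: 12000, startTime: 320 },
  { element: 'img', size: 98000, startTime: 1450 },
];
console.log(largestContentfulPaint(candidates).startTime); // → 1450
```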

 So that's the first one. - And that one has more to do with like, when is the page ready for you to like view it and read it, right? As opposed to that second part, which I'm sure you're gonna talk about next, which is like the interactive, when can I interact with the page, move, scroll, click buttons. - Yeah, and then still in the first bucket, the other one that we have is cumulative layout shift, CLS. This one is difficult to understand intuitively. There's a lot of math involved in how this happens, but when you load stuff on a page, you might see text come in, and then an image loads and pushes the text down, and then you have something else pop in and load stuff over top. And so cumulative layout shift is a number, some kind of quantitative data that says, how much does the layout of the page jump around after the initial page load?
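The math alluded to here is documented on web.dev: each individual shift scores `impactFraction * distanceFraction`, shifts within 500ms of user input are excluded (the browser sets a `hadRecentInput` flag on the entry), and the final CLS is the worst "session window" of shifts (gaps under 1 second, window capped at 5 seconds). A rough sketch of that scoring, not the browser's exact algorithm, with made-up shift data:

```javascript
// Approximate CLS: sum shift scores within session windows and keep the worst.
function cumulativeLayoutShift(shifts) {
  let worst = 0, windowSum = 0, windowStart = 0, lastTime = -Infinity;
  for (const s of shifts) {
    if (s.hadRecentInput) continue;          // expected, user-driven shift: ignored
    const score = s.impactFraction * s.distanceFraction;
    if (s.time - lastTime > 1000 || s.time - windowStart > 5000) {
      windowSum = 0;                         // gap too long or window too old:
      windowStart = s.time;                  // start a new session window
    }
    windowSum += score;
    lastTime = s.time;
    worst = Math.max(worst, windowSum);
  }
  return worst;
}

// Hypothetical shifts (times in ms): two early unexpected shifts count;
// the later one happened right after a click, so it is excluded.
const shifts = [
  { time: 500,  impactFraction: 0.5, distanceFraction: 0.25, hadRecentInput: false },
  { time: 900,  impactFraction: 0.5, distanceFraction: 0.5,  hadRecentInput: false },
  { time: 4000, impactFraction: 0.9, distanceFraction: 0.5,  hadRecentInput: true },
];
console.log(cumulativeLayoutShift(shifts)); // → 0.375
```

The `hadRecentInput` exclusion is exactly the accordion case discussed next: a shift the user caused on purpose shouldn't count against you.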

 - Well, I was just saying, it's important because if you are trying to interact with something and then something shifts the layout, you can end up clicking on the wrong thing. We talked about this a little bit on episode 373 that's in the show notes, but yeah, that's what that is. That's like, will the user be able to click on what they're trying to click on or is something gonna make it jump out of the way?

 - Yeah, and cumulative layout shift has an interesting kind of wrinkle into it, which is not all layout shifts are bad, right? Because if you think about an accordion on a page, you click the accordion and you would expect the layout to shift because you're expecting to find more content coming in and pushing stuff down the page. And so the core web vital measurement is,

 does the, and I don't remember the exact number, but basically like within 50 milliseconds of some user interaction, if a layout shift occurs, we don't care, we're not looking for those. If we see a user click and then we get a long wait time and then it happens, if there's some perceivable decoupling of user interaction and layout shifting, then you get knocked for it. And that's useful, especially in some of those more JavaScripty app type things, where you're not really navigating a single page application where you click on something and you get a new page, big air quotes for the podcast listeners there. It's not really a new page. You're still on the same document. You're just changing what's in the DOM. And so you're changing the layout of the whole page.

 So as long as you are able to click and within 50 milliseconds respond with that layout shift, then you're good.

 - Ooh, that's, yeah, that's interesting. - 50 milliseconds is not a lot of time. - It is not a lot of time.

 And I could be, let me see if I can, expected versus unexpected.

 It's actually 500 milliseconds. It's half a second. - Okay, that's a lot more. - It's a little bit more time. - It's only about 10 times more. So yeah, 500 milliseconds, excuse me, half a second. If you can respond within half a second of user interaction, your cumulative layout shift score is not impacted. - But keep in mind too, that this,

 I imagine that that number, the 500,

 has a lot of user research behind it. So it's not like, hey, let's find a time that's reasonable that people can react within. It's more like, hey, if it's longer than this, then people are surprised by the move. If it's shorter than this, then people are not surprised by it. So you really should be within that. It's not like, hey, if I'm at 600, I'm okay. It's like, no, if you're at 600, you're in the red zone. You really should try to get below that. Otherwise people are gonna think that the website's not interacting with them, that they're not interacting with the website, the website's interacting with them. - Great. Yeah, I'll have to find a link to this for the show notes, but I do recall seeing some research, basically looking at orders of magnitude from 10 milliseconds, 100 milliseconds, 1,000 milliseconds being a second, and it's kind of in that range: what does a user consider instant? What is near instant? And then I believe the next level is subsequent. So if you click on something and within 10 milliseconds,

 users are not able to perceive the difference between two things happening at the same time. You could put two images on the screen within 10 milliseconds of each other, and users would say those came on exactly the same time. Then with 100 milliseconds, you're looking at near instant, which is like, you can perceive a 100 millisecond difference,

 but nobody's gonna really knock you and say, well, that wasn't instant. And then when you get to about the 500 millisecond mark, you start to see things as subsequent. This came on, then this came on. And then after that, you just start losing the connection between two events.
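The exclusion window discussed above maps directly onto a flag the browser sets on each layout-shift entry: `hadRecentInput` is true when a shift happened within roughly 500 milliseconds of user input, and those shifts are left out of the score. A rough sketch of the accumulation, assuming entries shaped like the browser's `layout-shift` performance entries:

```javascript
// Sum layout-shift scores, skipping shifts that followed recent user input
// (the browser flags those with hadRecentInput, per the ~500ms window above).
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}
```

In a real page these entries come from a `PerformanceObserver` watching the `layout-shift` entry type; the sketch just shows how user-initiated shifts are excluded from the total.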

 - It just brings me back to the days when you would be like, I don't know, when the internet first came out and you'd be loading a gamer guide or something to try to figure out... you knew if it was a PNG or JPEG, right? Cause one would load in from the top, one would load in from the bottom. Like, we're talking about 500 milliseconds, and then you'd go to the whole page. And back then you'd be trying to figure out how to beat Legend of Zelda. And you'd be waiting 35 seconds for the image to come up. - Like, load faster. - Comes in line by line. And then if you, what was it? Was it interlacing where it comes in like every other line and then the in-betweens? - I think that's what it was. I don't even know if PNGs were around back then, but I remember one of them loaded top down, one loaded bottom up. Maybe it was the interlacing. But anyway, yeah, performance has come a long way. People are used to a lot shorter wait times. - Definitely, yeah. - Okay, so you've got the two kind of, the first contentful and largest contentful paints and cumulative layout shift. Those are kind of more about what you see on the screen.

 What about the other category? What about the interactive side? What do we get there? - So on the other side you have two of them. And the one that is ready to go and is in production, as Google would say, is first input delay. And that's gonna be all about how much time there is between when a user first interacts with the page and when the browser is actually able to respond to that input. And so to kind of backtrack on this, why wouldn't a page be immediately ready?

 JavaScript running on a page. JavaScript is a single-threaded language. And so what that is gonna mean is that you can't do two things at once in JavaScript. As you start throwing functions on the call stack, they pile up, and then you can't do anything until that call stack is cleared again. You can't start a new function. You can't even interact with the page. If you've ever interacted with a page that's had this issue, you may have had the experience where your mouse will move over the page, but clicking on things doesn't work, even hover effects in CSS don't work. You can't scroll the page. The page just seems like your browser has crashed, but it's the webpage itself that is blocking that main thread. And so you can't do anything to interact with it. And so that delay between when the page looks ready and when the page is actually ready for interaction, when a user can actually start to do something with it, is a core vital that Google has been tracking.
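The single-threaded behavior described here is easy to demonstrate: a synchronous loop keeps even already-queued work from running, which is exactly what a long task does to input handling. A small sketch:

```javascript
// A synchronous busy-wait blocks the one and only JS thread.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // Nothing else, not timers, input handlers, or rendering, can run in here.
  }
}

const queuedAt = Date.now();
setTimeout(() => {
  // Queued with a 0ms delay, but it cannot fire until blockFor() returns,
  // so the measured delay is at least the length of the blocking loop.
  console.log('timer delayed by ~' + (Date.now() - queuedAt) + 'ms');
}, 0);
blockFor(50);
```

In a browser, a click handler queued during that loop is delayed the same way, and that delay is what first input delay measures.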

 - And the last one, which is actually currently pending to replace first input delay, apparently. - So the next one is interaction to next paint. And so painting is something that happens on the screen anytime something changes. When browsers are implementing new features and they start talking about performance issues, the benchmark is, I think: can the browser do this operation quickly enough that we maintain 60 frames per second? Scrolling the page, interacting with something, making this animation happen. Does that happen at 60 FPS? If it doesn't, then the browsers are looking at this going, we can't make this happen performantly enough to be reliable. And so we're not gonna implement it yet. We're gonna work on making this better.

 With first input delay, what we're measuring is the very first page load. What's happening when the page loads, and then how quickly can we interact? So then interaction to next paint is measuring how much time it takes from an interaction to the next time the browser makes a paint. So not only are we performing quickly when the page loads, but are we performing throughout the life cycle of the page? And this is especially useful in some of those JavaScripty ones, when you have that single-page application. You might be interactive quickly, but when you do that fake page change, are we still quick? If we're doing something heavy with Ajax on the page, are we doing it in a way that's asynchronous enough that we're not blocking the entire use of the page until something has come back and we're able to do more on the page? - So it's basically making sure

 you're not cheating the user by saying, "Hey, the page loaded really fast, and now you hit that submit button and we're gonna take three minutes before we give you anything back." - Exactly, because a lot of JavaScript, if you look at your user tracking libraries and those kinds of things, you put that on your page and a lot of them will say, "Hey, we're not gonna do anything until the DOM content is loaded, and then we'll maybe even wait another second or two." They let you get all of your page setup JavaScript out of the way. And then we're gonna turn on the really heavy, meaty stuff: everything your user does, we wanna know. We wanna know every pixel this mouse has moved over, we wanna know how far they scrolled. If they clicked away from the browser window and came back, we wanna know everything that they've done. Well, if that library or that framework or whatever it is that your user tracking is in, if that starts to impact your page performance, we wanna knock you on that. We want you to know that, "Hey, if you're gonna bring in 13 different levels of spyware onto your page, but you're gonna do it after the page loads," that's still not a performant experience, and we wanna make sure that users

 aren't having to deal with that. - Google has a tool for that. It's like Google and Drupal are starting to become synonymous: "Oh, we have a module for that." I mean, isn't that the chief goal of Google Tag Manager, to manage all of your add-on scripts and enable them to load at certain points in time?

 - And to be able to track on its own, as you're doing stuff with your Google Analytics, being able to do all that, yeah. - Yeah, the difference there, though, is GTM doesn't necessarily know what's in it. It's more about, "When I do this, I do that." It's about convenience for the marketing team. But I feel like the INP, or interaction to next paint, is really to kind of fix the React issue, is what I'm hearing. So React used to be very much like, "Hey, yeah, the initial load is gonna be really long, right? Because we're downloading eight megabytes of JavaScript in the single-page application." But the advantage is, once that initial load happens, everything's on the page. And so changing pages is gonna be very quick, right? But React has been out now for what, 10 years? And you have people that have built really complex applications that are not performant. Yeah, they've actually fixed the front-end loading thing; production builds of React can be like one megabyte or something. They can be pretty snappy. But then every single interaction on the page, like you said, is an API call or something. And those API calls very often can be pretty slow. So it's like, yeah, it loads fast now, but then switching pages takes, you know, 15 seconds or something because it needs to do all this stuff. It seems like the goal is that Vitals is tracking the actual user experience, how users see the site, and trying to quantify it in a way that doesn't just help with page ranking, that's part of it, but is actionable. So they can say, "Hey, this particular page," because again, it can be different for different pages, right? Your homepage might be great. Your landing page might be great, but this other page might be really slow, right? And being able to measure these Vitals in a repeatable way across different pages is valuable when you're trying to find out what's going on.
- And that's kind of the benchmark for what Google considers their core web Vitals, not just some measurement, but what ones make it into that core category is they have to be user-centric. They have to prioritize what is the user seeing on the page and what's their experience like, and they have to prioritize being measurable in real user monitoring. So some metrics are just difficult to get with real user monitoring, but the ones that are considered core Vitals are the ones that you can put out on every single page to every single user and measure every single user's performance if you want it to. - Hmm.

 So what are some common client-side performance problems that people will experience?

 - So we talked about some of them at the top, some of the ones that are more feely and less data-based, and that's flickering, or the site loading slow. And then others that you will see in a lot of places will be a little bit more concrete. Image size is a big one that we'll see out there. You'll go to a page and it'll have this nice, big, beautiful hero image at the top. And if you look at what image it's loading, it's like a four megabyte image. And it's like, did you crop this down at all? Did you compress this at all? And what kind of image format is it in? So those are some issues.

 - Wait, Andy, I can't just use images directly from my digital camera on the web. - Yeah, just throw them raw right up there and it'll look beautiful. And no, you don't wanna do that. The images that you put on your website,

 as you are working them into your design, you wanna be looking at what is the biggest reasonable size this image could have. If you start looking at your user data, if you have it, or if your design is locked in with a max width of, say, 1280 pixels, there's no reason to have an image any wider than 1280 pixels. You're just sending extra data that the browser is going to have to scale down anyway.

 And then as you look at break points, you wanna make sure you're doing responsive images if you can.

 There's an HTML picture element that allows you to use sources and say, hey, if the viewport is 300 pixels wide, use this image. Oh, it's actually 600? Okay, use this one instead. Oh, it's actually 900? Use this one instead. And so as you scale up the page, you can actually change what image is displayed in these places using media queries in those source elements.
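The element described above looks like this in markup. This is an illustrative sketch, with hypothetical file names and breakpoints, using `min-width` media queries so wider viewports get the larger source:

```html
<!-- File names and breakpoints are illustrative, not from the episode. -->
<picture>
  <!-- The browser uses the first <source> whose media query matches. -->
  <source media="(min-width: 900px)" srcset="hero-1280.webp" type="image/webp">
  <source media="(min-width: 600px)" srcset="hero-900.webp" type="image/webp">
  <source srcset="hero-600.webp" type="image/webp">
  <!-- Fallback for browsers without <picture> or WebP support. -->
  <img src="hero-600.jpg" alt="Hero image" width="600" height="400">
</picture>
```

The `<img>` fallback also hands WebP-less browsers a JPEG, which is the gap the Webp Fallback Image module covers on the Drupal side. Explicit `width` and `height` attributes let the browser reserve space before the image arrives, which helps the cumulative layout shift score discussed earlier.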

 - Just as a programming note, if you're interested in image optimization for Drupal, you can listen to Talking Drupal #368. We did a whole show on that. - So then not only are you picking the right sizes, but you can also go through different formats. Different formats have different advantages and disadvantages. JPEG has huge amounts of support and you can compress it down really low if you want to, but it's lossy compression. And so as you start to compress it more, you're gonna lose some of that data. PNG is, I believe, lossless and also supports transparency,

 but it keeps a bigger file size. And then as we talked about as the module of the week, there's WebP, which is lossy or lossless and uses a predictive analysis algorithm or something or other to decide what the next pixel in the image will be, allowing you to have even smaller file sizes, even at big wide viewports.

 - One thing to point out about images, though, is you wanna look at the individual images and check them, because sometimes a PNG will be smaller, sometimes a JPEG will be smaller. Usually the more photograph-type images are better as JPEGs, and the more design-type images, I guess you could say, are smaller as PNGs. But the same is true for WebP. I've had situations where certain images as WebP are bigger than the JPEGs. It's pretty rare, but it can happen. So you do wanna make sure that you're not just doing a blanket rule; you wanna test a few cases and see what is on average smaller. - So I feel like there are probably a lot of resources out there for image optimization. And I'm kind of interested in our next point on the list and understanding more about that. So I'm gonna push us down the list a little bit to render-blocking resources and understanding what those are and how we can combat them. - Yeah, so render-blocking resources. There's a great write-up on MDN of how browsers work. If you just Google "MDN how browsers work," it's fantastic reading.

 But what the browser does when you make a request: it goes through all of the DNS lookup, it makes a connection to the server, and it requests a specific asset. And then you wait for the server to respond. And the server doesn't send the entire document all at once; it sends a byte stream down, and you're getting, I think, about 14 kilobytes per packet. And as those packets come in, the browser immediately starts parsing that HTML and runs through it. And when it gets to a CSS file, if it gets that link rel stylesheet, it's gonna go fetch that and pull it in. And it continues to move past that as it's fetching and pulling that in, and it moves all the way down. But if in that CSS file you're declaring font faces, you've decided we're gonna host our own fonts and we want to declare that, you'll have URLs in the CSS.

 And then once the browser has parsed through the HTML, it's gonna parse the CSS, and it says, oh, there's another URL, I need to go fetch that resource and bring it in. And so if you watch this the first time you load the page, if you do it from a cold cache in your browser,

 you'll see the page load in with some font that your system has installed. And then once the browser knows to go fetch that font and bring it in, it will switch out that font and bring in the new one.

 The same kind of things, if you're doing background images in CSS, it has to do the same thing. You'll load in with no background image and then later it'll pop in into place. So render blocking resources is looking at how many different URLs do I have to go through to get a resource I need on the page.

 You'll get similar kinds of things if you use JavaScript to load in CSS or if you're using JavaScript to load in images or if you're using JavaScript to load in more JavaScript, you'll get all that same kind of stuff.

 So the way to work around that is there's a, if you're using the link element in HTML with the rel,

 I'm gonna say prefetch, I don't have that in front of me, but basically you're telling the browser, I'm going to ask you for this and I'm going to need it right then. Please go fetch it right now. And so if you know that you're gonna need to bring in fonts,

 you can tell the browser, this is the font I'm gonna go get, just go ahead and fetch it so that when the CSS asks for it, it's ready. - I see, so it's basically telling the browser, hey, here's the list of things I'm gonna need, go get them all at once, instead of in your natural method of upon request. It's interesting, because I think this has a little bit to do with the first thing we talked about, which is flickering. I always go back to a talk I heard from our friend Jason Pamental about a flash of unstyled text, right? He's big into fonts, obviously, so he was talking about that. And that scenario you just described is kind of what causes that, right? Where you flash between one font and another. But that's super interesting. So you do have the ability to set those as a preload or pre-render sort of thing, where it can pull those in all at once as opposed to as needed.
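For reference, the attribute being reached for above is `rel="preload"` (`rel="prefetch"` is the lower-priority hint for resources a future page might need). A sketch of a font preload in the document head, with a hypothetical font path:

```html
<!-- Fetch the font immediately, before the CSS that references it is parsed. -->
<!-- The font path is hypothetical. Note that font preloads require the -->
<!-- crossorigin attribute even for same-origin files, because fonts are -->
<!-- fetched in anonymous CORS mode. -->
<link rel="preload" href="/fonts/site-font.woff2" as="font" type="font/woff2" crossorigin>
```

This collapses the HTML-then-CSS-then-font chain into a single level: the font download starts as soon as the HTML arrives.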

 - And if you're looking to correct that issue, if you're starting to see that, oh, I have this font that switches out, or I have styles that load in really late for some reason, looking in your browser's dev tools in the network tab, I know in Chrome, and I would find it hard to believe it's not in Firefox,

 is what's called an initiator chain. And so what you're looking for is: the very first initiator is the document we're requesting. And then what does it request in that document? And then you get the second level. And then for each second-level resource, whether that's JavaScript or CSS or whatever, what's requested in that? And if you find yourself at about the third or fourth level or more down, if it's something that's really crucial for the design of the page or the look of the page or, you know, something that's above the fold as it were,

 then you wanna consider prefetching that. Because if you have to wait for the first thing to finish and then for the second thing to finish before you can get to the third thing, there's gonna be a noticeable delay. - And really quick question, since we are a Drupal podcast, how do you prefetch things in Drupal? Is there a module for that?

 - That's a great question.

 I don't have the answer to it.

 I would imagine there's a module for it.

 - Is that where-- - I have been out of Drupal. I haven't worked in Drupal for probably two years. I've been on a big web components project that's not working out of Drupal for two years. And so I don't actually know the answer to that question.

 - I feel like next week's guest might be able to help us with that, as I think he had a module at one point that did a bunch of preloading of things, that being the honorable Mike Herchel. So maybe we can surface that question to him later on. Maybe he's got some sort of idea.

 So the last two here, screen jitters,

 how is that different from flickering? I mean, I would assume flickering is not moving and jittering is moving of items, right? Is that correct?

 - So with flickering, personally, I've heard people describe flickering typically only on page load. Something comes in, and then new things come in, and you get that cumulative layout shift, or you get the render-blocked resources that come in and swap themselves out. Whereas screen jitters is gonna be more like: as I'm scrolling on the screen or as I'm trying to interact with it, that interaction to next paint metric is slow. And so if you start to see, as I scroll down the screen, that the page is kind of jumpy as it scrolls,

 if you're listening to the scroll event and trying to respond to that, or if you're doing something like listening to resize observers or intersection observers, if you're doing something that's happening very rapidly and you start to block the main thread, even for tiny little bits, if you do it a bunch in succession, you wind up ruining the browser's ability to render the page at 60 frames per second. And so if you start to see those screen jitters, then you're doing something in the main thread a lot.
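The usual fix for rapidly firing events like `scroll` is to throttle or debounce the handler so the expensive work happens at most every so often. A minimal throttle sketch (the injectable clock is only there to make the logic testable; `relayout` in the usage comment is a hypothetical function):

```javascript
// throttle(fn, wait): run fn at most once per `wait` milliseconds,
// dropping the calls that arrive in between. `now` is injectable for testing.
function throttle(fn, wait, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= wait) {
      last = t;
      fn(...args);
    }
  };
}

// Hypothetical usage: recompute a layout at most every 100ms while scrolling.
// window.addEventListener('scroll', throttle(() => relayout(), 100));
```

Scheduling the remaining work inside `requestAnimationFrame` is another common pattern for keeping the browser at 60 frames per second.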

 - I'm just laughing because I just kind of, it's on my task list to revisit it this week to make sure that it was done properly. But I had a client that had a masonry layout thing. And in some cases, that masonry layout was really long and towards the bottom, it just wasn't doing it properly. And so I needed to rebuild the masonry layout. And so those pages are really long.

 So I needed to do it on scroll. And so I was like, okay, now I need to remember how modulus works. So basically, if you're scrolling and you're within like 10 pixels of an even multiple of 500, then it will, and it debounces it, it'll call the shift, and that fixed the issue. And I did a bunch of testing, like scrolling 20,000 pixels down to see how many times it calls, and it calls maybe once or twice in that time. And if you scroll really fast, it calls it just once. But I was like, I was actually dealing with, I'm like, I was watching scrollY. I'm like, this is really bad. So that's why there's a task to revisit it. But that was just kind of the quick fix. The other place where I've seen screen jitters mentioned is you sometimes have a situation where

 if you hover your mouse in a certain place, a hover effect will come up, and that hover effect changes the hover space. So if you hover within a few pixels of something, it'll trigger the hover effect. And then suddenly the mouse isn't over the thing that triggers the hover, and so it goes away. Then it goes, oh wait, you're hovering over me. It'll trigger it again. And you'll get this flickering, jittering, jumping mess.

 - Yeah.

 And that's, yeah, you can definitely get that kind of thing. And I know one of the main reasons that the CSS Working Group has been slow to do some of the more powerful CSS selectors is because you start running into things like this: if I'm hovering this and I cause it to transition away from that place and I don't move my mouse, well, now it's not there. And then you wind up in this back and forth over about a two-pixel difference as things happen in there. Yeah, you've basically just done recursion in your CSS.

 - Yep.

 I've done that before with Web Components. It's fun. - Yeah.

 - But yeah, that's the other one. And then obviously the last thing that you have on the list here is memory leaks.

 Nobody has issues with that, right? - Never, right? Yeah, and so memory leaks, and I guess the other one that we should mention here is memory bloat. Your browser has a finite amount of resources. Even if you're running on the most powerful computer ever built, you do have a finite set of resources, and the browser has access to some sliver of them that it requests from your system.

 And so if you're trying to do too much, an example of this that I have seen in the wild is we had a page that loaded up and the page itself was like 28,000 DOM nodes, and it had a whole bunch of JavaScript loading on the page. And at some point, the amount of memory that we were trying to use was too much for iPhones. And what the iPhone did was, as the page loaded, it wasn't interactive, you couldn't scroll the page, you couldn't click on anything, and that little loading bar would get really close to the end and then it would try to reload the page. And it would do the same thing a second time. And after that, you just got the frowny folder face that says, "Sorry, we can't load this page. Something went wrong." And so what you've done there is you've created a page that has exhausted the browser's memory. You've got an out-of-memory error in your browser.

 And so memory bloat is when you're just trying to use too much, whether the page is too big, or what you've tried to download, or the JavaScript is gonna use way too much, that kind of stuff. And then memory leaks are a little bit different, where you are allocating stuff in memory that's not able to be garbage collected. And so JavaScript and PHP are both compiled at runtime, and they're, I believe the term is, memory-managed. The idea is when you write code, you don't have to ask for memory and then use that memory and then, when you're done with it, say, "Okay, I'm done with this, you can have it back."

 Inside the compiled byte code, as it's being run, the runtime has a garbage collector.

 And the way that JavaScript works is different from browser engine to browser engine. But the general idea is if you create a variable or a function or something, every time you do that, it has to use up a little bit of memory.

 And if you are not able to ever access that again, whether through an event listener or something stored in the global scope, if there's no way for your code to ever reach it again, then it's safe to be garbage collected. And so the browser will say, "Yep, we don't need this anymore, we'll throw this all away, and this memory is now freed up again." With a memory leak, what you've done is, somewhere in your code, you keep allocating new memory for the same thing that you've allocated before, and so you just build this pile of memory. And if you're doing something that requests new data every time you scroll, for example, just because it's a rapidly firing event, if every time a scroll event fires I request new memory for the same thing, well, I've built this pile of memory that gets bigger and bigger and bigger, and eventually we do run out. And so you wind up with that same problem as memory bloat,

 but you're doing it in a way that you just need to clean up and refactor your code to say, "Hey, this needs to be thrown away, "this needs to be unreachable." In some way, we need to get rid of the old data and replace it with the new data.
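One common shape of the leak described here is re-registering a fresh listener on every update without removing the old one: each closure keeps its captured data reachable forever. A hypothetical sketch of the leak and the fix, using the standard `EventTarget` API:

```javascript
// Leaky version: every call adds another listener, and every old closure
// (plus whatever data it captured) stays reachable, so it is never collected.
function makeLeakyUpdater(target, sink) {
  return (data) => {
    target.addEventListener('refresh', () => sink.push(data));
  };
}

// Fixed version: remove the previous listener before adding the new one,
// so the old closure becomes unreachable and can be garbage collected.
function makeSafeUpdater(target, sink) {
  let current = null;
  return (data) => {
    if (current) target.removeEventListener('refresh', current);
    current = () => sink.push(data);
    target.addEventListener('refresh', current);
  };
}
```

With the leaky version, firing the event after several updates runs every stale listener; with the fixed version, only the latest one.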

 Whereas with just a memory bloat problem, you're saying, "I need all of this memory," and the browser says, "I just don't have it." - Yeah, and the way to think about it too is, with memory bloat, it's like the page won't load because you just don't have enough memory. With a memory leak, it's like at some point in time it won't, and a short-term fix is to refresh the page, right? - Right. - Because refreshing the page will get rid of everything, but some people leave pages open for a really long time and they never shut down tabs and stuff, and so eventually it'll crash and force a reload. But you want your,

 if your site is taking up memory, it's taking away memory from other people too, other tabs, so you're making other sites slower too, so it's definitely something-- - Or even other applications. - Other applications, yeah, yeah. So it's something that you do want to clean up, but it presents in a different way. - So let's shuffle down the road here a bit, and I want to touch a little bit on something that we talked about, which is tracking scripts, right? So we all, well, hopefully we all know that tracking scripts can cause performance issues on the front end, especially if you have a bazillion tracking scripts, and marketing teams tend to want to add a bazillion tracking scripts, so I was just curious,

 how can folks overcome the tracking script conundrum, right? We talked about Google Tag Manager, I don't know if that's 100% the best solution, but I'm just wondering, are there tips and tricks that you have to kind of overcome that?

 - Generally, I would say track as little as possible. Obviously, I would prefer no tracking. My personal site, I have no user tracking. - Andy Blum is a big fan of marketing teams everywhere, just saying.

 - Public enemy number one for marketing.

 My personal site has no user tracking. I don't have any business use case to know who has visited my site, or how many people have visited, or what devices they came in on, because my site is my blog, and I link out to the podcast episodes I'm on. And it's not that big a deal; it's something I can point somebody to if I'm in a job search. I can say, here's things I've written and things that I've spoken on, that kind of stuff. But if you're a big Fortune 500 company, you do have a business use case to know who is on your site, and what pages they are visiting, and what paths they are taking through your site, from the homepage to a product that they may be interested in, to, oh, we recommended this other product and they clicked on that. And so we know that we've been able to extract more value from that user because of something that we've done on the site, and you need to be able to track that.

 And that's fine, I understand that. But what I would say is, do you need multiple tracking suites going at once? Do you need to be tracking every single behavior? Track only the stuff that you actually know that you have a business use case to be working with.

 And then the other thing is, understand how your specific tracking and user behavior management suites work. One of the ones that I've interacted with recently and have had massive problems with,

 the way that it's working is, when a user comes to the site and the page is loaded, then the user tracking suite comes in, and it is relaying every single DOM node, every single event listener, as much as possible. It's trying to make a duplicate copy of a page in an emulated browser on a server somewhere, and handling that through a WebSocket connection.

 (laughing) And so what it's doing is, basically, every time the user gets data, the user also sends that same data off to the server. So we wound up in an issue where we had this site with lots of DOM nodes, and all of the memory it took to hold those DOM nodes and lay them out also ended up having to be transmitted off to this other server. And so the way that it worked was fine for a small site, but we had a big site, and the big site with memory-heavy pages caused this issue with the user tracking suite. So make sure you're tracking as minimally as possible, because every resource you put into user tracking is something you're taking away from potentially a good user experience, and then also know how your trackers work, and make sure that your site is optimized so that that's not an issue.

 - I have to say, I'm a huge advocate of privacy, as our listeners will know, and I think one of the best things to come out of these laws, GDPR and the ones from California and other states that are starting to follow it, is that for the first time I have clients reaching out to me saying, "Hey, please remove all of these tracking scripts except for this one; it's not performing." They now know that they need to be aware of what they're tracking, and they look at their list of 10 scripts. And even though I ask them, usually every quarter, "Do we really need all these 10 scripts?" they always say, "Yes." Now they're like, "Well, we haven't used seven of these in 10 years. Can you please remove them?" So I have to say that's been a nice breath of fresh air.

 A lot of marketing teams are becoming aware of these laws and are starting to critically evaluate their tracking. But that's the discussion: do you need this? If you do need it, do you need it for everybody? Do you need it all the time? Like Hotjar. I have a lot of clients that use Hotjar, and they do find a lot of user issues from it. It does sometimes catch stuff in the wild that we haven't caught in our testing. But does it have to be on every single user? Can it be on 10%? Can it be on 5%? Can it be on 1%?

 And those types of things can be controlled through Google Tag Manager.

 But yeah, I'm the same way as you. My site doesn't have any tracking whatsoever.

 Okay, so we talked a little bit about Core Web Vitals and some other tests that can be done, but what are some other tools you can use to help identify front-end performance issues and maybe help resolve them?

 - So if you're just getting started and you wanna just take a look at any site, even if it's not yours, you can use something like WebPageTest, and that'll let you put in a URL and it will run the synthetic user testing.

 And so it'll get you kind of an idea, a snapshot of the page in an idealized setting. That would be the first place I'd say to go if you're just starting to dip your toes into the water of front-end performance, because it's got a dropdown menu that will even go through those metrics with you and say, "Hey, your largest contentful paint was at this time." Then you can start to look through the other analysis it has: What was I requesting? What was the size of my document? What was my time to first byte? And so you can start to identify where your problems are.

 So WebPageTest would be a good one. The other one, as you start to identify those issues, if you find one that says, "Hey, we have a lot of render-blocking resources," is your browser: you can debug those in your browser. The dev tools in Chrome or Firefox are really, really good. I personally don't like Safari's dev tools, but if you've gotten used to them, I'm sure Safari is able to do a lot of the same kind of stuff; it's just not my cup of tea. The Firefox and Chrome dev tools are very good at looking at all of your network requests, and the Performance tab is really great. I prefer Chrome's Performance tab for emulating lower-powered devices. One of the things it lets you do is throttle your network connection, so that when you start to download stuff, big files take forever to download and you can feel the pain of your users on a cellular connection. The Chrome dev tools also allow you to throttle the CPU, so you can feel the pain of a low-power Chromebook if you're on some high-end MacBook Pro or Linux machine or whatever.

 Firefox, if you're starting to look at what the browser does during its lifecycle, like when it has to stop parsing HTML to look at stuff, has a great timeline. I think it's in its Performance tab.

 And then they both have pretty good Memory tabs. If you start to have issues with memory, you can take a heap snapshot when the page loads and another 10 minutes later, and if you're trying to identify a memory leak, you can compare the two: where did my memory usage grow even though this page just sat here idle for a while? Yeah. If you're asking, "What JavaScript is running as my page loads, and where can I maybe make some optimizations?" you want to look at allocation sampling in the Memory tab. What that'll do is show you, over the course of time as the page is loading and running and whatever, which functions are making the heaviest use of your memory.
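To make that concrete, here's a minimal sketch in plain JavaScript, with invented names (`responseCache`, `fetchWithCache`), of the kind of leak those snapshot comparisons surface, plus one common fix:

```javascript
// Hypothetical leak: a module-level cache that only ever grows.
// Two heap snapshots taken minutes apart would show this Map expanding
// even while the page sits idle but keeps polling.
const responseCache = new Map();

function fetchWithCache(url, fetcher) {
  if (!responseCache.has(url)) {
    responseCache.set(url, fetcher(url)); // never evicted -> leak
  }
  return responseCache.get(url);
}

// One fix: bound the cache, evicting the oldest entry past a limit.
function fetchWithBoundedCache(url, fetcher, limit = 100) {
  if (!responseCache.has(url)) {
    if (responseCache.size >= limit) {
      // Map iterates in insertion order, so this is the oldest key.
      responseCache.delete(responseCache.keys().next().value);
    }
    responseCache.set(url, fetcher(url));
  }
  return responseCache.get(url);
}
```

Comparing two snapshots would show the Map's entry count and retained size climbing in the leaky version and staying flat in the bounded one.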

 And then the last one is the Sources tab, and the Sources tab is like a superpower. If you're working with a file from a CDN and you want to say, "Hey, I want to see this production version of the site, but what if the CDN version of this file looked like this instead?" you can actually create local copies of those external sources and tell your browser to swap those in instead of the external source. I've done this before where I've had a page come down from the server, and I know I want to swap out the version of a JavaScript library I'm pulling in, for example. In the document file that's coming from the server, it's requesting 1.6.2, and I say, "I'd like to see what happens if I put in 2.0.0," and you can just replace that in the document. Then when you request that file, it will actually pull it from your local machine and pretend it's coming from the server.

 - Well, that's interesting. - Interesting. - I didn't know that. - Those two sound like more in-browser, kind of nuts-and-bolts sort of things.

 How do they differ from Lighthouse? Is Lighthouse more of just an SEO-specific tool?

 - Lighthouse is,

 I know a lot of people put a lot of stock in Lighthouse numbers, and I think it's because it's a very simplistic report, a very simple overview of what your site's current measurements are. It's one number, right? For an entire category of things. What's my accessibility score? You got a C, right? That's very easy to understand. Like, "Oh, I'm not failing, but I could do a lot better." - We all go back to that grade school grading method. - Right. And so with Lighthouse, I know accessibility is one category, performance is one, I think SEO is one. And so you get these kind of four high-level categories and a single number. Then when you open that, it says, "This is why we docked you points for accessibility, or this is why we docked you for performance, and this is why we docked you for SEO." So Lighthouse is another good tool if you're getting into this for the first time, but it's not only for performance.

 And it's not as,

 if you're working on an issue and you're trying to find what that specific issue is, it might tell you there's an issue, but it's not gonna tell you why or how to fix it. - Yeah. - Got it. - I also wanna mention before we move on that both Chrome and Firefox have this, but I think Chrome's interface is a little bit better: you can also add breakpoints to your JavaScript in Chrome. Well, like I said, both have it, but I think Chrome's is a little bit easier. So if you're trying to find

 what pathway the logic is actually taking, it's similar to Xdebug breakpoints, just for JavaScript. And it persists through page loads, so you can set your breakpoints, then refresh the page and see what happens. - Yep. - So it's pretty useful. - So our listeners may be upset that my next question is really the first time we're talking about Drupal and front-end performance. But in our last two or three minutes here,

 any specific modules or things Drupal folks can do to help improve their front-end performance?

 - Yeah, so there's a documentation page and we'll put this in the show notes, but it lists a handful of strategies that you can do for front-end performance.

 Things you can do: minifying your JavaScript, your CSS, your HTML.

 Minification is kind of a two-part process. One, if you're doing something like JavaScript ("uglifying" the code is another term I've heard used for it), instead of sending along big, long variable names that take up 20 characters, you rename them to just A, so you're actually reducing the character count. And then secondly, there's all of the white space: stripping all of that out. Removing the white space doesn't do as much anymore if your server is properly configured, because it'll get compressed out with gzip.
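As a hand-worked sketch of those two steps (a real minifier such as Terser does this automatically; the function here is invented for illustration):

```javascript
// Before: descriptive names and whitespace, written for humans.
function calculateShippingCost(totalWeightInKilograms, ratePerKilogram) {
  const baseHandlingFee = 5;
  return baseHandlingFee + totalWeightInKilograms * ratePerKilogram;
}

// After: what a minifier emits. Identical behavior, far fewer bytes:
// long names become single letters, and whitespace is stripped.
function c(a,b){return 5+a*b}
```

Both versions return the same result for the same inputs; only the byte count changes.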

 But doing minification of your files is a big one.

 Another thing you can do, which may or may not have a great impact on your site, is aggregation.

 And so in the Performance section of your site, at admin/config/development/performance, you can turn on CSS and JavaScript aggregation, and I think it may also compress.

 One possible downside of aggregating your JavaScript is that it puts it all into one JavaScript file.

 If you haven't written your stuff in closures properly, like immediately invoked function expressions (IIFEs),

 what you can wind up with is, if you have an error in one of your JavaScript files, then all of them fail to run. So aggregation can introduce new problems

 as well as fix some. I would say the benefits probably outweigh the risks there. - Well, you need to fix your code if it's breaking. - Right.
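A minimal sketch of the isolation an IIFE buys you, using a plain object as a stand-in for the global Drupal object:

```javascript
const Drupal = { behaviors: {} }; // stand-in for the real Drupal global

// "file-one.js" -- its `counter` is private to this IIFE.
(function () {
  let counter = 0;
  Drupal.behaviors.first = { count: () => ++counter };
})();

// "file-two.js" -- reuses the name `counter` safely. Concatenated
// without the IIFE wrappers, this second top-level `let counter`
// would be a redeclaration SyntaxError that stops the whole
// aggregated file from running.
(function () {
  let counter = 100;
  Drupal.behaviors.second = { count: () => ++counter };
})();
```

Each wrapped file keeps its own scope, so nothing leaks between them in the aggregate.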

 The thing about aggregation is, previously, in Drupal 7 days, if you were connecting to a server through HTTP/1.1, browsers would not carry over, I think it was the TCP handshake, from connection to connection. So you would have to redo that for every connection, and the browser would throttle how many active connections you could make to the server. And so if you had a site where you'd split out all of your functions to make it more maintainable, but now you're trying to send 30 JavaScript files, it's gonna do the first six, and when the first one stops, then the seventh one can start, and you get this rolling limitation of six requests in the browser.

 If you're using more modern infrastructure with HTTP/2, they now all share the SSL and TCP connections, and there's no limitation anymore. As soon as the browser hits that resource and knows it wants to look for it, it will start to pull it down and go through it.

 The other thing you can do for the performance of your JavaScript, especially, is to request your JavaScript as async and deferred. Those are both attributes that go on the script tag in the HTML, but you can set them up in your library's definition. Async says: you don't need to wait for this to fetch; continue parsing HTML while this document fetches. And defer says: don't run it until the DOM content has been fully parsed. So basically you're telling it, don't wait while I'm downloading this, and then don't run it until all the HTML is done, and that way you can run all of your JavaScript at one time. There are a few exceptions to this. If you're interested in more of the reasoning behind this, there is a link about how to do this and what the pros and cons are. - Be sure to add that to the show notes.
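In a Drupal library definition, those attributes can be set per file. A hypothetical sketch (the theme name, library name, and file path are all invented; the `attributes` key is what adds attributes to the rendered script tag):

```yaml
# mytheme.libraries.yml (hypothetical theme and file names)
site-enhancements:
  js:
    js/enhancements.js:
      attributes:
        # Rendered as <script src="..." async defer></script>
        async: true
        defer: true
```

With both set, the browser fetches the file without blocking HTML parsing; in browsers that don't support async, defer acts as the fallback.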

 Anything else you wanna add as a final thought there? - And then we'll go to Randy before we close out the show. - There are lots. Front-end performance is just a big, wide field, and any one of these issues could cause your problem. So performance problems tend to be a death-by-a-thousand-cuts kind of thing; you're typically not gonna have one silver-bullet solution that fixes them.

 As you start to develop stuff, you wanna make sure everything you're doing is as performant as possible. But then, once it's all in aggregate, you wanna make sure you're continuously testing and making sure you're not having problems pop up. - Yeah, I feel like we could probably talk about this topic every six months and find new and exciting things to talk about.

 As always, Andy, thanks for joining us today and talking about this. And also thanks for joining us for the last four weeks and contributing to the show. - Thanks for having me. - Yeah, thanks for joining us. It's been great.

 So if you do have questions or feedback, you can reach out to Talking Drupal on X with the handle Talking Drupal, or by email with show at. You can connect with our hosts and other listeners in the Drupal Slack in the Talking Drupal channel. - You, just like NEDCamp this week, can promote your Drupal community event on Talking Drupal. Learn more at slash TD promo.

 - Get the Talking Drupal newsletter to learn more about our guest hosts, show news, upcoming Drupal camps, local meetups, and much more. Sign up for the newsletter at slash newsletter.

 - Thank you, patrons, for supporting Talking Drupal. Your support is greatly appreciated. You can learn more about becoming a patron by choosing the "Become a patron" button in the sidebar.

 All right, we have made it to the end of our show.

 Andy, if folks wanted to get ahold of you to talk more about front-end performance or anything else for that matter, what's the best way for them to do that? - You can visit and all of my social links are in the top bar there. - And if you missed it, you won't be tracked on that website. - You won't be tracked. - Browse assured that you will not be followed.

- Nic, if folks wanted to get ahold of you, how could they do it? - Find me pretty much everywhere @nicxvan and NICXVAN. - And I'm John Picozzi, Solutions Architect at EPAM. You can find me on all the major social networks at johnpicozzi, as well as, and you can find out more about EPAM at. - And if you've enjoyed listening, we've enjoyed talking.

 - Thanks everyone.

 Have a good one.