
Web Performance, Next.js, and the Institution of Marriage

As you probably already know, Bex and I got married back in October. And while we, wisely if I do say so myself, managed to avoid the Pinterest/DIY Wedding Industrial Complex, we did build our own websites.

When we walked into a flower shop and the extremely helpful owner asked us what we were looking for, we exchanged blank stares. When we talk about building things for the web, well, we both do this for a living and have many, many opinions.

So I did this because I can, and mostly for fun.

The original app was the invitation system. It sent invites via Mailgun, which also happened to be Magic Links that transparently led you to an RSVP form. Since each invitation was a login, it automatically had all the right events, plus-one options for the people who had them, hotel information for traveling friends and family, etc.

It was pretty fun, and backed by a Django admin, so RSVP data was basic CRUD: we could see responses in real time, export the entrée selection data to a Google Sheet and share it with the caterer, or run into a friend at a happy hour and fix “oh yeah, I have a plus one” right on the spot.

It’s also now obsolete. The RSVP engine, the schedule of events, and the “send me a new login link” button all served their purpose by the time we stomped on a glass.

I figured I’d tear it down and replace it with a big gallery once the photos came in. Which they did, and deep in the social distance, overengineering a personal site is a fine way to blow an afternoon.

1.

A running theme on this blog is my conflict between wanting to learn new things that have taken over the industry and lacking use cases where they make any sense.1 The original site—logins, forms, and admin—screams Django. This could be whatever, but I decided to do it in React because I am a sheeple and feel like I’m falling behind.

I’d used Next.js once before, for v2 of Ziggy For America, at the suggestion of a colleague. Next is impressive: it scratches an itch that had turned me off the entire SPA ecosystem, the hours and hours of setup time. The team was inspired in part by PHP, by the way pages are files and you can just start making stuff. I love that idea. What Next does is take care of the React/SSR/magic bullshit out of the box so you can start making a webpage, and if you find yourself having strong feelings about an implementation detail, you can override it later (I always want SASS, if only for nesting media queries).

More importantly, Next is built to export static sites, and it’s trivial to throw them on GitHub Pages. Over the years, I’ve built one or two static site generators myself (a Jinja2/SASS/Babel builder written in Python, and something basic built using Gulp), but if I’m going to do side projects, I’d rather spend time on features than tooling.
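For reference, the export step that makes the GitHub Pages deploy trivial was, in the Next of this era, a one-liner:

```shell
# Build the app, then write a fully static copy of every page to ./out,
# which can be pushed straight to a gh-pages branch.
next build && next export
```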

2.

The Ziggy for America site had already brought to my attention that there’s definitely a cost to these toys. Yes, JSX is a fine template language with nested components (but not the only one), yes, reactive javascript is pretty fun, but you pay for all that in complexity.

Consider a technique I’ve used on every site I’ve ever made: lazy-loading. I’ve built it a dozen ways, with timers and scroll listeners and passive flags and intersection observers. This is a solved problem, and it’s not that hard.

The version running on this site looks something like this:

//
// LazyLoad images and iframes.
// Backwards compatible with lazysizes.js
//
function bindLazyLoad() {
  const lazyLoadObserver = new IntersectionObserver(
    (entries, observer) => {
      // Grab the elements that are actually in view
      const elements = entries
        .filter(entry => entry.isIntersecting)
        .map(entry => entry.target);

      elements.forEach(el => {
        // data-src -> src
        // lazyload -> lazyloaded
        const src = el.getAttribute("data-src");
        if (src) {
          el.removeAttribute("data-src");
          el.setAttribute("src", src);
          el.classList.add("lazyload--loaded");
        }

        // And now forget about this so we don't
        // try to load it again
        observer.unobserve(el);
      });
    },
    {
      // Consider elements within 2x the screen height as in view
      rootMargin: `${window.screen.height * 2}px`
    }
  );

  // Observe all the lazyload elements
  [
    ...document.querySelectorAll(".lazyload, img, picture, source, iframe")
  ].forEach(el => {
    lazyLoadObserver.observe(el);
  });
}

bindLazyLoad();

Not bad! Intersection Observer takes maybe an hour or two to get used to but after that it’s really not complicated.

But you’ll notice the goal is to have one observer instance shared across all the lazy-loaded elements. React really doesn’t like that and doesn’t provide a way to do it, so you wind up relying on a third-party package. It’s not that big, but I spent more time than I want to admit trying to read it, and I still only have a vague notion of what it does and why.2

Here’s my code, using the abstracted Intersection Observer hook:

// Assumptions: useInView comes from react-intersection-observer
// (the third-party package mentioned above); getBestCrop and the
// CSS module live elsewhere in the project.
import { useMemo, useRef, useState } from "react";
import { useInView } from "react-intersection-observer";

/**
 * Lazyloading image that automatically selects the right size
 */
export default function Image({ photo, alt = "" }) {
  const [ref, inView, entry] = useInView({
    rootMargin: "500px",
  });

  // We'll use this to know the size of the element
  const pictureRef = useRef(null);

  // Things go in and out of view. We need to know if they
  // _were_ viewed, because we'd want to persist the image in that case.
  const [isLoaded, setIsLoaded] = useState(false);
  const [shouldHaveImage, setShouldHaveImage] = useState(false);

  if (inView && !shouldHaveImage) {
    setShouldHaveImage(true);
  }

  const width = pictureRef?.current?.getBoundingClientRect().width;

  // Let's only compute this once per element width
  // This is something I don't have in the vanilla version, 
  // it's basically a container query.
  const src = useMemo(() => {
    return getBestCrop(photo, width);
  }, [width, inView, isLoaded]);

  const noScriptCrop = photo.sizes["600"];

  const handleLoad = () => {
    setIsLoaded(true);
  };

  return (
    <picture
      ref={pictureRef}
      className={styles.picture}
      style={{
        paddingBottom: `${photo.aspectRatio * 100}%`,
      }}
    >
      <img
        ref={ref}
        key={src}
        className={isLoaded ? "loaded" : ""}
        src={shouldHaveImage ? src : ""}
        alt={alt}
        onLoad={handleLoad}
      />
      <noscript>
        <img className="loaded" src={noScriptCrop} alt={alt} />
      </noscript>
    </picture>
  );
}

If you consider the logic (not the code, the logic), the first version is simpler: you look for things that are in view and load them. In the React version, Intersection Observer’s behavior is hidden away, but now I also need to track state. In the vanilla case, state is not reversible: once an element starts loading, it won’t stop. In React, I have to guard against the case where the user scrolls out of view before the image loads.

It makes sense, but it’s pretty weird.

Another thing that isn’t obvious is that this code runs a lot. The vanilla JS version sets up observers on load, and then runs the callback over the elements in small batches as you scroll. Because React with Hooks has no separation between on-mount and on-event, all of the component’s logic runs 4+ times.

We’ll talk about whether this matters later.

3.

So I made a website. It’s live at JasonBeccaWedding.com. There are a few things I learned that I think are worth sharing.

I’m storing the images on Linode’s equivalent of an S3 bucket and resizing them at build time through Thumbor, which I already have an instance of. Thumbor uses a secret key to sign the thumbnail URLs, which means I need to compute those during SSR (or static export) and never have the key available in the client.

This is one of those things Next.js just does right by default: it supports .env files, which let you have environment variables that are simply undefined in the client. It means all my generated image URLs need to come from getStaticProps, but that’s perfectly okay.

I also have a big hero image, which should have different crops and file sizes at different breakpoints, because I don’t want to ship a 1600px wide photo on mobile where I don’t need it. Typically I’d create a <style> tag there and pass the urls (which are props) as a background image, but JSX isn’t cool with that. It has inline styles, but inline styles don’t support media queries.

Digging into this I learned you can use inline styles to set CSS variables, which I’d never thought about before.

<div
  className={styles.header}
  style={{
    "--background-small": `url(${crops["vertical"]})`,
    "--background-large": `url(${crops["1440"]})`,
    "--background-xlarge": `url(${crops["1680"]})`,
  }}
>

And then write CSS that uses them:

.header {
  background-image: var(--background-small);

  @media (min-width: 900px), (min-aspect-ratio: 2/3) {
    background-image: var(--background-large);
  }
  @media (min-width: 1440px) {
    background-image: var(--background-xlarge);
  }
}

Which is pretty cool.

4.

So we’re live, I’m feeling pretty good, and I go see how performance looks. Does it matter if my personal site is wicked fast? No, but I am a snob and it matters to me!

I ran Lighthouse and WebPageTest, and my jaw dropped. Lighthouse was down in the basement, at a slogging 59. How could that be? It’s a static page with lazy loaded images.

Was it Next? Next is supposed to be fast, but who knows. I checked the Ziggy site, which clocks in at 85. Not great for a static site, but not awful.

Some of the complaints I can’t do much about. Fonts are hosted on GitHub Pages, which sets the max-age at 10 minutes; I’d prefer to dial that up to 7 days or so, but they don’t offer that option.

And loading a big image at the top of the page will affect First Meaningful Paint and Speed Index, but that’s fine; it’s a conscious decision. It was the other numbers that bothered me more, including 9 seconds to interactive and a lot of time spent parsing JavaScript.

Since all my photos were static props for the page, they were hanging out on the page as JSON, and Next doesn’t support a way to defer loading that information at this time. I suppose that makes sense, in a more interactive scenario the app would be useless until that data appears.

I brought the score up to 94 desktop and 92 mobile with two changes:

First, lazy-load the whole photo gallery. That image component that runs 4+ times per photo on load had to be a huge chunk of the noise on the main thread. And if I’ve learned anything about performance, it’s to think of the browser’s capacity as a zero-sum game: everything matters, and little bits of usage add up.

Next provides a way to import a component onto the page when you need it, using dynamic(() => import("path/to/component")). Of course, if you just call that inside another component, there’s no benefit, because it will fetch right away. It has to wait for something, like a button click.

I tied it to a scroll event with {once: true}, so the first time the user scrolls the page, we fetch that gallery, taking some pressure off the top-of-page experience.

To do this I need a useEffect hook around the event binding, and to set didScroll as state to prevent that code from running more than once.3 As a result, I’ve now taken the HTML for the images and their associated javascript execution out of the page as it loads.
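Stripped of React, the event-binding half of this is just a listener registered with { once: true }; the Next wiring is sketched in the comments, with component and file names assumed:

```javascript
// Fire the loader the first time the target scrolls, and never again:
// { once: true } makes the browser remove the listener after one call.
function loadOnFirstScroll(target, load) {
  target.addEventListener("scroll", load, { once: true });
}

// The React/Next version of the same idea (a sketch, names assumed):
//
// const Gallery = dynamic(() => import("../components/Gallery"));
//
// function Home({ photos }) {
//   const [didScroll, setDidScroll] = useState(false);
//   useEffect(() => {
//     const onScroll = () => setDidScroll(true);
//     window.addEventListener("scroll", onScroll, { once: true });
//     return () => window.removeEventListener("scroll", onScroll);
//   }, []);
//   return <main>{didScroll && <Gallery photos={photos} />}</main>;
// }
```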

Looking at what was left, I noticed that fonts were loading later than I expected, and all the CSS was coming from JavaScript. On other sites, every test I’ve seen shows that pulling critical CSS directly into the HTML has a significant effect on the initial page metrics. If my blockers to visual completion are fonts and a big image that are loaded via CSS, inlining it in the head should cut out an entire request/response cycle and let the browser fetch those right away.

Next doesn’t support this out of the box, but I found a Stack Overflow comment that suggested something like this in pages/_document.js:

import Document, { Html, Head, Main, NextScript } from "next/document";

import { readFileSync } from "fs";
import { join } from "path";

class InlineStylesHead extends Head {
  getCssLinks() {
    return this.__getInlineStyles();
  }

  __getInlineStyles() {
    const { assetPrefix, files } = this.context._documentProps;
    if (!files || files.length === 0) return null;

    return files
      .filter((file) => /\.css$/.test(file))
      .map((file) => (
        <style
          key={file}
          data-href={`${assetPrefix}/_next/${file}`}
          dangerouslySetInnerHTML={{
            __html: readFileSync(join(process.cwd(), ".next", file), "utf-8"),
          }}
        />
      ));
  }
}

export default class MyDocument extends Document {
  render() {
    return (
      <Html lang="en" dir="ltr">
        <InlineStylesHead>
          <meta name="theme-color" content="#ffcc66" />
        </InlineStylesHead>
        <body>
          <Main />
          <NextScript />
        </body>
      </Html>
    );
  }
}

This replaces Next’s document and overrides the head with a version that inlines a stylesheet. This wipes out the waterfall of requests needed to get that hero image and the fonts, makes the layout CSS available right away, and speeds up the page.

What’s fascinating about this process is that Next obscures what’s really happening on the page, because, frankly, that’s what we’re asking it to do. The price of hip new tools with a convenient setup is giving up control over what loads, and when. The price of React handling your components and re-renders is giving up control over what executes, and when.

5.

Would I use Next again? Definitely. If I actually had an app that wanted to be an SPA for a good reason, there’d be little reason not to. It’s a much saner way to bundle this stuff up.

Would I have considered this for the original invitation site? No, that would be ridiculous. It would have made the project harder with no observable benefit.

But for static sites? On the one hand, this took far longer than my rusty Gulp build tool would have and resulted in a heavier app than the vanilla version would be. On the other, I don’t have to maintain the tooling, and for a fun little project on a weekend, that’s a real benefit.