Eleventy and Craft

I love Eleventy, and I love it even more when I can split code from content.

Earlier this summer, I switched this site from Jekyll to Eleventy. Jekyll was fine, but I'm not a Ruby developer, so when I ran into issues or wanted to extend functionality a little bit, I was stuck. Not that big of a deal, because I didn't need that much extra functionality, but then I ran into build errors. Suddenly Ruby, or Jekyll, or Bundler, or something was behind or ahead by a couple versions, and I couldn't get anything to work anymore. Not only could I not add functionality, now I couldn't manage my site at all.

"BUT VINCE," you might be saying, "THIS IS A GREAT OPPORTUNITY TO LEARN RUBY. EMBRACE IT."

Learn Ruby? In THIS economy?!

Instead, I looked into this Eleventy SSG (static site generator) I'd heard everyone talking about. A few hours later, I was up and running. Surely, this would help encourage me to write more, like I'd promised myself so many times before.

A few months pass, and nothing happens.

It took me that long to pinpoint my issue: everything I considered kept telling me I needed to separate code from content—a radical idea, I know. So now, this site is generated by Eleventy using Craft CMS, and it's (mostly) found on GitHub. (And this is the first blog post I've published using this method. NEAT.)

I did this by separating the build process into three pieces:

  1. The JSON API
  2. Eleventy's configuration and templates
  3. A "smarter" build process

The JSON API

Craft's Element API plugin makes it easy to define some endpoints, tell it what content to include, and even do a little finessing of things before it is sent back to my Eleventy build script.

Since every site is special and unique, I won't get into my endpoints' specifics, but assuming you've installed Craft and the Element API plugin, and created a channel-type section named "Blog" with fields named summary and blogPost, your element-api.php config file would look something like this:

<?php

use craft\elements\Entry;

return [
  'endpoints' => [
    'entries.json' => function () {
      return [
        'elementType' => Entry::class,
        'resourceKey' => 'entries',
        'paginate' => false,
        'criteria' => [
          'section' => 'blog',
          'orderBy' => 'postDate asc'
        ],
        'transformer' => function(Entry $entry) {
          return [
            'layout' => 'blog-post',
            'title' => $entry->title,
            'id' => $entry->id,
            'date' => $entry->postDate->format('Y-m-d'),
            'published_at' => $entry->postDate->format(\DateTime::ATOM),
            'permalink' => $entry->uri.'/',
            'url' => $entry->uri.'/',
            'description' => $entry->summary,
            'post' => $entry->blogPost,
          ];
        }
      ];
    }
  ]
];

If you were to visit example.com/entries.json, you'd receive something like this:

{
  "entries": [
    {
      "layout": "blog-post",
      "title": "An exciting blog post",
      "id": "10019",
      "date": "2019-09-09",
      "published_at": "2019-09-09T10:58:00-07:00",
      "permalink": "blog/an-exciting-blog-post/",
      "url": "blog/an-exciting-blog-post/",
      "description": "This is a blog post on my new site.",
      "post": "Infographic infrastructure business plan stock metrics iteration iPad client equity influencer value proposition startup traction termsheet. Holy grail paradigm shift bootstrapping interaction design. Product management non-disclosure agreement gen-z startup innovator stock focus leverage. Stealth buzz iPad seed round virality ramen.\n\nPivot product management ownership investor marketing launch party mass market incubator early adopters analytics buzz first mover advantage infrastructure interaction design. Creative focus innovator partner network client pitch business plan. Angel investor innovator network effects return on investment infographic. Early adopters A/B testing analytics equity agile development buyer research & development supply chain partner network growth hacking.\n\nIncubator seed round A/B testing mass market market bootstrapping partnership influencer branding vesting period series A financing social media backing. Assets lean startup leverage mass market direct mailing network effects. Beta responsive web design partnership paradigm shift ramen virality accelerator business-to-consumer ecosystem mass market launch party. Bandwidth incubator first mover advantage business-to-consumer marketing long tail pitch product management freemium channels."
    },
    ...
  ]
}

While I was working on this, Craft released version 3.3, which ships with support for a headless mode and a built-in GraphQL API, no Element API plugin needed. I was nearly finished with this process when v3.3 landed, and since a GraphQL API was a bit overpowered for my purposes, I decided not to backtrack and redo things using the new built-in features.

You do you, though. I believe in you and all of your wildest dreams.

Eleventy's configuration and templates

If you're interested in Eleventy's general configuration, I refer you to its wonderful documentation. There's a lot in there, but it's well organized, and pretty accessible to all JavaScript skill levels.

I need Eleventy to consume and build pages from this JSON API, so I decided to use fetch, via the node-fetch package (because it's the API I'm most familiar with, but Node has several options for making HTTP requests; follow your heart on this one):

eleventyConfig.addCollection('blogPosts', async function(collection) {
    collection = await fetch(process.env.BLOG_ENDPOINT).then(resp => {
      if (resp.ok) return resp.json();
      throw new Error('network error');
    }).then(resp => {
      return resp.entries;
    }).catch(err => console.log(err));

    return collection;
});

You may have noticed a second, seemingly unnecessary .then() in the chain. That's a shortcut: by default, the Element API returns a JSON object with two root properties, data and meta, the latter containing pagination data, which is of no use to me here. Since I set resourceKey to entries, I unwrap that property directly.

What to do now that Eleventy is retrieving the JSON? Make it usable. Building on the above code:

eleventyConfig.addCollection('blogPosts', async function(collection) {
    collection = await fetch(process.env.BLOG_ENDPOINT).then(resp => {
      if (resp.ok) return resp.json();
      throw new Error('network error');
    }).then(resp => {
      return resp.entries;
    }).catch(err => console.log(err));

    // mutate each post in place: real Date objects and pre-rendered Markdown
    collection.forEach(post => {
      post.published_at = new Date(post.published_at);
      post.parsed = md(post.post);
    });

    return collection;
});

At this point, I iterate over every entry returned by the API, converting each entry's published_at value to a proper JavaScript Date object, and converting the post's Markdown here rather than in a template.

Hooray, now Eleventy has a collection named "blogPosts" that it can work with as if it were a set of Markdown files in its src folder. In my setup, I have a template named blog-posts.html with the following front-matter:

---
pagination:
  data: collections.blogPosts
  size: 1
  alias: post
  addAllPagesToCollections: true
permalink: '{{ post.url }}'
layout: base
---

Again, I'll defer to Eleventy's documentation on pagination for specifics, but this front-matter tells Eleventy to use the collection we added from the API (collections.blogPosts) and treat it as if the data were coming from files in src, regardless of which templating engine you choose.
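The body of that template can then use the collection's fields directly. A minimal sketch, assuming Nunjucks-style templating (the markup is illustrative, not my actual layout):

```html
<article>
  <h1>{{ post.title }}</h1>
  <time datetime="{{ post.published_at }}">{{ post.date }}</time>
  {# post.parsed holds the pre-rendered Markdown, so mark it safe #}
  {{ post.parsed | safe }}
</article>
```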

A "smarter" build process

Here is where I got hung up for a while. My goal was automating the Eleventy build, but I also didn't want to run Eleventy if I didn't need to. Out of the box, Eleventy doesn't provide a way to conditionally build—and really, why would it? I knew I could detect changes in content simply by hashing the API response, and running the build step if there had been a change, so I set out to write a cron job to handle this.

Because that makes sense, right?

Sure... and then it didn't work. Things executed, but the errors piled up, mostly of the "THIS VARIABLE IS UNDEFINED" variety. So I wrote some more code to account for how cron works with Node [1].

But you know what's fun about researching "Node" and "cron"? You don't find a ton of information about running Node scripts via cron; instead, you get a lot of information about node-cron, a Node package that lets you schedule tasks in a Node app using the same syntax as cron (more or less). I tossed this information to the side because I didn't want something like cron, I wanted to run a Node script via cron, dammit. Give me THAT knowledge, plz, google.

Dear reader, I should've taken the hint.

So here's my new and improved, node-cron-powered scheduled-build.js:

require('dotenv').config();
const cron = require('node-cron');
const shell = require('shelljs');
const fs = require('fs');
const { DateTime } = require('luxon');
const fetch = require('node-fetch');
const hash = require('object-hash');

cron.schedule('*/15 * * * *', async () => {
    // read the lock and timestamp per run, so they reflect this run
    // rather than the moment the script first started
    const buildLock = (fs.existsSync('build.lock') ? fs.readFileSync('build.lock', 'utf8') : null);
    const buildTime = DateTime.local().toLocaleString(DateTime.DATETIME_MED);
    let results = null;
    let entriesHash = null;

    // fetch the entries (in JSON) from the CMS for comparison purposes.
    // i send the CMS a header that contains the build environment, because i use it
    // to determine whether to include drafts/pending entries or leave them out.
    // this is really helpful when i'm using certain markup for the first time and need
    // to work through the styling (ex: tables, figures, etc)
    const apiResponse = await fetch(process.env.BUILD_BLOG_ENDPOINT, {
        headers: {
            'x-build-environment': process.env.BUILD_ENVIRONMENT
        }
    }).then(resp => {
        if (resp.ok) return resp.json();
        throw new Error('network error');
    }).catch(err => {
        // node-fetch system errors carry type/errno/code; the Error thrown above does not
        return err.type
            ? `${err.type.toUpperCase()} ERROR (#${err.errno}: ${err.code}): ${err.message}`
            : `ERROR: ${err.message}`;
    });

    if (typeof apiResponse === 'object') {
        // if the fetch was successful, i'll have a JSON object, so do the build stuff
        entriesHash = hash(apiResponse);

        if (entriesHash !== buildLock) {
            // shelljs runs this synchronously, so the build finishes before we move on
            shell.exec('npm run-script build');

            // update/write the build lock
            fs.writeFileSync('build.lock', entriesHash);

            results = 'build script ran';
        } else {
            results = 'no build, lock matched response';
        }
    } else {
        // the fetch was not successful
        results = apiResponse;
    }

    // output the events of the quarter-hour
    console.group(`build for ${buildTime}`);
        console.log(`build lock: ${buildLock}`);
        console.log(`entries hash: ${entriesHash}`);
        console.group('Settings');
            if (process.getuid) console.log(`Current uid: ${process.getuid()}`);
            // manually adding/removing dotenv variables from this output was annoying as hell
            Object.keys(process.env).forEach(key => {
                if (key.substr(0, 6) === 'BUILD_') console.log(`${key}: ${process.env[key]}`);
            });
        console.groupEnd();
        console.group('Result');
            console.log(results);
        console.groupEnd();
    console.groupEnd();
    console.log('========================================================================');
});

This works, and after scratching my head for a while, it's a relief. But scheduling tasks via node-cron only works if the script itself is running.

Hello, PM2. PM2 keeps things running without my involvement, AND it solves the issue of logging build results. HOORAY.
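PM2's setup can be as simple as pointing it at the script. A minimal sketch of an ecosystem file (the file name and log paths are assumptions, not my exact setup):

```javascript
// ecosystem.config.js — a hypothetical PM2 config for the scheduler;
// PM2 restarts the process if it dies and captures its console output
module.exports = {
  apps: [{
    name: 'scheduled-build',
    script: './scheduled-build.js',
    out_file: './logs/build.log',
    error_file: './logs/build-error.log'
  }]
};
```

With that in place, pm2 start ecosystem.config.js keeps the scheduler alive, and pm2 logs shows the quarter-hourly output.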

That's it. That's the post.

I'm pretty happy with how things turned out, but there are a couple of gotchas:

  1. The scheduled build process runs both Eleventy and Gulp, but only if Eleventy's content has changed. If I make some CSS or JavaScript changes, I still need to run the build script locally and upload the new CSS files manually. It would be nice to have the script check for changes relevant only to Gulp, too.
  2. The process does not have any support for deleting posts. If I take down a post, Eleventy sees that something has changed, but there isn't anything in place to tell it "hey, build the site, but also delete these things, too." Not a deal-breaker, but also not critical to me right now.
  3. Closely related to (2), I noticed that Craft hid a couple of entries from the API response (they weren't visible in the control panel, either!) that only reappeared after I ran an update on the CMS. I don't know what happened, but if I fix gotcha #2 and this happens again, it means content disappears from the site without me knowing it. This is part of the reason I added as much logging as I did.

Footnotes

  1. I've still not found "official" documentation about this (which I would love to read, if anyone out there has it!), but from what I've gathered, the gist is this: you can run Node scripts via cron, but even if you set the working directory, and pass that down to any exec commands you run, Node will eventually not be able to find the contents of node_modules and things will break.
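For anyone who wants to try the plain-cron route anyway, the usual workaround is making the environment explicit in the crontab entry itself, since cron runs with a nearly empty environment (the paths here are hypothetical):

```shell
# run the build check every 15 minutes; cd into the project first and
# use an absolute path to the node binary, because cron's PATH is minimal
*/15 * * * * cd /home/vince/site && /usr/local/bin/node scheduled-build.js >> build.log 2>&1
```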