How I Made This Website
The code for this website is available in a GitHub repository, at https://github.com/alexyuly/alexyuly.com. I hope that this example can help and inspire others to build and deploy their own low-cost, highly customized blog.
The setup I describe here is strongly influenced by Maciej Radzikowski's piece, "Headless CMS with Gatsby on AWS for $0.00 per month", which presents a complete technical approach for making a blog using React components, paired with a CMS (Content Management System) for publishing articles.
First, I created a new Gatsby site in my local development environment, according to the Gatsby Quick Start guide. I used the default language of JavaScript, though TypeScript is supported; I can switch later if my site grows complex. TypeScript is a very helpful tool, but for this project, I wanted to see how far I could get with plain-old JavaScript. I may come to regret that choice. Making poor decisions and learning from them is part of the fun of software development (and life).
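For reference, the Quick Start boiled down to a few commands (the folder name depends on what you enter at the interactive prompts, and the exact steps may differ by Gatsby version):

npm init gatsby          # scaffold a new site, answering the interactive prompts
cd my-gatsby-site        # or whatever folder name you chose
npm run develop          # start the local development server at http://localhost:8000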
Then, I signed up for an account with Prismic. I have some qualms about hosting my content with a product that opaquely handles its storage and provisioning and only allows me to manage it through a website. However, I dig the design of their site, and it was free and easy to get started. Going for "free and easy" will probably come back to bite me at some point, especially if my site grows large. But for now, it enables me to focus on web design and content creation without getting bogged down in back-end concerns.
After setting up Gatsby and Prismic, I integrated my Gatsby site with the Prismic API. I followed Prismic's documentation on this process, which begins with installing the Gatsby Source Plugin for Prismic (and a couple of supporting dependencies):
npm install gatsby-source-prismic gatsby-plugin-image @prismicio/react
I created a new git-ignored file named .env.development for sensitive, environment-specific information, such as the name of my Prismic repo and my Prismic Custom Types API token. This token is used by Gatsby to determine the types of data that my Prismic repo provides. I created it via the Prismic website by navigating to Settings (the little gear icon in the bottom-left corner), clicking "API & Security" (in the lefthand sidebar, under the "Configuration" heading), clicking "Custom Types API", and scrolling down to "Generate a new token". The resulting .env.development file looks like this:
GATSBY_PRISMIC_REPO_NAME=your-prismic-repo-name
PRISMIC_CUSTOM_TYPES_API_TOKEN=your-prismic-api-token
(It's also possible, though not required, to create an API token for accessing Prismic content. Since I plan to use Prismic to host only content that will be publicly visible through my website, I'm not concerned about this, but if you have any content you want to keep private, then you should add one.)
I updated my gatsby-config.js file to load these environment variables into process.env, and to fetch types and content from the Prismic API during the Gatsby build process, like so:
require("dotenv").config({ path: `.env.${process.env.NODE_ENV}`, }); module.exports = { // Other Gatsby configuration goes here... plugins: [ // Other plugins go here... { resolve: "gatsby-source-prismic", options: { repositoryName: process.env.GATSBY_PRISMIC_REPO_NAME, customTypesApiToken: process.env.PRISMIC_CUSTOM_TYPES_API_TOKEN, }, }, ], };
This configuration enables me to use Gatsby's built-in GraphQL interface to access content from Prismic. Each Gatsby webpage is defined by a file in the src/pages/ folder, which exports a GraphQL query and a React component and looks something like this:
import React from "react"; import { graphql } from "gatsby"; export const query = graphql` # Your GraphQL query goes here... # Gatsby will pass the results into your React component's data prop. `; const MyPage = ({ data }) => { // Your React rendering code goes here... }; export default MyPage;
Also, I chose to use styled-components to handle CSS, because it integrates seamlessly with Gatsby, it allows me to write native CSS syntax, and I can share variables with my JavaScript code. So far, I'm happy with it. (There are many other options to consider for CSS, including plain-old CSS files, inline styles with the React style prop, and a plethora of community packages.)
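Here's a minimal sketch of the kind of thing I mean by sharing variables (the component and variable names are hypothetical, not lifted from my actual code):

import React from "react";
import styled from "styled-components";

// A plain JavaScript value that both the CSS and the JSX can reference.
const accentColor = "#0a7d5c";

const Title = styled.h1`
  color: ${accentColor};
  font-size: 2rem;
`;

const PageHeading = ({ children }) => <Title>{children}</Title>;

export default PageHeading;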
In addition to the src/pages/ folder, I created a src/components/ folder to start chipping away at my design system, that is, a collection of reusable React components concerned purely with presentation. My first two components are Layout and BlogPost. Currently, I have one page, which imports these components, specifies a GraphQL query, and feeds data into the components to render itself.
To define the type of data for each blog post, I used the Prismic website to create a new custom type, blog post, with a uid, a title, an author (which forms a relationship with another custom type), and some content. Each type automatically comes with fields for "first published date" and "last published date". My content field is a rich text field, so I can edit the content for each blog post in Prismic's rich text editor without writing any code. The Prismic API provides this rich text as a JSON structure which I render in Gatsby using the PrismicRichText component from the @prismicio/react package.
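Here's a rough sketch of what that rendering looks like in a component (the prop shapes depend on the GraphQL query, so treat the field names as illustrative):

import React from "react";
import { PrismicRichText } from "@prismicio/react";

// Renders one blog post; `title` is plain text and `content` is the
// rich text JSON structure passed down from the page's GraphQL data.
const BlogPost = ({ title, content }) => (
  <article>
    <h2>{title}</h2>
    <PrismicRichText field={content} />
  </article>
);

export default BlogPost;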
To write a GraphQL query to fetch the data for all of my blog posts, I ran my local Gatsby development server and navigated to http://localhost:8000/___graphql, which presents the "GraphiQL" GUI for exploring and testing the data that Gatsby sources from Prismic. With a bit of trial and error writing and running queries, I produced one that delivered the data I wanted, and I copied it into my Gatsby page component's file.
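The query I ended up with is shaped roughly like this (a sketch rather than my exact production query; the available fields depend on the custom type and the plugin version):

export const query = graphql`
  query BlogPosts {
    allPrismicBlogPost {
      nodes {
        uid
        first_publication_date
        data {
          title {
            text
          }
          content {
            richText
          }
        }
      }
    }
  }
`;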
At this point, I had a complete website working in my local development environment: Gatsby was rendering my React components using data fetched from my Prismic repository. The next challenge was to deploy it to the web for everyone to see!
I had previously purchased a domain name (alexyuly.com) from GoDaddy, so I decided to keep them as my registrar, but to set up AWS (Amazon Web Services) to handle DNS concerns for my domain. AWS provides DNS management through its Route 53 interface. From there, I created a new public hosted zone for alexyuly.com, and then I added a couple of CNAME records to match what was already specified in my GoDaddy DNS records, namely mapping "_domainconnect.alexyuly.com" to "_domainconnect.gd.domaincontrol.com." and "www.alexyuly.com" to "alexyuly.com.". Finally, I updated my domain's GoDaddy DNS settings to use custom nameservers that match the NS records that Route 53 created automatically within my new hosted zone.
So, GoDaddy was routing my domain name to AWS, but I hadn't yet configured the content that AWS should serve up to visitors. To do this, I created an S3 bucket (to host my site's static files), a CloudFront distribution (to help cache and serve files from S3), and an SSL certificate (to enable the https protocol on my website).
Creating an S3 bucket was easy. The only catch was that I had to enable public access for my bucket, which S3 disabled by default during the creation flow, warning me against it with a big, bold message. However, the message specifically includes this exception: "AWS recommends that you turn on block all public access, unless public access is required for specific and verified use cases such as static website hosting." (emphasis mine) Seeing as I am indeed hosting a static website here, and I want it to be visible to the world, I'm good to go.
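For completeness: making a bucket's files publicly readable typically also involves a bucket policy along these lines. This is the standard static-hosting pattern rather than a copy of my exact setup; substitute your own bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-s3-bucket-name/*"
    }
  ]
}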
Creating an SSL certificate was also surprisingly simple using the AWS Certificate Manager. I used the default public certificate option, entered my domain name, and kept the default "DNS validation" method. I used *.alexyuly.com (note the asterisk) for my domain name, so that the certificate would be valid on both https://alexyuly.com and https://www.alexyuly.com. The validation process took about a half hour.
Creating a CloudFront distribution was the final step of my AWS setup. In the "Create distribution" flow, I selected my S3 bucket's domain as the "Origin domain" for my distribution. (Focusing the "Origin domain" input field automatically displays a list of available S3 bucket domains.) I also selected my SSL certificate as the "Custom SSL certificate" for my distribution. I added both alexyuly.com and *.alexyuly.com as "Alternate domain names" too. I kept all of the other default settings. Once created, I copied my CloudFront distribution's domain name by clicking its ID within the list of distributions and then clicking the copy icon under the "Distribution domain name" field. Finally, I went back to Route 53 and added an A record (an alias record, in Route 53 terms) pointing to this domain name. Now, visitors to alexyuly.com get content served by CloudFront, which pulls my static site's files from S3 and provides an SSL certificate to enable https connections.
At that point, I needed to have Gatsby build my static site for production and upload the content to S3. This certainly could be done manually via the S3 website on AWS. However, it'd be a pain to do this every time I update my site's code or publish new content, and if I ever welcomed non-technical users to contribute to my blog, it'd be nearly impossible.
As I just suggested, there are two scenarios where I'd like to build and deploy my site: (1) when I update the code, and (2) when I update the content. I had already taken care of the build part: each Gatsby website comes pre-packaged with an NPM script to do this via npm run build. The deploy part is quite easy too. There's a Gatsby plugin, gatsby-plugin-s3, which encapsulates the details of this process. I simply installed it and added the following to my gatsby-config.js file's plugins array:
{ resolve: "gatsby-plugin-s3", options: { bucketName: "your-s3-bucket-name", }, }
Then, I added an NPM script to my package.json file to fire up the deployment:
"deploy": "gatsby-plugin-s3 deploy --yes"
Once AWS credentials are present in the current environment, running npm run deploy uploads my built Gatsby site directly to the specified S3 bucket, making it immediately available via my domain name.
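A local deploy, for example, looks something like this (the credential values are placeholders, and gatsby-plugin-s3 reads them through the AWS SDK's standard credential chain):

export AWS_ACCESS_KEY_ID=your-access-key-id          # placeholder
export AWS_SECRET_ACCESS_KEY=your-secret-access-key  # placeholder
npm run build    # generate the static site into the public/ folder
npm run deploy   # sync the public/ folder to the S3 bucket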
The final piece of the puzzle was to set up GitHub Actions as a CI (Continuous Integration) pipeline to respond to events and kick off my build and deploy process. GitHub Actions revolves around a concept called "workflows", and each workflow is defined by a YAML file within a repository's .github/workflows/ folder. A workflow is run when certain events occur, specified in the on section of its YAML file. In my alexyuly.com repository's deploy workflow, the on section specifies that it should run when (1) a push occurs to the repository's master branch, or (2) a repository_dispatch occurs, which is activated via the GitHub API.
When my deploy workflow runs, it first checks out the repository, and then it configures my AWS credentials, in order for gatsby-plugin-s3 to work when npm run deploy is called. My AWS Access Key ID and Secret Access Key are stored as secrets within my repository's settings. I added them by navigating to Settings, then clicking "Secrets" (under the "Security" section in the lefthand sidebar), clicking "Actions", and clicking "New repository secret".
Then, my deploy workflow sets up Node.js, runs npm ci to install my package dependencies, creates an .env.production file with my GATSBY_PRISMIC_REPO_NAME and PRISMIC_CUSTOM_TYPES_API_TOKEN environment variables (whose values are also stored as secrets), and finally runs npm run build and npm run deploy.
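Putting those pieces together, the workflow file looks roughly like the following. This is a simplified sketch rather than the exact file in my repository; the action versions, the AWS region, and the secret names are illustrative. (The repository_dispatch type here matches the event_type my webhook handler sends, described below.)

name: Deploy

on:
  push:
    branches: [master]
  repository_dispatch:
    types: [prismic_update]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci
      - run: |
          echo "GATSBY_PRISMIC_REPO_NAME=${{ secrets.GATSBY_PRISMIC_REPO_NAME }}" >> .env.production
          echo "PRISMIC_CUSTOM_TYPES_API_TOKEN=${{ secrets.PRISMIC_CUSTOM_TYPES_API_TOKEN }}" >> .env.production
      - run: npm run build
      - run: npm run deploy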
I had to do a little more work to connect Prismic to GitHub Actions, so that each update to my Prismic repo would activate a repository_dispatch event to run my deploy workflow. The Prismic website has a Webhooks feature on its Settings page, which can be configured to send a POST request to a given URL each time content is updated. It turns out that repository_dispatch can be activated via a POST API endpoint, namely https://api.github.com/repos/alexyuly/alexyuly.com/dispatches. However, I couldn't provide this URL to a Prismic webhook directly, because GitHub requires each request to carry a JSON body with a property named event_type, to indicate the source of the repository_dispatch, and Prismic doesn't let me customize the body of its webhook requests. So, in order to get Prismic to talk to GitHub, I created a function in AWS Lambda to handle the translation process. I provided the endpoint URL of my Lambda function to my Prismic webhook, and that function calls the GitHub Actions API for each event it receives.
The code for my Lambda function is as follows:
// Adapted from https://stackoverflow.com/a/50891354
const https = require('https');

function httpsPost({ body, ...options }) {
  return new Promise((resolve, reject) => {
    const req = https.request({
      method: 'POST',
      ...options,
    }, res => {
      const chunks = [];
      res.on('data', data => chunks.push(data));
      res.on('end', () => {
        let resBody = Buffer.concat(chunks);
        switch (res.headers['content-type']) {
          case 'application/json':
            resBody = JSON.parse(resBody);
            break;
        }
        resolve(resBody);
      });
    });
    req.on('error', reject);
    if (body) {
      req.write(body);
    }
    req.end();
  });
}

// Adapted from https://betterdev.blog/gatsby-website-with-headless-cms-on-aws/
exports.handler = async (event) => {
  const body = JSON.parse(event.body || '{}');
  if (body.secret !== process.env.PRISMIC_SECRET) {
    return {
      statusCode: 403,
    };
  }
  const response = await httpsPost({
    hostname: 'api.github.com',
    path: `/repos/${process.env.GITHUB_USER}/${process.env.GITHUB_REPO}/dispatches`,
    headers: {
      'Authorization': `token ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
      'User-Agent': 'Node.js',
    },
    body: JSON.stringify({
      event_type: 'prismic_update',
    })
  });
  console.log('GitHub response', response.toString('utf8'));
  return {};
};
As you can see, I borrowed a lot of code from a couple of sources: (1) a StackOverflow post explaining how to send an HTTPS POST request in Node.js, and (2) Maciej Radzikowski's article linked at the beginning of my blog post.
I configured environment variables within Lambda to store my Prismic webhook's secret, and my GitHub username, repository name, and API token, which are accessible within process.env in the Lambda function's code.
And that's it! Now, each time I push code changes to my GitHub repository or publish content via Prismic, Gatsby rebuilds my website and deploys it to Amazon S3, and alexyuly.com becomes up-to-date once again.
I'm sure there are some gaps in my explanation of how to set everything up here, as well as missing references to other articles and posts online that helped me along the way. I'd like to add a commenting feature to my website soon, so that readers can call out the gaps and I can update this article accordingly. In the meantime, please take whatever knowledge you can gain from what I've shared here, and go forth and build your own custom blog. Cheers!