This post consists of questions and answers about this website.

Le Penseur by Auguste Rodin; Photo by Joe deSousa, Licensed Under CC0

How was this website created?

I am not a web developer, but I admire competently built websites. One hallmark of competence is responsive web design: the site looks great on both desktop and mobile. In early 2018, I was hankering to make a blog and website of my own, but responsive web design and other advanced web-development concepts were beyond my ken. When I saw Jesse Squires's post on how he made his blog and website, I was happy to fork his repo. I encourage readers to do the same, but please remove his branding, a process demonstrated in this pull request.

Like Mr. Squires, I use Jekyll to generate my website, but I host mine on AWS S3.

Why not use a simpler solution like Squarespace?

I’ve used Squarespace. It’s a great product. But I like the flexibility of the Squires solution and the ability to write posts in Markdown.

Why host on AWS S3?

AWS is increasingly a software-industry standard. My company uses it extensively. I am a software developer. I therefore wanted to learn about AWS by using it to host my website.

Are there any drawbacks to S3?

Yes, there are two.

First, S3 has no built-in analytics whatsoever. By “analytics” I mean the ability to answer the question of how many people read a particular post. The closest thing to analytics that S3 has is logging. When a reader visits my website and views a page, S3 generates a log file. S3 has generated 110,066 log files for my website in the past nine months. Notwithstanding the absence of analytics from S3, I am able to calculate that these log files record 57,509 reads of blog posts. I do this by periodically using the AWS Command Line Interface to sync the log files on S3 to my MacBook Pro. Here is the command:

aws s3 sync s3:// ~/website/logs

To get the read count of, for example, my post on unit testing, I pass its URL slug, unit-testing, to the following shell script:

for PATTERN in "$@"; do
    find . -name "201*" -exec cat {} + | grep -ic "$PATTERN"   # All log files start with "201".
done
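As a self-contained sketch of the same counting idea (the file names, log contents, and the unit-testing slug below are made up for illustration), the pipeline concatenates every log file and counts the lines matching the slug:

```shell
# Scratch directory holding two fake access-log files whose names
# start with "201", mimicking the real logs (contents are made up).
dir=$(mktemp -d)
printf 'GET /blog/unit-testing/ 200\nGET /blog/other-post/ 200\n' > "$dir/2018-06-01-aaa"
printf 'GET /blog/unit-testing/ 200\n' > "$dir/2018-06-02-bbb"

# Same pipeline as the script above: concatenate all the log files,
# then count the lines mentioning the slug, case-insensitively.
count=$(find "$dir" -name "201*" -exec cat {} + | grep -ic unit-testing)
echo "$count"   # prints 2

rm -rf "$dir"
```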

Second, unlike Squarespace, S3 lacks cost certainty. S3’s free tier is generous, but I have no idea how many people will visit my website each month or how much money, if any, I will owe AWS. That said, my total AWS bill since my first post on April 8, 2018 is $60.93, which compares favorably with Squarespace. If my website ever becomes wildly popular, causing my AWS bill to skyrocket, I will consider either monetizing the blog or moving it to a cost-certain host like Squarespace.

Any regrets?

I am, for the most part, pleased with the implementation of my handsome, flexible, and Markdown-friendly website.

The one problem I have encountered is that Jekyll incorrectly generates URLs in my RSS files. As a result, RSS readers have been seeing URLs like http://localhost:4001/blog/programmatic-layout/ rather than URLs pointing at my real domain. As an interim fix, I run the following command before uploading my website to S3:

find . -type f -name '*.xml' -exec sed -i '' s/localhost:4001/ {} +
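Here is that sed rewrite as a self-contained sketch, with example.com standing in for my real domain (GNU sed takes a bare -i; macOS sed takes -i ''):

```shell
# Work in a scratch directory so only the sample feed is touched.
dir=$(mktemp -d)
echo '<link>http://localhost:4001/blog/programmatic-layout/</link>' > "$dir/feed.xml"

# Rewrite localhost URLs in every .xml file; example.com is a
# stand-in for the real domain. (Use `sed -i ''` on macOS.)
find "$dir" -type f -name '*.xml' -exec sed -i 's|localhost:4001|example.com|g' {} +

fixed=$(cat "$dir/feed.xml")
echo "$fixed"   # <link>http://example.com/blog/programmatic-layout/</link>
rm -rf "$dir"
```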

The genesis of this post was, in part, my desire to test this fix. I am working on a cleaner fix.


Update: Mr. Squires kindly diagnosed the localhost problem as an error in my Jekyll workflow. Problem solved.
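For readers who hit the same symptom: I haven’t spelled out the workflow error here, but a common cause is deploying the _site directory produced by jekyll serve, which overrides the url: setting in _config.yml with a localhost address for local preview. A build step along these lines avoids it (bucket name omitted, as above):

```shell
# jekyll build honors the url: value in _config.yml, whereas
# jekyll serve rewrites site.url to localhost for local preview.
JEKYLL_ENV=production bundle exec jekyll build

# Upload the freshly built site (bucket name omitted, as above).
aws s3 sync _site s3://
```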