Creating social sharing images in Rust
2022-01-19
Introduction
This blog had languished without a new post for about a year. In the new year I decided to write more often. So I wrote my last post about Rust enums and then shared it on Twitter. This is what it looked like:
I tweeted and forgot about it. Then sometime later, in reply to someone else, I tweeted about the IntelliJ Rust plugin and it looked like this:
This made me wonder. Why was a cool image shown for the IntelliJ Rust plugin but a tweet about my blog post was just plain, boring text? This is a story about how I configured my blog to show such images when someone tweets a link about one of my blog posts.
Open Graph and Twitter cards
Many platforms automatically convert links into rich previews with images when people share them. This is true for Twitter, Facebook, LinkedIn and many others. When a link is shared, the sharing platform fetches the link's HTML and looks for an og:image meta tag similar to this:
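A minimal example of such a tag (the URL here is a placeholder):

```html
<meta property="og:image" content="https://example.com/images/my-post.png" />
```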
If an og:image meta tag is found, its content attribute is treated as a link to the image to be rendered in the shared post. The og:image tag is part of the Open Graph protocol, which is supported by many social platforms. There is another, Twitter-specific protocol called Twitter cards. Its image tag looks like this:
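A minimal example of the Twitter card image tag (again, the URL is a placeholder):

```html
<meta name="twitter:image" content="https://example.com/images/my-post.png" />
```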
You should probably implement both for your website. I will not go into too much depth about these protocols as you can read about them on their respective pages. What I will talk about, though, is how I implemented them for this blog.
Social sharing images for HashRust
Adding a bunch of meta tags for HashRust was easy. The difficult bit was creating the actual images for each of my blog posts. I could have used images from something like Unsplash but I decided against that for a couple of reasons. First, I didn't think generic photographs went really well with HashRust's aesthetics. And second, I didn't want to spend hours searching for that perfect image for each post.
I wanted to create images automatically with a repeatable and consistent process. I looked around to see if others had already solved this problem. Indeed, GitHub and IndieHackers do generate social sharing images automatically. I decided to do something similar for this blog. While my solution was inspired by GitHub and IndieHackers, it differed from them in a few ways.
First, GitHub and IndieHackers both have most of their content generated by their users. This forces them to have a service that can generate images on the fly. Think about what happens when someone creates a new repo on GitHub or a new post on IndieHackers: that new repo or post should have an accompanying image created automatically when someone shares it. This is not true for HashRust because I write all the content on this website. I don't need to generate the images dynamically; I can create them when I build my static website.
Second, both GitHub and IndieHackers use Puppeteer. Although Puppeteer is a very mature library and would have worked fine, I wanted something written in Rust. After all, HashRust is a blog about Rust. With a bit of googling I found the headless_chrome crate. This crate, while not even close to Puppeteer in features, was good enough for me.
Template design
Having made all the tech choices, I needed to decide on the look of the images. After two gruelling days my programmer artist brain came up with this in Inkscape:
This design needed to be converted into a webpage template. The HTML was straightforward, something like this (simplified; the ids are the ones the CSS and JavaScript refer to):

```html
<div id="container">
  <div id="title-container">
    <span id="title">Replace this text with the title</span>
  </div>
  <div id="subtitle-container">
    <span id="subtitle">Replace this text with the subtitle.</span>
  </div>
</div>
```
And the CSS looked something like the following (simplified; the exact fonts, colors and positions are placeholders):

```css
body {
  margin: 0;
  font-family: sans-serif;
}

#container {
  position: relative;
  width: 800px;  /* the image size is a constant 800x419 pixels */
  height: 419px;
  background: #1c1e26;
}

#title-container {
  position: absolute;
  left: 100px;
  top: 80px;
  width: 600px;  /* the title text must fit this 600x150 box */
  height: 150px;
}

#subtitle-container {
  position: absolute;
  left: 100px;
  top: 260px;
  width: 600px;
  height: 80px;
}

#title {
  color: #ffffff;
  font-weight: bold;
}

#subtitle {
  color: #cccccc;
}
```
You might have noticed in the CSS that I put the elements at absolute positions. This is fine because, unlike a responsive website, the size of the images I wanted was constant at 800×419 pixels.
The parts that varied, the #title and #subtitle text, needed a bit of JavaScript to keep them within their bounding boxes. For example, a longer title might overflow #title's dimensions because it has a fixed size of 600×150 pixels. Similarly, a shorter title might leave too much whitespace around it. The solution was to adjust the font-size property until the #title (and #subtitle) had a snug fit.
The resizeText JavaScript function computed the font-size for such a fit; it looked something like this:

```javascript
function isOverflown(element, parent) {
  return element.offsetWidth > parent.clientWidth ||
    element.offsetHeight > parent.clientHeight;
}

function resizeText(element, parent, minSize, maxSize) {
  let overflown = false;
  let i = minSize;
  // grow the font one pixel at a time until the text overflows its parent
  while (!overflown && i < maxSize) {
    element.style.fontSize = `${i}px`;
    overflown = isOverflown(element, parent);
    if (!overflown) {
      i++;
    }
  }
  // step back to the last size that still fit
  if (overflown) {
    element.style.fontSize = `${i - 1}px`;
  }
}
```
The resizeText function took an element containing the text, its parent element, and a minSize and maxSize giving the range of possible font-size values. It tried the values of font-size from minSize to maxSize until the text overflowed the parent. This technique was inspired by this post by Jan Küster on Dev.to. If you look at the above code and are itching to use binary search, remember a wise man's words about premature optimization.
Next I needed a function that called resizeText for both #title and #subtitle (the size ranges shown here are illustrative):

```javascript
function fitText() {
  resizeText(
    document.getElementById("title"),
    document.getElementById("title-container"),
    20, // minSize in pixels
    60  // maxSize in pixels
  );
  resizeText(
    document.getElementById("subtitle"),
    document.getElementById("subtitle-container"),
    14,
    32
  );
}
```
And finally, I needed a function that could replace the placeholder text in the template with actual title and subtitle text:
;
;
titleSpan.innerText = titleText;
subtitleSpan.innerText = subtitleText;
}
This completed the HTML/CSS/JS template. Now I had to write the Rust code to open this template in Chrome. The setText and fitText functions would be called from the Rust code before capturing an image, as you will see later.
Static file server
All that was left now was to open the template in a browser controlled by headless_chrome, update the title and subtitle, fit the text inside the bounding boxes, and capture the image.
Using the tokio, hyper and hyper_staticfile crates, I started a static file HTTP server in the folder where the HTML template was saved. This is the complete function, slightly simplified:

```rust
use std::net::SocketAddr;
use std::path::PathBuf;

use hyper::service::{make_service_fn, service_fn};
use hyper::Server;
use hyper_staticfile::Static;

pub async fn start_server(root_path: PathBuf, port: u16) {
    let address = SocketAddr::from(([127, 0, 0, 1], port));

    let make_service = make_service_fn(move |_conn| {
        let root = Static::new(root_path.clone());
        async move {
            Ok::<_, hyper::Error>(service_fn(move |req| handle_request(req, root.clone())))
        }
    });

    let server = Server::bind(&address).serve(make_service);

    if let Err(e) = server.await {
        eprintln!("static file server error: {}", e);
    }
}
```
Let's break down the start_server function. It takes two arguments, root_path and port:

```rust
pub async fn start_server(root_path: PathBuf, port: u16)
```
root_path is the folder that should be served by the static server, and port is where the static file server listens for HTTP connections. Next we create a localhost SocketAddr for port:

```rust
let address = SocketAddr::from(([127, 0, 0, 1], port));
```
And a service using make_service_fn:

```rust
let make_service = make_service_fn(move |_conn| {
    let root = Static::new(root_path.clone());
    async move {
        Ok::<_, hyper::Error>(service_fn(move |req| handle_request(req, root.clone())))
    }
});
```
As per hyper's docs, a service is an asynchronous function from a Request to a Response. In make_service_fn we create a Static instance from the hyper-staticfile crate and pass it to the handle_request function, which simply serves the files at root:

```rust
async fn handle_request(
    req: Request<Body>,
    root: Static,
) -> Result<Response<Body>, std::io::Error> {
    root.serve(req).await
}
```
Next we bind to the address created earlier and start serving requests using the service that was just created:

```rust
let server = Server::bind(&address).serve(make_service);
```
And finally we wait forever for the server to exit, printing an error if the server fails:

```rust
if let Err(e) = server.await {
    eprintln!("static file server error: {}", e);
}
```
With the server serving the HTML template, capturing the image was all that was left now.
Capturing the image
Next, I used headless_chrome to open a Chrome browser at the static server's URL, update #title and #subtitle, adjust their font-size, and capture the PNG image:
```rust
// Slightly simplified; the error messages are paraphrased.
use std::fs::write;
use std::path::{Path, PathBuf};

use headless_chrome::protocol::page::ScreenshotFormat;
use headless_chrome::{Browser, LaunchOptionsBuilder};

pub fn save_image(
    chrome_path: PathBuf,
    static_server_url: &str,
    title: &str,
    subtitle: &str,
    image_path: &Path,
) {
    let options = LaunchOptionsBuilder::default()
        .path(Some(chrome_path))
        .build()
        .expect("failed to build launch options");

    let browser = match Browser::new(options) {
        Ok(browser) => browser,
        Err(e) => {
            eprintln!("failed to launch browser: {}", e);
            return;
        }
    };

    let tab = match browser.wait_for_initial_tab() {
        Ok(tab) => tab,
        Err(e) => {
            eprintln!("failed to open a tab: {}", e);
            return;
        }
    };

    if let Err(e) = tab.navigate_to(static_server_url) {
        eprintln!("failed to navigate to {}: {}", static_server_url, e);
        return;
    }
    if let Err(e) = tab.wait_until_navigated() {
        eprintln!("navigation did not finish: {}", e);
        return;
    }

    // escape single quotes so they don't break the JS string literals
    let title = title.replace('\'', "\\'");
    let subtitle = subtitle.replace('\'', "\\'");
    let js_expr = format!(
        "setText('{}', '{}');
         fitText();",
        title, subtitle
    );
    if let Err(e) = tab.evaluate(&js_expr, true) {
        eprintln!("failed to run template JavaScript: {}", e);
        return;
    }

    let container = match tab.find_element("#container") {
        Ok(container) => container,
        Err(e) => {
            eprintln!("failed to find #container: {}", e);
            return;
        }
    };
    let viewport = match container.get_box_model() {
        Ok(box_model) => box_model.border_viewport(),
        Err(e) => {
            eprintln!("failed to get box model: {}", e);
            return;
        }
    };

    let png_data = match tab.capture_screenshot(ScreenshotFormat::PNG, Some(viewport), true) {
        Ok(png_data) => png_data,
        Err(e) => {
            eprintln!("failed to capture screenshot: {}", e);
            return;
        }
    };
    if let Err(e) = write(image_path, &png_data) {
        eprintln!("failed to save image: {}", e);
    }
}
```
Again, the save_image function is very long and needs to be discussed bit by bit. save_image accepts five arguments:

```rust
pub fn save_image(
    chrome_path: PathBuf,
    static_server_url: &str,
    title: &str,
    subtitle: &str,
    image_path: &Path,
)
```
chrome_path is the path of the Chrome browser's executable. This doesn't strictly have to be Chrome; any browser from its lineage should work. I tested with Brave, for example, and it worked. static_server_url is the URL on which the static file server is listening. title and subtitle contain the text to be substituted into the template. image_path is the path where the PNG image has to be saved.
Next we create the launch options using LaunchOptionsBuilder:

```rust
let options = LaunchOptionsBuilder::default()
    .path(Some(chrome_path))
    .build()
    .expect("failed to build launch options");
```
Here I set the path of the Chrome executable by calling the path method on the builder. If path is not called, headless_chrome will try to automatically detect a suitable Chrome binary. headless is another builder method that is useful for debugging: it is on by default, but you can turn it off if you want to see the browser UI.
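For illustration, this is a sketch (not the code I shipped) of the same builder with headless mode turned off:

```rust
// Illustrative sketch: disable headless mode while debugging so the
// browser UI is visible as the template loads and the text is resized.
let options = LaunchOptionsBuilder::default()
    .path(Some(chrome_path))
    .headless(false)
    .build()
    .expect("failed to build launch options");
```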
Then we launch the browser and wait for an empty tab to open:

```rust
let browser = match Browser::new(options) {
    Ok(browser) => browser,
    Err(e) => {
        eprintln!("failed to launch browser: {}", e);
        return;
    }
};

let tab = match browser.wait_for_initial_tab() {
    Ok(tab) => tab,
    Err(e) => {
        eprintln!("failed to open a tab: {}", e);
        return;
    }
};
```
Nothing much to explain here. Then we navigate to the static server URL and wait until it opens in the new tab:

```rust
if let Err(e) = tab.navigate_to(static_server_url) {
    eprintln!("failed to navigate to {}: {}", static_server_url, e);
    return;
}
if let Err(e) = tab.wait_until_navigated() {
    eprintln!("navigation did not finish: {}", e);
    return;
}
```
After that we call the setText and fitText JavaScript functions from the template defined earlier:

```rust
// escape single quotes so they don't break the JS string literals
let title = title.replace('\'', "\\'");
let subtitle = subtitle.replace('\'', "\\'");
let js_expr = format!(
    "setText('{}', '{}');
     fitText();",
    title, subtitle
);
if let Err(e) = tab.evaluate(&js_expr, true) {
    eprintln!("failed to run template JavaScript: {}", e);
    return;
}
```
Notice that single quotes in title and subtitle are escaped so that they don't interfere with the arguments to setText in the js_expr JavaScript expression. The evaluate method on tab will wait for js_expr to complete if its second argument is true. Now I was ready to capture the image, so I got the #container element and its viewport:
```rust
let container = match tab.find_element("#container") {
    Ok(container) => container,
    Err(e) => {
        eprintln!("failed to find #container: {}", e);
        return;
    }
};
let viewport = match container.get_box_model() {
    Ok(box_model) => box_model.border_viewport(),
    Err(e) => {
        eprintln!("failed to get box model: {}", e);
        return;
    }
};
```
And finally, captured the image and saved it to disk at the image_path location:

```rust
let png_data = match tab.capture_screenshot(ScreenshotFormat::PNG, Some(viewport), true) {
    Ok(png_data) => png_data,
    Err(e) => {
        eprintln!("failed to capture screenshot: {}", e);
        return;
    }
};
if let Err(e) = write(image_path, &png_data) {
    eprintln!("failed to save image: {}", e);
}
```
Phew.
And that is how I captured an image for one blog post. There was still work to do: parsing the frontmatter of each blog post to extract the title and the subtitle, and synchronizing the code that started the static file server with the code that captured the image. While what you see above covers the essence of the code, I omitted some details for the sake of clarity. If you are curious, you can take a look at the full code in its repository.
Conclusion
Twitter cards and Open Graph images are a great way to give your posts on social media a facelift. While the protocols themselves are quite simple, I still needed to create the images that are shown in the shared posts. In this post I showed you how I wrote a tool to automate the creation of images for sharing on social media. Feel free to view the source of this page and see the meta tags for yourself. Now when you share a link to this post on social media you will see a beautiful image rendered in the post. Go ahead, try it now 👇.