


AWS + JavaScript + WordPress = Fun Content Automation Strategies Using Artificial Intelligence
Months ago, I began collaborating on a project built around AI-generated content for a client in the tech sector. My role mostly involved setting up static site generation (SSG) with WordPress as a headless CMS for a Nuxt front end.
The client used to write articles a couple of times per week about trends and events affecting the sector, hoping to increase traffic to the site. To raise his output of articles, he decided to use AI to generate them for him.
After some time and with the right prompts, the client was producing pieces that were close to an exact match for a human-written article; it was genuinely difficult to spot that they were machine-made.
Some time after I had moved on to other features, I kept getting asked one specific thing:
Hey, can you update the featured image for this article?
After two weeks of updating posts daily, I had a small eureka moment:
Why don't I automate the featured image generation for these articles using Artificial Intelligence?
We had already automated post writing, so why not automate the featured images too?
In my free time I had been experimenting with generative LLMs on my own machine, so I had a solid idea of how to tackle this side quest. I sent the client a message detailing the problem, what I wanted to do, and what the advantages would be. Without needing to do any convincing, I got the green light to work on this feature and went straight to my first step.
1. Architecting how the solution is going to look.
Given that I had some exposure to running models locally, I knew right away that self-hosting those models was not feasible. With that ruled out, I started playing around with APIs that generate images from text prompts.
Featured images consisted of two parts: the main composed graphic and a catchy tagline.
The composed graphic would be a few elements related to the article, arranged nicely, with colors and textures layered on top using blend modes to achieve some fancy effects in line with the branding.
Taglines were short, 8-12 word sentences with a simple drop shadow under them.
Based on my testing, I realized that pursuing the AI route for image generation wasn't practical: the image quality didn't meet expectations, and generation was too slow to justify, especially since this would run as an AWS Lambda function, where execution time directly impacts cost.
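To put the cost concern in numbers, here is a rough back-of-the-envelope model (just a sketch; the per-GB-second rate below is illustrative, so check current AWS Lambda pricing):
// cost ≈ invocations × duration (s) × memory (GB) × price per GB-second
const estimateLambdaCost = (invocations, durationSec, memoryGb, pricePerGbSec = 0.0000166667) =>
  invocations * durationSec * memoryGb * pricePerGbSec;

// e.g. 100 posts a month: ~30 s of AI image generation vs. ~3 s of canvas work, both at 1 GB
console.log(estimateLambdaCost(100, 30, 1)); // ≈ $0.05
console.log(estimateLambdaCost(100, 3, 1));  // ≈ $0.005
Execution time scales the bill linearly, so a slow generation step multiplies the cost of every single invocation.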
With that discarded, I went with Plan B: mashing images and design assets together using JavaScript's Canvas API.
Taking a closer look, we mainly had 5 styles of simple posts and around 4 types of textures, and 3 of those post styles used the same text alignment, style, and position. After doing some math I thought:
Hmm, if I take these 3 images, grab 8 textures, and play with blend modes, I can get around 24 post variations.
Given that those 3 types of posts shared the same text style, they were practically one template.
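As a minimal sketch of that layering idea, assuming the node-canvas package used later in the Lambda (the file names and the blend mode here are placeholders, not the real assets):
import { createCanvas, loadImage } from "canvas";

const canvas = createCanvas(1118, 806);
const ctx = canvas.getContext("2d");

const base = await loadImage("./images/template-1.jpg");     // one of the 3 base templates
const texture = await loadImage("./textures/texture-4.jpg"); // one of the 8 textures

ctx.drawImage(base, 0, 0, canvas.width, canvas.height);
ctx.globalCompositeOperation = "multiply"; // blend mode applied to the texture layer
ctx.globalAlpha = 0.6;
ctx.drawImage(texture, 0, 0, canvas.width, canvas.height);
ctx.globalCompositeOperation = "source-over"; // reset before drawing text
ctx.globalAlpha = 1;
Swapping the template, the texture, and the blend mode is what multiplies those 3 × 8 combinations into enough visual variety.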
With that settled, I moved on to the Tagline Generator. I wanted to create a tagline based on the content and title of the article. I decided to use ChatGPT's API, given that the company was already paying for it, and after some experimenting and prompt tweaking I had a very good MVP of my tagline generator.
With the 2 hardest parts of the task figured out, I spent some time in Figma putting together the diagram for the final architecture of my service.
2. Coding my Lambda
The plan was to create a Lambda function capable of analyzing post content, generating a tagline, and assembling a featured image—all seamlessly integrated with WordPress.
I will provide some code, but just enough to communicate the overall idea.
Analyzing the content
The Lambda function starts by extracting the necessary parameters from the incoming event payload:
const { title: request_title, content, backend, app_password} = JSON.parse(event.body);
- title and content: These provide the article’s context.
- backend: The WordPress backend URL for image uploads.
- app_password: The application password I'm going to use to authenticate as my user against the WordPress REST API.
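For context, here is a minimal sketch of how the surrounding handler might be wired up (the handler shape, validation, and comments are my assumptions, not the article's exact code):
export const handler = async (event) => {
  const { title: request_title, content, backend, app_password } = JSON.parse(event.body);

  if (!request_title || !content || !backend || !app_password) {
    return { statusCode: 400, body: JSON.stringify({ error: "Missing required fields" }) };
  }

  // 1. Analyze the post and craft a tagline (next section)
  // 2. Compose the featured image on a canvas
  // 3. Upload the result to WordPress through its REST API
  // ...
};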
Generating the Tagline
The function's first major task is to generate a tagline using the analyzeContent function, which calls OpenAI's API to craft a click-worthy tagline based on the article's title and content.
analyzeContent takes the post title and content and returns a tagline, a sentiment (whether the post reads as a positive, negative, or neutral opinion), and an optional company symbol from the S&P index.
const { tagline, sentiment, company } = await analyzeContent({ title: request_title, content });
This step is critical, as the tagline directly influences the image’s aesthetics.
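The article doesn't show analyzeContent itself, so here is a hedged sketch of what it could look like, assuming the official openai Node.js client and a JSON response (the model name and prompt wording are placeholders):
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const analyzeContent = async ({ title, content }) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "You write catchy 8-12 word taglines for tech articles. " +
          'Reply as JSON: { "tagline": string, "sentiment": "positive" | "negative" | "neutral", "company": string | null }, ' +
          "where company is an S&P 500 ticker if one company is clearly the subject.",
      },
      { role: "user", content: `Title: ${title}\n\nContent: ${content}` },
    ],
  });

  return JSON.parse(completion.choices[0].message.content);
};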
Creating the Featured Image
Next, the generateImage function kicks in:
const buffer = await generateImage({ title: tagline, company_logo, sentiment });
This function handles:
- Designing the composition.
- Layering textures, colors, and branding elements.
- Applying effects and creating the title.
Here is a step-by-step breakdown of how it works:
The generateImage function begins by setting up a blank canvas, defining its dimensions, and preparing it to handle all the design elements.
From there, a random background image is loaded from a predefined collection of assets. These images were curated to suit the tech-oriented branding while allowing for enough variety across posts. The background image is selected randomly, based on the article's sentiment.
To ensure each background image looked great, I calculated its dimensions dynamically based on the aspect ratio. This avoids distortions while keeping the visual balance intact.
import fs from "fs";
import path from "path";
import { fileURLToPath } from "url";
import { createCanvas, loadImage, registerFont } from "canvas";

const COLOURS = { BLUE: "#33b8e1", BLACK: "#000000" };

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const images_path = path.join(__dirname, "images/");
const files_length = fs.readdirSync(images_path).length;
const images_folder = process.env.ENVIRONMENT === "local" ? "./images/" : "/var/task/images/";

// /var/task is the root of the deployed Lambda bundle
registerFont("/var/task/fonts/open-sans.bold.ttf", { family: "OpenSansBold" });
registerFont("/var/task/fonts/open-sans.regular.ttf", { family: "OpenSans" });

console.log("1. Created canvas");
const canvas = createCanvas(1118, 806);

// Pick a random background (1.jpg … N.jpg) from the bundled assets
let image = await loadImage(`${images_folder}/${Math.floor(Math.random() * (files_length - 1 + 1)) + 1}.jpg`);
let textBlockHeight = 0;
console.log("2. Image loaded");

const canvasWidth = canvas.width;
const canvasHeight = canvas.height;
const aspectRatio = image.width / image.height;
console.log("3. Defined ASPECT RATIO");

let drawWidth, drawHeight;
if (image.width > image.height) {
  // Landscape orientation: fit by width
  drawWidth = canvasWidth;
  drawHeight = canvasWidth / aspectRatio;
} else {
  // Portrait orientation: fit by height
  drawHeight = canvasHeight;
  drawWidth = canvasHeight * aspectRatio;
}

// Center the image on the canvas
const x = (canvasWidth - drawWidth) / 2;
const y = (canvasHeight - drawHeight) / 2;
const ctx = canvas.getContext("2d");
console.log("4. Centered Image");
ctx.drawImage(image, x, y, drawWidth, drawHeight);
Adding the Tagline
The tagline is short, but it follows some rules: the sentence is split into manageable pieces and styled dynamically, based on each line's word count, word length, and so on, so that it stays readable regardless of its length or the canvas size.
console.log("4.1 Text splitting");
// splitText holds the tagline already broken into candidate lines earlier in the function
if (splitText.length === 1) {
  const isItWiderThanHalf = ctx.measureText(splitText[0]).width > ((canvasWidth / 2) + 160);
  const wordCount = splitText[0].split(" ").length;
  if (isItWiderThanHalf && wordCount > 4) {
    // Re-split a single overly wide line into groups of three words
    const refactored_line = splitText[0].split(" ").reduce((acc, curr, i) => {
      if (i % 3 === 0) { acc.push([curr]); } else { acc[acc.length - 1].push(curr); }
      return acc;
    }, []).map((item) => item.join(" "));
    refactored_line[1] = "[s]" + refactored_line[1] + "[s]";
    splitText = refactored_line;
  }
}

let tagline = splitText.filter(item => item !== '' && item !== '[br]' && item !== '[s]' && item !== '[/s]');
let headlineSentences = [];
let lineCounter = { total: 0, reduced_line_counter: 0, reduced_lines_indexes: [] };
console.log("4.2 Tagline Preparation", tagline);

// Build headlineSentences, wrapping any line wider than half the canvas
// NOTE: the first lines of this loop were truncated in the original post; the loop bound, line lookup and split are a best guess
for (let i = 0; i < tagline.length; i++) {
  const line = tagline[i];
  if (line.includes("[s]")) {
    const finalLine = line.split(/\[s\]|\[\/s\]/).filter(item => item !== '' && item !== '[s]' && item !== '[/s]');
    const lineWidth = ctx.measureText(finalLine[0]).width;
    const halfOfWidth = canvasWidth / 2;
    if (lineWidth > halfOfWidth && finalLine[0]) {
      // Chunk the line into groups of 2-3 words depending on its length
      let splitted_text = finalLine[0].split(" ").reduce((acc, curr, i) => {
        const modulus = finalLine[0].split(" ").length >= 5 ? 3 : 2;
        if (i % modulus === 0) { acc.push([curr]); } else { acc[acc.length - 1].push(curr); }
        return acc;
      }, []);
      let splitted_text_arr = [];
      splitted_text.forEach((item) => {
        splitted_text_arr.push(item.join(" "));
      });
      headlineSentences[i] = splitted_text_arr[0] + '/s/';
      if (splitted_text_arr[1]) {
        headlineSentences.splice(i + 1, 0, splitted_text_arr[1] + '/s/');
      }
    } else {
      headlineSentences.push("/s/" + finalLine[0] + "/s/");
    }
  } else {
    headlineSentences.push(line);
  }
}

console.log("5. Drawing text on canvas", headlineSentences);
const headlineSentencesLength = headlineSentences.length;
let textHeightAccumulator = 0;
let assignedSize;                          // declarations not shown in the original excerpt
let is2LinesAndPreviewsWasReduced = false;

for (let i = 0; i < headlineSentencesLength; i++) {
  // Drop stray '/s/'-only entries left over from the wrapping pass (reconstructed line)
  headlineSentences = headlineSentences.filter(item => item !== '/s/');
  const nextLine = headlineSentences[i + 1];
  if (nextLine && /^\s*$/.test(nextLine)) {
    headlineSentences.splice(i + 1, 1);
  }
  let line = headlineSentences[i];
  if (!line) continue;
  let lineText = line.trim();
  let textY;
  ctx.font = "72px OpenSans";
  const cleanedUpLine = lineText.includes('/s/') ? lineText.replace(/\s+/g, ' ') : lineText;
  const lineWidth = ctx.measureText(cleanedUpLine).width;
  const halfOfWidth = canvasWidth / 2;
  lineCounter.total += 1;
  const isLineTooLong = lineWidth > (halfOfWidth + 50);
  if (isLineTooLong) {
    // Push anything after a colon onto its own line and shrink the font
    if (lineText.includes(':')) {
      const split_line_arr = lineText.split(":");
      if (split_line_arr.length > 1) {
        lineText = split_line_arr[0] + ":";
        if (split_line_arr[1]) {
          headlineSentences.splice(i + 1, 0, split_line_arr[1]);
        }
      }
    }
    ctx.font = "52px OpenSans";
    lineCounter.reduced_line_counter += 1;
    if (i === 0 && headlineSentencesLength === 2) {
      is2LinesAndPreviewsWasReduced = true;
    }
    lineCounter.reduced_lines_indexes.push(i);
  } else {
    if (i === 0 && headlineSentencesLength === 2) {
      is2LinesAndPreviewsWasReduced = false;
    }
  }

  if (lineText.includes("/s/")) {
    lineText = lineText.replace(/\/s\//g, "");
    // NOTE: the exact width checks below were truncated in the original; the comparisons are reconstructed
    if (headlineSentencesLength > (i + 1) && lineWidth > (canvasWidth / 2.35)) {
      ctx.font = "84px OpenSansBold";
      assignedSize = 80;
    } else {
      ctx.font = "84px OpenSansBold";
      assignedSize = 84;
    }
    if (i === headlineSentencesLength - 1 && lineWidth > (canvasWidth / 2) + 120) {
      if (assignedSize === 84) {
        ctx.font = "72px OpenSansBold";
      } else if (assignedSize === 80) {
        ctx.font = "64px OpenSansBold";
        textHeightAccumulator += 8;
      } else {
        ctx.font = "52px OpenSansBold";
      }
    }
  } else {
    const textWidth = ctx.measureText(lineText).width;
    if (textWidth > (canvasWidth / 2)) {
      ctx.font = "44px OpenSans";
      textHeightAccumulator += 12;
    } else if (i === headlineSentencesLength - 1) {
      textHeightAccumulator += 12;
    }
  }

  ctx.fillStyle = "white";
  ctx.textAlign = "center";
  const textHeight = ctx.measureText(lineText).emHeightAscent;
  textHeightAccumulator += textHeight;

  // Vertical position depends on how many lines ended up in the headline
  if (headlineSentencesLength == 3) {
    textY = (canvasHeight / 3);
  } else if (headlineSentencesLength == 4) {
    textY = (canvasHeight / 3.5);
  } else {
    textY = 300;
  }
  textY += textHeightAccumulator;

  // Capitalize each word before drawing it centered on the canvas
  const words = lineText.split(' ');
  console.log("words", words, lineText, headlineSentences);
  const capitalizedWords = words.map(word => {
    if (word.length > 0) return word[0].toUpperCase() + word.slice(1);
    return word;
  });
  const capitalizedLineText = capitalizedWords.join(' ');
  ctx.fillText(capitalizedLineText, canvasWidth / 2, textY);
}
Finally, the canvas is converted into a PNG buffer:
const buffer = canvas.toBuffer("image/png");
return buffer;

Finally! Uploading the Image to WordPress
After successfully generating the image buffer, the uploadImageToWordpress function is called. This function handles the heavy lifting of sending the image to WordPress through its REST API.
Encoding the Image for WordPress
The function first prepares the tagline for use as the filename by cleaning up spaces and special characters:
const createSlug = (string) => {
  return string.toLowerCase().replace(/ /g, '-').replace(/[^\w-]+/g, '');
};
const image_name = createSlug(tagline);
The image buffer is then converted into a Blob object to make it compatible with the WordPress API:
const file = new Blob([buffer], { type: "image/png" });
Preparing the API Request
Using the encoded image and tagline, the function builds a FormData object, and I add optional metadata such as alt_text for accessibility and a caption for context:
const formData = new FormData();
formData.append("file", file, image_name + ".png");
formData.append("alt_text", `${tagline} image`);
formData.append("caption", "Uploaded via API");
For authentication, the username and application password are encoded in Base64 and included in the request headers:
const credentials = `${username}:${app_password}`;
const base64Encoded = Buffer.from(credentials).toString("base64");
Sending the Image
A POST request is made to the WordPress media endpoint with the prepared data and headers, and after awaiting the response I validate for success or errors:
const response = await fetch(`${wordpress_url}wp-json/wp/v2/media`, {
  method: "POST",
  headers: {
    Authorization: "Basic " + base64Encoded,
    // Content-Type is set automatically by fetch for FormData (it needs the multipart boundary)
  },
  body: formData,
});

if (!response.ok) {
  const errorText = await response.text();
  throw new Error(`Error uploading image: ${response.statusText}, Details: ${errorText}`);
}
If successful, I return that same media response in the lambda.
This is how my lambda looks in the end.
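The final return isn't shown in the article, so here is a hedged sketch of how that media response could be passed back from the handler (WordPress media responses include fields such as id and source_url; the exact shape returned originally isn't shown):
const media = await response.json();

// Hypothetical final return from the Lambda handler
return {
  statusCode: 200,
  body: JSON.stringify({
    id: media.id,                 // attachment ID, usable as a post's featured_media
    source_url: media.source_url, // public URL of the uploaded image
  }),
};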
This is a sample image produced by my script. It's not used in production, just created with generic assets for this example.
Aftermath
Some time has passed, and everybody is happy: we no longer have shoddy or empty-looking image-less articles, the generated images are a close match to the ones the designer crafts, and the designer is happy that he gets to focus on design work for other marketing efforts across the company.
But then a new problem arose: sometimes the client didn't like the generated image, and he would ask me to spin up my script to generate a new one for a specific post.
This brought me to my next side quest: a WordPress Plugin to Manually Generate a Featured Image Using Artificial Intelligence for a Specific Post.