I would like to share my journey of building a self-sustaining content management system that does not require a content database in the traditional sense.
The Problem
The content of this website (blog posts and bookmarks) is stored in a Notion database.
The problem I was trying to solve was avoiding a manual deployment of the website every time I add a bookmark. On top of that, I wanted to keep the hosting as cheap as possible, because it does not really matter to me how quickly the bookmarks I add to my Notion database end up online.
So, after some research, I came up with a setup that consists of several components:
- The “Push to Main” action that deploys the changes
- The “Update Content” action that downloads content from the Notion API and commits the changes
- The “Update Content on Schedule” action that runs on a schedule and triggers the “Update Content” action
Let’s look at each of them in detail, starting from the inside out.
The “Push to Main” Workflow
There is not a lot to say here; it’s a pretty standard setup. When there is a push to the main branch, this workflow builds the app and deploys it to Cloudflare Pages using the Wrangler CLI:
```yaml
name: Push to Main

on:
  push:
    branches: [main]
  workflow_dispatch: {}

jobs:
  deploy-cloudflare-pages:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup pnpm
        uses: pnpm/action-setup@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version-file: .node-version
          cache: pnpm

      - name: Install node modules
        run: |
          pnpm --version
          pnpm install --frozen-lockfile

      - name: Build the App
        run: |
          pnpm build

      - name: Publish Cloudflare Pages
        env:
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
        run: |
          pnpm wrangler pages deploy ./out --project-name ${{ secrets.CLOUDFLARE_PROJECT_NAME }}
```
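One thing this workflow assumes is that `pnpm build` writes the static site to `./out`, since that is the directory handed to Wrangler. A small, hypothetical pre-deploy sanity check along those lines (not part of the actual workflow, paths are illustrative):

```shell
#!/bin/sh
# Sketch: fail fast if the build output directory is missing or empty,
# before spending time on the actual deploy step.
set -eu

check_build_output() {
  dir="$1"
  if [ ! -f "${dir}/index.html" ]; then
    echo "build output missing: ${dir}/index.html" >&2
    return 1
  fi
  echo "ok: ${dir} looks deployable"
}

# Simulate a build output directory for demonstration purposes.
mkdir -p /tmp/out-demo
echo '<!doctype html>' > /tmp/out-demo/index.html
check_build_output /tmp/out-demo
```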
The “Update Content” Workflow
This workflow can only be triggered “manually”… but also automatically, because a `workflow_dispatch` event can be fired through the GitHub API using a GitHub Personal Access Token, a.k.a. PAT. I initially wrote it because I wanted to deploy changes from my phone. It downloads the posts and bookmarks using the Notion API and then, if there are any changes to the codebase, creates a commit and pushes it. To function properly, this workflow must be provided with a PAT that has “Read and Write access to code” for the repository:
```yaml
name: Update Content

on:
  workflow_dispatch: {}

jobs:
  download-content:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          # A GitHub Personal Access Token with access to the repository
          # that has the following permissions:
          # ✅ Read and Write access to code
          token: ${{ secrets.GITHUB_PAT_CONTENT }}

      - name: Setup pnpm
        uses: pnpm/action-setup@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version-file: .node-version
          cache: pnpm

      - name: Install node modules
        run: |
          pnpm --version
          pnpm install --frozen-lockfile

      - name: Download articles content from Notion
        env:
          NOTION_KEY: "${{ secrets.NOTION_KEY }}"
          NOTION_ARTICLES_DATABASE_ID: "${{ secrets.NOTION_ARTICLES_DATABASE_ID }}"
        run: |
          pnpm download-articles

      - name: Download bookmarks content from Notion
        env:
          NOTION_KEY: ${{ secrets.NOTION_KEY }}
          NOTION_BOOKMARKS_DATABASE_ID: ${{ secrets.NOTION_BOOKMARKS_DATABASE_ID }}
        run: |
          pnpm download-bookmarks

      - name: Configure Git
        run: |
          git config --global user.email "${{ secrets.GIT_USER_EMAIL }}"
          git config --global user.name "${{ secrets.GIT_USER_NAME }}"

      - name: Check if anything changed
        id: check-changes
        run: |
          if [ -n "$(git status --porcelain)" ]; then
            echo "There are changes"
            echo "HAS_CHANGED=true" >> $GITHUB_OUTPUT
          else
            echo "There are no changes"
            echo "HAS_CHANGED=false" >> $GITHUB_OUTPUT
          fi

      - name: Commit changes
        if: steps.check-changes.outputs.HAS_CHANGED == 'true'
        run: |
          git add ./src/content
          git add ./public
          git commit -m "Automatic content update commit"
          git push
```
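The “Check if anything changed” step relies on a handy property of `git status --porcelain`: it prints one line per modified or untracked file, and nothing at all when the working tree is clean. A standalone sketch of that detection logic, using a throwaway demo repository (paths and messages are illustrative):

```shell
#!/bin/sh
set -eu

# Set up a throwaway repository for demonstration purposes.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

echo "first" > content.md
git add content.md
git commit -q -m "initial"

# Clean tree: porcelain output is empty, so the workflow would skip the commit.
[ -z "$(git status --porcelain)" ] && echo "HAS_CHANGED=false"

# Dirty tree: porcelain output is non-empty, so the workflow would commit and push.
echo "updated" >> content.md
[ -n "$(git status --porcelain)" ] && echo "HAS_CHANGED=true"
```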
The “Update Content on Schedule” Workflow
This one is pretty simple: it runs twice a day (at 00:13 and 12:13 UTC) and triggers the workflow above. To function properly, it must be provided with a GitHub PAT that has “Read and Write access to actions” for the repository. In my case it’s a different PAT:
```yaml
name: Update Content on Schedule

on:
  schedule:
    - cron: "13 0,12 * * *"
  workflow_dispatch: {}

jobs:
  trigger-update-content:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Dispatch the Update Content workflow
        env:
          # A GitHub Personal Access Token with access to the repository
          # that has the following permissions:
          # ✅ Read and Write access to actions
          GH_TOKEN: ${{ secrets.GITHUB_PAT_ACTIONS }}
        run: |
          gh workflow run "Update Content" --ref main
```
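The `gh workflow run` command is a thin wrapper over the GitHub REST API, which is also how the “Update Content” workflow can be dispatched from a phone. A minimal sketch, assuming a hypothetical `example-user/example-site` repository and workflow file name:

```shell
#!/bin/sh
set -eu

# Hypothetical owner, repo, and workflow file name; replace with your own.
OWNER="example-user"
REPO="example-site"
WORKFLOW="update-content.yml"

# The REST endpoint behind `gh workflow run`:
# POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches
API_PATH="repos/${OWNER}/${REPO}/actions/workflows/${WORKFLOW}/dispatches"
echo "$API_PATH"

# With a PAT that has "Read and Write access to actions", the dispatch would be:
#   curl -X POST \
#     -H "Authorization: Bearer $GITHUB_PAT_ACTIONS" \
#     -H "Accept: application/vnd.github+json" \
#     "https://api.github.com/${API_PATH}" \
#     -d '{"ref":"main"}'
```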
Conclusion
For me, this setup has proven to be really good and flexible. Thanks to the modular structure, the “Update Content” action can be triggered manually, e.g. from my phone while travelling. To me, this was another valuable exercise in progressively enhancing a workflow.
Hope you find this helpful!