Routing Revolution: SSG, BrowserRouter, and the SEO Fix
For a long time, Fezcodex lived behind the "Hash Gap." If you looked at your address bar, you’d see that familiar /#/ slicing through every URL. While functional, this was the primary reason social media thumbnails were failing and search engines were only seeing the home page.
Today, I’ve completed a total migration to BrowserRouter combined with SSG (Static Site Generation). Here is the technical breakdown of why this was necessary and how it works.
The Problem: The Hash Black Hole
We originally used HashRouter because Fezcodex is hosted on GitHub Pages. Since GitHub Pages is a static file host, it doesn't know how to handle a request for /apps/markdown-table-formatter. It looks for a physical apps/markdown-table-formatter/index.html on disk, and when it doesn't find one, it throws a 404.
HashRouter solved this by putting everything after the #. The server ignores the hash, always serves the root index.html, and React handles the rest.
The SEO Cost: Most crawlers (Twitter, Facebook, Discord) do not execute JavaScript and ignore the hash entirely. To them, every single link you shared looked like fezcode.com/—resulting in generic "Fezcodex - Personal Blog" thumbnails instead of page-specific content.
The Solution Part 1: BrowserRouter
I switched the core engine from HashRouter to BrowserRouter. This gives us "clean" URLs:
- Old: fezcode.com/#/blog/my-post
- New: fezcode.com/blog/my-post
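In code, this is a one-line swap at the root of the app. Here is a minimal sketch assuming a react-router v6 setup; the route shown is illustrative rather than Fezcodex's actual route table, and the render call gets a further update in Part 3 below:

```jsx
import React from 'react';
import { createRoot } from 'react-dom/client';
// Before: import { HashRouter } from 'react-router-dom';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

const Post = () => <h1>My Post</h1>;

createRoot(document.getElementById('root')).render(
  // Before, the tree was wrapped in <HashRouter>.
  <BrowserRouter>
    <Routes>
      {/* Matches /blog/my-post instead of /#/blog/my-post */}
      <Route path="/blog/:slug" element={<Post />} />
    </Routes>
  </BrowserRouter>
);
```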
But how do we make this work on a static host without a backend?
The Solution Part 2: react-snap & SSG
Enter Static Site Generation via react-snap.
Instead of shipping a nearly empty index.html and letting the browser build the page (Client-Side Rendering), we now build the pages during the deployment phase.
- The Crawl: During npm run build, react-snap fires up a headless browser (Puppeteer); the configuration that wires this in is sketched after this list.
- The Snapshot: It visits every route defined in our sitemap and apps list.
- The Export: It captures the fully rendered HTML (including meta tags, titles, and unique descriptions) and saves it as a physical index.html file in a matching folder structure.
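Wiring react-snap into the build is mostly package.json configuration. A minimal sketch, assuming a Create React App style build script; the include list here is illustrative, while the real one is derived from the sitemap and apps list:

```json
{
  "scripts": {
    "build": "react-scripts build",
    "postbuild": "react-snap"
  },
  "reactSnap": {
    "source": "build",
    "include": [
      "/",
      "/blog/my-post",
      "/apps/markdown-table-formatter"
    ]
  }
}
```

Because it runs as a postbuild hook, npm invokes the crawler automatically after every build, so the snapshots regenerate on each deployment.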
In our latest build, this generated 281 unique HTML files. Now, when you share a link, the crawler sees a real, static HTML file with the correct Open Graph tags immediately.
The Solution Part 3: Hydration
Once the browser loads the static HTML, we don't want to lose the interactivity of React. I updated src/index.js to use ReactDOM.hydrateRoot.
This process, known as Hydration, allows React to "attach" to the existing HTML already on the screen rather than re-rendering everything from scratch. It combines the fast initial load of a static site with the interactivity of a modern web app.
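Here is a sketch of the updated entry point. It adapts the hydrate-or-render check from react-snap's documentation to the React 18 API: hydrate when pre-rendered markup exists, and fall back to a normal render otherwise (for example, on the local dev server, where no snapshot has run):

```jsx
import React from 'react';
import { hydrateRoot, createRoot } from 'react-dom/client';
import { BrowserRouter } from 'react-router-dom';
import App from './App';

const rootElement = document.getElementById('root');
const app = (
  <BrowserRouter>
    <App />
  </BrowserRouter>
);

if (rootElement.hasChildNodes()) {
  // Static HTML from react-snap is already in the DOM:
  // attach event handlers to it instead of rebuilding it.
  hydrateRoot(rootElement, app);
} else {
  // No snapshot present (e.g. dev server): render from scratch.
  createRoot(rootElement).render(app);
}
```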
Global Content Cleanup
Switching the router was only half the battle. Thousands of internal links within our .piml logs and .txt blog posts still pointed to the old /#/ structure.
I executed a global recursive replacement across the public/ directory:
```powershell
Get-ChildItem -Path public -Include *.json, *.txt, *.piml, *.md -Recurse |
  ForEach-Object {
    (Get-Content $_.FullName) -replace '/#/', '/' | Set-Content $_.FullName
  }
```

This ensured that the entire ecosystem, from the timeline to the project descriptions, is now synchronized with the new routing architecture.
Conclusion
Fezcodex is no longer just a Single Page Application; it is a high-performance, SEO-optimized static engine. Clean URLs, unique thumbnails, and faster perceived load times are now the standard.