After a brief attempt at using the Notion API, I've circled back to my setup with Obsidian. While Notion seemed promising at first, it fell short of my expectations. So, I've returned to my homebrew publishing pipeline, which I'll break down in more detail.
This system leverages Obsidian for writing, GitHub for storage and versioning, and Next.js for rendering and publishing. Let's dive into the core components that make this setup tick.
The Backbone: JavaScript Functions and GitHub's GraphQL API
At the heart of this system are a handful of JavaScript functions that interact with the GitHub GraphQL API. These functions handle fetching my Obsidian markdown files and parsing them for use in my Next.js application. Here's a closer look at each key function:
fetchFromGitHubGraphQL: Making the GraphQL Call
The fetchFromGitHubGraphQL function handles the heavy lifting of making the actual GraphQL API call. It takes a query string and variables as arguments, posts them to GitHub's GraphQL endpoint, and returns the parsed JSON response.
async function fetchFromGitHubGraphQL(query: string, variables: Record<string, unknown>) {
  // Use a server-only variable: anything prefixed NEXT_PUBLIC_ is inlined
  // into the client bundle, which would leak the token to the browser.
  const token = process.env.GITHUB_TOKEN;

  const response = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ query, variables }),
  });

  if (!response.ok) {
    throw new Error(`GitHub GraphQL request failed: HTTP ${response.status}`);
  }

  return response.json();
}
This function lets me pull all my markdown files from a designated GitHub repository. Because my Obsidian vault syncs to that repo via Git, any changes I make in Obsidian are automatically committed and pushed.
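As a quick sanity check, the function can be exercised with any small query before wiring up the real ones. This hypothetical snippet confirms the token is valid by asking the API for the authenticated user's login:

// Hypothetical smoke test: confirm the token works before querying the repo.
const result = await fetchFromGitHubGraphQL(`query { viewer { login } }`, {});
console.log(result.data.viewer.login);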
parseMarkdownContent: Extracting Frontmatter and Content
Once the markdown files are fetched, the parseMarkdownContent function uses the gray-matter library to extract the YAML frontmatter and markdown body from each file.
import matter from "gray-matter";

function parseMarkdownContent(content: string) {
  // Split the raw file into YAML frontmatter (data) and markdown body.
  const { data, content: body } = matter(content);

  return {
    slug: data.id,
    name: data.name,
    created: data.created ? new Date(data.created).getTime() : null,
    updated: data.updated ? new Date(data.updated).getTime() : null,
    body: body,
    public: data.public,
    tags: data.tags,
    address: data.address,
  };
}
The function returns an object containing all the essential data: the slug, name, creation and update timestamps, tags, and the markdown content itself. This structured data can then be easily consumed and rendered by my Next.js application.
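To make the returned shape concrete, here's a hypothetical note run through the function; the frontmatter fields mirror the ones parseMarkdownContent reads:

// A hypothetical raw note, as its text would come back from GitHub.
const sample = [
  "---",
  "id: 1671418753342",
  "name: My first note",
  "created: 2022-12-19",
  "public: true",
  "tags: [obsidian, nextjs]",
  "---",
  "",
  "Hello from Obsidian!",
].join("\n");

// Logs { slug: 1671418753342, name: "My first note", created: <ms timestamp>, ... }
console.log(parseMarkdownContent(sample));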
getObsidianEntries: Fetching All Content
When I need to list all my content, the getObsidianEntries function comes into play. It uses fetchFromGitHubGraphQL to grab all the markdown entries from the designated 'Content' folder in my GitHub repo.
export async function getObsidianEntries() {
  const result = await fetchFromGitHubGraphQL(
    `
      query fetchEntries($owner: String!, $name: String!) {
        repository(owner: $owner, name: $name) {
          object(expression: "HEAD:Content/") {
            ... on Tree {
              entries {
                name
                object {
                  ... on Blob {
                    text
                  }
                }
              }
            }
          }
        }
      }
    `,
    {
      owner: "GITHUB_USERNAME",
      name: "REPO_NAME",
    }
  );

  // GraphQL errors arrive with an HTTP 200, so check the payload itself.
  if (result.errors) {
    console.error("GraphQL Error:", result.errors);
    return [];
  }

  const entries = result.data?.repository?.object?.entries;
  if (!entries) {
    console.error("No data returned from the GraphQL query.");
    return [];
  }

  return Promise.all(
    entries.map((entry: { object: { text: string } }) =>
      parseMarkdownContent(entry.object.text)
    )
  );
}
The function checks the GraphQL response for errors and missing data before touching anything, returning an empty list in either case. It then maps over the fetched entries, passing each one to parseMarkdownContent to extract the structured data.
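To show how this gets consumed, here's a minimal sketch of an index page, assuming a Next.js App Router setup; the route, the @/lib/obsidian module path, and the public filter and date sort are hypothetical additions of mine, not part of the function itself.

// app/notes/page.tsx (hypothetical route): list public entries, newest first.
import { getObsidianEntries } from "@/lib/obsidian";

export default async function NotesPage() {
  const entries = await getObsidianEntries();
  const visible = entries
    .filter((entry) => entry.public)
    .sort((a, b) => (b.created ?? 0) - (a.created ?? 0));

  return (
    <ul>
      {visible.map((entry) => (
        <li key={entry.slug}>
          <a href={`/notes/${entry.slug}`}>{entry.name}</a>
        </li>
      ))}
    </ul>
  );
}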
getObsidianEntry: Fetching a Single Entry
For individual blog posts or pages, the getObsidianEntry function provides a targeted approach. It's a specialized version of getObsidianEntries that retrieves a single markdown file based on its unique slug.
export async function getObsidianEntry(slug: string) {
  const { data } = await fetchFromGitHubGraphQL(
    `
      query fetchSingleEntry($owner: String!, $name: String!, $entryName: String!) {
        repository(owner: $owner, name: $name) {
          object(expression: $entryName) {
            ... on Blob {
              text
            }
          }
        }
      }
    `,
    {
      owner: "GITHUB_USERNAME",
      name: "REPO_NAME",
      entryName: `HEAD:Content/${slug}.md`,
    }
  );

  // A missing file comes back as a null object rather than an error.
  const text = data?.repository?.object?.text;
  if (!text) {
    console.error(`No entry found for slug "${slug}".`);
    return null;
  }

  return parseMarkdownContent(text);
}
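On the page side, the slug from the URL maps straight to a file name. A minimal sketch, again assuming the hypothetical @/lib/obsidian module path and an App Router dynamic route:

// app/notes/[slug]/page.tsx (hypothetical route): render a single entry.
import { getObsidianEntry } from "@/lib/obsidian";
import { notFound } from "next/navigation";

export default async function NotePage({ params }: { params: { slug: string } }) {
  const entry = await getObsidianEntry(params.slug);
  if (!entry) notFound();

  return (
    <article>
      <h1>{entry.name}</h1>
      {/* entry.body is raw markdown; run it through your renderer of choice */}
      <pre>{entry.body}</pre>
    </article>
  );
}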
Zettelkasten-Inspired File Naming
To ensure each file has a unique identifier, I employ a file naming convention inspired by the Zettelkasten method. Each file name is a timestamp down to the millisecond, like so:
1671418753342.md
This approach guarantees uniqueness while also encoding the creation date and time into the file name itself. It keeps files organized chronologically and gives each note a stable key for fetching specific content.
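Generating a new ID is a one-liner, since the current Unix time in milliseconds becomes the file name:

// e.g. "1671418753342.md"; unique unless two notes are created in the same millisecond.
const fileName = `${Date.now()}.md`;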
Putting It All Together
So, that's the high-level overview of my setup: I write in Obsidian, which syncs with GitHub for storage and versioning. Next.js then pulls the content it needs via GitHub's GraphQL API and renders it on my site.
By leveraging the strengths of each tool—Obsidian for its excellent writing experience, GitHub for robust storage and versioning, and Next.js for its powerful rendering and publishing capabilities—I've created a streamlined, efficient workflow for managing and publishing my content.