Some changes of note from the last few days:

* UI tweaks
* Made the content container a little wider on large displays
* Switched to using Google Fonts instead of self-hosting
* More minimal UI
* A bunch of fixes for smaller screens, e.g. text-overflow: ellipsis for long usernames ++
* Button styling
* Added a link to the signup page from the login page
* Tweaked username/email validation, making it more restrictive
* Implemented basic flash messages on pages where it makes sense
* User pages now display threads/posts authored by that user (with pagination!). Categories for user pages are broken, though; will have to fix that.

It's a lot of fun working on this, even though nothing may come of it in the end. If nothing else, I will be able to reuse big parts of the codebase for another project I have in mind that has a much narrower scope and a more targeted audience. More on that some other time!
Got user page categories working, though the category links on the posts still point to /, so filtering has to be done manually.
Just did a major refactor of the codebase to get a better structure, and hopefully improve velocity for future features. Not a lot of interesting stuff to say about it. There's a decent chance I'll refactor it some more at some point, as I'm still trying to figure out the most ergonomic way of creating request handlers in Go. The codebase has really low test coverage, so I hope I didn't break anything. #devlog
I'm thinking of categorizing threads through #tagging. Right now it's not possible to query threads by topic, and I don't want to go full-text or semantic search (yet, at least). Without reading up on any best practices for a system like this, I'm thinking of the following process:

- user submits a post to a thread (also what happens on thread creation)
- store the thread and return to the user, but also kick off a background job to process the text content and categorize the thread based on tags found in its posts
- add a (repeatable) c query parameter to /t that, if present, filters threads by category
- add an op query parameter to /t that defaults to any, but can also be set to all to only include threads that have all the selected categories

To avoid a very complicated parser for extracting tags, I'll restrict it to:

- only try to extract tags from the last line
- the line must start with a '#' character
- tags are of the format #[a-z]{3,}
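The extraction rules above fit in a few lines of Go. A rough sketch (function names are my own illustration, not necessarily what will land in the code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// tagRe matches tags of the format #[a-z]{3,}.
var tagRe = regexp.MustCompile(`#([a-z]{3,})`)

// extractTags pulls category tags from post content. Only the last
// line is considered, and only if it starts with a '#' character.
func extractTags(content string) []string {
	lines := strings.Split(strings.TrimRight(content, "\n"), "\n")
	last := strings.TrimSpace(lines[len(lines)-1])
	if !strings.HasPrefix(last, "#") {
		return nil
	}
	var tags []string
	for _, m := range tagRe.FindAllStringSubmatch(last, -1) {
		tags = append(tags, m[1])
	}
	return tags
}

func main() {
	fmt.Println(extractTags("some thoughts on Go\n#devlog #sqlite")) // [devlog sqlite]
	fmt.Println(extractTags("no tags here"))                         // []
}
```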
I have to make some UI changes if I want to display tags on posts as links to a thread query that includes the given tag. I think it would be best to gather the tags at the top of the thread and display them under the thread ID and author heading. If I’m using # to denote tags, I’ll start using $ to denote thread and post IDs instead of # to avoid any confusion.
If I've done everything correctly, it should be possible to categorize threads now using the syntax described above. I dropped changing the symbol used to denote thread and post IDs, but might change that later. #devlog
One UI thing I missed: don't include the thread-categories div if there are no categories to display.
Tweaked the UI a little. Removed the border between threads and added a border-top on the categories div. That also removed the need to hide the categories div when it's empty. Also, my initial rough implementation of categorization took the entire thread and processed all posts every time. Now it just takes the post that has been written and processes that. I think I can call this feature complete. The only thing I'm considering is whether I should remove the text line containing the tags from the post content after processing. Makes for a cleaner look. Or I could keep it in the database and strip it when it's rendered. Either will work.
If I decide to remove it, I should take care to handle posts consisting of only tags correctly.
Another day with some spare time, and another improvement incoming. I've set up Litestream on my VPS to do streaming replication of the SQLite database backing this site. So far I've set up Ansible to install dependencies (sqlite and Litestream itself, the latter installed from a downloaded .deb), create the Litestream config file from a template, and start the service. The service will also be reloaded on configuration changes. Right now I'm just replicating the database to another directory on the host to verify that everything is working. Next step is to set up replication to Backblaze B2 so I can have a proper backup setup.
Streaming replication to Backblaze works like a charm! Seeing as this is a very low-traffic site, I've set the sync interval on the s3 bucket to 1m. This should really limit any costs associated with this backup, even in the case of a massive traffic surge. For normal operation, I'm pretty sure I'm looking at free backup here. We'll see in a month, I guess.
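For reference, a Litestream config for a setup like this is only a handful of lines. The paths, bucket, and endpoint below are placeholders, not the actual values, and credentials are supplied via the LITESTREAM_ACCESS_KEY_ID / LITESTREAM_SECRET_ACCESS_KEY environment variables:

```yaml
# /etc/litestream.yml (illustrative values)
dbs:
  - path: /var/lib/micronotal/micronotal.db
    replicas:
      - type: s3
        bucket: my-backup-bucket
        path: micronotal
        endpoint: https://s3.us-west-004.backblazeb2.com
        sync-interval: 1m   # batch WAL syncs to limit B2 API calls
```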
One thing I actually should try is a recovery of the B2 backups. Not today, though.
Feeling inspired today, so I've got some more work done:

* Rate limiting of sign-in attempts
* Improved error handling
* Now that I have a working rate limiter, I've enabled signups! No button anywhere that will take you there, though, so if you're reading this, head to /signup and fill in the form, and I'll probably activate your account.

A side note: I have a Linux desktop and a MacBook, and I've previously only built and deployed micronotal from the Linux machine. Running the Ansible playbook from my Mac, I hit a snag with mattn/go-sqlite3 and CGO. Luckily, there's a very straightforward how-to in the README (https://github.com/mattn/go-sqlite3?tab=readme-ov-file#cross-compiling-from-macos):

$ brew install FiloSottile/musl-cross/musl-cross
$ CC=x86_64-linux-musl-gcc CXX=x86_64-linux-musl-g++ GOARCH=amd64 GOOS=linux CGO_ENABLED=1 go build -ldflags "-linkmode external -extldflags -static"
Made a couple of incremental improvements today! Yet to actually deploy the updates, but the short changelist is:

* Fixed pagination buttons. Renamed them from next/previous to older/newer, as the semantics are a bit clearer. Also, it's no longer possible to paginate into the void, as the buttons are hidden if there are no more results to paginate to.
* Switched from storing passwords with bcrypt to argon2id. Added some logic to auto-upgrade users on login (since that's the only point in time where I have access to the password in plain text). Not exactly going to be a big migration, as I'm the only user, but it's still nice to do it properly.
* Slight improvement of the profile page. It now displays some stats, like when you joined and how many threads/posts you've written.
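The auto-upgrade trick works because the two schemes are distinguishable from the stored hash itself: bcrypt hashes start with a "$2" prefix, while argon2id hashes start with "$argon2id$". So on a successful login you check the prefix, and if it's still bcrypt you re-hash the plaintext with argon2id (via golang.org/x/crypto) and update the row. A sketch of the decision step only, with my own names:

```go
package main

import (
	"fmt"
	"strings"
)

// needsUpgrade reports whether a stored password hash still uses bcrypt
// and should be re-hashed with argon2id on the next successful login.
// bcrypt hashes use the $2a$/$2b$/$2y$ prefixes; argon2id uses $argon2id$.
func needsUpgrade(stored string) bool {
	return strings.HasPrefix(stored, "$2")
}

func main() {
	fmt.Println(needsUpgrade("$2b$12$abcdefghijklmnopqrstuv"))       // true
	fmt.Println(needsUpgrade("$argon2id$v=19$m=65536,t=1,p=4$s$h")) // false
}
```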
Deployed, and it looks to be working great!
I've been thinking a lot about https://eieio.games/essays/scaling-one-million-checkboxes/ lately. Not only do I think it's a fascinating story, but on a technical level it's also got a bunch of interesting challenges to solve. So, naturally, I've been considering alternative solutions, and I'm starting to play with the idea of creating my own version of OMCB. Something I've read about, but never actually had the need for myself, is sharding. So instead of doing the pragmatic thing and using Redis like the original, I was thinking of creating a distributed monstrosity where state is spread out over several different instances. The basic architecture would consist of 3 layers:

1) caddy to serve static content and act as a reverse proxy for 2)
2) webserver responsible for handling the requests from clients, including
   * establishing and maintaining websocket connections
   * forwarding updates to the correct shard
   * subscribing to incremental updates from all shards and forwarding them to all connected clients
   * regularly broadcasting the full state to all connected clients to reset state in case we lose some incremental updates along the way
3) application responsible for maintaining the state of a given shard
   * stores state in memory as a bitmap
   * regularly backs up state to disk so that we can recover in case of failure
   * has methods for updating shard state, fetching the whole shard state, and broadcasting incremental updates to subscribed webservers

There are probably a billion things that can go wrong with this architecture, and that is among the reasons why I'd like to try it out. It should provide ample opportunity for learning new things.
Some more random ideas:

* Create an orchestrator/control plane that can handle redistribution of data if we increase or decrease the number of shards
* The control plane can also be responsible for backing up the entire grid
* It should be able to configure the web servers without having to restart them, to let them know of changes in shards
* It should be possible to dynamically set the shard state so that we can recover the entire grid from a snapshot
I read a bit more on SQLite this morning, and came across this: https://www.sqlite.org/np1queryprob.html. Being used to working with client/server databases and accustomed to thinking that n+1 is the enemy, this was a little surprising. A quick refactor of the function responsible for getting threads, and it's now not only more readable, but threads and posts are even correctly limited when doing pagination.
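In query terms, the refactor amounts to replacing one big join with a paged thread query plus one small per-thread query, which is cheap in SQLite because there's no network round trip. Table and column names below are my guesses for illustration, not the actual schema:

```sql
-- 1. Fetch one page of threads (cursor pagination, newest first).
SELECT id, author_id, created_at
FROM threads
WHERE id < :before
ORDER BY id DESC
LIMIT 20;

-- 2. Then, for each thread on the page (the "n+1" part),
--    fetch that thread's posts in full.
SELECT id, author_id, content, created_at
FROM posts
WHERE thread_id = :thread_id
ORDER BY id;
```

Because the LIMIT now applies to threads rather than to joined rows, the last thread on a page can no longer end up with some of its posts cut off.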
One thing missing from micronotal is pagination. Let's see how quick I can get something working.
Turns out I'd actually already implemented pagination using thread UUIDs as cursors! So this should be quick. I'd managed to mix up > and < when comparing the UUIDs, so ?after=<uuid> gives you every thread _before_ the provided ID, but that's a quick fix.
No, I was actually right earlier. Got after and before semantics mixed up when combined with ordering on newest first.
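To keep the semantics straight: with newest-first ordering, the page "after" a cursor is everything with a *smaller* ID. Since the thread IDs are UUIDv7s, their canonical text form sorts lexicographically in creation order, so plain string comparison works. A toy sketch of the idea (in-memory; the real thing is a SQL WHERE clause):

```go
package main

import (
	"fmt"
	"sort"
)

// olderThan returns up to limit thread IDs strictly older than the
// cursor, newest first. UUIDv7 strings compare like timestamps.
func olderThan(ids []string, cursor string, limit int) []string {
	sort.Sort(sort.Reverse(sort.StringSlice(ids))) // newest first
	var page []string
	for _, id := range ids {
		if id < cursor { // "after" the cursor on a newest-first list
			page = append(page, id)
			if len(page) == limit {
				break
			}
		}
	}
	return page
}

func main() {
	ids := []string{"019289d1-aaaa", "019289d2-bbbb", "019289d3-cccc", "019289d4-dddd"}
	fmt.Println(olderThan(ids, "019289d4-dddd", 2)) // [019289d3-cccc 019289d2-bbbb]
}
```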
Got a very basic implementation working now. It's far from flawless, as it allows you to navigate to empty pages, and pagination buttons are never hidden. But now it's actually possible to see old threads!
Got to do a bit more work on the styling as well.
Calling it good enough now. You can still paginate into the void, but the buttons look ok at least. Biggest issue remaining to solve with pagination is how the limiting in the query to fetch threads and posts is applied. Right now I’m joining thread and post tables and taking the limit on the entire result. This can cause the last thread in the result to not have all posts included. Not critical to fix, but it’s definitely on the todo list.
Fixed pagination by embracing n+1 https://micronotal.com/t/019289d4-e55b-7b19-b46a-215840607cf0
I don’t know exactly what it is about this microblogging style, but it’s got me started writing, and I’m enjoying it. I just finished and published a blog post on my personal site: https://frodejac.dev/blog/go-poor-mans-cron.html It’s not groundbreaking stuff, but it’s actually written by me (not a smidgen of gen AI involved)!