@frode
Just read https://technicalwriting.dev/ml/embeddings/overview.html, and while it didn’t present me with a lot of new stuff, it had one idea that would be interesting to implement: providing embeddings through an API. I might take a stab at adding sqlite-lembed and sqlite-vec, creating embeddings locally for all content, and serving them up to anyone who wants them (mainly me, I suppose). It would also get me most of the way to some semantic search functionality and «related threads».
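For the «related threads» part, a rough sketch of what that could look like with sqlite-vec; this is a sketch only, assuming the mattn/go-sqlite3 driver, that the vec0 extension is on the library path, and with a placeholder embed() function standing in for whatever sqlite-lembed or an external model would actually produce:

```go
// Sketch: storing post embeddings in a sqlite-vec table and querying for related posts.
// The extension name, dimensions, and embed() are assumptions for illustration.
package main

import (
	"database/sql"
	"encoding/json"
	"fmt"
	"log"

	"github.com/mattn/go-sqlite3"
)

// embed is a stand-in for a real embedding function (sqlite-lembed, an API call, etc.).
func embed(text string) []float32 {
	return make([]float32, 384) // placeholder vector
}

func main() {
	// Register a driver that loads the sqlite-vec extension.
	sql.Register("sqlite3_vec", &sqlite3.SQLiteDriver{
		Extensions: []string{"vec0"},
	})
	db, err := sql.Open("sqlite3_vec", "blog.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One row per post; sqlite-vec accepts vectors as JSON arrays.
	if _, err := db.Exec(`CREATE VIRTUAL TABLE IF NOT EXISTS vec_posts USING vec0(embedding float[384])`); err != nil {
		log.Fatal(err)
	}

	vec, _ := json.Marshal(embed("some post content"))
	if _, err := db.Exec(`INSERT INTO vec_posts(rowid, embedding) VALUES (?, ?)`, 1, string(vec)); err != nil {
		log.Fatal(err)
	}

	// KNN query: the five posts nearest to a query embedding.
	q, _ := json.Marshal(embed("query text"))
	rows, err := db.Query(
		`SELECT rowid, distance FROM vec_posts WHERE embedding MATCH ? ORDER BY distance LIMIT 5`,
		string(q),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id int64
		var dist float64
		if err := rows.Scan(&id, &dist); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("post %d, distance %.3f\n", id, dist)
	}
}
```

The same table would double as the backend for semantic search; the API part is then mostly a matter of exposing the stored vectors over HTTP.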
Idea for learning: a feature flag service built on etcd, with an OpenFeature provider implementation and CEL for server-side runtime evaluation.
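The CEL part is the bit I find most interesting: flag rules stored as CEL expressions and evaluated against a request context at runtime. A minimal sketch with cel-go; the rule string and the «user» context here are made up for illustration:

```go
// Sketch: evaluating a feature flag rule written as a CEL expression.
// The rule and the request context are illustrative only.
package main

import (
	"fmt"
	"log"

	"github.com/google/cel-go/cel"
)

func main() {
	// Declare the variables that flag rules are allowed to reference.
	env, err := cel.NewEnv(
		cel.Variable("user", cel.MapType(cel.StringType, cel.DynType)),
	)
	if err != nil {
		log.Fatal(err)
	}

	// A flag rule as it might be stored in etcd (or any other backend).
	rule := `user.country == "NO" && user.beta_opt_in`

	ast, iss := env.Compile(rule)
	if iss.Err() != nil {
		log.Fatal(iss.Err())
	}
	prg, err := env.Program(ast)
	if err != nil {
		log.Fatal(err)
	}

	// Evaluate the rule against a request context.
	out, _, err := prg.Eval(map[string]any{
		"user": map[string]any{"country": "NO", "beta_opt_in": true},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("flag enabled:", out.Value() == true)
}
```

Wrapping this in an OpenFeature provider would then mostly be a matter of mapping the provider’s flag resolution calls onto compiled programs like the one above.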
I did actually follow up on this, though not quite as described. I went for an implementation using BadgerDB instead, with clustering provided through the Raft consensus protocol. The whole project is here: https://github.com/frodejac/switch

Incidentally, this was built when testing out Cursor for the first time, and I was pretty amazed at how quickly I could go from nothing to a functional MVP. I have to admit the amazement faded a bit after working with it for a while, as the tool quickly reaches some limits once the complexity starts to grow. And some routine tasks, such as refactoring, can quickly eat up hours of your time if left to the AI to figure out. The simple task of moving around existing code gets really hard when it’s done by «memorizing» the existing code and then reproducing it from memory instead of just copy-pasting things or using proper refactoring tools.
I found that a much better approach for AI-assisted refactoring is to have the AI suggest changes instead of running in agent mode. Get it to suggest a new structure and which pieces can be extracted, then have it create the scaffolding for the new files, and lastly do all the copying and fixing manually. Much less hassle. And you don’t run the risk of suddenly getting a new feature you didn’t ask for.
Note to self: add some linkblog functionality. A simple form with an input field for the link and a free-text field for short commentary.
I find a lot of interesting links to articles that would be great to keep in one place for later.
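The whole thing probably doesn’t need much more than a single handler. A rough sketch of the shape of it, with in-memory storage, no auth, and no templates, just to illustrate:

```go
// Sketch: a minimal linkblog endpoint with one form and one list.
// Storage is an in-memory slice; a real version would persist somewhere.
package main

import (
	"fmt"
	"html"
	"log"
	"net/http"
	"sync"
)

type link struct {
	URL     string
	Comment string
}

func main() {
	var (
		mu    sync.Mutex
		links []link
	)

	http.HandleFunc("/links", func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodPost {
			mu.Lock()
			links = append(links, link{
				URL:     r.FormValue("url"),
				Comment: r.FormValue("comment"),
			})
			mu.Unlock()
			http.Redirect(w, r, "/links", http.StatusSeeOther)
			return
		}

		// The simple form: one field for the link, one for the commentary.
		fmt.Fprint(w, `<form method="POST">
			<input name="url" placeholder="https://...">
			<textarea name="comment" placeholder="Short commentary"></textarea>
			<button>Save</button>
		</form><ul>`)
		mu.Lock()
		for _, l := range links {
			fmt.Fprintf(w, `<li><a href="%s">%s</a>: %s</li>`,
				html.EscapeString(l.URL), html.EscapeString(l.URL), html.EscapeString(l.Comment))
		}
		mu.Unlock()
		fmt.Fprint(w, `</ul>`)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```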
Found this one quite interesting (copy-pasted from https://news.ycombinator.com/item?id=42620446):

> [...] all you need to know is velocity. The Y Combinator people call it doing things that don’t scale. Here is how it works for absolutely anything:
>
> 1. Get the right tools in place. This is an intrinsic capability set you have to build. People tend to fail here most frequently and hope some framework or copy/paste of a library will just do it for them. Don’t be some worthless pretender. Know your shit from experience so you can execute with confidence.
>
> 2. Build a solid foundation. This will require a lot of trial and error plus several rounds of refactoring because you need some idea of the edge cases and where you the pain points are. You will know it when you have it because it’s highly durable and requires less of everything compared to the alternatives. A solid foundation isn’t a thing you sell. It’s your baseline for doing everything else at low cost.
>
> 3. Create tests. These should be in writing but they don’t have to be. You need a list of known successes and failures ready to apply at everything new. There are a lot of whiners that are quick to cry about how something can’t be done. Fuck those guys and instead try it to know exactly what more it takes to get done.
>
> 4. Finally, measure things. It is absolutely astonishing that most people cannot do this at all. It looks amazing when you see it done well and this is ultimately what separates the adults from the children. This is where velocity comes from because you will know exactly how much faster you are compared to where you were. If you aren’t intimately aware of your performance in numbers from a variety of perspectives you aren’t more special than anyone else.
>
> People who accomplish hard things are capable of doing those because they didn’t get stuck. They had the proper tools in place to manipulate their environment, redefine execution (foundation), objectively determine what works without guessing, and then know how much to tweak it moving forward.

It’s perhaps a bit rough, and written from a tech perspective, but I think it makes sense and the principles can be applied to most hard problems.