Some Recent Random Thoughts on Software, Agents, and What Comes Next
Haven’t written anything in months. No idea where the time went; it doesn’t feel like I’ve been working on anything in particular. Every day is just running Codex, Claude Code, and OpenClaw.
OpenClaw is what made me buy a Mac Mini (got it early, before the price went up), a Raspberry Pi, and a VPS. Nothing sits idle anymore. Smart home stuff connected together, a robotic arm, random software I build because I need it for twenty minutes. A lot of it never gets released — publishing something takes more energy than building it, and putting out software I made in an afternoon feels irresponsible to everyone involved.
Then there’s LaunchNext, which I’ve been chipping away at for six months — gesture support, Core Animation ProMotion 120fps, an uninstaller integration, right-click batch actions. I think it’s genuinely good now.
Building software is still fun. More fun with AI, honestly. Since around December last year, the success rate on complex features has gone up noticeably. Cloud models I like: Gemini 3.1 Pro, Claude Opus 4.6, GPT 4.5 xhigh, Kimi K2.5 (glad Cursor likes this one), MiniMax 2.7. Locally I keep Qwen 3.5 27B running — especially great for anything involving photos.
I think SaaS, and software as a business generally, is dead. Look at Adobe, Figma, Salesforce, IBM — Figma is down 80% from its IPO peak, worth less than what Adobe tried to pay for it; Salesforce’s CEO said AI agents have already replaced 4,000 of their customer service employees; Adobe’s CEO just resigned, and an analyst literally titled their piece “AI SaaSpocalypse.” This isn’t the future. It’s already happening. So these homemade projects of mine are just hobbies now.
I can’t really stand React Native or Electron (sorry). Cross-platform doesn’t mean much anymore: for all but the most complex software, you can just have an AI translate the codebase from one language to another. It takes a bit more time, but the result is simply better. React Native renders something that’s neither native nor web, and using it feels that way. A Cloudflare engineer just rebuilt 94% of Next.js in one week with AI — vinext. Think about how much Vercel spent on that. The moat is gone. Long term, most software doesn’t need Electron unless you enjoy burning several extra GB of RAM. That’s a personal opinion; companies have their own considerations. Try GPUI. Whatever I build, maintained or not, will basically always be open source. It’s more fun that way.
Highly recommend voice input for coding — it’s way faster than typing. But talking is still too slow; Neuralink might eventually help, since the real bottleneck is your output bandwidth. The more fundamental approach is automation: remove humans from the loop. Humans are the most error-prone, inefficiency-generating part of any process. Let the agent find the problems and fix them itself. Karpathy’s autoresearch is genuinely fun: 630 lines of Python in which the agent runs its own experiments and modifies its own code, 100 iterations overnight; you just write program.md to tell it where to search. If you’re getting AI to write code and the result is off, it might not be the agent’s problem — are your requirements actually clear?

Once the agent is running, a GUI doesn’t make much sense anyway; CLI and API are what agents actually use. The sticking point right now: getting multiple agents to coordinate cleanly is still hard, and current tooling isn’t satisfying. If nothing good exists I’ll build it myself. This is fundamentally a data-infrastructure problem, and getting it wrong is painful. Bigger screens are becoming more necessary.
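The experiment loop described above — propose a change, run it, keep it only if the score improves — can be sketched in a few lines. To be clear, this is a toy stand-in, not Karpathy’s actual autoresearch code: `propose` and `evaluate` here are dummy functions where a real agent would have an LLM edit its own source and then run a real experiment.

```python
import random

def propose(params, rng):
    """Stand-in for the agent proposing a change to its own code/config."""
    new = dict(params)
    new["x"] += rng.uniform(-1.0, 1.0)
    return new

def evaluate(params):
    """Stand-in for running the experiment; higher score is better."""
    return -(params["x"] - 3.0) ** 2  # toy objective, best at x = 3

def search(iterations=100, seed=0):
    """The loop shape: keep a candidate only when it beats the current best."""
    rng = random.Random(seed)
    best = {"x": 0.0}
    best_score = evaluate(best)
    for _ in range(iterations):
        cand = propose(best, rng)
        score = evaluate(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

Swap in an LLM for `propose` and a real training run for `evaluate` and you get the overnight-iterations pattern; the loop itself stays this simple.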
OpenClaw is genuinely fun. Reminds me of Ghost in the Shell. Context management is becoming the real skill.
We don’t need as many programmers, designers, finance people, or artists anymore, and organizations can run leaner. The economic problem I can’t figure out: when people get laid off they have less money to spend, markets shrink, companies buy more tokens to stay competitive and lay off more people — that loop is ugly. Big companies too — efficiency goes up, layoffs go harder, the loop runs faster. I genuinely don’t have an answer. Maybe the only way out is that agents give you the ability to do harder things — things that were previously out of reach because of skill barriers. For small companies at least it’s good — you just don’t need to hire as many people.
My view on AI is still clear. Before you speak, you think, and that thinking happens at a higher, more abstract level; language is just the output interface. LLMs, whether in thinking mode or regular output, operate entirely in token space, at the same level throughout. That’s a structural limitation. JEPA works differently: it predicts in representation space, not pixels, not tokens. It predicts what the abstract state of a situation should look like going forward. Reinforcement learning plus a world model is the right approach, with language as just the UI. LeCun’s AMI is headed in this direction, and V-JEPA is the closest existing thing to it.
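The distinction that paragraph draws — predict the representation, not the raw signal — fits in a toy sketch. Everything here is illustrative: made-up dimensions, linear layers, and a single shared encoder, where real JEPA-family models use deep networks and a separate (typically EMA-updated) target encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy encoder: lift raw input into an abstract representation space."""
    return np.tanh(x @ W)

d_in, d_rep = 8, 4                    # illustrative sizes
W = rng.normal(size=(d_in, d_rep))    # encoder weights
P = rng.normal(size=(d_rep, d_rep))   # predictor weights

context = rng.normal(size=(1, d_in))  # e.g. the visible part of a scene
target = rng.normal(size=(1, d_in))   # e.g. the masked part

# JEPA-style objective: predict the target's *representation* from the
# context's representation, and measure error in that abstract space.
z_pred = encoder(context, W) @ P
z_tgt = encoder(target, W)
jepa_loss = float(np.mean((z_pred - z_tgt) ** 2))

# A generative objective would instead reconstruct `target` itself,
# paying for every unpredictable pixel- or token-level detail.
```

The point of the contrast: the encoder is free to discard unpredictable detail, so the loss only charges the model for getting the abstract state wrong — which is exactly the “higher level than tokens” argument above.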
This 7-hour podcast is the most enjoyable thing I’ve watched all year.
Every day more anxious. Speed is still too slow, tokens burning too slowly, constant overload state — like losing my mind. 😂
But then again, there are a lot of interesting things left to explore. The universe is big. Should be fun. Otherwise what else is there to do in a simulation.



