Feeds

David Heinemeier Hansson


    Great AI Steals

    Picasso got it right: Great artists steal. Even if he didn’t actually say it, and we all just repeat the quote because Steve Jobs used it. Because it strikes at the heart of creativity: None of it happens in a vacuum. Everything is inspired by something. The best ideas, angles, techniques, and tones are stolen to build everything that comes after the original.

    Furthermore, the way to learn originality is to set it aside while you learn to perfect a copy. You learn to draw by imitating the masters. I learned photography by attempting to recreate great compositions. I learned to program by aping the Ruby standard library.

    Stealing good ideas isn’t a detour on the way to becoming a master — it’s the straight route. And it’s nothing to be ashamed of.

    This, by the way, doesn’t just apply to art but to the economy as well. Japan became an economic superpower in the 80s by first poorly copying Western electronics in the decades prior. China is now following exactly the same playbook to even greater effect. You start with a cheap copy, then you learn how to make a good copy, and then you don’t need to copy at all.

    AI has sped through the phase of cheap copies. It’s now firmly established in the realm of good copies. You’re a fool if you don’t believe originality is a likely next step. In all likelihood, it’s a matter of when, not if. (And we already have plenty of early indications that it’s actually already here, on the edges.)

    Now, whether that’s good is a different question. Whether we want AI to become truly creative is a fair question — albeit a theoretical or, at best, moral one. Because it’s going to happen if it can happen, and it almost certainly can (or even has).

    Ironically, I think the peanut gallery disparaging recent advances — like the Ghibli fever — over minor details in the copying effort will only accelerate the quest toward true creativity. AI builders, like the Japanese and Chinese economies before them, are eager to demonstrate that they can exceed the originals they copied.

    All that is to say that AI is in the "Good Copy" phase of its creative evolution. Expect "The Great Artist" to emerge at any moment.

    The Year on Linux

    I've been running Linux, Neovim, and Framework for a year now, but it easily feels like a decade or more. That's the funny thing about habits: They can be so hard to break, but once you do, they're also easily forgotten.

    That's how it feels having left the Apple realm after two decades inside the walled garden. It was hard for the first couple of weeks, but since then, it’s rarely crossed my mind.

    Humans are rigid in the short term, but flexible in the long term. Blessed are the few who can retain the grit to push through that early mental resistance and reach new maxima.

    That is something that gets harder with age. I can feel it. It takes more of me now to wipe a mental slate clean and start over. To go back to being a beginner. But the reward for learning something new is as satisfying as ever.

    But it's also why I've tried to be modest with the advocacy. I don't know if most developers are better off on Linux. I mean, I believe they are, at some utopian level, especially if they work for the web, using open source tooling. But I don't know if they are as humans with limited will or capacity for change.

    Of course, it's fair to say that one simply doesn't want to. Either because one remains a fan of Apple, is in dire need of the remaining edge MacBooks retain on efficiency and battery life, or is simply content inside the ecosystem. There are plenty of reasons why someone might not want to change. It's not just about rigidity.

    Besides, it's a dead end trying to convince anyone of an alternative with the sharp end of a religious argument. That kind of crusading just seeds resentment and stubbornness. I know that all too well.

    What I've found to work much better is planting seeds and showing off your plowshare. Let whatever curiosity blooms find its own way towards your blue sky. The mimetic engine of persuasion runs much cleaner anyway.

    And for me, it's primarily about my personal computing workbench, regardless of what the rest of the world does or doesn't do. It was the same with finding Ruby. It's great when others come along for the ride, but I'd be happy taking the trip solo too.

    So consider this a postcard from a year into the Linux, Neovim, and Framework journey. The sun is still shining, the wind is in my hair, and the smile on my lips hasn't been this big since the earliest days of OS X.

    Singularity & Serenity

    The singularity is the point where artificial intelligence goes parabolic, surpassing humans writ large, and leads to rapid, unpredictable change. The intellectual seed of this concept was planted back in the '50s by early computer pioneer John von Neumann. So it’s been here since the dawn of the modern computer, but I’ve only just come around to giving the idea consideration as something other than science fiction.

    Now, this quickly becomes quasi-religious, with all the terms being as fluid as redemption, absolution, and eternity. What and when exactly is AGI (Artificial General Intelligence) or SAI (Super Artificial Intelligence)? You’ll find a million definitions.

    But it really does feel like we’re on the cusp of something. Even the most ardent AI skeptics are probably finding it hard not to be impressed with recent advances. Everything Is Ghibli might seem like a silly gimmick, but to me, it flipped a key bit here: the style persistence, solving text in image generation, and then turning those images into incredible moving pictures.

    What makes all this progress so fascinating is that it’s clear nobody knows anything about what the world will look like four years from now. It’s barely been half that time since ChatGPT and Midjourney hit us in 2022, and the leaps since then have been staggering.

    I’ve been playing with computers since the Commodore 64 entertained my childhood street with Yie Ar Kung-Fu on its glorious 1 MHz processor. I was there when the web made the internet come alive in the mid-'90s. I lined up for hours for the first iPhone to participate in the grand move to mobile. But I’ve never felt less able to predict what the next token of reality will look like.

    When you factor in recent advances in robotics and pair those with the AI brains we’re building, it’s easy to imagine all sorts of futuristic scenarios happening very quickly: from humanoid robots finishing household chores à la The Jetsons (have you seen how good it’s getting at folding?) to every movie we watch being created from a novel prompt on the spot, to, yes, even armies of droids and drones fighting our wars.
    This is one of those paradigm shifts with the potential for Total Change. Like the agricultural revolution, the industrial revolution, the information revolution. The kind that rewrites society, where it was impossible to tell in advance where we’d land.

    I understand why people find that uncertainty scary. But I choose to receive it as exhilarating instead. What good is it to fret about a future you don’t control anyway? That’s the marvel and the danger of progress: nobody is actually in charge! This is all being driven by a million independent agents chasing irresistible incentives. There’s no pause button, let alone an off-ramp. We’re going to be all-in whether we like it or not.
    So we might as well come to terms with that reality. Choose to marvel at the accelerating milestones we've been hitting rather than tremble over the next.

    This is something most religions and grand philosophies have long since figured out. The world didn’t just start changing; we’ve had these lurches of forward progress before. And humans have struggled to cope with the transition since the beginning of time. So, the best intellectual frameworks have worked on ways to deal.
    Christianity has the Serenity Prayer, which I’ve always been fond of:

    God, grant me the serenity
    to accept the things I cannot change,
    the courage to change the things I can,
    and the wisdom to know the difference.

    That’s the part most people know. But it actually continues:

    Living one day at a time,
    enjoying one moment at a time;
    accepting hardship as a pathway to peace;
    taking, as Jesus did,
    this sinful world as it is,
    not as I would have it;
    trusting that You will make all things right
    if I surrender to Your will;
    so that I may be reasonably happy in this life
    and supremely happy with You forever in the next.
    Amen.

    What a great frame for the mind!

    The Stoics were big on the same concept. Here’s Epictetus:

    Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions.

    Buddhism does this well too. Here’s the Buddha being his wonderfully brief self:

    Suffering does not follow one who is free from clinging.

    I don’t think it’s a coincidence that all these traditions converged on the idea of letting go of what you can’t control, not clinging to any specific preferred outcome. Because you’re bound to be disappointed that way. You don’t get to know the script to life in advance, but what an incredible show, if you just let it unfold.
    This is the broader view of amor fati. You should learn to love not just your own fate, but the fate of the world — its turns, its twists, its progress, and even the inevitable regressions.

    The singularity may be here soon, or it may not. You’d be a fool to be convinced either way. But you’ll find serenity in accepting whatever happens.

    It's five grand a day to miss our S3 exit

    We're spending just shy of $1.5 million/year on AWS S3 at the moment to host files for Basecamp, HEY, and everything else. The only way we were able to get the pricing that low was by signing a four-year contract. That contract expires this summer, June 30, so that's our departure date for the final leg of our cloud exit.

    We've already racked the replacement from Pure Storage in our two primary data centers: a combined 18 petabytes, securely replicated a thousand miles apart. It's a gorgeous rack full of blazing-fast NVMe storage modules, with each card in the chassis now capable of storing 150 TB.

    Pure Storage comes with an S3-compatible API, so there's no need for Ceph, MinIO, or any of the other object storage solutions you might otherwise need if you were doing this exercise on commodity hardware. That makes the swap pretty easy from the app side.

    But there's still work to do. We have to transfer almost six petabytes out of S3. In an earlier age, that egress alone would have cost hundreds of thousands of dollars in fees. But now AWS offers a free 60-day egress window for anyone who wants to leave, so that drops the cost to $0. Nice!

    It takes a while to transfer that much data, though. Even on the fat 40-Gbit pipe we have set aside for the purpose, it'll probably take at least three weeks, once you factor in overhead and some babysitting of the process.
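    The three-week estimate is easy to sanity-check with back-of-the-envelope math. A minimal sketch, assuming ~6 PB of payload on the 40-Gbit pipe; the ~70% effective utilisation figure is an assumption standing in for the "overhead and babysitting", not a measurement:

```python
# Back-of-the-envelope transfer window for the S3 exit.
# Assumptions: 6 PB payload, a dedicated 40 Gbit/s link, and ~70%
# effective utilisation once protocol overhead and pauses are factored in.
payload_bits = 6 * 10**15 * 8      # 6 petabytes, expressed in bits
link_bps = 40 * 10**9              # 40 Gbit/s pipe
utilisation = 0.7                  # assumed effective throughput

seconds = payload_bits / (link_bps * utilisation)
days = seconds / 86_400
print(round(days))                 # roughly 20 days, i.e. about three weeks
```

    At full line rate it would be about two weeks; the assumed utilisation factor is what pushes it to "at least three weeks".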

    That's when it's good to remind ourselves why June 30th matters. And the reminder math pencils out in nice, round numbers for easy recollection: If we don't get this done in time, we'll be paying a cool five thousand dollars a day to continue using S3 (if all the files are still there). Yikes!

    That's $35,000/week! That's $150,000/month!

    Pretty serious money for a company of our size. But so are the savings: over five years, we'll now save almost five million dollars! Maybe even more, depending on the growth in the files we need to store for customers. The outlay is about $1.5 million for the Pure Storage hardware, plus a bit less than a million over five years for warranty and support.

    But those big numbers always seem a bit abstract to me. The idea of paying $5,000/day, if we miss our departure date, is awfully concrete in comparison.
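    The round numbers above can be tallied in a few lines, using the figures from the post (the $5,000/day is the post-contract S3 rate, distinct from the discounted ~$1.5M/year contract spend):

```python
# The "reminder math" in round numbers: the daily cost of missing the
# June 30 deadline, and the rough five-year savings from the move.
daily_s3_cost = 5_000
print(daily_s3_cost * 7)     # 35,000 per week
print(daily_s3_cost * 30)    # 150,000 per month

five_year_s3_spend = 1_500_000 * 5   # staying on S3 at ~$1.5M/year
pure_hardware = 1_500_000            # one-time Pure Storage purchase
support = 1_000_000                  # ~5 years of warranty and support
print(five_year_s3_spend - pure_hardware - support)  # 5,000,000 saved
```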

    [Image: pure-storage.jpeg — the Pure Storage rack]


    To hell with forever

    Immortality always sounded like a curse to me. But especially now, having passed the halfway point of the average wealthy male life expectancy. Another scoop of life as big as the one I've already been served seems more than enough, thank you very much.

    Does that strike you as morbid?

    It's funny, people seem to have no problem understanding satiation when it comes to the individual parts of life. Enough delicious cake, no more rides on the rollercoaster, the end of a great party. But not life itself.

    Why?

    The eventual end strikes me as beautiful relief. Framing the idea that you can see enough, do enough, be enough. And have enjoyed the bulk of it, without wanting it to go on forever.

    Have you seen Highlander? It got panned on its initial release in the 80s. Even Sean Connery couldn't save it with the critics at the time. But I love it. It's one of my all-time favorite movies. It's got a silly story about a worldwide tournament of immortal Highlanders who live forever, lest they get their heads chopped off, and then the last man standing wins... more life?

    Yeah, it doesn't actually make a lot of sense. But it nails the sadness of forever. The loneliness, the repetition, the inevitable cynicism with humanity. Who wants to live forever, indeed.

    It's the same theme in Björk's wonderfully melancholic song I've Seen It All. It's a great big world, but eventually every unseen element will appear as but a variation on an existing theme. Even surprise itself will succumb to familiarity.

    Even before the last day, you can look forward to finality, too. I love racing, but I'm also drawn to the day when the reflexes finally start to fade, and I'll hang up the helmet. One day I will write the last line of Ruby code, too. Sell the last subscription. Write the last tweet. How merciful.

    It gets harder with people you love, of course. Harder to imagine the last day with them. But I didn't know my great-great-grandfather, and can easily picture him passing with the satisfaction of seeing his lineage carry on without him.

    One way to think of this is to hold life with a loose grip. Like a pair of drumsticks. I don't play, but I'm told that the music flows better when you avoid strangling them in a death grip. And then you enjoy keeping the beat until the song ends.

    Amor fati. Amor mori.

    Age is a problem at Apple

    The average age of Apple's board members is 68! Nearly half are over 70, and the youngest is 63. It’s not much better with the executive team, where the average age hovers around 60. I’m all for the wisdom of our elders, but it’s ridiculous that the world’s premier tech company is now run by a gerontocracy.

    And I think it’s starting to show. The AI debacle is just the latest example. I can picture the board presentation on Genmoji: “It’s what the kids want these days!” It’s a dumb feature because nobody on Apple’s board or in its leadership has probably ever used it outside a quick demo.

    I’m not saying older people can’t be an asset. Hell, at 45, I’m no spring chicken myself in technology circles! But you need a mix. You need to blend fluid and crystallized intelligence. You need some people with a finger on the pulse, not just some bravely keeping one.

    Once you see this, it’s hard not to view slogans like “AI for the rest of us” through that lens. It’s as if AI is like programming a VCR, and you need the grandkids to come over and set it up for you.

    By comparison, the average age on Meta’s board is 55. They have three members in their 40s. Steve Jobs was 42 when he returned to Apple in 1997. He was 51 when he introduced the iPhone. And he was gone — from Apple and the world — at 56.

    Apple literally needs some fresh blood to turn the ship around.


    The 80s are still alive in Denmark

    I grew up in the 80s in Copenhagen and roamed the city on my own from an early age. My parents rarely had any idea where I went after school, as long as I was home by dinner. They certainly didn’t have direct relationships with the parents of my friends. We just figured things out ourselves. It was glorious.

    That’s not the type of childhood we were able to offer our kids in modern-day California. Having to drive everywhere is, of course, its own limitation, but that’s only half the problem. The other half is the expectation that parents are involved in almost every interaction. Play dates are commonly arranged via parents, even for fourth or fifth graders.

    The new hysteria over smartphones doesn’t help either, as it cuts many kids off from being able to make their own arrangements entirely (since the house phone has long since died too).

    That’s not how my wife grew up in the 80s in America either. The United States of that age was a lot like what I experienced in Denmark: kids roaming around on their own, parents blissfully unaware of where their offspring were much of the time, and absolutely no expectation that parents would arrange play dates or even sleepovers.

    I’m sure there are still places in America where life continues like that, but I don’t personally know of any parents who are able to offer that 80s lifestyle to their kids — not in New York, not in Chicago, not in California. Maybe this life still exists in Montana? Maybe it’s a socioeconomic thing? I don’t know.

    But what I do know is that Copenhagen is still living in the 80s! We’ve been here off and on over the last several years, and just today, I was struck by the fact that one of our kids had left school after it ended early, biked halfway across town with his friend, and was going to spend the day at his place. And we didn’t get an update on that until much later.

    Copenhagen is a compelling city in many ways, but if I were to credit why the US News and World Report just crowned Denmark the best country for raising children in 2025, I’d say it’s the independence — carefree independence. Danish kids roam their cities on their own, manage their social relationships independently, and do so in relative peace and safety.

    I’m a big fan of Jonathan Haidt’s work on What Happened In 2013, which he captured in The Coddling of the American Mind. That was a very balanced book, and it called out the lack of unsupervised free play and independence as key contributors to the rise in child fragility.

    But it also pinned smartphones and social media with a large share of the blame, despite the fact that the effect, especially on boys, is very much a source of ongoing debate. I’m not arguing that excessive smartphone usage — and certainly social-media brain rot — is good for kids, but this explanation is proving to be too easy a scapegoat for all the ills plaguing American youth.

    And it certainly seems like upper-middle-class American parents have decided that blaming the smartphone for everything is easier than interrogating the lack of unsupervised free play, rough-and-tumble interactions for boys, and early childhood independence.

    It also just doesn’t track in countries like Denmark, where the smartphone is just as prevalent, if not more so, than in America. My oldest had his own phone by third grade, and so did everyone else in his class — much earlier than Haidt recommends. And it was a key tool for them to coordinate the independence that The Coddling of the American Mind called for more of.

    Look, I’m happy to see phones parked during school hours. Several schools here in Copenhagen do that, and there’s a new proposal pending in parliament to make that the law across the land. Fine!

    But I think it’s delusional of American parents to think that banning the smartphone — further isolating their children from independently managing their social lives — is going to be the one quick fix that cures the anxious generation.

    What we need is more 80s-style freedom and independence for kids in America.

    Apple needs a new asshole in charge

    When things are going well, managers can fool themselves into thinking that people trying their best is all that matters. Poor outcomes are just another opportunity for learning! But that delusion stops working when the wheels finally start coming off — like they have now for Apple and its AI unit. Then you need someone who cares about the outcome above the effort. Then you need an asshole.

    In management parlance, an asshole is someone who cares less about feelings or effort and more about outcomes. Steve Jobs was one such asshole. So, it seems, is Musk. Gates certainly was as well. Most top technology chiefs who've had to really fight in competitive markets for the top prize fall into this category.

    Apple's AI management is missing an asshole:

    Walker defended his Siri group, telling them that they should be proud. Employees poured their “hearts and souls into this thing,” he said. “I saw so many people giving everything they had in order to make this happen and to make incredible progress together.”

    So it's stuck nurturing feelings:

    “You might have co-workers or friends or family asking you what happened, and it doesn’t feel good,” Walker said. “It’s very reasonable to feel all these things.” He said others are feeling burnout and that his team will be entitled to time away to recharge to get ready for “plenty of hard work ahead.”

    These are both quotes from the Bloomberg report on the disarray inside Apple, following the admission that the star feature of the iPhone 16 — the Apple Intelligence that could reach inside your personal data — won't ship until the iPhone 17, if at all.

    John Gruber from Daring Fireball dug up this anecdote from the last time Apple seriously botched a major software launch:

    Steve Jobs doesn’t tolerate duds. Shortly after the launch event, he summoned the MobileMe team, gathering them in the Town Hall auditorium in Building 4 of Apple’s campus, the venue the company uses for intimate product unveilings for journalists. According to a participant in the meeting, Jobs walked in, clad in his trademark black mock turtleneck and blue jeans, clasped his hands together, and asked a simple question: “Can anyone tell me what MobileMe is supposed to do?”

    Having received a satisfactory answer, he continued, “So why the fuck doesn’t it do that?”

    For the next half-hour Jobs berated the group. “You’ve tarnished Apple’s reputation,” he told them. “You should hate each other for having let each other down.” The public humiliation particularly infuriated Jobs. 

    Can you see the difference? This is an asshole in action.

    Apple needs to find a new asshole and put them in charge of the entire product line. Cook clearly isn't up to the task, and the job is currently spread thinly across a whole roster of senior VPs. Little fiefdoms. This is poison to the integrated magic that was Apple's trademark for so long.

    The most interesting people

    We didn’t use to need an explanation for having kids. That was just life. That’s just what you did. But now we do, because now we don’t.

    So allow me: Having kids means making the most interesting people in the world. Not because toddlers or even teenagers are intellectual oracles — although life through their eyes is often surprising and occasionally even profound — but because your children will become the most interesting people to you.

    That’s the important part. To you.

    There are no humans on earth I’m as interested in as my children. Their maturation and growth are the greatest show on the planet. And having a front-seat ticket to this performance is literally the privilege of a lifetime.

    But giving a review of this incredible show just doesn’t work. I could never convince a stranger that my children are the most interesting people in the world, because they wouldn’t be, to them.

    So words don’t work. It’s a leap of faith. All I can really say is this: Trust me, bro.

    We wash our trash to repent for killing God

    Denmark is technically and officially still a Christian nation. Lutheranism is written into the constitution. The government has a ministry for the church. Most Danes pay 1% of their earnings directly to fund the State religion. But God is as dead here as anywhere in the Western world. Less than 2% attend church service on a weekly basis. So one way to fill the void is through climate panic and piety.

    I mean, these days, you can scarcely stroll past stores in the swankier parts of Copenhagen without being met by an endless parade of ads carrying incantations towards sustainability, conservation, and recycling. It's everywhere.

    Hilariously, sometimes this even includes recommending that customers don’t buy the product. I went to a pita place for lunch the other day. The menu had a meat shawarma option, and alongside it was a plea not to order it too often because it’d be better for the planet if you picked the falafel instead.

    But the hysteria peaks with the trash situation. It’s now common for garbage rooms across Copenhagen to feature seven or more bins for sorting disposals. Despite trash-sorting robots being able to do this job far better than humans in most cases, you see Danes dutifully sorting and subdividing their waste with a pious obligation worthy of the new climate deity.

    Yet it’s not even the sorting that gets me — it’s the washing. You can’t put plastic containers with food residue into the recycling bucket, so you have to rinse them first. This leads to the grotesque daily ritual of washing trash (and wasting water galore in the process!).

    Plus, most people in Copenhagen live in small apartments, and all that separated trash has to be stored separately until the daily pilgrimage to the trash room. So it piles up all over the place.

    This is exactly what Nietzsche meant by “God is dead” — his warning that we’d need to fill the void with another centering orientation toward the world. And clearly, climatism is stepping up as a suitable alternative for the Danes. It’s got guilt, repentance, and plenty of rituals to spare. Oh, and its heretics too.

    Look, I'd like a clean planet as much as the next sentient being. I'm not crying any tears over the fact that gas-powered cars are quickly disappearing from the inner-city of Copenhagen. I love biking! I wish we'd get a move on with nuclear for consistent, green energy. But washing or sorting my trash when a robot could do a better job just to feel like "I'm doing my part"? No.

    It’s like those damn paper straws that crumble halfway through your smoothie. The point of it all seems to be self-inflicted, symbolic suffering — solely to remind you of your good standing with the sacred lord of recycling, refuting the plastic devil.

    And worse, these small, meaningless acts of pious climate service end up working like Catholic indulgences. We buy a good conscience by washing trash so we don't have to feel guilty about setting new records flying for fun.

    I’m not religious, but I’m starting to think it’d be nicer to spend a Sunday morning in the presence of the Almighty than to keep washing trash as pagan replacement therapy.

    Our switch to Kamal is complete

    In a fit of frustration, I wrote the first version of Kamal in six weeks at the start of 2023. Our plan to get out of the cloud was getting bogged down in enterprisey pricing and Kubernetes complexity. And I refused to accept that running our own hardware had to be that expensive or that convoluted. So I got busy building a cheap and simple alternative. 

    Now, just two years later, Kamal is deploying every single application in our entire heritage fleet, along with everything in active development. That finalizes a perfectly uniform mode of deployment for every web app we've built over the past two decades and still maintain.

    See, we have this obsession at 37signals: That the modern build-boost-discard cycle of internet applications is a scourge. That users ought to be able to trust that when they adopt a system like Basecamp or HEY, they don't have to fear eviction from the next executive re-org. We call this obsession Until The End Of The Internet.

    That obsession isn't free, but it's worth it. It means we're still operating the very first version of Basecamp for thousands of paying customers. That's the OG code base from 2003! Which hasn't seen any updates since 2010, beyond security patches, bug fixes, and performance improvements. But we're still operating it, and, along with every other app in our heritage collection, deploying it with Kamal.

    That just makes me smile, knowing that we have customers who adopted Basecamp in 2004, and are still able to use the same system some twenty years later. In the meantime, we've relaunched and dramatically improved Basecamp many times since. But for customers happy with what they have, there's no forced migration to the latest version.

    I very much had all of this in mind when designing Kamal. That's one of the reasons I really love Docker. It allows you to encapsulate an entire system, with all of its dependencies, and run it until the end of time. Kind of how modern gaming emulators can run the original ROM of Pac-Man or Pong to perfection and eternity.

    Kamal seeks to be but a simple wrapper and workflow around this wondrous simplicity. Complexity is but a bridge — and a fragile one at that. To build something durable, you have to make it simple.

    Closing the borders alone won't fix the problems

    Denmark has been reaping lots of delayed accolades for its relatively strict immigration policy lately. The Swedes and the Germans in particular are now eager to take inspiration from The Danish Model, given their predicaments. These are the very same countries that until recently condemned Denmark for lacking the open-arms, open-border policies they championed as Moral Superpowers.

    But even in Denmark, thirty years after public opposition to mass immigration started getting real political representation, the consequences of culturally incompatible descendants of MENAPT (Middle East, North Africa, Pakistan, Turkey) immigrants continue to stress the high-trust societal model.

    Here are just three major cases that have been covered in the Danish media in 2025 alone:

    1. Danish public schools are increasingly struggling with violence and threats against students and teachers, primarily from descendants of MENAPT immigrants. In schools with 30% or more immigrants, violence is twice as prevalent. This is causing a flight to private schools from parents who can afford it (including some Syrians!). Some teachers are quitting the profession as a result, saying "the Quran runs the classroom".
    2. Danish women are increasingly feeling unsafe in the nightlife. The mayor of the country's third largest city, Odense, says he knows why: "It's groups of young men with an immigrant background that's causing it. We might as well be honest about that." But unfortunately, the only suggestion he had to deal with the problem was that "when [the women] meet these groups... they should take a big detour around them".
    3. A soccer club from the infamous ghetto area of Vollsmose got national attention because every other team in their league refused to play them, due to the team's long history of violent assaults and death threats against opposing teams and referees. Bizarrely, this led to a situation where the team got to the top of its division because they'd "win" every forfeited match.

    Problems of this sort have existed in Denmark for well over thirty years. So in a way, none of this should be surprising. But it actually is. Because it shows that long-term assimilation just isn't happening at a scale to tackle these problems. In fact, data shows the opposite: Descendants of MENAPT immigrants are more likely to be violent and troublesome than their parents.

    That's an explosive point because it blows up the thesis that time will solve these problems. Showing instead that it actually just makes it worse. And then what?

    This is particularly pertinent in the analysis of Sweden. After the "far right" Sweden Democrats got into government, new immigrant arrivals have plummeted. But unfortunately, the net share of immigrants is still increasing, in part because of family reunifications, and thus the problems continue.

    Meaning even if European countries "close the borders", they're still condemned to deal with the damning effects of maladjusted MENAPT immigrant descendants for decades to come. If the intervention stops there.

    There are no easy answers here. Obviously, if you're in a hole, you should stop digging. And Sweden has done just that. But just because you aren't compounding the problem doesn't mean you've found a way out. Denmark proves to be both a positive example of minimizing the digging while also a cautionary tale that the hole is still there.

    Apple does AI as Microsoft did mobile

    When the iPhone first appeared in 2007, Microsoft was sitting pretty with their mobile strategy. They'd been early to the market with Windows CE, they were fast-following the iPod with their Zune. They also had the dominant operating system, the dominant office package, and control of the enterprise. The future on mobile must have looked so bright!

    But of course now, we know it wasn't. Steve Ballmer infamously dismissed the iPhone with a chuckle, as he believed all of Microsoft's past glory would guarantee them mobile victory. He wasn't worried at all. He clearly should have been!

    After reliving that Ballmer moment, it's uncanny to watch this CNBC interview from one year ago with Johny Srouji and John Ternus from Apple on their AI strategy. Ternus even repeats the chuckle!! Exuding the same delusional confidence that lost Ballmer's Microsoft any serious part in the mobile game. 

    But somehow, Apple's problems with AI seem even more dire. Because there's apparently no one steering the ship. Apple has been promising customers a bag of vaporware since last fall, and they're nowhere close to being able to deliver on the shiny concept demos. The ones that were going to make Apple Intelligence worthy of its name, and not just terrible image generation that is years behind the state of the art.

    Nobody at Apple seems able or courageous enough to face the music: Apple Intelligence sucks. Siri sucks. None of the vaporware is anywhere close to happening. Yet as late as last week, you have Cook promoting the new MacBook Air with "Apple Intelligence". Yikes.

    This is partly down to the org chart. John Giannandrea is Apple's VP of ML/AI, and he reports directly to Tim Cook. He's been in the seat since 2018. But Cook evidently does not have the product savvy to be able to tell bullshit from benefit, so he keeps giving Giannandrea more rope. Now the fella has hung Apple's reputation on vaporware, promised all iPhone 16 customers something magical that just won't happen, and even spec-bumped all their devices with more RAM for nothing but diminished margins. Ouch.

    This is what regression to the mean looks like. This is what fiefdom management looks like. This is what having a company run by a logistics guy looks like. Apple needs a leadership reboot, stat. That asterisk is a stain.



    apple-id-asterisk.png


    Beans and vibes in even measure

    Bean counters have a bad rep for a reason. And it’s not because paying attention to the numbers is inherently unreasonable. It’s because weighing everything exclusively by its quantifiable properties is an impoverished way to view business (and the world!).

    Nobody presents this caricature better than the MBA types who think you can manage a business entirely in the abstract realm of "products," "markets," "resources," and "deliverables." To hell with that. The death of all that makes for a breakout product or service happens when the generic lingo of management theory takes over.

    This is why founder-led operations often keep an edge. Because when there’s someone at the top who actually gives a damn about cars, watches, bags, software, or whatever the hell the company makes, it shows up in a million value judgments that can’t be quantified neatly on a spreadsheet.

    Now, I love a beautiful spreadsheet that shows expanding margins, healthy profits, and customer growth as much as any business owner. But much of the time, those figures are derivatives of doing all the stuff that you can’t compute and that won’t quantify.

    But this isn’t just about running a better business by betting on unquantifiable elements that you can’t prove but still believe matter. It’s also about the fact that doing so is simply more fun! It’s more congruent. It’s vibe management.

    And no business owner should ever apologize for having fun, following their instincts, or trusting that the numbers will eventually show that doing the right thing, the beautiful thing, the poetic thing is going to pay off somehow. In this life or the next.

    Of course, you’ve got to get the basics right. Make more than you spend. Don’t get out over your skis. But once there’s a bit of margin, you owe it to yourself to lean on that cushion and lead the business primarily on the basis of good vibes and a long vision.

    Air purifiers are a simple answer to allergies

    I developed seasonal allergies relatively late in life. From my late twenties onward, I spent many miserable days in the throes of sneezing, headache, and runny eyes. I tried everything the doctors recommended for relief. About a million different types of medicine, several bouts of allergy vaccinations, and endless testing. But never once did an allergy doctor ask the basic question: What kind of air are you breathing?

    Turns out that's everything when you're allergic to pollen, grass, and dust mites! The air. That's what's carrying all this particulate matter, so if your idea of proper ventilation is merely to open a window, you're inviting in your nasal assailants. No wonder my symptoms kept escalating.

    For me, the answer was simply to stop breathing air full of everything I'm allergic to while working, sleeping, and generally just being inside. And the way to do that was to clean the air of all those allergens with air purifiers running HEPA-grade filters.

    That's it. That was the answer!

    After learning this, I outfitted everywhere we live with these machines of purifying wonder: One in the home office, one in the living area, one in the bedroom. All monitored for efficiency using Awair air sensors. Aiming to have the PM2.5 measure read a fat zero whenever possible.

    In America, I've used the Alen BreatheSmart series. They're great. And in Europe, I've used the Philips ones. Also good.

    It's been over a decade like this now. It's exceptionally rare that I have one of those bad allergy days anymore. It can still happen, of course — if I spend an entire day outside, breathing in allergens in vast quantities. But as with almost everything, the dose makes the poison. Breathing in some allergens, some of the time, is entirely different from breathing all of them, all of the time.

    I think about this often when I see a doctor for something. Here was this entire profession of allergy specialists, and I saw at least a handful of them while I was trying to find a medical solution. None of them even thought about dealing with the environment. The cause of the allergy. Their entire field of view was restricted to dealing with mitigation rather than prevention.

    Not every problem, medical or otherwise, has a simple solution. But many problems do, and you have to be careful not to be so smart that you can't see it.

    Human service is luxury

    Maybe one day AI will answer every customer question flawlessly, but we're nowhere near that reality right now. I can't tell you how often I've been stuck in some god-forsaken AI loop or phone tree WHEN ALL I WANT IS A HUMAN. So I end up either just yelling "operator", "operator", "operator" (the modern-day mayday!) or smashing zero over and over. It's an unworthy interaction for any premium service.

    Don't get me wrong. I'm pretty excited about AI. I've seen it do some incredible things. And of course it's just going to keep getting better. But in our excitement about the technical promise, I think we're forgetting that humans need more than correct answers. Customer service at its best also offers understanding and reassurance. It offers a human connection.

    Especially as AI eats the low-end, commodity-style customer support. The sort that was always done poorly, by disinterested people, rapidly churning through a perceived dead-end job, inside companies that only ever saw support as a cost center. Yeah, nobody is going to cry a tear for losing that.

    But you know that isn't all there is to customer service. Hopefully you've had a chance to experience what it feels like when a cheerful, engaged human is interested in helping you figure out what's wrong or how to do something right. Because they know exactly what they're talking about. Because they've helped thousands of others through exactly the same situation. That stuff is gold.

    Partly because it feels bespoke. A customer service agent who's good at their job knows how to tailor the interaction not just to your problem, but to your temperament. Because they've seen all the shapes. They can spot an angry-but-actually-just-frustrated type from a thousand miles away. They can tell a timid-but-curious type too. And then deliver exactly what either needs in that moment. That's luxury.

    That's our thesis for Basecamp, anyway. That by treating customer service as a career, we'll end up with the kind of agents that embody this luxury, and our customers will feel the difference.

    AMD in everything

    Back in the mid 90s, I had a friend who was really into raytracing, but needed to nurture his hobby on a budget. So instead of getting a top-of-the-line Intel Pentium machine, he bought two AMD K5 boxes, and got a faster rendering flow for less money. All I cared about in the 90s was gaming, though, and for that, Intel was king, so to me, AMD wasn't even a consideration.

    And that's how it stayed for the better part of the next three decades. AMD would put out budget parts that might make economic sense in narrow niches, but Intel kept taking all the big trophies in gaming, in productivity, and on the server.

    As late as the end of the 2010s, we were still buying Intel for our servers at 37signals. Even though AMD was getting more competitive, and the price-watt-performance equation was beginning to tilt in their favor.

    By the early 2020s, though, AMD had caught up on the server, and we haven't bought Intel since. The AMD EPYC line of chips is simply superior to anything Intel offers in our price/performance window. Today, the bulk of our new fleet runs on dual EPYC 9454s for a total of 96 cores(!) per machine. They're awesome.

    It's been the same story on the desktop and laptop for me. After switching to Linux last year, I've been all in on AMD. My beloved Framework 13 is rocking an AMD 7640U, and my desktop machine runs on an AMD 7950X. Oh, and my oldest son just got a new gaming PC with an AMD 9900X, and my middle son has an AMD 8945HS in his gaming laptop. It's all AMD in everything!

    So why is this? Well, clearly the clever crew at AMD is putting out some great CPU designs lately with Lisa Su in charge. I'm particularly jazzed about the upcoming Framework desktop, which runs the latest Max+ 395 chip, and can apportion up to 110GB of memory as VRAM (great for local AI!). This beast punches a multi-core score that's on par with that of an M4 Pro, and it's no longer that far behind in single-core either. But all the glory doesn't go to AMD alone; it's just as much a triumph of TSMC.

    TSMC stands for Taiwan Semiconductor Manufacturing Company. They're the world leader in advanced chip making, and key to the story of how Apple was able to leapfrog the industry with the M-series chips back in 2020. Apple has long been the top customer for TSMC, so they've been able to reserve capacity on the latest manufacturing processes (called "nodes"), and as a result had a solid lead over everyone else for a while.

    But that lead is evaporating fast. That new Max+ 395 is showing that AMD has nearly caught up in terms of raw grunt, and the efficiency is no longer a million miles away either. This is again largely because AMD has been able to benefit from the same TSMC-powered progress that's also propelling Apple.

    But you know who it's not propelling? Intel. They're still trying to get their own chip-making processes to perform competitively, but so far it looks like they're just falling further and further behind. The latest Intel boards are more expensive and run slower than the competition from Apple, AMD, and Qualcomm. And there appears to be no easy fix to sort it all out around the corner.

    TSMC really is lifting all the boats behind its innovation locks. Qualcomm, just like AMD, has nearly caught up to Apple with its latest chips. The 8 Elite unit in my new Samsung S25 is faster than the A18 Pro in the iPhone 16 Pro in multi-core tests, and very close in single-core. It's also just as efficient now.

    This is obviously great for Android users, who for a long time had to suffer the indignity of truly atrocious CPU performance compared to the iPhone. It was so bad for a while that we had to program our web apps differently for Android, because they simply didn't have the power to run JavaScript fast enough! But that's all history now.

    But as much as I now cheer for Qualcomm's chips, I'm even more chuffed about the fact that AMD is on a roll. I spend far more time in front of my desktop than I do any other computer, and after dumping Apple, it's a delight to see that the M-series advantage is shrinking to irrelevance fast. There's of course still the software reason for why someone would pick Apple, and they continue to make solid hardware, but the CPU playing field is now being leveled.

    This is obviously a good thing if you're a fan of Linux, like me. Framework in particular has invigorated a credible alternative to the sleek, unibody but ultimately disposable nature of the reigning MacBook laptops. By focusing on repairability, upgradeability, and superior keyboards, we finally have an alternative for developer laptops that doesn't just feel like a cheap copy of a MacBook. And thanks to AMD pushing the envelope, these machines are rapidly closing the remaining gaps in performance and efficiency.

    And oh how satisfying it must be to sit as CEO of AMD now. The company was founded just one year after Intel, back in 1969, but for its entire existence, it's lived in the shadow of its older brother. Now, thanks to TSMC, great leadership from Lisa Su, and a crack team of chip designers, they're finally reaping the rewards. That is one hell of a journey to victory!

    So three cheers for AMD! A tip of the hat to TSMC. And what a gift to developers and computer enthusiasts everywhere that Apple once more has some stiff competition in the chip space.

    The New York Times gives liberals The Danish Permission to pivot on mass immigration

    One of the key roles The New York Times plays in American society is as guardian of the liberal Overton window. Its editorial line sets the terms for what's permissible to discuss in polite circles on the center left. Whether it's covid mask efficacy, trans kids, or, now, mass immigration. When The New York Times allows the counter argument to liberal orthodoxy to be published, it signals to its readers that it's time to pivot.

    On mass immigration, the center-left liberal orthodoxy has for the last decade in particular been that this is an unreserved good. It's cultural enrichment! It's much-needed workers! It's a humanitarian imperative! Any opposition was treated as de facto racism, and the idea that a country would enforce its own borders as evidence of early fascism. But that era is coming to a close, and The New York Times is using The Danish Permission to prepare its readers for the end.

    As I've often argued, Denmark is an incredibly effective case study in such arguments, because it's commonly thought of as the holy land of progressivism. Free college, free health care, amazing public transit, obsessive about bikes, and a solid social safety net. It's basically everything people on the center left ever thought they wanted from government. In theory, at least.

    In practice, all these government-funded benefits come with a host of trade-offs that many upper middle-class Americans (the primary demographic for The New York Times) would find difficult to swallow. But I've covered that in detail in The reality of the Danish fairytale, so I won't repeat that here.

    Instead, let's focus on the fact that The New York Times is now begrudgingly admitting that the main reason Europe has turned to the right, in election after election recently, is due to the problems stemming from mass immigration across the continent and the channel.

    For example, here's a bit about immigrant crime being higher:

    Crime and welfare were also flashpoints: Crime rates were substantially higher among immigrants than among native Danes, and employment rates were much lower, government data showed.

    It wasn't long ago that recognizing higher crime rates among MENAPT immigrants to Europe was seen as a racist dog whistle. And every excuse imaginable was leveled at the undeniable statistics showing that immigrants from countries like Tunisia, Lebanon, and Somalia are committing violent crime at rates 7-9 times higher than ethnic Danes (and that these statistics are essentially the same in Norway and Finland too).

    Or how about this one: Recognizing that many immigrants from certain regions were loafing on the welfare state in ways that really irked the natives:

    One source of frustration was the fact that unemployed immigrants sometimes received resettlement payments that made their welfare benefits larger than those of unemployed Danes.

    Or the explicit acceptance that a strong social welfare state requires a homogeneous culture in order to sustain the trust needed for its support:

    Academic research has documented that societies with more immigration tend to have lower levels of social trust and less generous government benefits. Many social scientists believe this relationship is one reason that the United States, which accepted large numbers of immigrants long before Europe did, has a weaker safety net. A 2006 headline in the British publication The Economist tartly summarized the conclusion from this research as, “Diversity or the welfare state: Choose one.”

    Diversity or welfare! That again would have been an absolutely explosive claim to make not all that long ago.

    Finally, there's the acceptance that cultural incompatibility, such as on the role of women in society, is indeed a problem:

    Gender dynamics became a flash point: Danes see themselves as pioneers for equality, while many new arrivals came from traditional Muslim societies where women often did not work outside the home and girls could not always decide when and whom to marry.

    It took a while, but The New York Times is now recognizing that immigrants from some regions really do commit vastly more violent crime, are net-negative contributors to the state budgets (by drawing benefits at higher rates and being unemployed more often), and that together with the cultural incompatibilities, end up undermining public trust in the shared social safety net. 

    The consequence of this admission is dawning not only on The New York Times, but also on other liberal entities around Europe:

    Tellingly, the response in Sweden and Germany has also shifted... Today many Swedes look enviously at their neighbor. The foreign-born population in Sweden has soared, and the country is struggling to integrate recent arrivals into society. Sweden now has the highest rate of gun homicides in the European Union, with immigrants committing a disproportionate share of gun violence. After an outburst of gang violence in 2023, Ulf Kristersson, the center-right prime minister, gave a televised address in which he blamed “irresponsible immigration policy” and “political naïveté.” Sweden’s center-left party has likewise turned more restrictionist.

    All these arguments are in service of the article's primary thesis: To win back power, the left, in Europe and America, must pivot on mass immigration, like the Danes did. Because only by doing so are they able to counter the threat of "the far right".

    The piece does a reasonable job accounting for the history of this evolution in Danish politics, except for the fact that it leaves out the main protagonist. The entire account is written from the self-serving perspective of the Danish Social Democrats, and it shows. It tells a tale of how it was actually Social Democrat mayors who first spotted the problems, and well, it just took a while for the top of the party to correct. Bullshit.

    The real reason the Danes took this turn is that "the far right" won in Denmark, and The Danish People's Party deserve the lion's share of the credit. They started in 1995, quickly set the agenda on mass immigration, and by 2015, they were the second largest party in the Danish parliament. 

    Does that story ring familiar? It should. Because it's basically what's been happening in Sweden, France, Germany, and the UK lately. The mainstream parties ignored the grave concerns about mass immigration from their electorates, and only when "the far right" surged as a result did the center-left and center-right parties grow interested in changing course.

    Now on some level, this is just democracy at work. But it's also hilarious that this process, where voters choose parties that champion the causes they care about, has been labeled The Grave Threat to Democracy in recent years. Whether it's Trump, Le Pen, Weidel, or Kjærsgaard, they've all been met with contempt or worse for channeling legitimate voter concerns about immigration.

    I think this is the point that's sinking in at The New York Times. Opposition to mass immigration and multi-culturalism in Europe isn't likely to go away. The mayhem that's swallowing Sweden is a reality too obvious to ignore. And as long as the center left keeps refusing to engage with the topic honestly, and instead hides behind some anti-democratic firewall, they're going to continue to lose terrain.

    Again, this is how democracies are supposed to work! If your political class is out of step with the mood of the populace, they're supposed to lose. And this is what's broadly happening now. And I think that's why we're getting this New York Times pivot. Because losing sucks, and if you're on the center left, you'd like to see that end.

    Stick with the customer

    One of the biggest mistakes that new startup founders make is trying to get away from the customer-facing roles too early. Whether it's customer support or it's sales, it's an incredible advantage to have the founders doing that work directly, and for much longer than they find comfortable.

    The absolute worst thing you can do is hire a sales person or a customer service agent too early. You'll miss all the golden nuggets that customers throw at you for free when they're rejecting your pitch or complaining about the product. Seeing these reasons paraphrased or summarized destroys all the nutrients in their insights. You want that whole-grain feedback straight from the customer's mouth!

    When we launched Basecamp in 2004, Jason was doing all the customer service himself. And he kept doing it like that for three years!! By the time we hired our first customer service agent, Jason was doing 150 emails/day. The business was doing millions of dollars in ARR. And Basecamp got infinitely better, both as a market proposition and as a product, because Jason could funnel all that feedback into decisions and positioning.

    For a long time after that, we did "Everyone on Support". Frequently rotating programmers, designers, and founders through a day of answering emails directly to customers. The dividends of doing this were almost as high as having Jason run it all in the early years. We fixed an incredible number of minor niggles and annoying bugs because programmers found it easier to solve the problem than to apologize for why it was there.

    It's not easy doing this! Customers often offer their valuable insights wrapped in rude language, unreasonable demands, and bad suggestions. That's why many founders quit the business of dealing with them at the first opportunity. That's why few companies ever do "Everyone on Support". That's why there's such eagerness to reduce support to an AI-only interaction.

    But quitting dealing with customers early, not just in support but also in sales, is an incredible handicap for any startup. You don't have to do everything that every customer demands of you, but you should certainly listen to them. And you can't listen well if the sound is being muffled by early layers of indirection.

    When to give up

    Most of our cultural virtues, celebrated heroes, and catchy slogans align with the idea of "never give up". That's a good default! Most people are inclined to give up too easily, as soon as the going gets hard. But it's also worth remembering that sometimes you really should fold, admit defeat, and accept that your plan didn't work out.

    But how to distinguish between a bad plan and insufficient effort? It's not easy. Plenty of plans look foolish at first glance, especially to people without skin in the game. That's the essence of a disruptive startup: The idea ought to look a bit daft at first glance or it probably doesn't carry the counter-intuitive kernel needed to really pop.

    Yet it's also obviously true that not every daft idea holds the potential to be a disruptive startup. That's why even the best venture capital investors in the world are wrong far more than they're right. Not because they aren't smart, but because nobody is smart enough to predict (the disruption of) the future consistently. The best they can do is make long bets, and then hope enough of them pay off to fund the ones that don't.

    So far, so logical, so conventional. A million words have been written by a million VCs about how their shrewd eyes let them see those hidden disruptive kernels before anyone else could. Good for them.

    What I'm more interested in is how and when you pivot from a promising bet to folding your hand. When do you accept that no amount of additional effort is going to get that turkey to soar?

    I'm asking because I don't have any great heuristics here, and I'd really like to know! Because the ability to fold your hand, and live to play your remaining chips another day, isn't just about startups. It's also about individual projects. It's about work methods. Hell, it's even about politics and societies at large.

    I'll give you just one small example. In 2017, Rails 5.1 shipped with new tooling for doing end-to-end system tests, using a headless browser to validate the functionality, as a user would in their own browser. Since then, we've spent an enormous amount of time and effort trying to make this approach work. Far too much time, if you ask me now.

    This year, we finalized our decision to fold, and to give up on using these types of system tests at the scale we had previously thought made sense. In fact, just last week, we deleted 5,000 lines of code from the Basecamp code base by dropping literally all the system tests that we had carried so diligently for all these years.

    I really like this example, because it draws parallels to investing and entrepreneurship so well. The problem with our approach to system tests wasn't that it didn't work at all. If that had been the case, bailing on the approach would have been a no brainer long ago. The trouble was that it sorta-kinda did work! Some of the time. With great effort. But ultimately the juice wasn't worth the squeeze.

    I've seen this trap snap on startups time and again. The idea finds some traction. Enough for the founders to muddle through for years and years. Stuck with an idea that sorta-kinda does work, but not well enough to be worth a decade of their life. That's a tragic trap.

    The only antidote I've found to this on the development side is time boxing. Programmers are just as liable as anyone to believe a flawed design can work if given just a bit more time. And then a bit more. And then just double of what we've already spent. The time box provides a hard stop. In Shape Up, it's six weeks. Do or die. Ship or don't. That works.

    But what's the right amount of time to give a startup or a methodology or a societal policy? There's obviously no universal answer, but I'd argue that whatever the answer, it's "less than you think, less than you want".

    Having the grit to stick with the effort when the going gets hard is a key trait of successful people. But having the humility to give up on good bets turned bad might be just as important.

    Europe must become dangerous again


    Trump is doing Europe a favor by revealing the true cost of its impotency. Because, in many ways, he has the manners and the honesty of a child. A kid will just blurt out in the supermarket "why is that lady so fat, mommy?". That's not a polite thing to ask within earshot of said lady, but it might well be a fair question and a true observation! Trump is just as blunt when he essentially asks: "Why is Europe so weak?".

    Because Europe is weak, spiritually and militarily, in the face of Russia. It's that inherent weakness that's breeding the delusion that Russia is at once on its last legs, about to lose the war against Ukraine any second now, and also the all-powerful superpower that could take over all of Europe if we don't start World War III to counter it. This is not a coherent position.

    If you want peace, you must be strong.

    The big cats in the international jungle don't stick to a rules-based order purely out of higher principles, but out of self-preservation. And they can smell weakness like a tiger smells blood. This goes for Europe too. All too happy to lecture weaker countries they do not fear on high-minded ideals of democracy and free speech, while standing aghast and weeping powerlessly when someone stronger returns the favor.

    I'm not saying that this is right, in some abstract moral sense. I like the idea of a rules-based order. I like the idea of territorial sovereignty. I even like the idea that the normal exchanges between countries aren't as blunt and honest as those of a child in the supermarket. But what I like and "what is" need separating.

    Europe simply can't have it both ways. Be weak militarily, utterly dependent on an American security guarantee, and also expect a seat at the big-cat table. These positions are incompatible. You either get your peace dividend -- and the freedom to squander it on net-zero nonsense -- or you get to have a say in how the world around you is organized.

    Which brings us back to Trump doing Europe a favor. For all his bluster and bullying, America is still a benign force in its relation to Europe. We're being punked by someone from our own alliance. That's a cheap way of learning the lesson that weakness, impotence, and peace-dividend thinking make for a short-term strategy. Russia could teach Europe a far more costly lesson. So too could China.

    All that is to say: Europe must heed the rude awakening from our cowboy friends across the Atlantic. They may be crude, they may be curt, but by golly, they do have a point.

    Get jacked, Europe, and you'll no longer get punked. Stay feeble, Europe, and the indignities won't stop with being snubbed in Saudi Arabia.

    Europe's impotent rage


    Europe has become a third-rate power economically, politically, and militarily, and the price for this slowly building predicament is now due all at once.

    First, America is seeking to negotiate peace in Ukraine directly with Russia, without even inviting Europe to the table. Decades of underfunding the European military have led us here. The never-ending ridicule of America, for spending the supposedly "absurd sum" of 3.4% of its GDP to maintain its might, is coming home to roost.

    Second, mass immigration in Europe has become the central political theme driving the surge of right-wing parties in countries across the continent. Decades of blind adherence to a naive multi-cultural ideology have produced an abject failure to assimilate culturally-incompatible migrants. Rather than respond to this growing public discontent, mainstream parties all over Europe run the same playbook of calling anyone with legitimate concerns "racist", and attempting to disparage or even ban political parties advancing these topics.

    Third, the decline of entrepreneurship in Europe has led to a dearth of major new companies, and an accelerated brain drain to America. The European economy lost parity with the American one after 2008, and now the net-zero nonsense has led Europe's old manufacturing powerhouse, Germany, to commit financial hara-kiri. Shutting its nuclear power plants, over-investing in solar and wind, and rendering its prized car industry noncompetitive on the global market. The latter has left European bureaucrats in the unenviable position of having to denounce Trump for his proposed tariffs while imposing their own on the Chinese.

    A single failure in any of these three crucial realms would have been painful to deal with. But failure in all three at once is a disaster, and it's one of Europe's own making. Worse, Europeans at large appear to be stuck in the early stages of grief. Somewhere between "anger" and "bargaining". Leaving us with "depression" before we arrive at "acceptance".

    Except this isn't destiny. Europe is not doomed to impotent outrage or repressive anger. Europe has the people, the talent, and the capital to choose a different path. What it currently lacks is the will.

    I'm a Dane. Therefore, I'm a European. I don't summarize the sad state of Europe out of spite or ill will or from a lack of standing. I don't want Europe to become American. But I want Europe to be strong, confident, and successful. Right now it's anything but.

    The best time for Europe to make a change was twenty years ago. The next best time is right now. Forza Europe! Viva Europe!

    Leave it to the Germans


    Just a day after JD Vance's remarkable speech in Munich, 60 Minutes validates his worst accusations in a chilling segment on the totalitarian German crackdown on free speech. You couldn't have scripted this development for more irony or drama!

    This isn't 60 Minutes finding a smoking gun in some secret government archive, detailing a plot to prosecute free speech under some fishy pretext. No, this is German prosecutors telling an American journalist in an open interview that insulting people online is a crime and retweeting a "lie" will get you in trouble with the law. No hidden cameras! All out in the open!

    Nor is this just some rogue prosecutorial theory. 60 Minutes goes along for the ride with German police, as they conduct a raid at dawn with six armed officers to confiscate the laptop and a phone of a German citizen suspected of posting a racist cartoon. Even typing out this description of what happens sounds like insane hyperbole, but you can just watch the clip for yourself.

    And this morning raid was just one of fifty that day. Fifty raids in a day! For wrong speech, spicy memes, online insults of politicians, and other utterances by German citizens critical of their government or policies! Is this the kind of hallowed democracy that Germans are supposed to defend against the supposed threat of AfD?

    As I noted yesterday, even Denmark has some draconian laws on the books limiting free speech. And they've been used in anger too. I've yet to see that kind of grotesque enforcement here -- six armed officers at dawn coming to confiscate a laptop! -- but the trend is nonetheless worrying all across Europe, not just in Germany.

    I suppose this is why European leaders are in such shock over Vance's wagging finger. Because they know he's dead on, but they're not used to getting called out like this. On the world stage, while they just had to sit there. I can see how that's humiliating.

    But the humiliation of the European people is infinitely greater as they're gaslit about their right to free speech. That Vance doesn't know what he's talking about. Oh, and what about the Gulf of America?? It's pathetic.

    So too is the apparent deep support from many parts of Europe for this totalitarian insanity. I keep hearing from Europeans who with a straight face will claim that of course they have free speech, but that doesn't mean you can insult people, hurt their feelings, or post statistics that might cast certain groups in a bad light.

    Madness.

    "The party told you to reject the evidence of your eyes and ears. It was their final, most essential command."
    -- Orwell, 1949

    Europeans don't have or understand free speech


    The new American vice president JD Vance just gave a remarkable talk at the Munich Security Conference on free speech and mass immigration. It did not go over well with many European politicians, some of whom immediately proved Vance's point by labeling the speech "not acceptable". All because Vance dared poke at two of the holiest taboos in European politics.

    Let's start with his points on free speech, because they're the foundation for understanding how Europe got into such a mess on mass immigration. See, Europeans by and large simply do not understand "free speech" as a concept the way Americans do. There is no first amendment-style guarantee in Europe, yet the European mind desperately wants to believe it has the same kind of free speech as the US, despite endless evidence to the contrary.

    It's quite like how every dictator around the world pretends to believe in democracy. Sure, they may repress the opposition and rig their elections, but they still crave the imprimatur of the concept. So too "free speech" and the Europeans.

    Vance illustrated his point with several examples from the UK. A country that pursues thousands of yearly wrong-speech cases, threatens foreigners with repercussions should they dare say too much online, and has no qualms about handing down draconian sentences for online utterances. It's completely totalitarian and completely nuts.

    Germany is not much better. It's illegal to insult elected officials, and if you say the wrong thing, or post the wrong meme, you may well find yourself the subject of a raid at dawn. Just crazy stuff.

    I'd love to say that Denmark is different, but sadly it is not. You can be put in prison for up to two years for mocking or degrading someone on the basis of their race. It recently became illegal to burn the Quran (which sadly only serves to legitimize crazy Muslims killing or stabbing those who do). And you may face up to three years in prison for posting online in a way that can be construed as morally supporting terrorism.

    But despite all of these examples and laws, I'm constantly arguing with Europeans who cling to the idea that they do have free speech like Americans. Many of them mistakenly think that "hate speech" is illegal in the US, for example. It is not.

    America really takes the first amendment quite seriously. Even when it comes to hate speech. Famously, the Jewish lawyers of the (now unrecognizable) ACLU defended the right of literal, actual Nazis to march for their hateful ideology in the streets of Skokie, Illinois in the late 1970s, and won.

    Another common misconception is that "misinformation" is illegal over there too. It also is not. That's why the Twitter Files proved to be so scandalous. Because it showed the US government under Biden laundering an illegal censorship regime -- in grave violation of the first amendment -- through private parties, like the social media networks.

    In America, your speech is free to be wrong, free to be hateful, free to insult religions and celebrities alike. All because the founding fathers correctly saw that asserting the power to determine otherwise leads to a totalitarian darkness.

    We've seen vivid illustrations of both in recent years. At the height of the trans mania, questioning whether men who said they were women should be allowed in women's sports or bathrooms or prisons was frequently labeled "hate speech".

    During the pandemic, questioning whether the virus might have escaped from a lab instead of a wet market got labeled "misinformation". So too did any questions about the vaccine's inability to stop spread or infection. Or whether surgical masks or lockdowns were effective interventions.

    Now we know that having a public debate about all of these topics was of course completely legitimate. Covid escaping from a lab is currently the most likely explanation, according to American intelligence services, and many European countries, including the UK, have stopped allowing puberty blockers for children.

    Which brings us to that last bugaboo: Mass immigration. Vance identified it as one of the key threats to Europe at the moment, and I have to agree. So should anyone who's been paying attention to the statistics showing the abject failure of this thirty-year policy utopia of a multi-cultural Europe. The fast-changing winds in European politics suggest that's exactly what's happening.

    These are not separate issues. It's the lack of free speech, and a catastrophically narrow Overton window, which has led Europe into such a mess with mass immigration in the first place. In Denmark, the first popular political party that dared to question the wisdom of importing massive numbers of culturally-incompatible foreigners was routinely charged with racism back in the 90s. The same "that's racist!" playbook is now being run on political parties across Europe who dare challenge the mass immigration taboo.

    But making plain observations that some groups of immigrants really do commit vastly more crime and contribute vastly less economically to society is not racist. It wasn't racist when the Danish People's Party did it in Denmark in the 1990s, and it isn't racist now that the mainstream center-left parties have followed suit.

    I've drawn the contrast to Sweden many times, and I'll do it again here. Unlike Denmark, Sweden kept its Overton window shut on the consequences of mass immigration all the way up through the 90s, 00s, and 10s. As a reward, it now has bombs going off daily, the European record in gun homicides, and a government that admits that the immigrant violence is out of control.

    The state of Sweden today is a direct consequence of suppressing any talk of the downsides to mass immigration for decades. And while that taboo has recently been broken, it may well be decades more before the problems are tackled at their root. It's tragic beyond belief.

    The rest of Europe should look to Sweden as a cautionary tale, and the Danish alternative as a precautionary one. It's never too late to fix tomorrow. You can't fix today, but you can always fix tomorrow.

    So Vance was right to wag his finger at all this nonsense. The lack of free speech and the problems with mass immigration. He was right to assert that America and Europe have a shared civilization to advance and protect. Whether the current politicians of Europe want to hear it or not, I'm convinced that average Europeans actually are listening.

    Serving the country


    In 1940, President Roosevelt tapped William S. Knudsen to run the government's production of military equipment. Knudsen had spent a pivotal decade at Ford during the mass-production revolution, and was president of General Motors when he was drafted as a civilian into service as a three-star general. Not bad for a Dane, born just ten minutes by bike from where I'm writing this in Copenhagen!

    Knudsen's leadership raised the productive capacity of the US war machine by 100x in areas like plane production, where it went from producing 3,000 planes in 1939 to over 300,000 by 1945. He was quoted on his achievement: "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible".

    Knudsen wasn't an elected politician. He wasn't even a military man. But Roosevelt saw that this remarkable Dane had the skills needed to reform a puny war effort into one capable of winning the Second World War.

    Do you see where I'm going with this? Elon Musk is a modern day William S. Knudsen. Only even more accomplished in efficiency management, factory optimization, and first-order systems thinking.

    No, America isn't in a hot war with the Axis powers, but for the sake of the West, it damn well better be prepared for one in the future. Or better still, be so formidable that no other country or alliance would even think to start one. And this requires a strong, confident, and sound state with its affairs in order.

    If you look at the government budget alone, those affairs are direly out of order. The US was knocking on a two-trillion-dollar budget deficit in 2024! Adding to a towering debt that's now north of $36 trillion. A burden that's already consuming $881 billion in yearly interest payments. More than what's spent on the military or Medicare. Second only to Social Security on the list of line items.

    Clearly, this is not sustainable.

    This is the context of DOGE. The program, led by Musk, that's been deputized by Trump to turn the ship around. History doesn't repeat, but it rhymes, and Musk is dropping beats that Knudsen would surely have been tapping his foot to. And just like Knudsen in his time, it's hard to think of any other American entrepreneur more qualified to tackle exactly this two-trillion-dollar problem.

    It is through The Musk Algorithm that SpaceX lowered the cost of sending a kilo of goods into low Earth orbit from the US by well over an order of magnitude. And now America's share of worldwide space transit has risen from less than 30% in 2010 to about 85%. Thanks to reusable rockets and chopstick-catching landing towers. Thanks to Musk.

    Or to take a more earthly example with Twitter. Before Musk took over, Twitter had revenues of $5 billion and earned $682 million. After the takeover, X has managed to earn $1.25 billion on $2.7 billion in revenue. Mostly thanks to the fact that Musk cut 80% of the staff out of the operation, and savaged the cloud costs of running the service.

    This is not what people expected at the time of the takeover! Not only did many commentators believe that Twitter was going to collapse from the drastic cuts in staff, they also thought that the financing for the deal would implode. Chiefly as a result of advertisers withdrawing from the platform under intense media pressure. But that just didn't happen.

    Today, the debt used to take over Twitter and turn it into X is trading at 97 cents on the dollar. The business is twice as profitable as it was before, and arguably as influential as ever. All with just a fifth of the staff required to run it. Whatever you think of Musk and his personal tweets, it's impossible to deny what an insane achievement of efficiency this has been!
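    Taken at face value, those rough figures are easy to sanity-check. A quick sketch, using only the approximate, publicly reported numbers quoted above (not audited accounts):

    ```ruby
    # Rough profitability comparison, in millions of USD, using the
    # approximate figures from the paragraph above.
    before = { revenue: 5_000, profit: 682 }   # Twitter, pre-takeover
    after  = { revenue: 2_700, profit: 1_250 } # X, post-takeover

    profit_multiple = after[:profit].to_f / before[:profit]
    margin_before   = before[:profit].to_f / before[:revenue]
    margin_after    = after[:profit].to_f / after[:revenue]

    puts format("Profit grew %.1fx", profit_multiple)
    puts format("Margin went from %.0f%% to %.0f%%",
                margin_before * 100, margin_after * 100)
    ```

    Absolute profit up roughly 1.8x, and margin more than tripled -- which is where the "twice as profitable" shorthand comes from.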

    These are just two examples of Musk's incredible ability to defy the odds and deliver efficiency gains nearly unmatched in modern business history. And we haven't even talked about taking Tesla from producing 35,000 cars in 2014 to making 1.7 million in 2024. Or turning xAI into a major force in AI by assembling a 100,000-H100 cluster at "superhuman" pace.

    Who wouldn't want such a capacity involved in finding the waste, sloth, and squander in the US budget? Well, his political enemies, of course!

    And I get it. Musk's magic is balanced with mania and even a dash of madness. This is usually the case with truly extraordinary humans. The taller they stand, the longer the shadow. Expecting Musk to do what he does and then also be a "normal, chill dude" is delusional.

    But even so, I think it's completely fair to be put off by his tendency to fire tweets from the hip, opine on world affairs during all hours of the day, and offer his support to fringe characters in politics, business, and technology. I'd be surprised if even the most ardent Musk super fans don't wince a little every now and then at some of the antics.

    And yet, I don't have any trouble weighing those antics against the contributions he's made to mankind, and finding an easy and overwhelming balance in favor of his positive achievements.

    Musk is exactly the kind of formidable player you want on your team when you're down two trillion to nothing, needing a Hail Mary pass for the destiny of America, and eager to see the West win the future.

    He's a modern-day Knudsen on steroids (or Ketamine?). Let him cook.





    Servers can last a long time


    We bought sixty-one servers for the launch of Basecamp 3 back in 2015. Dell R430s and R630s, packing thousands of cores and terabytes of RAM. Enough to fill all the app, job, cache, and database duties we needed. The entire outlay for this fleet was about half a million dollars, and it's only now, almost a decade later, that we're finally retiring the bulk of them for a full hardware refresh. What a bargain!

    That's over 3,500 days of service from this fleet, at a fully amortized cost of just $142/day. For everything needed to run Basecamp. A software service that has grossed hundreds of millions of dollars in that decade.
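    That daily figure falls straight out of the amortization arithmetic, using the round numbers above:

    ```ruby
    # Amortizing the fleet's purchase price over its service life.
    fleet_cost  = 500_000  # approximate outlay in USD for the whole fleet
    days_in_use = 3_500    # roughly a decade of service

    # Integer division is plenty precise here.
    cost_per_day = fleet_cost / days_in_use
    puts "$#{cost_per_day}/day"  # → $142/day, the figure quoted above
    ```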

    We've of course had other expenses beyond hardware from operating Basecamp over the past decade. The ops team, the bandwidth, the power, and the cabinet rental across both our data centers. But nonetheless, owning our own iron has been a fantastically profitable proposition. Millions of dollars saved over renting in the cloud.

    And we aren't even done deriving value from this venerable fleet! The database servers, Dell R630s with Xeon E5-2699 CPUs and 768 GB of RAM, are getting handed down to some of our heritage apps. They will keep on trucking until they give up the ghost.

    When we did the public accounting for our cloud exit, it was based on five years of useful life from the hardware. But as this example shows, that's pretty conservative. Most servers can easily power your applications much longer than that.

    Owning your own servers has easily been one of our most effective cost advantages. Together with running a lean team. And managing our costs remains key to reaping the profitable fruit from the business. The dollar you keep at the end of the year is just as real whether you earn it or save it.

    So you just might want to run those cloud-exit numbers once more with a longer server lifetime value. It might just tip the equation, and motivate you to become a server owner rather than a renter.
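    As a sketch of that recalculation: the hardware figure below is the one from this post, while the monthly cloud bill is a made-up placeholder you'd substitute with your own. Ops, power, and cabinet rental are deliberately left out to keep the sketch simple; add them back for a real comparison.

    ```ruby
    # How server lifetime changes the own-vs-rent math.
    hardware_cost = 500_000  # one-time fleet outlay in USD (from this post)
    cloud_monthly = 25_000   # hypothetical equivalent cloud rent -- substitute yours

    [5, 10].each do |years|
      # Spread the purchase price evenly over the assumed lifetime.
      owned_monthly = hardware_cost / (years * 12.0)
      puts format("%2d-year lifetime: $%.0f/mo owned vs $%d/mo rented",
                  years, owned_monthly, cloud_monthly)
    end
    ```

    Doubling the assumed lifetime halves the amortized monthly cost of ownership, which is exactly the lever that can tip the equation.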

    It burns


    The first time we had to evacuate Malibu this season was during the Franklin fire in early December. We went to bed with our bags packed, thinking they'd probably get it under control. But by 2am, the roaring blades of fire choppers shaking the house got us up. As we sped down the canyon towards Pacific Coast Highway (PCH), the fire had reached the ridge across from ours, and the flames loomed large out the car windows. It felt like we had left the evacuation a little too late, but they eventually did get Franklin under control before it reached us.

    Humans have a strange relationship with risk and disasters. We're so prone to wishful thinking and bad pattern matching. I remember people being shocked when the flames jumped the PCH during the Woolsey fire in 2018. IT HAD NEVER DONE THAT! So several friends of ours suddenly had to escape a nightmare scenario, driving through burning streets, in heavy smoke, with literally their lives on the line. Because the past had failed to predict the future.

    I fell into that same trap for a moment with the dramatic proclamations of wind and fire weather in the days leading up to January 7. Warning after warning of "extremely dangerous, life-threatening wind" coming from the City of Malibu, and that overly-bureaucratic-but-still-ominous "Particularly Dangerous Situation" designation. Because, really, how much worse could it be? Turns out, a lot.

    It was a little before noon on the 7th when we first saw the big plumes of smoke rise from the Palisades fire. And immediately the pattern matching went astray. Oh, it's probably just like Franklin. It's not big yet, they'll get it out. They usually do. Well, they didn't.

    By the late afternoon, we had once more packed our bags, and by then it was also clear that things actually were different this time. Different worse. Different enough that even Santa Monica didn't feel like it was assured to be safe. So we headed far North, to be sure that we wouldn't have to evacuate again. Turned out to be a good move.

    Because by then, well into the evening, few people in the connected world had been spared the catastrophic images emerging from the Palisades and Eaton fires. Well over 10,000 houses would ultimately burn. Entire neighborhoods leveled. Pictures that could be mistaken for World War II. Utter and complete destruction.

    By the night of the 7th, the fire reached our canyon, and it tore through the chaparral and brush that'd been building since the last big fire that area saw in 1993. Out of some 150 houses in our immediate vicinity, nearly a hundred burned to the ground. Including the first house we moved to in Malibu back in 2009. But thankfully not ours.

    That's of course a huge relief. This was and is our Malibu Dream House. The site of that gorgeous home office I'm so fond of sharing views from. Our home.

    But a house left standing in a disaster zone is still a disaster. The flames reached all the way up to the base of our construction, incinerated much of our landscaping, and burned the surrounding power poles out of commission.

    We have burnt-out buildings every which way the eye looks. The national guard is still stationed at road blocks on the access roads. Utility workers are tearing down the entire power grid to rebuild it from scratch. It's going to be a long time before this is comfortably habitable again.

    So we left.

    That in itself feels like defeat. There's an urge to stay put, and to help, in whatever helpless ways you can. But with three school-age children who've already missed over a month's worth of learning from power outages, fire threats, actual fires, and now mudslide dangers, it was time to go.

    None of this came as a surprise, mind you. After Woolsey in 2018, Malibu life always felt like living on borrowed time to us. We knew it, even accepted it. Beautiful enough to be worth the risk, we said.

    But even if it wasn't a surprise, it's still a shock. The sheer devastation, especially in the Palisades, went far beyond our normal range of comprehension. Bounded, as it always is, by past experiences.

    Thus, we find ourselves back in Copenhagen. A safe haven for calamities of all sorts. We lived here for three years during the pandemic, so it just made sense to use it for refuge once more. The kids' old international school accepted them right back in, and past friendships were quickly rebooted.

    I don't know how long it's going to be this time. And that's an odd feeling to have, just as America has been turning a corner, and just as the optimism is back in so many areas. Of the twenty years I've spent in America, this feels like the most exciting time to be part of the exceptionalism that the US of A offers.

    And of course we still are. I'll still be in the US all the time on business, racing, and family trips. But it won't be exclusively so for a while, and it won't be from our Malibu Dream House. And that burns.





    Waiting on red


    Americans often laugh when they see how often Danes will patiently, obediently wait on the little red man to turn green before crossing an empty intersection, in the rain, even at night. Nobody is coming! Why don't you just cross?! It seems silly, but the underlying philosophy is anything but. It's load bearing for a civil society like Denmark.

    Because doing the right thing every time can be put on autopilot, and when most people follow even the basic norms consistently, the second-order effects are profound. Like the fact that Copenhagen is one of the absolute safest major cities in the world.

    But the Danes also know that norms fray if they're not enforced, so they vigorously pursue even small infractions. The Danish police regularly celebrate ticketing bicyclists for even minor mistakes (like riding instead of walking their bike on the sidewalk). And the metro is constantly patrolled for fare evaders and antisocial behavior.

    It's broken windows theory on steroids. And it works.

    When we were living in the city for three years following the pandemic, the most startling difference from major US cities was the prevalence of unattended children everywhere, at all hours. Our oldest was just nine years old when he started taking the metro alone, even at night.

    How many American parents would feel comfortable letting their nine-year-old take the L in Chicago or the subway in Manhattan? I don't know any. And as a result, you just don't see unattended children doing this. But in Copenhagen it's completely commonplace.

    This is the prize of having little tolerance for antisocial behavior in the public space. When you take away the freedom from crackheads and bums to smoke up on the train or sleep in the park, you grant the freedom to nine-year olds to roam the city and for families to enjoy the park at dusk.

    This is the fundamental error of suicidal empathy. That tolerance of the deranged and dangerous few can be kept a separate discussion from the freedom and safety of the many. These are oppositional forces. The more antisocial behavior you excuse, the further families will retract into their protective shell. And suddenly there are no longer children around in the public city space or any appetite for public transit.

    Maybe you have to become a parent to really understand this. I admit that I didn't give this nearly the same attention before becoming a father of three. But the benefit isn't exclusively about the freedom and safety enjoyed by your own family; it's also about the ambient atmosphere of living in a city where children are everywhere. It's a special form of life-affirming luxury, and it's probably the thing I've missed most about Copenhagen since we went back to the US.

    What's interesting is how much active effort it takes to maintain this state of affairs. The veneer of civil society is surprisingly thin. Norms fray quickly if left unguarded. And it's much harder to reestablish their purchase on society than to protect them from disappearing in the first place.

    But I also get that it's hard to connect the dots from afar. Many liberals in America keep Denmark as some mythical place where all their policy dreams have come true, without ever wrestling much with what it takes to maintain the social trust that allows those policies to enjoy public support.

    The progressive Nirvana of Denmark is built on a highly conservative set of norms and traditions. It's yin and yang. So if you're committed to those progressive outcomes in America, whether it's the paternity leave, the independent children, or the amazing public transit system, you ought to consider what conservative values it makes sense to accept as enablers rather than obstacles.

    MEGA



    Trump is back at the helm of the United States, and the majority of Americans are optimistic about the prospect. Especially the young. In a poll by CBS News, it's the 18-29 demographic that's most excited, with a whopping two-thirds answering in the affirmative to being optimistic about the next four years under Trump. And I'm right there with them. The current American optimism is infectious!






    While Trump has undoubtedly been the catalyst, this is a bigger shift than any one person. After spending so long lost in the wilderness of excessive self-criticism and self-loathing, there's finally a broad coalition of the willing working to get the mojo back.

    This is what's so exhilarating about America. The big, dramatic swings. The high stakes. The long shots. And I like this country much better when it's confident in that inherent national character.

    Of course all this is political. And of course Trump is triggering for many. Just like his opponent would have been if she had won. But this moment is not just political, it's beyond that. It's economic, it's entrepreneurial, it's technological. Optimism is infectious.

    As someone with a foot on both the American and European continents, I can't help being jealous with my euro leg. Europe is stuck with monumental levels of pessimism at the moment, and it's really sad to see.

    But my hope is that Europe, like usual, is merely a few years behind the American revival in optimism. That it's coming to the old world eventually.

    This is far more an article of faith than of analysis, mind you. I can also well imagine Europe sticking with Eurocrat thinking, spinning its wheels with grand but empty proclamations, issuing scorning but impotent admonishments of America, and doubling down on the regulatory black hole.

    Neither path is given. Europe was competitive with America on many economic terms as recently as 15 years ago. But Europe also lacks the ability to change course quite like the Americans. So the crystal ball is blurry.

    Personally, I choose faith. Optimism must win. Pessimism is literally for losers.

    Failed integration and the fall of multiculturalism



    For decades, the debate in Denmark around the problems with mass immigration was stuck in a self-loathing blame game of "failed integration". That somehow, if the Danes had just tried harder, been less prejudiced, offered more opportunities, the many foreigners with radically different cultures would have been able to integrate successfully. If not in the first generation, then the second. For much of this time, I thought that was a reasonable thesis. But reality has proved it wrong.

    If literally every country in Europe has struggled in the same ways, and for decades on end, to produce the fabled "successful integration", it's not a compelling explanation that the Danes, Swedes, Norwegians, Germans, French, Brits, or Belgians all just didn't try hard enough. It's that the mission, on the grand and statistical scale, was impossible in many cases.

    As Thomas Sowell tells us, this is because there are no solutions to intractable problems like cultural integration between wildly different ways of living. Only trade-offs, many of which are unfavorable to all parties.

    But by the same token, even though the overall project of integrating many of the most divergent cultures from mass immigration has failed, there are many individual cases of great success. Much of the Danish press, for example, has for years propped up the hope of broad integration success by sharing hopeful, heartwarming stories of highly successful integration. And you love to see it.

    Heartwarming anecdotes don't settle trade-offs, though. They don't prove a solution or offer a conclusion either.

    I think the conclusion at this point is clear. First, cultural integration, let alone assimilation, is incredibly difficult. The more divergent the cultures, the more difficult the integration. And for some combinations, it's outright impossible.

    Second, the compromise of multiculturalism has been an abject failure in Europe. Allowing parallel cultures to underpin parallel societies is poison for the national unity and trust.

    Which brings us to another bad social thesis from the last thirty-some years: That national unity, character, and belonging are not only unimportant but actively harmful. That national pride in history, traditions, and culture is primarily an engine of bigotry.

    What a tragic thesis with catastrophic consequences.

    But at this point, there's a lot of political capital invested in all these bad ideas. In sticking with the tired blame game. In thinking that what hasn't worked for fifty years will surely start working if we give it five more.

    Now, I actually have a nostalgic appreciation for the beautiful ideals behind such hope for humanity, but I also think that at this point it is as delusional as it is dangerous.

    And I think it's directly responsible for the rise of so-called populist movements all over Europe. They're directly downstream from the original theses: that cultural integration would succeed through just-try-harder efforts, and that multiculturalism was a workable compromise. A pair of ideas that had buy-in across much of the European board until reality simply became too intolerable for too many who had to live with the consequences.

    Such widespread realization doesn't automatically correct the course of a societal ship that's been sailing in the wrong direction for decades, of course. The playbook that took DEI and wokeness to blitzkrieg success in the States, by labeling any dissent to those ideologies racist or bigoted, has also worked to hold the line on the question of mass immigration in Europe until very recently.

    But I think the line is breaking in Europe, just as it recently did in America. The old accusations have finally lost their power from years of excessive use, and suppressing the reality that many people can see with their own eyes is getting harder.

    I completely understand why that makes people anxious, though. History is full of examples of combative nationalism leading us to dark edges. And, especially in Germany, I can understand the historical hesitation when there's even a hint of something that sounds like what they heard in the 30s.

    But you can hold both considerations in your head at the same time without losing your wits. Mass immigration to Europe has been a failure, and the old thesis of naive hope has to be replaced by a new strategy that deals with reality. AND, at the same time, not all fixes proposed by those who diagnosed the situation early are sound or palatable.

    World history is full of people who've had the correct diagnosis but a terrible prescription. And I think it's fair to say that it's not even obvious what the right prescription is at this point!

    Vibrant, strong societies surely benefit from some degree of immigration, especially from culturally compatible regions, based on national and economic benefit. But whatever the specific trade-offs taken from here, it seems clear that for much of Europe, they're going to look radically different than they have in the past three decades or so.

    Best get started then.

Bear Blog Trending Posts

Trending posts on Bear Blog

(20)

    you are what you launch: how software became a lifestyle brand


    software used to be functional. now it’s personal. this is an essay about tools, taste, and the quiet ways we curate identity through what we launch.


    intro




    choosing software used to be straightforward. does the app do what you need, or not? but now, opening notion or obsidian feels less like launching software and more like putting on your favorite jacket. it says something about you. aligns you with a tribe, becomes part of your identity. software isn’t just functional anymore. it’s quietly turned into a lifestyle brand, a digital prosthetic we use to signal who we are, or who we wish we were.


    there’s been a shift. not dramatic, but gradual. slow. quiet. like how one day you realize everyone around you stopped using chrome. somewhere along the way, software stopped being invisible. it started meaning things. your browser, your calendar, your to-do list, these are not just tools anymore. they are taste. alignment. self-expression.


    suddenly your app stack said something about you. not in a loud, obvious way but like the kind of shoes you wear when you don’t want people to notice, but still want them to know. margiela replica. new balance 992. arcteryx. stuff that whispers instead of shouts, it’s all about signaling to the right people. you don’t want everyone to notice, just the ones whose opinions actually matter to you. maybe this isn’t an essay about software at all. maybe it’s about taste as self-construction. or function as aesthetic.


    notion: the aesthetic workspace




    staying with the fashion metaphor for a bit, our apps are like our outfits, our docks are our fit pics. curated, intentional, meant to be seen. maybe not by everyone, but by someone. that’s what made dockhunt so fascinating when it blew up like 2 years ago on twitter. it turned utility into exhibition, something private into a public moodboard. you weren’t just sharing what you use, you were showing who you are.


    and what did we see when the docks came out? arc over chrome and safari. notion over notes and reminders. tools with personalities, tools that signaled intention.
    and no tool owns that space better than notion. it’s not just a notes app, it’s a whole aesthetic.

    calm, blank, modular.

    the kind of calm that feels curated. everything’s clean, but not sterile. soft fonts, careful spacing, subtle off-white coloring, the tasteful thickness…


    notion might be one of the most unopinionated tools out there. you can build practically anything with it. databases, journals, dashboards, even websites. but for a tool so open-ended, it’s surprisingly curated. only three fonts, ten colors. it’s like apple before they started chasing android. there was a time when iOS felt like intentional restraint. now, with iOS 18 letting you recolor every icon to match your wallpaper, i’ve seen home screens so hideous that i’d never associate them with apple at all.


    notion’s cofounder painting an office


    this kind of restraint doesn’t happen by accident. it’s a mindset. a way of treating software like craft. the paint, the jackets, the “couldn’t find merch we love”. this is literally vibe-maxxing as company ethos. it shows how deeply they aestheticize every layer of the brand. they’re not just building a tool, they’re building a taste. a tone.


    and it shows. the app reflects the company. or maybe the company reflects the app. either way, the branding isn’t loud. it’s soft-spoken and super curated. like who else talks about paint swatches and jackets when hitting 100 million users?


    obsidian: the tinkerer’s lab




    and then there’s obsidian.

    same category. completely different energy.


    if notion is a sleek apartment in seoul, obsidian is a cluttered home lab. markdown files. local folders. keyboard shortcuts. graph views. it doesn’t care how it looks, it cares that it works. it’s functional first, aesthetic maybe never. there’s no onboarding flow, no emoji illustrations, no soft gradients telling you everything’s going to be okay. just an empty vault and the quiet suggestion: you figure it out.


    obsidian is built for tinkerers. not in the modern, drag and drop sense but in the old way. the “i wanna see how this thing works under the hood” way. it’s a tool that rewards curiosity and exploration. everything in obsidian feels like it was made by someone who didn’t just want to take notes, they wanted to build the system that takes notes. it’s messy, it’s endless, and that’s the point. it’s a playground for people who believe that the best tools are the ones you shape yourself.


    notion is for people who want a beautiful space to live in, obsidian is for people who want to wire the whole building from scratch. both offer freedom, but one is curated and the other is raw.


    obsidian and notion don’t just attract different users.

    they attract different lifestyles.


    simulacra of openness




    notion is collaborative, aesthetic, made to be shared. it leans soft. social.

    obsidian is solitary, recursive, private. it leans hard. technical.

    one rewards presentation, the other rewards configuration. it’s easy to guess which one shows up more on pinterest and which one gets compared to vim.


    sharing plugins, writing css snippets, publishing vault setups like dotfiles. the whole obsidian ecosystem runs on a kind of quiet technical fluency.

    obsidian feels like 4chan to notion’s reddit. it’s thinkpad to notion’s macbook.


    it feels so arch linux-coded that people are shocked to learn it’s not open source. it really feels like it should be, and the community talks like it is. this tweet from theo is actually what triggered the whole essay. i knew i wanted to write about software companies as lifestyle brands, but this was the clearest example yet: vibes over facts.


    the fact that people think obsidian is open source matters more than whether it actually is. because open source, in this context, isn’t just a licence, it’s a vibe. it signals independence. self-reliance. a kind of technical purity. using obsidian says: i care about local files. i care about control. i care enough to make things harder on myself. and that is a lifestyle.


    it’s the same way people treat thinkpads, or vim, or mechanical keyboards with obscure key layouts. none of them are open source in the legal sense, but they feel open. tinkerable. resistant to the defaults. and that feeling creates the brand. perceived openness becomes a kind of cultural capital. obsidian didn’t sell the lifestyle. it just left enough space for people to build it themselves.


    tasteware




    none of this is really about note taking. it’s about taste, identity, and the quiet ways we signal who we are through our tools. it just happens that notes are where the contrast shows up most clearly.


    but it’s everywhere now. in browsers, in email clients, in calendars, even in search engines. we’re not just picking tools anymore. we’re curating them. not just for what they do, but for what they say.


    now, there’s a “premium” version of everything. superhuman for email. cron (i don’t wanna call it notion calendar) for calendars. arc for browsing. raycast for spotlight. even perplexity, somehow, for search.


    these apps aren’t solving new problems. they’re solving old ones with better fonts. tighter animations, cleaner onboarding. they’re selling taste. they’re selling time. and people buy in, not just because the tools are better (some of them aren’t), but because they feel like tools made for people who care.


    it’s the same move you see in fashion. the aesop soap. the patagonia fleece. the rimowa carry-on. nothing flashy, nothing loud. just clean, expensive looking restraint. the kind of product that whispers.


    these apps are doing the same thing. arc isn’t just a browser, it’s a statement. chrome gets the job done, but arc gets you. the onboarding feels like a guided meditation. it’s not about speed or performance. it’s about posture. taste. the idea that how you use your computer should look and feel as considered as how you dress.


    arc makes you learn new gestures. it hides familiar things. it’s not trying to be invisible, it wants to be felt. same with linear. same with superhuman. these apps add friction on purpose. like doc martens or raw denim that needs breaking in. you suffer a little. but in that suffering, you build attachment. suddenly it’s not just an app, it’s yours.


    tools with mutuals




    and once enough people start caring about their tools like that, it stops being personal. it becomes cultural. taste starts to repeat itself. a certain kind of app starts showing up in all the same places. linear even has a “work with linear” page, a curated list of companies that use their tool. it’s a perfect example of companies not just acknowledging their lifestyle brand status, but actively leaning into it as a recruiting and signaling mechanism.


    it’s no longer just about functionality, it’s about an ecosystem of taste. linear knows its design is aspirational, so it turns that into a cultural filter. it’s not “those companies use linear”, it’s “these are the kind of people who use linear.”


    it’s software as aesthetic passport. it’s a vibe directory. a design-forward who’s who. and of course, arc is on there. so is superhuman. so is raycast. so is perplexity. the stack signals itself. real recognize real.


    the medium is the message




    so why are we seeking identity in our digital toolkits? maybe because we live inside them now. these apps are our desks, our notebooks, our mirrors. they shape how we think, how we plan, how we remember. of course we want them to feel right. of course we want them to say something.


    macbook home


    people don’t just use these apps. they use them to imagine themselves differently. more organized. more intentional. more in control. apps like superhuman or linear aren’t just tools, they’re lifestyle upgrades. software that feels like a reward. a signal that you care about your time, your taste.


    aesop isn’t just soap. notion isn’t just notes. they’re elements in your identity collage.


    and the companies? they’ve caught on. they don’t just ship features anymore, they ship vibes. onboarding becomes a performance. the ui is the brand. the founder’s blog post is the manifesto.


    it’s not about what the software does. it’s about who it’s made for.


    the medium is the message. and now the message is: this is who i am. or at least who i’m trying to be.

    Is Anyone Else Tired of the Internet?


    A few months ago, I began listening to a podcast about sword and sorcery books. It was great: it began in 2019 and featured three guys who all played off of each other incredibly well. I was excited to have six years of episodes to catch up on. Then I reached the first episode recorded once COVID began. The tone was different. Their demeanor was different, and you could hear the anger and rage at the world rising up.


    I decided to skip ahead a few episodes, but it didn’t get any better. In fact, it may have gotten worse. I kept skipping ahead, all the way until I was through the COVID era shows, but the show never came back. The hosts, one particularly, were different. He was tired, beat down, and despite churning out content, you could tell life was different for him. I gave up on the show, which was a shame since it started out so great.


    I mention this story because I feel like I'm seeing the same thing happening across the internet. Over the past couple of months, most of my favorite bloggers have written less. I've seen individuals who are practically walking away from the internet, as much as they can, within reason. If I had to sum up what I'm feeling, and what I think a lot of others are feeling too, in one word: exhaustion.


    It’s tiresome keeping up with the onslaught of negativity online. It doesn’t matter if it’s political, social, entertainment, tech related, or regular ole discussion; the negativity and frustration are at an all-time high. There is no escape. At one point, it seemed that personal blogs and small online communities were the refuge against big tech/advertising/negativity, but they are just as negative, if not more, without the advertising.


    I remember when surfing the web or escaping into cyberspace was a break from real life. It was a place where I could go to relax and escape the worries of my regular life. Now, it seems every time I log on, I find something new to stress about, something else to be pissed off about, and I just don’t know how much more of this I can take. I guess things have come full circle, now I go offline to escape real life and relax, as the internet has truly become the place where I hate to be.




    The Roanoke Colony's Forgotten Curse


    I LOVED making this episode. As a kid and through my teenage years, I frequented the Outer Banks multiple times a year. A big theme there was the disappearance of the Roanoke Colony. Digging into this was so fun.

    In 1590, 118 settlers mysteriously vanished from Roanoke Island, leaving only the word 'Croatoan' carved into a tree. This enigma, predating Jamestown by 17 years, intertwines with stories of reptilian entities, unnaturally cold temperatures, and mysterious lights in the forest. From Edgar Allan Poe's dying delirium to abandoned schooners, 'Croatoan' resurfaces in moments of historical bewilderment. The Croatoan tribe's traditions and the eerie Dare Stone hint at metaphysical transformations. Some propose mass possession leading to dismantled homes and unexplainable behavior among the colonists. Others suggest the settlers encountered a dimensional breach. As artifacts reveal psychic imprints and paranormal phenomena, could the colonists exist in a state where their consciousness still reaches out across centuries? Roanoke remains suspended between the realms of presence and absence, their essence lingering in a shadowy realm where past and present converge.


    00:00 The Mysterious Disappearance of Roanoke Colony


    01:04 The Haunting Word: Croatoan


    06:01 Eleanor Dare's Stone: A Message from the Past


    11:08 The Legend of Virginia Dare: Guardian of the Lost


    15:39 The Malevolent Force: Possession and Paranoia


    20:18 Dimensional Boundaries: Crossing Between Worlds


    24:27 Artifacts as Psychic Beacons


    29:02 Conclusion: The Unfinished Story of Roanoke




    Visit https://midnightsignals.net for more.


    Subscribe to the podcast on any of your favorite apps at https://midnightsignals.net/subscribe.

    nanuq: from bear blog to json, markdown, or a static site



    I previously made a micro tool to convert post exports to Markdown, but decided to expand it further. So here's nanuq, a micro tool to convert your bearblog.dev post_export.csv to JSON, individual Markdown files, or a complete static site. The static site export includes everything functional: bearblog default design, Atom feed, sitemap, and theme injection.



    Note: nanuq isn’t about moving away from bear blog -- it’s simply a quick way to back up and repurpose your posts while keeping them accessible in multiple formats. Bear still doesn’t support Markdown exports, and while all of this could be done with scripts, I thought it’d be fun to put together a GUI to make the process easier for everyone.



    Example site


    nano.mgx.me


    Features



    • 🔄 Convert CSV to JSON

    • 📝 Export individual markdown files

    • 🌐 Generate a complete static site

    • 🎨 Inject bear blog themes

    • 📰 Atom feed generation

    • 🗺️ Sitemap generation

    • 📜 Custom JavaScript injection


    Usage



    1. Visit mgx.me/nanuq

    2. Upload your post_export.csv file from bearblog.dev

    3. Choose your export format:

      • JSON: Get a single JSON file with all your posts

      • Markdown Files: Get individual markdown files for each post

      • Static Site: Get a complete, ready-to-deploy website




    Export Formats


    JSON Export



    • Single JSON file containing all posts

    • Maintains default or user-configured post metadata

    • Easy to import into other systems


    Markdown Export



    • Individual markdown files for each post

    • Includes default or user-configured front matter with metadata

    • Files named with date and slug (e.g., 2024-03-20-my-post.md)
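    As a sketch, with the default headers and all fields selected, an exported file could look roughly like this (the exact keys follow your CSV columns, so treat these as illustrative rather than nanuq's literal output):

    ```markdown
    ---
    title: My Post
    meta_description: meta description
    tags: tags here
    published date: 2024-03-20
    ---

    Post content in markdown...
    ```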


    Customizing Headers


    You can customize the column headers in your export to match your existing static site generator setup. This is particularly useful if you want to:



    • Import your bear posts into another SSG (like Hugo, Jekyll, or Astro)

    • Match your existing front matter structure


    To customize headers:



    1. In the "customize headers" section, modify the header names to match your target SSG's expected format

    2. In the "select fields to include" section, choose which fields to export

    3. Click "Reset to Default" to restore original headers


    Fields to Include


    The "select fields to include" section allows you to choose which fields from your CSV export will be included in the final JSON or markdown output. This is useful when you want to:



    • Keep only the essential metadata for your static site

    • Reduce the size of your export files


    By default, all fields are selected. You can:



    1. Uncheck fields you don't want to include

    2. Use the "Reset to Default" button to restore all fields

    3. Your selection is saved in the browser for future use


    JSON Export Example


    When exporting to JSON, only the selected fields will be included in the output. This reduces payload size and simplifies data handling. For example:


    {
      "posts": [
        {
          "title": "My Post",
          "meta_description": "meta description",
          "tags": "tags here",
          "content": "...",
          "published date": "2024-03-20T"
        }
      ]
    }
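    Because the export is plain JSON, pulling it into another system takes only a few lines in any language. A minimal sketch in Node.js, using the structure from the example above (the inline string stands in for reading an actual exported file from disk):

    ```javascript
    // Minimal sketch: consuming a nanuq-style JSON export.
    // The inline string stands in for a real exported posts.json file.
    const exported = `{
      "posts": [
        { "title": "My Post", "tags": "tags here", "content": "..." },
        { "title": "Another Post", "tags": "misc", "content": "..." }
      ]
    }`;

    const data = JSON.parse(exported);

    // Collect titles, e.g. to build an index page in another SSG.
    const titles = data.posts.map((post) => post.title);
    console.log(titles.join(", "));
    ```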

    Static Site Export


    nanuq creates a complete, ready-to-host website from your post_export.csv



    • Includes:

      • Atom feed (available at /feed)

      • Sitemap (available at /sitemap.xml)

      • SEO meta tags

      • Customizable navigation

      • Customizable footer

      • Full bearblog.dev CSS compatibility

      • Custom JavaScript injection




    Static Site Export Options


    When exporting as a static site, you can customize:



    • Site Title: The name of your blog

    • Site Domain: Your website's URL (e.g., https://example.com)

      Important: This setting is crucial for RSS feed and sitemap generation. Without setting your actual domain, these files will default to the web worker's domain or simply a forward slash.




    • Favicon: Use an emoji (e.g., 🐻) or link to an icon file

    • Lang: Default language code for the index page (e.g., en, es, fr)

    • Site Description: Default meta description for your site

    • Site Meta Image: Default image for social sharing

    • Navigation Links: Add menu items in markdown format

    • Footer Text: Customize footer content in markdown format

    • Inject JS to <head>: Add custom JavaScript that will be injected right before the closing head tag

    • Inject JS to <footer>: Add custom JavaScript that will be injected right before the closing footer tag

    • Custom CSS: Override default styles with your own CSS



    CSS Compatibility: The static site export inherits bear's HTML skeleton and CSS classes, making it fully compatible with existing bearblog.dev themes. You can paste your bear CSS directly into the Custom CSS field, and it will work as expected. This ensures a seamless transition if you're familiar with bear's design system.



    HTML lang Attribute (explained)


    The static site supports customizing lang attributes through:



    • Default Language: Set the default language for the index page using the "Lang" option

    • Per-Post Language: Each post can have its own language code in the CSV data (using the "lang" column)

      If no language is specified for a post, it will use the default language





    JavaScript Injection (explained)


    You can inject custom JavaScript in two locations:



    • Head Section: Scripts injected before the closing </head> tag

    • Footer Section: Scripts injected before the closing </footer> tag



    Note: Currently, the static site export does not include any syntax highlighting by default. To add syntax highlighting, you can inject Prism.js or similar libraries in the "Inject JS to <head>" section. For example:


    <link href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/themes/prism.min.css" rel="stylesheet" />
    
    <script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/prism.min.js"></script>


    Example JS usage:


    <script>console.log('Hello from above!');</script>
    
    <script src="/analytics.js"></script>

    Once again, nanuq is a simple micro tool for those who write directly on bearblog.dev without backups, making it easy to keep your content safe, flexible, and almost future-proof. While nanuq is fully functional and ready to use, it is currently in beta. If you encounter any bugs or error messages, please report them to me along with your CSV file.

    My only memory of Neopets is my friend getting banned


    I was around ten years old at the time. Please remember that English is my second language (ESL), and I had barely learnt any as a child. I only remember one of my friends walking up to me and asking, "Hey. You know English right?"


    I had immigrated to New Zealand for a few years before coming back. "A bit," I said, nodding.


    She explained to me that she had been playing a game called Neopets, except she had gotten banned and was terribly confused as to why.


    "I wrote C-U-M in the chat," she said. "But it's not even a bad word. It's just latin for 'with', right?"


    The two of us grabbed an English dictionary and started poring over it. At the end of the session, we agreed: 'Cum' simply meant 'with' or 'and'. There was nothing wrong with what she typed.


    "You got banned," I said.


    "I got banned," she replied.


    "Weird."


    I never thought about it again until I moved back to New Zealand and all the teenage boys around me started shouting the latin word for 'with'.

    dumbphones as a new status symbol


    After the April Fools joke yesterday, now a sincere post:


    At the beginning of this week, I wondered if dumbphones are, or will be, a new status symbol.


    Status symbols are, at their core, an indication of one's social or economic standing. In practice, they usually form aspirational trends with aspects that are unattainable for the masses, whether it's expensive brands, body types, social media content, or more. They're there to ascribe certain positive values to people based on the symbol.


    I am biased in that exactly this type of online discourse catches my eye, but I do believe the mood has been shifting over the past few years. Seeing smartphones and social media as problematic or downright evil has moved from a fringe position to a mainstream issue with a thousand books and a million blog posts and magazine articles. A lot of people now talk about how they significantly lessened their activity on platforms or deleted their accounts. If not for ethical reasons, then for mental health reasons, or from being tired of ads and boring influencers.


    Before this, we had an era idolizing influencers and other online celebs who made it a habit to post a lot, recorded everything, overshared, and documented their lives, and so people wanted to do that too. The aesthetic, the perfect pictures showcasing the perfect lifestyle, going viral off a meme or skit and getting paid for it: all of that was normalized, but it still had an aura of unattainability and exclusivity. Not everyone has the body for it or an eye for good shots and content, and it used to be harder to get followers, get noticed by brands, and land their sponsorships.


    Now though, it feels like they’re making influencers in a factory and people are tired of sharing things just to get annoying comments and likewise know too much personal stuff about complete strangers. Their feeds are now trash. As usual, the pendulum swings.


    I see more regard for privacy - not necessarily the tech kind, more like simply choosing to keep things to yourself online. More about disconnecting, the idea that offline is the new luxury, being harder to reach. ‘Chronically online’ is an insult. People raved about that joke app that makes you touch grass, and many use apps like onesec. More people talk about getting a dumbphone, particularly feature flipphones1 (not the smartphone foldables). The general comeback of Y2K aesthetics2 and of separate MP3 players and digital cameras3 lends itself to that, too. I even tried out the Barbie flipphone, but returned it because it felt like a Happy Meal toy and not like a proper phone.


    So in a world where constant connection, recording, and oversharing while scrolling for hours a day has been very normal and even presented as aspirational for years, it’s different not to be doing that; but it’s even more radical to voluntarily take the opportunity to do so away from yourself. To me, many status symbols have something radical about them (positive or negative).


    It has the potential to be the new cool: Mysterious. Untouchable. De-influenced. It carries an image of nostalgia, cooler designs and customization, better mental health, offline connections and being outside, having eclectic hobbies without milking them for content. Then the cunty visuals of a flipphone: Their iconic look and associated femininity, clapping it shut, the charms. On the other hand, the small size, simplicity and stability of a non-flip featurephone. They were not meant to market stuff to you constantly and replace everything.


    If the models lack a (good) camera, it might express: I live in the moment. I don’t care to record everything. I don’t need the attention from a particularly good picture. I’m okay with bringing a separate camera when I need it.


    The small screen and usual lack of Spotify or the type of music capabilities we’re used to is so different to constantly drowning in entertainment, ads and distracting yourself in every free moment. To me it suggests: You have my attention, and this is a mere tool, not a toy or pacifier. It sticks out among people who frantically pull out the phone every other minute and say they struggle with their attention span.


    The disconnection to certain apps, feeds and notifications makes me think you’re okay with missing out, you’re okay with not constantly checking upvotes, followers and comments, and fine with not knowing the latest awful news and drama.


    Where it gets exclusive and therefore interesting as a status symbol is that so many people are addicted to apps, checking and validation, so making the switch is actually difficult - even if they really want to. And not only that, many of us now actually rely on a smartphone for 2FA codes, QR code scanning, transportation and companion apps for work and personal matters. Not needing that suggests independence, no workplace requiring it and a lack of accounts that need to be protected.


    This functions as a sort of class signifier. Not needing to blow up and make money through social media because you have enough; no workplace, or not the type of workplace, that requires a smartphone; the close relation to wellness/soft life/selfcare content via the aspect of disconnection and improved mental health, which is often inherently luxurious, as most people do not have the time or money for these routines and products. Fittingly, it's also about taking your time back and not scrolling feeds for hours, which makes sense, as time especially is a big component of privilege. Money buys leisure and has always been a big part of the class divide (also see the concept of "the leisure class"4).


    That’s probably something a significant number of people subconsciously yearn for. It helps that people can see that their favorite rich celebs either completely delete their accounts or periodically disappear, modeling a new type of behavior from the people they admire that helps it take off as a trend.


    That’s when people who made the switch to a dumbphone anyway become the new trend, and the dumbphone becomes a status symbol.


    Is that projecting a whole lot of things onto people who downgrade to a dumbphone? Sure, but that’s partly what status symbols are about: you are supposed to assume a whole lot of positive things about the associated person, whether it’s actually true or not. Even if it’s just supposed to scream “I’m rich!” when they’re not. You’re supposed to think they can afford something you can’t, and it doesn’t have to be material; in the case of a dumbphone as a status symbol, you’ll think they can afford to be disconnected in ways you cannot, so you covet that kind of life if you’ve been fed up with your smartphone.


    In conclusion, I can totally see that happening, and we might be in the beginning stages of it now. I wouldn't be surprised if Megan Thee Stallion rapped about throwing away her smartphone next. Sounds dumb? I remember when a TikToker online correctly predicted that due to the intense prices, fresh produce would become the new status symbol, and it did5.



    Reply via email




    1. Eddy Burback, Alex Ernst, Chloe Lau and many many more, especially since the start of the year. I actually started writing this before Eddy Burback uploaded his, and seeing it made me realize how timely the topic is; if someone as big as him is doing that, there has to be something ringing true in my post.


    2. Y2K Fashion and see: Ice Spice’s newest album, Brooke Candy’s Flipphone, Barbie movie etc.


    3. If you didn't know: Digital cameras are back, and so are MP3 players and refurbished iPods. If you want separate music and camera, then a lot of the big draws of a phone that combines it all falls away.


    4. The Theory of the Leisure Class by Thorstein Veblen, 1899


    5. 1 / 2 / 3 / 4 and tbh too many more


    Maybe I Need A Soft Reboot


    Lately, I've been giving a lot of thought to shutting down my blog. I haven't felt compelled to write much, and I feel a bit disconnected from the BearBlog/IndieWeb community. But I've learned over the years that making rash decisions rarely works out, so I've been sitting with these thoughts and brainstorming what comes next.


    This morning, I asked myself, why do I blog? Really? Like, let's cut out all the BS. Why did I ever start blogging?




    When I think back, it depends on when you consider I started blogging. I would say around 2004/2005 is when I started blogging in a traditional sense. I mean, I wrote on various websites before then, but as far as using a blogging platform and just writing, it was around twenty years ago. I began blogging out of frustration.


    I was hurting, life was hard, and things weren't going my way. I needed a place to vent, and rather than my lonely text file on my computer, I decided to share my frustrations with the world via a colorful Blogger theme. Eventually these posts were also published on MySpace, where I had an actual audience of friends and family. Being a stubborn young adult, I let my personal beliefs almost destroy several friendships as I stood on my soapbox and preached. I realized that abusing my friends and family with my aggressive thoughts on the world was not fair to them, so I retreated back to my Blogger site, where my opinions were optional to read.


    I grew tired of ranting and raving and found the most joy in sharing my passions. Over the next couple of decades, I'd bounce between writing about my personal life and writing about pop culture, usually through a retro lens. Blogging success never came my way, but I enjoyed exploring both myself and my hobbies, although at times I believe I used my blogs to try and shape who I was, and not in a healthy way. I'd start random experiments, or I'd force myself to go do things under the guise of helping myself get out, when in actuality it was because I felt obligated to. When you start blogging out of obligation, or doing things just to create content, it becomes a non-paying job. In my defense, during these years I was in a terrible marriage. I think my blog was a distraction from all that was going wrong in my life, and I needed something to call my own, even if it was superficial.


    Things changed in 2020. I took a break, relaunched my blog on Blogger for the first time in years, and wrote just about every day. My posts were filled with dumb jokes, goofy gifs, and bright colors, and were heavily influenced by The Good Place, a popular television show at the time. Despite all that was going on, I was possibly happier with my blogging during the first six months of 2020 than I've ever been. I wouldn't say the quality was all that great, but I liked what I was doing, blending personal anecdotes with randomness.


    In mid-2020, I traded in my bright green website for a minimalistic black-and-white Write.as blog as a way to distance myself from Google. My little site matured over time, and I cut out the goofiness. After a few starts and stops, I found myself with a tiny audience, and I've made some wonderful friends over the past couple of years, which is probably the only reason I haven't nuked my blog yet. How can I destroy something that has brought me so much? I have a forum of great people I chat with daily, all because somewhere along the way, we connected via my blog or theirs.


    Having spent time this morning thinking, and writing all of this out, I think I've come to realize that my current style of blogging is just not filling my cup right now. Maybe it feels inauthentic or maybe it's just too much trouble. I think what I need is something a bit quicker and easier to use, where I can post smaller posts more often.


    Yesterday, I wrote about feeling exhausted from the internet. I received a few messages from people agreeing with my sentiment, and a couple disagreeing. The one quote that really caught me off guard was from Parker, who mentioned he feels optimistic about the internet. I'm sure he won't mind me quoting his email:



    I'm actively excited about the internet right now. I'm going deep on lots of personal blogs, and finding the online social group that I want. It's all e-mails, RSS, and comment-free blogs.



    This quote inspired me to write this post. I was inspired to reassess how I'm blogging and what I'm looking for on the internet. Maybe I need a soft reboot. Maybe it's time to be proactive in the way I use the internet and in the content I consume on a daily basis. Maybe... just maybe... I need to be a bit more of myself.


    What does this look like? I'm not 100% sure yet, but I do think I'm going to pull up my old posts from 2020 for a little inspiration until I figure it out.



    Reply via Email

    The meaning of life


    I remember when an entomologist (an expert on insects) was asked what benefit ticks provide. "None at all," he replied. The follow-up question, of course, was why they exist at all. "Because the opportunity arose."


    It made me think about the questions we ask about our own existence.


    We like to believe that everything has a meaning and a connection. That everything is part of a perfect whole.


    We search and search, wanting the answer served to us on a silver platter. Perhaps that's why we never find it: we're looking in the wrong place.


    Maybe we ourselves hold the solution to the riddle without realizing it. Like using a torch to search for something, not realizing that it's the torch in our hand we're looking for.


    What if it's simply that the meaning of life is for us to create that meaning ourselves? To build our own purpose, brick by brick, experience by experience, and to share that passion with the rest of the world.


    Maybe the meaning of life is to be meaningful.

    i feel


    when i was in elementary school, my 4th grade class had a class hamster. we named him george. i often used him to gauge my moods. when i was happy and excited and ready to take on the world, i felt sorry for him. george was cooped up in a cage all day long without any aspirations or goals. he had the same boring routine every day. the same faces looking in at him through his cage. he was doomed to a meaningless life, and someday, a meaningless death.


    but when i was miserable and lonely and felt like nothing would be okay ever again, i felt jealous of him. george had such a simple life. i wished it were me that got to eat food out of a tiny little bowl and drink water from a straw that was just my height at set times every day. i wished it were me that only had to worry about whether or not i wanted to take a nap or take a spin on my wheel. no commitments, no disappointments.


    george the class hamster died a peaceful death towards the beginning of my 5th grade year. however, i still think about him sometimes. if i had really gotten a choice to either be a bold, brave human or a quiet, weak hamster, which one would i choose? and which one would my 10 year old self have chosen?

    i became an album listener


    back when I used spotify, I used to listen to music by shuffling a massive 1k song playlist. I would occasionally listen to an album from start to finish but that was really rare.


    with my departure from spotify I started downloading music and consuming it more intentionally. nowadays I mostly listen to albums and it's great. one of the main benefits is I'm finding my own favourite tracks. there are so many hidden gems and finding them has been a joy.


    I also feel like I'm experiencing the music like artists intended. listening to albums is like eating a whole meal as opposed to snacking with single songs.


    I don't know what I'm trying to say with this post. maybe if you dig some artist's hit songs, set aside some time and give their album a shot. but I'm not your dad, do whatever you want ;)


    reply via email

    liked this post? you can follow my rss feed for more. i'd also appreciate it if you toasted this post below. 🍞

    re: "the great scrape"


    I read Herman's blog post "The Great Scrape" about AI scraping and its impact on Bear. As the person behind bear.css.observer, his new security measures and strict Cloudflare checks threw a wrench in my little worker.js setup. Talk about being caught in the middle...


    I want to respect Herman's fight against AI scraping while keeping bear.css.observer working for real users. My solution handles Cloudflare's verification by using browser-like headers, managing cookies properly, dealing with challenge-response steps, and implementing decent cache control.
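    The general shape of that header-and-cookie handling can be sketched in plain JavaScript. To be clear, this is a hypothetical, minimal illustration of the idea — the function names, header values, and cookie logic are my own assumptions for this sketch, not the actual worker.js or anything Bear-specific:

    ```javascript
    // Browser-like request headers, so the fetch doesn't look like a bare bot.
    function browserHeaders(extra = {}) {
      return {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9",
        ...extra,
      };
    }

    // Merge Set-Cookie response values into a simple cookie jar,
    // so challenge cookies survive into the follow-up request.
    function mergeCookies(jar, setCookieValues) {
      for (const raw of setCookieValues) {
        const [pair] = raw.split(";"); // drop attributes like Path/Expires/HttpOnly
        const eq = pair.indexOf("=");
        if (eq > 0) jar[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
      }
      return jar;
    }

    // Serialize the jar back into a Cookie request header.
    function cookieHeader(jar) {
      return Object.entries(jar).map(([k, v]) => `${k}=${v}`).join("; ");
    }
    ```

    In a worker, the jar would be threaded through the challenge round-trip: send `browserHeaders({ Cookie: cookieHeader(jar) })`, merge any `Set-Cookie` values from the response back into the jar, and retry; cache control is then mostly a matter of honoring the response's `Cache-Control` header when storing the result.
    
    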


    This keeps my web worker running without undermining Bear's security - I'm just making sure legitimate requests get through properly. Of course, this might break tomorrow as security evolves, and that's fine - comes with the territory when building unofficial tools. I'm 100% behind Herman protecting Bear.




    Personally, I'm just glad my automations and webhooks still work. I need these for my workflow since Bear has no official API. My "dirty" approach to syncing and rebuilding my blog has been solid so far. Fingers crossed it stays that way while respecting what Herman's trying to do.

    Dear Diary, Shut Up and Listen


    For ten years, my journal has been my closest confidant. Over 1,500 daily notes, a million words, and enough self-analysis to make Freud roll his eyes. Then, one morning, I typed out the title for a new entry: “Pale Three Percent”.


    That day, I realised something quietly devastating: of all those words, maybe 30,000 — just 3% — had led anywhere meaningful. The rest? Unfinished thoughts, circular monologues, and the psychological equivalent of pacing in socks. ==Introspection==, it turns out, ==is not the same as transformation.== And journaling — my primary tool, my laboratory — had become a place where writing disappeared instead of beginning.


    So I changed. I stopped treating my journal like a confessional black box and started treating it like a studio. I broke apart the long, indulgent entries into smaller, livelier notes — the kind that could grow into essays, scenes, letters to the self, or quiet provocations. I began re-reading what I wrote. Highlighting. Cutting. Connecting the old with the new. And in doing so, I ==shifted from being just a writer to also being a reader of my own psyche== — a humbling, instructive role that taught me how to write with more care and how to reflect with more honesty.


    Because here’s the truth: introspection can absolutely go wrong. It can become ==rumination in disguise== — a loop dressed in contemplative language. You feel productive while avoiding actual movement. You revisit the same idea with slightly different metaphors and convince yourself ==it’s growth. It’s not. It’s rehearsal==.


    So what does constructive introspection look like? It doesn’t mean endless journaling, spiritual navel-gazing, or arguing with your inner critic in longhand. It means learning how to be with yourself in a way that’s curious, structured, and supportive of change.


    That might look like:



    • Reading yourself: ==revisiting your notes==, not just writing them. Seeing which parts still carry charge, and which ones are decoys.

    • Dialogue-based journaling: talking to your inner critic, your future self, your scared part — not just about them.

    • ==Prompting your psyche==: not with affirmations, but with the questions you hope no one will ask — especially not yourself.

    • Tracking patterns: ==noticing when you’re writing the same story== with a different villain.

    • Interrupting loops: calling out the voice that sounds wise but just wants you to stay comfortable.

    • Designing rituals: ==giving form to your growth through repeated, symbolic acts== — whether that’s a weekly rewrite, a seasonal check-in, or pulling symbols from a system that speaks more in images than instructions.


    I still journal every day. It’s still my primary method. While I do practice yoga, walk alone, and engage with art, poetry, and archetypes, everything — every part, symbol, pattern, dream, contradiction — eventually enters through typing. It’s where the psyche performs, dialogues, transforms. The keyboard is my therapist, the words are my medicine.


    These techniques have helped me transform my journal from a confessional to a workshop. But even the most dedicated introspectors need the right lens through which to view their inner landscape.


    What do we do with physical lenses?



    • We ==adjust them== to bring blurry thoughts into focus.

    • We ==switch them== depending on what part of ourselves we're examining — wide-angle for life patterns, macro for immediate emotions.

    • We ==clean them== when they get fogged up with assumptions or defensive thinking.

    • We ==look through them, not at them== — the lens is a tool, not the subject itself. They don't invent the truth — they clarify what's already there.


    This is why I suggest exploring different lenses for self-inquiry.


    Here are a few examples of lenses worth borrowing, depending on how you like to look inward:



    • For depth-oriented structure, try James Hollis. He’s the therapist for your soul’s grown-up voice.



    • For dialogue and inner parts, explore the Stones’ (Hal and Sidra Stone) Voice Dialogue. Give your inner voices a mic — not just a seat.



    • For mystical-spiritual creativity, Pamela Eakins might show you how to map the unmappable. Think cosmic symbolism over analysis.



    • Julia Cameron's morning pages also deserve an honorable mention, though her devotional tone and recovery-oriented approach never quite sang in my register.




    Whichever lens you choose, what matters most is what you do with what you see. ==Ask yourself:==



    • What are you doing with what you find?

    • What truth have you been circling for years, too frightened to name?

    • Where in your life are you mistaking comfort for clarity?

    • How are your routines shielding you from the questions that would truly undo you?

    • What fear would you have to face if you stopped editing your thoughts and started listening to them?


    If something in you nodded — even if your face didn’t move — let the arrow below know it’s not alone with a tap. Even prickly writings like a bit of sunshine. Feel free to leave a bluesky comment or drop me a note — I read and respond to every message.

    your blog’s scent


    What does this website smell like to you, if you had to imagine a scent?


    Is it cotton candy and bubblegum because of the pink? Caramel seasalt for the dark mode?


    Since this is my online home, let me tell you what scents mix in my real life. I’m not usually a fan of scented products; they need to be really subtle and not too overwhelming. Lush and Foamie stuff is usually too much.


    But my plushie boar, Benji, smells softly like ==lavender== because of his filling. I also grow lavender and ==rosemary== on my balcony. Rosemary or lavender is what I usually choose for hand soap and dish soap.
    I also have a dry body oil with a strong rosemary note.


    I prefer my laundry to be pretty neutral and unscented, just normal ==fresh laundry== smell, so the next stuff is my deodorant - its scent is ==vanilla orchid==. My new lipbalm is ==cherry vanilla==. I don’t use perfume at the moment, and my shower soap bars don’t really have a scent. There is a liquid shampoo the pharmacy gifted to customers that smells wonderfully like freshly peeled ==orange==, and I absolutely adore it.


    If I could, I would make it so when you rub on the page, you would smell that, like those scented papers (especially the Diddl papers everyone used to trade where I grew up).


    I wonder what yours smells like?



    Reply via email



    Walking the Archipelago, or: Couch to 1080p


    It's been a week of eventfulness as well as anti-events.


    I took the last of my ADHD medicine last Monday (the 24th), so I contacted my doctor that morning to send another prescription. They said it can take "up to 72 hours" and offered me no guarantees that it would be filled before the end of the day. This is frustrating, because whenever I try to fill the prescription in advance, the pharmacy website yells at me for trying to fill it prematurely. Why can't they just fill it, because they know it's coming up for renewal soon, but not let me have it until the renewal date? Why the attitude? It makes me anxious, so I wait until the day it's due to fill it, because I don't want to get flagged as some sort of non-compliant drug seeker or something.


    So unsure if I would be able to get it filled, I walked to the pharmacy after work to see if it was there. It wasn't, and it would be a waste of energy to catch the bus home and then catch it back to the pharmacy later, so I just kind of hung around. I got fast food for dinner. I got an extremely expensive but very good cold brew coffee from a little doughnut shop called "Duncan's". It's apparently a chain headquartered in Massachusetts. I'm not sure how they ended up in my neck of the woods, but they make a good cup of coffee.


    I hung around until 18h30, which is when my doctor's office closes, and checked to see if my medicine was there one last time. It wasn't, so I caught the bus home. In the future I guess I need to contact my doctor's office for refills instead of the pharmacy, so I can do it 72h in advance, but when I do that, it tends to fill up my profile with multiple prescriptions for the same drug, and I'm worried that's going to get me in trouble somehow too. Why do I keep asking for redundant prescriptions for a controlled substance? I feel like someone would find it suspicious. It's all very kafkaesque.


    Since I didn't have my medicine on Tuesday morning, it was an anti-day. I struggled to get through work, struggled to go to the pharmacy, struggled to pick up my medicine, struggled to come home, struggled to make dinner and finally collapsed into bed at 21h00.


    Wednesday I had my medicine and was back to normal, which was good because I had scheduled a live stream for that evening and would've had to cancel if I was unmedicated. I'll talk about that below.


    Thursday was Sunny's vet checkup, and I'm happy to report that she's as healthy as she appears to be. The only major issue is a bit of gingivitis, which is making it hard for her to eat. Getting that cared for is going to be very expensive, so we're going to start saving. In the meantime, we're only giving her soft food, and I'm mashing it up for her. I used to occasionally care for a relative's cat with bad dental issues, and mashing the food into a paste-like consistency helped him. We're also feeding her less food at more frequent intervals, which helps her not, to coin an expression, bite off more than she can chew. We made an appointment to get her spayed, which will give us a lot of peace of mind, since the vet said her amorous phase would probably recur every other week. Luckily that's not nearly as much as the dental work, but still gotta wait for a couple more paychecks before we're stable enough. I'm glad she doesn't have anything else seriously wrong with her.


    Friday, I worked on this blog post. Work is still busy, but not as overwhelming as the winter was. From the beginning of Nov. to the end of Feb., one of my coworkers was out having and recovering from surgery, and it was on me to cover for most of her daily job responsibilities. Doing that while trying to stay on top of my own work was mentally taxing. I still have a large backlog to work through, but being able to focus only on my own work has been like letting my brain out of a straitjacket.


    I don't have much time to write. My goal is to work on a post in little bits and pieces during work and have at least one entry a week. This is already rough for my ADHD brain to deal with, but I'm adjusting to it. I have an hour for lunch, which would be enough time to write what I consider a good post every day if I could use a computer, not so much on a phone. But it was enough. At least, it was. I have to spend my lunch break on something else now. More on that after this video game break.


    Exploring the Archipelago


    I'm a big fan of randomizers. It's great to be able to experience my favorite games in a fresh new way. I've finished randos for Dragon Warrior, Earthbound, Super Mario Bros. 3 and Super Win the Game.


    It got a bit of press coverage, so you may have heard about a Games Done Quick event in 2019 featuring a Super Metroid/Link to the Past mashup. One person plays LttP, one plays SM, and as they explore their randomized worlds, they find items that the other player needs to progress through their own. It creates a unique experience of indirect cooperation. At no point do the players interact directly, but each player's progress is fundamentally linked with the other.


    Archipelago is the extension of this concept into other games. A huge number of people are collaborating to add randomization and multiworld support to a wide variety of games. The number of supported games is bonkers. The idea that I could play Adventure for the Atari 2600 and it'll meaningfully interact with my friend's game of Dark Souls 3 seems like an April Fool's Day joke, but it's real. In fact, AF Day is the software's anniversary and a bunch of new games were just added, so it's a good time to check it out.




    Anyway, my streaming partner and I were wanting to get back into it and were looking for something to play. He had expressed interest in trying a randomizer before, and it's something I can play on my crappy old computer, so I suggested we check it out. I played Super Mario World and he played The Legend of Zelda. We finished after a couple streams, and we're in the process of playing a switcheroo, so now I'm playing Zelda and he's playing SMW. We're having a great time. Honestly I'd be happy if it were just playing games with my friend, but hanging out with some cool people in the chat is a nice bonus. We're trying to stream for a couple hours twice a week.


    Here's the playlist with every stream recording so far. It contains both my and my friend's sides. The best way to watch, in my opinion, is to sync them up side-by-side with one video muted. Unfortunately, there's no good way to automate this for people. Believe me, I tried. That's one reason this entry took so long: I was researching and testing what options are out there. Youtube Doubler is dead, Youtube Multiplier is riddled with ads (and there's no way to add a delay), and every other tool I found that claimed to do what I want didn't work. I'm not surprised; Google is constantly removing functionality and breaking APIs, so all the tools that interact with it are rickety at best. If I had a better computer, I'd try to edit them into a compilation vid, but I doubt any NLE would be anywhere close to functional on my laptop.


    Rattling the Can


    Speaking of bad computers, I'm accepting donations if anyone has a few bucks to put towards the cause, or has an old but decent computer you're not using and want to give a good home. My needs are modest: I have no interest in playing games or video higher than 1080p60, and no interest in recording or editing video higher than 720p60. Based on my research, I should be able to get a refurbished computer that suits my needs for about $250. Now that crypto has become a Weyland-Yutani-scale industry and all the GPU hype is around AI slop and bonkers real-time post-processing,1 GPUs that are Pretty Good at games and video are fairly affordable.


Anyway, if you'd like to contribute funds, you can click the tip jar link in the navbar or click here for my Ko-Fi (already 20% funded!), and if you want to contribute hardware, send me a message. Anything with at least a GTX 16 series and 16GB of RAM is probably fine. I can cover shipping if you're in North America.


    I want to stress that the world is shit and I will absolutely survive without a new computer, so this should be last on your priority list as far as charitable giving goes. But if you have a few bucks in your entertainment budget and would like to use it to thank me for / assist me in entertaining you, it'd be appreciated.


    Why This Took So Long


    A couple weeks ago I noticed that my back was in agonizing pain when I woke up in the mornings. It was definitely some sort of muscle stiffness and not something wrong with my bones, because once I got up and started moving around, the pain subsided and would only recur if I contorted my body in specific ways.


    "Huh," I thought, "this is agonizing. I should probably do something about this." So I committed myself to doing something that I had wanted to do anyway once the weather got nicer, which is start walking during my lunch breaks. I had sort of forgotten about this quest, because it was easy for me to do other things instead, but the back pain brought it into sharp focus.


    So, for the past couple weeks I've been going on a walk every day for an hour. I should've realized that sitting most of the day at work and home was going to start causing more issues as I got older. I walk to work and home from work every day, but it's not quite enough.


Luckily I enjoy walking, and my only issue was finding time for it. I downloaded a simple step counter app[2] to see how I'm doing, and between my lunch and "commute", I'm getting just over 10,000 steps a day. Which is the number everyone says is a good one. Nice and round.





    I, uh, may have slacked a bit over the weekend



I'm happy to report that my back pain has already subsided almost entirely. I'm still a little stiff getting out of bed, but it's been several days since I had a morning where I questioned whether I'd be able to get up at all.


    Unfortunately, and you knew this was coming, every silver lining has its cloud: this means that the chunk of time that I was most likely to blog is now spoken for. I don't have any regrets, because being in agonizing pain is also not conducive to blogging, but it is disappointing. I'll probably have to ask you to expect fewer and shorter entries until something changes about my work situation. Thanks for understanding 🦝




    1. Peri-processing?


    2. Someone's working on a walking game for Archipelago à la Pokémon Go, which would've been the perfect connective tissue between two otherwise unrelated topics, but I haven't yet figured out how it works. I'm devastated.


    218. Crash out at the Buddhist Temple

    📅 7 March 2025



That Friday night and the following Saturday were among the worst I've had in a while. It was exhausting, physically and mentally. I decided to plan a night out with some friends at the terrace. I wanted to have a French party, so I asked my neighbors and my French friends if they could join me for a French night out with alcohol and terrace sitting. Most of them were up for it and some of them flaked; the two people who showed up were my good friends, but they arrived about 30 minutes to an hour late. We run on SE Asia time down here.


I had a night full of laughter and eventually found my ass in the club. I did not want to be in the club. I was getting dragged around the clubbing district by one of my friends (who I now call my son), and I was not having the best time of my life. My friend M is freshly 22 years old, working his first big adult job out here, soaking in his independence; it's the longest he's ever been away from family.


So I took my friends out for a night out, but didn't get home until 4AM. I had an unexpected house guest, M. My little friend was going through a rough breakup with his long-distance girlfriend. In short, he was a mess, having a crash out in Decathlon actually. I spent the late morning and early afternoon keeping him company, but I was just so tired. I had a lot of stuff on my mind and I wasn't in the right place to be a source of comfort. I needed a reset, but by myself. I couldn't just hang my friend out to dry though, so I stuck it out. I was literally running on fumes. My social battery, energy levels, and any semblance of patience were on the brink of collapse.




It was a Saturday afternoon, around 3 PM. I was drained from last night's shenanigans and from not getting much sleep. I was with my friends at a coffee shop, and we had planned a week before to go to the Buddhist temple. The week prior, I had been asking about ways to find more mindfulness and peace, and my friend suggested I visit the temple for a bit. They thought it would help me reset, that being in a peaceful place and hearing some wisdom might make me feel better. I tagged along while nursing my slight hangover and a cup of tea.


    At the temple, a group of Buddhist scholars sat at a table offering readings. They asked for my full name, waved their hands in circles, and then told me an ancestral spirit was attached to me and blocking my financial success. Then came the price tag. For 900 RM (about 175 EUR / 150 GBP / 190 USD), they could cleanse it.


    Basically, this is what they told me:



You have an ancestral spirit attached to you - 3rd generation on your father's side. This ancestral spirit is eating away 70% of your finances and taking away your livelihood, your soul, your well-being essentially. You can only keep 30% of your money (or something like that). You are very intelligent, but you have encountered conmen (yes), strangers that are trying to take advantage of you (yes), and many boyfriends have stolen a lot from you (yes). You have to pay (they scribbled a bunch of numbers and did some calculations on the yellow piece of paper) at least 900 ringgit to us to do the cleansing ritual.



    I thought it was ridiculous. They asked if I had any questions. I was really confused. I said:


    Me: Was there anything that I did that caused this series of bad luck? Why is this happening?


    Buddhist scholars: You didn't do anything wrong, per se


    Me: Then why this much????




I broke down at the Buddhist temple, crying my eyes out about this. Not because I believed them, but because I was already overwhelmed. They were telling me negative things and asking me to pay them a ridiculous amount of money to cleanse myself? I was exhausted that day. I was so emotionally stretched thin, and now some temple scholars were trying to sell me a solution to my problems, as if a spiritual antidote has earthly monetary value. My friend actually stepped in, defending me, glaring at them. We left, and I was still trying to process everything.


    On the way out, I noticed a mosquito zapper in the monastery. A place built on the idea of non-violence had a machine actively killing insects. It felt ironic, hypocritical even. Maybe I was just too tired to brush it off, but it stuck with me. I talked about it with my friend on the train. I thought it was so strange.


    That whole experience made me realize something. I couldn’t rely on other people to grant me peace. If I wanted to feel better, I had to create that space for myself.


    I started leaning on my friends more, allowing myself to be supported instead of always trying to hold everything together alone. I've spent most of my evenings having some peace and mindfulness by myself, meditating. Not because someone told me to, not because I expected it to change my life overnight, but because I needed a way to sit in stillness and breathe.


Anyway, they wanted me to pay almost 1k to remove an ancestral spirit from me? What the hell?


    I took care of it (I didn't go back to the temple at all. That's just completely bogus to me), but damn.




    ~ meditating,


    <3 K


    🍄
    https://marblethoughts.bearblog.dev/

    The waste of waiting


    Too often, we wait for "the perfect moment."


    When will that moment come?


    After the weekend? In a few months? Next year?


    The truth is, it may never come – at least not in our definition of perfect. The real perfect moment is here and now.


    Don't procrastinate; don't hesitate; don't be afraid.


    Sign up for that scuba diving course, launch that blog, open that handmade stickers web shop.


    Go for it! See what happens.


    You may end up on a path far beyond your wildest dreams. And even if you don't, it's still a success – you've learned something valuable along the way, and you didn't waste your precious life waiting...


    Imagine all the opportunities we miss while waiting for the "ideal time." The perfect moment is a myth, a mirage that keeps us from living fully and discovering our true potential.


    Let's embrace the imperfect, the spontaneous, the now.


    Don't let fear hold you back. Life's too short for that.

    Calorie Deficits and 75 My Way



I know that the number on the scale shouldn’t matter, but it matters to me, so please leave your judgement at the front door.



I am slightly ashamed to admit it, but over the years, I feel like I have let myself go. Due to life stressors and other unimportant things, I’ve allowed staying fit to take a backseat on my never-ending list of priorities. I remember when I was younger, I used to say: I run and lift this much so I can eat whatever I want. I’d probably blow up if I stopped working out. Well, I stopped working out, and blow up I did.


The last time I tried to lose weight was in 2018. We had a trip to the Philippines for a milestone birthday. I was all in on it and managed to get within 2-3 lbs of my goal weight. After that trip, a lot of life things like moving to a new state took over, and I didn’t think about what it would take to maintain my weight.


Fast forward 7 years, and I am way above what I think my ideal weight[1] should be. Over these past 7 years, I’ve tried to lose weight, but nothing really stuck long term. Probably because I go all in way too fast, and then burn out. I know, I know. It’s a toxic quality that I am trying to improve on.


    Enter 2025.


    I wanted to set a realistic goal for myself going into 2025. My goal was to end the year ~15 lbs lighter. Roughly 1.25 lbs per month is realistic, right?


    We are 92 days into 2025, and to date I have lost 18.40 lbs. I have a milestone event coming up in July(?) and have decided that I want to hit a new weight goal that will get me to a total of around 30 lbs weight loss.


    That means I have around 11.4 lbs left to go.


    I’ve been playing around with a plan to get there.


Right now, a big part of my weight loss is a caloric deficit. In February, I was doing around 1,500 calories a day (10,500/week) and then reduced it even more this past month to 1,200 calories[2] (or 8,400/week). It’s really difficult, but not impossible.


Anyways, I wanted to shake it up a little more. There is a challenge called 75 Hard — it produces GREAT results from what I’ve heard and seen. Unfortunately it’s not something that I’m ready to do at the moment. However, I like the idea, and wanted to do a similar challenge inspired by it that I’m calling 75 My Way.


    🚨 75 Hard vs 75 My Way Rules



    1. Follow a nutrition plan: I’m continuing my 1,200/day (8,400 calories/week) plan

2. Two 45-minute workouts - one MUST be outside daily: this is not realistic for me at this time, so I’m changing it up to 30 minutes OR 5,000 steps a day. I typically do 45-minute workouts[3] 5-6 days a week already, so 30 minutes is just giving grace to myself for those off days.

    3. Drink 1 gallon of water: I know drinking water is something I need to work on but I do know that 1 gallon is a bit too much for my build. I’m doing 3 liters a day as a realistic compromise.

4. Read 10 pages of a nonfiction or personal development-focused book: I read a lot already, so I’m just modifying this to read a few pages of ANY book a day.

    5. Take a progress picture: I’m keeping this one


OK, that’s it. Today is Day 1 of 75.[4] Wish me luck.

    Comments

    If you'd like to comment, please send me an email, respond on any social media of mine you know, or sign my Guestbook.




    1. a weight that makes me feel and look good FOR MYSELF. This ideal weight is still slightly above the generic number given for someone of my height so we won’t talk about that.


2. Before anyone tells me it’s too low, know that I’ve already done my research. I have calculated my TDEE and am aware of my maintenance calories, my macros, and what is needed to lose weight.


    3. I’m training for a half marathon + century ride so the plan is to go beyond 30-minutes a day anyways.


    4. As of 12:04pm, I’ve done a 12.45 mile bike ride, and taken a progress photo. I’ve only consumed 47 calories so I need to eat soon, drink more water, and read my book… but I’m getting there.


    AI Slop Wins Because Nobody's Watching


    Gen AI is getting better, and looks useful. Maybe even good. But look closely, and it feels like we're just getting faster at churning out high-grade slop. The real question is, does anyone even notice the difference anymore?


None of this is in any meaningful way "solved." The quality of the slop keeps rising, and is genuinely very, very good now, but what we keep learning is that the gap between "slop" and "something actually good" is much wider than we thought. Every generation of this stuff feels "close," and then the next one also feels "close," and so on, but we never quite get to good.


    What we ARE approaching is slop of high enough quality that the powers that be are seriously considering just using slop everywhere. The pictures don't look much like the inputs, or much like Studio Ghibli output, but nobody notices because nobody gives a damn, nobody looks. The essay is warmed over wikipedia, and contains a couple of wild errors, but nobody reads it, nobody gives a damn.


    Aren't these the same guys that want to RETVRN to Rome because of the High Quality architecture or whatever? I don't even know what's going on. It feels like the same people going on about Quality and how things got messed up, are at the same time telling me that AI output is awesome and rightfully should replace all human activity.


    The tragedy here is that all these AI bros apparently cannot tell the difference between AI slop and Renaissance Quality. They're exactly the unsophisticated slobs they desperately don't want to be.


    But slop is probably fine. It's true that nobody reads, nobody looks, nobody gives a damn. The powers that be are correct, they can just use slop for everything and fire everyone.


    Still, maybe the fear of replacement works just right for them. Lets them ignore the basics. Even Henry Ford knew he had to pay his workers enough to be able to afford his cars.

    I tried making artificial sunlight at home


    Some time ago, I saw this video by DIY Perks where they make artificial sunlight at home with a 500W LED and a gigantic (1.2m) parabolic reflector. I've been fascinated by this project ever since, and I wanted my own.


    Over the past year or so, I finally took the time to work on a similar project, but I had the idea for a different design. The issue with the parabolic reflector is that it takes a huge amount of space. Could I do something similar, but with a less bulky design? This is the story of my first attempt at this project - version 1 so to speak. Perhaps there will be a version 2 in the future. Enjoy the read!




    My idea - as others have had I'm sure - was to use an array of lenses laid out as a grid. Then, instead of a single light source, I would use a grid array of multiple LEDs, one per lens. In my mind, this would have two major advantages:



• Less bulky. The size of the device would be determined by the focal length of the individual lens elements, and because each would be small, the focal length could be small also, while maintaining a decent f-number.

    • Easier thermal management. Multiple light sources could be regular low power LEDs which wouldn't need special cooling. There would just be a lot of them, spread out over the entire device surface.


Over the course of this project, I also intended to teach myself some manufacturing and 3D design, as I didn't have any experience doing any of this. My background is software, and as you'll see I took a very software-heavy approach to this. It was all a long learning journey for me; the tools I ended up using (build123d, KiCad, FreeCAD, my own torchlensmaker, and JLC's manufacturing services) show up throughout the rest of this post.



    TL;DR: I did it! Here is the finished device sitting on my desk today, at night:


    main6.jpg


    And here it is during the day (much less impressive!)


    daylight


Beware, it's kinda hard to take good pictures of it, and I don't have the best photo gear. Here's also a video (at night):

    Kinda cool that you can see a lens flare effect in the shape of the lens grid array.


    Technical specs


    Mechanical:



    • Lens square side length: 30mm

    • Effective Focal length: 55mm

    • Array size: 6x6 = 36 LEDs

    • Total size: 180x180mm


    Parts:



    • Lenses: 1 biconvex lens array, 1 plano-convex lens array - custom made out of PMMA acrylic, CNC fabrication with vapor polish finish @ JLCCNC

    • LEDs: LUXEON 2835 3V -- Ref: 2835HE. CRI: 95+, color temp: 4000K, 65mA.

    • PCBs: Custom design

• Mounting hardware: custom design - aluminium 6061 for the CNC parts and matte black resin for the 3D printed parts

    • Rayleigh diffuser: waterproof printing inkjet film


    General design and sizing


    To create artificial sunlight, you need four ingredients:



• Parallel light rays. The sun is so far away that light rays emitted from a point on its surface reach us essentially parallel. This is not to say that all light rays coming from the sun are parallel, as it still has a 0.5 deg apparent angular size. But they need to be pretty straight. Any light coming from an artificial light source like an LED will be going in all directions, so some optics are required.

    • High color quality. A good indicator to look for on a datasheet is the color rendering index (CRI). 95+ is recommended to achieve a good effect. I'm sure there's more color science you could get into, but CRI is a great start for off the shelf parts.

    • Rayleigh scattering, or an imitation of it.

    • A LOT of power.


Light intensity is the most important sizing constraint, so let's look at it first. Now, the sun is very bright. Like, ridiculously bright: around 100,000 lux. To achieve this with LEDs is by no means impossible, but it's a challenge. For this first version, I thought that targeting 10,000 lux would be quite enough, because it would reduce the power consumption a lot for a first prototype, and also brightness perception is logarithmic. So one tenth of the intensity is really, perceptually, almost the same as full brightness. (In the end, I estimate my design only effectively achieved something between 1,000 and 10,000 lux.)


    The general grid based design of this project really has two variables:



    • the individual LED light output, in lumens

    • the individual lens surface area in mm²


After some research, I think values between 30 and 130 lumens are typical for high CRI surface mount LEDs. So, assuming this is what we are working with, what is the required lens size to achieve the brightness of the sun?


We have to assume some non-perfect efficiency for collimating the light. This will never be 100%, and in fact may be quite low if the focal length is high, because a lot of the light will hit the side walls instead of reaching the lens. The lens itself will also absorb some light. So, taking a wild guess of 0.5 for the overall optical efficiency, and taking three lumen values of 30, 80, and 130, we get this plot:


    plot
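The relation behind this plot is simple: each collimated cell delivers (efficiency × LED flux) spread over its lens area, and lux is just lumens per square meter. Here's a quick sketch of that back-of-envelope math (my own helper function, using the guesses above):

```python
# Illuminance of one collimated grid cell: lux = lumens / m^2.
# efficiency=0.5 is the wild guess from the text, not a measured value.
def illuminance_lux(flux_lm, side_mm, efficiency=0.5):
    area_m2 = (side_mm / 1000.0) ** 2
    return efficiency * flux_lm / area_m2

# The three LED flux values from the plot, at a 30mm lens side:
for flux in (30, 80, 130):
    print(f"{flux} lm -> {illuminance_lux(flux, 30):,.0f} lux")
```

With an 80 lm LED, a 30mm cell lands around 44,000 lux on paper - between the 10,000 lux target and full sunlight, before real-world losses eat into it.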


With that in mind, I selected 30mm as my lens square side length. Presumably, this would be small enough to achieve some effect, but not so small that the lenses would be too hard to make.


    Lenses


Focal length, and the lens shape in general, is the next design consideration. The goal is to have perfectly parallel light rays. In theory, with a perfect point source and a perfect lens, this is easy: put the light source at the lens's focal point, and you're done. In practice, a lot of things make this harder to achieve with a lens. (This is where the parabolic reflector design is superior to a lens.)



• An LED is not a point source

    • A lens will not have perfect optical performance (i.e. aberrations)

    • Mechanical reality of the device means that positioning and orientation will not be perfect

• An LED's radiation pattern is not isotropic, meaning intensity will be greater at the lens center


    This is the radiation pattern characteristics diagram from my LED datasheet:


    led-radiation
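The ideal case that those bullet points deviate from - a point source at the focal distance produces parallel rays - is easy to verify with standard paraxial ABCD ray-transfer matrices. This is a generic optics sketch of my own, not the author's torchlensmaker code:

```python
import numpy as np

# A paraxial ray is (height y, angle theta); optical elements are
# 2x2 ABCD matrices applied along the optical path.
def propagate(d):      # free-space propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):      # ideal thin lens with focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f = 55.0  # mm, the effective focal length this design ends up with
system = thin_lens(f) @ propagate(f)  # source sits at the focal distance

# Rays leaving an on-axis point source at different angles all exit
# with angle ~0, i.e. perfectly collimated (for an ideal lens).
for theta in (0.05, 0.1, 0.2):
    y_out, theta_out = system @ np.array([0.0, theta])
    print(round(theta_out, 12))  # ~0.0 for every ray
```

The bullet points above are exactly the ways a real LED-plus-lens cell departs from this ideal.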


    I wrote some custom python code to simulate the optical system I had in mind, and find the best lens shape using numerical optimization. (This code eventually became an open-source project: torchlensmaker) After a lot of experimentation, I settled on a 2 lens design:



    • Lens 1: Biconvex parabolic lens

    • Lens 2: Planoconvex parabolic lens


    The effective focal length of this two lens system is about 55mm. Focal length is a key design parameter, and here I feel like more experimentation is needed. It's a big tradeoff consideration and has a huge impact on the system design. It impacts:



    • The curvature of the lens surface, which is a key manufacturing point (you want to minimize curvature for manufacturing, which means maximizing focal length)

    • The optical efficiency of the system due to the led radiance pattern (here you want to minimize focal length, to gather more of the emitted light)

    • The device thickness (here I wanted a not-too-thick device, so to minimize focal length also)


I used a two lens system mostly to reduce the surface curvature of the lens arrays. This reduces the manufacturing cost by a lot. High curvature lenses are more expensive in general, and in this grid array design a high curvature lens will create sort of "valleys" in between the lenses. Because I was targeting CNC manufacturing, this is to be minimized to get a design that's even possible to machine.


    This is the optical simulation I had at the time I finalized the design and ordered the lenses. (Since then my simulation code has improved and I could likely do much better modeling today using the latest version of torchlensmaker):


    lenses


    With some custom build123d code I was able to make the two lenses 3D models by stacking the lenses in a grid pattern and adding edges for mounting:

(embedded 3D models of the two lens arrays)

What's really cool about using build123d for 3D modeling is that I can just change a Python variable to change the size of the array, the thickness of the lenses, or anything else really. It's all parametric out of the box because it's regular Python code! This makes exploring the design space very efficient. I've never done 3D modeling any other way, but I can't imagine ever not having the power of programming with me if I ever do it again!


    I had the lenses manufactured out of PMMA acrylic at JLC with a vapor polish finish. Total cost for the lenses was about 55€ which is really not bad!


One of the two main lens arrays, built by JLCCNC:


    assembly-lens


    LEDs


I really wanted to use the 3030 G04 from YUJILEDS, but it's only sold in 5,000-unit reels that cost $1,000 apiece... maybe for version 2 I will upgrade to those. For version 1, I settled on the LUXEON 2835 3V. They are about 3 times less bright than the YUJILEDS part, but they have good color rendering and the SMD package I was looking for. And importantly, the minimum order quantity was only 50 at JLC global sourcing.


    In the version 1 design, the grid is 6x6 which means 36 LEDs total.


    PCBs


I designed a custom PCB with KiCad. Each PCB holds 6 LEDs, laid out as 2 parallel segments of a 12V LED strip. This allows the use of a standard 12V wall-plug power supply.
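As a sanity check on the electrical side: a standard 12V strip segment is usually 3 LEDs in series plus one current-limiting resistor. That layout is my assumption here (the post only says each board is wired as 2 parallel segments), with the forward voltage and drive current taken from the LED specs above:

```python
# One 12V strip segment, assuming the common 3-LEDs-in-series layout
# (my assumption, not stated in the post).
V_SUPPLY = 12.0           # V, wall plug supply
V_FORWARD = 3.0           # V per LED (LUXEON 2835 3V)
I_TARGET = 0.065          # A, the 65mA drive current
LEDS_PER_SEGMENT = 3

v_drop = V_SUPPLY - LEDS_PER_SEGMENT * V_FORWARD  # resistor must drop 3V
r_ohms = v_drop / I_TARGET                        # ~46 ohm
p_watts = v_drop * I_TARGET                       # ~0.2 W dissipated
print(round(r_ohms, 1), "ohm,", round(p_watts, 3), "W")
```

Those are comfortable values for an ordinary SMD resistor, which fits with the "easier thermal management" goal stated at the start.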


    pcb-schematic


    pcb-layout


    The mechanical role of the PCB is very important in this design. Not only does it distribute power to the LEDs and regulate current, it also precisely positions the LEDs at the lens focal point. For this, exporting the PCB 3D model and importing it into FreeCAD was very useful to check that everything fits together: the PCB in the aluminum support baseplate, the holes on the light hoods, etc. My Python code exported the precise LED coordinates which I could input into KiCad's layout editor.
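That coordinate export might look something like this - a hypothetical reconstruction of mine, since the actual script isn't shown; only the 30mm pitch, 6x6 grid, and 180x180mm size come from the post:

```python
# Generate the 36 LED center coordinates for the 6x6 grid,
# 30mm pitch, centered on the origin of the 180x180mm board.
PITCH_MM = 30.0
GRID_N = 6

def led_centers(n=GRID_N, pitch=PITCH_MM):
    offset = (n - 1) * pitch / 2  # shift so the grid is centered
    return [(col * pitch - offset, row * pitch - offset)
            for row in range(n) for col in range(n)]

coords = led_centers()
print(len(coords))            # 36
print(coords[0], coords[-1])  # (-75.0, -75.0) (75.0, 75.0)
```

Because the same numbers drive both the build123d models and the PCB layout, the LEDs land under the lens centers by construction rather than by manual alignment.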


I had the PCB fabricated and the components assembled by JLCPCB. It's very, very cool to design an electronic board on your computer and get it fully assembled in the mail a few weeks later - no soldering required! (For this step, anyway.)


    assembly-pcb


    Mechanical mounting parts


    To mount everything together I designed 3 parts:



    • A baseplate, to hold the PCBs and the side walls. The PCBs are fitted below the baseplate, and light goes through holes drilled into the baseplate. There are also partial holes to allow for the thickness of the SMD resistors mounted on top of the PCBs, and finally two mounting holes per PCB. This is why it has so many holes :)

(embedded 3D model of the baseplate)


• Side walls to hold the lenses, with grooves to insert them into, and a larger groove to secure each wall into the baseplate. The baseplate side holes are threaded to support M2 screws securing the base of the walls. Again, JLCCNC did the drilling and threading of the holes at a great price.

(embedded 3D model of the side walls)


    • Light hoods, a rectangle block with rectangular holes. It sits on top of the PCB to shape the light coming from each LED into a cone (or really a four sided pyramid). This is to make sure light from a given LED only reaches its matching lens on the lens array, and no other. Bleed light is inevitable, but at least this prevents direct leakage.

(embedded 3D model of a light hood)

The hoods were 3D printed out of black resin; the walls and baseplate were CNC cut out of aluminum 6061.


I'm not a mechanical engineer, so this process was... trial and error. Still, the result works, so I'm quite happy with that. For a possible version 2, there's a lot I'd change in the mechanical design. But apart from the one design flaw I was able to fix manually with a drill (more on that below), everything fit together quite well on the first try.


    Rayleigh scattering


The final ingredient is Rayleigh scattering. This is the physical phenomenon that makes the sky look blue, and it's important for achieving a convincing effect. In the DIY Perks video that inspired this project, they used a homemade liquid solution with suspended particles of the correct size for Rayleigh scattering. Not super practical, and I really wanted to find another solution (get it?). Thankfully, some time after the original video, someone on the diyperks forum discovered that inkjet print film achieves a very similar effect. A quick trip to a local office supply store was all I needed here! Amazing discovery.
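The effect works because Rayleigh scattering scales as 1/wavelength⁴: short (blue) wavelengths scatter far more strongly than long (red) ones, so the scattered glow looks blue while the transmitted beam warms up. Two lines make the ratio concrete (my own illustration, not from the post):

```python
# Rayleigh scattering intensity is proportional to 1/wavelength^4.
def rayleigh_relative(wavelength_nm, reference_nm=550.0):
    return (reference_nm / wavelength_nm) ** 4

blue_vs_red = rayleigh_relative(450) / rayleigh_relative(650)
print(round(blue_vs_red, 2))  # blue (450nm) scatters ~4.35x more than red (650nm)
```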


    I didn't anticipate this step during the initial design phase, so the film is simply cut to the correct size and secured with black electrical tape.


    Assembly


    After a few weeks of design work, and another few weeks of waiting for the parts to arrive, it was finally time for assembly!


    On top of the individual 3D models made with build123d, I had a final assembly FreeCAD model with all parts fitted together, including the lenses:


    freecad_assembly


Note the green brackets that I initially planned to use. When actually assembling the walls onto the baseplate, the resulting box turned out to be so solid that I decided to drop the brackets entirely. This is why some extra unused holes remain on the side walls.


These are all the parts just after unboxing (excluding the inkjet film, solder, screws, power supply, wiring, and electrical tape):


    assembly-all-parts


The only real design flaw was the insufficient width of the grooves that hold the lenses. The lenses have an edge thickness of 1.2mm, which I had intended to fit into a 1.22mm groove. Turns out this was not enough, probably due to a combination of manufacturing tolerance and the additional thickness added by the black matte anodized finish of the aluminum part. The lenses didn't fit into the grooves!


I don't have very advanced tools at home, so my best solution was to widen the existing grooves by hand using a power drill. I bought a 1.5mm metal drill bit and achieved a decent result with 4 to 5 passes per groove. This took about 2-3 hours in total, because I had to move the bit slowly across and could only machine about a quarter of each groove's depth at a time, and there are 8 grooves total.


    drill


    Here's some more pictures of assembly below.


    The back side after soldering wires to the PCB power pins and a socket for the 12V power supply. The PCBs and hood pieces share a common mounting hole so only two screws per PCB-hood pair are used.


    assembly-back


    The front side of the baseplate + PCB + hoods assembly, but without the lenses, powered on. Don't look at it directly :)


    assembly-front


It's interesting to note that in the picture above, all of the light you can see from the LEDs is actually "bleed light": none of it is the light that's intended to go into the lenses and produce the sunlight effect.


    Testing with partial assembly of the walls and only 1 out of the 2 lenses:


    assembly-one-lens


    Testing the inkjet film layers with an avocado as a subject. I settled on using two layers of the inkjet film for the final build:


    assembly-avocado


    Cost


Overall I spent around 1000€ on this project. But this includes the cost of tools I was missing, prototype parts that I had manufactured but discarded, bulk orders for parts like LEDs and PCBs with minimum order quantities above what I needed for 1 unit, and various supplies like screws, etc. The actual raw cost of the parts alone, without shipping, to build the final unit is hard to estimate, but I would say around 300€. The most expensive parts are the CNC parts (the PMMA lenses and the aluminum baseplate and walls), accounting for about two thirds of the total price. The rest (PCBs, assembly service, LEDs, 3D printed plastic parts) was quite cheap.


    Conclusion


    As I write this, the final piece is sitting on my desk and producing a pleasant soft white glow. It's definitely nice, and I'm very proud of the result - especially because this was by far the biggest build project I have ever done.


    main1.jpg


    Thanks to this project, I've learned a ton about PCB design, electronics, CNC manufacturing, and optics. I even got so far down the side quest of learning optics that I started an open-source Python project for modeling geometric optics.
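    To make the core optical idea concrete, here is a toy paraxial (thin-lens) sketch in Python. This is illustrative only, not taken from any project code; the 5cm focal length is from the build, everything else is assumed:

```python
def thin_lens_trace(y0, theta0, f, d):
    """Propagate a paraxial ray (height y0 in m, angle theta0 in rad)
    over a distance d, then refract it through a thin lens of focal
    length f (standard ray-transfer model)."""
    y1 = y0 + d * theta0        # free space: height changes, angle doesn't
    theta1 = theta0 - y1 / f    # thin lens: angle changes, height doesn't
    return y1, theta1

f = 0.05  # 5 cm focal length, as in this build
# Rays leaving an on-axis LED placed exactly in the focal plane...
for theta0 in (-0.3, 0.0, 0.3):
    y1, theta1 = thin_lens_trace(0.0, theta0, f, d=f)
    print(f"exit height {y1:+.4f} m, exit angle {theta1:+.4f} rad")
# ...all exit parallel to the axis (angle 0): the LED appears at infinity.
```

    This is the whole trick: any fan of rays leaving a point in the focal plane collapses to a single output direction after the lens.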


    So, is it convincing as artificial sunlight?


    My honest answer to that is: partially. The geometric effect of the light source appearing at infinity works. As I pan and tilt my head from side to side, the illusion of light coming from far behind the object is 100% a success. On top of that, when I move my head into the light beam, my eyes are surprised - almost hurt - by the sudden intensity jump. This indicates that the collimation is good, and you can sort of see it in the video at the start of this post.
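    The "appears at infinity" effect comes down to parallax. As a back-of-the-envelope check (my own numbers, not from the post): for a source at distance d, moving your head sideways by x shifts the source's apparent direction by about atan(x/d).

```python
import math

def parallax_shift_deg(head_move_m, source_dist_m):
    """Apparent angular shift of a light source when the viewer
    moves sideways by head_move_m (simple triangle geometry)."""
    return math.degrees(math.atan2(head_move_m, source_dist_m))

# Moving your head 30 cm relative to a lamp 2 m away:
print(parallax_shift_deg(0.3, 2.0))     # about 8.5 degrees: clearly nearby
# The same move relative to the sun (~1.5e11 m away):
print(parallax_shift_deg(0.3, 1.5e11))  # effectively zero: "at infinity"
# Collimated light carries no recoverable source distance, so it behaves
# like the second case no matter how the viewer moves.
```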


    However, it's apparent that it's simply too weak. Don't get me wrong, it's still bright. I can't look at it directly without sunglasses, and honestly it's really hard to take a good picture of it because the contrast between the emitting area and its surroundings is very high.


    Another downside is that I can definitely make out the grid of lenses: the intensity pattern clearly reveals the grid shape. This is a minor issue, though, and not really unpleasant, and I'm sure it could be improved upon.


    If I were to ever work on a version 2, I would focus on:



    • More power. My feeling is the light output needs to be 3 to 5 times stronger to get any closer to a convincing effect, and it's not crazy to aim for as much as 10x brighter than this prototype.

    • More surface area. This prototype is 18cm x 18cm, so you only really get the effect if you can sit within the straight beam it produces, which is too narrow to resemble any kind of "fake window". A future version would need to be 2 to 4 times wider in my opinion.

    • Better optical design. I still think a refraction-based design is possible, but it requires very precise optical design and mechanical tolerances. My feeling is that a refraction-based design, especially as a grid, is very sensitive to the positioning and orientation of parts. I lack mechanical engineering skills in this area.
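    To put a rough number on that sensitivity (a paraxial estimate with assumed values, not a measurement): an emitter displaced sideways by δ from the lens axis steers the whole collimated beam by about δ/f radians.

```python
import math

def beam_tilt_deg(led_offset_m, focal_length_m):
    """Tilt of the collimated output beam when the emitter sits
    off-axis by led_offset_m in the focal plane (paraxial)."""
    return math.degrees(led_offset_m / focal_length_m)

f = 0.05  # 5 cm focal length, as in this build
for offset_mm in (0.1, 0.5, 1.0):
    tilt = beam_tilt_deg(offset_mm / 1000, f)
    print(f"{offset_mm} mm LED offset -> {tilt:.2f} deg beam tilt")
# Even sub-millimeter placement errors visibly steer each cell's beam,
# which is why a grid of lenses punishes loose mechanical tolerances.
```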


    main3.jpg


    However, there are some encouraging things that I really like about this grid-based, refractive design:



    • It's scalable. If I had built 4 identical items, I could literally stack them on top of each other and get more surface area. The "bezels" would be only 5% of the total light emitting area, and I'm sure this could be lowered. I also like that the inner design calls for repeated elements, as this introduces some economy of scale, even at the prototype level. The only part that's not trivially scalable is the lens grid. Maybe it could be injection molded for very large scale production, or for medium scale you could come up with a way to tile multiple lens grids into a larger overall grid pattern, adding some thin bezels for mounting.

    • It's compact. The total size is 19cm x 19cm x 9cm. This is quite compact for a 5cm focal length and an effective lighting area of 18cm x 18cm. Reflective designs like the DIYPerks video or commercial products like CoeLux do not achieve this form factor.

    • Thermal management is better by design. This is not really something I got into for this design, as it's quite underpowered. The whole thing runs comfortably on a 12V / 3A wall brick power supply. But this design offers great margin for scaling up because there isn't a single light source to cool down, but a number of LEDs proportional to the surface area. I suspect the main thermal issue when scaling up would be the cooling of the power supply itself, not of the lamp.
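    Combining the wish-list numbers from the previous list gives a feel for a version 2 power budget. This is a napkin estimate with assumed scale factors ("wider" read as total area), not a design:

```python
power_v1_w = 12 * 3        # 12 V / 3 A wall brick: 36 W worst case
area_scale = 4             # "2 to 4 times wider" -> up to ~4x the area
brightness_scale = 5       # "3 to 5 times stronger" per unit area

power_v2_w = power_v1_w * area_scale * brightness_scale
print(f"v1 power budget: {power_v1_w} W")
print(f"v2 power budget: {power_v2_w} W")
# Hundreds of watts: the LEDs stay spread over the panel (easy to cool),
# but the power supply becomes the concentrated heat source to manage.
```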


    As final thoughts, let me talk about the software-heavy approach I took for this project. It's awesome. If I were starting a manufacturing company today, I would do it all code-based. PCBs, 3D models, assembly, testing... I want code everywhere. The power of changing a parameter and having the entire design update with a single script is so good. Run a script and get all the production data: GERBERs, BOM, 3D models, mechanical schematics, technical diagrams, automated tolerance and electrical checks... absolutely no manual steps between changing a design parameter and being ready to send a new order to manufacturing. The PCB and CAD space is even evolving to use proper CI/CD tools, which is really exciting.
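    As a tiny illustration of that workflow (invented parameters and function, not the project's actual scripts): derive the LED grid entirely from two top-level numbers, so downstream artifacts can never drift apart.

```python
def led_grid(panel_mm, pitch_mm):
    """Return (x, y) centers in mm of a square LED grid centered
    on a square panel of side panel_mm."""
    n = int(panel_mm // pitch_mm)                  # LEDs per row/column
    margin = (panel_mm - (n - 1) * pitch_mm) / 2   # center the grid
    return [(margin + i * pitch_mm, margin + j * pitch_mm)
            for j in range(n) for i in range(n)]

# Change either parameter and everything downstream regenerates:
centers = led_grid(panel_mm=180, pitch_mm=22.5)
print(len(centers), "LED positions")    # 64 (an 8x8 grid)
print(centers[0], centers[-1])          # (11.25, 11.25) (168.75, 168.75)
# The same list would feed pad placement in the PCB tool and hole
# placement in the CAD model, keeping both in sync by construction.
```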


    I don't know if I'll ever have the time to work on version 2 of this project, but it was great fun anyway! And now I have a cool unique lamp. Thank you for reading!

    Walk 5 Miles to One Meal a Day: The Dorsey Way

    Golden light cascades down San Francisco's hills as a self-driving car navigates the morning gridlock. A woman in Lululemon inside a vehicle frantically texts her

    ...
    Full

    Golden light cascades down San Francisco's hills as a self-driving car navigates the morning gridlock. A woman in Lululemon inside a vehicle frantically texts her workout challenge group: "Just crushed a 45-minute HIIT on the Peloton. Late to my meeting. Uber surge pricing is at 3.6x!"


    The camera pans across the intersection clogged with rideshares, each a mobile island of tech workers chugging Soylent while their phones erupt with Slack notifications. "We need that deck by 10 AM!" shouts one into his AirPods. "Tell them we're not some bootstrapped startup — we're a pre-unicorn with legitimate Series A potential!"


    On the corner, a disheveled man sits cross-legged on cardboard, ignored by the vehicles streaming past. His sign reads, "Need cash to pay off my second Tesla – Venmo accepted".


    The traffic momentarily parts, and there — striding down the empty sidewalk in solitary contrast to the vehicular chaos — is a figure dressed in black. His beard is trimmed with precision, his gaze fixed forward with quiet certainty. Wait. Is that...?


    Jack Dorsey's habits read like a Silicon Valley post-IPO founder's wishlist: meditating, eating once a day, taking ice baths. Yet, the ritual that truly stands apart — the one that defies productivity-as-identity cliché — is the long walk he once took to Twitter's offices every day.


    Eighty minutes, rain or shine, step by thoughtful step. While others multi-tasked their way through inbox triage or nursed their third single-origin pour-over, Dorsey walked. Not chasing quantified-self metrics, not hacking longevity. Just to create space.


    I admit my bias: choosing Jack as a model here isn't neutral. It's not just that I've walked those same streets. Fifteen years ago, I met Dorsey briefly. And he struck me as a rare breed: the old-school hacker who didn't chase money and fame but quietly attracted it, almost as if by accident. But there was no accident.


    I've seen a quality in him that felt out of sync with the performance culture he helped enable — and yet refused to embody. There's discipline beneath his eccentricity. Stillness beneath the spectacle. In a city wired for optimization, Jack seemed to walk a different route — literally and metaphorically. Maybe I want to believe that is still possible.


    The Alchemy of a Long Walk


    Walking five miles a day in a culture obsessed with efficiency feels almost rebellious. We've been trained to see time as currency, yet here was someone willingly spending nearly an hour and a half to cover a five-mile commute. But this isn't waste — it's at the heart of deep work and inner work. A long walk becomes meditation — a slow unfurling of thoughts, a space where tangled worries loosen, and half-formed ideas rise to meet you with startling clarity.


    What others dismiss as wasted time is often where meaningful transformations quietly occur. Breakthrough ideas, too — those strange, golden insights that never come when you're hammering away at a screen, but drift in during mile three, unhurried and fully formed, as if they've been waiting for you to slow down enough to notice. These walks silence the constant internal chatter and reconnect you to your personal wisdom, buried under the noise of distractions and ambition.


    Jack isn't in this series because he's rich and peculiar. He's here because I've seen something quietly instructive in the way he operates. He models someone who found a way to move through the world without letting it move him. Not by wearing a beanie and a beard. But by virtue of the discipline. The restraint. The refusal to identify with efficiency just because the culture demands it.


    You don't have to adopt his specific routines: meditating, fasting, walking. But it's worth asking yourself:



    • Is urgency my own — or something I've picked up from others?

    • How can I reclaim any routine — as an intentional practice?

    • What's one thing I do on autopilot that deserves my full attention?

    • If I walked more slowly through my own life, what might I finally catch up to?


    The Off-Schedule life isn't about escaping the world. It's about choosing meaning over momentum, and presence over performance. Whether this writing resonates, whether you're done sprinting on borrowed time, or whether you've already found your answers to the questions above — I'd love to hear about it. If this piece had a pace, it'd be a slow walk uphill — which makes the yellow arrow below look almost like a trail marker. Tap it if you climbed that far.

Robert Birming

Are you thinking about spilling your soul online? Check out my page. It might just save you from a lifetime of regret. I’m also creating the , a...

(10)

    The Writer's Engine

    After a few weeks' break, I've returned to reading books.

    I don't recall what caused the interruption, but I remember how difficult it felt to

    ...
    Full

    After a few weeks' break, I've returned to reading books.


    I don't recall what caused the interruption, but I remember how difficult it felt to resume the habit. It felt much easier to simply turn on the TV and half-heartedly watch a documentary or familiar movie.


    Now, I pick up a book whenever I have a moment. I'm immediately immersed in the story, finding the whole experience both relaxing and inspiring.


    When it comes to writing, though, I've managed to maintain it without interruption so far. However, the same principles apply to both reading and writing: after a break, it's significantly harder to get started.


    Creativity demands attention and maintenance. We must nurture it as we would a car. If neglected, it will rust, falter, and lose its value.


    Writing is something very valuable, whether done privately or for an audience. Cherish it as the precious treasure it is. Even if it's just a few words.


    Turn the key and let the engine idle; that's enough to keep things running smoothly.

    Jekyll & Hyde: My Two Blogging Worlds

    I've launched a new blog! Welcome to robertbirming.com (RSS)

    For a long time, I've considered creating a separate space for my general life

    ...
    Full

    I've launched a new blog! Welcome to robertbirming.com (RSS)


    For a long time, I've considered creating a separate space for my general life musings, distinct from my reflections on blogging and creativity.


    In my writing, I often feel a duality, much like Dr. Jekyll and Mr. Hyde. Therefore, it feels natural to me to create two separate blogs.


    On robertbirming.com, you'll find my reflections on life. On this blog, I'll continue to share my thoughts on blogging, creativity, and related topics.


    Both blogs will live their own lives, side by side. I hope that one, or perhaps both, will resonate with you.


    Regardless, a warm welcome!

    Finding and keeping your voice

    There's one challenge most bloggers encounter: discovering their unique voice.

    The only way to discover it, really, is to just keep blogging. Hopefully, we'll find

    ...
    Full

    There's one challenge most bloggers encounter: discovering their unique voice.


    The only way to discover it, really, is to just keep blogging. Hopefully, we'll find it eventually.


    However, maintaining that voice can be just as difficult.


    It's easy to be blinded and lose yourself in the glow of upvote buttons and statistics. Some posts gain more traction, and we're tempted to continue down the same path.


    We might convince ourselves, "This is my true voice." But is that genuine? Or is it merely the false advertising of a traffic-hungry ego?


    It's easy to deceive yourself, but our readers are not as easily fooled. They'll expose our deception faster than it takes to learn how to format text in Markdown.


    Once you've found your voice, cherish it. Nurture it like the precious treasure it is.


    Your voice is unique. Let it remain so.

    Recognizing the value around us

    "I don't have anything of value."

    That's a common response from customers when I give tips about fire and burglary protection. One of today's customers

    ...
    Full

    "I don't have anything of value."


    That's a common response from customers when I give tips about fire and burglary protection. One of today's customers said the same thing. But this time, a quick addition came from his girlfriend:


    "You have me."


    Touché!


    Even though it was a joke, it reminds us of the importance of what we often take for granted: family, a roof over our heads, food on the table, friends, a stable income, good health, the ability to pursue our interests...


    Often, we only realize the true value of these "givens"—which aren't really givens at all—when we lose them.


    We don't need to wait until that day comes. We need to remind ourselves of it in our everyday lives.


    The value of everyday joy. The value of everyday connections.


    Let's make a conscious effort to cherish these moments, big and small.

    Never Stop Dreaming

    I recently listened to an interview with the Swedish author Lena Andersson.

    She shared a story about her father's skepticism towards her dreams of becoming

    ...
    Full

    I recently listened to an interview with the Swedish author Lena Andersson.


    She shared a story about her father's skepticism towards her dreams of becoming a writer. He believed that one shouldn't pursue something unrealistic, as dreams can lead to disappointment.


    And, as Lena herself points out, there is some truth to what he said. Most dreams fall apart; that's just how reality is.


    But does that mean we should stop dreaming? Absolutely not!


    It is not the broken dream itself that disappoints us. It is our attitude towards the dream that matters most.


    If we think that achieving success is essential for our happiness and overall life satisfaction, then we will inevitably be disappointed if it doesn't happen.


    However, if we are comfortable with the possibility that things might not work out, then failure isn't the end of the world. We will learn from the experience and move forward, which in itself is a success, not a failure.


    A dream pursued with an open mind, accepting of any outcome, becomes a journey of learning. It is that journey, not the destination, that truly enriches our lives.


    In short: never stop dreaming! Let the pursuit of your dreams be a testament to your resilience and your commitment to growth.

    Writing Without a Filter

    I listened to an interview with the Swedish author Lydia Sandgren.

    She begins by saying that she threw away all her teenage diaries because they

    ...
    Full

    I listened to an interview with the Swedish author Lydia Sandgren.


    She begins by saying that she threw away all her teenage diaries because they were so awful. "The worst thing was that they were so inauthentic, as if there was a filter over them," she says.


    "Congratulations," I say.


    I wish that feeling only applied to my teenage years, but that inauthentic filter still haunts me today.


    It's as if there's always an underlying vague motive behind my writing, an unspoken and undefinable goal: to be discovered, to be read, to be loved? I don't know, but it feels like every word and sentence is written in the glare of that constant observer – even as I write this.


    Is it a language thing, me writing in a language that's not my mother tongue? Maybe. But I don't think so.


    All I know is that it feels unclean, and I wish it could be washed away. This feeling, this sense of impurity, lingers like a stain.

    Fabruary 2025

    February is over. Fabruary is here.

    I'm curious about the blog posts that resonated with you last month. Which was your favorite among all

    ...
    Full

    February is over. Fabruary is here.


    I'm curious about the blog posts that resonated with you last month. Which was your favorite among all the blog posts you read? What appealed to you about the text?


    Please get in touch and let me know, and I'll add you to the list below. You're welcome to include a link to your own blog as well.


    I look forward to reading your comments.




    It feels weird that people are reading


    I love this post! It captures so much of what blogging is about. Even if our main purpose isn't necessarily to be read, it sure feels nice when some do, and it also feels surreal.


    Robert Birming




    What were your first seven jobs?


    Lou Plummer, like no other escribitionist I know, has a stunning way of capturing slices of his life. Through a wide variety of topics, I learn about challenges he has overcome, lessons he has learned, and things he has come to treasure in life. It's like listening to my wise father-in-law, if we shared the same cultural and political sensitivities. This post is a great example of the wisdom that only comes from a person who has been through it. Reading it, and the rest of his website, is helping me reframe some of my own experiences.


    Zinzy's blog and reply to Lou Plummer




    A good day for sandwiches


    I enjoyed reading because it went from everyday things to reflections on family and relationships. There’s a balance of humor, exhaustion, and tenderness that makes it deeply human and relatable.


    Pedro




    Choosing my pace by shaping my thinking spaces (Part 5)


    With the following passage, Tracy highlights the stakes of media consumption:



    Controlling the pace of media becomes a tool of power, with political ramifications. If we’re busy watching, we’re not acting. If we’re stuck listening, we’re not thinking. If we’re not sure what’s happening, we’ll wait to gather more information. If we’re constantly playing catch up, we’re always in reactive mode, never proactive.



    This systemic strategy dovetails with ideas of perfectionism; we must have complete information before we can act. We can't do something unless we do it right. "Be an informed consumer."


    On one hand, neoliberalism demands that we "do more with less," but there's a corollary: we have more "news," so we "do less when there's more." And Tracy articulates this so well in her meditation on recognizing the gluttonous buffet that is available for us to consume.


    Jeremy Friesen




    This page is under construction


    I love Sophie's heartfelt call to action in this one: please build your own website. Yes! We need more cozy, personal homes on the world wide web. Her post is both inspiring and practical, packed with interesting links and resources to help you get started crafting.


    Sven Dahlstrand




    The hardest working font in Manhattan


    Appropriately published on Valentine's Day, this is a love letter to a font with an incredible history. Wichary writes with such devotion and care; each point dutifully illustrated by some gorgeous photography.


    Thomas Rigby




    I tried my best


    This was the post that moved me the most. The one thing that scares me in the world, not being able to solve a problem for my daughter. That’s it. It made me cry, it came back to haunt me (multiple times), and I know there are no words that can make it better, let alone go away. I’m so so sorry.


    maique



    Nostalgia vs. Reality

    My job isn't quite what it used to be.

    It's easy to think things were better in the past, and in my case, they actually

    ...
    Full

    My job isn't quite what it used to be.


    It's easy to think things were better in the past, and in my case, they actually were.


    But is it bad now? That's the most important question.


    'Things were better in the past' is a relative statement, often implying a narrow-minded and negative viewpoint. Furthermore, that imagined 'past' is usually heavily sugarcoated. Or, perhaps we even had it a bit 'too good' back then.


    Of course, we should strive for improvements in all respects and not accept just anything. We learn from the past and strive for a better future.


    As for my job, it's still the best job I've ever had, even if it was "better" in the past.

    Behind Our Blog

    The blog is, in many ways, an extension of our personality, perhaps even an amplification.

    It's much more than just a tool or a hobby.

    ...
    Full

    The blog is, in many ways, an extension of our personality, perhaps even an amplification.


    It's much more than just a tool or a hobby. The blog allows us to express our thoughts and ideas in a concentrated and unfiltered way, filling in the gaps where speech fails us.


    Even though our blog posts are all different, we have one thing in common: we do it because it matters to us, even if no one else reads it.


    It doesn't matter that it might cost us a little without generating a single penny – it's still a win! Our blog is invaluable.


    In the '80s, the Swedish band Docent Död released their song "Solglasögon" (sunglasses). Freely translated into English, with "sunglasses" replaced by "blog," it would sound something like this:


    Behind my blog, I can be myself

    Everything becomes so beautiful through my blog

    Everything feels so real behind my blog


    Our blog is a space where we can be truly ourselves, and that, in itself, is a win.

    Feeling Non-Adult

    I'm soon turning 54, but I don't feel like an adult.

    It's not that I feel like a child. I just feel "non-adult".

    When do

    ...
    Full

    I'm soon turning 54, but I don't feel like an adult.


    It's not that I feel like a child. I just feel "non-adult".


    When do you become an adult?


    I don't have any children myself, is that why I don't see myself as an adult? Is it because I still have parents alive and in a way am someone's child?


    Or is it the interest in new trends and new technology that keeps the child's mind alive? The curiosity to explore new things and places? The joy of discovering new music and movies?


    I don't know. All I know is that I'm 53 and feel like a non-adult.


    Do you have to become an adult in the "adult" way?


    I hope not.


    🗣️ Community echoes

Bear Blog Most Recent Posts

Most recent posts on Bear Blog

(20)

    Staying in reality

    Today I hardly checked my phone.

    I was present. It was nice.

    ...
    Full

    Today I hardly checked my phone.


    I was present. It was nice.

    Newgrounds Virtual World Idea

    So I had this idea for a Newgrounds virtual world game in the style of Club Penguin. Anyone got any ideas?

    ...
    Full

    So I had this idea for a Newgrounds virtual world game in the style of Club Penguin. Anyone got any ideas?

    confusing an LLM

    Today I stumbled onto a surefire way to thoroughly scramble an advanced LLM's cognitive faculties. I'm sure others have discovered this as well, but doing

    ...
    Full

    Today I stumbled onto a surefire way to thoroughly scramble an advanced LLM's cognitive faculties. I'm sure others have discovered this as well, but doing so oneself always provides the best teaching moments.


    Anyhow, it was this: I started a new conversation with ChatGPT-4o, as I'd had a breakthrough with it in a previous one started just a couple of weeks ago. I used the same cognitive scaffolding and framework I've come to rely on for a while now, but in this conversation, after a dozen or so prompts, I decided for some random, unclear reason to switch the model to o1 (the reasoning model). It was fine for a bit, but its attention started wandering and it could no longer stick to the canonical framework established at the start of the convo. As has happened in the past, trying to get it to correct itself only resulted in it getting mired deeper in its error.


    Finally, ignoring the breakdown also went nowhere. Oh well; I guess I should determine a model right at the start, and stick with it for the duration. Seems like such a simple premise, but one that can easily derail things if for some reason the fancy strikes you to switch gears midstream.



    [ While on my afternoon constitutional, passed by The Internet Archive on Clement and Park Presidio... ]


    ![TIA](https://bear-images.sfo2.cdn.digitaloceanspaces.com/lnebres/tia.webp)

    김주영

    body { max-width: 480px; word-break: keep-all; } h1 { display: none; } img { box-shadow: none; max-width: 400px; display: block; } @media (max-width: 600px)

    ...
    Full

    body {
      max-width: 480px;
      word-break: keep-all;
    }

    h1 {
      display: none;
    }

    img {
      box-shadow: none;
      max-width: 400px;
      display: block;
    }

    @media (max-width: 600px) {
      body {
        padding: 0 1rem; /* Increased from 0.75rem */
        margin-top: 2em; /* Optional: adjust vertical margins if needed */
        margin-bottom: 2em;
        font-size: 16px;
      }
      img {
        max-width: 350px;
      }
    }

    김주영


    33.316577 (포스터) – 가격: 12,000원


    33.316577




    33.438441 (엽서) – 가격: 3,000원


    33.438441




    Greens of Udo (엽서) – 가격: 2,000원


    Greens of Udo




    Welcome (핀뱃지) – 가격: 1,000원


    Welcome




    Back Keyring (키링) – 가격: 3,000원


    Back Keyring




    A type (책갈피) – 가격: 1,000원


    A type (책갈피)




    B type (책갈피) – 가격: 2,000원


    B type (책갈피)




    JEJU BOOK FAIL!

    Morning Chug

    The past few mornings, the first thing I do when I wake up is chug my water bottle until it's empty—24 ounces. It’s amazing how

    ...
    Full

    The past few mornings, the first thing I do when I wake up is chug my water bottle until it's empty—24 ounces. It’s amazing how quickly it wakes me up. I’m trying really hard to stay hydrated, but wow, it’s tough! Right now, I average about three of my water bottles a day. I’m supposed to drink six. I already feel like I’m going to the bathroom a hundred billion times a day!


    I’ve never been good at drinking enough water, so besides all the bathroom trips, I’m curious to see how I’ll feel once I finally hit my hydration goals.


    Every time I go to the doctor, my liver numbers show that I need to drink more water. I guess that means I’ve been dehydrated my entire life, which is kind of a crazy thought. But it’ll get better! I’ll let you know how much water I manage to drink next week.


    Also, sorry for not doing much origami lately. I'll get back on that horse soon enough because I love that horse. I've been distracted with my vlog.


    Have a great night!

    crave sourdough (clinton hill)

    Of late I've been having an extremely difficult time writing. Can you tell? I won't lie to you by saying that I've tried everything because

    ...
    Full

    Of late I've been having an extremely difficult time writing. Can you tell? I won't lie to you by saying that I've tried everything because I haven't, and I'm not even sure I can say I've tried hard with a straight face. There's a part of me that doesn't want to brute force my way through this writer's block. How could you possibly enjoy reading something I didn't enjoy writing?


    IMG_3680


    I guess the first culprit I'll point to is work, which has been busy. It is of course nice to be useful and in motion for a change, though with that work comes the stress of wanting (and needing) to do things well. The job isn't hard per se, but I have to pay attention to very small details, like line spacing and formatting and fonts, and that sustained level of focus leaves my brain shriveled and weak at the end of the day.


    The other reason is that I have been more aware of who reads or might read what I write and I am starting to feel more self-conscious over what I say here. This is what I feared when I first started sharing my blog with friends. Even though I have no regrets — there's no way I would have been able to keep from talking about this, anyway — I do miss how unfettered I felt when I could write about whatever and whoever I wanted however I wanted. All my life I've been the type to shoot from the hip and deal with the fallout later. This has its benefits, but is not a sustainable model for keeping friends. Primum non nocere. If it were not for this, I probably would have written about what's bothering me this week. (The obvious solution is to journal about it but I simply cannot bring myself to write something if nobody else is going to read it.)


    In an attempt to jolt myself out of this torpor I skipped my workout and went out exploring instead. I can't believe I used to do this almost every day in 2023; just getting myself to agree to break out of my routine and go somewhere new took almost all my willpower. Going out on a weeknight does have one benefit, though: the bar feels lower. If I go out on a weekend and have fun, that's to be expected, no? After all, it's the weekend. But on a weekday I normally would just be sitting at home or at the gym anyway, so any enjoyment I can get out of a post-work ramble is icing on the cake. Worst case I get to try again over the weekend. If you have a bad weekend you have to wait another week for it to come back around.


    For these rambles I only set one real rule for myself: no music. I don't even bring my headphones with me. I do this for a few reasons: so I am more alert for my own safety, so I can notice more things (how does a neighborhood sound? can I hear birds? water? trains?), and so I can hear my own thoughts. It's working out pretty well and I'd like to make a habit of it. Maybe one day I'll work up the courage to ditch Spotify entirely and make this much easier for myself.


    Other than that I pick somewhere to start my journey, bookmark maybe one or two spots of interest, and then let myself follow my instincts. The goal is to explore, to get lost. I fight my desire to plan.


    Today the starting point was Fan Fan Doughnuts, which has been on my bucket list a long time. It's wonderful! I highly recommend. I was recommended their best seller, the guava and cheese donut, which I scarfed down messily on the pink bench outside.


    I picked that as my starting spot because I haven't been to that neighborhood much before, which Google tells me is called Clinton Hill. The closest I've gotten is Fort Greene. Clinton Hill is a gorgeous neighborhood that looks a lot like its neighbors to the southwest, Park Slope and Prospect Heights. Proud brownstones line the streets. When you look at them individually, it's easy to pick out the differences between them; look down the street, and they feel so satisfyingly cohesive standing tall in orderly rows. Seeing brownstones always makes me daydream about living in one someday. I imagine standing at the top of the stoop of my apartment capped with verdigris patina, welcoming friends carrying charcuterie and grapes to the dinner party. Never mind that I've never hosted a party, dinner or otherwise. Never mind that I can't even afford a room of my own.




    Pratt sits at the center of Clinton Hill. Visitors can enter through the main entrance. Campus is small (a few blocks?) but pretty and covered in trees and the signs of spring. People-watching from a bench and eating my focaccia sandwich stuffed with sabich from Crave Sourdough made me feel very, very old. I thought about my college experience and how different it was from what I imagined it would be, and thought about all the things I would have done differently. My sandwich was long gone by the time I'd finished.




    The evening would have been a good one even if I hadn't had the best bread of my life at Crave (linked earlier). (I love bread. I'll follow you down a dark alley for a good boule or baguette.) It's open Thursdays 9 to 9 and Fridays 9 to 5. Get the seeded sourdough and their homemade mayo.





    show me good bread and i will follow...





    Egg update: there are still no eggs at the supermarket. Okay, there is one (1) box, and it's a half dozen for tree-fiddy. But how is there only one? Why does it have its own sign? And who's going to buy it?

    New Phone


    I told you that my cell phone broke last week. It didn't break, it drowned. It drowned in the pee my son made when he slept next to me without going to the bathroom first. But the phone that drowned and never came back wasn't a new one. It was already old, tired, used, with a cracked screen, and that's because I had already replaced its screen twice. It was far from being 100%, and that's okay.


    I'm one of those people who doesn't see any value in a cell phone, and I only have one because of the social and professional pressure to have one so I can talk to people, especially my girlfriend and my son's mother. Yes, I also talk to my mother from time to time, but that's rare. My mother doesn't have much to talk about, the same old routine – I don't blame her, routine is good too – and I think she talks a lot more to my siblings, especially my brother (I have two siblings, a brother and a sister). My brother is really stupid and annoying, but my mother loves him, lol. I even think she loves us too, but she loves him so much and uses any opportunity to make it clearer to everybody, lol. I used to get annoyed by this when I was young. Today I think it is her right to love whoever she wants more, and as life has gone on, with her attitudes toward me and him and the difference in treatment and judgment, I have learned to live with this much better and to understand that I don't have to interfere in it. It has hurt me, and it still hurts my sister, but I try to stay out of this discussion.


    Siblings


    The funniest thing is that my brother knows and has noticed this all his life, but he has always been incapable of having any kind of conversation about it with us, as if he enjoyed some kind of immunity with her, and as if talking about it, about his advantage over us, would cause something and that situation would change just by being spoken of. Oh, my brother is very stupid. I don't try to understand him, nor do I want to. I don't hang out with him or get close to him much, I don't socialize with him much, and I don't want to. He's just uninteresting. He's average. He's boring. He's unintelligent. I spend so much time away from him that the last time I tried to spend a little more time with him, to see if there was any affinity, I realized that there wasn't. He's like a Pokémon in reverse: he never evolves. I don't have any intimacy with him and I don't want to. I prefer to stay as far away and as intimacy-free as possible, but even with all that, I always want everything to be friendly. I never seek confrontation or conflict. I'm too far past that time. I'm out of it, I don't want it. His children were born and I saw very little of them. I only saw his daughter when she was about 5 years old. Then they grew up, I didn't seek contact, and I don't know much about them.


    During our life and coexistence, there were few moments when we were together. Those beautiful stories of siblings helping each other, liking each other and growing up together, wow, I never had that at home. One of those moments was in my early teens, when I discovered basketball, which led me to discover urban music and changed my life ever since, but he wasn't very present even then. He participated – *perhaps even involuntarily* – in that moment, and then I went on to explore it alone and/or with friends I made along the way.


    We rarely crossed paths. I entered the artistic field, he entered the military. I left home and started a family early; he continued to live with our mother until he was 30+ years old – he is 7 years older than me. I had a child early and he was much older when he did. We never had much in common in tastes, soccer teams or politics. He, who didn't like or understand politics, got involved in it late, and right away with one of the worst characters in our politics in decades, Jair Bolsonaro. Of course, this phenomenon was not by chance. Since he didn't understand anything about politics – and was even somewhat proud to say so – it wasn't for nothing that he fell into the clutches of fascism's easy discourse. It's a shallow, idiotic discourse, made for people who really don't like and don't want to understand politics but are at the same time "indignant" with the politics that "is there" and want change. Haha. Just look at the change these ignorant people want. Help!


    On the few occasions we spoke via a mobile messaging app, it was because I needed a car to register on the Uber app so I could work as a driver. He wasn't even able to do that for me. After that, even though I had his number and he had mine, we never spoke again. Last year he sent a forwarded message inviting me to his 50th birthday party, but honestly, I had nothing to do there.


    Smart Shit


    But as I said, it's like that: I know what he's like, what he thinks, but I don't send him a single message about politics or anything. I don't seek contradiction, I don't want to know what he thinks, and that's okay. I respect his limitations. And I use my cell phone with very few people, even on the omnipresent WhatsApp. I don't like to join groups on the app and I talk to very few people (basically my girlfriend and the mother of my child). I had a new friend who snorted cocaine and would pester me until I stopped responding. He wanted everything immediately: to be answered, to be heard, to be validated... oh, I'm tired. I have no patience for ego and cocaine-addict talk. I get along fine with people, without prejudice, but he started with megalomania and annoyance, so I left quickly. I don't have time for that. I had another friend who talked to me every day, but every now and then he would have a fit, get upset with me, disappear, and then go back to calling me every day. But then he stopped again. And all because I contradicted him. He asked for my opinion on a professional matter, I gave it, I was honest, and it seems I hurt him. I say "it seems" because he was all excited, talking to me every day, and since then he has disappeared and hasn't sent me any more messages. I'm sorry. First he asks for my opinion, he asks me to be honest, and then when I give my opinion he says "I didn't understand" and never messages me again? Haha. Come on, man. That's not going to work. I try to be an adult and deal with life in an adult way, but some people have a lot of difficulty with that. Okay, I don't judge, but I also can't pretend that I'm going to change to fit into their game. Sorry, I won't. I've done that before, when I was younger, but life teaches those who insist and learn from it.


    So, since this friend doesn't send messages anymore and I silently ghosted the other one, my drowned cell phone served me very well even though it was broken, dented, old and damaged. I read my pirated ebooks on it, listened to some podcasts, saved some moments in videos and photos, played Candy Crush, and everything was fine. I had already made the wise decision to uninstall Instagram, Facebook, Threads, TikTok and all that crap a few weeks before the drowning. But then my son drowned it while we were both sleeping. After the initial despair of being without a cell phone for reading my books and communicating with the – basically – two people I communicated with, I was very happy to go out on the street and no longer have this omnipresent trinket on buses, in bars, bakeries and the like. But the happiness would not last long, because my credit card was about to expire, and I remembered that I would only be able to make payments via cell phone: the bank is one that only exists digitally, with no physical branch to do anything in, and the bill I received by email did not give the option to pay by barcode, only within the app. Smart guys. Another thing that made me regret the drowning of my old phone was realizing that in order to receive my Spotify payment I would also need it to log into PayPal and do all the paperwork to receive these things.


    InstaWorld


    I hesitated, hesitated, pretended to be tired, pretended to be lazy, pretended to work, pretended to have no time, but today I had to go out and buy a new phone. My son's mother needs the phone to talk to me and for me to send her updates, photos, videos, so she helped me with some money to buy a new one. I had to go to the market to buy some things while my son was at school, so I took the opportunity to go to the store next door and bought the second cheapest phone in the store. I didn't research it on the internet; I didn't want to know what the features were, what was new, what was more advanced, nothing. All I want is to talk on the app, to order a motorcycle from time to time, to take some photos, to read some books and to play Candy Crush. I'm not going to install anything extra, I'm not going to do big updates, I'm not going to listen to Spotify, none of that. The first thing I did when I turned on my phone was to uninstall a bunch of stuff that came pre-installed, like YouTube Music, Spotify, Google Chrome, Google Search, Facebook, and Instagram. I remember a quote from one of the characters in the movie "Anora": when asked if he had Instagram, he said he was an adult. LOL. That's more or less the answer I plan to give soon, even though I have Instagram for work, which wasn't the case for him. But to what extent do we put ourselves in this position of being forced to have Instagram? My profile doesn't really bring me much work or visibility anymore, but it's still there, I have it, and I'm not abandoning it completely. It doesn't even have a photo and I rarely post stories, just to promote work, but even that has died down and I have zero desire to update it with personal stuff.


    I used to live this life of recording my day in stories, watching who was watching me, and wanting to show more and more, in a really crappy addictive circle. But I'm getting away from this lifestyle. I want to be the adult in the room. I don't want to need to do this. I want to live my life for myself and not show it all the time to people who shouldn't even be watching it. And I don't want to be the one watching other people's stories. Enough; that's why I uninstalled these apps.


    Today, for example, I went out with my son right after he woke up and we went to register him so he could receive a government aid of one minimum wage in Brazil. It's a low-value aid, but it helps a lot. Other than that, there's not much else for families of the disabled. There's no treatment, no school, nothing. That's it, and you're supposed to be satisfied. I'm not satisfied, but I'll take it. At another time, I would have recorded this moment, shown it to strangers via stories, received a couple dozen messages, responded, and wasted my time on that, always in vain. Always in an attempt at a false connection that doesn't bring anyone closer and leaves each person in their own square with fake, fake relationships. But I got tired of that relationship.


    There was also another opportunity when my son came back from school. He had an appointment with the family doctor at the clinic in the nearby city at 5 pm, the time he gets back from school. I asked the driver to drop him off at the clinic door so I would be there waiting for him. I arrived early, signed him in, and waited about twenty minutes until he arrived. As soon as he arrived, he was called in. I've never seen that place so empty; it was too fast. The last time I went, it was so full that I almost gave up after an hour and twenty minutes of waiting. This time I was apprehensive: I made a snack for my son and carried it in my backpack for fear of a delay, but it was very quick.


    A while ago I would have put this whole experience up for scrutiny by strangers who would be there automatically and without much empathy, watching my intimate life. But now I am reserving my intimacy for those who are willing to experience it with me. Of course, I can't escape the occasional post my girlfriend makes that puts us up there, and sometimes I repost it too, to please her :) And so it goes, "a little bit of drugs, a little bit of salad" (to keep the balance), as the popular saying goes.


    #instagram #android #smartphone #siblings #family #app #anora

    Pressing matters: the marks we make


    As a kid, whenever I wrote with a pen or pencil, I pressed so hard that the words left deep impressions on the next blank page. Even when I tried to write lightly, the marks remained—silent echoes of my forceful grip. I remember someone telling me, You don’t have to press that hard into the paper, you know.


    I never thought much about it until a few days ago, when I was coloring in Secret Garden, an intricate adult coloring book. As I filled in the patterns, I noticed faint grooves left on the next page. I ran my fingers over them, feeling the rough texture where the pressure had worn the paper uneven. Huh. I guess I do write hard.


    Even when coloring, I couldn’t help but press down. Some areas felt different to the touch, like the surface had been disturbed, unsettled in a way that wasn’t visible but was still there.


    Maybe I’ve always had this need to make things stick. To leave a mark, to make sure things last. Notes had to be bold and clear, lines had to be dark enough, words had to feel permanent. But paper isn’t meant to be pressed into that hard. And neither is everything else.


    I ran my fingers over the rough spots again. The page wasn’t ruined, just textured—changed by the pressure, but still able to hold color, still able to turn.


    Maybe I don’t have to press so hard. Maybe things don’t need force to stay.

    The meaning of life is 42


    5 And that he was seen of Cephas, then of the twelve:
    6 After that, he was seen of above five hundred brethren at once; of whom the greater part remain unto this present, but some are fallen asleep.
    7 After that, he was seen of James; then of all the apostles.
    8 And last of all he was seen of me also, as of one born out of due time


    These verses come directly after the gospel and are used to give PROOF/EVIDENCE (eyewitness accounts) in support of Jesus' death, burial, and resurrection.


    Now, there is a very popular meme out there (I'm sure you have heard it) about how the meaning of life is 42. You can even type "the meaning of life..." into Google and the number 42 will pop up.


    And I know a lot of you will often say that there is no evidence or proof to support the gospel, 1 Corinthians 15:1-4 (KJV): how Jesus Christ died for our sins, was buried, and rose again:


    1 Moreover, brethren, I declare unto you the gospel which I preached unto you, which also ye have received, and wherein ye stand;
    2 By which also ye are saved, if ye keep in memory what I preached unto you, unless ye have believed in vain.
    3 For I delivered unto you first of all that which I also received, how that Christ died for our sins according to the scriptures;
    4 And that he was buried, and that he rose again the third day according to the scriptures:


    OK, the most popular and common method of finding number patterns in words is to calculate the ordinal value (A=1, B=2, ... Z=26), often called gematria.


    The ordinal value of the verses I gave at the beginning of this writing, 1 Corinthians 15:5-8 (KJV), with the verse numbers included, is 2814.


    The ordinal value of the word PROOF = 70


    The ordinal value of the word EVIDENCE = 67


    2814/EVIDENCE(67) = 42


    2814/PROOF(70) = 40.2


    42 and 40.2; yeah, I know technically you would say 40 point 2, but you know what I mean.
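If you want to check the arithmetic yourself, here is a small Python sketch of the ordinal-value method described above (the function name is mine; A=1 through Z=26 is the assumption):

```python
def ordinal_value(word: str) -> int:
    """Sum each letter's position in the alphabet: A=1, B=2, ..., Z=26."""
    return sum(ord(c) - ord("A") + 1 for c in word.upper() if c.isalpha())

print(ordinal_value("PROOF"))            # 70
print(ordinal_value("EVIDENCE"))         # 67
print(2814 / ordinal_value("EVIDENCE"))  # 42.0
print(2814 / ordinal_value("PROOF"))     # 40.2
```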


    So there you have it, folks: the meaning of life is to be born again.


    Here on Earth, when you commit a crime, if the system is a righteous and just system, then you've got to pay the price, whether it's a ticket or going to jail. No matter how sorry you are, you've got to take the punishment.


    It's the same way in the next life too, but the GOOD NEWS is:


    For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.


    That means all your past, present, and future sins are paid for! God is so gracious you don't have to do anything except believe 1 Corinthians 15:1-4.

    I’ll See You Down the Line


    Robert Brimming recently wrote a succinct piece called Stop searching, start living about the concept of finding oneself. I love this:



    Are we really meant to find ourselves, as if something is lost? Where should we look, and what do we hope to find? Will we ever succeed?



    We spend more time running from ourselves on a train of novelty, distraction, and fantasy—never realizing that the ghost at the end of the tunnel was the driver, the passenger, and the crew.

    The Day


    The day will bring
    What it will.
    Even without you
    And those noble thoughts.

    The world spins
    Without remorse
    Its course
    Set without intrusion.

    Hello, World!


    With my newfound desire to express myself through word, here is my blog!


    Inside you will find vulgar and ever-changing opinions and reviews about ancient computers, modern fountain pens, media, and anything else I take a month-long interest in.


    Early posts will probably be paper and pen reviews (the latest addition to my hobbies and bane of my wallet), but will no doubt spiral into rants about data sovereignty/privacy, beginner film photography, mechanical keyboards, and old computers.


    I'm an IT professional by day (and an IT professional by night, thanks to no work-life balance), but I try my best to make time for my many hobbies. This will be a place for me to unload my thoughts about these hobbies and, in doing so, spare my lovely partner from listening to me rant endlessly (which they would happily do).


    This blog is like my digital clutter drawer: a little bit of everything and probably more than you ever wanted. Some might argue it's a mid-twenties crisis, but at least I didn't start a podcast.

    ramblings for sara


    What can I ramble on about for you?


    Books I've read this year. Lesson plans for you. How lonely I was before we met, when I was reading and writing alone. Random sampling from my notebooks. Lit videos I watch. Things I read about online.


    Even when I'm not....(I left this unwritten and forgot what I was gonna write)


    I read things online and I think it's all trash. I get angry about it. But maybe I also think, I can't even do that. That's my insecurity. But what I really think is that it would be a waste of time to write like that. All that effort put into making trash. Then even more effort in trying to share it. Even more effort agonizing over how to get more attention on all that trash.


    These literature/philosophy substacks, any form of content creation, I respect that it isn't easy. But I don't really respect that it's worth making.


    I didn't listen to any music today. I keep thinking that I should make a collection of my favorite lyrics. Come up with a way to extract their influence. And by keep thinking, I mean I've been thinking this for years and years. How many things I think about and never do. Countless.


    I think I hear some influence from Modest Mouse in my writing. I'll listen to some songs and copy out some lines here and you can tell me if you hear it. I love you. You're so pretty. Ty for reading. I'm posting Sick Lost Puppy tomorrow.


    We should write just to write. For the practice. I know this but I don't do it. I have a giant filter. Some people say it's the fear of failure. I understand that. But I think I've proven to myself a few times that I am capable of it. So I don't think it's that. I think it's just difficult to create. And my personality and habits make it even harder. My procrastination. My forgetfulness. So scatterbrained.


    I like thinking. I don't like talking.


    Typing to you feels like the middle.


    Is writing the middle? Thinking is effortless but fleeting. The pleasure exists in the moment then it's gone.


    Writing does have lasting rewards. I used to re-read myself like I re-read you now. Proud. Just feeling proud.


    Writing rabbithole diaries feels so recent. I remember feeling like it was good but there was no reaction afterwards. And really that's how it should be. Things should lie hidden for a while. Quietly building up.


    What a seismic quake you were. These poems brought you to me. I feel so grateful.


    You want to hear my thoughts. I want to hear yours. We can do that for each other. It's what makes our friendship so intoxicating. We are cool as fuck. Aren't we?

    4/3/2025


    thank god it's the weekend.


    it's been an exhausting week, and i don't want to ramble on for too long.


    my mood this week has dipped between great and absolute shit. it's a little bit tiring, but i tried the 988 hotline. it certainly did help, although i feel like i was misusing the service because i'm not exactly suicidal. i wish more services existed to help people.


    goodnight! 😇

    [25] I play Stardew Valley





    As the OG Farmville kid, I love farming games. I love doing tedious virtual work, taking care of virtual animals and making sure my virtual life flourishes. Sure, it brings back memories. There's a bigger story line in the game, but I'm here to pick berries and water my tulips. I'm glad they have a mobile version. The controls can be quite clunky but hey, I'm okay with minor inconvenience. Let's see if I'm determined enough to fix the community center.

    real gamer,

    reesa

    Find Your Alignment


    Twitter Post




    You’re not trying to escape work.


    You’re trying to escape work that doesn’t align with who you are.


    🧵


    I say this 100% as I try and figure it out myself.


    It’s messy (as momentum is) but this is a “sequence” I’m working on to find clarity and alignment.


    Businesses —> cash & systems


    Content —> process, audience, impact


    Acquisitions —> scale & autonomy


    Building a Movement —> legacy


    That might not be your exact sequence… shoot, that might be the sequence I end up with, but that’s not the point.


    The point is to keep iterating. Keep experimenting. Keep building and stacking the expertise.


    You’re not “lost,” you’re just in the middle of progress. Stay at it.


    As I figure it out, I’ll share it right here so you can skip the line.


    Sunday Doughnuts🍩

    checklists


    I had my first anxiety attack in December of 2016. It's a bit of a blur of causes and stressors that eventually boiled up to the point where I felt like I couldn't breathe and felt like the walls were literally collapsing in on me. I ended up seeing a doctor and getting prescribed an SSRI. It took a bit, but as I was coming out of the anxiety/depression haze I remember playing a lot of Final Fantasy XV at the time and going through the hunting list.


    In the summer of 2019, my cat had gotten out of the house I lived in. She was an indoor cat, and I was beside myself in anxiety. Luckily, she turned back up in just under a week. During the time she was away, I had spent my free time playing Super Smash Bros. Ultimate, getting every Spirit in the game to maximum level.


    In early July of 2024, I had a pretty big depressive breakdown. I had been diagnosed with ulcerative colitis a year earlier, and, at the time, I felt like I was getting better (This would be proven wrong by the end of the year). I had been so deep in the trenches of "My body is fighting me and I am fighting it" that when that war started to look like it was easing up, I started to look outward for my next step and just completely retreated into a shell. Then, of course, the illness came back with a vengeance and I'm back firmly in square one, which is definitely something that is putting me on the verge of another breakdown, but that's a different blog post/journal entry entirely. One of the things I did coming out of this breakdown was download MLB The Show 24 off of Game Pass and start trying to fill out the baseball card collections in that game.


    There's comfort in completing checklists. I don't have to explain that; countless psychologists with far more research and education have explained it in much better ways than I, a person with a bachelor's degree in digital media specializing in the art of realtime 3D lighting, musing on the internet, could possibly dare to.


    But I think the reason my various breakdowns drew me to these specific checklists comes down to two things: scale and importance. There are about 102 hunting quests in Final Fantasy XV, 629 Primary Spirits in Super Smash Bros. Ultimate, and God knows how many cards in MLB The Show. I find that for these comfort checklists, bigger is generally better.


    The other matter is importance. Are any of these things vital? Even ignoring how non-vital video games are: No. The only achievement related to hunts in Final Fantasy XV is for completing one. Spirits are primarily used in one mode in Super Smash Bros. and needing every single spirit maxed out doesn't do anything for you. And while you gain in-game currency for hitting certain levels of completion of the various collections in The Show, said in-game currency is only really used for a single mode. While it's the game's most popular mode, I'm normally far more of a Road to the Show person anyway, where the currency is completely irrelevant.


    Do I have a point here? Honestly, not really. Since I've been teetering on the edge of another depressive breakdown I do find that, like checklists, the act of getting things that are on the inside out helps, like some weird form of journaling, I suppose. Every so often something spends too long boiling and this acts as a bit of a pressure release valve, staving off disaster for a bit longer.


    Do I have a good ending thought up for this post, neatly summarizing everything in a nice bow? Also no. My English teachers will be disappointed. It's just noticing a pattern of past actions, thinking about why those specific actions, and writing it all down. Now, if you'll excuse me, this Assassin's Creed Shadows map has some more question marks on it. Gotta check off what's at all of those.

    Out of Egypt


    I appreciate having run out of onions as it forces me to reckon more honestly with my remaining beef liver. If I weren't being so strict with money, I would simply purchase more (even in NY they're not that expensive) but I spent my last $3 on black pepper---a very worthwhile purchase, and a food that I'm surprised to have gone this long without.


    Today's livers were borderline excellent despite being barely disguised. No marinade, no special sauce, and no onions. Just cumin seeds, garlic, lemon, and... cinnamon? It struck me as an unusual choice, too, but apparently that's a real thing, in Egypt anyway, and it was an amazingly appropriate antidote to the livers' metallic tang. These livers I could almost pop in my mouth without immediately rushing to cover my tongue in soy-soaked rice. Almost. I mostly included rice in every bite. But it wasn't as big a necessity.




    "The enemy of art is the absence of limitations," Orson Welles said wisely, and I think I am finding that the same is true of cooking.


    As an aside, I have discovered that cream-cheese filled dates are a wonderful thing. I ate seven for "lunch" today since I wasn't coming home in the afternoon. But I also got a surprise slice of free pizza at the talk I attended a minute or two after downing my dates. I could have had four or five free slices, and boy did I want to, but Lent and stuff, so one had to do. I had an unusually reduced breakfast of: a banana, so the pizza ex machina was very appreciated.

    Creating a static site backup of my Bearblog + self-hosting it on OpenBSD

    Background



    mgx created a new tool, nanuq: from bear blog to json, markdown, or a static site, and of course, I had to try it. I've been interested in mirroring my Bearblog since he first mentioned doing so in mirrored my bear blog with cloudflare workers.


    With nanuq, I was able to export a backup of my posts as a static site in seconds. My blog is back on the bearblog.dev domain now, while my backup lives at squ.eeeee.lol. I did a basic export just to see how it all looks — so it's all barebones right now. I'll play with the CSS more later to pretty things up.


    In the process, I learned how to set up a second website at a subdomain on my OpenBSD server and rewrite URLs with httpd.


    What I did


    Every step whose text is styled like code means that I typed/pasted it into PowerShell and pressed enter afterwards. When I use mg to edit a file, I save & exit by pressing ctrl + x, then ctrl + c (to exit), then y to save changes.


    Export Bearblog posts and create a static site



    1. Go to Bearblog dashboard

      1. Go to Settings

      2. Select Export all blog data (bottom link)

      3. A post_export.csv will download



    2. Go to nanuq

      1. Scroll down to static site configuration

      2. Fill in the config details:

        1. Site Title

        2. Site Domain

        3. Favicon

        4. Lang

        5. Site Meta Image

        6. Footer text

        7. Inject JS to <head> — I added my CSS stylesheet here: <link href="path/to/styles.css" type="text/css" rel="stylesheet">

        8. Site introduction

        9. Apply CSS — check the box and toss in: html { color-scheme: light dark; } (leaving empty will apply default Bearblog styles)



      3. Select Browse and attach post_export.csv from earlier

      4. Select Export Static Site

      5. A static_site.zip will download



    3. Extract the zip file & see all the .html files within

    4. Open index.html in Firefox to browse the full post archive


    Upload static site to server



    1. Log into my server with WinSCP

    2. Go to var > www > htdocs

    3. Create a new directory: squee

    4. Upload my HTML files into the new directory

    5. Visit https://eeeee.lol/squee and confirm it all works


    Set up subdomain + security certificate



    1. Set up squ.eeeee.lol subdomain

      • doas mg /etc/relayd.conf

        • Add pass request quick header "Host" value "squ.eeeee.lol" forward to <httpd> to the existing list





    2. Set up security certificate for squ.eeeee.lol subdomain

      • doas mg /etc/acme-client.conf

        • Add squ.eeeee.lol to the existing list
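
    For reference, after this edit my acme-client.conf domain block ends up looking roughly like this (a sketch from my own setup; the key paths and authority name may differ on yours):

```
domain eeeee.lol {
        alternative names { squ.eeeee.lol }
        domain key "/etc/ssl/private/eeeee.lol.key"
        domain full chain certificate "/etc/ssl/eeeee.lol.fullchain.pem"
        sign with letsencrypt
}
```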





    3. Update and apply new security certificate

      • doas su

      • domain=eeeee.lol

      • acme-client -v $domain

      • rcctl restart relayd

      • exit




    Point subdomain to directory



    1. doas mg /etc/httpd.conf

      • Append to the bottom:




    server "squ.eeeee.lol" {
        listen on 127.0.0.1 port 8080
        default type text/html
        root "/htdocs/squee"
    }


    2. doas rcctl restart httpd

    3. Visit https://squ.eeeee.lol and confirm it all works


    Rewrite & redirect rules in httpd


    So that going to /something.html, /something/, and /something all display the same page. Regular expressions, I can't. ಠ_ಠ



    1. doas mg /etc/httpd.conf

      • Add a few more lines to what we appended earlier:




    server "squ.eeeee.lol" {
        listen on 127.0.0.1 port 8080
        default type text/html
        root "/htdocs/squee"

        location match "^/$" {
            request rewrite "/index.html"
        }
        location match "/(.*).html$" {
            request rewrite "/%1.html"
        }
        location match "/(.*)/" {
            request rewrite "/%1.html"
        }
        location match "/(.*)" {
            request rewrite "/%1.html"
        }
    }



    2. doas rcctl restart httpd

    3. Visit https://squ.eeeee.lol and confirm it all works


    So that going to eeeee.lol/squee redirects to squ.eeeee.lol.



    1. doas mg /etc/httpd.conf

      • Find server "eeeee.lol" and add the redirect after the /pub/ line:




    server "eeeee.lol" {
        listen on 127.0.0.1 port 8080
        default type text/html
        location "/pub/*" {
            directory auto index
        }
        location "/squee/" {
            block return 301 "https://squ.eeeee.lol"
        }
    }


    2. doas rcctl restart httpd

    3. Visit https://eeeee.lol/squee to confirm that it redirects




    New Blog, New Rules


    I'm trying out this blog platform and want to make it a place where I err on the side of too little editing rather than too much. I used to write a lot; not so much anymore, and I want to change that. Posts here may be very short or longer reads. There may be ten a day, or zero. Expect mostly technical (largely Java) and music talk.


    No post, including this one, will reflect anything about my employer, whoever it might be at that moment, ever. All opinions and content are mine, not theirs.

Julia Evans


    Standards for ANSI escape codes


    Hello! Today I want to talk about ANSI escape codes.


    For a long time I was vaguely aware of ANSI escape codes (“that’s how you make
    text red in the terminal and stuff”) but I had no real understanding of where they were
    supposed to be defined or whether or not there were standards for them. I just
    had a kind of vague “there be dragons” feeling around them. While learning
    about the terminal this year, I’ve learned that:



    1. ANSI escape codes are responsible for a lot of usability improvements
      in the terminal (did you know there’s a way to copy to your system clipboard
      when SSHed into a remote machine?? It’s an escape code called OSC 52!)

    2. They aren’t completely standardized, and because of that they don’t always
      work reliably. And because they’re also invisible, it’s extremely
      frustrating to troubleshoot escape code issues.


    So I wanted to put together a list for myself of some standards that exist
    around escape codes, because I want to know if they have to feel unreliable
    and frustrating, or if there’s a future where we could all rely on them with
    more confidence.



    what’s an escape code?


    Have you ever pressed the left arrow key in your terminal and seen ^[[D?
    That’s an escape code! It’s called an “escape code” because the first character
    is the “escape” character, which is usually written as ESC, \x1b, \E,
    \033, or ^[.


    Escape codes are how your terminal emulator communicates various kinds of
    information (colours, mouse movement, etc) with programs running in the
    terminal. There are two kind of escape codes:



    1. input codes which your terminal emulator sends for keypresses or mouse
      movements that don’t fit into Unicode. For example “left arrow key” is
      ESC[D, “Ctrl+left arrow” might be ESC[1;5D, and clicking the mouse might
      be something like ESC[M :3.

    2. output codes which programs can print out to colour text, move the
      cursor around, clear the screen, hide the cursor, copy text to the
      clipboard, enable mouse reporting, set the window title, etc.
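
    For example, you can emit one of these output codes yourself (this little demo is mine, not from any spec):

```shell
# ESC[31m is the SGR ("Select Graphic Rendition") code for a red foreground;
# ESC[0m resets all attributes so the text that follows isn't red too.
printf '\033[31merror\033[0m\n'
```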


    Now let’s talk about standards!


    ECMA-48


    The first standard I found relating to escape codes was
    ECMA-48,
    which was originally published in 1976.


    ECMA-48 does two things:



    1. Define some general formats for escape codes (like “CSI” codes, which are
      ESC[ + something and “OSC” codes, which are ESC] + something)

    2. Define some specific escape codes, like how “move the cursor to the left” is
      ESC[D, or “turn text red” is ESC[31m. In the spec, the “cursor left”
      one is called CURSOR LEFT and the one for changing colours is called
      SELECT GRAPHIC RENDITION.


    The formats are extensible, so there’s room for others to define more escape
    codes in the future. Lots of escape codes that are popular today aren’t defined
    in ECMA-48: for example it’s pretty common for terminal applications (like vim,
    htop, or tmux) to support using the mouse, but ECMA-48 doesn’t define escape
    codes for the mouse.


    xterm control sequences


    There are a bunch of escape codes that aren’t defined in ECMA-48, for example:



    • enabling mouse reporting (where did you click in your terminal?)

    • bracketed paste (did you paste that text or type it in?)

    • OSC 52 (which terminal applications can use to copy text to your system clipboard)


    I believe (correct me if I’m wrong!) that these and some others came from
    xterm, are documented in XTerm Control Sequences, and have
    been widely implemented by other terminal emulators.
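
    As a concrete example, here's roughly what using OSC 52 looks like (my sketch; the payload is base64-encoded, and your terminal emulator has to actually support the code for anything to land on the clipboard):

```shell
# copy "hello" to the system clipboard via OSC 52: ESC ] 52 ; c ; <base64> BEL
payload=$(printf 'hello' | base64)
printf '\033]52;c;%s\a' "$payload"
```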


    This list of “what xterm supports” is not a standard exactly, but xterm is
    extremely influential and so it seems like an important document.


    terminfo


    In the 80s (and to some extent today, but my understanding is that it was MUCH
    more dramatic in the 80s) there was a huge amount of variation in what escape
    codes terminals actually supported.


    To deal with this, there’s a database of escape codes for various terminals
    called “terminfo”.


    It looks like the standard for terminfo is called X/Open Curses, though you need to create
    an account to view that standard for some reason. It defines the database format as well
    as a C library interface (“curses”) for accessing the database.


    For example you can run this bash snippet to see every possible escape code for
    “clear screen” for all of the different terminals your system knows about:


    for term in $(toe -a | awk '{print $1}')
    do
        echo $term
        infocmp -1 -T "$term" 2>/dev/null | grep 'clear=' | sed 's/clear=//g;s/,//g'
    done

    On my system (and probably every system I’ve ever used?), the terminfo database is managed by ncurses.


    should programs use terminfo?


    I think it’s interesting that there are two main approaches that applications
    take to handling ANSI escape codes:



    1. Use the terminfo database to figure out which escape codes to use, depending
      on what’s in the TERM environment variable. Fish does this, for example.

    2. Identify a “single common set” of escape codes which works in “enough”
      terminal emulators and just hardcode those.


    Some examples of programs/libraries that take approach #2 (“don’t use terminfo”) include:



    I got curious about why folks might be moving away from terminfo and I found
    this very interesting and extremely detailed
    rant about terminfo from one of the fish maintainers, which argues that:



    [the terminfo authors] have done a lot of work that, at the time, was
    extremely important and helpful. My point is that it no longer is.



    I’m not going to do it justice so I’m not going to summarize it, I think it’s
    worth reading.


    is there a “single common set” of escape codes?


    I was just talking about the idea that you can use a “common set” of escape
    codes that will work for most people. But what is that set? Is there any agreement?


    I really do not know the answer to this at all, but from doing some reading it
    seems like it’s some combination of:



    • The codes that the VT100 supported (though some aren’t relevant on modern terminals)

    • what’s in ECMA-48 (which I think also has some things that are no longer relevant)

    • What xterm supports (though I’d guess that not everything in there is actually widely supported enough)


    and maybe ultimately “identify the terminal emulators you think your users are
    going to use most frequently and test in those”, the same way web developers do
    when deciding which CSS features are okay to use.


    I don’t think there are any resources like Can I use…? or
    Baseline for the terminal
    though. (in theory terminfo is supposed to be the “caniuse” for the terminal
    but it seems like it often takes 10+ years to add new terminal features when
    people invent them which makes it very limited)


    some reasons to use terminfo


    I also asked on Mastodon why people found terminfo valuable in 2025 and got a
    few reasons that made sense to me:



    • some people expect to be able to use the TERM environment variable to
      control how programs behave (for example with TERM=dumb), and there’s
      no standard for how that should work in a post-terminfo world

    • even though there’s less variation between terminal emulators than
      there was in the 80s, there’s far from zero variation: there are graphical
      terminals, the Linux framebuffer console, the situation you’re in when
      connecting to a server via its serial console, Emacs shell mode, and probably
      more that I’m missing

    • there is no one standard for what the “single common set” of escape codes
      is, and sometimes programs use escape codes which aren’t actually widely
      supported enough


    terminfo & user agent detection


    The way that ncurses uses the TERM environment variable to decide which
    escape codes to use reminds me of how webservers used to sometimes use the
    browser user agent to decide which version of a website to serve.


    It also seems like it’s had some of the same results – the way iTerm2 reports
    itself as being “xterm-256color” feels similar to how Safari’s user agent is
    “Mozilla/5.0 (Macintosh; Intel Mac OS X 14_7_4) AppleWebKit/605.1.15 (KHTML,
    like Gecko) Version/18.3 Safari/605.1.15”. In both cases the terminal emulator
    / browser ends up changing its user agent to get around user agent detection
    that isn’t working well.


    On the web we ended up deciding that user agent detection was not a good
    practice and to instead focus on standardization so we can serve the same
    HTML/CSS to all browsers. I don’t know if the same approach is the future in
    the terminal though – I think the terminal landscape today is much more
    fragmented than the web ever was as well as being much less well funded.


    some more documents/standards


    A few more documents and standards related to escape codes, in no particular order:



    why I think this is interesting


    I sometimes see people saying that the unix terminal is “outdated”, and since I
    love the terminal so much I’m always curious about what incremental changes
    might make it feel less “outdated”.


    Maybe if we had a clearer standards landscape (like we do on the web!) it would
    be easier for terminal emulator developers to build new features and for
    authors of terminal applications to more confidently adopt those features so
    that we can all benefit from them and have a richer experience in the terminal.


    Obviously standardizing ANSI escape codes is not easy (ECMA-48 was first
    published almost 50 years ago and we’re still not there!). I don’t even know
    what all of the challenges are. But the situation with HTML/CSS/JS used to be
    extremely bad too and now it’s MUCH better, so maybe there’s hope.

    How to add a directory to your PATH


    I was talking to a friend about how to add a directory to your PATH today. It’s
    something that feels “obvious” to me since I’ve been using the terminal for a
    long time, but when I searched for instructions for how to do it, I actually
    couldn’t find something that explained all of the steps – a lot of them just
    said “add this to ~/.bashrc”, but what if you’re not using bash? What if your
    bash config is actually in a different file? And how are you supposed to figure
    out which directory to add anyway?


    So I wanted to try to write down some more complete directions and mention some
    of the gotchas I’ve run into over the years.


    Here’s a table of contents:



    step 1: what shell are you using?


    If you’re not sure what shell you’re using, here’s a way to find out. Run this:


    ps -p $$ -o pid,comm=
    


    • if you’re using bash, it’ll print out 97295 bash

    • if you’re using zsh, it’ll print out 97295 zsh

    • if you’re using fish, it’ll print out an error like “In fish, please use
      $fish_pid” ($$ isn’t valid syntax in fish, but in any case the error
      message tells you that you’re using fish, which you probably already knew)


    Also bash is the default on Linux and zsh is the default on Mac OS (as of
    2024). I’ll only cover bash, zsh, and fish in these directions.


    step 2: find your shell’s config file



    • in zsh, it’s probably ~/.zshrc

    • in bash, it might be ~/.bashrc, but it’s complicated, see the note in the next section

    • in fish, it’s probably ~/.config/fish/config.fish (you can run echo $__fish_config_dir if you want to be 100% sure)


    a note on bash’s config file


    Bash has three possible config files: ~/.bashrc, ~/.bash_profile, and ~/.profile.


    If you’re not sure which one your system is set up to use, I’d recommend
    testing this way:



    1. add echo hi there to your ~/.bashrc

    2. Restart your terminal

    3. If you see “hi there”, that means ~/.bashrc is being used! Hooray!

    4. Otherwise remove it and try the same thing with ~/.bash_profile

    5. You can also try ~/.profile if the first two options don’t work.


    (there are a lot of elaborate flow charts out there that explain how bash
    decides which config file to use but IMO it’s not worth it to internalize them
    and just testing is the fastest way to be sure)


    step 3: figure out which directory to add


    Let’s say that you’re trying to install and run a program called http-server
    and it doesn’t work, like this:


    $ npm install -g http-server
    $ http-server
    bash: http-server: command not found

    How do you find what directory http-server is in? Honestly in general this is
    not that easy – often the answer is something like “it depends on how npm is
    configured”. A few ideas:



    • Often when setting up a new installer (like cargo, npm, homebrew, etc),
      when you first set it up it’ll print out some directions about how to update
      your PATH. So if you’re paying attention you can get the directions then.

    • Sometimes installers will automatically update your shell’s config file
      to update your PATH for you

    • Sometimes just Googling “where does npm install things?” will turn up the
      answer

    • Some tools have a subcommand that tells you where they’re configured to
      install things, like:

      • Node/npm: npm config get prefix (then append /bin/)

      • Go: go env GOPATH (then append /bin/)

      • asdf: asdf info | grep ASDF_DIR (then append /bin/ and /shims/)




    step 3.1: double check it’s the right directory


    Once you’ve found a directory you think might be the right one, make sure it’s
    actually correct! For example, I found out that on my machine, http-server is
    in ~/.npm-global/bin. I can make sure that it’s the right directory by trying to
    run the program http-server in that directory like this:


    $ ~/.npm-global/bin/http-server
    Starting up http-server, serving ./public

    It worked! Now that you know what directory you need to add to your PATH,
    let’s move to the next step!


    step 4: edit your shell config


    Now we have the 2 critical pieces of information we need:



    1. Which directory you’re trying to add to your PATH (like ~/.npm-global/bin/)

    2. Where your shell’s config is (like ~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish)


    Now what you need to add depends on your shell:


    bash instructions:


    Open your shell’s config file, and add a line like this:


    export PATH=$PATH:~/.npm-global/bin/
    

    (obviously replace ~/.npm-global/bin with the actual directory you’re trying to add)


    zsh instructions:


    You can do the same thing as in bash, but zsh also has some slightly fancier
    syntax you can use if you prefer:


    path=(
        $path
        ~/.npm-global/bin
    )

    fish instructions:


    In fish, the syntax is different:


    set PATH $PATH ~/.npm-global/bin
    

    (in fish you can also use fish_add_path, some notes on that further down)


    step 5: restart your shell


    Now, an extremely important step: updating your shell’s config won’t take
    effect if you don’t restart it!


    Two ways to do this:



    1. open a new terminal (or terminal tab), and maybe close the old one so you don’t get confused

    2. Run bash to start a new shell (or zsh if you’re using zsh, or fish if you’re using fish)


    I’ve found that both of these usually work fine.


    And you should be done! Try running the program you were trying to run and
    hopefully it works now.


    If not, here are a couple of problems that you might run into:


    problem 1: it ran the wrong program


    If the wrong version of a program is running, you might need to add the
    directory to the beginning of your PATH instead of the end.


    For example, on my system I have two versions of python3 installed, which I
    can see by running which -a:


    $ which -a python3
    /usr/bin/python3
    /opt/homebrew/bin/python3

    The one your shell will use is the first one listed.


    If you want to use the Homebrew version, you need to add that directory
    (/opt/homebrew/bin) to the beginning of your PATH instead, by putting this in
    your shell’s config file (it’s /opt/homebrew/bin/:$PATH instead of the usual $PATH:/opt/homebrew/bin/)


    export PATH=/opt/homebrew/bin/:$PATH
    

    or in fish:


    set PATH /opt/homebrew/bin $PATH
    

    problem 2: the program isn’t being run from your shell


    All of these directions only work if you’re running the program from your
    shell
    . If you’re running the program from an IDE, from a GUI, in a cron job,
    or some other way, you’ll need to add the directory to your PATH in a different
    way, and the exact details might depend on the situation.


    in a cron job


    Some options:



    • use the full path to the program you’re running, like /home/bork/bin/my-program

    • put the full PATH you want as the first line of your crontab (something like
      PATH=/bin:/usr/bin:/usr/local/bin:….). You can get the full PATH you’re
      using in your shell by running echo "PATH=$PATH".


    I’m honestly not sure how to handle it in an IDE/GUI because I haven’t run into
    that in a long time, will add directions here if someone points me in the right
    direction.


    problem 3: duplicate PATH entries making it harder to debug


    If you edit your path and start a new shell by running bash (or zsh, or
    fish), you’ll often end up with duplicate PATH entries, because the shell
    keeps adding new things to your PATH every time you start your shell.


    Personally I don’t think I’ve run into a situation where this kind of
    duplication breaks anything, but the duplicates can make it harder to debug
    what’s going on with your PATH if you’re trying to understand its contents.


    Some ways you could deal with this:



    1. If you’re debugging your PATH, open a new terminal to do it in so you get
      a “fresh” state. This should avoid the duplication.

    2. Deduplicate your PATH at the end of your shell’s config (for example in
      zsh apparently you can do this with typeset -U path)

    3. Check that the directory isn’t already in your PATH when adding it (for
      example in fish I believe you can do this with fish_add_path --path /some/directory)


    How to deduplicate your PATH is shell-specific and there isn’t always a
    built in way to do it so you’ll need to look up how to accomplish it in your
    shell.
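
    For example, in bash or zsh you could filter the PATH string through awk (a sketch of my own, not a built-in; dedup_path_str is a made-up name):

```shell
# print $1 (a colon-separated PATH string) with duplicate entries removed,
# keeping the first occurrence of each directory
dedup_path_str() {
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
# usage: PATH=$(dedup_path_str "$PATH")
```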


    problem 4: losing your history after updating your PATH


    Here’s a situation that’s easy to get into in bash or zsh:



    1. Run a command (it fails)

    2. Update your PATH

    3. Run bash to reload your config

    4. Press the up arrow a couple of times to rerun the failed command (or open a new terminal)

    5. The failed command isn’t in your history! Why not?


    This happens because in bash, by default, history is not saved until you exit
    the shell.


    Some options for fixing this:



    • Instead of running bash to reload your config, run source ~/.bashrc (or
      source ~/.zshrc in zsh). This will reload the config inside your current
      session.

    • Configure your shell to continuously save your history instead of only saving
      the history when the shell exits. (How to do this depends on whether you’re
      using bash or zsh, the history options in zsh are a bit complicated and I’m
      not exactly sure what the best way is)


    a note on source


    When you install cargo (Rust’s installer) for the first time, it gives you
    these instructions for how to set up your PATH, which don’t mention a specific
    directory at all.


    This is usually done by running one of the following (note the leading DOT):

    . "$HOME/.cargo/env" # For sh/bash/zsh/ash/dash/pdksh
    source "$HOME/.cargo/env.fish" # For fish


    The idea is that you add that line to your shell’s config, and their script
    automatically sets up your PATH (and potentially other things) for you.


    This is pretty common (for example Homebrew suggests you eval brew shellenv), and there are
    two ways to approach this:



    1. Just do what the tool suggests (like adding . "$HOME/.cargo/env" to your shell’s config)

    2. Figure out which directories the script they’re telling you to run would add
      to your PATH, and then add those manually. Here’s how I’d do that:

      • Run . "$HOME/.cargo/env" in my shell (or the fish version if using fish)

      • Run echo "$PATH" | tr ':' '\n' | grep cargo to figure out which directories it added

      • See that it says /Users/bork/.cargo/bin and shorten that to ~/.cargo/bin

      • Add the directory ~/.cargo/bin to PATH (with the directions in this post)
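
    Those steps can be wrapped up in a tiny helper that just diffs two PATH strings (my sketch; new_path_entries is a made-up name):

```shell
# print each entry of $2 (a colon-separated PATH string) that is not in $1
new_path_entries() {
  printf '%s\n' "$2" | tr ':' '\n' | while read -r dir; do
    case ":$1:" in *":$dir:"*) ;; *) printf '%s\n' "$dir" ;; esac
  done
}
# usage: before=$PATH; . "$HOME/.cargo/env"; new_path_entries "$before" "$PATH"
```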




    I don’t think there’s anything wrong with doing what the tool suggests (it
    might be the “best way”!), but personally I usually use the second approach
    because I prefer knowing exactly what configuration I’m changing.


    a note on fish_add_path


    fish has a handy function called fish_add_path that you can run to add a directory to your PATH like this:


    fish_add_path /some/directory
    

    This is cool (it’s such a simple command!) but I’ve stopped using it for a couple of reasons:



    1. Sometimes fish_add_path will update the PATH for every session in the
      future (with a “universal variable”) and sometimes it will update the PATH
      just for the current session and it’s hard for me to tell which one it will
      do. In theory the docs explain this but I could not understand them.

    2. If you ever need to remove the directory from your PATH a few weeks or
      months later because maybe you made a mistake, it’s kind of hard to do
      (there are instructions in the comments of this github issue though).


    that’s all


    Hopefully this will help some people. Let me know (on Mastodon or Bluesky) if
    there are other major gotchas that have tripped you up when adding a
    directory to your PATH, or if you have questions about this post!

    Some terminal frustrations


    A few weeks ago I ran a terminal survey (you can read the results here) and at the end I asked:



    What’s the most frustrating thing about using the terminal for you?



    1600 people answered, and I decided to spend a few days categorizing all the
    responses. Along the way I learned that classifying qualitative data is not
    easy but I gave it my best shot. I ended up building a custom
    tool to make it faster to categorize
    everything.


    As with all of my surveys the methodology isn’t particularly scientific. I just
    posted the survey to Mastodon and Twitter, ran it for a couple of days, and got
    answers from whoever happened to see it and felt like responding.


    Here are the top categories of frustrations!


    I think it’s worth keeping in mind while reading these comments that



    • 40% of people answering this survey have been using the terminal for 21+ years

    • 95% of people answering the survey have been using the terminal for at least 4 years


    These comments aren’t coming from total beginners.


    Here are the categories of frustrations! The number in brackets is the number
    of people with that frustration. I’m mostly writing this up for myself because
    I’m trying to write a zine about the terminal and I wanted to get a sense for
    what people are having trouble with.


    remembering syntax (115)


    People talked about struggles remembering:



    • the syntax for CLI tools like awk, jq, sed, etc

    • the syntax for redirects

    • keyboard shortcuts for tmux, text editing, etc


    One example comment:



    There are just so many little “trivia” details to remember for full
    functionality. Even after all these years I’ll sometimes forget where it’s 2
    or 1 for stderr, or forget which is which for > and >>.
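
    (For reference, those two bits of trivia in one tiny example, since they trip me up too:)

```shell
echo out  > file                  # ">" truncates: file now contains only "out"
echo more >> file                 # ">>" appends: file now contains "out" then "more"
ls /nonexistent 2> errs || true   # fd 1 is stdout, fd 2 is stderr, so "2>" captures the error
```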



    switching terminals is hard (91)


    People talked about struggling with switching systems (for example home/work
    computer or when SSHing) and running into:



    • OS differences in keyboard shortcuts (like Linux vs Mac)

    • systems which don’t have their preferred text editor (“no vim” or “only vim”)

    • different versions of the same command (like Mac OS grep vs GNU grep)

    • no tab completion

    • a shell they aren’t used to (“the subtle differences between zsh and bash”)


    as well as differences inside the same system like pagers being not consistent
    with each other (git diff pagers, other pagers).


    One example comment:



    I got used to fish and vi mode which are not available when I ssh into
    servers, containers.



    color (85)


    Lots of problems with color, like:



    • programs setting colors that are unreadable with a light background color

    • finding a colorscheme they like (and getting it to work consistently across different apps)

    • color not working inside several layers of SSH/tmux/etc

    • not liking the defaults

    • not wanting color at all and struggling to turn it off


    This comment felt relatable to me:



    Getting my terminal theme configured in a reasonable way between the terminal
    emulator and fish (I did this years ago and remember it being tedious and
    fiddly and now feel like I’m locked into my current theme because it works
    and I dread touching any of that configuration ever again).



    keyboard shortcuts (84)


    Half of the comments on keyboard shortcuts were about how on Linux/Windows, the
    keyboard shortcut to copy/paste in the terminal is different from in the rest
    of the OS.


    Some other issues with keyboard shortcuts other than copy/paste:



    • using Ctrl-W in a browser-based terminal and closing the window

    • the terminal only supports a limited set of keyboard shortcuts (no
      Ctrl-Shift-, no Super, no Hyper, lots of ctrl- shortcuts aren’t
      possible like Ctrl-,)

    • the OS stopping you from using a terminal keyboard shortcut (like by default
      Mac OS uses Ctrl+left arrow for something else)

    • issues using emacs in the terminal

    • backspace not working (2)


    other copy and paste issues (75)


    Aside from “the keyboard shortcut for copy and paste is different”, there were
    a lot of OTHER issues with copy and paste, like:



    • copying over SSH

    • how tmux and the terminal emulator both do copy/paste in different ways

    • dealing with many different clipboards (system clipboard, vim clipboard, the
      “middle click” clipboard on Linux, tmux’s clipboard, etc) and potentially
      synchronizing them

    • random spaces added when copying from the terminal

    • pasting multiline commands which automatically get run in a terrifying way

    • wanting a way to copy text without using the mouse


    discoverability (55)


    There were lots of comments about this, which all came down to the same basic
    complaint – it’s hard to discover useful tools or features! This comment kind of
    summed it all up:



    How difficult it is to learn independently. Most of what I know is an
    assorted collection of stuff I’ve been told by random people over the years.



    steep learning curve (44)


    A lot of comments about it generally having a steep learning curve. A couple of
    example comments:



    After 15 years of using it, I’m not much faster at using it than I was 5 or
    maybe even 10 years ago.



    and



    That I know I could make my life easier by learning more about the shortcuts
    and commands and configuring the terminal but I don’t spend the time because it
    feels overwhelming.



    history (42)


    Some issues with shell history:



    • history not being shared between terminal tabs (16)

    • limits that are too short (4)

    • history not being restored when terminal tabs are restored

    • losing history because the terminal crashed

    • not knowing how to search history


    One example comment:



    It wasted a lot of time until I figured it out and still annoys me that
    “history” on zsh has such a small buffer; I have to type “history 0” to get
    any useful length of history.
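
    For reference, the fix for zsh’s small history is just a couple of lines of
    configuration. Here’s a sketch of what that might look like in ~/.zshrc
    (the numbers are arbitrary, the option names are zsh’s):

```shell
# sketch for ~/.zshrc: raise zsh's history limits
HISTSIZE=100000               # lines kept in memory (what plain `history` shows)
SAVEHIST=100000               # lines saved to $HISTFILE on disk
HISTFILE="$HOME/.zsh_history"
setopt SHARE_HISTORY          # share (and immediately write) history across sessions
```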



    bad documentation (37)


    People talked about:



    • documentation being generally opaque

    • lack of examples in man pages

    • programs which don’t have man pages


    Here’s a representative comment:



    Finding good examples and docs. Man pages often not enough, have to wade
    through stack overflow



    scrollback (36)


    A few issues with scrollback:



    • programs printing out too much data making you lose scrollback history

    • resizing the terminal messes up the scrollback

    • lack of timestamps

    • GUI programs that you start in the background printing stuff out that gets in
      the way of other programs’ outputs


    One example comment:



    When resizing the terminal (in particular: making it narrower) leads to
    broken rewrapping of the scrollback content because the commands formatted
    their output based on the terminal window width.



    “it feels outdated” (33)


    Lots of comments about how the terminal feels hampered by legacy decisions and
    how users often end up needing to learn implementation details that feel very
    esoteric. One example comment:



    Most of the legacy cruft, it would be great to have a green field
    implementation of the CLI interface.



    shell scripting (32)


    Lots of complaints about POSIX shell scripting. There’s a general feeling that
    shell scripting is difficult but also that switching to a different less
    standard scripting language (fish, nushell, etc) brings its own problems.



    Shell scripting. My tolerance to ditch a shell script and go to a scripting
    language is pretty low. It’s just too messy and powerful. Screwing up can be
    costly so I don’t even bother.



    more issues


    Some more issues that were mentioned at least 10 times:



    • (31) inconsistent command line arguments: is it -h or help or --help?

    • (24) keeping dotfiles in sync across different systems

    • (23) performance (e.g. “my shell takes too long to start”)

    • (20) window management (potentially with some combination of tmux tabs, terminal tabs, and multiple terminal windows. Where did that shell session go?)

    • (17) generally feeling scared/uneasy (“The debilitating fear that I’m going
      to do some mysterious Bad Thing with a command and I will have absolutely no
      idea how to fix or undo it or even really figure out what happened”)

    • (16) terminfo issues (“Having to learn about terminfo if/when I try a new terminal emulator and ssh elsewhere.”)

    • (16) lack of image support (sixel etc)

    • (15) SSH issues (like having to start over when you lose the SSH connection)

    • (15) various tmux/screen issues (for example lack of integration between tmux and the terminal emulator)

    • (15) typos & slow typing

    • (13) the terminal getting messed up for various reasons (pressing Ctrl-S, cating a binary, etc)

    • (12) quoting/escaping in the shell

    • (11) various Windows/PowerShell issues


    n/a (122)


    There were also 122 answers to the effect of “nothing really” or “only that I
    can’t do EVERYTHING in the terminal”.


    One example comment:



    Think I’ve found work arounds for most/all frustrations



    that’s all!


    I’m not going to make a lot of commentary on these results, but here are a
    couple of categories that feel related to me:



    • remembering syntax & history (often the thing you need to remember is something you’ve run before!)

    • discoverability & the learning curve (the lack of discoverability is definitely a big part of what makes it hard to learn)

    • “switching systems is hard” & “it feels outdated” (tools that haven’t really
      changed in 30 or 40 years have many problems but they do tend to be always
      there no matter what system you’re on, which is very useful and makes them
      hard to stop using)


    Trying to categorize all these results in a reasonable way really gave me an
    appreciation for social science researchers’ skills.

    What's involved in getting a "modern" terminal setup?


    Hello! Recently I ran a terminal survey and I asked people what frustrated
    them. One person commented:



    There are so many pieces to having a modern terminal experience. I wish it
    all came out of the box.



    My immediate reaction was “oh, getting a modern terminal experience isn’t that
    hard, you just need to….”, but the more I thought about it, the longer the
    “you just need to…” list got, and I kept thinking about more and more
    caveats.


    So I thought I would write down some notes about what it means to me personally
    to have a “modern” terminal experience and what I think can make it hard for
    people to get there.


    what is a “modern terminal experience”?


    Here are a few things that are important to me, with which part of the system
    is responsible for them:



    • multiline support for copy and paste: if you paste 3 commands in your shell, it should not immediately run them all! That’s scary! (shell, terminal emulator)

    • infinite shell history: if I run a command in my shell, it should be saved forever, not deleted after 500 history entries or whatever. Also I want commands to be saved to the history immediately when I run them, not only when I exit the shell session (shell)

    • a useful prompt: I can’t live without having my current directory and current git branch in my prompt (shell)

    • 24-bit colour: this is important to me because I find it MUCH easier to theme neovim with 24-bit colour support than in a terminal with only 256 colours (terminal emulator)

    • clipboard integration between vim and my operating system so that when I copy in Firefox, I can just press p in vim to paste (text editor, maybe the OS/terminal emulator too)

    • good autocomplete: for example commands like git should have command-specific autocomplete (shell)

    • having colours in ls (shell config)

    • a terminal theme I like: I spend a lot of time in my terminal, I want it to look nice and I want its theme to match my terminal editor’s theme. (terminal emulator, text editor)

    • automatic terminal fixing: If a program prints out some weird escape
      codes that mess up my terminal, I want that to automatically get reset so
      that my terminal doesn’t get messed up (shell)

    • keybindings: I want Ctrl+left arrow to work (shell or application)

    • being able to use the scroll wheel in programs like less: (terminal emulator and applications)


    There are a million other terminal conveniences out there and different people
    value different things, but those are the ones that I would be really unhappy
    without.


    how I achieve a “modern experience”


    My basic approach is:



    1. use the fish shell. Mostly don’t configure it, except to:

      • set the EDITOR environment variable to my favourite terminal editor

      • alias ls to ls --color=auto



    2. use any terminal emulator with 24-bit colour support. In the past I’ve used
      GNOME Terminal, Terminator, and iTerm, but I’m not picky about this. I don’t really
      configure it other than to choose a font.

    3. use neovim, with a configuration that I’ve been very slowly building over the last 9 years or so (the last time I deleted my vim config and started from scratch was 9 years ago)

    4. use the base16 framework to theme everything


    A few things that affect my approach:



    • I don’t spend a lot of time SSHed into other machines

    • I’d rather use the mouse a little than come up with keyboard-based ways to do everything

    • I work on a lot of small projects, not one big project


    some “out of the box” options for a “modern” experience


    What if you want a nice experience, but don’t want to spend a lot of time on
    configuration? Figuring out how to configure vim in a way that I was satisfied
    with really did take me like ten years, which is a long time!


    My best ideas for how to get a reasonable terminal experience with minimal
    config are:



    • shell: either fish or zsh with oh-my-zsh

    • terminal emulator: almost anything with 24-bit colour support, for example all of these are popular:

      • linux: GNOME Terminal, Konsole, Terminator, xfce4-terminal

      • mac: iTerm (Terminal.app doesn’t have 24-bit colour support)

      • cross-platform: kitty, alacritty, wezterm, or ghostty



    • shell config:

      • set the EDITOR environment variable to your favourite terminal text
        editor

      • maybe alias ls to ls --color=auto



    • text editor: this is a tough one, maybe micro or helix? I haven’t used
      either of them seriously but they both seem like very cool projects and I
      think it’s amazing that you can just use all the usual GUI editor commands
      (Ctrl-C to copy, Ctrl-V to paste, Ctrl-A to select all) in micro and
      they do what you’d expect. I would probably try switching to helix except
      that retraining my vim muscle memory seems way too hard. Also helix doesn’t
      have a GUI or plugin system yet.


    Personally I wouldn’t use xterm, rxvt, or Terminal.app as a terminal emulator,
    because I’ve found in the past that they’re missing core features (like 24-bit
    colour in Terminal.app’s case) that make the terminal harder to use for me.


    I don’t want to pretend that getting a “modern” terminal experience is easier
    than it is though – I think there are two issues that make it hard. Let’s talk
    about them!


    issue 1 with getting to a “modern” experience: the shell


    bash and zsh are by far the two most popular shells, and neither of them
    provide a default experience that I would be happy using out of the box, for
    example:



    • you need to customize your prompt

    • they don’t come with git completions by default, you have to set them up

    • by default, bash only stores 500 (!) lines of history and (at least on Mac OS)
      zsh is only configured to store 2000 lines, which is still not a lot

    • I find bash’s tab completion very frustrating, if there’s more than
      one match then you can’t tab through them


    And even though I love fish, the fact
    that it isn’t POSIX does make it hard for a lot of folks to make the switch.


    Of course it’s totally possible to learn how to customize your prompt in bash
    or whatever, and it doesn’t even need to be that complicated (in bash I’d
    probably start with something like export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ ', or maybe use starship).
    But each of these “not complicated” things really does add up and it’s
    especially tough if you need to keep your config in sync across several
    systems.
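
    To make that concrete, here’s a sketch of the kind of ~/.bashrc additions I
    mean (the history numbers are arbitrary, and the __git_ps1 part assumes
    git’s git-prompt.sh has been sourced):

```shell
# sketch for ~/.bashrc
HISTSIZE=100000                  # instead of bash's default 500 lines
HISTFILESIZE=100000
shopt -s histappend              # append to the history file instead of overwriting it
PROMPT_COMMAND='history -a'      # save each command immediately, not only on exit
export PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '
```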


    An extremely popular solution to getting a “modern” shell experience is
    oh-my-zsh. It seems like a great project and I know a lot
    of people use it very happily, but I’ve struggled with configuration systems
    like that in the past – it looks like right now the base oh-my-zsh adds about
    3000 lines of config, and often I find that having an extra configuration
    system makes it harder to debug what’s happening when things go wrong. I
    personally have a tendency to use the system to add a lot of extra plugins,
    make my system slow, get frustrated that it’s slow, and then delete it
    completely and write a new config from scratch.


    issue 2 with getting to a “modern” experience: the text editor


    In the terminal survey I ran recently, the most popular terminal text editors
    by far were vim, emacs, and nano.


    I think the main options for terminal text editors are:



    • use vim or emacs and configure it to your liking, you can probably have any
      feature you want if you put in the work

    • use nano and accept that you’re going to have a pretty limited experience
      (for example I don’t think you can select text with the mouse and then “cut”
      it in nano)

    • use micro or helix which seem to offer a pretty good out-of-the-box
      experience, potentially occasionally run into issues with using a less
      mainstream text editor

    • just avoid using a terminal text editor as much as possible, maybe use VSCode, use
      VSCode’s terminal for all your terminal needs, and mostly never edit files in
      the terminal. Or I know a lot of people use code as their EDITOR in the terminal.


    issue 3: individual applications


    The last issue is that sometimes individual programs that I use are kind of
    annoying. For example on my Mac OS machine, /usr/bin/sqlite3 doesn’t support
    the Ctrl+Left Arrow keyboard shortcut. Fixing this to get a reasonable
    terminal experience in SQLite was a little complicated, I had to:



    • realize why this is happening (Mac OS won’t ship GNU tools, and “Ctrl-Left arrow” support comes from GNU readline)

    • find a workaround (install sqlite from homebrew, which does have readline support)

    • adjust my environment (put Homebrew’s sqlite3 in my PATH)


    I find that debugging application-specific issues like this is really not easy
    and often it doesn’t feel “worth it” – often I’ll end up just dealing with
    various minor inconveniences because I don’t want to spend hours investigating
    them. The only reason I was even able to figure this one out at all is that
    I’ve been spending a huge amount of time thinking about the terminal recently.


    A big part of having a “modern” experience using terminal programs is just
    using newer terminal programs. For example, I can’t be bothered to learn a
    keyboard shortcut to sort the columns in top, but in htop I can just click
    on a column heading with my mouse to sort it. So I use htop instead! But
    discovering newer, more “modern” command line tools isn’t easy (though I
    made a list here), finding ones that I actually like using in practice
    takes time, and if you’re SSHed into another machine, they won’t always be
    there.


    everything affects everything else


    Something I find tricky about configuring my terminal to make everything “nice”
    is that changing one seemingly small thing about my workflow can really affect
    everything else. For example right now I don’t use tmux. But if I needed to use
    tmux again (for example because I was doing a lot of work SSHed into another
    machine), I’d need to think about a few things, like:



    • if I wanted tmux’s copy to synchronize with my system clipboard over
      SSH, I’d need to make sure that my terminal emulator has OSC 52 support

    • if I wanted to use iTerm’s tmux integration (which makes tmux tabs into iTerm
      tabs), I’d need to change how I configure colours – right now I set them
      with a shell script that I run when my shell starts, but that means the
      colours get lost when restoring a tmux session.


    and probably more things I haven’t thought of. “Using tmux means that I have to
    change how I manage my colours” sounds unlikely, but that really did happen to
    me and I decided “well, I don’t want to change how I manage colours right now,
    so I guess I’m not using that feature!”.


    It’s also hard to remember which features I’m relying on – for example maybe
    my current terminal does have OSC 52 support and because copying from tmux over SSH
    has always Just Worked I don’t even realize that that’s something I need, and
    then it mysteriously stops working when I switch terminals.
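
    (For what it’s worth, OSC 52 is just an escape sequence: the program
    base64-encodes some text and prints it wrapped in the sequence, and a
    supporting terminal emulator sets the system clipboard. Here’s a sketch in
    shell – copy_osc52 is a made-up helper name, and this only does anything
    if your emulator supports and allows OSC 52:)

```shell
# copy_osc52 is a hypothetical helper: it asks the terminal emulator (not the
# OS!) to set the clipboard, which is why it can work over SSH
copy_osc52() {
  printf '\033]52;c;%s\a' "$(printf '%s' "$1" | base64 | tr -d '\n')"
}

copy_osc52 "hello from an SSH session"
```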


    change things slowly


    Personally even though I think my setup is not that complicated, it’s taken
    me 20 years to get to this point! Because terminal config changes are so likely
    to have unexpected and hard-to-understand consequences, I’ve found that if I
    change a lot of terminal configuration all at once it makes it much harder to
    understand what went wrong if there’s a problem, which can be really
    disorienting.


    So I usually prefer to make pretty small changes, and accept that changes
    might take me a REALLY long time to get used to. For example I switched from
    using ls to eza a year or two ago and
    while I like it (because eza -l prints human-readable file sizes by default)
    I’m still not quite sure about it. But also sometimes it’s worth it to make a
    big change, like I made the switch to fish (from bash) 10 years ago and I’m
    very happy I did.


    getting a “modern” terminal is not that easy


    Trying to explain how “easy” it is to configure your terminal really just made
    me think that it’s kind of hard and that I still sometimes get confused.


    I’ve found that there’s never one perfect way to configure things in the
    terminal that will be compatible with every single other thing. I just need to
    try stuff, figure out some kind of locally stable state that works for me, and
    accept that if I start using a new tool it might disrupt the system and I might
    need to rethink things.

    "Rules" that terminal programs follow


    Recently I’ve been thinking about how everything that happens in the terminal
    is some combination of:



    1. Your operating system’s job

    2. Your shell’s job

    3. Your terminal emulator’s job

    4. The job of whatever program you happen to be running (like top or vim or cat)


    The first three (your operating system, shell, and terminal emulator) are all kind of
    known quantities – if you’re using bash in GNOME Terminal on Linux, you can
    more or less reason about how all of those things interact, and some of
    their behaviour is standardized by POSIX.


    But the fourth one (“whatever program you happen to be running”) feels like it
    could do ANYTHING. How are you supposed to know how a program is going to
    behave?



    programs behave surprisingly consistently


    As far as I know, there are no real standards for how programs in the terminal
    should behave – the closest things I know of are:



    • POSIX, which mostly dictates how your terminal emulator / OS / shell should
      work together. I think it does specify a few things about how core utilities like
      cp should work but AFAIK it doesn’t have anything to say about how for
      example htop should behave.

    • these command line interface guidelines


    But even though there are no standards, in my experience programs in the
    terminal behave in a pretty consistent way. So I wanted to write down a list of
    “rules” that in my experience programs mostly follow.


    these are meant to be descriptive, not prescriptive


    My goal here isn’t to convince authors of terminal programs that they should
    follow any of these rules. There are lots of exceptions to these and often
    there’s a good reason for those exceptions.


    But it’s very useful for me to know what behaviour to expect from a random new
    terminal program that I’m using. Instead of “uh, programs could do literally
    anything”, it’s “ok, here are the basic rules I expect, and then I can keep a
    short mental list of exceptions”.


    So I’m just writing down what I’ve observed about how programs behave in my 20
    years of using the terminal, why I think they behave that way, and some
    examples of cases where that rule is “broken”.


    it’s not always obvious which “rules” are the program’s responsibility to implement


    There are a bunch of common conventions that I think are pretty clearly the
    program’s responsibility to implement, like:



    • config files should go in ~/.BLAHrc or ~/.config/BLAH/FILE or /etc/BLAH/ or something

    • --help should print help text

    • programs should print “regular” output to stdout and errors to stderr


    But in this post I’m going to focus on things that it’s not 100% obvious are
    the program’s responsibility. For example it feels to me like a “law of nature”
    that pressing Ctrl-D should quit a REPL, but programs often
    need to explicitly implement support for it – even though cat doesn’t need
    to implement Ctrl-D support, ipython does. (more about that in “rule 3” below)


    Understanding which things are the program’s responsibility makes it much less
    surprising when different programs’ implementations are slightly different.


    rule 1: noninteractive programs should quit when you press Ctrl-C


    The main reason for this rule is that noninteractive programs will quit by
    default on Ctrl-C if they don’t set up a SIGINT signal handler, so this is
    kind of a “you should act like the default” rule.


    Something that trips a lot of people up is that this doesn’t apply to
    interactive programs like python3 or bc or less. This is because in
    an interactive program, Ctrl-C has a different job – if the program is
    running an operation (like for example a search in less or some Python code
    in python3), then Ctrl-C will interrupt that operation but not stop the
    program.


    As an example of how this works in an interactive program: here’s the code in prompt-toolkit (the library that IPython uses for handling input)
    that aborts a search when you press Ctrl-C.
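
    You can see both behaviours from the shell itself – here the shell sends
    SIGINT to itself with kill, as a stand-in for pressing Ctrl-C:

```shell
# no handler: SIGINT's default disposition terminates the process,
# so "survived" is never printed
sh -c 'kill -INT $$; echo survived'

# with a handler installed (the way an interactive program would do it),
# SIGINT just runs the trap and the program keeps going
sh -c 'trap "echo interrupted" INT; kill -INT $$; echo still running'
```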


    rule 2: TUIs should quit when you press q


    TUI programs (like less or htop) will usually quit when you press q.


    This rule doesn’t apply to any program where pressing q to quit wouldn’t make
    sense, like tmux or text editors.


    rule 3: REPLs should quit when you press Ctrl-D on an empty line


    REPLs (like python3 or ed) will usually quit when you press Ctrl-D on an
    empty line. This rule is similar to the Ctrl-C rule – the reason for this is
    that by default if you’re running a program (like cat) in “cooked mode”, then
    the operating system will return an EOF when you press Ctrl-D on an empty
    line.


    Most of the REPLs I use (sqlite3, python3, fish, bash, etc) don’t actually use
    cooked mode, but they all implement this keyboard shortcut anyway to mimic the
    default behaviour.


    For example, here’s the code in prompt-toolkit
    that quits when you press Ctrl-D, and here’s the same code in readline.
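
    Here’s a tiny sketch of what “quit on EOF” looks like from a program’s
    point of view: in shell, read fails once it hits EOF (which is exactly what
    Ctrl-D on an empty line produces in cooked mode), so a toy REPL might look
    like this:

```shell
# toy REPL: input comes from a pipe here, but typing Ctrl-D on an empty line
# at a real terminal would end the loop the same way (read sees EOF)
printf 'hello\nworld\n' | {
  while printf 'repl> ' && read -r line; do
    echo "got: $line"
  done
  echo "bye"   # reached when read sees EOF
}
```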


    I actually thought that this one was a “Law of Terminal Physics” until very
    recently because I’ve basically never seen it broken, but you can see that it’s
    just something that each individual input library has to implement in the links
    above.


    Someone pointed out that the Erlang REPL does not quit when you press Ctrl-D,
    so I guess not every REPL follows this “rule”.


    rule 4: don’t use more than 16 colours


    Terminal programs rarely use colours other than the base 16 ANSI colours. This
    is because if you specify colours with a hex code, it’s very likely to clash
    with some users’ background colour. For example if I print out some text as
    #EEEEEE, it would be almost invisible on a white background, though it would
    look fine on a dark background.


    But if you stick to the default 16 base colours, you have a much better chance
    that the user has configured those colours in their terminal emulator so that
    they work reasonably well with their background color. Another reason to stick
    to the default base 16 colours is that it makes fewer assumptions about what
    colours the terminal emulator supports.
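
    To make the distinction concrete: the base colours are selected by number,
    and the user’s terminal decides what each number actually looks like, while
    24-bit colour hardcodes an exact RGB value. A quick sketch:

```shell
# the 8 "normal" base colours are SGR codes 30-37 (90-97 are the bright
# variants); the terminal emulator's palette decides how each one renders
for code in 30 31 32 33 34 35 36 37; do
  printf '\033[%smcolour %s\033[0m\n' "$code" "$code"
done

# 24-bit colour instead hardcodes an exact RGB value, bypassing the palette:
printf '\033[38;2;238;238;238mthis #EEEEEE text may vanish on a white background\033[0m\n'
```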


    The only programs I usually see breaking this “rule” are text editors, for
    example Helix by default will use a purple background which is not a default
    ANSI colour. It seems fine for Helix to break this rule since Helix isn’t a
    “core” program and I assume any Helix user who doesn’t like that colorscheme
    will just change the theme.


    rule 5: vaguely support readline keybindings


    Almost every program I use supports readline keybindings if it would make
    sense to do so. For example, here are a bunch of different programs and a link
    to where they define Ctrl-E to go to the end of the line:



    None of those programs actually uses readline directly, they just sort of
    mimic emacs/readline keybindings. They don’t always mimic them exactly: for
    example atuin seems to use Ctrl-A as a prefix, so Ctrl-A doesn’t go to the
    beginning of the line.


    Also all of these programs seem to implement their own internal cut and paste
    buffers so you can delete a line with Ctrl-U and then paste it with Ctrl-Y.


    The exceptions to this are:



    • some programs (like git, cat, and nc) don’t have any line editing support at all (except for backspace, Ctrl-W, and Ctrl-U)

    • as usual text editors are an exception, every text editor has its own
      approach to editing text


    I wrote more about this “what keybindings does a program support?” question in
    entering text in the terminal is complicated.


    rule 5.1: Ctrl-W should delete the last word


    I’ve never seen a program (other than a text editor) where Ctrl-W doesn’t
    delete the last word. This is similar to the Ctrl-C rule – by default if a
    program is in “cooked mode”, the OS will delete the last word if you press
    Ctrl-W, and delete the whole line if you press Ctrl-U. So usually programs
    will imitate that behaviour.


    I can’t think of any exceptions to this other than text editors but if there
    are I’d love to hear about them!


    rule 6: disable colours when writing to a pipe


    Most programs will disable colours when writing to a pipe. For example:



    • rg blah will highlight all occurrences of blah in the output, but if the
      output is to a pipe or a file, it’ll turn off the highlighting.

    • ls --color=auto will use colour when writing to a terminal, but not when
      writing to a pipe


    Both of those programs will also format their output differently when writing
    to the terminal: ls will organize files into columns, and ripgrep will group
    matches with headings.


    If you want to force the program to use colour (for example because you want to
    look at the colour), you can use unbuffer to force the program’s output to be
    a tty like this:


    unbuffer rg blah | less -R
    

    I’m sure that there are some programs that “break” this rule but I can’t think
    of any examples right now. Some programs have an --color flag that you can
    use to force colour to be on, in the example above you could also do rg --color=always | less -R.
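
    The check these programs do via the isatty() function is available in shell
    as [ -t 1 ] (“is stdout a terminal?”), so you can sketch the decision like
    this:

```shell
# mimic how grep/ls decide whether to colour output:
# isatty(stdout) in C is [ -t 1 ] in shell
if [ -t 1 ]; then
  printf '\033[31mstdout is a terminal, colour on\033[0m\n'
else
  echo 'stdout is a pipe or a file, colour off'
fi
```

    (Run it bare and you get the colour branch; pipe it through cat and you get
    the other one.)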


    rule 7: - means stdin/stdout


    Usually if you pass - to a program instead of a filename, it’ll read from
    stdin or write to stdout (whichever is appropriate). For example, if you want
    to format the Python code that’s on your clipboard with black and then copy
    it, you could run:


    pbpaste | black - | pbcopy
    

    (pbpaste is a Mac program, you can do something similar on Linux with xclip)


    My impression is that most programs implement this if it would make sense and I
    can’t think of any exceptions right now, but I’m sure there are many
    exceptions.


    these “rules” take a long time to learn


    These rules took me a long time to learn because I had to:



    1. learn that the rule applied anywhere at all ("Ctrl-C will exit programs")

    2. notice some exceptions (“okay, Ctrl-C will exit find but not less”)

    3. subconsciously figure out what the pattern is ("Ctrl-C will generally quit
      noninteractive programs, but in interactive programs it might interrupt the
      current operation instead of quitting the program")

    4. eventually maybe formulate it into an explicit rule that I know


    A lot of my understanding of the terminal is honestly still in the
    “subconscious pattern recognition” stage. The only reason I’ve been taking the
    time to make things explicit at all is because I’ve been trying to explain how
    it works to others. Hopefully writing down these “rules” explicitly will make
    learning some of this stuff a little bit faster for others.

    Why pipes sometimes get "stuck": buffering


    Here’s a niche terminal problem that has bothered me for years but that I never
    really understood until a few weeks ago. Let’s say you’re running this command
    to watch for some specific output in a log file:


    tail -f /some/log/file | grep thing1 | grep thing2
    

    If log lines are being added to the file relatively slowly, the result I’d see
    is… nothing! It doesn’t matter if there were matches in the log file or not,
    there just wouldn’t be any output.


    I internalized this as “uh, I guess pipes just get stuck sometimes and don’t
    show me the output, that’s weird”, and I’d handle it by just
    running grep thing1 /some/log/file | grep thing2 instead, which would work.


    So as I’ve been doing a terminal deep dive over the last few months I was
    really excited to finally learn exactly why this happens.


    why this happens: buffering


    The reason why “pipes get stuck” sometimes is that it’s VERY common for
    programs to buffer their output before writing it to a pipe or file. So the
    pipe is working fine, the problem is that the program never even wrote the data
    to the pipe!


    This is for performance reasons: writing every piece of output immediately
    uses more system calls, so it’s more efficient to save up data until you
    have 8KB or so to write (or until the program exits) and THEN write it to
    the pipe.


    In this example:


    tail -f /some/log/file | grep thing1 | grep thing2
    

    the problem is that grep thing1 is saving up all of its matches until it has
    8KB of data to write, which might literally never happen.
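
    One fix (jumping ahead a little) is to tell grep to flush after every
    matching line with its --line-buffered flag. Here’s a sketch where a
    slow_producer function stands in for tail -f:

```shell
# slow_producer is a stand-in for `tail -f`: one early match, then a long wait
slow_producer() { echo "thing1 thing2"; sleep 2; echo "thing1 the end"; }

# without --line-buffered, the first match would sit in grep's ~8KB buffer
# until grep exits; with it, each match is flushed to the next grep right away
slow_producer | grep --line-buffered thing1 | grep thing2
```

    (The second grep is writing to the terminal here, so it line-buffers on its
    own.)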


    programs don’t buffer when writing to a terminal


    Part of why I found this so disorienting is that tail -f file | grep thing
    will work totally fine, but then when you add the second grep, it stops
    working!! The reason for this is that the way grep handles buffering depends
    on whether it’s writing to a terminal or not.


    Here’s how grep (and many other programs) decides to buffer its output:



    • Check if stdout is a terminal or not using the isatty function

      • If it’s a terminal, use line buffering (print every line immediately as soon as you have it)

      • Otherwise, use “block buffering” – only print data if you have at least 8KB or so of data to print




    So if grep is writing directly to your terminal then you’ll see the line as
    soon as it’s printed, but if it’s writing to a pipe, you won’t.
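    The same check is available in shell as test -t 1, so you can watch a script
    make the same decision itself (a minimal sketch, nothing grep-specific):

```shell
# Prints the "terminal" branch when run interactively, and the "pipe"
# branch when its output is piped or captured.
if [ -t 1 ]; then
  echo "stdout is a terminal: line buffering"
else
  echo "stdout is a pipe or file: block buffering"
fi
```

    Run it directly and you get the terminal branch; add | cat and you get the
    pipe branch.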


    Of course the buffer size isn’t always 8KB for every program; it depends on
    the implementation. For grep the buffering is handled by libc, and libc’s
    buffer size is defined by the BUFSIZ constant. Here’s where that’s defined in glibc.


    (as an aside: “programs do not use 8KB output buffers when writing to a
    terminal” isn’t, like, a law of terminal physics, a program COULD use an 8KB
    buffer when writing output to a terminal if it wanted, it would just be
    extremely weird if it did that, I can’t think of any program that behaves that
    way)


    commands that buffer & commands that don’t


    One annoying thing about this buffering behaviour is that you kind of need to
    remember which commands buffer their output when writing to a pipe.


    Some commands that don’t buffer their output:



    • tail

    • cat

    • tee


    I think almost everything else will buffer output, especially if it’s a command
    where you’re likely to be using it for batch processing. Here’s a list of some
    common commands that buffer their output when writing to a pipe, along with the
    flag that disables block buffering.



    • grep (--line-buffered)

    • sed (-u)

    • awk (there’s a fflush() function)

    • tcpdump (-l)

    • jq (-u)

    • tr (-u)

    • cut (can’t disable buffering)


    Those are all the ones I can think of. Lots of other Unix commands (like
    sort) may or may not buffer their output, but it often doesn’t matter: sort
    can’t write anything until it finishes receiving its input anyway.


    Also I did my best to test both the Mac OS and GNU versions of these but there
    are a lot of variations and I might have made some mistakes.


    programming languages where the default “print” statement buffers


    Also, here are a few programming languages where the default print statement
    will buffer output when writing to a pipe, and some ways to disable buffering
    if you want:



    • C (disable with setvbuf)

    • Python (disable with python -u, or PYTHONUNBUFFERED=1, or sys.stdout.reconfigure(line_buffering=True), or print(x, flush=True))

    • Ruby (disable with STDOUT.sync = true)

    • Perl (disable with $| = 1)


    I assume that these languages are designed this way so that the default print
    function will be fast when you’re doing batch processing.
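    For example, in Python you can see the difference by flushing just one of
    two lines (watch it interactively to see the timing):

```shell
# "first" comes through the pipe immediately because of flush=True;
# "second" only shows up when the program exits.
python3 -c '
import time
print("first", flush=True)
time.sleep(2)
print("second")
' | cat
```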


    Also whether output is buffered or not might depend on how you print, for
    example in C++ cout << "hello\n" buffers when writing to a pipe but cout << "hello" << endl will flush its output.


    when you press Ctrl-C on a pipe, the contents of the buffer are lost


    Let’s say you’re running this command as a hacky way to watch for DNS requests
    to example.com, and you forgot to pass -l to tcpdump:


    sudo tcpdump -ni any port 53 | grep example.com
    

    When you press Ctrl-C, what happens? In a magical perfect world, what I would
    want to happen is for tcpdump to flush its buffer, for grep to search it for
    example.com, and for me to see all the output I missed.


    But in the real world, what happens is that all the programs get killed and the
    output in tcpdump’s buffer is lost.


    I think this problem is probably unavoidable: I spent a little time with
    strace to see how this works, and grep receives the SIGINT before
    tcpdump does anyway, so even if tcpdump tried to flush its buffer, grep would
    already be dead.



    After a little more investigation, there is a workaround: if you find
    tcpdump’s PID and kill -TERM $PID, then tcpdump will flush the buffer so
    you can see the output. That’s kind of a pain but I tested it and it seems to
    work.



    redirecting to a file also buffers


    It’s not just pipes, this will also buffer:


    sudo tcpdump -ni any port 53 > output.txt
    

    Redirecting to a file doesn’t have the same “Ctrl-C will totally destroy the
    contents of the buffer” problem though – in my experience it usually behaves
    more like you’d want, where the contents of the buffer get written to the file
    before the program exits. I’m not 100% sure whether this is something you can
    always rely on or not.
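    One way to see the buffering (sketched here with python3 and a POSIX shell):
    redirect a program that sleeps before exiting to a file, and check the
    file’s size while it’s still running.

```shell
# "hello\n" (6 bytes) sits in the block buffer during the sleep...
python3 -c 'import time; print("hello"); time.sleep(2)' > /tmp/buffer-demo.txt &
sleep 1
wc -c < /tmp/buffer-demo.txt   # 0: nothing has been written yet
wait
wc -c < /tmp/buffer-demo.txt   # 6: the buffer was flushed at exit
```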


    a bunch of potential ways to avoid buffering


    Okay, let’s talk solutions. Let’s say you’ve run this command:


    tail -f /some/log/file | grep thing1 | grep thing2
    

    I asked people on Mastodon how they would solve this in practice and there were
    5 basic approaches. Here they are:


    solution 1: run a program that finishes quickly


    Historically my solution to this has been to just avoid the “command writing to
    pipe slowly” situation completely and instead run a program that will finish quickly
    like this:


    cat /some/log/file | grep thing1 | grep thing2 | tail
    

    This doesn’t do the same thing as the original command but it does mean that
    you get to avoid thinking about these weird buffering issues.


    (you could also do grep thing1 /some/log/file but I often prefer to use an
    “unnecessary” cat)


    solution 2: remember the “line buffer” flag to grep


    You could remember that grep has a flag to avoid buffering and pass it like this:


    tail -f /some/log/file | grep --line-buffered thing1 | grep thing2
    

    solution 3: use awk


    Some people said that if they’re specifically dealing with a multiple greps
    situation, they’ll rewrite it to use a single awk instead, like this:


    tail -f /some/log/file | awk '/thing1/ && /thing2/'
    

    Or you would write a more complicated grep, like this:


    tail -f /some/log/file | grep -E 'thing1.*thing2'
    

    (awk also buffers, so for this to work you’ll want awk to be the last command in the pipeline)
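    If you do want an awk in the middle of a pipeline, its fflush() function can
    flush after every line it prints (fflush() with no arguments flushes stdout
    in gawk, BSD awk, and POSIX.1-2018 awk):

```shell
# Print matching lines and flush immediately, so a downstream command
# sees each match right away instead of waiting for an 8KB buffer.
printf 'thing1 and thing2\nonly thing1\n' \
  | awk '/thing1/ && /thing2/ { print; fflush() }' \
  | cat
# prints: thing1 and thing2
```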


    solution 4: use stdbuf


    stdbuf uses LD_PRELOAD to turn off libc’s buffering, and you can use it to turn off output buffering like this:


    tail -f /some/log/file | stdbuf -o0 grep thing1 | grep thing2
    

    Like any LD_PRELOAD solution it’s a bit unreliable: it doesn’t work on
    static binaries, I think it won’t work if the program isn’t using libc’s
    buffering, and it doesn’t always work on Mac OS. Harry Marr has a really nice How stdbuf works post.


    solution 5: use unbuffer


    unbuffer program will force the program’s output to be a TTY, which means
    that it’ll behave the way it normally would on a TTY (less buffering, colour
    output, etc). You could use it in this example like this:


    tail -f /some/log/file | unbuffer grep thing1 | grep thing2
    

    Unlike stdbuf it will always work, though it might have unwanted side
    effects: for example, grep thing1 will also colour its matches.


    If you want to install unbuffer, it’s in the expect package.


    that’s all the solutions I know about!


    It’s a bit hard for me to say which one is “best”. I think personally I’m
    most likely to use unbuffer, because I know it’s always going to work.


    If I learn about more solutions I’ll try to add them to this post.


    I’m not really sure how often this comes up


    I think it’s not very common for me to have a program that slowly trickles data
    into a pipe like this, normally if I’m using a pipe a bunch of data gets
    written very quickly, processed by everything in the pipeline, and then
    everything exits. The only examples I can come up with right now are:



    • tcpdump

    • tail -f

    • watching log files in a different way like with kubectl logs

    • the output of a slow computation


    what if there were an environment variable to disable buffering?


    I think it would be cool if there were a standard environment variable to turn
    off buffering, like PYTHONUNBUFFERED in Python. I got this idea from a
    couple of blog posts by Mark Dominus
    in 2018. Maybe NO_BUFFER like NO_COLOR?


    The design seems tricky to get right; Mark points out that NetBSD has
    environment variables called STDBUF, STDBUF1, etc. which give you a
    ton of control over buffering, but I imagine most developers don’t want to
    implement many different environment variables to handle a relatively minor
    edge case.


    I’m also curious about whether there are any programs that just automatically
    flush their output buffers after some period of time (like 1 second). It feels
    like it would be nice in theory but I can’t think of any program that does that
    so I imagine there are some downsides.


    stuff I left out


    Some things I didn’t talk about in this post since these posts have been
    getting pretty long recently and seriously does anyone REALLY want to read 3000
    words about buffering?



    • the difference between line buffering and having totally unbuffered output

    • how buffering to stderr is different from buffering to stdout

    • this post is only about buffering that happens inside the program, your
      operating system’s TTY driver also does a little bit of buffering sometimes

    • other reasons you might need to flush your output other than “you’re writing
      to a pipe”

    Importing a frontend Javascript library without a build system


    I like writing Javascript without a build system
    and for the millionth time yesterday I ran into a problem where I needed to
    figure out how to import a Javascript library in my code without using a build
    system, and it took FOREVER to figure out how to import it because the
    library’s setup instructions assume that you’re using a build system.


    Luckily at this point I’ve mostly learned how to navigate this situation and
    either successfully use the library or decide it’s too difficult and switch to
    a different library, so here’s the guide I wish I had to importing Javascript
    libraries years ago.


    I’m only going to talk about using Javascript libraries on the frontend, and
    only about how to use them in a no-build-system setup.


    In this post I’m going to talk about:



    1. the three main types of Javascript files a library might provide (ES Modules, the “classic” global variable kind, and CommonJS)

    2. how to figure out which types of files a Javascript library includes in its build

    3. ways to import each type of file in your code


    the three kinds of Javascript files


    There are 3 basic types of Javascript files a library can provide:



    1. the “classic” type of file that defines a global variable. This is the kind
      of file that you can just <script src> and it’ll Just Work. Great if you
      can get it but not always available

    2. an ES module (which may or may not depend on other files, we’ll get to that)

    3. a “CommonJS” module. This is for Node, you can’t use it in a browser at all
      without using a build system.


    I’m not sure if there’s a better name for the “classic” type but I’m just going
    to call it “classic”. Also there’s a type called “AMD” but I’m not sure how
    relevant it is in 2024.


    Now that we know the 3 types of files, let’s talk about how to figure out which
    of these the library actually provides!


    where to find the files: the NPM build


    Every Javascript library has a build which it uploads to NPM. You might be
    thinking (like I did originally) – Julia! The whole POINT is that we’re not
    using Node to build our library! Why are we talking about NPM?


    But if you’re using a link from a CDN like https://cdnjs.cloudflare.com/ajax/libs/Chart.js/4.4.1/chart.umd.min.js,
    you’re still using the NPM build! All the files on the CDNs originally come
    from NPM.


    Because of this, I sometimes like to npm install the library even if I’m not
    planning to use Node to build my project at all – I’ll just create a new temp
    folder, npm install there, and then delete it when I’m done. I like being able to poke
    around in the files in the NPM build on my filesystem, because then I can be
    100% sure that I’m seeing everything that the library is making available in
    its build and that the CDN isn’t hiding something from me.


    So let’s npm install a few libraries and try to figure out what types of
    Javascript files they provide in their builds!


    example library 1: chart.js


    First let’s look inside Chart.js, a plotting library.


    $ cd /tmp/whatever
    
    $ npm install chart.js
    $ cd node_modules/chart.js/dist
    $ ls *.*js
    chart.cjs chart.js chart.umd.js helpers.cjs helpers.js

    This library seems to have 3 basic options:


    option 1: chart.cjs. The .cjs suffix tells me that this is a CommonJS
    file, for use in Node. This means it’s impossible to use it directly in the
    browser without some kind of build step.


    option 2: chart.js. The .js suffix by itself doesn’t tell us what kind of
    file it is, but if I open it up, I see import '@kurkle/color'; which is an
    immediate sign that this is an ES module – the import ... syntax is ES
    module syntax.


    option 3: chart.umd.js. “UMD” stands for “Universal Module Definition”,
    which I think means that you can use this file either with a basic <script src>, CommonJS,
    or some third thing called AMD that I don’t understand.


    how to use a UMD file


    When I was using Chart.js I picked Option 3. I just needed to add this to my
    code:


    <script src="./chart.umd.js"> </script>
    

    and then I could use the library through the global Chart variable.
    Couldn’t be easier. I just copied chart.umd.js into my Git repository so that
    I didn’t have to worry about using NPM or the CDNs going down or anything.


    the build files aren’t always in the dist directory


    A lot of libraries will put their build in the dist directory, but not
    always! The build files’ location is specified in the library’s package.json.


    For example here’s an excerpt from Chart.js’s package.json.


    "jsdelivr": "./dist/chart.umd.js",
    "unpkg": "./dist/chart.umd.js",
    "main": "./dist/chart.cjs",
    "module": "./dist/chart.js",

    I think this is saying that if you want to use an ES Module (module) you
    should use dist/chart.js, but the jsDelivr and unpkg CDNs should use
    ./dist/chart.umd.js. I guess main is for Node.


    chart.js’s package.json also says "type": "module", which according to this documentation
    tells Node to treat files as ES modules by default. I think it doesn’t tell us
    specifically which files are ES modules and which ones aren’t but it does tell
    us that something in there is an ES module.


    example library 2: @atcute/oauth-browser-client


    @atcute/oauth-browser-client
    is a library for logging into Bluesky with OAuth in the browser.


    Let’s see what kinds of Javascript files it provides in its build!


    $ npm install @atcute/oauth-browser-client
    
    $ cd node_modules/@atcute/oauth-browser-client/dist
    $ ls *js
    constants.js dpop.js environment.js errors.js index.js resolvers.js

    It seems like the only plausible root file in here is index.js, which looks
    something like this:


    export { configureOAuth } from './environment.js';
    
    export * from './errors.js';
    export * from './resolvers.js';

    This export syntax means it’s an ES module. That means we can use it in
    the browser without a build step! Let’s see how to do that.


    how to use an ES module with importmaps


    Using an ES module isn’t as easy as just adding a <script src="whatever.js">. Instead, if
    the ES module has dependencies (like @atcute/oauth-browser-client does) the
    steps are:



    1. Set up an import map in your HTML

    2. Put import statements like import { configureOAuth } from '@atcute/oauth-browser-client'; in your JS code

    3. Include your JS code in your HTML like this: <script type="module" src="YOURSCRIPT.js"></script>


    The reason we need an import map instead of just doing something like import { BrowserOAuthClient } from "./oauth-client-browser.js" is that internally the module has more import statements like import {something} from @atcute/client, and we need to tell the browser where to get the code for @atcute/client and all of its other dependencies.


    Here’s what the importmap I used looks like for @atcute/oauth-browser-client:


    <script type="importmap">
    
    {
    "imports": {
    "nanoid": "./node_modules/nanoid/bin/dist/index.js",
    "nanoid/non-secure": "./node_modules/nanoid/non-secure/index.js",
    "nanoid/url-alphabet": "./node_modules/nanoid/url-alphabet/dist/index.js",
    "@atcute/oauth-browser-client": "./node_modules/@atcute/oauth-browser-client/dist/index.js",
    "@atcute/client": "./node_modules/@atcute/client/dist/index.js",
    "@atcute/client/utils/did": "./node_modules/@atcute/client/dist/utils/did.js"
    }
    }
    </script>

    Getting these import maps to work is pretty fiddly, I feel like there must be a
    tool to generate them automatically but I haven’t found one yet. It’s definitely possible to
    write a script that automatically generates the importmaps using esbuild’s metafile but I haven’t done that and
    maybe there’s a better way.


    I decided to set up importmaps yesterday to get
    github.com/jvns/bsky-oauth-example
    to work, so there’s some example code in that repo.


    Also someone pointed me to Simon Willison’s
    download-esm, which will
    download an ES module and rewrite the imports to point to the JS files directly
    so that you don’t need importmaps. I haven’t tried it yet but it seems like a
    great idea.


    problems with importmaps: too many files


    I did run into some problems with using importmaps in the browser though – it
    needed to download dozens of Javascript files to load my site, and my webserver
    in development couldn’t keep up for some reason. I kept seeing files fail to
    load randomly and then had to reload the page and hope that they would succeed
    this time.


    It wasn’t an issue anymore when I deployed my site to production, so I guess it
    was a problem with my local dev environment.


    Also, one slightly annoying thing about ES modules in general is that you
    need to be running a webserver to use them. I’m sure this is for a good
    reason, but it’s easier when you can just open your index.html file without
    starting a webserver.


    Because of the “too many files” thing I think actually using ES modules with
    importmaps in this way isn’t actually that appealing to me, but it’s good to
    know it’s possible.


    how to use an ES module without importmaps


    If the ES module doesn’t have dependencies then it’s even easier – you don’t
    need the importmaps! You can just:



    • put <script type="module" src="YOURCODE.js"></script> in your HTML. The type="module" is important.

    • put import {whatever} from "https://example.com/whatever.js" in YOURCODE.js


    alternative: use esbuild


    If you don’t want to use importmaps, you can also use a build system like esbuild. I talked about how to do
    that in Some notes on using esbuild, but this blog post is
    about ways to avoid build systems completely so I’m not going to talk about
    that option here. I do still like esbuild though and I think it’s a good option
    in this case.


    what’s the browser support for importmaps?


    CanIUse says that importmaps are in
    “Baseline 2023: newly available across major browsers” so my sense is that in
    2024 that’s still maybe a little bit too new? I think I would use importmaps
    for some fun experimental code that I only wanted like myself and 12 people to
    use, but if I wanted my code to be more widely usable I’d use esbuild instead.


    example library 3: @atproto/oauth-client-browser


    Let’s look at one final example library! This is a different Bluesky auth
    library than @atcute/oauth-browser-client.


    $ npm install @atproto/oauth-client-browser
    
    $ cd node_modules/@atproto/oauth-client-browser/dist
    $ ls *js
    browser-oauth-client.js browser-oauth-database.js browser-runtime-implementation.js errors.js index.js indexed-db-store.js util.js

    Again, it seems like the only real candidate file here is index.js. But this
    is a different situation from the previous example library! Let’s take a look
    at index.js:


    There’s a bunch of stuff like this in index.js:


    __exportStar(require("@atproto/oauth-client"), exports);
    
    __exportStar(require("./browser-oauth-client.js"), exports);
    __exportStar(require("./errors.js"), exports);
    var util_js_1 = require("./util.js");

    This require() syntax is CommonJS syntax, which means that we can’t use this
    file in the browser at all: we need to use some kind of build step, and
    just pointing esbuild at it as an ES module won’t work either.


    Also in this library’s package.json it says "type": "commonjs" which is
    another way to tell it’s CommonJS.


    how to use a CommonJS module with esm.sh


    Originally I thought it was impossible to use CommonJS modules without
    learning a build system, but then someone on Bluesky told me about
    esm.sh! It’s a CDN that will translate anything into an ES
    Module. skypack.dev does something similar; I’m not
    sure what the difference is, but one person mentioned that if one doesn’t
    work, sometimes they’ll try the other one.


    For @atproto/oauth-client-browser using it seems pretty simple, I just need to put this in my HTML:


    <script type="module" src="script.js"> </script>
    

    and then put this in script.js.


    import { BrowserOAuthClient } from "https://esm.sh/@atproto/oauth-client-browser@0.3.0"
    

    It seems to Just Work, which is cool! Of course this is still sort of using a
    build system – it’s just that esm.sh is running the build instead of me. My
    main concerns with this approach are:



    • I don’t really trust CDNs to keep working forever – usually I like to copy dependencies into my repository so that they don’t go away for some reason in the future.

    • I’ve heard of some issues with CDNs having security compromises which scares me.

    • I don’t really understand what esm.sh is doing.


    esbuild can also convert CommonJS modules into ES modules


    I also learned that you can use esbuild to convert a CommonJS module
    into an ES module, though there are some limitations – the import { BrowserOAuthClient } from syntax doesn’t work. Here’s a github issue about that.


    I think the esbuild approach is probably more appealing to me than the
    esm.sh approach because it’s a tool that I already have on my computer so I
    trust it more. I haven’t experimented with this much yet though.


    summary of the three types of files


    Here’s a summary of the three types of JS files you might encounter, options
    for how to use them, and how to identify them.


    Unhelpfully a .js or .min.js file extension could be any of these 3
    options, so if the file is something.js you need to do more detective work to
    figure out what you’re dealing with.



    1. “classic” JS files

      • How to use it: <script src="whatever.js"></script>

      • Ways to identify it:

        • The website has a big friendly banner in its setup instructions saying “Use this with a CDN!” or something

        • A .umd.js extension

        • Just try to put it in a <script src=... tag and see if it works





    2. ES Modules

      • Ways to use it:

        • If there are no dependencies, just import {whatever} from "./my-module.js" directly in your code

        • If there are dependencies, create an importmap and import {whatever} from "my-module"


        • Use esbuild or any ES Module bundler



      • Ways to identify it:

        • Look for an import or export statement. (not module.exports = ..., that’s CommonJS)

        • An .mjs extension

        • maybe "type": "module" in package.json (though it’s not clear to me which file exactly this refers to)





    3. CommonJS Modules

      • Ways to use it:

        • Use https://esm.sh to convert it into an ES module, like https://esm.sh/@atproto/oauth-client-browser@0.3.0

        • Use a build somehow (??)



      • Ways to identify it:

        • Look for require() or module.exports = ... in the code

        • A .cjs extension

        • maybe "type": "commonjs" in package.json (though it’s not clear to me which file exactly this refers to)
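    That detective work can be partly scripted from the shell. These grep
    heuristics are rough (minified builds put everything on one line, and UMD
    wrappers mention require() too), and the file names are hypothetical, but
    they make a reasonable first pass:

```shell
# Two sample files standing in for a library's build output:
printf 'export function hello() {}\n' > esm-example.js
printf 'module.exports = { hello: function () {} };\n' > cjs-example.js

# ES module hint: top-level import/export syntax
grep -lE '^[[:space:]]*(import|export)[ {]' esm-example.js cjs-example.js
# prints: esm-example.js

# CommonJS hint: require() or module.exports
grep -l 'module\.exports' esm-example.js cjs-example.js
# prints: cjs-example.js
```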






    it’s really nice to have ES modules standardized


    The main difference between CommonJS modules and ES modules from my perspective
    is that ES modules are actually a standard. This makes me feel a lot more
    confident using them, because browsers commit to backwards compatibility for
    web standards forever – if I write some code using ES modules today, I can
    feel sure that it’ll still work the same way in 15 years.


    It also makes me feel better about using tooling like esbuild because even if
    the esbuild project dies, because it’s implementing a standard it feels likely
    that there will be another similar tool in the future that I can replace it
    with.


    the JS community has built a lot of very cool tools


    A lot of the time when I talk about this stuff I get responses like “I hate
    javascript!!! it’s the worst!!!”. But my experience is that there are a lot of great tools for Javascript
    (I just learned about https://esm.sh yesterday which seems great! I love
    esbuild!), and that if I take the time to learn how things work I can take
    advantage of some of those tools and make my life a lot easier.


    So the goal of this post is definitely not to complain about Javascript, it’s
    to understand the landscape so I can use the tooling in a way that feels good
    to me.


    questions I still have


    Here are some questions I still have, I’ll add the answers into the post if I
    learn the answer.



    • Is there a tool that automatically generates importmaps for an ES Module that
      I have set up locally? (apparently yes: jspm)

    • How can I convert a CommonJS module into an ES module on my computer, the way
      https://esm.sh does? (apparently esbuild can sort of do this, though named exports don’t work)

    • When people normally build CommonJS modules into regular JS code, what code
      is doing that? Obviously there are tools like webpack, rollup, esbuild, etc., but
      do those tools all implement their own JS parsers/static analysis? How many
      JS parsers are there out there?

    • Is there any way to bundle an ES module into a single file (like
      atcute-client.js), but so that in the browser I can still import multiple
      different paths from that file (like both @atcute/client/lexicons and
      @atcute/client)?


    all the tools


    Here’s a list of every tool we talked about in this post:


    • download-esm

    • esm.sh

    • skypack.dev

    • esbuild

    • jspm

    Writing this post has made me think that even though I usually don’t want to
    have a build that I run every time I update the project, I might be willing to
    have a build step (using download-esm or something) that I run only once
    when setting up the project and never run again except maybe if I’m updating my
    dependency versions.


    that’s all!


    Thanks to Marco Rogers who taught me a lot of the things
    in this post. I’ve probably made some mistakes in this post and I’d love to
    know what they are – let me know on Bluesky or Mastodon!

    New microblog with TILs


    I added a new section to this site a couple weeks ago called
    TIL (“today I learned”).


    the goal: save interesting tools & facts I posted on social media


    One kind of thing I like to post on Mastodon/Bluesky is “hey, here’s a cool
    thing”, like the great SQLite repl litecli, or
    the fact that cross compiling in Go Just Works and it’s amazing, or
    cryptographic right answers,
    or this great diff tool. Usually I don’t want to write
    a whole blog post about those things because I really don’t have much more to
    say than “hey this is useful!”


    It started to bother me that I didn’t have anywhere to put those things: for
    example recently I wanted to use diffdiff and I just
    could not remember what it was called.


    the solution: make a new section of this blog


    So I quickly made a new folder called /til/, added some
    custom styling (I wanted to style the posts to look a little bit like a tweet),
    made a little Rake task to help me create new posts quickly (rake new_til), and
    set up a separate RSS Feed for it.


    I think this new section of the blog might be more for myself than anything,
    now when I forget the link to Cryptographic Right Answers I can hopefully look
    it up on the TIL page. (you might think “julia, why not use bookmarks??” but I
    have been failing to use bookmarks for my whole life and I don’t see that
    changing ever, putting things in public is for whatever reason much easier for
    me)


    So far it’s been working: often I can actually just make a quick post in 2
    minutes, which was the goal.


    inspired by Simon Willison’s TIL blog


    My page is inspired by Simon Willison’s great TIL blog, though my TIL posts are a lot shorter.


    I don’t necessarily want everything to be archived


    This came about because I spent a lot of time on Twitter, so I’ve been thinking
    about what I want to do about all of my tweets.


    I keep reading the advice to “POSSE” (“post on your own site, syndicate
    elsewhere”), and while I find the idea appealing in principle, for me part of
    the appeal of social media is that it’s a little bit ephemeral. I can
    post polls or questions or observations or jokes and then they can just kind of
    fade away as they become less relevant.


    I find it a lot easier to identify specific categories of things that I actually
    want to have on a Real Website That I Own:


    • blog posts

    • comics

    • TIL posts


    and then let everything else be kind of ephemeral.


    I really believe in the advice to make email lists though – the first two
    (blog posts & comics) both have email lists and RSS feeds that people can
    subscribe to if they want. I might add a quick summary of any TIL posts from
    that week to the “blog posts from this week” mailing list.

    ASCII control characters in my terminal


    Hello! I’ve been thinking about the terminal a lot and yesterday I got curious
    about all these “control codes”, like Ctrl-A, Ctrl-C, Ctrl-W, etc. What’s
    the deal with all of them?


    a table of ASCII control characters


    Here’s a table of all 33 ASCII control characters, and what they do on my
    machine (on Mac OS), more or less. There are about a million caveats, but I’ll talk about
    what it means and all the problems with this diagram that I know about.



    You can also view it as an HTML page (I just made it an image so
    it would show up in RSS).


    different kinds of codes are mixed together


    The first surprising thing about this diagram to me is that there are 33
    control codes, split into (very roughly speaking) these categories:



    1. Codes that are handled by the operating system’s terminal driver, for
      example when the OS sees a 3 (Ctrl-C), it’ll send a SIGINT signal to
      the current program

    2. Everything else is passed through to the application as-is and the
      application can do whatever it wants with them. Some subcategories of
      those:

      • Codes that correspond to a literal keypress of a key on your keyboard
        (Enter, Tab, Backspace). For example when you press Enter, your
        terminal gets sent 13.

      • Codes used by readline: “the application can do whatever it wants”
        often means “it’ll do more or less what the readline library does,
        whether the application actually uses readline or not”, so I’ve
        labelled a bunch of the codes that readline uses

      • Other codes, for example I think Ctrl-X has no standard meaning in the
        terminal in general but emacs uses it very heavily




    There’s no real structure to which codes are in which categories, they’re all
    just kind of randomly scattered because this evolved organically.


    (If you’re curious about readline, I wrote more about readline in entering text in the terminal is complicated, and there are a lot of
    cheat sheets out there)


    there are only 33 control codes


    Something else that I find a little surprising is that there are only 33 control codes –
    A to Z, plus 7 more (@, [, \, ], ^, _, ?). This means that if you want to
    have for example Ctrl-1 as a keyboard shortcut in a terminal application,
    that’s not really meaningful – on my machine at least Ctrl-1 is exactly the
    same thing as just pressing 1, Ctrl-3 is the same as Ctrl-[, etc.
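That arithmetic is easy to check: a control code is just the key's ASCII value with everything but the low 5 bits cleared, with Ctrl-? as the one oddball (it sends 127). A quick Python sketch of the mapping (my own illustration, not from the diagram):

```python
def ctrl_code(key: str) -> int:
    """Return the byte a terminal sends for Ctrl-<key>.

    Works for the 33 keys that actually have control codes:
    @, A-Z, [, \\, ], ^, _ map to 0-31, and ? maps to 127 (DEL).
    """
    if key == "?":
        return 127  # Ctrl-? is the special case: it sends DEL
    # clearing all but the low 5 bits of the ASCII value gives the code
    return ord(key.upper()) & 0x1F

# Ctrl-C is 3 (SIGINT), Ctrl-M is 13 (same as Enter),
# Ctrl-I is 9 (same as Tab), Ctrl-[ is 27 (same as Escape)
print(ctrl_code("C"), ctrl_code("M"), ctrl_code("I"), ctrl_code("["))  # → 3 13 9 27
```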


    Also Ctrl+Shift+C isn’t a control code – what it does depends on your
    terminal emulator. On Linux Ctrl-Shift-X is often used by the terminal
    emulator to copy or open a new tab or paste for example, it’s not sent to the
    TTY at all.


    Also I use Ctrl+Left Arrow all the time, but that isn’t a control code,
    instead it sends an ANSI escape sequence (ctrl-[[1;5D) which is a different
    thing which we absolutely do not have space for in this post.


    This “there are only 33 codes” thing is totally different from how keyboard
    shortcuts work in a GUI where you can have Ctrl+KEY for any key you want.


    the official ASCII names aren’t very meaningful to me


    Each of these 33 control codes has a name in ASCII (for example 3 is ETX).
    When all of these control codes were originally defined, they weren’t being
    used for computers or terminals at all, they were used for the telegraph machine.
    Telegraph machines aren’t the same as UNIX terminals so a lot of the codes were repurposed to mean something else.


    Personally I don’t find these ASCII names very useful, because 50% of the time
    the name in ASCII has no actual relationship to what that code does on UNIX
    systems today. So it feels easier to just ignore the ASCII names completely
    instead of trying to figure which ones still match their original meaning.


    It’s hard to use Ctrl-M as a keyboard shortcut


    Another thing that’s a bit weird is that Ctrl-M is literally the same as
    Enter, and Ctrl-I is the same as Tab, which makes it hard to use those two as keyboard shortcuts.


    From some quick research, it seems like some folks do still use Ctrl-I and
    Ctrl-M as keyboard shortcuts (here’s an example), but to do that
    you need to configure your terminal emulator to treat them differently than the
    default.


    For me the main takeaway is that if I ever write a terminal application I
    should avoid Ctrl-I and Ctrl-M as keyboard shortcuts in it.


    how to identify what control codes get sent


    While writing this I needed to do a bunch of experimenting to figure out what
    various key combinations did, so I wrote this Python script
    echo-key.py
    that will print them out.
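The real script is linked above; a minimal sketch of the same idea might look like this (the helper names here are mine, not from echo-key.py):

```python
import sys

def caret_notation(byte: int) -> str:
    """Format a byte the way terminals display control codes: ^X."""
    if byte == 127:
        return "^?"                      # DEL
    if byte < 32:
        return "^" + chr(byte + 64)      # e.g. 3 -> ^C, 13 -> ^M
    return chr(byte)

def echo_keys() -> None:
    """Put the TTY in raw mode and print each byte as it arrives."""
    import termios, tty
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        while True:
            b = sys.stdin.buffer.read(1)[0]
            # use \r\n: in raw mode the OS won't translate newlines for us
            sys.stdout.write(f"{b}\t{caret_notation(b)}\r\n")
            sys.stdout.flush()
            if b == 3:   # raw mode disables ISIG, so catch Ctrl-C manually
                break
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

print(caret_notation(3), caret_notation(13), caret_notation(127))  # → ^C ^M ^?
```

Run `echo_keys()` at a real terminal and press keys to see what bytes they send; Ctrl-C exits.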


    There’s probably a more official way but I appreciated having a script I could
    customize.


    caveat: on canonical vs noncanonical mode


    Two of these codes (Ctrl-W and Ctrl-U) are labelled in the table as
    “handled by the OS”, but actually they’re not always handled by the OS, it
    depends on whether the terminal is in “canonical” mode or in “noncanonical mode”.


    In canonical mode,
    programs only get input when you press Enter (and the OS is in charge of deleting characters when you press Backspace or Ctrl-W). But in noncanonical mode the program gets
    input immediately when you press a key, and the Ctrl-W and Ctrl-U codes are passed through to the program to handle any way it wants.


    Generally in noncanonical mode the program will handle Ctrl-W and Ctrl-U
    similarly to how the OS does, but there are some small differences.


    Some examples of programs that use canonical mode:



    • probably pretty much any noninteractive program, like grep or cat

    • git, I think


    Examples of programs that use noncanonical mode:



    • python3, irb and other REPLs

    • your shell

    • any full screen TUI like less or vim
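The way those programs opt out of canonical mode is by clearing the ICANON flag with termios; here's a rough Python sketch of the mechanics (the function names are mine):

```python
import termios

def noncanonical(attrs: list) -> list:
    """Return a copy of termios attributes (from tcgetattr) with ICANON
    cleared, so reads return per keypress instead of per line."""
    attrs = list(attrs)
    attrs[3] &= ~termios.ICANON   # index 3 is the local modes (c_lflag) field
    return attrs

def read_one_key() -> str:
    """Read a single keypress without waiting for Enter (needs a real TTY)."""
    import sys
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        termios.tcsetattr(fd, termios.TCSADRAIN, noncanonical(old))
        # in noncanonical mode, Ctrl-W and Ctrl-U arrive here untouched
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
```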


    caveat: all of the “OS terminal driver” codes are configurable with stty


    I said that Ctrl-C sends SIGINT but technically this is not necessarily
    true, if you really want to you can remap all of the codes labelled “OS
    terminal driver”, plus Backspace, using a tool called stty, and you can view
    the mappings with stty -a.


    Here are the mappings on my machine right now:


    $ stty -a
    cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>;
        eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V;
        min = 1; quit = ^\; reprint = ^R; start = ^Q; status = ^T;
        stop = ^S; susp = ^Z; time = 0; werase = ^W;

    I have personally never remapped any of these and I cannot imagine a reason I
    would (I think it would be a recipe for confusion and disaster for me), but I
    asked on Mastodon and people said the most common reasons they used
    stty were:



    • fix a broken terminal with stty sane

    • set stty erase ^H to change how Backspace works

    • set stty ixoff

    • some people even map SIGINT to a different key, like their DELETE key


    caveat: on signals


    Two signals caveats:



    1. If the ISIG terminal mode is turned off, then the OS won’t send signals. For example vim turns off ISIG

    2. Apparently on BSDs, there’s an extra control code (Ctrl-T) which sends SIGINFO


    You can see which terminal modes a program is setting using strace like this;
    terminal modes are set with the ioctl system call:


    $ strace -tt -o out vim
    $ grep ioctl out | grep SET

    here are the modes vim sets when it starts (ISIG and ICANON are
    missing!):


    17:43:36.670636 ioctl(0, TCSETS, {c_iflag=IXANY|IMAXBEL|IUTF8,
    c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST, c_cflag=B38400|CS8|CREAD,
    c_lflag=ECHOK|ECHOCTL|ECHOKE|PENDIN, ...}) = 0

    and it resets the modes when it exits:


    17:43:38.027284 ioctl(0, TCSETS, {c_iflag=ICRNL|IXANY|IMAXBEL|IUTF8,
    c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD,
    c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE|PENDIN, ...}) = 0

    I think the specific combination of modes vim is using here might be called
    “raw mode”, man cfmakeraw talks about
    that.


    there are a lot of conflicts


    Related to “there are only 33 codes”, there are a lot of conflicts where
    different parts of the system want to use the same code for different things,
    for example by default Ctrl-S will freeze your screen, but if you turn that
    off then readline will use Ctrl-S to do a forward search.


    Another example is that on my machine sometimes Ctrl-T will send SIGINFO
    and sometimes it’ll transpose 2 characters and sometimes it’ll do something
    completely different depending on:



    • whether the program has ISIG set

    • whether the program uses readline / imitates readline’s behaviour


    caveat: on “backspace” and “other backspace”


    In this diagram I’ve labelled code 127 as “backspace” and 8 as “other
    backspace”. Uh, what?


    I think this was the single biggest topic of discussion in the replies on Mastodon – apparently there’s a LOT of history to this and I’d never heard of any of it before.


    First, here’s how it works on my machine:



    1. I press the Backspace key

    2. The TTY gets sent the byte 127, which is called DEL in ASCII

    3. the OS terminal driver and readline both have 127 mapped to “backspace” (so it works both in canonical mode and noncanonical mode)

    4. The previous character gets deleted


    If I press Ctrl+H, it has the same effect as Backspace if I’m using
    readline, but in a program without readline support (like cat for instance),
    it just prints out ^H.


    Apparently Step 2 above is different for some folks – their Backspace key sends
    the byte 8 instead of 127, and so if they want Backspace to work then they
    need to configure the OS (using stty) to set erase = ^H.


    There’s an incredible section of the Debian Policy Manual on keyboard configuration
    that describes how Delete and Backspace should work according to Debian
    policy, which seems very similar to how it works on my Mac today. My
    understanding (via this mastodon post)
    is that this policy was written in the 90s because there was a lot of confusion
    about what Backspace should do in the 90s and there needed to be a standard
    to get everything to work.


    There’s a bunch more historical terminal stuff here but that’s all I’ll say for
    now.


    there’s probably a lot more diversity in how this works


    I’ve probably missed a bunch more ways that “how it works on my machine” might
    be different from how it works on other people’s machines, and I’ve probably
    made some mistakes about how it works on my machine too. But that’s all I’ve
    got for today.


    Some more stuff I know that I’ve left out: according to stty -a Ctrl-O is
    “discard”, Ctrl-R is “reprint”, and Ctrl-Y is “dsusp”. I have no idea how
    to make those actually do anything (pressing them does not do anything
    obvious, and some people have told me what they used to do historically but
    it’s not clear to me if they have a use in 2024), and a lot of the time in practice
    they seem to just be passed through to the application anyway so I just
    labelled Ctrl-R and Ctrl-Y as
    readline.


    not all of this is that useful to know


    Also I want to say that I think the contents of this post are kind of interesting
    but I don’t think they’re necessarily that useful. I’ve used the terminal
    pretty successfully every day for the last 20 years without knowing literally
    any of this – I just knew what Ctrl-C, Ctrl-D, Ctrl-Z, Ctrl-R,
    Ctrl-L did in practice (plus maybe Ctrl-A, Ctrl-E and Ctrl-W) and did
    not worry about the details for the most part, and that was
    almost always totally fine except when I was trying to use xterm.js.


    But I had fun learning about it so maybe it’ll be interesting to you too.

    Using less memory to look up IP addresses in Mess With DNS


    I’ve been having problems for the last 3 years or so where Mess With DNS
    periodically runs out of memory and gets OOM killed.


    This hasn’t been a big priority for me: usually it just goes down for a few
    minutes while it restarts, and it only happens once a day at most, so I’ve just
    been ignoring it. But last week it started actually causing a problem, so I decided
    to look into it.


    This was kind of a winding road where I learned a lot.



    there’s about 100MB of memory available


    I run Mess With DNS on a VM with about 465MB of RAM, which according to
    ps aux (the RSS column) is split up something like:



    • 100MB for PowerDNS

    • 200MB for Mess With DNS

    • 40MB for hallpass


    That leaves about 110MB of memory free.


    A while back I set GOMEMLIMIT to 250MB
    to try to make sure the garbage collector ran if Mess With DNS used more than
    250MB of memory, and I think this helped but it didn’t solve everything.


    the problem: OOM killing the backup script


    A few weeks ago I started backing up Mess With DNS’s database for the first time using restic.


    This has been working okay, but since Mess With DNS operates without much extra
    memory I think restic sometimes needed more memory than was available on the
    system, and so the backup script sometimes got OOM killed.


    This was a problem because



    1. backups might be corrupted sometimes

    2. more importantly, restic takes out a lock when it runs, and so I’d have to manually do an
      unlock if I wanted the backups to continue working. Doing manual work like
      this is the #1 thing I try to avoid with all my web services (who has time
      for that!) so I really wanted to do something about it.


    There’s probably more than one solution to this, but I decided to try to make
    Mess With DNS use less memory so that there was more available memory on the
    system, mostly because it seemed like a fun problem to try to solve.


    what’s using memory: IP addresses


    I’d run a memory profile of Mess With DNS a bunch of times in the past, so I
    knew exactly what was using most of Mess With DNS’s memory: IP addresses.


    When it starts, Mess With DNS loads this database where you can look up the
    ASN of every IP address
    into memory, so that when it
    receives a DNS query it can take the source IP address like 74.125.16.248 and
    tell you that IP address belongs to GOOGLE.


    This database by itself used about 117MB of memory, and a simple du told me
    that was too much – the original text files were only 37MB!


    $ du -sh *.tsv
    26M ip2asn-v4.tsv
    11M ip2asn-v6.tsv

    The way it worked originally is that I had an array of these:


    type IPRange struct {
        StartIP net.IP
        EndIP   net.IP
        Num     int
        Name    string
        Country string
    }

    and I searched through it with a binary search to figure out if any of the
    ranges contained the IP I was looking for. Basically the simplest possible
    thing and it’s super fast, my machine can do about 9 million lookups per
    second.


    attempt 1: use SQLite


    I’ve been using SQLite recently, so my first thought was – maybe I can store
    all of this data on disk in an SQLite database, give the tables an index, and
    that’ll use less memory.


    So I:



    • wrote a quick Python script using sqlite-utils to import the TSV files into an SQLite database

    • adjusted my code to select from the database instead


    This did solve the initial memory goal (after a GC it now hardly used any
    memory at all because the table was on disk!), though I’m not sure how much GC
    churn this solution would cause if we needed to do a lot of queries at once. I
    did a quick memory profile and it seemed to allocate about 1KB of memory per
    lookup.


    Let’s talk about the issues I ran into with using SQLite though.


    problem: how to store IPv6 addresses


    SQLite doesn’t have support for big integers and IPv6 addresses are 128 bits,
    so I decided to store them as text. I think BLOB might have been better – I
    originally thought BLOBs couldn’t be compared, but the sqlite docs say they can.


    I ended up with this schema:


    CREATE TABLE ipv4_ranges (
        start_ip INTEGER NOT NULL,
        end_ip INTEGER NOT NULL,
        asn INTEGER NOT NULL,
        country TEXT NOT NULL,
        name TEXT NOT NULL
    );
    CREATE TABLE ipv6_ranges (
        start_ip TEXT NOT NULL,
        end_ip TEXT NOT NULL,
        asn INTEGER,
        country TEXT,
        name TEXT
    );
    CREATE INDEX idx_ipv4_ranges_start_ip ON ipv4_ranges (start_ip);
    CREATE INDEX idx_ipv6_ranges_start_ip ON ipv6_ranges (start_ip);
    CREATE INDEX idx_ipv4_ranges_end_ip ON ipv4_ranges (end_ip);
    CREATE INDEX idx_ipv6_ranges_end_ip ON ipv6_ranges (end_ip);

    Also I learned that Python has an ipaddress module, so I could use
    ipaddress.ip_address(s).exploded to make sure that the IPv6 addresses were
    expanded so that a string comparison would compare them properly.
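Putting those pieces together, the lookup looked roughly like this Python sketch (an in-memory database with one made-up row, not the real import script):

```python
import sqlite3
from ipaddress import ip_address

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ipv6_ranges (
    start_ip TEXT NOT NULL, end_ip TEXT NOT NULL,
    asn INTEGER, country TEXT, name TEXT)""")
db.execute("CREATE INDEX idx_start ON ipv6_ranges (start_ip)")
db.execute("CREATE INDEX idx_end ON ipv6_ranges (end_ip)")

def exploded(s: str) -> str:
    # zero-pad every group so string comparison orders addresses correctly
    return ip_address(s).exploded

# an illustrative range, not real ip2asn data
db.execute("INSERT INTO ipv6_ranges VALUES (?, ?, ?, ?, ?)",
           (exploded("2607:f8b0::"),
            exploded("2607:f8b0:ffff:ffff:ffff:ffff:ffff:ffff"),
            15169, "US", "GOOGLE"))

def find_asn(ip: str):
    return db.execute(
        "SELECT asn, name FROM ipv6_ranges WHERE ? BETWEEN start_ip AND end_ip",
        (exploded(ip),)).fetchone()

print(find_asn("2607:f8b0:4006:824::200e"))  # → (15169, 'GOOGLE')
```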


    problem: it’s 500x slower


    I ran a quick microbenchmark, something like this. It printed out that it could
    look up 17,000 IPv6 addresses per second, and similarly for IPv4 addresses.


    This was pretty discouraging – being able to look up 17k addresses per second
    is kind of fine (Mess With DNS does not get a lot of traffic), but I compared it to
    the original binary search code and the original code could do 9 million per second.


    ips := []net.IP{}
    count := 20000
    for i := 0; i < count; i++ {
        // create a random IPv6 address
        bytes := randomBytes()
        ip := net.IP(bytes[:])
        ips = append(ips, ip)
    }
    now := time.Now()
    success := 0
    for _, ip := range ips {
        _, err := ranges.FindASN(ip)
        if err == nil {
            success++
        }
    }
    fmt.Println(success)
    elapsed := time.Since(now)
    fmt.Println("number per second", float64(count)/elapsed.Seconds())

    time for EXPLAIN QUERY PLAN


    I’d never really done an EXPLAIN in sqlite, so I thought it would be a fun
    opportunity to see what the query plan was doing.


    sqlite> explain query plan select * from ipv6_ranges where '2607:f8b0:4006:0824:0000:0000:0000:200e' BETWEEN start_ip and end_ip;
    QUERY PLAN
    `--SEARCH ipv6_ranges USING INDEX idx_ipv6_ranges_end_ip (end_ip>?)

    It looks like it’s just using the end_ip index and not the start_ip index,
    so maybe it makes sense that it’s slower than the binary search.


    I tried to figure out if there was a way to make SQLite use both indexes, but I
    couldn’t find one and maybe it knows best anyway.


    At this point I gave up on the SQLite solution: I didn’t love that it was
    slower, and it’s also a lot more complex than just doing a binary search. I felt
    like I’d rather keep something much more similar to the binary search.


    A few things I tried with SQLite that did not cause it to use both indexes:



    • using a compound index instead of two separate indexes

    • running ANALYZE

    • using INTERSECT to intersect the results of start_ip < ? and ? < end_ip. This did make it use both indexes, but it also seemed to make the
      query literally 1000x slower, probably because it needed to create the
      results of both subqueries in memory and intersect them.


    attempt 2: use a trie


    My next idea was to use a
    trie,
    because I had some vague idea that maybe a trie would use less memory, and
    I found this library called
    ipaddress-go that lets you look up IP addresses using a trie.


    I tried using it (here’s the code), but I
    think I was doing something wildly wrong, because compared to my naive array + binary search:



    • it used WAY more memory (800MB to store just the IPv4 addresses)

    • it was a lot slower to do the lookups (it could do only 100K/second instead of 9 million/second)


    I’m not really sure what went wrong here but I gave up on this approach and
    decided to just try to make my array use less memory and stick to a simple
    binary search.


    some notes on memory profiling


    One thing I learned about memory profiling is that you can use runtime
    package to see how much memory is currently allocated in the program. That’s
    how I got all the memory numbers in this post. Here’s the code:


    func memusage() {
        runtime.GC()
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("Alloc = %v MiB\n", m.Alloc/1024/1024)
        // write mem.prof
        f, err := os.Create("mem.prof")
        if err != nil {
            log.Fatal(err)
        }
        pprof.WriteHeapProfile(f)
        f.Close()
    }

    Also I learned that if you use pprof to analyze a heap profile there are two
    ways to analyze it: you can pass either --alloc_space or --inuse_space to
    go tool pprof. I don’t know how I didn’t realize this before but
    alloc_space will tell you about everything that was allocated, and
    inuse_space will just include memory that’s currently in use.


    Anyway I ran go tool pprof -pdf --inuse_space mem.prof > mem.pdf a lot. Also
    every time I use pprof I find myself referring to my own intro to pprof – it’s probably
    the blog post I wrote that I use the most often. I should add --alloc_space
    and --inuse_space to it.


    attempt 3: make my array use less memory


    I was storing my ip2asn entries like this:


    type IPRange struct {
        StartIP net.IP
        EndIP   net.IP
        Num     int
        Name    string
        Country string
    }

    I had 3 ideas for ways to improve this:



    1. There was a lot of repetition of Name and the Country, because a lot of IP ranges belong to the same ASN

    2. net.IP is a []byte under the hood, which felt like it involved an unnecessary pointer – was there a way to inline it into the struct?

    3. Maybe I didn’t need both the start IP and the end IP, often the ranges were consecutive so maybe I could rearrange things so that I only had the start IP


    idea 3.1: deduplicate the Name and Country


    I figured I could store the ASN info in an array, and then just store the index
    into the array in my IPRange struct. Here are the structs so you can see what
    I mean:


    type IPRange struct {
        StartIP netip.Addr
        EndIP   netip.Addr
        ASN     uint32
        Idx     uint32
    }

    type ASNInfo struct {
        Country string
        Name    string
    }

    type ASNPool struct {
        asns   []ASNInfo
        lookup map[ASNInfo]uint32
    }


    This worked! It brought memory usage from 117MB to 65MB – a 50MB savings. I felt good about this.


    Here’s all of the code for that part.
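The interning idea itself is tiny – here's the same trick as a standalone Python sketch (my own illustration, not a translation of the real code): each distinct (country, name) pair is stored once, and every range only holds a small integer index.

```python
class ASNPool:
    """Interns (country, name) pairs so each distinct pair is stored once."""
    def __init__(self):
        self.asns: list[tuple[str, str]] = []
        self.lookup: dict[tuple[str, str], int] = {}

    def add(self, country: str, name: str) -> int:
        key = (country, name)
        if key not in self.lookup:
            self.lookup[key] = len(self.asns)
            self.asns.append(key)
        return self.lookup[key]  # ranges store this small index

    def get(self, idx: int) -> tuple[str, str]:
        return self.asns[idx]

pool = ASNPool()
# many ranges belong to the same ASN, so they share one pool entry
a = pool.add("US", "GOOGLE")
b = pool.add("US", "GOOGLE")
c = pool.add("AU", "Unknown")
print(a, b, c, len(pool.asns))  # → 0 0 1 2
```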


    how big are ASNs?


    As an aside – I’m storing the ASN in a uint32, is that right? I looked in the ip2asn
    file and the biggest one seems to be 401307, though there are a few lines that
    say 4294901931, which is much bigger but still just inside the range of a
    uint32 (which maxes out at 4294967295). So I can definitely use a uint32.


    59.101.179.0	59.101.179.255	4294901931	Unknown	AS4294901931
    

    idea 3.2: use netip.Addr instead of net.IP


    It turns out that I’m not the only one who felt that net.IP was using an
    unnecessary amount of memory – in 2021 the folks at Tailscale released a new
    IP address library for Go which solves this and many other issues. They wrote a great blog post about it.


    I discovered (to my delight) that not only does this new IP address library exist and do exactly what I want, it’s also now in the Go
    standard library as netip.Addr. Switching to netip.Addr was
    very easy and saved another 20MB of memory, bringing us to 46MB.


    I didn’t try my third idea (remove the end IP from the struct) because I’d
    already been programming for long enough on a Saturday morning and I was happy
    with my progress.


    It’s always such a great feeling when I think “hey, I don’t like this, there
    must be a better way” and then immediately discover that someone has already
    made the exact thing I want, thought about it a lot more than me, and
    implemented it much better than I would have.


    all of this was messier in real life


    Even though I tried to explain this in a simple linear way “I tried X, then I
    tried Y, then I tried Z”, that’s kind of a lie – I always try to take my
    actual debugging process (total chaos) and make it seem more linear and
    understandable because the reality is just too annoying to write down. It’s
    more like:



    • try sqlite

    • try a trie

    • second guess everything that I concluded about sqlite, go back and look at
      the results again

    • wait what about indexes

    • very very belatedly realize that I can use runtime to check how much
      memory everything is using, start doing that

    • look at the trie again, maybe I misunderstood everything

    • give up and go back to binary search

    • look at all of the numbers for tries/sqlite again to make sure I didn’t misunderstand


    A note on using 512MB of memory


    Someone asked why I don’t just give the VM more memory. I could very easily
    afford to pay for a VM with 1GB of memory, but I feel like 512MB really
    should be enough (and really that 256MB should be enough!) so I’d rather stay
    inside that constraint. It’s kind of a fun puzzle.


    a few ideas from the replies


    Folks had a lot of good ideas I hadn’t thought of. Recording them as
    inspiration if I feel like having another Fun Performance Day at some point.



    • Try Go’s unique package for the ASNPool. Someone tried this and it uses more memory, probably because Go’s pointers are 64 bits

    • Try compiling with GOARCH=386 to use 32-bit pointers to save space (maybe in combination with using unique!)

    • It should be possible to store all of the IPv6 addresses in just 64 bits, because only the first 64 bits of the address are public

    • Interpolation search might be faster than binary search since IP addresses are numeric

    • Try the MaxMind db format with mmdbwriter or mmdbctl

    • Tailscale’s art routing table package
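As an illustration of the interpolation search idea from that list (a sketch under the assumption that range starts are spread roughly uniformly – this is not code from the post): instead of always probing the middle, probe proportionally to where the key falls between the endpoints.

```python
def find_range(starts: list[int], key: int) -> int:
    """Index of the last element of sorted `starts` that is <= key, or -1.
    Like binary search, but probes proportionally (interpolation search)."""
    if not starts or key < starts[0]:
        return -1
    lo, hi = 0, len(starts) - 1
    if key >= starts[hi]:
        return hi
    # invariant: starts[lo] <= key < starts[hi]
    while hi - lo > 1:
        span = starts[hi] - starts[lo]
        # guess where key sits between the endpoints, clamped inside (lo, hi)
        mid = lo + max(1, (key - starts[lo]) * (hi - lo) // span)
        mid = min(mid, hi - 1)
        if starts[mid] <= key:
            lo = mid
        else:
            hi = mid
    return lo

starts = [0, 10, 20, 30, 40]
print(find_range(starts, 25), find_range(starts, 5), find_range(starts, -1))  # → 2 0 -1
```

On uniformly distributed keys this is O(log log n) probes instead of O(log n); on skewed data it can degrade, which is why it's worth benchmarking before swapping it in.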


    the result: saved 70MB of memory!


    I deployed the new version and now Mess With DNS is using less memory! Hooray!


    A few other notes:



    • lookups are a little slower – in my microbenchmark they went from 9 million
      lookups/second to 6 million, maybe because I added a little indirection.
      Using less memory and a little more CPU seemed like a good tradeoff though.

    • it’s still using more memory than the raw text files do (46MB vs 37MB), I
      guess pointers take up space and that’s okay.


    I’m honestly not sure if this will solve all my memory problems, probably not!
    But I had fun, I learned a few things about SQLite, I still don’t know what to
    think about tries, and it made me love binary search even more than I already
    did.

    Some notes on upgrading Hugo


    Warning: this is a post about very boring yakshaving, probably only of interest
    to people who are trying to upgrade Hugo from a very old version to a new
    version. But what are blogs for if not documenting one’s very boring yakshaves
    from time to time?


    So yesterday I decided to try to upgrade Hugo. There’s no real reason to do
    this – I’ve been using Hugo version 0.40 to generate this blog since 2018, it
    works fine, and I don’t have any problems with it. But I thought – maybe it
    won’t be as hard as I think, and I kind of like a tedious computer task sometimes!


    I thought I’d document what I learned along the way in case it’s useful to
    anyone else doing this very specific migration. I upgraded from Hugo v0.40
    (from 2018) to v0.135 (from 2024).


    Here are most of the changes I had to make:


    change 1: template "theme/partials/thing.html" is now partial thing.html


    I had to replace a bunch of instances of {{ template "theme/partials/header.html" . }} with {{ partial "header.html" . }}.


    This happened in v0.42:



    We have now virtualized the filesystems for project and theme files. This
    makes everything simpler, faster and more powerful. But it also means that
    template lookups on the form {{ template “theme/partials/pagination.html” .
    }} will not work anymore. That syntax has never been documented, so it’s not
    expected to be in wide use.



    change 2: .Data.Pages is now site.RegularPages


    This seems to be discussed in the release notes for 0.57.2


    I just needed to replace .Data.Pages with site.RegularPages in the template on the homepage as well as in my RSS feed template.


    change 3: .Next and .Prev got flipped


    I had this comment in the part of my theme where I link to the next/previous blog post:



    “next” and “previous” in hugo apparently mean the opposite of what I’d think
    they’d mean intuitively. I’d expect “next” to mean “in the future” and
    “previous” to mean “in the past” but it’s the opposite



    It looks like they changed this in
    ad705aac064
    so that “next” actually is in the future and “prev” actually is in the past. I
    definitely find the new behaviour more intuitive.


    downloading the Hugo changelogs with a script


    Figuring out why/when all of these changes happened was a little difficult. I
    ended up hacking together a bash script to download all of the changelogs from github as text files, which I
    could then grep to try to figure out what happened. It turns out it’s pretty
    easy to get all of the changelogs from the GitHub API.


    So far everything was not so bad – there was also a change around taxonomies
    that I can’t quite explain, but it was all pretty manageable. Then we got
    to the really tough one: the markdown renderer.


    change 4: the markdown renderer (blackfriday -> goldmark)


    The blackfriday markdown renderer (which was previously the default) was removed in v0.100.0. This seems pretty reasonable:



    It has been deprecated for a long time, its v1 version is not maintained
    anymore, and there are many known issues. Goldmark should be a mature
    replacement by now.



    Fixing all my Markdown changes was a huge pain – I ended up having to update
    80 different Markdown files (out of 700) so that they would render properly, and I’m not totally sure it was worth it.


    why bother switching renderers?


    The obvious question here is – why bother even trying to upgrade Hugo at all
    if I have to switch Markdown renderers?
    My old site was running totally fine and I think it wasn’t necessarily a good
    use of time, but the one reason I think it might be useful in the future is
    that the new renderer (goldmark) uses the CommonMark markdown standard, which I’m hoping will be somewhat
    more futureproof. So maybe I won’t have to go through this again? We’ll see.


    Also it turned out that the new Goldmark renderer does fix some problems I had
    (but didn’t know that I had) with smart quotes and how lists/blockquotes
    interact.


    finding all the Markdown problems: the process


    The hard part of this Markdown change was even figuring out what changed.
    Almost all of the problems (including #2 and #3 above) just silently broke the
    site, they didn’t cause any errors or anything. So I had to diff the HTML to
    hunt them down.


    Here’s what I ended up doing:



    1. Generate the site with the old version, put it in public_old

    2. Generate the new version, put it in public

    3. Diff every single HTML file in public/ and public_old with this diff.sh script and put the results in a diffs/ folder

    4. Run variations on find diffs -type f | xargs cat | grep -C 5 '(31m|32m)' | less -r over and over again to look at every single change until I found something that seemed wrong

    5. Update the Markdown to fix the problem

    6. Repeat until everything seemed okay


    (the grep 31m|32m thing is searching for red/green text in the diff)
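Steps 1-4 of that loop can also be sketched in Python instead of shell (diff.sh and the directory names come from the post; the function names here are mine):

```python
import difflib
from pathlib import Path

def changed_files(old_dir: str, new_dir: str) -> list[str]:
    """Relative paths of HTML files that differ between two site builds."""
    old, new = Path(old_dir), Path(new_dir)
    changed = []
    for old_file in old.rglob("*.html"):
        rel = old_file.relative_to(old)
        new_file = new / rel
        if not new_file.exists() or old_file.read_text() != new_file.read_text():
            changed.append(str(rel))
    return sorted(changed)

def show_diff(old_file: str, new_file: str) -> str:
    """A unified diff of one page, for eyeballing what the renderer changed."""
    a = Path(old_file).read_text().splitlines()
    b = Path(new_file).read_text().splitlines()
    return "\n".join(difflib.unified_diff(a, b, old_file, new_file, lineterm=""))
```

Something like `for f in changed_files("public_old", "public"): print(show_diff(...))` gives the same red/green eyeballing loop without the grep incantation.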


    This was very time consuming but it was a little bit fun for some reason so I
    kept doing it until it seemed like nothing too horrible was left.


    the new markdown rules


    Here’s a list of every type of Markdown change I had to make. It’s very
    possible these are all extremely specific to me but it took me a long time to
    figure them all out so maybe this will be helpful to one other person who finds
    this in the future.


    4.1: mixing HTML and markdown


    This doesn’t work anymore (it doesn’t expand the link):


    <small>
    [a link](https://example.com)
    </small>

    I need to do this instead:


    <small>

    [a link](https://example.com)

    </small>


    This works too:


    <small> [a link](https://example.com) </small>
    

    4.2: << is changed into «


    I didn’t want this so I needed to configure:


    markup:
      goldmark:
        extensions:
          typographer:
            leftAngleQuote: '&lt;&lt;'
            rightAngleQuote: '&gt;&gt;'

    4.3: nested lists sometimes need 4 space indents


    This doesn’t render as a nested list anymore if I only indent by 2 spaces, I need to put 4 spaces.


    1. a
        * b
        * c
    2. b

    The problem is that the amount of indent needed depends on the size of the list
    markers. Here’s a reference in CommonMark for this.


    4.4: blockquotes inside lists work better


    Previously the > quote here didn’t render as a blockquote, and with the new renderer it does.


    * something
      > quote
    * something else

    I found a bunch of Markdown that had been kind of broken (which I hadn’t
    noticed) that works better with the new renderer, and this is an example of
    that.


    Lists inside blockquotes also seem to work better.


    4.5: headings inside lists


    Previously this didn’t render as a heading, but now it does. So I needed to
    replace the # with &num;.


    * # passengers: 20
    

    4.6: + or 1) at the beginning of the line makes it a list


    I had something which looked like this:


    `1 / (1
    + exp(-1)) = 0.73`

    With Blackfriday it rendered like this:


    <p><code>1 / (1
    + exp(-1)) = 0.73</code></p>

    and with Goldmark it rendered like this:


    <p>`1 / (1</p>
    <ul>
    <li>exp(-1)) = 0.73`</li>
    </ul>

    Same thing if there was an accidental 1) at the beginning of a line, like in this Markdown snippet:


    I set up a small Hadoop cluster (1 master, 2 workers, replication set to
    1) on

    To fix this I just had to rewrap the line so that the + wasn’t the first character.


    The Markdown is formatted this way because I wrap my Markdown to 80 characters
    a lot and the wrapping isn’t very context sensitive.


    4.7: no more smart quotes in code blocks


    There were a bunch of places where the old renderer (Blackfriday) was doing
    unwanted things in code blocks, like replacing ... with … or replacing
    straight quotes with smart quotes. I hadn’t realized this was happening and
    I was very happy to have it fixed.


    4.8: better quote management


    The way this gets rendered got better:


    "Oh, *interesting*!"
    


    • old: “Oh, interesting!“

    • new: “Oh, interesting!”


    Before there were two left smart quotes, now the quotes match.


    4.9: images are no longer wrapped in a p tag


    Previously if I had an image like this:


    <img src="https://jvns.ca/images/rustboot1.png">
    

    it would get wrapped in a <p> tag, now it doesn’t anymore. I dealt with this
    just by adding a margin-bottom: 0.75em to images in the CSS, hopefully
    that’ll make them display well enough.


    4.10: <br> is now wrapped in a p tag


    Previously this wouldn’t get wrapped in a p tag, but now it seems to:


    <br><br>
    

    I just gave up on fixing this though and resigned myself to maybe having some
    extra space in some cases. Maybe I’ll try to fix it later if I feel like
    another yakshave.


    4.11: some more goldmark settings


    I also needed to



    • turn off code highlighting (because it wasn’t working properly and I didn’t have it before anyway)

    • use the old “blackfriday” method to generate heading IDs so they didn’t change

    • allow raw HTML in my markdown


    Here’s what I needed to add to my config.yaml to do all that:


    markup:
      highlight:
        codeFences: false
      goldmark:
        renderer:
          unsafe: true
        parser:
          autoHeadingIDType: blackfriday

    Maybe I’ll try to get syntax highlighting working one day, who knows. I might
    prefer having it off though.


    a little script to compare blackfriday and goldmark


    I also wrote a little program to compare the Blackfriday and Goldmark output
    for various markdown snippets, here it is in a gist.


    It’s not really configured the exact same way Blackfriday and Goldmark were in
    my Hugo versions, but it was still helpful to have to help me understand what
    was going on.


    a quick note on maintaining themes


    My approach to themes in Hugo has been:



    1. pay someone to make a nice design for the site (for example wizardzines.com was designed by Melody Starling)

    2. use a totally custom theme

    3. commit that theme to the same Github repo as the site


    So I just need to edit the theme files to fix any problems. Also I wrote a lot
    of the theme myself so I’m pretty familiar with how it works.


    Relying on someone else to keep a theme updated feels kind of scary to me, I
    think if I were using a third-party theme I’d just copy the code into my site’s
    github repo and then maintain it myself.


    which static site generators have better backwards compatibility?


    I asked on Mastodon if
    anyone had used a static site generator with good backwards compatibility.


    The main answers seemed to be Jekyll and 11ty. Several people said they’d been
    using Jekyll for 10 years without any issues, and 11ty says it has
    stability as a core goal.


    I think a big factor in how appealing Jekyll/11ty are is how easy it is for you
    to maintain a working Ruby / Node environment on your computer: part of the
    reason I stopped using Jekyll was that I got tired of having to maintain a
    working Ruby installation. But I imagine this wouldn’t be a problem for a Ruby
    or Node developer.


    Several people said that they don’t build their Jekyll site locally at all –
    they just use GitHub Pages to build it.


    that’s it!


    Overall I’ve been happy with Hugo – I started using it because it had fast
    build times and it was a static binary, and both of those things are still
    extremely useful to me. I might have spent 10 hours on this upgrade, but I’ve
    probably spent 1000+ hours writing blog posts without thinking about Hugo at
    all so that seems like an extremely reasonable ratio.


    I find it hard to be too mad about the backwards incompatible changes, most of
    them were quite a long time ago, Hugo does a great job of making their old
    releases available so you can use the old release if you want, and the most
    difficult one is removing support for the blackfriday Markdown renderer in
    favour of using something CommonMark-compliant which seems pretty reasonable to
    me even if it is a huge pain.


    But it did take a long time and I don’t think I’d particularly recommend moving
    700 blog posts to a new Markdown renderer unless you’re really in the mood for
    a lot of computer suffering for some reason.


    The new renderer did fix a bunch of problems so I think overall it might be a
    good thing, even if I’ll have to remember to make 2 changes to how I write
    Markdown (4.1 and 4.3).


    Also I’m still using Hugo 0.54 for https://wizardzines.com so maybe these notes
    will be useful to Future Me if I ever feel like upgrading Hugo for that site.


    Hopefully I didn’t break too many things on the blog by doing this, let me know
    if you see anything broken!

    Terminal colours are tricky


    Yesterday I was thinking about how long it took me to get a colorscheme in my
    terminal that I was mostly happy with (SO MANY YEARS), and it made me wonder
    what about terminal colours made it so hard.


    So I asked people on Mastodon what problems
    they’ve run into with colours in the terminal, and I got a ton of interesting
    responses! Let’s talk about some of the problems and a few possible ways to fix
    them.


    problem 1: blue on black


    One of the top complaints was “blue on black is hard to read”. Here’s an
    example of that: if I open Terminal.app, set the background to black, and run
    ls, the directories are displayed in a blue that isn’t that easy to read:



    To understand why we’re seeing this blue, let’s talk about ANSI colours!


    the 16 ANSI colours


    Your terminal has 16 numbered colours – black, red, green, yellow, blue,
    magenta, cyan, white, and a “bright” version of each of those.


    Programs can use them by printing out an “ANSI escape code” – for example if
    you want to see each of the 16 colours in your terminal, you can run this
    Python program:


    def color(num, text):
        return f"\033[38;5;{num}m{text}\033[0m"

    for i in range(16):
        print(color(i, f"number {i:02}"))


    what are the ANSI colours?


    This made me wonder – if blue is colour number 4, who decides what hex color
    that should correspond to?


    The answer seems to be “there’s no standard, terminal emulators just choose
    colours and it’s not very consistent”. Here’s a screenshot of a table from Wikipedia, where you
    can see that there’s a lot of variation:



    problem 1.5: bright yellow on white


    Bright yellow on white is even worse than blue on black, here’s what I get in
    a terminal with the default settings:



    That’s almost impossible to read (and some other colours like light green cause
    similar issues), so let’s talk about solutions!


    two ways to reconfigure your colours


    If you’re annoyed by these colour contrast issues (or maybe you just think the
    default ANSI colours are ugly), you might think – well, I’ll just choose a
    different “blue” and pick something I like better!


    There are two ways you can do this:


    Way 1: Configure your terminal emulator: I think most modern terminal emulators
    have a way to reconfigure the colours, and some of them even come with some
    preinstalled themes that you might like better than the defaults.


    Way 2: Run a shell script: There are ANSI escape codes that you can print
    out to tell your terminal emulator to reconfigure its colours. Here’s a shell script that does that,
    from the base16-shell project.
    You can see that it has a few different conventions for changing the colours –
    I guess different terminal emulators have different escape codes for changing
    their colour palette, and so the script is trying to pick the right style of
    escape code based on the TERM environment variable.


    what are the pros and cons of the 2 ways of configuring your colours?


    I prefer to use the “shell script” method, because:



    • if I switch terminal emulators for some reason, I don’t need to learn a different configuration system, my colours still Just Work

    • I use base16-shell with base16-vim to make my vim colours match my terminal colours, which is convenient


    some advantages of configuring colours in your terminal emulator:



    • if you use a popular terminal emulator, there are probably a lot more nice terminal themes out there that you can choose from

    • not all terminal emulators support the “shell script method”, and even if
      they do, the results can be a little inconsistent


    This is what my shell has looked like for probably the last 5 years (using the
    solarized light base16 theme), and I’m pretty happy with it. Here’s htop:



    Okay, so let’s say you’ve found a terminal colorscheme that you like. What else
    can go wrong?


    problem 2: programs using 256 colours


    Here’s what some output of fd, a find alternative, looks like in my
    colorscheme:



    The contrast is pretty bad here, and I definitely don’t have that lime green in
    my normal colorscheme. What’s going on?


    We can see what color codes fd is using by using the unbuffer program to
    capture its output, colour codes included:


    $ unbuffer fd . > out
    $ vim out
    ^[[38;5;48mbad-again.sh^[[0m
    ^[[38;5;48mbad.sh^[[0m
    ^[[38;5;48mbetter.sh^[[0m
    out

    ^[[38;5;48 means “set the foreground color to color 48”. Terminals don’t
    only have 16 colours – many terminals these days actually have 3 ways of
    specifying colours:



    1. the 16 ANSI colours we already talked about

    2. an extended set of 256 colours

    3. a further extended set of 24-bit hex colours, like #ffea03
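
    As a quick illustration, here's how you'd build all three kinds of escape
    code in Python (the fg_* helper names are made up; whether the 24-bit one
    renders correctly depends on your terminal):

```python
ESC = "\033"

def fg_ansi(n, text):
    # colours 0-7 use SGR codes 30-37, "bright" 8-15 use 90-97
    code = 30 + n if n < 8 else 82 + n
    return f"{ESC}[{code}m{text}{ESC}[0m"

def fg_256(n, text):
    # the extended 256-colour palette
    return f"{ESC}[38;5;{n}m{text}{ESC}[0m"

def fg_rgb(r, g, b, text):
    # 24-bit "truecolor"
    return f"{ESC}[38;2;{r};{g};{b}m{text}{ESC}[0m"

print(fg_ansi(4, "ANSI blue"))
print(fg_256(48, "the green fd was using"))
print(fg_rgb(0xff, 0xea, 0x03, "#ffea03"))
```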


    So fd is using one of the colours from the extended 256-color set. bat (a
    cat alternative) does something similar – here’s what it looks like by
    default in my terminal.



    This looks fine though and it really seems like it’s trying to work well with a
    variety of terminal themes.


    some newer tools seem to have theme support


    I think it’s interesting that some of these newer terminal tools (fd, bat,
    delta, and probably more) have support for arbitrary custom themes. I guess
    the downside of this approach is that the default theme might clash with your
    terminal’s background, but the upside is that it gives you a lot more control
    over theming the tool’s output than just choosing 16 ANSI colours.


    I don’t really use bat, but if I did I’d probably use bat --theme ansi to
    just use the ANSI colours that I have set in my normal terminal colorscheme.


    problem 3: the grays in Solarized


    A bunch of people on Mastodon mentioned a specific issue with grays in the
    Solarized theme: when I list a directory, the base16 Solarized Light theme
    looks like this:



    but iTerm’s default Solarized Light theme looks like this:



    This is because in the iTerm theme (which is the original Solarized design), colors 9-14 (the “bright blue”, “bright
    red”, etc) are mapped to a series of grays, and when I run ls, it’s trying to
    use those “bright” colours to color my directories and executables.


    My best guess for why the original Solarized theme is designed this way is to
    make the grays available to the vim Solarized colorscheme.


    I’m pretty sure I prefer the modified base16 version I use where the “bright”
    colours are actually colours instead of all being shades of gray though. (I
    didn’t actually realize the version I was using wasn’t the “original” Solarized
    theme until I wrote this post)


    In any case I really love Solarized and I’m very happy it exists so that I can
    use a modified version of it.


    problem 4: a vim theme that doesn’t match the terminal background


    If my vim theme has a different background colour than my terminal theme, I
    get this ugly border, like this:



    This one is a pretty minor issue though and I think making your terminal
    background match your vim background is pretty straightforward.


    problem 5: programs setting a background color


    A few people mentioned problems with terminal applications setting an
    unwanted background colour, so let’s look at an example of that.


    Here ngrok has set the background to color #16 (“black”), but the
    base16-shell script I use sets color 16 to be bright orange, so I get this,
    which is pretty bad:



    I think the intention is for ngrok to look something like this:



    I think base16-shell sets color #16 to orange (instead of black)
    so that it can provide extra colours for use by base16-vim.
    This feels reasonable to me – I use base16-vim in the terminal, so I guess I’m
    using that feature and it’s probably more important to me than ngrok (which I
    rarely use) behaving a bit weirdly.


    This particular issue is a maybe obscure clash between ngrok and my colorscheme,
    but I think this kind of clash is pretty common when a program sets an ANSI
    background color that the user has remapped for some reason.


    a nice solution to contrast issues: “minimum contrast”


    A bunch of terminals (iTerm2, tabby, kitty’s text_fg_override_threshold, and
    folks tell me also Ghostty and Windows Terminal) have a “minimum
    contrast” feature that will automatically adjust colours to make sure they have enough contrast.
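
    I don't know exactly which formula each terminal uses, but features like
    this are typically built on a contrast measurement like WCAG's contrast
    ratio – if the ratio between the foreground and background is below a
    threshold, the terminal nudges the foreground colour. Here's a sketch of
    the measurement part (the helper names are mine):

```python
def rel_luminance(rgb):
    """WCAG relative luminance of an sRGB colour given as 0-255 ints."""
    def chan(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: 1 (identical colours) up to ~21 (black on white)."""
    lighter, darker = sorted((rel_luminance(fg), rel_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((0, 0, 255), (0, 0, 0)))      # pure blue on black: low
print(contrast_ratio((255, 255, 255), (0, 0, 0)))  # white on black: the maximum, ~21
```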


    Here’s an example from iTerm. This ngrok accident from before has pretty bad
    contrast, I find it pretty difficult to read:



    With “minimum contrast” set to 40 in iTerm, it looks like this instead:



    I didn’t have minimum contrast turned on before but I just turned it on today
    because it makes such a big difference when something goes wrong with colours
    in the terminal.


    problem 6: TERM being set to the wrong thing


    A few people mentioned that they’ll SSH into a system that doesn’t support the
    TERM environment variable that they have set locally, and then the colours
    won’t work.


    I think the way TERM works is that systems have a terminfo database, so if
    the value of the TERM environment variable isn’t in the system’s terminfo
    database, then it won’t know how to output colours for that terminal. I don’t
    know too much about terminfo, but someone linked me to this terminfo rant that talks about a few other
    issues with terminfo.


    I don’t have a system on hand to reproduce this one so I can’t say for sure how
    to fix it, but this stackoverflow question
    suggests running something like TERM=xterm ssh instead of ssh.


    problem 7: picking “good” colours is hard


    A couple of problems people mentioned with designing / finding terminal colorschemes:



    • some folks are colorblind and have trouble finding an appropriate colorscheme

    • accidentally making the background color too close to the cursor or selection color, so they’re hard to find

    • generally finding colours that work with every program is a struggle (for example you can see me having a problem with this with ngrok above!)


    problem 8: making nethack/mc look right


    Another problem people mentioned is using a program like nethack or midnight
    commander which you might expect to have a specific colourscheme based on the
    default ANSI terminal colours.


    For example, midnight commander has a really specific classic look:



    But in my Solarized theme, midnight commander looks like this:



    The Solarized version feels like it could be disorienting if you’re
    very used to the “classic” look.


    One solution Simon Tatham mentioned to this is using some palette customization
    ANSI codes (like the ones base16 uses that I talked about earlier) to change
    the color palette right before starting the program, for example remapping
    yellow to a brighter yellow before starting Nethack so that the yellow
    characters look better.
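
    The palette customization codes in question are xterm's OSC 4 sequences
    (the same kind of escape base16-shell prints on most terminals). Here's a
    sketch of building one in Python (set_palette_color is a made-up name):

```python
def set_palette_color(index, hexcolor):
    """Build an OSC 4 escape that remaps palette entry `index` to the
    6-digit hex colour `hexcolor`, terminated with ST (ESC \\)."""
    r, g, b = hexcolor[0:2], hexcolor[2:4], hexcolor[4:6]
    return f"\033]4;{index};rgb:{r}/{g}/{b}\033\\"

# e.g. remap ANSI yellow (colour 3) to a brighter yellow before
# starting nethack:
#   sys.stdout.write(set_palette_color(3, "ffea00"))
```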


    problem 9: commands disabling colours when writing to a pipe


    If I run fd | less, I see something like this, with the colours disabled.



    In general I find this useful – if I pipe a command to grep, I don’t want it
    to print out all those color escape codes, I just want the plain text. But what if you want to see the colours?


    To see the colours, you can run unbuffer fd | less -r! I just learned about
    unbuffer recently and I think it’s really cool: unbuffer runs the command
    with a pseudo-TTY to write to, so that the command thinks it’s writing to a
    TTY. It also fixes issues with programs buffering their output when writing
    to a pipe, which is why it’s called unbuffer.


    Here’s what the output of unbuffer fd | less -r looks like for me:



    Also some commands (including fd) support a --color=always flag which will
    force them to always print out the colours.
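
    The way commands usually decide this is just an isatty() check on stdout –
    unbuffer works because the pseudo-TTY it creates makes that check pass.
    Here's a tiny Python sketch of the convention (maybe_color is a made-up
    helper):

```python
import sys

def maybe_color(text, stream=None, code=34):
    """Wrap text in a colour escape only when writing to a terminal,
    mirroring what ls/fd do by default."""
    stream = stream or sys.stdout
    if stream.isatty():
        return f"\033[{code}m{text}\033[0m"
    return text

print(maybe_color("README.md"))  # coloured in a terminal, plain in a pipe
```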


    problem 10: unwanted colour in ls and other commands


    Some people mentioned that they don’t want ls to use colour at all, perhaps
    because ls uses blue, it’s hard to read on black, and maybe they don’t feel like
    customizing their terminal’s colourscheme to make the blue more readable or
    just don’t find the use of colour helpful.


    Some possible solutions to this one:



    • you can run ls --color=never, which is probably easiest

    • you can also set LS_COLORS to customize the colours used by ls. I think some other programs other than ls support the LS_COLORS environment variable too.

    • also some programs support setting NO_COLOR=true (there’s a list here)


    Here’s an example of running LS_COLORS="fi=0:di=0:ln=0:pi=0:so=0:bd=0:cd=0:or=0:ex=0" ls:



    problem 11: the colours in vim


    I used to have a lot of problems with configuring my colours in vim – I’d set
    up my terminal colours in a way that I thought was okay, and then I’d start vim
    and it would just be a disaster.


    I think what was going on here is that today, there are two ways to set up a vim colorscheme in the terminal:



    1. using your ANSI terminal colours – you tell vim which ANSI colour number to use for the background, for functions, etc.

    2. using 24-bit hex colours – instead of ANSI terminal colours, the vim colorscheme can use hex codes like #faea99 directly


    20 years ago when I started using vim, terminals with 24-bit hex color support
    were a lot less common (or maybe they didn’t exist at all), and vim certainly
    didn’t have support for using 24-bit colour in the terminal. From some quick
    searching through git, it looks like vim added support for 24-bit colour in 2016
    – just 8 years ago!


    So to get colours to work properly in vim before 2016, you needed to synchronize
    your terminal colorscheme and your vim colorscheme. Here’s what that looked like,
    the colorscheme needed to map the vim color classes like cterm05 to ANSI colour numbers.


    But in 2024, the story is really different! Vim (and Neovim, which I use now)
    support 24-bit colours, and as of Neovim 0.10 (released in May 2024), the
    termguicolors setting (which tells Vim to use 24-bit hex colours for
    colorschemes) is turned on by default in any terminal with 24-bit
    color support.


    So this “you need to synchronize your terminal colorscheme and your vim
    colorscheme” problem is not an issue anymore for me in 2024, since I
    don’t plan to use terminals without 24-bit color support in the future.


    The biggest consequence for me of this whole thing is that I don’t need base16
    to set colors 16-21 to weird stuff anymore to integrate with vim – I can just
    use a terminal theme and a vim theme, and as long as the two themes use similar
    colours (so it’s not jarring for me to switch between them) there’s no problem.
    I think I can just remove those parts from my base16 shell script and totally
    avoid the problem with ngrok and the weird orange background I talked about
    above.


    some more problems I left out


    I think there are a lot of issues around the intersection of multiple programs,
    like using some combination tmux/ssh/vim that I couldn’t figure out how to
    reproduce well enough to talk about them. Also I’m sure I missed a lot of other
    things too.


    base16 has really worked for me


    I’ve personally had a lot of success with using
    base16-shell with
    base16-vim – I just need to add a couple of lines to my
    fish config to set it up (+ a few .vimrc lines) and then I can move on and
    accept any remaining problems that it doesn’t solve.


    I don’t think base16 is for everyone though, some limitations I’m aware
    of with base16 that might make it not work for you:



    • it comes with a limited set of builtin themes and you might not like any of them

    • the Solarized base16 theme (and maybe all of the themes?) sets the “bright”
      ANSI colours to be exactly the same as the normal colours, which might cause
      a problem if you’re relying on the “bright” colours to be different from the
      regular ones

    • it sets colours 16-21 in order to give the vim colorschemes from base16-vim
      access to more colours, which might not be relevant if you always use a
      terminal with 24-bit color support, and can cause problems like the ngrok
      issue above

    • also the way it sets colours 16-21 could be a problem in terminals that don’t
      have 256-color support, like the linux framebuffer terminal


    Apparently there’s a community fork of base16 called
    tinted-theming, which I haven’t
    looked into much yet.


    some other colorscheme tools


    Just one so far but I’ll link more if people tell me about them:



    okay, that was a lot


    We talked about a lot in this post and while I think learning about all these
    details is kind of fun if I’m in the mood to do a deep dive, I find it SO
    FRUSTRATING to deal with it when I just want my colours to work! Being
    surprised by unreadable text and having to find a workaround is just not my
    idea of a good day.


    Personally I’m a zero-configuration kind of person and it’s not that appealing
    to me to have to put together a lot of custom configuration just to make my
    colours in the terminal look acceptable. I’d much rather just have some
    reasonable defaults that I don’t have to change.


    minimum contrast seems like an amazing feature


    My one big takeaway from writing this was to turn on “minimum contrast” in my
    terminal, I think it’s going to fix most of the occasional accidental
    unreadable text issues I run into and I’m pretty excited about it.

    Some Go web dev notes


    I spent a lot of time in the past couple of weeks working on a website in Go
    that may or may not ever see the light of day, but I learned a couple of things
    along the way I wanted to write down. Here they are:


    go 1.22 now has better routing


    I’ve never felt motivated to learn any of the Go routing libraries
    (gorilla/mux, chi, etc), so I’ve been doing all my routing by hand, like this:


    // DELETE /records:
    case r.Method == "DELETE" && n == 1 && p[0] == "records":
        if !requireLogin(username, r.URL.Path, r, w) {
            return
        }
        deleteAllRecords(ctx, username, rs, w, r)
    // POST /records/<ID>
    case r.Method == "POST" && n == 2 && p[0] == "records" && len(p[1]) > 0:
        if !requireLogin(username, r.URL.Path, r, w) {
            return
        }
        updateRecord(ctx, username, p[1], rs, w, r)


    But apparently as of Go 1.22, Go
    now has better support for routing in the standard library, so that code can be
    rewritten something like this:


    mux.HandleFunc("DELETE /records/", app.deleteAllRecords)
    mux.HandleFunc("POST /records/{record_id}", app.updateRecord)

    Though it would also need to handle logins, so maybe something more like
    this, with a requireLogin middleware.


    mux.Handle("DELETE /records/", requireLogin(http.HandlerFunc(app.deleteAllRecords)))

    a gotcha with the built-in router: redirects with trailing slashes


    One annoying gotcha I ran into was: if I make a route for /records/, then a
    request for /records will be redirected to /records/.


    I ran into an issue with this where sending a POST request to /records
    redirected to a GET request for /records/, which broke the POST request
    because it removed the request body. Thankfully Xe Iaso wrote a blog post about the exact same issue which made it
    easier to debug.


    I think the solution to this is just to use API endpoints like POST /records
    instead of POST /records/, which seems like a more normal design anyway.


    sqlc automatically generates code for my db queries


    I got a little bit tired of writing so much boilerplate for my SQL queries, but
    I didn’t really feel like learning an ORM, because I know what SQL queries I
    want to write, and I didn’t feel like learning the ORM’s conventions for
    translating things into SQL queries.


    But then I found sqlc, which will compile a query like this:


    
    
    -- name: GetVariant :one
    SELECT *
    FROM variants
    WHERE id = ?;


    into Go code like this:


    const getVariant = `-- name: GetVariant :one
    SELECT id, created_at, updated_at, disabled, product_name, variant_name
    FROM variants
    WHERE id = ?
    `

    func (q *Queries) GetVariant(ctx context.Context, id int64) (Variant, error) {
        row := q.db.QueryRowContext(ctx, getVariant, id)
        var i Variant
        err := row.Scan(
            &i.ID,
            &i.CreatedAt,
            &i.UpdatedAt,
            &i.Disabled,
            &i.ProductName,
            &i.VariantName,
        )
        return i, err
    }


    What I like about this is that if I’m ever unsure about what Go code to write
    for a given SQL query, I can just write the query I want, read the generated
    function and it’ll tell me exactly what to do to call it. It feels much easier
    to me than trying to dig through the ORM’s documentation to figure out how to
    construct the SQL query I want.


    Reading Brandur’s sqlc notes from 2024 also gave me some confidence
    that this is a workable path for my tiny programs. That post gives a really
    helpful example of how to conditionally update fields in a table using CASE
    statements (for example if you have a table with 20 columns and you only want
    to update 3 of them).


    sqlite tips


    Someone on Mastodon linked me to this post called Optimizing sqlite for servers. My projects are small and I’m
    not so concerned about performance, but my main takeaways were:



    • have a dedicated object for writing to the database, and run
      db.SetMaxOpenConns(1) on it. I learned the hard way that if I don’t do this
      then I’ll get SQLITE_BUSY errors from two threads trying to write to the db
      at the same time.

    • if I want to make reads faster, I could have 2 separate db objects, one for writing and one for reading


    There are more tips in that post that seem useful (like “COUNT queries are
    slow” and “Use STRICT tables”), but I haven’t done those yet.


    Also sometimes if I have two tables where I know I’ll never need to do a JOIN
    between them, I’ll just put them in separate databases so that I can connect
    to them independently.


    Go 1.19 introduced a way to set a GC memory limit


    I run all of my Go projects in VMs with relatively little memory, like 256MB or
    512MB. I ran into an issue where my application kept getting OOM killed and it
    was confusing – did I have a memory leak? What?


    After some Googling, I realized that maybe I didn’t have a memory leak, maybe I
    just needed to reconfigure the garbage collector! It turns out that by default (according to A Guide to the Go Garbage Collector), Go’s garbage collector will
    let the application allocate memory up to 2x the current heap size.


    Mess With DNS’s base heap size is around 170MB and
    the amount of memory free on the VM is around 160MB right now, so if its memory
    doubles, it’ll get OOM killed.


    In Go 1.19, they added a way to tell Go “hey, if the application starts using
    this much memory, run a GC”. So I set the GC memory limit to 250MB and it seems
    to have resulted in the application getting OOM killed less often:


    export GOMEMLIMIT=250MiB
    

    some reasons I like making websites in Go


    I’ve been making tiny websites (like the nginx playground) in Go on and off for the last 4 years or so and it’s really been working for me. I think I like it because:



    • there’s just 1 static binary, all I need to do to deploy it is copy the binary. If there are static files I can just embed them in the binary with embed.

    • there’s a built-in webserver that’s okay to use in production, so I don’t need to configure WSGI or whatever to get it to work. I can just put it behind Caddy or run it on fly.io or whatever.

    • Go’s toolchain is very easy to install, I can just do apt-get install golang-go or whatever and then a go build will build my project

    • it feels like there’s very little to remember to start sending HTTP responses
      – basically all there is are functions like Serve(w http.ResponseWriter, r *http.Request) which read the request and send a response. If I need to
      remember some detail of how exactly that’s accomplished, I just have to read
      the function!

    • also net/http is in the standard library, so you can start making websites
      without installing any libraries at all. I really appreciate this one.

    • Go is a pretty systems-y language, so if I need to run an ioctl or
      something that’s easy to do


    In general everything about it feels like it makes projects easy to work on for
    5 days, abandon for 2 years, and then get back into writing code without a lot
    of problems.


    For contrast, I’ve tried to learn Rails a couple of times and I really want
    to love Rails – I’ve made a couple of toy websites in Rails and it’s always
    felt like a really magical experience. But ultimately when I come back to those
    projects I can’t remember how anything works and I just end up giving up. It
    feels easier to me to come back to my Go projects that are full of a lot of
    repetitive boilerplate, because at least I can read the code and figure out how
    it works.


    things I haven’t figured out yet


    some things I haven’t done much of yet in Go:



    • rendering HTML templates: usually my Go servers are just APIs and I make the
      frontend a single-page app with Vue. I’ve used html/template a lot in Hugo (which I’ve used for this blog for the last 8 years)
      but I’m still not sure how I feel about it.

    • I’ve never made a real login system, usually my servers don’t have users at all.

    • I’ve never tried to implement CSRF


    In general I’m not sure how to implement security-sensitive features so I don’t
    start projects which need login/CSRF/etc. I imagine this is where a framework
    would help with.
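    For what it’s worth, the core of html/template is pretty small: parse a
    template, execute it with some data, and it escapes values by default, which
    matters for exactly the kind of security-sensitive features mentioned above.
    A sketch (render is a made-up helper, not a standard function):

```go
package main

import (
	"fmt"
	"html/template"
	"strings"
)

// render executes a template string with some data and returns the result.
func render(tmpl string, data any) string {
	t := template.Must(template.New("page").Parse(tmpl))
	var b strings.Builder
	t.Execute(&b, data)
	return b.String()
}

func main() {
	// html/template escapes values by default, so user-supplied data
	// can't inject HTML into the page.
	fmt.Println(render("<h1>{{.}}</h1>", "hello <script>alert(1)</script>"))
}
```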


    it’s cool to see the new features Go has been adding


    Both of the Go features I mentioned in this post (GOMEMLIMIT and the routing)
    are new in the last couple of years and I didn’t notice when they came out. It
    makes me think I should pay closer attention to the release notes for new Go
    versions.

    Reasons I still love the fish shell


    I wrote about how much I love fish in this blog post from 2017 and, 7 years
    of using it every day later, I’ve found even more reasons to love it. So I
    thought I’d write a new post with both the old reasons I loved it and some
    new ones.


    This came up today because I was trying to figure out why my terminal doesn’t
    break anymore when I cat a binary to my terminal, the answer was “fish fixes
    the terminal!”, and I just thought that was really nice.


    1. no configuration


    In 10 years of using fish I have never found a single thing I wanted to configure. It just works the way I want. My fish config file just has:



    • environment variables

    • aliases (alias ls eza, alias vim nvim, etc)

    • the occasional direnv hook fish | source to integrate a tool like direnv

    • a script I run to set up my terminal colours


    I’ve been told that configuring things in fish is really easy if you ever do
    want to configure something though.


    2. autosuggestions from my shell history


    My absolute favourite thing about fish is that as I type, it’ll automatically
    suggest (in light grey) a matching command that I ran recently. I can press the
    right arrow key to accept the completion, or keep typing to ignore it.


    Here’s what that looks like. In this example I just typed the “v” key and it
    guessed that I want to run the previous vim command again.



    2.5 “smart” shell autosuggestions


    One of my favourite subtle autocomplete features is how fish handles autocompleting commands that contain paths in them. For example, if I run:


    $ ls blah.txt
    

    that command will only be autocompleted in directories that contain blah.txt – it won’t show up in a different directory. (here’s a short comment about how it works)


    As an example, if in this directory I type bash scripts/, it’ll only suggest
    history commands including files that actually exist in my blog’s scripts
    folder, and not the dozens of other irrelevant scripts/ commands I’ve run in
    other folders.


    I didn’t understand exactly how this worked until last week, it just felt like fish was
    magically able to suggest the right commands. It still feels a little like magic and I love it.


    3. pasting multiline commands


    If I copy and paste multiple lines, bash will run them all, like this:


    [bork@grapefruit linux-playground (main)]$ echo hi
    hi
    [bork@grapefruit linux-playground (main)]$ touch blah
    [bork@grapefruit linux-playground (main)]$ echo hi
    hi

    This is a bit alarming – what if I didn’t actually want to run all those
    commands?


    Fish will paste them all at a single prompt, so that I can press Enter if I
    actually want to run them. Much less scary.


    bork@grapefruit ~/work/> echo hi
    touch blah
    echo hi


    4. nice tab completion


    If I run ls and press tab, it’ll display all the filenames in a nice grid. I can use either Tab, Shift+Tab, or the arrow keys to navigate the grid.


    Also, I can tab complete from the middle of a filename – if the filename
    starts with a weird character (or if it’s just not very unique), I can type
    some characters from the middle and press tab.


    Here’s what the tab completion looks like:


    bork@grapefruit ~/work/> ls
    api/    blah.py       fly.toml    README.md
    blah    Dockerfile    frontend/   test_websocket.sh

    I honestly don’t complete things other than filenames very much so I can’t
    speak to that, but I’ve found the experience of tab completing filenames to be
    very good.


    5. nice default prompt (including git integration)


    Fish’s default prompt includes everything I want:



    • username

    • hostname

    • current folder

    • git integration

    • status of last command exit (if the last command failed)


    Here’s a screenshot with a few different variations on the default prompt,
    including if the last command was interrupted (the SIGINT) or failed.



    6. nice history defaults


    In bash, the maximum history size is 500 by default, presumably because
    computers used to be slow and not have a lot of disk space. Also, by default,
    commands don’t get added to your history until you end your session. So if your
    computer crashes, you lose some history.


    In fish:



    1. the default history size is 256,000 commands. I don’t see any reason I’d ever need more.

    2. if you open a new tab, everything you’ve ever run (including commands in
      open sessions) is immediately available to you

    3. in an existing session, the history search will only include commands from
      the current session, plus everything that was in history at the time that
      you started the shell


    I’m not sure how clearly I’m explaining how fish’s history system works here,
    but it feels really good to me in practice. My impression is that the way it’s
    implemented is the commands are continually added to the history file, but fish
    only loads the history file once, on startup.


    I’ll mention here that if you want to have a fancier history system in another
    shell it might be worth checking out atuin or fzf.


    7. press up arrow to search history


    I also like fish’s interface for searching history: for example if I want to
    edit my fish config file, I can just type:


    $ config.fish
    

    and then press the up arrow to go back to the last command that included config.fish. That’ll complete to:


    $ vim ~/.config/fish/config.fish
    

    and I’m done. This isn’t so different from using Ctrl+R in bash to search
    your history but I think I like it a little better over all, maybe because
    Ctrl+R has some behaviours that I find confusing (for example you can
    end up accidentally editing your history which I don’t like).


    8. the terminal doesn’t break


    I used to run into issues with bash where I’d accidentally cat a binary to
    the terminal, and it would break the terminal.


    Every time fish displays a prompt, it’ll try to fix up your terminal so that
    you don’t end up in weird situations like this. I think this is some of the
    code in fish to prevent broken terminals.


    Some things that it does are:



    • turn on echo so that you can see the characters you type

    • make sure that newlines work properly so that you don’t get that weird staircase effect

    • reset your terminal background colour, etc


    I don’t think I’ve run into any of these “my terminal is broken” issues in a
    very long time, and I actually didn’t even realize that this was because of
    fish – I thought that things somehow magically just got better, or maybe I
    wasn’t making as many mistakes. But I think it was mostly fish saving me from
    myself, and I really appreciate that.


    9. Ctrl+S is disabled


    Also related to terminals breaking: fish disables Ctrl+S (which freezes your
    terminal and then you need to remember to press Ctrl+Q to unfreeze it). It’s a
    feature that I’ve never wanted and I’m happy to not have it.


    Apparently you can disable Ctrl+S in other shells with stty -ixon.


    10. nice syntax highlighting


    By default commands that don’t exist are highlighted in red, like this.



    11. easier loops


    I find the loop syntax in fish a lot easier to type than the bash syntax. It looks like this:


    for i in *.yaml
        echo $i
    end

    Also it’ll add indentation in your loops which is nice.


    12. easier multiline editing


    Related to loops: you can edit multiline commands much more easily than in bash
    (just use the arrow keys to navigate the multiline command!). Also when you use
    the up arrow to get a multiline command from your history, it’ll show you the
    whole command the exact same way you typed it instead of squishing it all onto
    one line like bash does:


    $ bash
    $ for i in *.png
    > do
    > echo $i
    > done
    $ # press up arrow
    $ for i in *.png; do echo $i; done

    13. Ctrl+left arrow


    This might just be me, but I really appreciate that fish has the Ctrl+left arrow / Ctrl+right arrow keyboard shortcut for moving between
    words when writing a command.


    I’m honestly a bit confused about where this keyboard shortcut is coming from
    (the only documented keyboard shortcut for this I can find in fish is Alt+left arrow / Alt+right arrow, which seems to do the same thing), but I’m pretty
    sure this is a fish shortcut.


    A couple of notes about getting this shortcut to work / where it comes from:



    • one person said they needed to switch their terminal emulator from the “Linux
      console” keybindings to “Default (XFree 4)” to get it to work in fish

    • on Mac OS, Ctrl+left arrow switches workspaces by default, so I had to turn
      that off.

    • Also apparently Ubuntu configures libreadline in /etc/inputrc to make
      Ctrl+left/right arrow go back/forward a word, so it’ll work in bash on
      Ubuntu and maybe other Linux distros too. Here’s a stack overflow question talking about that


    a downside: not everything has a fish integration


    Sometimes tools don’t have instructions for integrating them with fish. That’s annoying, but:



    • I’ve found this has gotten better over the last 10 years as fish has gotten
      more popular. For example Python’s virtualenv has had a fish integration for
      a long time now.

    • If I need to run a POSIX shell command real quick, I can always just run bash or zsh

    • I’ve gotten much better over the years at translating simple commands to fish syntax when I need to


    My biggest day-to-day annoyance is probably that, for whatever reason, I’m
    still not used to fish’s syntax for setting environment variables: I get confused
    about set vs set -x.


    another downside: fish_add_path


    fish has a function called fish_add_path that you can run to add a directory
    to your PATH like this:


    fish_add_path /some/directory
    

    I love the idea of it and I used to use it all the time, but I’ve stopped using
    it for two reasons:



    1. Sometimes fish_add_path will update the PATH for every session in the
      future (with a “universal variable”) and sometimes it will update the PATH
      just for the current session. It’s hard for me to tell which one it will
      do: in theory the docs explain this but I could not understand them.

    2. If you ever need to remove the directory from your PATH a few weeks or
      months later because maybe you made a mistake, that’s also kind of hard to do
      (there are instructions in the comments of this github issue though).


    Instead I just update my PATH like this, similarly to how I’d do it in bash:


    set PATH $PATH /some/directory/bin
    

    on POSIX compatibility


    When I started using fish, you couldn’t do things like cmd1 && cmd2 – it
    would complain “no, you need to run cmd1; and cmd2” instead.


    It seems like over the years fish has started accepting a little more POSIX-style syntax than it used to, like:



    • cmd1 && cmd2

    • export a=b to set an environment variable (though this seems a bit limited, you can’t do export PATH=$PATH:/whatever so I think it’s probably better to learn set instead)


    on fish as a default shell


    Changing my default shell to fish is always a little annoying, I occasionally get myself into a situation where



    1. I install fish somewhere like maybe /home/bork/.nix-stuff/bin/fish

    2. I add the new fish location to /etc/shells as an allowed shell

    3. I change my shell with chsh

    4. at some point months/years later I reinstall fish in a different location for some reason and remove the old one

    5. oh no!!! I have no valid shell! I can’t open a new terminal tab anymore!


    This has never been a major issue because I always have a terminal open
    somewhere where I can fix the problem and rescue myself, but it’s a bit
    alarming.


    If you don’t want to use chsh to change your shell to fish (which is very reasonable,
    maybe I shouldn’t be doing that), the Arch wiki page has a couple of good suggestions –
    either configure your terminal emulator to run fish or add an exec fish to
    your .bashrc.


    I’ve never really learned the scripting language


    Other than occasionally writing a for loop interactively on the command line,
    I’ve never really learned the fish scripting language. I still do all of my
    shell scripting in bash.


    I don’t think I’ve ever written a fish function or if statement.


    it seems like fish is getting pretty popular


    I ran a highly unscientific poll on Mastodon asking people what shell they use interactively. The results were (of 2600 responses):



    • 46% bash

    • 49% zsh

    • 16% fish

    • 5% other


    I think 16% for fish is pretty remarkable, since (as far as I know) there isn’t
    any system where fish is the default shell, and my sense is that it’s very
    common to just stick to whatever your system’s default shell is.


    It feels like a big achievement for the fish project, even if maybe my Mastodon
    followers are more likely than the average shell user to use fish for some
    reason.


    who might fish be right for?


    Fish definitely isn’t for everyone. I think I like it because:



    1. I really dislike configuring my shell (and honestly my dev environment in general), I want things to “just work” with the default settings

    2. fish’s defaults feel good to me

    3. I don’t spend that much time logged into random servers using other shells
      so there’s not too much context switching

    4. I liked its features so much that I was willing to relearn how to do a few
      “basic” shell things, like using parentheses (seq 1 10) to run a command
      instead of backticks or using set instead of export


    Maybe you’re also a person who would like fish! I hope a few more of the people
    who fish is for can find it, because I spend so much of my time in the terminal
    and it’s made that time much more pleasant.

    Migrating Mess With DNS to use PowerDNS


    About 3 years ago, I announced Mess With DNS in
    this blog post, a playground
    where you can learn how DNS works by messing around and creating records.


    I wasn’t very careful with the DNS implementation though (to quote the release blog
    post: “following the DNS RFCs? not exactly”), and people started reporting
    problems that I eventually decided I wanted to fix.


    the problems


    Some of the problems people have reported were:



    • domain names with underscores weren’t allowed, even though they should be

    • If there was a CNAME record for a domain name, it allowed you to create other records for that domain name, even if it shouldn’t

    • you could create 2 different CNAME records for the same domain name, which shouldn’t be allowed

    • no support for the SVCB or HTTPS record types, which seemed a little complex to implement

    • no support for upgrading from UDP to TCP for big responses


    And there are certainly more issues that nobody got around to reporting, for
    example that if you added an NS record for a subdomain to delegate it, Mess
    With DNS wouldn’t handle the delegation properly.


    the solution: PowerDNS


    I wasn’t sure how to fix these problems for a long time – technically I
    could have started addressing them individually, but it felt like there were
    a million edge cases and I’d never get there.


    But then one day I was chatting with someone else who was working on a DNS
    server and they said they were using PowerDNS: an open
    source DNS server with an HTTP API!


    This seemed like an obvious solution to my problems – I could just swap out my
    own crappy DNS implementation for PowerDNS.


    There were a couple of challenges I ran into when setting up PowerDNS that I’ll
    talk about here. I really don’t do a lot of web development and I think I’ve never
    built a website that depends on a relatively complex API before, so it was a
    bit of a learning experience.


    challenge 1: getting every query made to the DNS server


    One of the main things Mess With DNS does is give you a live view of every DNS
    query it receives for your subdomain, using a websocket. To make this work, it
    needs to intercept every DNS query before it gets sent to the PowerDNS
    server:


    There were 2 options I could think of for how to intercept the DNS queries:



    1. dnstap: dnsdist (a DNS load balancer from the PowerDNS project) has
      support for logging all DNS queries it receives using
      dnstap, so I could put dnsdist in front of PowerDNS
      and then log queries that way

    2. Have my Go server listen on port 53 and proxy the queries myself


    I originally implemented option #1, but for some reason there was a 1 second
    delay before every query got logged. I couldn’t figure out why, so I
    implemented my own very simple proxy instead.


    challenge 2: should the frontend have direct access to the PowerDNS API?


    The frontend used to have a lot of DNS logic in it – it converted emoji domain
    names to ASCII using punycode, had a lookup table to convert numeric DNS query
    types (like 1) to their human-readable names (like A), did a little bit of
    validation, and more.


    Originally I considered keeping this pattern and just giving the frontend (more
    or less) direct access to the PowerDNS API to create and delete, but writing
    even more complex code in Javascript didn’t feel that appealing to me – I
    don’t really know how to write tests in Javascript and it seemed like it
    wouldn’t end well.


    So I decided to take all of the DNS logic out of the frontend and write a new
    DNS API for managing records, shaped something like this:



    • GET /records

    • DELETE /records/<ID>

    • DELETE /records/ (delete all records for a user)

    • POST /records/ (create record)

    • POST /records/<ID> (update record)


    This meant that I could actually write tests for my code, since the backend is
    in Go and I do know how to write tests in Go.


    what I learned: it’s okay for an API to duplicate information


    I had this idea that APIs shouldn’t return duplicate information – for example
    if I get a DNS record, it should only include a given piece of information
    once.


    But I ran into a problem with that idea when displaying MX records: an MX
    record has 2 fields, “preference”, and “mail server”. And I needed to display
    that information in 2 different ways on the frontend:



    1. In a form, where “Preference” and “Mail Server” are 2 different form fields (like 10 and mail.example.com)

    2. In a summary view, where I wanted to just show the record (10 mail.example.com)


    This is kind of a small problem, but it came up in a few different places.


    I talked to my friend Marco Rogers about this, and based on some advice from
    him I realized that I could return the same information in the API in 2
    different ways! Then the frontend just has to display it. So I started just
    returning duplicate information in the API, something like this:


    {
        values: {'Preference': 10, 'Server': 'mail.example.com'},
        content: '10 mail.example.com',
        ...
    }

    I ended up using this pattern in a couple of other places where I needed to
    display the same information in 2 different ways and it was SO much easier.


    I think what I learned from this is that if I’m making an API that isn’t
    intended for external use (there are no users of this API other than the
    frontend!), I can tailor it very specifically to the frontend’s needs and
    that’s okay.


    challenge 3: what’s a record’s ID?


    In Mess With DNS (and I think in most DNS user interfaces!), you create, add, and delete records.


    But that’s not how the PowerDNS API works. In PowerDNS, you create a zone,
    which is made of record sets. Records don’t have any ID in the API at all.


    I ended up solving this by generating a fake ID for each record, which is made of:



    • its name

    • its type

    • and its content (base64-encoded)


    For example one record’s ID is brooch225.messwithdns.com.|NS|bnMxLm1lc3N3aXRoZG5zLmNvbS4=


    Then I can search through the zone and find the appropriate record to update
    it.
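    The ID scheme is easy to sketch (this is my reconstruction from the
    description above, not the actual Mess With DNS code):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// recordID builds a fake ID for a record out of its name, its type,
// and its base64-encoded content, joined with "|".
func recordID(name, rrtype, content string) string {
	encoded := base64.StdEncoding.EncodeToString([]byte(content))
	return name + "|" + rrtype + "|" + encoded
}

func main() {
	// Reproduces the example ID from the post.
	fmt.Println(recordID("brooch225.messwithdns.com.", "NS", "ns1.messwithdns.com."))
	// → brooch225.messwithdns.com.|NS|bnMxLm1lc3N3aXRoZG5zLmNvbS4=
}
```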


    This means that if you update a record then its ID will change which isn’t
    usually what I want in an ID, but that seems fine.


    challenge 4: making clear error messages


    I think the error messages that the PowerDNS API returns aren’t really intended to be shown to end users, for example:



    • Name 'new\032site.island358.messwithdns.com.' contains unsupported characters (this error encodes the space as \032, which is a bit disorienting if you don’t know that the space character is 32 in ASCII)

    • RRset test.pear5.messwithdns.com. IN CNAME: Conflicts with pre-existing RRset (this talks about RRsets, which aren’t a concept that the Mess With DNS UI has at all)

    • Record orange.beryl5.messwithdns.com./A '1.2.3.4$': Parsing record content (try 'pdnsutil check-zone'): unable to parse IP address, strange character: $ (mentions “pdnsutil”, a utility which Mess With DNS’s users don’t have
      access to in this context)


    I ended up handling this in two ways:



    1. Do some initial basic validation of values that users enter (like IP addresses), so I can just return errors like Invalid IPv4 address: "1.2.3.4$

    2. If that goes well, send the request to PowerDNS and if we get an error back, then do some hacky translation of those messages to make them clearer.
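    The first step – basic validation before ever talking to PowerDNS – can lean
    on the standard library’s IP parsing. This is an illustrative sketch (the
    validateIPv4 helper is made up, not the real validation code):

```go
package main

import (
	"fmt"
	"net"
)

// validateIPv4 returns a user-facing error message, instead of letting
// PowerDNS respond with its pdnsutil-flavoured one.
func validateIPv4(value string) error {
	ip := net.ParseIP(value)
	// To4 is nil for anything that isn't an IPv4 address (e.g. IPv6).
	if ip == nil || ip.To4() == nil {
		return fmt.Errorf("Invalid IPv4 address: %q", value)
	}
	return nil
}

func main() {
	fmt.Println(validateIPv4("1.2.3.4"))  // <nil>
	fmt.Println(validateIPv4("1.2.3.4$")) // Invalid IPv4 address: "1.2.3.4$"
}
```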


    Sometimes users will still get errors from PowerDNS directly, but I added some
    logging of all the errors that users see, so hopefully I can review them and
    add extra translations if there are other common errors that come up.


    I think what I learned from this is that if I’m building a user-facing
    application on top of an API, I need to be pretty thoughtful about how I
    resurface those errors to users.


    challenge 5: setting up SQLite


    Previously Mess With DNS was using a Postgres database. This was problematic
    because I only gave the Postgres machine 256MB of RAM, which meant that the
    database got OOM killed almost every single day. I never really worked out
    exactly why it got OOM killed every day, but that’s how it was. I spent some
    time trying to tune Postgres’ memory usage by setting the max connections /
    work-mem / maintenance-work-mem and it helped a bit but didn’t solve the
    problem.


    So for this refactor I decided to use SQLite instead, because the website
    doesn’t really get that much traffic. There are some choices involved with
    using SQLite, and I decided to:



    1. Run db.SetMaxOpenConns(1) to make sure that we only open 1 connection to
      the database at a time, to prevent SQLITE_BUSY errors from two threads
      trying to access the database at the same time (just setting WAL mode didn’t
      work)

    2. Use separate databases for each of the 3 tables (users, records, and
      requests) to reduce contention. This maybe isn’t really necessary, but there
      was no reason I needed the tables to be in the same database so I figured I’d set
      up separate databases to be safe.

    3. Use the cgo-free modernc.org/sqlite, which translates SQLite’s source code to Go.
      I might switch to a more “normal” sqlite implementation instead at some point and use cgo though.
      I think the main reason I prefer to avoid cgo is that cgo has landed me with difficult-to-debug errors in the past.

    4. use WAL mode


    I still haven’t set up backups, though I don’t think my Postgres database had
    backups either. I think I’m unlikely to use
    litestream for backups – Mess With DNS is very far
    from a critical application, and I think daily backups that I could recover
    from in case of a disaster are more than good enough.


    challenge 6: upgrading Vue & managing forms


    This has nothing to do with PowerDNS but I decided to upgrade Vue.js from
    version 2 to 3 as part of this refresh. The main problem with that is that the
    form validation library I was using (FormKit) completely changed its API
    between Vue 2 and Vue 3, so I decided to just stop using it instead of learning
    the new API.


    I ended up switching to some form validation tools that are built into the
    browser like required and oninvalid (here’s the code).
    I think it could use some improvement – I still don’t understand forms very well.


    challenge 7: managing state in the frontend


    This also has nothing to do with PowerDNS, but when modifying the frontend I
    realized that my state management in the frontend was a mess – in every place
    where I made an API request to the backend, I had to try to remember to add a
    “refresh records” call after that in every place that I’d modified the state
    and I wasn’t always consistent about it.


    With some more advice from Marco, I ended up implementing a single global
    state management store
    which stores all the state for the application, and which lets me
    create/update/delete records.


    Then my components can just call store.createRecord(record), and the store
    will automatically resynchronize all of the state as needed.


    challenge 8: sequencing the project


    This project ended up having several steps because I reworked the whole
    integration between the frontend and the backend. I ended up splitting it into
    a few different phases:



    1. Upgrade Vue from v2 to v3

    2. Make the state management store

    3. Implement a different backend API, move a lot of DNS logic out of the frontend, and add tests for the backend

    4. Integrate PowerDNS


    I made sure that the website was (more or less) 100% working and then deployed
    it in between phases, so that the amount of changes I was managing at a time
    stayed somewhat under control.


    the new website is up now!


    I released the upgraded website a few days ago and it seems to work!
    The PowerDNS API has been great to work on top of, and I’m relieved that
    there’s a whole class of problems that I now don’t have to think about at all,
    other than potentially trying to make the error messages from PowerDNS a little
    clearer. Using PowerDNS has fixed a lot of the DNS issues that folks have
    reported in the last few years and it feels great.


    If you run into problems with the new Mess With DNS I’d love to hear about them here.

    Go structs are copied on assignment (and other things about Go I'd missed)


    I’ve been writing Go pretty casually for years – the backends for all of my
    playgrounds (nginx, dns, memory, more DNS) are written in Go, but many of those projects are just a few hundred lines and I don’t come back to those codebases much.


    I thought I more or less understood the basics of the language, but this week
    I’ve been writing a lot more Go than usual while working on some upgrades to
    Mess with DNS, and ran into a bug that revealed I
    was missing a very basic concept!


    Then I posted about this on Mastodon and someone linked me to this very cool
    site (and book) called 100 Go Mistakes and How To Avoid Them by Teiva Harsanyi. It just came out in 2022 so it’s relatively new.


    I decided to read through the site to see what else I was missing, and found
    a couple of other misconceptions I had about Go. I’ll talk about some of the
    mistakes that jumped out to me the most, but really the whole
    100 Go Mistakes site is great and I’d recommend reading it.


    Here’s the initial mistake that started me on this journey:


    mistake 1: not understanding that structs are copied on assignment


    Let’s say we have a struct:


    type Thing struct {
        Name string
    }

    and this code:


    thing := Thing{"record"}
    other_thing := thing
    other_thing.Name = "banana"
    fmt.Println(thing)

    This prints “record” and not “banana” (play.go.dev link), because thing is copied when you
    assign it to other_thing.


    the problem this caused me: ranges


    The bug I spent 2 hours of my life debugging last week was effectively this code (play.go.dev link):


    type Thing struct {
        Name string
    }

    func findThing(things []Thing, name string) *Thing {
        for _, thing := range things {
            if thing.Name == name {
                return &thing
            }
        }
        return nil
    }

    func main() {
        things := []Thing{Thing{"record"}, Thing{"banana"}}
        thing := findThing(things, "record")
        thing.Name = "gramaphone"
        fmt.Println(things)
    }


    This prints out [{record} {banana}] – because findThing returned a pointer to a copy (the range loop variable), we didn’t change the name in the original array.


    This mistake is #30 in 100 Go Mistakes.


    I fixed the bug by changing it to something like this (play.go.dev link), which returns a
    reference to the item in the array we’re looking for instead of a copy.


    func findThing(things []Thing, name string) *Thing {
        for i := range things {
            if things[i].Name == name {
                return &things[i]
            }
        }
        return nil
    }

    why didn’t I realize this?


    When I learned that I was mistaken about how assignment worked in Go I was
    really taken aback, like – it’s such a basic fact about how the language works!
    If I was wrong about that then what ELSE am I wrong about in Go????


    My best guess for what happened is:



    1. I’ve heard for my whole life that when you define a function,
      you need to think about whether its arguments are passed by reference or
      by value

    2. So I’d thought about this in Go, and I knew that if you pass a struct as a
      value to a function, it gets copied – if you want to pass a reference then
      you have to pass a pointer

    3. But somehow it never occurred to me that you need to think about the same
      thing for assignments, perhaps because in most of the other languages I
      use (Python, JS, Java) I think everything is a reference anyway. Except
      in Rust, where you do have values that you make copies of, but I think
      most of the time I had to call .clone() explicitly (though apparently
      structs will be automatically copied on assignment if the struct
      implements the Copy trait).

    4. Also obviously I just don’t write that much Go so I guess it’s never come
      up.


    mistake 2: side effects appending slices (#25)


    When you subset a slice with x[2:3], the original slice and the sub-slice
    share the same backing array, so if you append to the new slice, it can
    unintentionally change the old slice:


    For example, this code prints [1 2 3 555 5] (code on play.go.dev)


    x := []int{1, 2, 3, 4, 5}
    y := x[2:3]
    y = append(y, 555)
    fmt.Println(x)

    I don’t think this has ever actually happened to me, but it’s alarming and I’m
    very happy to know about it.


    Apparently you can avoid this problem by changing y := x[2:3] to y := x[2:3:3], which restricts the new slice’s capacity so that appending to it
    will re-allocate a new slice. Here’s some code on play.go.dev that does that.
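
    Here's a sketch of what that fix looks like (this mirrors the idea in the play.go.dev link rather than copying it):

    ```go
    package main

    import "fmt"

    func main() {
    	x := []int{1, 2, 3, 4, 5}

    	// The full slice expression x[2:3:3] gives y length 1 AND capacity 1,
    	// so append can't reuse x's backing array and must allocate a new one.
    	y := x[2:3:3]
    	y = append(y, 555)

    	fmt.Println(x) // [1 2 3 4 5] -- x is unchanged this time
    	fmt.Println(y) // [3 555]
    }
    ```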


    mistake 3: not understanding the different types of method receivers (#42)


    This one isn’t a “mistake” exactly, but it’s been a source of confusion for me
    and it’s pretty simple so I’m glad to have it cleared up.


    In Go you can declare methods in 2 different ways:



    1. func (t Thing) Function() (a “value receiver”)

    2. func (t *Thing) Function() (a “pointer receiver”)


    My understanding now is that basically:



    • If you want the method to mutate the struct t, you need a pointer receiver.

    • If you want to make sure the method doesn’t mutate the struct t, use a value receiver.
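
    Here's a small sketch of that difference, using a made-up Counter struct:

    ```go
    package main

    import "fmt"

    type Counter struct {
    	n int
    }

    // Value receiver: the method operates on a copy of the struct.
    func (c Counter) IncrValue() { c.n++ }

    // Pointer receiver: the method can mutate the original struct.
    func (c *Counter) IncrPointer() { c.n++ }

    func main() {
    	c := Counter{}

    	c.IncrValue()
    	fmt.Println(c.n) // 0 -- only the copy was incremented

    	c.IncrPointer() // Go automatically takes &c here since c is addressable
    	fmt.Println(c.n) // 1
    }
    ```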


    Explanation #42 has a
    bunch of other interesting details though. There’s definitely still something
    I’m missing about value vs pointer receivers (I got a compile error related to
    them a couple of times in the last week that I still don’t understand), but
    hopefully I’ll run into that error again soon and I can figure it out.


    more interesting things I noticed


    Some more notes from 100 Go Mistakes:



    Also there are some things that have tripped me up in the past, like:



    this “100 common mistakes” format is great


    I really appreciated this “100 common mistakes” format – it made it really
    easy for me to skim through the mistakes and very quickly mentally classify
    them into:



    1. yep, I know that

    2. not interested in that one right now

    3. WOW WAIT I DID NOT KNOW THAT, THAT IS VERY USEFUL!!!!


    It looks like “100 Common Mistakes” is a series of books from Manning and they
    also have “100 Java Mistakes” and an upcoming “100 SQL Server Mistakes”.


    Also I enjoyed what I’ve read of Effective Python by Brett Slatkin, which has a similar “here are a bunch of
    short Python style tips” structure where you can quickly skim it and take
    what’s useful to you. There’s also Effective C++, Effective Java, and probably
    more.


    some other Go resources


    other resources I’ve appreciated:


    Entering text in the terminal is complicated


    The other day I asked what folks on Mastodon find confusing about working in
    the terminal, and one thing that stood out to me was “editing a command you
    already typed in”.


    This really resonated with me: even though entering some text and editing it is
    a very “basic” task, it took me maybe 15 years of using the terminal every
    single day to get used to using Ctrl+A to go to the beginning of the line (or
    Ctrl+E for the end – I think I used Home/End instead).


    So let’s talk about why entering text might be hard! I’ll also share a few tips
    that I wish I’d learned earlier.


    it’s very inconsistent between programs


    A big part of what makes entering text in the terminal hard is the
    inconsistency between how different programs handle entering text. For example:



    1. some programs (cat, nc, git commit --interactive, etc) don’t support using arrow keys at all: if you press arrow keys, you’ll just see ^[[D^[[D^[[C^[[C^

    2. many programs (like irb, python3 on a Linux machine and many many more) use the readline library, which gives you a lot of basic functionality (history, arrow keys, etc)

    3. some programs (like /usr/bin/python3 on my Mac) do support very basic features like arrow keys, but not other features like Ctrl+left or reverse searching with Ctrl+R

    4. some programs (like the fish shell or ipython3 or micro or vim) have their own fancy system for accepting input which is totally custom


    So there’s a lot of variation! Let’s talk about each of those a little more.


    mode 1: the baseline


    First, there’s “the baseline” – what happens if a program just accepts text by
    calling fgets() or whatever and does absolutely nothing else to provide a
    nicer experience. Here’s what using these tools typically looks like for me – if I
    start the version of dash installed on
    my machine (a pretty minimal shell) and press the left arrow key, it just prints
    ^[[D to the terminal.


    $ ls l-^[[D^[[D^[[D
    

    At first it doesn’t seem like all of these “baseline” tools have much in
    common, but there are actually a few features that you get for free just from
    your terminal, without the program needing to do anything special at all.


    The things you get for free are:



    1. typing in text, obviously

    2. backspace

    3. Ctrl+W, to delete the previous word

    4. Ctrl+U, to delete the whole line

    5. a few other things unrelated to text editing (like Ctrl+C to interrupt the process, Ctrl+Z to suspend, etc)


    This is not great, but it means that if you want to delete a word you
    generally can do it with Ctrl+W instead of pressing backspace 15 times, even
    if you’re in an environment which is offering you absolutely zero features.


    You can get a list of all the ctrl codes that your terminal supports with stty -a.


    mode 2: tools that use readline


    The next group is tools that use readline! Readline is a GNU library to make
    entering text more pleasant, and it’s very widely used.


    My favourite readline keyboard shortcuts are:



    1. Ctrl+E (or End) to go to the end of the line

    2. Ctrl+A (or Home) to go to the beginning of the line

    3. Ctrl+left/right arrow to go back/forward 1 word

    4. up arrow to go back to the previous command

    5. Ctrl+R to search your history


    And you can use Ctrl+W / Ctrl+U from the “baseline” list, though Ctrl+U
    deletes from the cursor to the beginning of the line instead of deleting the
    whole line. I think Ctrl+W might also have a slightly different definition of
    what a “word” is.


    There are a lot more (here’s a full list), but those are the only ones that I personally use.


    The bash shell is probably the most famous readline user (when you use
    Ctrl+R to search your history in bash, that feature actually comes from
    readline), but there are TONS of programs that use it – for example psql,
    irb, python3, etc.


    tip: you can make ANYTHING use readline with rlwrap


    One of my absolute favourite things is that if you have a program like nc
    without readline support, you can just run rlwrap nc to turn it into a
    program with readline support!


    This is incredible and makes a lot of tools that are borderline unusable MUCH
    more pleasant to use. You can even apparently set up rlwrap to include your own
    custom autocompletions, though I’ve never tried that.


    some reasons tools might not use readline


    I think reasons tools might not use readline include:



    • the program is very simple (like cat or nc) and maybe the maintainers don’t want to bring in a relatively large dependency

    • license reasons, if the program’s license is not GPL-compatible – readline is GPL-licensed, not LGPL

    • only a very small part of the program is interactive, and maybe readline
      support isn’t seen as important. For example git has a few interactive
      features (like git add -p), but not very many, and usually you’re just
      typing a single character like y or n – when you really need to type
      something significant in git, it’ll usually drop you into a text editor instead.


    For example idris2 says they don’t use readline
    to keep dependencies minimal and suggest using rlwrap to get better
    interactive features.


    how to know if you’re using readline


    The simplest test I can think of is to press Ctrl+R, and if you see:


    (reverse-i-search)`':
    

    then you’re probably using readline. This obviously isn’t a guarantee (some
    other library could use the term reverse-i-search too!), but I don’t know of
    another system that uses that specific term to refer to searching history.


    the readline keybindings come from Emacs


    Because I’m a vim user, it took me a very long time to understand where these
    keybindings come from (why Ctrl+A to go to the beginning of a line??? so
    weird!).


    My understanding is these keybindings actually come from Emacs – Ctrl+A and
    Ctrl+E do the same thing in Emacs as they do in Readline and I assume the
    other keyboard shortcuts mostly do as well, though I tried out Ctrl+W and
    Ctrl+U in Emacs and they don’t do the same thing as they do in the terminal
    so I guess there are some differences.


    There’s some more history of the Readline project here.


    mode 3: another input library (like libedit)


    On my Mac laptop, /usr/bin/python3 is in a weird middle ground where it
    supports some readline features (for example the arrow keys), but not the
    other ones. For example when I press Ctrl+left arrow, it prints out ;5D,
    like this:


    $ python3
    >>> import subprocess;5D

    Folks on Mastodon helped me figure out that this is because in the default
    Python install on Mac OS, the Python readline module is actually backed by
    libedit, which is a similar library which has fewer features, presumably
    because Readline is GPL licensed.


    Here’s how I was eventually able to figure out that Python was using libedit on
    my system:


    $ python3 -c "import readline; print(readline.__doc__)"
    
    Importing this module enables command line editing using libedit readline.

    Generally Python uses readline though if you install it on Linux or through
    Homebrew. It’s just that the specific version that Apple includes on their
    systems doesn’t have readline. Also Python 3.13 is going to remove the readline dependency
    in favour of a custom library, so “Python uses readline” won’t be true in the
    future.


    I assume that there are more programs on my Mac that use libedit but I haven’t
    looked into it.


    mode 4: something custom


    The last group of programs is programs that have their own custom (and sometimes
    much fancier!) system for editing text. This includes:



    • most terminal text editors (nano, micro, vim, emacs, etc)

    • some shells (like fish), for example it seems like fish supports Ctrl+Z for undo when typing in a command. Zsh’s line editor is called zle.

    • some REPLs (like ipython), for example IPython uses the prompt_toolkit library instead of readline

    • lots of other programs (like atuin)


    Some features you might see are:



    • better autocomplete which is more customized to the tool

    • nicer history management (for example with syntax highlighting) than the default you get from readline

    • more keyboard shortcuts


    custom input systems are often readline-inspired


    I went looking at how Atuin (a wonderful tool for
    searching your shell history that I started using recently) handles text input.
    Looking at the code
    and some of the discussion around it, their implementation is custom but it’s
    inspired by readline, which makes sense to me – a lot of users are used to
    those keybindings, and it’s convenient for them to work even though atuin
    doesn’t use readline.


    prompt_toolkit (the library
    IPython uses) is similar – it actually supports a lot of options (including
    vi-like keybindings), but the default is to support the readline-style
    keybindings.


    This is like how you see a lot of programs which support very basic vim
    keybindings (like j for down and k for up). For example Fastmail supports
    j and k even though most of its other keybindings don’t have much
    relationship to vim.


    I assume that most “readline-inspired” custom input systems have various subtle
    incompatibilities with readline, but this doesn’t really bother me at all
    personally because I’m extremely ignorant of most of readline’s features. I only use
    maybe 5 keyboard shortcuts, so as long as they support the 5 basic commands I
    know (which they always do!) I feel pretty comfortable. And usually these
    custom systems have much better autocomplete than you’d get from just using
    readline, so generally I prefer them over readline.


    lots of shells support vi keybindings


    Bash, zsh, and fish all have a “vi mode” for entering text. In a
    very unscientific poll I ran on
    Mastodon, 12% of people said they use it, so it seems pretty popular.


    Readline also has a “vi mode” (which is how Bash’s support for it works), so by
    extension lots of other programs have it too.


    I’ve always thought that vi mode seems really cool, but for some reason even
    though I’m a vim user it’s never stuck for me.


    understanding what situation you’re in really helps


    I’ve spent a lot of my life being confused about why a command line application
    I was using wasn’t behaving the way I wanted, and it feels good to be able to
    more or less understand what’s going on.


    I think this is roughly my mental flowchart when I’m entering text at a command
    line prompt:



    1. Do the arrow keys not work? Probably there’s no input system at all, but at
      least I can use Ctrl+W and Ctrl+U, and I can rlwrap the tool if I
      want more features.

    2. Does Ctrl+R print reverse-i-search? Probably it’s readline, so I can use
      all of the readline shortcuts I’m used to, and I know I can get some basic
      history and press up arrow to get the previous command.

    3. Does Ctrl+R do something else? This is probably some custom input library:
      it’ll probably act more or less like readline, and I can check the
      documentation if I really want to know how it works.


    Being able to diagnose what’s going on like this makes the command line feel
    more predictable and less chaotic.


    some things this post left out


    There are lots more complications related to entering text that we didn’t talk
    about at all here, like:



    • issues related to ssh / tmux / etc

    • the TERM environment variable

    • how different terminals (gnome terminal, iTerm, xterm, etc) have different kinds of support for copying/pasting text

    • unicode

    • probably a lot more

    Reasons to use your shell's job control


    Hello! Today someone on Mastodon asked about job control (fg, bg, Ctrl+z,
    wait, etc). It made me think about how I don’t use my shell’s job
    control interactively very often: usually I prefer to just open a new terminal
    tab if I want to run multiple terminal programs, or use tmux if it’s over ssh.
    But I was curious about whether other people used job control more often than me.


    So I asked on Mastodon for
    reasons people use job control. There were a lot of great responses, and it
    even made me want to consider using job control a little more!


    In this post I’m only going to talk about using job control interactively (not
    in scripts) – the post is already long enough just talking about interactive
    use.


    what’s job control?


    First: what’s job control? Well – in a terminal, your processes can be in one of 3 states:



    1. in the foreground. This is the normal state when you start a process.

    2. in the background. This is what happens when you run some_process &: the process is still running, but you can’t interact with it anymore unless you bring it back to the foreground.

    3. stopped. This is what happens when you start a process and then press Ctrl+Z. This pauses the process: it won’t keep using the CPU, but you can restart it if you want.


    “Job control” is a set of commands for seeing which processes are running in a terminal and moving processes between these 3 states.


    how to use job control



    • fg brings a process to the foreground. It works on both stopped processes and background processes. For example, if you start a background process with cat < /dev/zero &, you can bring it back to the foreground by running fg

    • bg restarts a stopped process and puts it in the background.

    • Pressing Ctrl+z stops the current foreground process.

    • jobs lists all processes that are active in your terminal

    • kill sends a signal (like SIGKILL) to a job (this is the shell builtin kill, not /bin/kill)

    • disown removes the job from the list of running jobs, so that it doesn’t get killed when you close the terminal

    • wait waits for all background processes to complete. I only use this in scripts though.

    • apparently in bash/zsh you can also just type %2 instead of fg %2


    I might have forgotten some other job control commands but I think those are all the ones I’ve ever used.


    You can also give fg or bg a specific job to foreground/background. For example if I see this in the output of jobs:


    $ jobs
    
    Job Group State Command
    1 3161 running cat < /dev/zero &
    2 3264 stopped nvim -w ~/.vimkeys $argv

    then I can foreground nvim with fg %2. You can also kill it with kill -9 %2, or just kill %2 if you want to be more gentle.


    how is kill %2 implemented?


    I was curious about how kill %2 works – does %2 just get replaced with the
    PID of the relevant process when you run the command, the way environment
    variables are? Some quick experimentation shows that it isn’t:


    $ echo kill %2
    
    kill %2
    $ type kill
    kill is a function with definition
    # Defined in /nix/store/vicfrai6lhnl8xw6azq5dzaizx56gw4m-fish-3.7.0/share/fish/config.fish

    So kill is a fish builtin that knows how to interpret %2. Looking at
    the source code (which is very easy in fish!), it uses jobs -p %2 to expand %2
    into a PID, and then runs the regular kill command.


    on differences between shells


    Job control is implemented by your shell. I use fish, but my sense is that the
    basics of job control work pretty similarly in bash, fish, and zsh.


    There are definitely some shells which don’t have job control at all, but I’ve
    only used bash/fish/zsh so I don’t know much about that.


    Now let’s get into a few reasons people use job control!


    reason 1: kill a command that’s not responding to Ctrl+C


    I run into processes that don’t respond to Ctrl+C pretty regularly, and it’s
    always a little annoying – I usually switch terminal tabs to find and kill
    the process. A bunch of people pointed out that you can do this in a faster way
    using job control!


    How to do this: Press Ctrl+Z, then kill %1 (or the appropriate job number
    if there’s more than one stopped/background job, which you can get from
    jobs). You can also kill -9 if it’s really not responding.


    reason 2: background a GUI app so it’s not using up a terminal tab


    Sometimes I start a GUI program from the command line (for example with
    wireshark some_file.pcap), forget to start it in the background, and don’t want it eating up my terminal tab.


    How to do this:



    • move the GUI program to the background by pressing Ctrl+Z and then running bg.

    • you can also run disown to remove it from the list of jobs, to make sure that
      the GUI program won’t get closed when you close your terminal tab.


    Personally I try to avoid starting GUI programs from the terminal if possible
    because I don’t like how their stdout pollutes my terminal (on a Mac I use
    open -a Wireshark instead because I find it works better), but sometimes you
    don’t have another choice.


    reason 2.5: accidentally started a long-running job without tmux


    This is basically the same as the GUI app thing – you can move the job to the
    background and disown it.


    I was also curious about whether there are ways to redirect a process’s output to a
    file after it’s already started. A quick search turned up this Linux-only tool, which is based on
    nelhage’s reptyr (which lets you, for example, move a
    process that you started outside of tmux into tmux), but I haven’t tried either of
    those.


    reason 3: running a command while using vim


    A lot of people mentioned that if they want to quickly test something while
    editing code in vim or another terminal editor, they like to use Ctrl+Z
    to stop vim, run the command, and then run fg to go back to their editor.


    You can also use this to check the output of a command that you ran before
    starting vim.


    I’ve never gotten in the habit of this, probably because I mostly use a GUI
    version of vim. I feel like I’d also be likely to switch terminal tabs and end
    up wondering “wait… where did I put my editor???” and have to go searching
    for it.


    reason 4: preferring interleaved output


    A few people said that they prefer having the output of all of their commands
    interleaved in the terminal. This really surprised me because I usually think
    of having the output of lots of different commands interleaved as a bad
    thing, but one person said that they like to do this with tcpdump specifically
    and I think that actually sounds extremely useful. Here’s what it looks like:


    # start tcpdump
    
    $ sudo tcpdump -ni any port 1234 &
    tcpdump: data link type PKTAP
    tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
    listening on any, link-type PKTAP (Apple DLT_PKTAP), snapshot length 524288 bytes

    # run curl
    $ curl google.com:1234
    13:13:29.881018 IP 192.168.1.173.49626 > 142.251.41.78.1234: Flags [S], seq 613574185, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 2730440518 ecr 0,sackOK,eol], length 0
    13:13:30.881963 IP 192.168.1.173.49626 > 142.251.41.78.1234: Flags [S], seq 613574185, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 2730441519 ecr 0,sackOK,eol], length 0
    13:13:31.882587 IP 192.168.1.173.49626 > 142.251.41.78.1234: Flags [S], seq 613574185, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 2730442520 ecr 0,sackOK,eol], length 0

    # when you're done, kill the tcpdump in the background
    $ kill %1


    I think it’s really nice here that you can see the output of tcpdump inline in
    your terminal – when I’m using tcpdump I’m always switching back and forth and
    I always get confused trying to match up the timestamps, so keeping everything
    in one terminal seems like it might be a lot clearer. I’m going to try it.


    reason 5: suspend a CPU-hungry program


    One person said that sometimes they’re running a very CPU-intensive program,
    for example converting a video with ffmpeg, and they need to use the CPU for
    something else, but don’t want to lose the work that ffmpeg already did.


    You can do this by pressing Ctrl+Z to pause the process, and then running fg
    when you want to start it again.


    reason 6: you accidentally ran Ctrl+Z


    Many people replied that they didn’t use job control intentionally, but
    that they sometimes accidentally ran Ctrl+Z, which stopped whatever program was
    running, so they needed to learn how to use fg to bring it back to the
    foreground.


    There were also some mentions of accidentally running Ctrl+S too (which stops
    your terminal and I think can be undone with Ctrl+Q). My terminal totally
    ignores Ctrl+S so I guess I’m safe from that one though.


    reason 7: already set up a bunch of environment variables


    Some folks mentioned that they already set up a bunch of environment variables
    that they need to run various commands, so it’s easier to use job control to
    run multiple commands in the same terminal than to redo that work in another
    tab.


    reason 8: it’s your only option


    Probably the most obvious reason to use job control to manage multiple
    processes is “because you have to” – maybe you’re in single-user mode, or on a
    very restricted computer, or SSH’d into a machine that doesn’t have tmux or
    screen and you don’t want to create multiple SSH sessions.


    reason 9: some people just like it better


    Some people also said that they just don’t like using terminal tabs: for
    instance a few folks mentioned that they prefer to be able to see all of their
    terminals on the screen at the same time, so they’d rather have 4 terminals on
    the screen and then use job control if they need to run more than 4 programs.


    I learned a few new tricks!


    I think my two main takeaways from this post are that I’ll probably try out job control a little more for:



    1. killing processes that don’t respond to Ctrl+C

    2. running tcpdump in the background with whatever network command I’m running, so I can see both of their output in the same place

    New zine: How Git Works!


    Hello! I’ve been writing about git on here nonstop for months, and the git zine
    is FINALLY done! It came out on Friday!


    You can get it for $12 here:
    https://wizardzines.com/zines/git, or get
    a 14-pack of all my zines here.


    Here’s the cover:



    the table of contents


    Here’s the table of contents:





    who is this zine for?


    I wrote this zine for people who have been using git for years and are still
    afraid of it. As always – I think it sucks to be afraid of the tools that you
    use in your work every day! I want folks to feel confident using git.


    My goals are:



    • To explain how some parts of git that initially seem scary (like “detached
      HEAD state”) are pretty straightforward to deal with once you understand
      what’s going on

    • To show some parts of git you probably should be careful around. For
      example, the stash is one of the places in git where it’s easiest to lose
      your work in a way that’s incredibly annoying to recover from, and I avoid
      using it heavily because of that.

    • To clear up a few common misconceptions about how the core parts of git (like
      commits, branches, and merging) work


    what’s the difference between this and Oh Shit, Git!


    You might be wondering – Julia! You already have a zine about git! What’s going
    on? Oh Shit, Git! is a set of tricks for fixing git messes. “How Git Works”
    explains how Git actually works.


    Also, Oh Shit, Git! is the amazing Katie Sylor-Miller’s concept: we made it
    into a zine because I was such a huge fan of her work on it.


    I think they go really well together.


    what’s so confusing about git, anyway?


    This zine was really hard for me to write because when I started writing it,
    I’d been using git pretty confidently for 10 years. I had no real memory of
    what it was like to struggle with git.


    But thanks to a huge amount of help from Marie as
    well as everyone who talked to me about git on Mastodon, eventually I was able
    to see that there are a lot of things about git that are counterintuitive,
    misleading, or just plain confusing. These include:



    • confusing terminology (for example “fast-forward”, “reference”, or “remote-tracking branch”)

    • misleading messages (for example how Your branch is up to date with 'origin/main' doesn’t necessarily mean that your branch is up to date with the main branch on the origin)

    • uninformative output (for example how I STILL can’t reliably figure out which code comes from which branch when I’m looking at a merge conflict)

    • a lack of guidance around handling diverged branches (for example how when you run git pull and your branch has diverged from the origin, it doesn’t give you great guidance how to handle the situation)

    • inconsistent behaviour (for example how git’s reflogs are almost always append-only, EXCEPT for the stash, where git will delete entries when you run git stash drop)


    The more I heard from people about how confusing they find git, the more it
    became clear that git really does not make it easy to figure out what its
    internal logic is just by using it.


    handling git’s weirdnesses becomes pretty routine


    The previous section made git sound really bad, like “how can anyone possibly
    use this thing?”.


    But my experience is that after I learned what git actually means by all of its
    weird error messages, dealing with it became pretty routine! I’ll see an
    error: failed to push some refs to 'github.com:jvns/wizard-zines-site',
    realize “oh right, probably a coworker made some changes to main since I last
    ran git pull”, run git pull --rebase to incorporate their changes, and move
    on with my day. The whole thing takes about 10 seconds.


    Or if I see a You are in 'detached HEAD' state warning, I’ll just make sure
    to run git checkout mybranch before continuing to write code. No big deal.


    For me (and for a lot of folks I talk to about git!), dealing with git’s weird
    language can become so normal that you totally forget why anybody would even
    find it weird.


    a little bit of internals


    One of my biggest questions when writing this zine was how much to focus on
    what’s in the .git directory. We ended up deciding to include a couple of
    pages about internals (“inside .git”, pages 14-15), but otherwise focus more on
    git’s behaviour when you use it and why sometimes git behaves in unexpected
    ways.


    This is partly because there are lots of great guides to git’s internals
    out there already (1, 2), and partly because I think even if you have read one
    of these guides to git’s internals, it isn’t totally obvious how to connect
    that information to what you actually see in git’s user interface.


    For example: it’s easy to find documentation about remotes in git –
    for example this page says:



    Remote-tracking branches […] remind you where the branches in your remote
    repositories were the last time you connected to them.



    But even if you’ve read that, you might not realize that the statement "Your branch is up to date with 'origin/main'" in git status doesn’t necessarily
    mean that you’re actually up to date with the remote main branch.
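You can see this concretely: the remote-tracking branch in your clone only moves when you talk to the remote. A sketch using two clones of a local bare repo (all names invented):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q --bare origin.git
git clone -q origin.git me
git -C me config user.email you@example.com
git -C me config user.name You
git -C me commit -q --allow-empty -m 'initial'
git -C me push -qu origin HEAD
git clone -q origin.git coworker
git -C coworker config user.email them@example.com
git -C coworker config user.name Them
git -C coworker commit -q --allow-empty -m 'coworker change'
git -C coworker push -q
# git status in "me" would still say "up to date": the remote-tracking branch
# hasn't moved, because we haven't contacted the remote since cloning.
branch=$(git -C me symbolic-ref --short HEAD)
before=$(git -C me rev-parse "origin/$branch")
git -C me fetch -q                          # *now* origin/<branch> catches up
after=$(git -C me rev-parse "origin/$branch")
echo "origin/$branch moved: $before -> $after"
```

So "up to date with 'origin/main'" really means "up to date with where main was the last time we fetched".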


    So in general in the zine we focus on the behaviour you see in Git’s UI, and
    then explain how that relates to what’s happening internally in Git.


    the cheat sheet


    The zine also comes with a free printable cheat sheet, available as a PDF.





    it comes with an HTML transcript!


    The zine also comes with an HTML transcript, to (hopefully) make it easier to
    read on a screen reader! Our Operations Manager, Lee, transcribed all of the
    pages and wrote image descriptions. I’d love feedback about the experience of
    reading the zine on a screen reader if you try it.


    I really do love git


    I’ve been pretty critical about git in this post, but I only write zines about
    technologies I love, and git is no exception.


    Some reasons I love git:



    • it’s fast!

    • it’s backwards compatible! I learned how to use it 10 years ago and
      everything I learned then is still true

    • there’s tons of great free Git hosting available out there (GitHub! Gitlab! a
      million more!), so I can easily back up all my code

    • simple workflows are REALLY simple (if I’m working on a project on my own, I
      can just run git commit -am 'whatever' and git push over and over again and it
      works perfectly)

    • Almost every internal file in git is a pretty simple text file (or has a
      version which is a text file), which makes me feel like I can always
      understand exactly what’s going on under the hood if I want to.


    I hope this zine helps some of you love it too.


    people who helped with this zine


    I don’t make these zines by myself!


    I worked with Marie Claire LeBlanc Flanagan every
    morning for 8 months to write clear explanations of git.


    The cover is by Vladimir Kašiković,
    Gersande La Flèche did copy editing,
    James Coglan (of the great Building
    Git
    ) did technical review, our
    Operations Manager Lee did the transcription as well as a million other
    things, my partner Kamal read the zine and told me which parts were off (as he
    always does), and I had a million great conversations with Marco Rogers about
    git.


    And finally, I want to thank all the beta readers! There were 66 this time
    which is a record! They left hundreds of comments about what was confusing,
    what they learned, and which of my jokes were funny. It’s always hard to hear
    from beta readers that a page I thought made sense is actually extremely
    confusing, and fixing those problems before the final version makes the zine so
    much better.


    get the zine


    Here are some links to get the zine again:



    As always, you can get either a PDF version to print at home or a print version
    shipped to your house. The only caveat is print orders will ship in July – I
    need to wait for orders to come in to get an idea of how many I should print
    before sending it to the printer.


    thank you


    As always: if you’ve bought zines in the past, thank you for all your support
    over the years. And thanks to all of you (1000+ people!!!) who have already
    bought the zine in the first 3 days. It’s already set a record for most zines
    sold in a single day and I’ve been really blown away.

    Notes on git's error messages


    While writing about Git, I’ve noticed that a lot of folks struggle with Git’s
    error messages. I’ve had many years to get used to these error messages so it
    took me a really long time to understand why folks were confused, but having
    thought about it much more, I’ve realized that:



    1. sometimes I actually am confused by the error messages, I’m just used to
      being confused

    2. I have a bunch of strategies for getting more information when the error
      message git gives me isn’t very informative


    So in this post, I’m going to go through a bunch of Git’s error messages,
    list a few things that I think are confusing about them for each one, and talk
    about what I do when I’m confused by the message.


    improving error messages isn’t easy


    Before we start, I want to say that trying to think about why these error
    messages are confusing has given me a lot of respect for how difficult
    maintaining Git is. I’ve been thinking about Git for months, and for some of
    these messages I really have no idea how to improve them.


    Some things that seem hard to me about improving error messages:



    • if you come up with an idea for a new message, it’s hard to tell if it’s actually better!

    • work like improving error messages often isn’t funded

    • the error messages have to be translated (git’s error messages are translated into 19 languages!)


    That said, if you find these messages confusing, hopefully some of these notes
    will help clarify them a bit.




    error: git push on a diverged branch


    $ git push
    
    To github.com:jvns/int-exposed
    ! [rejected] main -> main (non-fast-forward)
    error: failed to push some refs to 'github.com:jvns/int-exposed'
    hint: Updates were rejected because the tip of your current branch is behind
    hint: its remote counterpart. Integrate the remote changes (e.g.
    hint: 'git pull ...') before pushing again.
    hint: See the 'Note about fast-forwards' in 'git push --help' for details.

    $ git status
    On branch main
    Your branch and 'origin/main' have diverged,
    and have 2 and 1 different commits each, respectively.


    Some things I find confusing about this:



    1. You get the exact same error message whether the branch is just behind
      or the branch has diverged. There’s no way to tell which it is from this
      message: you need to run git status or git pull to find out.

    2. It says failed to push some refs, but it’s not totally clear which references it
      failed to push. I believe everything that failed to push is listed with ! [rejected] on the previous line – in this case, just the main branch.


    What I like to do if I’m confused:



    • I’ll run git status to figure out what the state of my current branch is.

    • I think I almost never try to push more than one branch at a time, so I
      usually totally ignore git’s notes about which specific branch failed to push
      – I just assume that it’s my current branch
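One trick for telling "behind" apart from "diverged" without parsing git status prose (this is my own habit, not anything the error suggests): count the commits on each side with git rev-list. A toy repo with invented branch names:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name You
git checkout -qb main
git commit -q --allow-empty -m 'shared history'
git checkout -qb feature
git commit -q --allow-empty -m 'my commit'
git checkout -q main
git commit -q --allow-empty -m 'their commit'
# Prints two tab-separated numbers: commits only on main, commits only on
# feature. Both nonzero means the branches have diverged; "0 n" would mean
# feature is simply ahead.
git rev-list --left-right --count main...feature
```

In a real repo you’d compare against the remote, e.g. `git rev-list --left-right --count origin/main...HEAD` after a fetch.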




    error: git pull on a diverged branch


    $ git pull
    
    hint: You have divergent branches and need to specify how to reconcile them.
    hint: You can do so by running one of the following commands sometime before
    hint: your next pull:
    hint:
    hint: git config pull.rebase false # merge
    hint: git config pull.rebase true # rebase
    hint: git config pull.ff only # fast-forward only
    hint:
    hint: You can replace "git config" with "git config --global" to set a default
    hint: preference for all repositories. You can also pass --rebase, --no-rebase,
    hint: or --ff-only on the command line to override the configured default per
    hint: invocation.

    fatal: Need to specify how to reconcile divergent branches.

    The main thing I think is confusing here is that git is presenting you with a
    kind of overwhelming number of options: it’s saying that you can either:



    1. configure pull.rebase false, pull.rebase true, or pull.ff only locally

    2. or configure them globally

    3. or run git pull --rebase or git pull --no-rebase


    It’s very hard to imagine how a beginner to git could easily use this hint to
    sort through all these options on their own.


    If I were explaining this to a friend, I’d say something like “you can use git pull --rebase
    or git pull --no-rebase to resolve this with a rebase or merge
    right now, and if you want to set a permanent preference, you can do that
    with git config pull.rebase false or git config pull.rebase true”.


    git config pull.ff only feels a little redundant to me because that’s git’s
    default behaviour anyway (though it wasn’t always).
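The "permanent preference" route, sketched in a throwaway repo:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
# Decide once, per repository, how git pull should reconcile divergence:
git config pull.rebase true
# (the same command with --global would make it the default for all repos)
git config pull.rebase        # prints the value git pull will now use
```

With that set, git pull on a diverged branch just rebases instead of printing the wall of hints.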


    What I like to do here:



    • run git status to see the state of my current branch

    • maybe run git log origin/main or git log to see what the diverged commits are

    • usually run git pull --rebase to resolve it

    • sometimes I’ll run git push --force or git reset --hard origin/main if I
      want to throw away my local work or remote work (for example because I
      accidentally committed to the wrong branch, or because I ran git commit --amend on a personal branch that only I’m using and want to force push)




    error: git checkout asdf (a branch that doesn't exist)


    $ git checkout asdf
    
    error: pathspec 'asdf' did not match any file(s) known to git

    This is a little weird because my intention was to check out a branch,
    but git checkout is complaining about a path that doesn’t exist.


    This is happening because git checkout’s first argument can be either a
    branch or a path, and git has no way of knowing which one you intended. This
    seems tricky to improve, but I might expect something like “No such branch,
    commit, or path: asdf”.
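The ambiguity is easy to see if you force each interpretation with `--` (file and branch names here are invented):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name You
echo v1 > notes.txt
git add notes.txt && git commit -qm 'add notes'
echo v2 > notes.txt
git checkout -- notes.txt     # everything after -- is a path: restores the file
cat notes.txt                 # v1 again
# Forcing the branch/revision interpretation changes the error message:
git checkout asdf -- 2>&1 | head -1 || true
```

When git checkout gets one bare argument it tries branch first, then falls back to treating it as a path, which is why a typo’d branch name produces a pathspec error.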


    What I like to do here:



    • in theory it would be good to use git switch instead, but I keep using git checkout anyway

    • generally I just remember that I need to decode this as “branch asdf doesn’t exist”




    error: git switch asdf (a branch that doesn't exist)


    $ git switch asdf
    
    fatal: invalid reference: asdf

    git switch only accepts a branch as an argument (unless you pass -d), so why is it saying invalid reference: asdf instead of invalid branch: asdf?


    I think the reason is that internally, git switch is trying to be helpful in its error messages: if you run git switch v0.1 to switch to a tag, it’ll say:


    $ git switch v0.1
    
    fatal: a branch is expected, got tag 'v0.1'

    So what git is trying to communicate with fatal: invalid reference: asdf is
    “asdf isn’t a branch, but it’s not a tag either, or any other reference”. From my various git polls my impression is that
    a lot of git users have literally no idea what a “reference” is in git, so I’m not sure if that’s coming across.
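For the curious: a "reference" is just a name pointing at a commit, and branches, tags, and remote-tracking branches are all kinds of references. You can list every reference in a repo at once (toy repo, invented names):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name You
git checkout -qb main
git commit -q --allow-empty -m 'initial'
git tag v0.1
# Every reference in the repository, with the commit it points at:
git for-each-ref --format='%(refname) %(objectname:short)'
# refs/heads/main is a branch and refs/tags/v0.1 is a tag -- here they're
# two different names for the same commit.
```

So "invalid reference" means the name didn’t match anything in that list.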


    What I like to do here:


    90% of the time when a git error message says reference I just mentally
    replace it with branch.



    error: git checkout HEAD^


    $ git checkout HEAD^
    
    Note: switching to 'HEAD^'.

    You are in 'detached HEAD' state. You can look around, make experimental
    changes and commit them, and you can discard any commits you make in this
    state without impacting any branches by switching back to a branch.

    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -c with the switch command. Example:

    git switch -c <new-branch-name>

    Or undo this operation with:

    git switch -

    Turn off this advice by setting config variable advice.detachedHead to false

    HEAD is now at 182cd3f add "swap byte order" button



    This is a tough one. Definitely a lot of people are confused about this
    message, but obviously there's been a lot of effort to improve it too. I don't
    have anything smart to say about this one.


    What I like to do here:



    • my shell prompt tells me if I’m in detached HEAD state, and generally I can remember not to make new commits while in that state

    • when I’m done looking at whatever old commits I wanted to look at, I’ll run git checkout main or something to go back to a branch
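The round trip the warning is describing, in a toy repo (names invented):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name You
git checkout -qb main
echo one > f.txt && git add f.txt && git commit -qm 'first'
echo two > f.txt && git commit -qam 'second'
git checkout -q HEAD^       # detached HEAD, looking at 'first'
cat f.txt                   # one
git checkout -q main        # back on a branch; nothing was lost
cat f.txt                   # two
```

If you had made an experimental commit while detached, `git switch -c somebranch` before going back would keep it.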




    message: git status when a rebase is in progress


    This isn’t an error message, but I still find it a little confusing on its own:


    $ git status
    
    interactive rebase in progress; onto c694cf8
    Last command done (1 command done):
    pick 0a9964d wip
    No commands remaining.
    You are currently rebasing branch 'main' on 'c694cf8'.
    (fix conflicts and then run "git rebase --continue")
    (use "git rebase --skip" to skip this patch)
    (use "git rebase --abort" to check out the original branch)

    Unmerged paths:
    (use "git restore --staged ..." to unstage)
    (use "git add ..." to mark resolution)
    both modified: index.html

    no changes added to commit (use "git add" and/or "git commit -a")


    Two things I think could be clearer here:



    1. I think it would be nice if You are currently rebasing branch 'main' on 'c694cf8'. were on the first line instead of the 5th line – right now the first line doesn’t say which branch you’re rebasing.

    2. In this case, c694cf8 is actually origin/main, so I feel like You are currently rebasing branch 'main' on 'origin/main' might be even clearer.


    What I like to do here:


    My shell prompt includes the branch that I’m currently rebasing, so I rely on that instead of the output of git status.




    error: git rebase when a file has been deleted


    $ git rebase main
    
    CONFLICT (modify/delete): index.html deleted in 0ce151e (wip) and modified in HEAD. Version HEAD of index.html left in tree.
    error: could not apply 0ce151e... wip

    The thing I still find confusing about this is – index.html was modified in
    HEAD. But what is HEAD? Is it the commit I was working on when I started
    the merge/rebase, or is it the commit from the other branch? (the answer is
    HEAD is your branch if you’re doing a merge, and it’s the “other branch” if
    you’re doing a rebase, but I always find that hard to remember)


    I think I would personally find it easier to understand if the message listed the branch names where possible, something like this:


    CONFLICT (modify/delete): index.html deleted on `main` and modified on `mybranch`
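One way I can double-check who HEAD is mid-conflict is to peek at the index stages: stage 2 is always HEAD’s version and stage 3 is the incoming commit’s. A reproduction with invented file and branch names:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name You
git checkout -qb main
echo base > f.txt && git add f.txt && git commit -qm 'base'
git checkout -qb mybranch
echo mine > f.txt && git commit -qam 'my change'
git checkout -q main
echo theirs > f.txt && git commit -qam 'their change'
git checkout -q mybranch
git rebase main >/dev/null 2>&1 || true   # conflict: both sides touched f.txt
# Stage 2 = HEAD's version. During a rebase that's main, NOT my branch:
head_version=$(git show :2:f.txt)
incoming=$(git show :3:f.txt)
echo "HEAD has: $head_version / incoming commit has: $incoming"
git rebase --abort
```

During the rebase, HEAD holds main’s content and my own commit is the "incoming" side, which is exactly the backwards-feeling thing described above.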
    



    error: git status during a merge or rebase (who is "them"?)


    $ git status 
    
    On branch master
    You have unmerged paths.
    (fix conflicts and run "git commit")
    (use "git merge --abort" to abort the merge)

    Unmerged paths:
    (use "git add/rm <file>..." as appropriate to mark resolution)
    deleted by them: the_file


    no changes added to commit (use "git add" and/or "git commit -a")


    I find this one confusing in exactly the same way as the previous message: it
    says deleted by them:, but what “them” refers to depends on whether you did a merge or rebase or cherry-pick.



    • for a merge, them is the other branch you merged in

    • for a rebase, them is the branch that you were on when you ran git rebase

    • for a cherry-pick, I guess it’s the commit you cherry-picked


    What I like to do if I’m confused:



    • try to remember what I did

    • run git show main --stat or something to see what I did on the main branch if I can’t remember




    error: git clean


    $ git clean
    
    fatal: clean.requireForce defaults to true and neither -i, -n, nor -f given; refusing to clean

    I just find it a bit confusing that you need to look up what -i, -n and
    -f are to be able to understand this error message. I’m personally way too
    lazy to do that so even though I’ve probably been using git clean for 10
    years I still had no idea what -i stood for (interactive) until I was
    writing this down.


    What I like to do if I’m confused:


    Usually I just chaotically run git clean -f to delete all my untracked files
    and hope for the best, though I might actually switch to git clean -i now
    that I know what -i stands for. Seems a lot safer.
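For reference, the flags the message is alluding to are -n (dry run), -i (interactive), and -f (force). The dry run makes git clean a lot less chaotic (file names invented):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name You
echo keep > tracked.txt
git add tracked.txt && git commit -qm 'initial'
echo scratch > untracked.txt
git clean -n          # dry run: prints "Would remove untracked.txt",
[ -e untracked.txt ]  # ...but deletes nothing
git clean -f          # now actually delete untracked files
```

Tracked files are never touched by git clean; only untracked.txt goes away.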


    that’s all!


    Hopefully some of this is helpful!
