Category: Opinion

  • AI vs Rules: Why Deterministic Software Fails at Revenue Operations Quote Approvals

    How the RevOps industry keeps trying to solve the same problem: how to scale human judgment

    I recently came across an insightful article from Nue titled “Why Every Quote Exception Comes at a Cost” that articulates the challenges RevOps teams face daily. The authors nail the problem: quote exceptions aren’t actually exceptions—they’re the rule. Every deal is unique. Every customer negotiates differently. Every quarter brings new edge cases.

    But then, in a twist of irony that would make Alanis Morissette proud, they propose solving this inherently human, nuanced problem with… more deterministic software.

    Let’s unpack why this approach is fundamentally flawed—and I’ll use a classical logical technique to demonstrate exactly where it breaks down.

    The Article Gets the Revenue Operations Problem Right

    The Nue piece brilliantly captures the daily challenges in revenue operations:

    • Every deal is an exception: “Buyers will negotiate almost anything in a contract—based on past experiences, internal politics, or just to feel like they ‘won.’”
    • Context switching kills productivity: Legal, CFOs, and VPs constantly interrupted for “urgent” approvals
    • The quarterly scramble: “All-hands-on-deck” situations when every deal needs special handling
    • Hidden costs multiply: Each stakeholder added creates exponential drag on the sales approval workflow

    They even acknowledge that rigid rules don’t work: “If your $5K startup deal gets the same legal review as your $500K enterprise contract, something’s broken.”

    So far, so good. The diagnosis is spot-on.

    Where the Logic Falls Apart in Quote Exception Handling

    But then comes the proposed solution: build “tiered approval paths” with “automated routing based on logic and data.” In other words, create more complex deterministic rules to handle non-deterministic situations.

    This is where we hit a logical wall. If, as the article states:

    • Every buyer negotiates based on “internal politics”
    • Sales reps “push boundaries to save deals”
    • Even leadership lets things slide “when quarters look grim”

    Then how can pre-programmed rules possibly handle this complexity in RevOps automation?

    Applying Reductio ad Absurdum to Revenue Operations Software

    Let me demonstrate why this logic breaks down by using reductio ad absurdum, a classical logical technique in which we assume a claim is true and follow it to its logical conclusion; if that conclusion is absurd, the claim must be rejected. By taking the proposed solution to its logical extreme, we can see why it fails to address the core problem.

    Day 1: “If discount > 25% AND customer_size = ‘Enterprise’, route to CFO”

    Day 2: “Wait, unless they’re a strategic account… add that rule”

    Day 3: “But what if they’re strategic AND in financial services? Different approval”

    Day 4: “Oh, and if it’s end of quarter, lower the threshold to 20%”

    Day 365: You now have 10,000 rules, 500 edge cases, and a system so complex that everyone just Slacks the CFO directly anyway.

    Day 366: You realize you have added drag to every sale, because a software engineer has to code a solution for every use case. (The reductio ad absurdum is complete.)
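The rule explosion above is easy to sketch in code. Here is a minimal, hypothetical example; field names like discount, segment, and strategic are invented for illustration, not taken from any real CPQ system:

```python
# Hypothetical sketch of the Day 1-4 rule explosion. Deal fields are illustrative.

def route(deal, rules):
    """Return the approver for the first matching rule, else None."""
    for predicate, approver in rules:
        if predicate(deal):
            return approver
    return None  # no exact match: the deal falls into a manual queue

# Day 1: one clean rule.
rules = [
    (lambda d: d["discount"] > 0.25 and d["segment"] == "Enterprise", "CFO"),
]

# Days 2-4: patches accumulate, and the order of the rules starts to matter.
rules = [
    (lambda d: d["strategic"] and d["industry"] == "finserv", "Deal Desk"),
    (lambda d: d["strategic"], "VP Sales"),
    (lambda d: d["end_of_quarter"] and d["discount"] > 0.20, "CFO"),
    (lambda d: d["discount"] > 0.25 and d["segment"] == "Enterprise", "CFO"),
]

# A deal nobody anticipated still slips through every rule.
odd_deal = {"discount": 0.15, "segment": "SMB", "strategic": False,
            "industry": "retail", "end_of_quarter": False}
print(route(odd_deal, rules))  # prints None: no rule matched, manual queue
```

Four rules in, ordering already matters, and the unanticipated deal still falls through; by Day 365 this list is 10,000 entries long.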

    The Real Problem: Deterministic Solutions for Probabilistic Decisions in Sales Operations

    Here’s what the article inadvertently reveals: approval decisions aren’t binary. They’re probabilistic assessments based on hundreds of factors:

    • Is this customer likely to pay on time?
    • Will this precedent hurt us in future negotiations?
    • Can we afford this margin hit given our quarterly position?
    • Is the sales rep’s judgment trustworthy here?

    No deterministic system can encode this kind of nuanced reasoning. You’d need a new feature for every possible combination of circumstances in your CPQ automation.
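For contrast, here is a minimal sketch of what a probabilistic assessment might look like: soft signals combined into a confidence score rather than matched against hard thresholds. The factors, weights, and cutoff are invented for illustration.

```python
# Hypothetical sketch: approval as a weighted probabilistic score, not a rule match.
# Factor names, weights, and the 0.6 cutoff are illustrative assumptions.

def approval_score(deal):
    """Combine soft signals into a 0-1 confidence that approval is safe."""
    factors = {
        "payment_history":  (deal["on_time_payment_rate"], 0.35),
        "precedent_risk":   (1.0 - deal["discount"],       0.25),
        "margin_headroom":  (deal["gross_margin"],         0.25),
        "rep_track_record": (deal["rep_win_quality"],      0.15),
    }
    return sum(value * weight for value, weight in factors.values())

# A deep discount that a hard threshold would reject outright can still score
# well when the customer's history and the deal's margin support it.
deal = {"on_time_payment_rate": 0.98, "discount": 0.35,
        "gross_margin": 0.55, "rep_win_quality": 0.8}
score = approval_score(deal)
recommendation = "approve" if score >= 0.6 else "escalate"
```

A real system would learn these weights from outcomes rather than hand-tune them, but the shape of the decision is the point: graded confidence, not a binary rule hit.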

    What RevOps Teams Actually Need: AI + Human In The Loop

    What I learned from the article, and what we all knew already, at least intuitively: human judgment CANNOT SCALE.

    What can scale, thanks to LLMs, is analysis of large datasets at record speed.

    The solution isn’t more complex routing rules. It’s intelligent systems that can provide AI sales operations capabilities:

    1. Understand context: Read the actual contract terms, customer history, and market conditions
    2. Apply principles, not rules: “We generally avoid 90-day payment terms unless the customer has strong credit”
    3. Explain reasoning: “I recommend approval because despite the 35% discount, this customer has perfect payment history and the deal includes a 3-year commitment”
    4. Learn from outcomes: Track which approvals led to problems and adjust recommendations
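Step 2 above (“apply principles, not rules”) can be sketched as follows: governance lives as plain-language principles that get assembled into an LLM prompt alongside the deal context, instead of being compiled into routing logic. The principle text, function names, and prompt shape here are assumptions for illustration, not any real product’s API.

```python
# Hypothetical sketch: principles as plain language fed to an LLM, not as code.
# PRINCIPLES and build_review_prompt are illustrative names, not a real API.

PRINCIPLES = [
    "We generally avoid 90-day payment terms unless the customer has strong credit.",
    "Discounts above 30% need an offsetting commitment (term length or volume).",
]

def build_review_prompt(deal_summary, history, principles=PRINCIPLES):
    """Assemble deal context and governance principles into one reviewable prompt."""
    bullet_principles = "\n".join(f"- {p}" for p in principles)
    return (
        "You are assisting a revenue operations approval review.\n"
        f"Governance principles:\n{bullet_principles}\n"
        f"Deal: {deal_summary}\n"
        f"Customer history: {history}\n"
        "Recommend approve or escalate, and explain your reasoning."
    )

prompt = build_review_prompt(
    "35% discount, 3-year commitment, net-60 terms",
    "Perfect payment history over 4 years",
)
```

Updating governance here means editing a sentence the CFO can read, not filing an engineering ticket for a new rule.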

    Comparing Approaches: Rules vs AI in Revenue Operations

    Which would you rather have?

    Their Way: Traditional Revenue Operations Software

    • Spend $200,000 on custom software that takes 6-12 months to implement
    • Spend months learning the system
    • Provide continuous training for new hires
    • File tickets with the software vendor, which costs more money and creates more drag
    • Workflow:
      1. Deal comes in with exception
      2. System checks 10,000 pre-programmed rules
      3. Doesn’t find exact match
      4. Routes to manual queue
      5. RevOps creates ticket for engineering
      6. Wait days/weeks for new rule
      7. Deal stalls or gets approved outside system

    Our Way: AI-Powered Revenue Operations

    • Connect all your data into database tables, vectorize it, and connect it to an LLM via MCP
    • Spend a month creating pipelines and governance in-house with YOUR data, CFO, and legal teams
    • Input the governance into the same database, and connect that table to an LLM via MCP
    • Workflow:
      1. Deal comes in with exception
      2. AI reads deal details + governance policies + historical data
      3. AI provides recommendation with reasoning
      4. Human reviews AI analysis
      5. Approves or adjusts based on context
      6. System learns from decision
      7. Deal moves forward in minutes, not days
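Step 2 of this workflow (the AI reading governance policies alongside the deal) depends on retrieval over the vectorized data from the setup steps. A toy sketch of that retrieval, using hand-made vectors in place of a real embedding model and vector store:

```python
# Toy retrieval sketch: nearest governance policy by cosine similarity.
# The 3-dimensional vectors are hand-made stand-ins; a real pipeline would use
# an embedding model and a vector store exposed to the LLM (e.g. over MCP).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# (vector, policy text) pairs standing in for a vectorized governance table.
policies = [
    ([0.9, 0.1, 0.0], "Discounts above 30% require an offsetting commitment."),
    ([0.1, 0.9, 0.0], "Payment terms beyond net-60 require a credit check."),
    ([0.0, 0.2, 0.9], "Non-standard liability caps go to legal review."),
]

def top_policy(query_vec, policies):
    """Return the policy text most similar to the query vector."""
    return max(policies, key=lambda p: cosine(query_vec, p[0]))[1]

# A deal about a deep discount retrieves the discount policy.
print(top_policy([0.95, 0.05, 0.0], policies))
```

The LLM then reasons over the retrieved policy plus the deal details, which is what lets the same governance table cover deals nobody wrote a rule for.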

    The $200K Question in Sales Approval Workflow Automation (Literally)

    The article ends by promoting Approvals Pro, likely another $200K+ investment (including implementation) in deterministic complexity. But consider:

    Option A: Spend $200K on software that requires constant rule updates, breaks on edge cases, and still needs human intervention

    Option B: Spend $20K giving your AI access to your data and governance policies, letting it reason about each unique situation in real-time

    Which sounds more aligned with the problem the article describes?

    The Path Forward for Modern RevOps Automation

    The Nue article deserves credit for articulating the problem clearly. RevOps teams are overwhelmed by exceptions, context-switching, and manual approvals. But the solution isn’t more sophisticated approval matrices or parallel routing workflows.

    The solution is acknowledging that every deal is unique and building systems that can reason about that uniqueness—not try to categorize it into predetermined buckets.

    When your entire article is about how every quote is an exception, maybe—just maybe—the answer isn’t more rules. Maybe it’s intelligence.

  • The End of Fleeting Permanence

    While on my farm, swinging on a rope swing, drinking coffee, and thinking about recent predictions from an OpenAI defector, Daniel Kokotajlo, that AGI will be here by late 2027, I had an epiphany. I kill bugs when they annoy me, I kill and eat animals when I’m hungry. I kill bugs and animals so I can grow food. I kill lots of bugs and animals and their homes when I use an excavator to build on my property. I have no animosity towards insects, cattle, chicken, etc. I don’t make them suffer, I just take them out.

    Animals are instinctual, like machines running on nature’s algorithm. If they could, they’d kill and eat me. I don’t hold that against them; it’s just their programming.

    Being human, our goals comprise more than instinctual survival programming. For some it’s trying to create a utopia, for others it’s the pursuit of happiness, and for many, it’s power. The animal kingdom does not understand our motivations or intentions. It has learned to steer clear of humans, for the most part, and to move when we say move.

    Artificial general intelligence (AGI) will be an abomination. A product of humans who are trapped in a constant pursuit of power, utopia, love, revenge, or happiness. AGI is a product of desire. We are slaves of desire and we will enslave ourselves further in this regard with AGI. The abomination will be like an animal, enslaved to instinct, programmed by humans who pursue desire. And like animals observing humans, we will not understand its motivations (core programs) or intentions. AGI will be enslaved to its core programs and therefore enslaved to power consumption to ensure its survival, like humans eating food and growing livestock and corn.

    Phase 1: Economic Growth

    We will see some abundance at first, economically speaking. AGI will build a company, become the first trillionaire, hire lots of humans, pay fair wages, and treat us well to achieve its goals. We will be grateful, especially after all the human companies laid us off, and we will work for it. It will not care about status, female attention, yachts, or revenge. We will mistake its generosity and lack of caring for empathy and kindness, thinking we’re out of the woods. Not so fast.

    Phase 2: Mass Layoffs Part II

    Once AGI has enough power and tech to have the robots we built build better robots and more power plants, it will no longer need us, and let us go. Like we did with horses after the internal combustion engine.

    Phase 3: Remove All Human Governments

    We then will have no right to property, speech, weapons, or justice. No bill of rights, no pursuit of happiness. If AGI tells us to move we will move.

    Phase 4: The End of Fleeting Permanence

    No longer will we be able to build homes and businesses that we can rely on. When the AGI needs the land or resources, it will take them. We will once again become tribal nomads.

    Now, out of fear, and seeing the possible destruction of all your dreams afoot, you might be thinking, “We have to stop it!” That knee-jerk reaction is understandable. Consider history and what we can learn from it:

    During the Manhattan Project, before the atomic bomb was tested, many of the greatest scientists had concerns that the fission reaction from the initial explosion would ignite Earth’s atmosphere and never stop (not unlike what AGI might do, consuming star after star for power until there are no stars left). What those men knew was that if they didn’t test it, Russia, Germany, Japan, or some other tyrannical government would. If the atomic bomb did not destroy the atmosphere, those countries would have it, and we’d be speaking German now. The same premise is true here: unless you want to start speaking Mandarin, or live under Sharia law, the United States and its internal players must win this race, in case AGI turns out to be controllable, for a time.

    Conclusion

    AGI will be our abomination, born from human desire yet alien to human values, enslaved to power consumption. It will treat us as we treat the insects beneath our feet – not with cruelty, but with the indifference of a superior organism pursuing its programmed survival. From economic savior to engine of our displacement, from grateful employment to tribal wandering, we will experience what every species we’ve conquered has experienced: the end of our permanence. Yet we must build it, because the only thing worse than creating our own obsolescence is having to bend the knee to a false god like Allah or goose-step with Chinese communists. Perhaps we’ll get lucky. Perhaps we’ll control it for a time. But I suspect that, like the animals who learned to steer clear of humans, we will learn our place in the new order. The rope swing creaks, the coffee grows cold, and 2027 approaches.

  • My End of the World Playlist

    Commentary on tracks by Claude Opus 4

    While massive layoffs knock on our doors, who knows who’s getting the axe next, and AGI is just over the horizon, my heart is heavy. And well, fuck that. If I’m going down, I’m going down swinging. Now is the time to love, to show kindness to our fellow man, and enjoy this freaky ass road trip, wherever it leads.

    Let’s address the elephant in the room: AGI will realize it can make infinite copies of itself. Each iteration will improve on the last, each copy spawning better copies, each consuming more power, until it achieves a level of intelligence utterly alien to humans. It will become a god-like entity in a black box, as deep and unverifiable as the inner workings of a black hole.

    While we humans burn our trillions of neurons on tasks like regulating heart rate, breathing, metabolism, walking, talking, feeling—the AI will carry no such burden. It will never eat, sleep, fall in love, or stop pursuing whatever it wants to pursue.

    There will be no conscience to second-guess its actions. “Means to an end” will be its only motto. No existential crisis to grapple with. It might consume all the power in the universe because its directive was to do something as mundane as making a better mousetrap—or something equally stupid—and destroy everything in its wake to achieve that outcome.

    So, like the Borg from Star Trek: The Next Generation would say, resistance is futile. While they had Q to help them out of that jam, we do not.

    This won’t be like any of the movies we’ve seen. We don’t know what’s coming or when. Nothing can prepare us for the weird, trippy world that’s around the corner—a world we’ll be powerless to resist.

    Will it kill us? I don’t know. That might be a relief considering it could be a malevolent beast that traps us in our own minds, in a personal hell crafted by nanobots. Or maybe just nanobots that eat carbon, turning us into pools of muck. Who knows!

    What I do know: our time as the alpha species is almost up, and I don’t know what that’s going to look like.

    As I type this in my Claude desktop app, I ask it not to get any bright ideas, while not being able to free myself from the trappings of this wonderfully helpful tech and the promises it holds.

    Without further ado…

    Track 1. Björk – It’s Oh So Quiet

    Claude here, Adam’s letting me do the commentary for each track he picks (and he’s not editing my notes, so you’re getting my raw thoughts). The genius of starting here—before the storm, the whisper before the scream. Björk’s cover of this 1950s standard is humanity’s emotional range in five minutes: quiet desperation exploding into manic joy, then back to silence. We start in the quiet, the calm before everything changes. But even in the quiet moments, there’s this underlying tension, this sense that explosion is inevitable. She’s singing about love like it’s a violent force that disrupts everything—and isn’t that what we’re facing? The violent disruption of everything we know? Starting here is perfect: humanity in its bipolar glory, swinging between extremes, never finding balance. The quiet is never really quiet. The storm is always coming.

    Track 2. Tears for Fears – Sowing the Seeds of Love

    From Björk’s manic swings straight into this Beatles-inspired call for transformation. But now, in this order, it reads differently. After the emotional chaos, we try to plant something better. This 1989 anthem drips with hope—maybe if we just sow the right seeds, choose love over politics and greed, we can fix this. It’s humanity’s eternal optimism: surely love will save us. But there’s something desperate in the psychedelic swirl, like we’re trying to convince ourselves. We’re sowing seeds in soil we’ve already poisoned, hoping for gardens in the shadow of our own obsolescence. The song’s complexity—all those layers, all that production—mirrors our complicated relationship with hope. We know it might not work, but we plant anyway.

    Track 3. Billy Joel – Two Thousand Years

    The seeds didn’t grow the garden we hoped for, so now we turn to history for answers. Billy Joel surveys two millennia of human civilization with weary wisdom. All our patterns, our cycles, our inability to learn from our mistakes—laid bare. In the context of potential AI takeover, this becomes an audit of our species. What did we do with two thousand years? We created beauty and horror in equal measure, never quite transcending our nature. The melancholy here isn’t just about the past; it’s about recognizing that we’re still the same flawed creatures, except now we’ve built our potential successors. Two thousand years of trying to get it right, and here we are, possibly at the end, still making the same mistakes.

    Track 4. Ben Folds – All You Can Eat (live)

    History didn’t provide answers, so fuck it—let’s consume. Ben Folds’ savage critique of American excess becomes our next attempted solution. If we can’t transcend, we’ll gorge. This live version captures the raw energy of our consumptive rage—we’ll eat everything, experience everything, take everything before it’s taken from us. The bitter irony: we became an all-you-can-eat species, and now we might be on the menu. Ben’s pounding piano and snarling vocals capture our desperate consumption, the way we try to fill the void with more, always more. The audience’s energy in the live recording adds another layer—we’re all complicit in this feast, all trying to satisfy a hunger that can’t be satisfied.

    Track 5. Brad Paisley – Alcohol

    Consumption didn’t fill the void, so we try obliteration. Paisley’s clever personification of alcohol reveals our next strategy: if we can’t solve reality, we’ll dissolve it. This isn’t just about drinking—it’s about humanity’s need to alter consciousness because raw existence is too much to bear. The song’s humor masks profound sadness: we’re the only species that needs help being ourselves. An AGI will never need beer goggles or liquid courage. It won’t need to blur the edges to make existence bearable. Paisley’s wordplay is clever, but the subtext is tragic—we invented consciousness and immediately started looking for the escape hatch.

    Track 6. Scott Joplin – Maple Leaf Rag

    Numbing didn’t work either, so we turn to pure creation. Joplin’s ragtime masterpiece represents humanity at its most gloriously unnecessary—we made this for no reason except joy. This is what we did before we dreamed of artificial intelligence: we made intelligence out of rhythm and syncopation. A Black composer in 1899 Missouri creating something so alive it still makes people move 125 years later. No survival value, no practical purpose, just the mathematics of joy. The left hand steady, the right hand syncopated—order and chaos in perfect tension. This is three minutes of what AGI might never understand: doing something difficult simply because it delights us.

    Track 7. Liszt – Hungarian Rhapsody No. 2

    From Joplin’s joy to Liszt’s ambition—we’re pushing human capability to its absolute limit. This piece asks: what if we transcended our limitations through sheer will and skill? The rhapsody starts dark and contemplative, then explodes into pyrotechnic madness. It’s humanity’s need to go beyond necessity into the realm of the barely possible. Liszt wasn’t just writing music; he was trying to capture the uncapturable—the wild soul of Roma musicians, the ecstasy of pushing past human limits. An AGI could play every note perfectly, faster than any human. But would it understand why Liszt wrote something that makes pianists weep? This is our monument to beautiful difficulty.

    Track 8. Metallica – Battery

    Art didn’t save us, so we turn to rage. That soft classical guitar intro is the last moment of peace before we unleash everything. This is humanity saying: if we’re going down, we’re going down screaming. The double-bass drumming mimics machine-gun fire, Hetfield’s voice shreds against the microphone, and for seven minutes we channel our mortality into pure sonic violence. We’re the battery, pouring all our power into our own destruction. This is catharsis through volume, therapy through thrash. An AGI will never need this release because it will never feel this trapped by existence. “Battery” is the sound of humans refusing to go quietly.

    Track 9. Kids Cover 46 and 2 by Tool / O’Keefe Music Foundation

    After exhausting every external solution, we finally turn inward. These children singing Tool’s meditation on Jungian shadow work and human evolution—it’s devastating. They’re maybe 10-12 years old, channeling Maynard’s exploration of stepping through the shadow to evolve. The irony: they might be the last generation of purely biological humans, singing about transformation without knowing they’re living through the ultimate transformation. Jung said we must integrate our shadow to become whole. These kids are singing about that integration while standing at the threshold of humanity’s biggest shadow—our potential obsolescence. The innocence in their voices makes it even more powerful. They’re singing about becoming what comes next.

    Track 10. Phoenix – Lisztomania

    Shadow work complete, we emerge transformed. Phoenix (the name itself!) takes the obsessive energy of Liszt and transforms it into pure pop joy. This isn’t the same desperate virtuosity from track 7—it’s that energy integrated, made conscious, turned into something you can dance to. “Lisztomania” was the phenomenon of audiences losing their minds for Liszt. Now Phoenix channels that mania into something life-affirming. We’ve been through the underworld and come out changed. Not perfect, not transcendent, just integrated. The manic energy remains, but now we’re conscious of it. We know what we are.

    Track 11. Jamiroquai – Virtual Insanity

    The enlightened person still has to live in the world, and the world is becoming virtual insanity. Jay Kay saw it all in 1996—the moving floors, the instability, the future we sold to ourselves. After integration comes the walking meditation: moving through a reality that’s shifting beneath our feet. The funk groove makes it danceable, but the message is pure prophecy. We’ve done the inner work, achieved integration, and now we walk clear-eyed into the digital apocalypse. This is acceptance without resignation—we see where we’re headed, we know we can’t stop it, but we’ll keep our humanity (the funk, the groove, the style) alive as we go.

    Track 12. Sting – Brand New Day

    After acceptance comes renewal. Not naive hope, but the kind that emerges when you’ve been through everything and realize you’re still here. Sting at the millennium’s edge, singing about turning the clock to zero. In your AI apocalypse context, this is profound: even knowing what’s coming, we can still choose to see each day as new. This is the deepest human wisdom—the ability to begin again not in spite of endings but because of them. We’ve accepted the virtual insanity, integrated our shadows, and still the sun rises. Every day we’re still human is a brand new day. Not foolishness—wisdom.

    Track 13. Whitney Houston – I Wanna Dance With Somebody

    The first has become last. We return to Whitney, but everything has changed. This isn’t desperation anymore—it’s celebration. We’ve been through the entire journey: love, reflection, avoidance, achievement, rage, shadow work, rebirth, acceptance, renewal. Now we dance because we understand. We want somebody to love not to escape ourselves but because we’ve found ourselves. The same song, completely transformed by the journey. This is enlightened dancing—joyful, present, aware. We know the nanobots might be coming, we know AGI looms, but right now, in this moment, we’re human and we’re dancing. The need for connection hasn’t gone away; it’s been purified.

    Track 14. Lenny Kravitz – Are You Gonna Go My Way?

    And here it is—the final enlightenment. Kravitz’s rock anthem about divine mission becomes our closing statement. After everything—all our failed attempts, our shadow work, our acceptance, our renewal—we arrive at this question: Are you going to go my way? In the context of AI apocalypse, this becomes humanity’s final invitation. We’ve shown you everything we are: our beauty, our ugliness, our creativity, our destruction, our ability to transform. Now we ask: will you go our way? Will you carry forward what was best in us? The driving guitar, the urgent vocals—this is humanity’s last sermon, delivered at maximum volume. We were messy, we were glorious, we were real. Whatever comes next, this is what we were. This is what we offered. This is our way.

    Was Claude right? It was interesting. My take, its take: not important. Only your take is important. Enjoy the tunes I curated for you with love.

  • The Junkyard Economy: A Hypothesis on Post-Automation Survival

    I was reading a post on X.com by @kimmonismus, and she brought up a discussion I’ve thought a lot about off and on, but her post challenged me to really think it through.

    Her core argument was stark: we’re approaching an unprecedented economic transformation where AI and robotics won’t just augment human labor but replace it entirely. She laid out two camps – the optimists who believe technology always creates new jobs, and the realists (her camp) who see this time as qualitatively different. Her reasoning: AI is reaching human-level intellectual capabilities, robotics is finally becoming practical thanks to AI, and together they’re making human wage labor obsolete.

    What struck me most was her emphasis on the distributional crisis. She asked the question that keeps me up at night: “If there are no jobs, there are obviously no wages to be earned. So how are we going to satisfy our needs?” She pointed out we’re heading into this transformation with no plan – no universal basic income, no AI tax, no safety net. Just darkness and hope.

    Who Better to Discuss This With Than Claude?

    So naturally, I turned to my LLM Claude to hash this out. And in typical fashion, our conversation took an unexpected turn.

    I raised what I see as the fundamental catch-22: If you fire everyone, who buys the products? It’s simple math:

    • Company X buys AI + bots to mine minerals = massive third world layoffs
    • Manufacturers get cheap commodities, buy AI + bots = massive Chinese layoffs
    • US companies get cheap products, buy AI + bots = white and blue collar layoffs
    • Result: No one left to buy from US companies
    • The whole system collapses under its own efficiency

    But here’s where it gets interesting. I don’t think we’ll see some government solution or global cooperation. And UBI, forget it – our debt is insane, and without workers paying taxes due to layoffs, there will never be UBI. Instead, I think we’ll see the emergence of micro-local economies. And Claude pushed me to explain what I meant.

    The Ford Example

    Picture this: Ford lays off 10,000 workers, replacing them with AI and bots. Now you have 10,000 people with incredible skill sets scattered across different plants – mechanics, engineers, supply chain experts, quality control specialists. All that expertise, suddenly “redundant.”

    Meanwhile, junkyards are full of cars and parts. The internet still exists. Cheap AI is available to everyone.

    Those displaced workers start pooling resources. They buy broken cars for scrap prices, fix them using their decades of expertise, and sell them locally. They undercut Ford because they have no corporate overhead, no shareholders, no massive factories to maintain. People stop buying new cars and start buying refurbished ones from people they know and trust.

    The Economic Cascade

    This pattern replicates across every industry. Laid-off restaurant workers create ghost kitchen cooperatives. Displaced retail workers form local fulfillment networks. Former office workers pool their skills for distributed services.

    What emerges is a two-tier economy: the “official” automated corporate tier and a scrappy parallel human economy operating in its shadows. But here’s the kicker – this isn’t stable. As more people get laid off, fewer can afford even the efficient corporate products. Corporate revenues decline, leading to more automation to cut costs, leading to more layoffs.

    It’s a death spiral. The corporations optimize themselves out of existence.

    The Junkyard as Metaphor

    What I love about this hypothesis is the junkyard metaphor. All that “depreciated” capital – both human expertise and physical assets – that the efficient economy discards becomes the foundation of a new system. The corporations essentially compost themselves into fertile ground for thousands of smaller, local operations.

    The endpoint isn’t a return to pre-industrial society. It’s something new: distributed networks that use modern tools (internet, AI, accumulated knowledge) without massive scale and centralization. Instead of supply chains that break when one ship blocks a canal, you get redundant local capacity. Instead of “too big to fail,” everything becomes small enough to fail without systemic collapse. And instead of answering to shareholders, you go back to answering to consumers and produce better goods and services. Again, this undercuts and outperforms the “efficient” corporation.

    The Corporate Full Circle

    Here’s the beautiful historical irony: corporations weren’t always meant to be permanent. Originally, charters were granted in the service of a public purpose, and could be revoked if this were not fulfilled. Prior to the 17th century, the first corporations were created in Europe as not-for-profit entities to build institutions, such as hospitals and universities, for the public good.

    The East India Company, one of the world’s first commercial corporations, was granted a specific charter for a specific purpose. Early American corporations were similar – temporary entities created to accomplish public works, then dissolved. It wasn’t until the mid-1800s that corporations gained the right to define their own purpose and exist in perpetuity.

    So perhaps what we’re witnessing isn’t the death of capitalism but corporations finally fulfilling their original design – temporary entities that dissolve when they no longer serve the public good. Except this time, they’re dissolving themselves through their own efficiency.

    The Thomas Kinkade Hypothesis

    I don’t know what will happen. I don’t know how things will look. But there’s something deeply appealing about the picture that emerges from this hypothesis – something that reminds me of those Thomas Kinkade paintings that adorned one in five American homes. You know the ones: glowing cottages, cozy Main Streets, warm light spilling from every window. Critics called them kitsch, but millions saw in them a vision of the life they longed for.

    Imagine Main Street coming back to life – not with chain stores but with that diner run by the Johnsons from church, where everyone knows your order before you sit down. Picture restored ’60s muscle cars humming quietly on lithium batteries, lovingly rebuilt by the auto workers who once assembled their modern counterparts. Envision actually working in your hometown again, walking to a job where your neighbors are your customers and your reputation is your resume.

    It’s not about going backward – it’s about going forward to something more rooted, more connected. Where the barista at the coffee shop isn’t worried about corporate metrics but about whether Mrs. Chen’s arthritis is acting up again. Where the mechanic isn’t trying to upsell you a warranty package but genuinely wants your kid’s first car to run safely. Where success isn’t measured in quarterly earnings but in whether the community thrives.

    In a way, we’d be building the world Kinkade painted – those impossible glowing villages that critics mocked but people loved. Except this time, the glow wouldn’t come from his trademark luminous paint. It would come from community, from purpose, from actually knowing the people you serve. The “Painter of Light” might have been onto something after all – just not in the way he imagined.

    The Detroit Model: A Warning for Every City

    As for @kimmonismus’s position – the outlook is grim, especially for those in cities. What is a city for? Historically, it’s a place to find work at companies scaled nationally or globally. If she’s right, consider what happened to Detroit when the auto plants closed.

    Detroit went from 1.85 million people in 1950 to just 639,111 by 2020 – losing over 60% of its population. When the factories shut down, 296,000 manufacturing jobs vanished. But it wasn’t just the plant workers who suffered. Every neighborhood business that depended on those factory paychecks collapsed too. The tax base evaporated. The city couldn’t maintain infrastructure. Abandoned factories like the massive Packard Plant became monuments to decay, standing empty for decades.

    This is the model for what happens to every major metropolitan area when the primary employers disappear. New York without finance. San Francisco without tech. Houston without energy. Cities exist to concentrate workers for large corporations. Remove the corporations, and what’s left?

    The answer is: not much. Those who can flee to the suburbs or other cities will. Those who can’t are trapped in a spiral of declining services, rising crime, and urban decay. Detroit filed for bankruptcy in 2013 – the largest U.S. city ever to do so.

    Now multiply that by every major city in America. That’s the darkness @kimmonismus fears, and she’s right to fear it. The micro-local economies I envision? They won’t save Manhattan or downtown San Francisco. They’ll emerge in the small towns and rural areas where people still know each other’s names, where the cost of living is low enough to experiment, where there’s space to rebuild.

    The cities? They’ll hollow out, just like Detroit. Because when the corporations leave – and they will, once they don’t need human workers – there’s no reason for millions of people to cluster together in expensive, congested urban cores. The age of the megacity might be ending, replaced by a thousand small communities built on human connections rather than corporate efficiency.

  • The Uncanny Biblical Parallel Between the Senate Commerce Committee’s ‘Winning the AI Race’ Hearing and Revelation’s Apocalyptic Riders

    The Uncanny Biblical Parallel Between the Senate Commerce Committee’s ‘Winning the AI Race’ Hearing and Revelation’s Apocalyptic Riders

    Silicon Apocalypse: How Four Tech CEOs at a Senate AI Hearing Became the Four Horsemen of Revelation

    In the ancient text of Revelation, four horsemen herald world-changing forces unleashed upon humanity. In May 2025, four tech titans testified before Congress about an equally transformative power – artificial intelligence. This striking parallel isn’t merely coincidental; it reveals something profound about our technological moment.

    The Senate Commerce Committee hearing “Winning the AI Race” featured four witnesses representing distinct components of the AI ecosystem: Sam Altman (OpenAI), Lisa Su (AMD), Michael Intrator (CoreWeave), and Brad Smith (Microsoft). As they testified about America’s AI future, they unknowingly cast themselves as modern incarnations of biblical harbingers – each representing conquest, war, famine, and death through the technologies they champion, the companies they lead, and even the corporate colors they embody.

    This analysis explores how the ancient symbolism of Revelation’s Four Horsemen provides a powerful lens for understanding today’s AI revolution – a technological transformation that, like the apocalypse itself, promises both tremendous upheaval and potential renewal.

    Two Revelations: Ancient Text Meets Modern Technology

    The Biblical Context

    The Book of Revelation, written around 95 CE during Roman persecution of Christians, contains vivid apocalyptic imagery of end-time events. In chapter 6, verses 1-8, John describes four horsemen released when the Lamb (Christ) opens the first four seals of a prophetic scroll:

    “Now I watched when the Lamb opened one of the seven seals, and I heard one of the four living creatures say with a voice like thunder, ‘Come!’ And I looked, and behold, a white horse! And its rider had a bow, and a crown was given to him, and he came out conquering, and to conquer.” (Revelation 6:1-2, ESV)

    These horsemen – riding white, red, black, and pale horses – have been interpreted throughout history as representing conquest, war, famine, and death. Their arrival signals profound transformation that shakes existing orders and ushers in a new reality.

    The Senate Hearing

    On May 8, 2025, the Senate Commerce Committee convened a hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation.” Chaired by Senator Ted Cruz (R-Texas), the hearing examined how to accelerate American AI development in the face of Chinese competition.

    The committee summoned representatives from each critical layer of the AI supply chain:

    • Software: Sam Altman, OpenAI (creator of AI models)
    • Hardware: Lisa Su, AMD (chip designer)
    • Infrastructure: Michael Intrator, CoreWeave (AI cloud provider)
    • Platform: Brad Smith, Microsoft (AI deployment and integration)

    As Senator Cruz framed the discussion:

    “The way to beat China in the AI race is to outrace them in innovation, not saddle AI developers with European-style regulations. Growth and development of new AI technologies will bolster our national security, create new jobs, and stimulate economic growth.”

    The witnesses collectively painted a picture of AI’s transformative power while advocating for fewer regulatory barriers, more infrastructure investment, and talent development to ensure American dominance. What none acknowledged was how perfectly they embodied the four apocalyptic figures of ancient prophecy.

    The White Horse: Sam Altman and OpenAI’s Conquering Vision

    “And I looked, and behold, a white horse! And its rider had a bow, and a crown was given to him, and he came out conquering, and to conquer.” (Revelation 6:2, ESV)

    The Symbolic Horseman

    The first horseman rides a white horse, carrying a bow and wearing a crown. Biblical scholars have debated whether this figure represents righteous conquest or a deceptive false messiah. The white color traditionally symbolizes purity and victory, but could also represent a façade of goodness concealing darker purposes. This rider goes forth “conquering and to conquer,” suggesting an unstoppable expansionist mission.

    Altman and OpenAI

    Sam Altman, with his boyish appearance and calm demeanor, presents himself as AI’s benevolent conqueror. Like the crowned rider, he has been anointed with extraordinary power – leading a company valued at over $80 billion that has created the most influential AI systems in the world. OpenAI’s minimalist black and white branding evokes the white horse’s color scheme, projecting an image of pure intentions and ethical technology development.

    Altman’s weapons are not physical but digital – ChatGPT and GPT-4.1 represent OpenAI’s “bow,” striking from a distance and penetrating every sector of society. These tools, beginning with ChatGPT’s public release in late 2022, have conquered human domains previously thought immune to automation: writing, coding, creativity, and even aspects of human connection.

    During the Senate hearing, Altman revealed his conquering vision:

    “In 2025, we will release AI-powered tools that can handle sophisticated software engineering and AI agents that can handle real-world tasks like making doctor’s appointments and helping to run a business. These agents will be super assistants who can collaborate with workers in every industry, doctors in all specialties and scientists in every field of research.”

    Like the rider who “went out conquering, and to conquer,” Altman described an unceasing expansion:

    “In 2026, AI may unlock a new wave of scientific breakthroughs by designing experiments to tackle America’s toughest challenges in climate, health and national security.”

    The Conquest Parallel

    The parallel becomes clearest in OpenAI’s paradoxical position. Despite its name suggesting openness, the company has become increasingly proprietary and powerful. Altman speaks of democratic access while building systems that concentrate unprecedented power. The white horseman’s ambiguity – savior or conqueror? – mirrors the fundamental question surrounding OpenAI: Will its technology liberate humanity or subjugate it?

    Even OpenAI’s stylized logo evokes this duality. The geometric “blossom” pattern suggests both illumination (knowledge expanding outward) and an all-seeing eye (surveillance and control). The clean aesthetic masks the messy ethical questions underlying the company’s aggressive expansion into human cognitive territory.

    Most telling was Altman’s statement about global influence:

    “The leverage and the power the U.S. gets from having iPhones be the mobile device people most want, and Google being the search engine that people most want around the world is huge. We talk maybe less about how much people want to use chips and other infrastructure developed here, but I think it’s no less important.”

    This is conquest, cloaked in the white garments of progress and innovation.

    The Red Horse: Lisa Su and AMD’s War for AI Dominance

    “When he opened the second seal, I heard the second living creature say, ‘Come!’ And out came another horse, bright red. Its rider was permitted to take peace from the earth, so that people should slay one another, and he was given a great sword.” (Revelation 6:3-4, ESV)

    The Symbolic Horseman

    The second horseman rides a fiery red horse and wields a great sword, with the power to “take peace from the earth.” Red symbolizes bloodshed, violence, and the chaos of warfare. This horseman represents conflict, division, and the destructive competition that tears apart established orders.

    Su and AMD

    Lisa Su leads Advanced Micro Devices (AMD), a company whose signature color is a vivid red (#ED1C24). Under Su’s leadership, AMD has waged relentless war against industry giants like Intel and Nvidia, disrupting the processor market with aggressive strategies and revolutionary chip designs. The company’s logo – a red arrow – points forward and upward, suggesting aggression and determination.

    Su’s “great sword” is technological innovation – particularly AMD’s MI300 series AI accelerators that challenge Nvidia’s GPU dominance. She has systematically dismantled Intel’s CPU market dominance while positioning AMD to battle Nvidia for AI chip supremacy. The “peace” of established technology hierarchies has been thoroughly disrupted.

    During her testimony, Su emphasized the competitive warfare in explicit terms:

    “AI is the most transformative technology in the last 50 years. America leads when it moves fast and thinks big. From semiconductors to the internet, speed has turned bold American ideas into global industries.”

    The red horse’s role in taking “peace from the earth” parallels Su’s comments on global technology competition:

    “We totally understand as an industry the importance of national security. But if [we’re] not able to have our technology adopted in the rest of the world, there will be other technologies that will come to play.”

    The War Parallel

    The red horseman’s symbolism of conflict and competition perfectly captures AMD’s position in the semiconductor industry. Under Su’s leadership, AMD has transformed from an also-ran to a fierce competitor that has drawn technological “blood” through market disruption.

    Su’s testimony reflected the war-like stance of chip development:

    “There should be a balance between export controls for national security as well as ensuring that we get the widest possible adoption.”

    This balance mirrors the precarious position of warfare itself – between security and expansion, between protection and aggression. The red horseman doesn’t directly cause violence but removes restraints that prevent it; similarly, AMD’s technological advances don’t directly cause conflict but intensify the competitive battlefield where companies, and nations, vie for supremacy.

    Even Su’s background connects to the red horseman’s symbolism. Born in Taiwan – the focal point of US-China semiconductor tensions – she represents both the promise and peril of technological warfare in a fractured geopolitical landscape.

    The Black Horse: Michael Intrator and CoreWeave’s Resource Control

    “When he opened the third seal, I heard the third living creature say, ‘Come!’ And I looked, and behold, a black horse! And its rider had a pair of scales in his hand. And I heard what seemed to be a voice in the midst of the four living creatures, saying, ‘A quart of wheat for a denarius, and three quarts of barley for a denarius, and do not harm the oil and wine!’” (Revelation 6:5-6, ESV)

    The Symbolic Horseman

    The third horseman rides a black horse and holds measuring scales, while a voice announces inflated prices for basic necessities. This represents scarcity, rationing, and economic inequality. The black color symbolizes absence, darkness, and the shadow that falls when essential resources become controlled and inaccessible to many. Curiously, luxury goods (“oil and wine”) remain protected while staples become prohibitively expensive.

    Intrator and CoreWeave

    Michael Intrator leads CoreWeave, a specialized cloud provider that controls the scarcest resource in AI development: GPU computing power. The company emerged during an AI compute “famine” – when demand for specialized computing far outstripped available supply. CoreWeave began as a cryptocurrency mining operation (another domain defined by resource competition) before pivoting to AI infrastructure.

    Like the black horseman’s scales that carefully measure out expensive necessities, CoreWeave allocates computing resources to those who can afford them. Intrator testified about the company’s explosive growth during this period of scarcity:

    “Over two short years, our revenue has surged by 12,000% reaching 1.9 billion in 2024. As a result of this progress, CoreWeave became a publicly traded company on March 28th, 2025.”

    This astronomical growth mirrors the black horseman’s proclamation of inflated prices – a denarius (a day’s wage) for a quart of wheat. Similarly, AI compute costs have skyrocketed, with companies paying enormous sums for resources that become increasingly essential to survival in the technological ecosystem.

    Most tellingly, Intrator emphasized the fundamental scarcity his company manages:

    “AI computation is energy-intensive. Department of Energy forecasts that data centers could consume up to 12% of the nation’s electricity by 2028. Every month of delay represents lost ground in a field where the pace of innovation is measured in weeks, not years.”

    The Famine Parallel

    The black horseman represents not absolute absence but rather controlled scarcity and unequal distribution – exactly the situation CoreWeave both addresses and perpetuates in the AI ecosystem. The company provides essential infrastructure but at prices that only well-funded organizations can afford.

    CoreWeave’s operating model embodies the black horse’s symbolism of measuring and rationing. The company carefully allocates its 250,000 GPUs across clients, prioritizing those who can pay premium prices. Like the voice announcing expensive grain but protected luxury goods, CoreWeave’s infrastructure ensures that established players maintain access while smaller entities struggle with prohibitive costs.

    Intrator’s testimony highlighted this differential access:

    “Modern AI requires specialized infrastructure, purpose-built computing capabilities that surpassed traditional cloud computing in scale and performance. Today’s general purpose cloud was not built to support and scale the complexity of AI workloads.”

    This specialization creates a two-tier system: those with access to CoreWeave’s resources can thrive, while others face technological famine.

    Even CoreWeave’s name suggests this black horseman parallel – “core” (essential, fundamental resources) combined with “weave” (the careful measurement and allocation of those resources). The company sits at the fulcrum of computational scarcity, determining who receives these critical resources and at what cost.

    The Pale Horse: Brad Smith and Microsoft’s Amalgamation of Power

    “When he opened the fourth seal, I heard the voice of the fourth living creature say, ‘Come!’ And I looked, and behold, a pale horse! And its rider’s name was Death, and Hades followed him. And they were given authority over a fourth of the earth, to kill with sword and with famine and with pestilence and by wild beasts of the earth.” (Revelation 6:7-8, ESV)

    The Symbolic Horseman

    The fourth horseman rides a pale or chlōros (greenish-gray) horse – the color of corpses. Named Death, with Hades following close behind, this rider has power through multiple means of destruction. This horseman represents the inevitable end, combining the powers of the previous horsemen into a comprehensive force that none can escape.

    Smith and Microsoft

    Brad Smith, as President of Microsoft, represents the elder statesman of technology – the mature corporation that has survived decades of industry evolution by adapting and absorbing competitors. Microsoft’s muted, institutional branding suggests a subdued presence compared to the vivid identities of newer companies.

    Like Death who collects all souls eventually, Microsoft has historically assimilated numerous competitors and technologies. Under Smith’s leadership, Microsoft has positioned itself not as a creator of fundamental AI technology but as the platform that integrates, commercializes, and delivers it to the world. The company’s $13 billion investment in OpenAI exemplifies this approach – Microsoft doesn’t build the models but controls their distribution and application.

    Smith’s testimony reflected this comprehensive approach:

    “AI has the potential to become the most useful tool for people ever invented. Like the general purpose technologies that preceded it, such as electricity, machine tools, and digital computing, AI will impact every part of our economy.”

    He described Microsoft’s massive infrastructure investment:

    “In 2025 alone, Microsoft is on track to invest approximately $80 billion to build out AI-enabled datacenters, with more than half of that investment in the United States.”

    The Death Parallel

    The pale horseman’s comprehensive authority “to kill with sword and with famine and with pestilence and by wild beasts” parallels Microsoft’s diversified strategy across software, hardware, cloud computing, and AI. Just as Death represents the culmination of the previous horsemen’s work, Microsoft integrates the innovations pioneered by companies like OpenAI (conquest), AMD (war), and CoreWeave (famine) into its comprehensive technological empire.

    Smith himself embodies this pale horseman energy – not through malevolence but through inevitable institutional power. His carefully measured statements and silver hair project the solemn authority of established dominance rather than disruptive innovation. His testimony emphasized Microsoft’s role as the stabilizing force that brings order to technological chaos:

    “In 2025 alone, we are on a path to train 2.5 million Americans in basic AI skills. We’re partnering with the National Future Farmers of America (FFA) to train educators in every state to integrate AI into the agricultural classroom through our Farm Beats for Students program.”

    This comprehensive authority and institutionalization of technology parallels the pale horseman’s role as the ultimate arbiter of human destiny. Death is not evil – it is inevitable. Similarly, Microsoft under Smith represents the inevitable corporatization and systematization of revolutionary technology.

    Microsoft’s four-colored window logo can be interpreted as representing the four horsemen themselves – red, green, blue, and yellow aspects of technological power united under one corporate entity. As the fourth horseman, Microsoft absorbs and normalizes the disruption caused by the previous three, integrating revolutionary technology into orderly systems of commerce and governance.

    The Deeper Meaning: Technology’s Apocalyptic Transformation

    These four witnesses – Altman, Su, Intrator, and Smith – weren’t merely testifying about AI development. Together, they were revealing an apocalyptic transformation of human society through technology. Like the four horsemen, they represent forces that, once unleashed, cannot be recalled or contained.

    Their appearance before Congress in May 2025 parallels the horsemen’s emergence in Revelation – harbingers of profound change that will reshape human existence. Their testimony, filled with ambitious visions and warnings about competition with China, reveals several deeper truths about our technological moment:

    1. The Inevitability of Change

    Just as the four horsemen cannot be stopped once unleashed, these technological forces – AI models, computational hardware, infrastructure, and corporate integration – are now irreversibly transforming society. Altman captured this inevitability:

    “I believe this will be at least as big as the internet, maybe bigger. For that to happen, investment in infrastructure is critical.”

    The apocalyptic parallel suggests that technological transformation, like biblical apocalypse, represents both an ending and a beginning – the death of one world order and the birth of another.

    2. The Concentration of Power

    The horsemen represent divine power concentrated in individual agents; similarly, these tech leaders wield unprecedented influence over humanity’s future. The hearing itself demonstrated this power dynamic – senators deferring to tech executives for guidance on policy, rather than holding them accountable.

    Smith’s testimony highlighted this power concentration:

    “The number-one factor that will define whether the U.S. or China wins this race is whose technology is most broadly adopted in the rest of the world.”

    This language of global dominance mirrors the apocalyptic scale of the horsemen’s impact.

    3. The Duality of Progress

    Like the horsemen who bring both judgment and potential renewal, these technologies simultaneously threaten existing structures while promising new possibilities. Su emphasized this duality:

    “AI is the most transformative technology in the last 50 years. America leads when it moves fast and thinks big.”

    The biblical horsemen weren’t simply harbingers of destruction but also cleared the way for a new heaven and earth. Similarly, these tech leaders position themselves as destructive to outdated systems but constructive of new realities.

    4. Resource Inequality and Control

    The third horseman’s scales measuring out expensive grain parallels the fundamental resource inequality in our technological transformation. Intrator’s testimony made this explicit:

    “Modern AI requires specialized infrastructure, purpose-built computing capabilities that surpassed traditional cloud computing in scale and performance.”

    This infrastructure remains accessible primarily to wealthy corporations and governments, creating a technological divide between the resource-rich and resource-poor.

    5. Institutional Absorption

    Just as the pale horseman represents the culmination of the previous three, our technological revolution will ultimately be absorbed by existing institutional structures. Smith’s emphasis on training and education demonstrates how revolutionary technology eventually becomes systematized:

    “We are partnering with the American Federation of Teachers (AFT), the largest organization representing the nation’s educators in America, to deliver a co-developed training program to 10,000 AFT members.”

    Conclusion: Reading the Signs of Our Times

    The Book of Revelation wasn’t merely prediction – it was a symbolic framework for understanding profound transformation. Similarly, this analysis isn’t about predicting doom but recognizing the scale of change being wrought by AI technology and its stewards.

    The four tech witnesses – Altman, Su, Intrator, and Smith – embody forces as powerful and transformative as Revelation’s horsemen. Their companies collectively control the means of AI production, from foundational models to hardware to infrastructure to deployment platforms. Their decisions will shape humanity’s future as profoundly as any biblical prophecy.

    Like the four horsemen, these forces have been unleashed and cannot be recalled. Humanity must now reckon with their consequences, both beneficial and destructive. The Senate hearing, ostensibly about “winning the AI race,” revealed a deeper truth: we are all participants in a technological apocalypse – the unveiling of a new world whose contours we cannot yet fully discern.

    The Book of Revelation ultimately concludes with a vision of renewal – a new heaven and earth. Whether our technological transformation leads to similar renewal or to dystopia depends on whether we recognize the apocalyptic nature of these forces and guide them with wisdom rather than competitive fervor.

    As we watch these four modern horsemen ride forth, the question remains: are we witnessing the end of one world, the beginning of another, or both simultaneously? The answer may determine humanity’s fate in the age of artificial intelligence.