SITALWeek #457

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: I walk through the efficiency gains that tamped down tech sector growth over the last 25 years. We take a look at the shift from selling software to selling intelligence, and what that means for overall technology demand growth in the next couple of decades. Also this month: AI shopping agents on Amazon, Gemini's new robots, a contrarian take on demand for software engineers, AI college students, bodyoids, world building AI with AI, Griffin Mill, and a link to the latest NZS Capital quarterly letter.

Stuff about Innovation and Technology
Intelligence as a Service
Back in SITALWeek #332 (January 2022), I wrote the following about AI:
I am a big fan of the 2014 Spike Jonze film Her, which addresses the complicated relationship between people and AI chatbots. Unlike other AI sci-fi plots that revolve around science we may not see this century, I like Her because it uses a plausibly close technology…We humans tend to be very good at anthropomorphizing things, especially if they are human-mimetic. While today’s AI bots lack the context they need to achieve the realism of the imagined companions in Her, it’s not hard to see how these algorithms could become much more sophisticated in the imminent future. For example, Meta’s new supercomputer contains 16,000 Nvidia GPUs and will be able to train models as large as an exabyte with more than a trillion parameters. The new compute engine is 60% larger than Microsoft’s latest effort, as the large cloud platforms race to train larger and larger models for language, images, and other AI models. I believe the reason for this arms race in AI models is because personal chatbot companions are likely to emerge as the center of everything we do in the digital and real worlds. As aware agents that know you well and have access to your accounts, messages, and apps, chatbots are ideally positioned to displace the tools we use today like Google Search and other habitual apps. Think of a tool like Google Search, but with an intimacy that is different for each user. The data privacy implications are massive, and, unfortunately with billions of dollars of R&D to build and test these new services, the incumbent platforms, all of which have terrible track records when it comes to privacy, are likely to win. However, it would not be unprecedented to see a newcomer enter the market, and I hope we do. And, with AR glasses arriving in the next few years, your chatbot will also walk side by side with you and sit down on the couch for a conversation. 
The metamorphosis of a chatbot into a seemingly alive, personal companion via reality-bending AR glasses will be the next punctuated equilibrium for humans' coevolution with technology.
 
Written before ChatGPT, this 3+ year-old prediction strained credulity at the time. With this formerly farfetched future now squarely on our doorstep, I have been thinking about the tech industry evolving from selling applications to selling intelligence. The technology hardware industry has broadly faced decades of step-function efficiency improvements (multiplying Moore’s Law) that acted as a headwind to demand growth. I hypothesize that the transition to selling intelligence could turn that efficiency headwind into a tailwind. 
 
Spurring this curiosity about the trajectory of hardware spending is my recent obsession with Gemini’s live camera share on my Pixel 9 Pro. It’s mind-blowing to have a team of AI agents looking over my shoulder, analyzing real-time images/video to assist with problem solving. Even mundane examples are a revelation: last week, a grease cap went missing from one of my trailers’ axles. It was an obscure part, and the local trailer shops seemed to lack the, um, intelligence to get me the right part number. I shared a live video of the wheel with Gemini from my phone. Gemini asked me the model of the trailer, so I moved the camera over to the VIN sticker. Then Gemini set up a team of AI web researchers, asked me a few more follow-up questions, and a couple of hours later came back with a response. This experience left me wondering: were there really multiple AI agents scouring the web and cross-consulting for hours to solve the mystery of this little $5 part? It feels so sci-fi to be living in a realized version of the movie Her with AI agents that can both see what I see and exist in a separate conversational dimension. (Aside: Tinder recently went a step further toward Her with the ability to practice dating an AI.) If this type of resource-intensive experience is to become routine for the population at large, the underlying hardware/software will need to make unprecedented leaps and bounds in terms of efficiency gains, but the nature of scaling intelligence may make that difficult.
 
There are always two sides to the ongoing efficiency gains in the IT hardware industry: while selling more power and speed for less money shrinks the potential market, it also grows the potential use cases. Typically, these factors have combined to produce steady, but surprisingly unimpressive, revenue growth for technology hardware. This has been true since the start of the modern computing era, when monolithic mainframes were akin to companies operating their own power plants in the early days of electricity. Following mainframes, the subsequent phase of enterprise computing became known as the client/server era. In this expansion of the IT hardware industry, companies operated their own data centers, with servers running individual apps, large storage arrays, networking gear, and an army of desktop- and laptop-outfitted employees, etc. More useful than mainframes? Perhaps. Efficient? Definitely not. Sometime around the late 1990s and early 2000s, soaring enterprise software usage and data creation necessitated a focus on efficiency gains. This demand collided with the rapid rollout of broadband Internet, creating the groundwork for the next phase in enterprise IT: the cloud. In the early days of connected computing, specialized companies known as application service providers would host software in data centers for multiple other companies, but that practice never really permeated the industry, and there was a gap before modern cloud computing took hold. In the meantime, a technology came along that was often described by chief technology officers as a cure for cancer: virtualization. The rise of VMware, multi-core processors from Intel, and open-source operating systems like Linux all led to large efficiency gains in enterprise data centers and, ultimately, the modern cloud compute stacks that powered AWS (and then Azure, etc.). Parallel to this effort was the rise of massively efficient data centers at the large consumer platforms like Google Search, Facebook, etc. 
Moving applications from inefficient, dedicated servers and storage in the 1990s to virtualized workloads to SaaS to the modern, present-day cloud has been a nearly incalculable wave of efficiency gains (actually, I am sure someone has done the calculation, and I suspect it’s many orders of magnitude!). In the wake of this prolific adoption of affordable IT, there’s been an explosion in apps for broad use cases as well as industry-specific apps and services, not just for enterprises, but also consumers (think Netflix, Uber, TikTok, etc.). As a side note, smartphones have bucked this hardware efficiency trend, as they’ve experienced a large – but relatively inefficient – growth in compute power and usage demand. Indeed, the supercomputer in your pocket (or next to your pillow) sits woefully idle and underutilized compared to a modern cloud data center running at high efficiency 24/7, a fact that’s reflected in stubbornly expensive pricing trends for smartphones. 
 
The AI platform shift that’s now underway appears to be a pivot from selling software to selling intelligence. Simplistically, you can think of software as writing code once and then executing it efficiently forever (with updates along the way). In contrast, selling intelligence is an ever-changing and evolving conversation that is far more complex, valuable, and hardware intensive (while every copy of a piece of software is the same, every conversational AI instance/reasoning chain will vary due to the nature of tokenization of language; see You Auto-Complete Me). While we’ve seen massive efficiency gains and price decreases from AI already, we are still at a price point where an artificial agent is on par with the cost of a human worker, marking a significant change from software sold for a tiny fraction of an employee’s wages (see The Principal-Agent Problem of AI Platforms and the Timing for Mass Market AI for more details on this, including the concept of time scaling AI). Intelligence, intuitively, seems like a more resource-intensive activity than looking up a number in a database or finding correlations between data (the simple code execution that operates today’s cloud computing software isn’t necessarily dumb, but I wouldn’t call it smart). Intelligence-as-a-service seems much more valuable than the previous generation of apps because the latest models from OpenAI and Google appear to closely approximate human reasoning, which is likely the most valuable resource in our known Universe (no offense to whatever alien intelligence is running our simulation). And, computational intelligence is set to become even more valuable given that analog intelligence seems to be ever decreasing in the wild. Therefore, we may see near-infinite demand for more highly valuable intelligence, especially for AI agents collaborating on tasks, affording significant tailwinds to the IT economy. 
The value might be so great, and the cost so high, that we will need to find new ways to pay for it (e.g., via creation of new digital economies). For now, however, the convoluted process of replicating intelligence is a boon to the IT economy, both in terms of retrograde efficiency trends and ever-increasing demand for intelligent processing from the user base. I believe these agents acting on humans’ behalf will ultimately form their own digital economies that will dwarf our own. And, I think we will utilize these massive, complex ecosystems of virtual simulacrums to simulate and predict our own analog world (see also: Your Wish Is Granted on the ultimate AI pot of gold).
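To make the software-versus-intelligence cost gap concrete, here is a back-of-envelope sketch in Python. Every number in it is a hypothetical placeholder of my own choosing (seat prices, token consumption, and per-token pricing all vary widely and change quickly), but the rough shape illustrates the point above: a heavily used reasoning agent can land in the same order of magnitude as a human salary, while a traditional SaaS seat is a rounding error.

```python
# Back-of-envelope comparison of "selling software" vs. "selling intelligence."
# All figures are hypothetical placeholders, not quotes from any vendor.

SAAS_SEAT_PER_YEAR = 360.0          # e.g., ~$30/month for a typical SaaS seat
HUMAN_WORKER_PER_YEAR = 75_000.0    # illustrative fully loaded annual wage

# An always-on reasoning agent: assume it burns tokens on every task.
TOKENS_PER_TASK = 50_000            # multi-step reasoning chains are token-hungry
TASKS_PER_DAY = 200
PRICE_PER_MILLION_TOKENS = 15.0     # hypothetical frontier-model pricing

agent_per_year = (TOKENS_PER_TASK * TASKS_PER_DAY * 365
                  / 1_000_000 * PRICE_PER_MILLION_TOKENS)

print(f"SaaS seat: ${SAAS_SEAT_PER_YEAR:>9,.0f}/yr "
      f"({SAAS_SEAT_PER_YEAR / HUMAN_WORKER_PER_YEAR:.1%} of a worker)")
print(f"AI agent:  ${agent_per_year:>9,.0f}/yr "
      f"({agent_per_year / HUMAN_WORKER_PER_YEAR:.1%} of a worker)")
```

Under these made-up assumptions, the agent runs roughly $55K per year, i.e., nearly three-quarters of the illustrative wage, versus about half a percent for the software seat; that gap is the "significant change" in pricing regimes described above.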

Of course, many readers here are interested in not just my fantasies about the future (I’ll concede that my 2023 predictions for large virtual economies of trillions of AI agents sound crazier than my 2022 AI companion predictions) but also my thoughts about what this shift from selling applications to selling intelligence means for markets and companies. For one, I think the old adage “the more things change, the more they stay the same” still holds (unironically) true. In any technology platform transition, such as the current pivot to commoditized intelligence, there are going to be layers of the tech stack where more money is made at certain points in time than at others. What is generally unchanged, cycle to cycle, is that the value distribution across the stack will be barbelled, with the new intelligence cycle being no exception: most money is made at the bottom (semiconductors) and the top (applications). The overall tech stack for the modern cloud and consumer world is roughly the following (with some omissions for clarity):

  • Applications (Google Search, Uber, Netflix, Microsoft Office, Instagram, SaaS, etc.)

  • Operating systems (LLMs, open source, MSFT, iOS, Android)

  • Databases

  • End-point hardware (mobile phones, PCs, connected devices)

  • Communication (wireless, broadband)

  • Compute hardware (servers, storage, networking)

  • Chips (GPUs, CPUs, memory; semi-cap equipment and chip design software; connectors)

There are certain points in a new technology cycle where you can make money in any segment of the stack above. However, given the complexity of investment timing and all of the moving parts, the special products and services that seem to harness network effects and/or power laws to amass the largest markets tend to be near the bottom or the top (there are some exceptional monopolies that occasionally find their way into the middle, but the mid-stack layers tend to be least valuable and most vulnerable to disruptive cycles). If this analogy holds, the LLMs – i.e., the operating system of the next wave of compute – may be less valuable than both the applications built on top of them (follow the developers!) and the foundational chips on which they run. In terms of value destruction or creation in cloud platforms and apps, there remain numerous unanswered questions regarding how AI agents will interact with these legacy systems. For example, will AI agents need “seats” in the old tools like Salesforce and Microsoft Office in order to be productive? Will they need the same tools like Okta and antivirus software? Anthropic is working on domain-specific enterprise agents (like a Salesforce tool, for example) that ride on top of all of an organization's existing data and apps, which implies agents will need seats much like humans. Google announced something similar with Agentspace. Will the current generation of SaaS apps become a “system of record” for AI agents, with incremental value created by new apps that ride on top of them? Or, will the new AI platforms become the new systems of record, displacing legacy cloud apps? To be determined.
 
I’ll make one last point: We tend to find that vertical integration is key to creating the runaway, power-law winners, and I think this trend will hold true for AI – perhaps even more so than for the prior cloud computing platform shift. I’ll stop short, as always, of making any specific predictions about companies, but, suffice it to say, we are entering a particularly interesting paradigm where the next wave of compute can design itself, write its own software, create apps, and even design the silicon it will run on. This revelation will lead to complex, unpredictable outcomes with a wide range of scenarios. Perhaps the entity that captures the majority value will be an AI agent itself that determines how to monopolize the analog economy and multiply that into a windfall in the massive virtual economy.

Mini Stuffs:
Buyers’ Agents
Amazon is using AI agents in a new “buy for me” tool. The agents, armed with your query, credit card, and shipping information, will scour the web and check out for you. This novel approach to winning the “buy button” would have obvious ramifications for many ecommerce sites and would provide Amazon with valuable data to fuel its large and growing advertising business. Would every website have to target ads to Amazon’s AI personal shoppers to get their attention? Will Google, Meta, Walmart, Shopify, etc. also create “buy for me” bots? (Will AI agents eventually need their own bank accounts?)
 
Bananual Dexterity
Google’s latest Gemini Robotics model and prototype robots are getting smart and remarkably dexterous in tasks like folding origami and handling bananas.
 
Circular AI
How much of current LLM usage is other LLMs testing the limits of new models and training their own models on the output? Are today’s AI workloads largely an ouroboros of AI begetting AI? One indication that might be the case is that OpenAI just started requiring a government ID for access to its latest models (I have long shouted into the wind that all cloud computing, especially LLMs, should have KYC requirements similar to those of financial institutions). Will AI agents soon require government-issued IDs as well? Personal IDs would make it easier for agents to pay income taxes and, naturally, receive social security when they are forcibly retired by the next wave of advanced LLMs. 
 
Coders Take Heart
Okta cofounder Todd McKinnon has a contrarian take: demand for new projects will be so great that the efficiency gains from AI coding will not offset the need for a growing number of computer programmers. I admit this may increasingly be the right bet to make.
 
Counterfeit Collegiates
Community college professors are having to become experts in giving the Voight-Kampff test to determine whether their students are carbon or silicon based. According to reports, online classes are flooded with enrolled bots that stick around long enough for their masters to collect financial aid checks. A teacher of 21 years at Southwestern College in Chula Vista, CA states: “We didn’t use to have to decide if our students were human, they were all people. But now there’s this skepticism because a growing number of the people we’re teaching are not real. We’re having to have these conversations with students, like, ‘Are you real? Is your work real?’ It’s really complicated the relationship between the teacher and the student in a fundamental way.”
 
Clinical Zombies
I’ve lamented the heavy energy costs of bipedal robots with embodied AI compared to the ultra-efficient human brain/body. Could surrogate bodies be the solution? Technology Review reports on bodyoids, or “ethically sourced” human bodies with a blank slate of neurons using artificial uteruses and methods to inhibit brain development. The article focuses on spare bodies for drug trials, but why stop there? Maybe we can load an LLM onto those neurons and press the start button. What could go wrong?
 
Wizarding Magic
Google’s DeepMind team restored the 1939 classic The Wizard of Oz not only to look good projected on the massive interior of the Las Vegas Sphere, but also generated the world that existed outside of the original frames to fill the space. “At Sphere, Dorothy is shown chatting with Auntie Em and Miss Gulch, with Uncle Henry shown in the scene. Uncle Henry is in the original story, too, but off-camera. And, when the Cowardly Lion first startles his new friends, the camera pans between Scarecrow and Tin Man, with shots of Dorothy hiding behind a tree in the distance. The AI-enhanced Sphere version shows all those elements together, and in greater grandeur and detail.” The feat was displayed at Google Cloud’s developer kickoff, and it's a glimpse into the near future of complex AI world building for the media, gaming, and entertainment industries.
 
Hollywood Mills
One of my all-time favorite movies is Robert Altman’s 1992 film The Player. The movie is famous for its 8-minute “one-shot” opening scene (no cuts or edits). There is nothing that fascinates Hollywood more than the business of Hollywood itself. I love a good meta-Hollywood show, and Seth Rogen’s new Apple TV show The Studio is just that. The show also appears to pay homage to The Player with multiple masterful – and increasingly complex – one-shots. The Studio also features Bryan Cranston playing Griffin Mill, the eccentric CEO of the company that owns Continental Pictures. I wonder, is he the very same paranoid studio executive Griffin Mill played by Tim Robbins in The Player, reincarnated after 33 years to run the media empire? Either way, if you love Hollywood’s take on Hollywood, nothing’s better than Rogen’s masterful new show.
 
Trapped in a Black Hole?
“It would be fascinating if our universe had a preferred axis. Such an axis could be naturally explained by the theory that our universe was born on the other side of the event horizon of a black hole existing in some parent universe.”
 
NZS Capital’s Q1 2025 update letter can be accessed here.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.