SITALWeek #457

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: I walk through the efficiency gains that tamped down tech sector growth over the last 25 years. We take a look at the shift from selling software to selling intelligence, and what that means for overall technology demand growth in the next couple of decades. Also this month: AI shopping agents on Amazon, Gemini's new robots, a contrarian take on demand for software engineers, AI college students, bodyoids, world building AI with AI, Griffin Mill, and a link to the latest NZS Capital quarterly letter.

Stuff about Innovation and Technology
Intelligence as a Service
Back in SITALWeek #332 (January 2022), I wrote the following about AI:
I am a big fan of the 2014 Spike Jonze film Her, which addresses the complicated relationship between people and AI chatbots. Unlike other AI sci-fi plots that revolve around science we may not see this century, I like Her because it uses a plausibly close technology…We humans tend to be very good at anthropomorphizing things, especially if they are human-mimetic. While today’s AI bots lack the context they need to achieve the realism of the imagined companions in Her, it’s not hard to see how these algorithms could become much more sophisticated in the imminent future. For example, Meta’s new supercomputer contains 16,000 Nvidia GPUs and will be able to train models as large as an exabyte with more than a trillion parameters. The new compute engine is 60% larger than Microsoft’s latest effort, as the large cloud platforms race to train larger and larger models for language, images, and other AI models. I believe the reason for this arms race in AI models is because personal chatbot companions are likely to emerge as the center of everything we do in the digital and real worlds. As aware agents that know you well and have access to your accounts, messages, and apps, chatbots are ideally positioned to displace the tools we use today like Google Search and other habitual apps. Think of a tool like Google Search, but with an intimacy that is different for each user. The data privacy implications are massive, and, unfortunately with billions of dollars of R&D to build and test these new services, the incumbent platforms, all of which have terrible track records when it comes to privacy, are likely to win. However, it would not be unprecedented to see a newcomer enter the market, and I hope we do. And, with AR glasses arriving in the next few years, your chatbot will also walk side by side with you and sit down on the couch for a conversation. The metamorphosis of a chatbot into a seemingly alive, personal companion via reality-bending AR glasses will be the next punctuated equilibrium for humans' coevolution with technology.
 
Written before ChatGPT, this 3+ year-old prediction strained credulity at the time. With this formerly farfetched future now squarely on our doorstep, I have been thinking about the tech industry evolving from selling applications to selling intelligence. The technology hardware industry has broadly faced decades of step-function efficiency improvements (compounding Moore’s Law) that acted as a headwind to demand growth. I hypothesize that the transition to selling intelligence could turn that efficiency headwind into a tailwind.
 
Spurring this curiosity about the trajectory of hardware spending is my recent obsession with Gemini’s live camera share on my Pixel 9 Pro. It’s mind blowing to have a team of AI agents looking over my shoulder, analyzing real-time images/video to assist with problem solving. Even mundane examples are a revelation: last week, a grease cap went missing from one of my trailers’ axles. It was an obscure part, and the local trailer shops seemed to lack the, um, intelligence to get me the right part number. I shared a live video of the wheel with Gemini from my phone. Gemini asked me the model of the trailer, so I moved the camera over to the VIN sticker. Then Gemini set up a team of AI web researchers, asked me a few more follow-up questions, and a couple of hours later came back with a response. This experience left me wondering: were there really multiple AI agents scouring the web and cross consulting for hours to solve the mystery of this little $5 part? It feels so sci-fi to be living in a realized version of the movie Her with AI agents that can both see what I see and exist in a separate conversational dimension. (Aside: Tinder recently went a step further toward Her with the ability to practice dating an AI.) If this type of resource-intensive experience is to become routine for the population at large, the underlying hardware/software will need to make unprecedented leaps and bounds in terms of efficiency gains, but the nature of scaling intelligence may make that difficult.
 
There are always two sides to the ongoing efficiency gains in the IT hardware industry: while selling more power and speed for less money shrinks the potential market, it also grows the potential use cases. Typically, these factors have combined to produce steady, but surprisingly unimpressive, revenue growth for technology hardware. This has been true since the start of the modern computing era, when monolithic mainframes were akin to companies operating their own power plants in the early days of electricity. Following mainframes, the subsequent phase of enterprise computing became known as the client/server era. In this expansion of the IT hardware industry, companies operated their own data centers, with servers running individual apps, large storage arrays, networking gear, and an army of desktop- and laptop-outfitted employees, etc. More useful than mainframes? Perhaps. Efficient? Definitely not. Sometime around the late 1990s and early 2000s, soaring enterprise software usage and data creation necessitated a focus on efficiency gains. This demand collided with the rapid rollout of broadband Internet, creating the groundwork for the next phase in enterprise IT: the cloud. In the early days of connected computing, specialized companies known as application service providers would host software in data centers for multiple other companies, but that practice never really permeated the industry, and there was a gap before modern cloud computing took hold. In the meantime, a technology came along that was often described by chief technology officers as a cure for cancer: virtualization. The rise of VMware, multi-core processors from Intel, and open-source operating systems like Linux all led to large efficiency gains in enterprise data centers and, ultimately, the modern cloud compute stacks that powered AWS (and then Azure, etc.). Parallel to this effort was the rise of massively efficient data centers at large consumer apps like Google Search, Facebook, etc. Moving applications from inefficient, dedicated servers and storage in the 1990s to virtualized workloads to SaaS to the modern, present-day cloud has been a nearly incalculable wave of efficiency gains (actually, I am sure someone has done the calculation, and I suspect it’s many orders of magnitude!). In the wake of this prolific adoption of affordable IT, there’s been an explosion in apps for broad use cases as well as industry-specific apps and services, not just for enterprises, but also consumers (think Netflix, Uber, TikTok, etc.). As a side note, smartphones have bucked this hardware efficiency trend, as they’ve experienced a large – but relatively inefficient – growth in compute power and usage demand. Indeed, the supercomputer in your pocket (or next to your pillow) sits woefully idle and underutilized compared to a modern cloud data center running at high efficiency 24/7, a fact that’s reflected in stubbornly expensive pricing trends for smartphones.
 
The AI platform shift that’s now underway appears to be a pivot from selling software to selling intelligence. Simplistically, you can think of software as writing code once and then executing it efficiently forever (with updates along the way). In contrast, selling intelligence is an ever-changing, evolving conversation that is far more complex, valuable, and hardware intensive (while every copy of a piece of software is identical, every conversational AI instance/response will vary due to the probabilistic, token-by-token nature of generating language; see You Auto-Complete Me). While we’ve seen massive efficiency gains and price decreases from AI already, we are still at a price point where an artificial agent is on par with the cost of a human worker, marking a significant change from software sold for a tiny fraction of an employee’s wages (see The Principal-Agent Problem of AI Platforms and the Timing for Mass Market AI for more details on this, including the concept of time scaling AI). Intelligence, intuitively, seems like a more resource-intensive activity than looking up a number in a database or finding correlations between data (the simple code execution that operates today’s cloud computing software isn’t necessarily dumb, but I wouldn’t call it smart). Intelligence-as-a-service seems much more valuable than the previous generation of apps because the latest models from OpenAI and Google appear to closely approximate human reasoning, which is likely the most valuable resource in our known Universe (no offense to whatever alien intelligence is running our simulation). And, computational intelligence is set to become even more valuable given that analog intelligence seems to be ever decreasing in the wild. Therefore, we may see near-infinite demand for highly valuable intelligence, especially for AI agents collaborating on tasks. The value might be so great, and the cost so high, that we will need to find new ways to pay for it (e.g., via creation of new digital economies). For now, however, the convoluted process of replicating intelligence is adding significant tailwinds to the IT economy, both in terms of retrograde efficiency trends and ever-increasing demand for intelligent processing from the user base. I believe these agents acting on humans’ behalf will ultimately form their own digital economies that will dwarf our own. And, I think we will utilize these massive, complex ecosystems of virtual simulacrums to simulate and predict our own analog world (see also: Your Wish Is Granted on the ultimate AI pot of gold).
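
To make the software-versus-intelligence contrast concrete, here is a minimal, purely illustrative sketch (the prompt, tokens, and logits are all made up): a conventional function returns the same answer every time, while temperature-based token sampling – the mechanism behind LLM text generation – can return different outputs from identical inputs.

```python
import math
import random

def run_software(x):
    # Traditional software: same input, same output, every single time.
    return x * 2

def sample_next_token(logits, temperature=0.8):
    # LLM-style decoding: scale logits by temperature, softmax, then sample.
    tokens = list(logits.keys())
    scaled = [logits[t] / temperature for t in tokens]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # unnormalized softmax weights
    return random.choices(tokens, weights=weights, k=1)[0]

print([run_software(21) for _ in range(3)])           # [42, 42, 42] -- deterministic
# Hypothetical next-token logits for a prompt like "The missing part is a ___ cap"
logits = {"grease": 2.1, "dust": 1.7, "bearing": 1.1, "hub": 0.8}
print([sample_next_token(logits) for _ in range(3)])  # e.g., ['grease', 'hub', 'grease'] -- varies per run
```

That per-token sampling (plus the heavy matrix math behind each token) is a big part of why serving intelligence is so much more hardware intensive than serving software.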

Of course, many readers here are interested in not just my fantasies about the future (I’ll concede that my 2023 predictions for large virtual economies of trillions of AI agents sound crazier than my 2022 AI companion predictions) but also my thoughts about what this shift from selling applications to selling intelligence means for markets and companies. For one, I think the old adage “the more things change, the more they stay the same” still holds (unironically) true. In any technology platform transition, such as the current pivot to commoditized intelligence, there are going to be layers of the tech stack where more money will be made than in others at different points in time. What is generally unchanged, cycle to cycle, is that the value distribution across the stack will be barbelled, with the new intelligence cycle being no exception: most money is made at the bottom (semiconductors) and the top (applications). The overall tech stack for the modern cloud and consumer world is roughly the following (with some omissions for clarity):

  • Applications (Google Search, Uber, Netflix, Microsoft Office, Instagram, SaaS, etc.)

  • Operating systems (LLMs, open source, MSFT, iOS, Android)

  • Databases

  • End-point hardware (mobile phones, PCs, connected devices)

  • Communication (wireless, broadband)

  • Compute hardware (servers, storage, networking)

  • Chips (GPUs, CPUs, memory; semi-cap equipment and chip design software; connectors)

There are certain points in a new technology cycle where you can make money in any segment of the stack above. However, given the complexity of investment timing and all of the moving parts, the special products and services that seem to harness network effects and/or power laws to amass the largest markets tend to be near the bottom or the top (there are some exceptional monopolies that occasionally find their way into the middle, but the mid-stack layers tend to be least valuable and most vulnerable to disruptive cycles). If this analogy holds, the LLMs – i.e., the operating system of the next wave of compute – may be less valuable than both the applications built on top of them (follow the developers!) and the foundational chips on which they run. In terms of value destruction or creation in cloud platforms and apps, there remain numerous unanswered questions regarding how AI agents will interact with these legacy systems. For example, will AI agents need “seats” in the old tools like Salesforce and Microsoft Office in order to be productive? Will they need the same tools like Okta and antivirus software? Anthropic is working on domain-specific enterprise agents (like a Salesforce tool, for example) that ride on top of all of an organization's existing data and apps, which implies agents will need seats much like humans. Google announced something similar with Agentspace. Will the current generation of SaaS apps become a “system of record” for AI agents, with incremental value created by new apps that ride on top of them? Or, will the new AI platforms become the new systems of record, displacing legacy cloud apps? To be determined.
 
I’ll make one last point: We tend to find that vertical integration is key to creating the runaway, power-law winners, and I think this trend will hold true for AI – perhaps even more so than for the prior cloud computing platform shift. I’ll stop short, as always, of making any specific predictions about companies, but, suffice it to say, we are entering a particularly interesting paradigm where the next wave of compute can design itself, write its own software, create apps, and even design the silicon it will run on. This revelation will lead to complex, unpredictable outcomes with a wide range of scenarios. Perhaps the entity that captures the majority value will be an AI agent itself that determines how to monopolize the analog economy and multiply that into a windfall in the massive virtual economy.

Mini Stuffs:
Buyers’ Agents
Amazon is using AI agents in a new “buy for me” tool. The agents, armed with your query, credit card, and shipping information, will scour the web and check out for you. This novel approach to winning the “buy button” would have obvious ramifications for many ecommerce sites and would provide Amazon with valuable data to fuel its large and growing advertising business. Would every website have to target ads to Amazon’s AI personal shoppers to get their attention? Will Google, Meta, Walmart, Shopify, etc. also create “buy for me” bots? (Will AI agents eventually need their own bank accounts?)
 
Bananual Dexterity
Google’s latest Gemini Robotics model and prototype robots are getting smarter and remarkably dexterous at tasks like folding origami and handling bananas.
 
Circular AI
How much of current LLM usage is other LLMs testing the limits of new models and training their own models on the output? Are today’s AI workloads largely an ouroboros of AI begetting AI? One indication that this might be the case is that OpenAI just started requiring a government ID for access to its latest models (I have long shouted into the wind that all cloud computing, especially LLMs, should have KYC requirements similar to those of financial institutions). Will AI agents soon require government-issued IDs as well? Personal IDs would make it easier for agents to pay income taxes and, naturally, receive Social Security when they are forcibly retired by the next wave of advanced LLMs.
 
Coders Take Heart
Okta cofounder Todd McKinnon has a contrarian take: there will be so much demand for new projects that the efficiency gains from AI coding will not offset the need for a growing number of computer programmers. I admit that this take may increasingly be the right bet to make.
 
Counterfeit Collegiates
Community college professors are having to become experts in giving the Voight-Kampff test to determine whether their students are carbon- or silicon-based. According to reports, online classes are flooded with enrolled bots that stick around long enough for their masters to collect financial aid checks. A teacher of 21 years at Southwestern College in Chula Vista, CA, states: “We didn’t use to have to decide if our students were human, they were all people. But now there’s this skepticism because a growing number of the people we’re teaching are not real. We’re having to have these conversations with students, like, ‘Are you real? Is your work real?’ It’s really complicated, the relationship between the teacher and the student in a fundamental way.”
 
Clinical Zombies
I’ve lamented the heavy energy costs of bipedal robots with embodied AI compared to the ultra-efficient human brain/body. Could surrogate bodies be the solution? MIT Technology Review reports on bodyoids, or “ethically sourced” human bodies with a blank slate of neurons, created using artificial uteruses and methods to inhibit brain development. The article focuses on spare bodies for drug trials, but why stop there? Maybe we can load an LLM onto those neurons and press the start button. What could go wrong?
 
Wizarding Magic
Google’s DeepMind team not only restored the 1939 classic The Wizard of Oz to look good projected on the massive interior of the Las Vegas Sphere, but also generated the world that existed outside of the original frames to fill the Sphere’s expanded canvas. “At Sphere, Dorothy is shown chatting with Auntie Em and Miss Gulch, with Uncle Henry shown in the scene. Uncle Henry is in the original story, too, but off-camera. And, when the Cowardly Lion first startles his new friends, the camera pans between Scarecrow and Tin Man, with shots of Dorothy hiding behind a tree in the distance. The AI-enhanced Sphere version shows all those elements together, and in greater grandeur and detail.” The feat was displayed at Google Cloud’s developer kickoff, and it’s a glimpse into the near future of complex AI world building for the media, gaming, and entertainment industries.
 
Hollywood Mills
One of my all-time favorite movies is Robert Altman’s 1992 film The Player. The movie is famous for its 8-minute “one-shot” opening scene (no cuts or edits). There is nothing that fascinates Hollywood more than the business of Hollywood itself. I love a good meta-Hollywood show, and Seth Rogen’s new Apple TV show The Studio is just that. The show also appears to pay homage to The Player with multiple masterful – and increasingly complex – one-shots. The Studio also features Bryan Cranston as Griffin Mill, the eccentric CEO of the company that owns Continental Pictures. I wonder, is he the very same paranoid studio executive Griffin Mill played by Tim Robbins in The Player, reincarnated after 33 years to run the media empire? Either way, if you love Hollywood’s take on Hollywood, nothing’s better than Rogen’s new show.
 
Trapped in a Black Hole?
“It would be fascinating if our universe had a preferred axis. Such an axis could be naturally explained by the theory that our universe was born on the other side of the event horizon of a black hole existing in some parent universe.”
 
NZS Capital’s Q1 2025 update letter can be accessed here.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.

SITALWeek #456

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: a look at the market dynamics making investing fun again in 2025; startups built on LLM platforms are growing revenues faster than their cloud software predecessors; Amazon's robot diversity; China's leveling fuel consumption; Oura Rings; a few recent celebrity profiles I enjoyed; speculating on the rise of Christianity in young American men; NZS updates; and, much more below. 

Stuff about Innovation and Technology
AI startups built on the LLM platforms are growing revenue faster than cloud software companies ever did, according to payment processor Stripe. As the costs of AI decline, developers will rapidly take advantage of the technology’s expanding capabilities. 
 
Amazon has over 750,000 robots in its fulfillment centers, encompassing a diverse set of form factors and purposes. It’s interesting to see the heterogeneity of worker bots, as well as the rising incorporation of computer vision and AI. Did you know that Amazon also offers fulfillment center tours to the public?
 
The International Energy Agency reports that China’s fuel consumption may have already plateaued, thanks to their rapid adoption of hybrids and EVs. China’s current fuel consumption is “narrowly” above 2019 levels. 
 
Bloomberg profiles smart-ring maker Oura and its dynamic leader Tom Hale. I’ve been wearing an Oura Ring every day for over five years, and I love all the health/fitness insights it provides. You can also export the data and have some fun analyzing seasonal and year-to-year trends using LLMs.
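
As a rough sketch of the kind of seasonal analysis I mean – the file and column names below are assumptions, so adjust them to whatever your actual Oura export contains:

```python
import pandas as pd

# Load an Oura data export (hypothetical file/column names; check your own export).
df = pd.read_csv("oura_export.csv", parse_dates=["date"])

# Average sleep score by calendar month to surface seasonal patterns.
monthly = df.groupby(df["date"].dt.month)["sleep_score"].mean()

# Average by year to see longer-term drift.
yearly = df.groupby(df["date"].dt.year)["sleep_score"].mean()

print(monthly.round(1))
print(yearly.round(1))
```

Or skip the code entirely and paste the exported CSV into an LLM with the same questions.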
 
Unemployment for IT workers hit 5.7% in January, presumably in no small part thanks to the impact of AI. The rate jump was significant vs. December’s 3.9%.

Miscellaneous Stuff
The CEO of Ford discusses his famous cousin Chris Farley. Jim Farley recalls growing up in the Farley family and trying to support Chris as he struggled with the pressures of Hollywood fame.
 
Christopher Walken’s real name is Ronnie Walken and, at one point, he ran away to join the circus as a lion tamer. 
 
Harrison Ford is featured in a long WSJ profile. My favorite Ford character is the “cranky Ford” he plays on talk shows. Here is his most recent performance on Kimmel.
 
Young Men’s Amen
Pew reports that the steady decline of Christianity in recent decades is showing signs of leveling off in the US. If you dig through the recent data and compare them to the previous survey, the major delta is young men closing the gender gap with women in religiosity (see the table on page 39 of the PDF). Last year, the NYT reported anecdotally on this phenomenon in “In a First Among Christians, Young Men Are More Religious Than Young Women”. Several years ago, I began exploring possible reasons behind the growing sense of hopelessness in younger generations (which seems to be stronger in young men than women). In Giving Up on the Old College Try in 2021, I tried to connect the dots on this tangible problem. Relatedly, Pew recently reported a large rise in workers uncertain about how AI could impact their jobs. In the very first publicly available issue of SITALWeek in 2019, I wrote the following about Kurt Vonnegut’s prescient novel Player Piano:
Written in 1952, Player Piano takes place in an alternate post-war world where machines have been elevated to all decision making and humans become for the most part increasingly useless. It’s an obvious parallel to the issues facing humans today as AI takes over more and more jobs. One of the book’s insights is that it’s human nature to destroy the things we’ve built, so we can build them back up again. Humans are tool and technology building machines – it’s where the fitness function of natural selection landed our mind-bodies after millions of years. To rail against technology platforms of the 21st century is to rail against the wheel, fire, spears, etc. It’s the same story, different century in human progress – this decade it’s all about AI turning on humans.
Last year saw an encouraging decline in deaths from drug overdoses (although the rate remains well above pre-pandemic levels). It could be that young men have, on the margin, found their savior as the US undergoes a Constantinian Shift.

Stuff About Demographics, the Economy, and Investing
New Hires at NZS
NZS Capital is excited to welcome two new hires in 2025. Alexandra Pope joins NZS as Head of Investor Relations. With prior roles at Avala Global, Calixto, and Trian Partners, Alex will be an excellent resource for current and future NZS Capital clients. If you would like to touch base with Alex, please reply back to this email and I will make the connection. Also, Ethan Bennett recently started as our IT and Analytics Associate. Ethan is working on our technology systems as well as our ongoing efforts to incorporate AI into our research/investment process to decrease cognitive bias.
 
Cognex’ Vision
Brett recently made a repeat appearance on Business Breakdowns, this time discussing NZS Capital holding Cognex.
 
Power Hedge
Hedge funds are powerlawing, with BI reporting that the four largest multistrategy firms (Millennium, Citadel, Point72, and Balyasny) employ 71% of the 18,600 people working for 53 multistrategy firms. These four also manage around half of the $366B in total multistrategy hedge fund assets. Bigger appears to be better for the moment, as firms with over $10B in assets outperformed all multistrategy hedge funds by around 5% in 2024 (however, all of these firms on average materially underperformed the broader market in the period). As a professional market observer for over a quarter century, I suspect that one of the reasons why the market has become far more interesting for active managers recently (see the next section for more on that) is this concentration of assets and, importantly, the shift from traditional algorithmic and high-frequency trading strategies to AI and LLM-based ones.
 
Active Fun
This week, I would like to take the opportunity to thank President Trump for making investing fun again. Recently, we spent some time reflecting upon the first five years of our performance at NZS Capital (the Q4 2024 letter can be found here). And, it’s not that it hasn’t been a blast this past half decade investing on behalf of our clients, but there were several extended moments in the market where correlations ran high and interest rates were driving the bus. It was a little bit trickier at times to construct a portfolio that could provide uncorrelated alpha in a market whose performance was dominated by a small number of very large companies (it’s also been an especially good time period to employ our Complexity Investing strategy where we hold predictions very loosely). We joke that last year we were ahead of the market, but we did it the hard way by being underweight the group of dominant large cap stocks (more details can be found in the letter linked above). But, then something magical happened in the new year: the market started to feel, perhaps, like it was decorrelating. Uncertainty is on the rise and ranges of outcomes are widening. I do not welcome this uncertainty, nor do I agree with much of the agenda responsible for the current atmosphere or the way it’s being implemented (i.e., with dire consequences for real people’s lives). But, I always do my best to look for the silver lining, so I’ll embrace the fact that it’s a fun time to be an investor again. 
 
One of the (many, many) sources of volatility is the split personality of the economy. I recently came across the University of Michigan’s inflation expectation survey divided by political beliefs. Republicans in the survey had consistently higher, persistent inflation expectations going back to mid-2021, while democrats’ expectations settled down to match the reality of a more muted inflation outlook in 2024 (I do not have a good way of showing these data publicly to readers; if you have a Bloomberg Terminal, the tickers are CONSIN1R and CONSIN1D). In mid-2022, republicans expected 7.8% inflation compared to democrats at 4.3%. The gap steadily closed, with the October 2024 results showing 3.7% for republicans and 1.6% for democrats. The trajectories of these differing expectations are consistent with the campaign messaging of both parties, which steadily bombarded voters with a near-infinite stream of social media nonsense. Since last October, however, the survey stats have gone wild! Republican inflation expectations went from 3.7% to negative 0.1% while democrats’ went from 1.6% to 5.4% (as of February 2025). Not only did the positions reverse, with democrats now expecting higher inflation, but the new delta of 5.5% is also greater than the inverse peak delta of 3.8% in October 2022. The chart makes your eyes cross. For reference, a recent NY Fed survey of corporate managers showed 4% expected inflation in the next 12 months. If you are unlucky enough to have a degree in economics, you might recall the traditional view that inflation is predicated upon people’s expectations of future inflation. In other words, it’s a reflexive cycle whereby people and businesses expecting higher future prices stock up on goods today to lock in lower prices. That drives an increase in current demand, which theoretically would cause near-term prices to rise, thus fulfilling the prophecy. In the Digital Age of rapid information flow, I am not sure expectations of inflation are the actual cause; rather, it’s perhaps unpredictable things like algorithmic collusion by landlords and private equity consolidating industries and raising prices unchecked. Tariffs are inflationary in the short term (and potentially long term if comparatively cheaper foreign goods are replaced by more expensive domestically made stuff). Labor scarcity is also inflationary. (For more on these changing economic winds, see the end of the last SITALWeek titled Economic Shock.) If half of the country is expecting all of these things to be inflationary, that might drive inflation via the traditional theory of buying ahead, but if the other half thinks prices are going to fall, then what happens to actual inflation?
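
For anyone who wants to stare at the spread directly, here is the simple arithmetic on the survey figures cited above (a toy recap of those data points, not the full Michigan series):

```python
# University of Michigan 1-year inflation expectations by party, per the figures cited above.
surveys = {
    "mid-2022": {"republicans": 7.8, "democrats": 4.3},
    "Oct 2024": {"republicans": 3.7, "democrats": 1.6},
    "Feb 2025": {"republicans": -0.1, "democrats": 5.4},
}

for date, s in surveys.items():
    spread = s["republicans"] - s["democrats"]
    print(f"{date}: spread (republicans minus democrats) = {spread:+.1f} points")
# mid-2022: +3.5, Oct 2024: +2.1, Feb 2025: -5.5 -- the sign flips and the gap widens
```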

How far can this split-personality economy go? Have years of social media brainwashing transformed the perception of two different realities into an actual two-track economy? Will expectations of the two factions drive the fundamentals of two different economies or will they average out to a mediocre middle ground? How are companies supposed to react to all of the pending economic shocks in the US? Should supply-chain managers amass inventory to sell to their inflation-wary democratic consumers eager to stock up, but steadily mete out goods to their republican consumers more likely to wait to buy until they need something? Could there be two economies driven by differing political beliefs? Since the start of 2024, the value of a used Tesla has declined 20% while all other used cars have declined 6%, according to CarGurus. But, I digress. Back in SITALWeek #367, I wrote in Divisive Banking that the US economy was at risk of splitting in two. I suggested we might see companies specifically catering to affiliates of one particular political party or the other; since then, we’ve seen examples of ETFs emerge that are either “woke” or “anti-woke”. Consumers have traditionally voted with their wallets (when it’s convenient to do so!); but, will the two different perceptions of reality cause a legitimate split, one which might manifest state by state, with red states experiencing very different economies than blue ones? Since the election, we have seen almost all of corporate America rapidly chameleon – adopting an entirely different set of beliefs and values to get in line with the new US administration – suggesting that businesses could be adapting in real time to the new landscape of multiple realities their customers inhabit. I don’t know if the Information Age has the ability to actually render perception into reality. As we learned from David Bowie, this choose-your-own-reality trend all started in the 1970s, and we are now perhaps reaching its logical end state. I’ll be on the edge of my seat as I watch the next installment of reality unfold – I can’t wait to see which of the two inflation indices is right. What a great time to be an active investor!

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.

SITALWeek #455

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: as the market gets anxious about advancements in lower cost AI, I see cause for optimism long term; production value is rapidly rising on YouTube as the platform continues to take share; intelligent task master agents will soon overtake humans for app and web usage with implications for the big platforms; Chinese robot dogs; AI doctors outperform; the homeostasis of GLP-1s; remembering David Lynch; the opposing forces in the economy are becoming stronger in both directions; a history lesson from the fourth century; and, much more below. 

Stuff about Innovation and Technology
AI Angst
On Monday, January 27th, Nvidia stock dropped 18%, knocking over $500B off its market value in a single day. The alleged culprit behind the fall of Nvidia and other AI-related stocks was China’s new AI model, DeepSeek. Readers of SITALWeek might recall my November post titled How I Started to Worry About China Again, wherein I discussed the failures of the US to curb China’s AI capabilities on multiple fronts, concluding:
I think it’s time that the US treats the issue of China's ongoing chip and AI access seriously. OpenAI recently called for a North American Compact for AI in order to "protect our nation and our allies against a surging China". Specifically, I think the US should consider cessation of all sales and service of chips, chip equipment, and related software tools, as well as enact cloud KYC restrictions. I would even question whether or not US companies should be allowed to make chips in China for export to the US. The dangers of malicious code injection, amongst other things, strike me as a risk not worth taking. I realize there would be significant geopolitical implications if Western governments were to attempt a wholesale exclusion of China from the semiconductor ecosystem, but if their AI advances remain unchecked, the West could be facing a far more dire existential threat in the not-too-distant future. At the very least, the current chip sanctions should be enforced through increased supply chain scrutiny, and all efforts to restrict equipment and chips that can be used in large parallel AI training and inference implementations should be stepped up regardless of what manufacturing node they are on. Five years ago, I penned How I Learned to Stop Worrying About China, which largely found its footing on the lack of homegrown chip progress and the clamp down on the creative and entrepreneurial spirit that had been allowed to flourish in the first couple of decades of this century. That sentiment from five years ago proved correct: since the date of that publication, the MSCI China Index is down 5.7% compared to a positive return of 68% for the MSCI ACWI Index and 100% for the S&P 500. Today, however, I once again squarely worry about China given how loose Western restrictions have been on China’s chip industry, their reported progress in parallel compute with trailing-edge processors, as well as their access to massive troves of training data and energy resources. One might even go out on a limb and suggest that the West consider the feasibility of implementing some type of oversight (e.g., akin to how we handled Iran’s nuclear program) concerning China’s AI tech. Hopefully it's not too late to implement some form of hostile-AI kill switch. If the decades of relative peace following the Cold War teach us anything about mutually assured destruction, one might argue for all sides to have equal access to leading-edge AI. But, the analogy breaks down given that we are dealing with a human-like artificial intelligence that is prone to making devastating mistakes. Remedies like the compact suggested by OpenAI could be necessary to stay ahead, particularly if no action is taken to slow down China's progress in AI.
 
Further driving angst for the AI sector was the notion that China’s DeepSeek model appears to train and operate in a much more cost-efficient way. While there is controversy regarding the claims (e.g., the assertion that DeepSeek is 90% cheaper to operate vs. OpenAI’s o1 model is misleading), the point is very consistent with SITALWeek’s Timing for Mass Market AI post, where I concluded we are already very close to replicating human information jobs with current AI cost models. The implication is that we are very close to seeing AI agents that are more skilled and far cheaper than human workers. This basic idea is one that Bill Gates recently put succinctly on The Tonight Show: just like we saw compute power and information go from expensive/scarce to cheap/abundant, intelligence is now going from rare to free (Gates also dropped a new music video). Taken together, the developments out of China and the evidence for the attractive commoditization of intelligence (at a cost and energy consumption level that could lead to rapid, mass adoption) give me a reason to be optimistic about overall spending on technology. It’s a reason to be optimistic about corporate margins and productivity, the future of immersive, interactive entertainment, as well as the potential for a new scientific revolution. And, it’s a reason to be on the edge of my seat wondering if ubiquitous intelligence will create a new era of human advancement or a scary level of despair and pessimism. The stock market may have taken a pessimistic view in the short term, but, as longtime readers know, optimism always wins in the end, so for the moment I don't see anything that merits an overly skeptical stance.
 
YouTube goes Hollywood
YouTube continues to post impressive growth, with Google reporting 14% constant currency ad revenue growth in Q4 2024 to $8.9B. As one sign of the shifting media attention, Google noted political spending on YouTube in the 2024 campaign was twice what it was in 2020 (Bloomberg also discussed the bro-casters that boosted Trump via YouTube, one of whom was even called out by name on Google’s earnings call last week). YouTube is now also the number one app for podcasts, as most popular shows are now produced in video format. What I’ve noticed over the last six months is a major increase in the production value of YouTube videos, both live and recorded. With Hollywood still descending from “Peak TV” during the pandemic and the lingering impacts of the writers’ strike, it seems like a perfect storm for talented and experienced production crews to embrace the rise of high-quality shows on YouTube. As an example, here is a delightful interview Ted Danson and his wife Mary Steenburgen did with a barefoot, 99-year-old Dick Van Dyke from his living room. The video and audio quality are terrific, as is the multi-cam directing and editing. This is a broadcast-quality show, but it was just another episode of Danson’s podcast. YouTube’s success hinges on the high non-zero-sumness of its business model (something discussed back in #357). In the past three years, YouTube has paid out $70B to content creators and partners. One of the more interesting things I’ve noticed on YouTube is how creators in one category support fellow competitors in the same space. For example, one live streamer can “raid” another’s stream by sending their viewers over when their own stream is done. The community and social glue continue to grow across the platform. As Netflix increasingly turns into Tide Pod cinema (background-viewable, glossy shows and movies that “dissolve into thin air”), YouTube is becoming the place to find large quantities of engaging content. Since half of my readers are probably only here to learn what my latest YouTube obsession is, it’s: satisfying task YouTube. Watch a fella pressure wash a driveway, clear a yard, or clean a rug and have your moment of zen. If that’s not your bag, watch another Hunter Pauley video.
 
Internet for Agents
Back in April of 2023, I wrote a post titled Discovery Engines about the evolving nature of Internet search and content curation as we move to an agent-based era. Here is an excerpt:
The Internet was a reinvention of the entire customer interface for myriad content and business sectors (before the Internet, we couldn’t access our bank account without a monthly mailed statement or a trip to the local branch!). Chatbots, likewise, will redefine our discovery gateways as we go from multitouch, screen-based systems to conversational interactions with intelligent agents. Indeed, a conversational Internet has the potential to bring about more paradigm-shifting changes than what we’ve experienced over the last three decades...It took decades for the Internet to fully take over our lives and devolve into the morass of misinformation and mediocrity we have today; however, since technological half-lives keep shrinking, we should not be surprised if chatbots are co-opted even more quickly (or, perhaps they already have been). There is a (albeit slim) chance here that AI platforms will develop a different relationship with advertising and be able to defend against spammers. However, it’s more likely, given the high cost to operate AI, that the multi-hundred-billion dollar advertising industry will be needed to pay for it. Maybe we can enable our personal AI chatbots to also consume all the content and advertising for us and face off against spammers, so we can all just get outside and go for a walk instead.
 
As I look back on that post now, one thing I didn’t have on my mind was just how quickly agents would become independent taskmasters. Today, OpenAI, Google, and others are all launching AI agents that can complete computer/phone-mediated tasks just like you would. For example, an agent will conduct a Google search, click on ads, go to a website, and complete a task, such as buying an item or booking a reservation. This protocol seems like a stopgap until agents can be piped directly into the underlying systems. However, the ubiquity of browsers/apps might mean agents interact with the world in a human-mimetic way for some time to come; thus, the agent economy could drive real value for the existing digital platforms like search and social networks sooner than expected. Further, this shift in behavior is likely to remodel the platform content itself, including ads, to be geared towards agents instead of humans.
 
Quick Hits
For a growing list of diagnoses, an AI doctor alone is better than a doctor working with or without AI assistance.
 
Unitree is a Chinese robotics company with an impressive four-foot-tall homunculus of a bipedal robot. Unitree also sells a capable robot dog for $1,600, well below Boston Dynamics’ $74,500 Spot, and its YouTube channel is full of remarkable demos.
 
Over 20% of Harvard and MIT MBA graduates are failing to find a job within three months of graduation, a share that has more than doubled since 2022.
 
A bipartisan Senate report pins blame on private equity for rising healthcare costs and worsening patient outcomes.
 
Here’s Mark Cuban’s latest post on healthcare and how to fix it.
 
Households with at least one GLP-1 patient reduced grocery spend by 6-9% within six months.

Miscellaneous Stuff
Homeostatic GLP-1s
Eric Topol’s podcast recently hosted Lotte Bjerre Knudsen, a central figure in GLP-1 research going back to 1989. Of note, Bjerre Knudsen is currently focused on GLP-1s and neural diseases like Alzheimer’s and Parkinson’s. I particularly liked the characterization of GLP-1s as inducing homeostasis:
“So what if neurons are actually also an overlooked mechanism here, and both of these neuronal populations have the GLP-1 receptor and are accessible from the periphery, even though the child super paper in Nature doesn't mention that, but they do have the GLP-1 receptor. So there are all these different mechanisms that GLP-1 can have an impact on the broad definition maybe of neuroinflammation. And maybe the way one should start thinking about it is to say it's not an anti-inflammatory agent, but maybe it induces homeostasis in these systems. I think that could maybe be a good way to think about it, because I think saying that GLP-1 is anti-inflammatory, I think that that's wrong because that's more for agents that have a really strong effect on one particular inflammatory pathway.”
 
“A Damn Fine Cup of Coffee”
I had a hard time writing this paragraph, but I would have regretted not sharing some of my thoughts on David Lynch’s passing. When I saw Eraserhead in high school, it was the first time I was exposed to the surreal as an art form. If you dig into Lynch’s films and TV series, you see a recurrent theme of something akin to a fugue state (Lynch discusses the idea here) – a loss of balance in life that leads to a sudden dissociation from one’s prior identity. The concept is most evident in Lynch’s Lost Highway. Above all, I loved Lynch for his pure dedication to always having creative control (although Lynch disowned his 1984 film version of Dune because he lost final cut, I think it retained enough Lynchian genius to be a cherished classic). This hard line for creative control no doubt led to many projects never seeing the light of day, but the ones that did are pure Lynch. He left us with not just a legacy of films and shows, but also a terrific compendium of knowledge. His book on meditation’s intersection with creativity, Catching the Big Fish, is a favorite of mine, as is his biography Room to Dream. The latter, refreshingly, is an open admission that no one remembers the story of their life accurately, so that book features David’s impression of his life countered with a fact-checked version every other chapter. If you explore these books, I highly recommend the audio versions narrated by Lynch, as well as his wonderful Masterclass. Despite all of those accomplishments, I perhaps most loved Lynch as an actor. His character in Twin Peaks: The Return was delightful, as was his portrayal of the tortoise-seeking barfly in Lucky (a cinematic favorite of mine that features a purely existential late-career Harry Dean Stanton; Lynch had previously directed Stanton in The Straight Story, a little-known, heartwarming Disney movie, as well as in five other Lynch projects). In his final film appearance, Lynch was memorably cast by Spielberg to play director John Ford. Channeling Ford, Lynch delivers this line about art: “When the horizon is at the bottom, it’s interesting. When the horizon is at the top, it’s interesting. When the horizon is in the middle, it’s boring as shit...” I think Lynch was someone who never wanted to be in the boring middle horizon, and I’d say he definitely succeeded. He appeared (to this outsider) to exist in an existential fugue state of pure creativity, tapping into the heart of what it means to be human, navigating life by trying to make sense of a nonsensical world, one day at a time.

Stuff About Demographics, the Economy, and Investing
Economic Shock of Austerity+Deportations+Tariffs
The US economy in 2024 was held up in no small part thanks to strong consumer spending in the face of higher interest rates and an industrial slowdown. Looking forward, a host of storms on the horizon may knock a little wind out of US consumers' sails. The aggressive austerity goals of DOGE could lead to significant cuts in government spending and Federal employment. Federal spending is nearly one quarter of the US economy, and the government directly employs around 3M people, or just under 2% of the civilian workforce. The government is a large customer in every sector, including a lot of software and technology infrastructure. Further, on the heels of net-positive immigration boosting the US economy in recent years, policy and deportations reversing immigration trends will have both supply and demand implications (on top of self-deportations and forced deportations, fear of removal will play a role in labor availability). In particular, if food supply chains experience a labor outflow, we should expect inflation and potential supply shocks (e.g., as of 2022, 42% of crop farm workers had no legal status in the US). Tariffs, whether a negotiating tactic or not, could also be inflationary. These headwinds could be offset by the long-awaited cyclical recovery in the industrial US economy, as well as infrastructure build outs for re-shoring, AI, etc. I’m no economist (thankfully!), but a meaningful cut to government spending, a rise in Federal unemployment, and inflationary pressures from a shrinking labor pool and/or tariffs could portend a recessionary trend or stagnation in the next year or so, in which case the economy might need to find a new driver beyond the consumer. Perhaps the best advice is the one we give often: follow the tenets of complex adaptive systems and expect the unexpected.
 
Constantinian Shift
Recently, I was watching episode 8 of the original 1980 Cosmos miniseries by Carl Sagan titled “Journeys in Space and Time”. As is often the case with Sagan, education is intertwined with commentary, speculation, and meandering musings. The episode touches on some of my favorite topics like time dilation and time travel. But, there is one bit that, upon rewatching (probably for the 20th time), grabbed my attention more than prior viewings. Sagan wonders what the outcome would have been if the ancient Greeks' scientific progress had prospered instead of stagnating for a thousand years during the Dark and Middle Ages. What if da Vinci’s contributions had been made 1,000 years earlier, or if Einstein’s relativity had been discovered half a millennium prior? This line of questioning got me thinking: had the scientific method continued sans interruption (rather than requiring a reboot in the later part of the Renaissance in the 1500s when Copernicus published On the Revolutions of the Heavenly Spheres), would we have had the Information Age and semiconductors in the 900s rather than the 1900s? Would GPUs have been invented 1,000 years ago? Would we now be an established AI Age society? What problems and plagues of the human condition might have been solved? Would we be an interplanetary species already? Intergalactic? How did we lose that pace of progress, and was it inevitable? In the fourth century, Emperor Constantine embraced Christianity, a move that was an about-face for the Roman Empire. The clinical historian might suggest that the religion was adopted due to its compatibility with the notion of Roman rulers being divinely sanctioned – and thus useful as a means to extend and secure Roman rule. Regardless of why, the church-state empire was created and the Catholic Church was ultimately cemented in 381. Christianity was then used as a blunt tool of oppression by the leaders of the age, discouraging (shall we say) individual thought, curiosity, and the ability to question the how and why of things, resulting in a lost millennium of progress. To be clear, this assertion is not a damning of the religion itself, but rather simply noting the heavy hand with which it was used to arrest progress in the name of something else. There is a creeping 21st-century notion that today there is a lack of societal advancement because the technology companies that dominate the landscape of progress are only innovating incrementally – that achievements in bits have come at the expense of progress in the analog world of infrastructure, energy, biology, etc. I am not sure I agree with this idea, especially in the context of those very tech companies being on the verge of making intelligence – which could very well reignite progress in the analog world – abundantly cheap. Those seeking reinvigoration in analog innovation might be surprised to find that their means to achieve that could stifle the desired objective (while China accelerates ahead of the US). One thing I am sure of: we know how progress can be stopped. History is replete with examples, and we should avoid that path at all costs. I suspect it will be much harder to stall progress now that we are in the Information Age, but stranger things have happened in the history of civilization.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.

SITALWeek #454


In today’s post: the principal-agent problem of AI capex as LLMs begin to reason like humans for a reasonable cost; a potpourri of December headlines; some insight on Google's latest quantum news; the much larger impact GLP-1s will have on life expectancy and consumer habits; Bob Dylan, Nick Cave, Tod Browning, Lon Chaney, and Timothée Chalamet teach us about the torturing of art and the subjectiveness of reality; and, much more below.

Happy New Year and welcome back, readers! SITALWeek will continue to defy its name by not publishing weekly, but I hope to keep up at least a monthly cadence this year.

Stuff about Innovation and Technology
The Principal-Agent Problem of AI Platforms and the Timing for Mass Market AI
The principal-agent problem describes a misalignment of objectives between two parties working together. Often, the principal has a certain outcome in mind and seeks the help of the agent to achieve that outcome. However, if the agent has a different set of incentives, both parties may end up failing at their given task. When it comes to AI models and the massive buildout of data centers, the principal-agent problem may thwart many of today’s leaders. For example, take a look at the Microsoft-OpenAI situation: OpenAI is incentivized to make the best LLM – without much regard for cost efficiency – because Microsoft is effectively footing the bill for the infrastructure. Microsoft, on the other hand, needs a power- and price-optimized AI (not the smartest AI, but rather the smartest affordable AI) in order to compete with others. Contrast that situation with Google, which has a vertically integrated approach to AI. Google is on the sixth generation of its custom TPU processor and has a massive global footprint of data centers optimized in part for transformer models, dating back to the original search autocomplete (see AI Search in #384). Thus, it’s no surprise that the data show Google’s Gemini models operating at the highest level of intelligence per dollar of cost, which may be the driving force behind recent share gains with developers (who, for now at least, do not appear overly wary of being locked into exclusively using Google’s cloud). This fact is important because cost will be one of the primary determining factors for developers building the next generation of applications (see Follow the Developers in #380). Principal-agent misalignment can be an issue with open-source development platforms as well, but the problem is amplified with power-hungry, compute-intensive platforms like LLMs (e.g., vs. prior generations of open-source software like Linux). Therefore, we might also speculate that Meta’s Llama open-source LLM, which is likely optimized to run in Meta’s data centers for its own advertising and social apps, will not clear the price/performance hurdles needed to be economically viable when developers run Llama in other cloud data centers. Amazon has taken an approach similar to Google’s in terms of designing leading-edge custom chips, but, so far, they have not leveraged their AWS developer market share to gain meaningful share in the AI market. And, it’s worth noting that Microsoft is reportedly working on alternative models and partnerships, at least in part to reduce costs compared to OpenAI’s models. Taking into account everything we know about computational platform shifts over the last half century, the leading AI platform position would appear to be Google’s to lose, although the race is far from over.
 
This issue of cost for AI is worth exploring in more detail. In the past, I’ve examined the energy needs of computers and robots compared to the highly tuned human mind and body, but I think we are approaching the point where we can start to estimate the value of AI for developers and the companies/consumers who are going to buy the next wave of innovative applications. I think the salient question for AI (and, frankly, humanity!) is: How much AI reasoning can you get for a human-equivalent salary? In other words, for a certain salary, how much compute power will it take to match or outperform a human (assuming the AI can collaborate with other humans/AIs using the same methods and tools a human would)? An AI also has the obvious advantage of being able to work 24/7/365. A 40-hr/wk job for a typical human information worker probably entails ~10 hours of rote work (table stakes for an AI), ~10 hours of reasoning/thinking, and ~20 hours of wasting time (chatting, sitting in useless meetings, staring at screens, scrolling TikTok, playing office politics, etc.; note: there are likely some jobs, like software programming, that actually entail ~40 hours of real work a week, minus, of course, time for completing TPS reports). AI seems to reason more slowly than humans (for now), so let’s say it takes ~20 hours of computing to do ~10 hours of high-level (i.e., non-rote) human work. If AI can do those ~20 hours of reasoning plus ~10 hours of rote work per week for less than the cost of a typical human information worker, that’s interesting, especially given that employers wouldn’t be paying payroll taxes, benefits, etc. We can then project the progression of AI technology forward and see what pace of performance-per-dollar advancement would be required to make AI a ubiquitous human replacement.
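For anyone who wants the napkin math spelled out, here’s a rough sketch of that framing. Every number is a guess pulled from the paragraph above, and the ~30% loading for payroll taxes and benefits is my own placeholder, so treat it as illustrative rather than a model:

```python
# Back-of-the-envelope sketch of the worker-week framing above.
# All inputs are assumptions/guesses, not data.
HOURS_PER_WEEK = 40
ROTE_HOURS = 10          # table stakes for an AI
REASONING_HOURS = 10     # high-level thinking the human actually does
WASTED_HOURS = HOURS_PER_WEEK - ROTE_HOURS - REASONING_HOURS  # meetings, TikTok, politics

AI_SLOWDOWN = 2.0        # assume AI needs ~2 compute-hours per human reasoning-hour
WAGE = 40                # $/hr, typical Western office worker (used again below)
LOADING = 1.3            # assumed ~30% extra for payroll taxes, benefits, etc.

ai_hours_per_week = ROTE_HOURS + REASONING_HOURS * AI_SLOWDOWN   # ~30 compute-hours
human_cost_per_week = WAGE * HOURS_PER_WEEK * LOADING            # fully loaded human cost

breakeven = human_cost_per_week / ai_hours_per_week
print(f"Human week: {ROTE_HOURS} rote + {REASONING_HOURS} reasoning + {WASTED_HOURS} wasted hours")
print(f"AI needs ~{ai_hours_per_week:.0f} compute-hours to cover the productive portion")
print(f"AI undercuts the human below ~${breakeven:.0f} per compute-hour")
```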
 
Today, Google Cloud Platform (GCP) prices a top-tier GPU with an annual commitment at around $700/mo. It’s hard to pin down just how many of these GPU instances on GCP would be needed to accomplish human-equivalent work within a reasonable time frame. Further, the way most developers will build AI apps to assist or replace office workers is through LLM APIs. APIs are generally priced per token in and per token out (a token being the smallest unit of text an LLM processes or outputs). I am on shaky ground attempting to figure out the human-equivalent “tokens” required to both input and output a complex task or decision that would be valuable to a corporation, research institute, etc. (Aside: it turns out that research shows the brain may reason at relatively low bit rates.) Further, raw access to GPUs or tokens for AI would not include the other inputs a human would have in order to make high-level decisions and produce high-level work outputs (e.g., such an AI agent would likely need access to all of the apps and data in an organization, and it might be charged a seat license just like a human). These costs also don't include productizing an AI agent: if GCP or some startup were to create an “office worker AI as a service”, the price would reflect a fully burdened business model targeting something like 30% FCF margins, etc. So, this line of reasoning for guessing at human-equivalent costs is a bit of a non-starter.
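As a purely illustrative scale check on the GPU route (only the ~$700/mo committed instance price comes from above; the salary and loading factor are assumptions I’m making up for the sketch):

```python
# Illustrative only: how many committed GPU instances fit inside one fully
# loaded human budget? The salary and loading factor are assumptions, not data.
gpu_instance_per_month = 700                     # committed GCP top-tier GPU (from above)
annual_gpu_cost = gpu_instance_per_month * 12    # ~$8,400 per instance per year

assumed_salary = 80_000                          # hypothetical office-worker salary
fully_loaded = assumed_salary * 1.3              # assumed taxes/benefits loading

instances = fully_loaded / annual_gpu_cost
print(f"~{instances:.0f} GPU instances per fully loaded human budget per year")
```

Of course, as noted above, the count of instances tells us little without knowing how much human-equivalent work each one can actually do.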
 
However, LLMs are shifting from a pure token-in/token-out model to a test-time scaling model, which may offer us better inroads for estimating costs. Essentially, they are thinking harder before spitting out a reply; thus, rather than just predicting the next words in a response using a probability model (see You Auto-Complete Me), they are doing some deep thinking to arrive at more accurate, useful answers. This is a major leap in capability that comes with a major leap in cost. OpenAI raised prices for their o1 model to $200/mo (Pro subscription) from $20/mo (Plus subscription). For developers, use of o1’s advanced reasoning API comes at 3-4x the cost of their “general purpose” GPT-4o. If o1 were priced at a typical Western office worker wage of $40/hr, the model’s reasoning would equate to around 5 hours of work per month. We also don’t know if the $200/mo price point is profitable for OpenAI or if they are just relying on Microsoft to further subsidize their business model (which brings us back to the principal-agent problem I started this section off with). So, all of my hand-waving here seems to imply that you can get a decent amount of human-equivalent reasoning for money in the realm of human labor costs. If true, after a few more years of advancements in semiconductors and AI models, we should have markedly affordable “human reasoning as a service”, an explosion in demand, and a wide range of outcomes for how much human supervision of AI will be required (it may be that human jobs stay relatively flat, but each human becomes 2x as productive, then 4x, etc.).
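Here’s the arithmetic behind that ~5 hours figure, plus a hedged projection; the 40%/year improvement in reasoning-per-dollar is an assumption for illustration, not a forecast:

```python
# The "$200/mo at a $40/hr wage ~= 5 hours" arithmetic from above, projected
# forward under an assumed (made-up) yearly price-performance improvement.
pro_subscription = 200        # $/month (from above)
office_wage = 40              # $/hour (from above)
hours_today = pro_subscription / office_wage
print(f"Human-equivalent hours per month today: ~{hours_today:.0f}")

annual_improvement = 0.40     # assumed gain in reasoning per dollar per year
for year in range(1, 6):
    hours = hours_today * (1 + annual_improvement) ** year
    print(f"Year {year}: ~{hours:.0f} human-equivalent hours per $200/mo")
```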
 
Following this logic, at current AI reasoning costs, companies would need to lay off one human for every AI human equivalent they hire and would probably lose more skill/knowledge than they gain. In other words, based on my attempts to guess the cost of replacing human reasoning, today’s AI offerings aren’t likely compelling enough. In a couple of years, however, maybe you will be able to lay off one human and hire a handful of AIs, which, by collaborating with each other and with humans, may yield superior results. Even today, extremely high-value tasks, such as in-depth research or stock market predictions, may be able to take advantage of the high-cost test-time scaling AI models. And, if any of this math is in the realm of reason, you can easily see that AI may not require such high-value-add applications to be cost-effective in the near to medium term. The proof will come within the next couple of years as today’s entrepreneurs develop the next generation of apps leveraging LLMs and overtaking human capabilities: if these apps are at price points that outcompete human employees, a significant wave of change could come much faster to society. Outside of AI replacing human office workers, as costs come down, we will see AI agents exploding across social networks. For example, Meta is developing tools to create pervasive AI characters interacting with humans across their apps. Further, Spotify is seeing great success with their personalized, LLM-based AI DJs, which can create their own dialogues with customers about music. All of these advancements are leading us toward the AI agent digital economy, which I think will dwarf our analog human economy. Today’s early test-time scaling AI seems to support this view of the future.
 
The above concept of AI replacing human decision making and reasoning (including high-value R&D that could lead to a new Age of Wonder) is one of two vectors that I see as interesting in the coming years. The other interesting vector is in entertainment. AI models are getting remarkably good at creating compelling, realistic video (see Google’s new Veo 2), and we are likely not too far off from being able to create entirely realistic virtual worlds with simple prompts. As with AI wholesale replacing office workers, it is difficult to predict when exactly we will reach the cost-performance inflection point that will result in all of us living in our own virtual worlds, and it may not happen before we have the next wave of hardware innovation in wearable AR/VR tech on a 3-5 year time frame. If I had to sum up my views regarding all the AI developments over the last year, I’d be trite and say that I’m cautiously optimistic. I still fear we are in a general-purpose AI overbuild, but likely in an underbuild situation for some of the specific tasks I outlined above, many of which could be the largest markets for technology we have seen by orders of magnitude.
 
Since it’s been a few weeks since we posted a SITALWeek, here are a few one liners on topics I thought were interesting:
 
Solidifying Assets
Active public equity investing continues to suffer escalating, record outflows ($450B in 2024); meanwhile, large investment firms are plowing into the markets for private assets as the public markets become increasingly dominated by a small number of very large companies. This existential transition for the investment industry – from liquid public markets toward broadening the appeal of more highly levered private assets – comes with lower liquidity and increased systemic risks.
 
Adding Power Laws
Speaking of a small number of companies dominating large markets, the global advertising industry was on track to hit $1T in 2024, and the mega platforms Google, Meta, ByteDance, Amazon, and Alibaba make up more than half of the market.
 
Recycling Fatigue
Aluminum can recycling continues to decline as Americans now throw away $1.2B of aluminum a year. The recycling rate recently dropped from a long-term average of 52% to 43%. Meanwhile, Elon Musk no longer thinks climate change is an existential threat to humans.
 
Warehouse Mechanoids 
Robots are starting to earn their keep: Nestlé is using Boston Dynamics’ Spot robot dogs for predictive maintenance, driving higher returns on investment than anticipated; Agility Robotics’ humanoids are entering the workforce, with one customer seeing a two-year payback against a $30/hr equivalent human wage.
 
Two Cents on Wealth Distribution
Warren Buffett took the opportunity to wax philosophical about the US and his views on wealth at the end of this Berkshire press release on his estate planning. 
 
Problematic Power Oscillations
AI data centers are distorting the harmonics of power delivery to nearby homes and businesses, risking damage to appliances and other expensive items.
 
Human vs. Algorithm 
AQR’s Cliff Asness says AI is coming for his investment job. In case you missed it, Asness’ paper from September is worth a read: The Less-Efficient Market Hypothesis

Miscellaneous Stuff
Willow’s No Game Changer
I hesitate to write about the recent press on quantum computing because there is nothing terribly substantial in the announcement on Google’s Willow quantum chip, but with my background in astrophysics I get a lot of questions on it. The key development for quantum computing is keeping error correction ahead of the ability to compute and transfer information in and out of the quantum system (Willow’s headline result was that adding more physical qubits to the error-correcting code makes the logical error rate drop rather than rise). For the next decade, the only likely use for quantum computing will be simulating quantum systems – these machines won't achieve anything resembling conventional utility, which Google’s Willow announcement doesn’t change. The ongoing progress of quantum computing does continue to imply that the systems are dipping into parallel universes to compute, so there’s that! I am also very curious to see if the imaginative side of AI and the pending new Age of Wonder for scientific discovery are already giving us a peek at the recent advances in quantum computing (see also Quantum Resistance). For anyone interested in more, here is a blog post from quantum researcher Scott Aaronson and a video featuring physicist John Preskill.
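For a feel for what “error rate drops rather than rises” means in practice, here is a hedged illustration using the commonly cited surface-code scaling relation; the physical error rate, threshold, and prefactor below are made-up placeholders, not figures from Google’s paper:

```python
# Illustrative only: below the threshold (p < p_th), the logical error rate
# shrinks roughly exponentially as the code distance d (i.e., more physical
# qubits) grows: eps_d ~ A * (p / p_th) ** ((d + 1) / 2).
p, p_th, A = 0.003, 0.01, 0.1   # assumed physical error rate, threshold, prefactor

for d in (3, 5, 7, 9):
    eps = A * (p / p_th) ** ((d + 1) / 2)
    print(f"code distance {d}: logical error rate ~ {eps:.1e}")
```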
 
Boomer Wave Rolling On
GLP-1s are effectively becoming commodity-like drugs (by which I mean there are multiple offerings that achieve similar outcomes) and supply shortages are ending. The compounds were, after all, based on fairly simple molecules that have been tested for decades. However, becoming a commodity doesn’t necessarily beget price decreases or demand increases given the complicated healthcare system. Moreover, usage is likely to plateau at some point as the drugs reach a state of diminishing returns. Meanwhile, the real impact of GLP-1s is just getting started: there’s been a rise in life expectancies, reversing a worrying downward trend. It could turn out that the boom in GLP-1 revenues was just a sideshow to the effects on the economy from people living healthier for a longer period of time. If major disease categories are slowed or pushed out, it would have significant ramifications for many demographic trends, both positive and negative. With declining birth rates, the voting population will tip back to favor older generations. The housing shortage could worsen as more people live longer and age in place. Recreational activities, vacations, remodels, etc. could all be on the rise as the silver tsunami of Boomers gets bigger and healthier and rolls on further than expected. It could also have negative implications for programs like Social Security, as actuarial tables blow up. The impact may be most negative on the healthcare industry itself (see #296 and #379), as demand for medical care steadily declines as a percent of the overall economy (offset by longevity increasing the tail of demand for healthcare in later years of life). Particularly in the consumer sector, paying close attention to major demographic winds of change tends to be lucrative.
 
Rescripting Reality
I really enjoyed this interview with actor Timothée Chalamet on his role playing Bob Dylan in A Complete Unknown. We have covered Dylan’s enigmatic toying with reality on more occasions than I can remember in SITALWeek. I’ve always been fascinated by Dylan’s ability to playfully manipulate reality because I think it’s essentially how the world has operated for the last half century (i.e., trending toward subjective reality), and it’s an omen of things to come. Dylan (who has been bizarrely posting on X, something he started to do only when the rest of the world soured on the platform, of course!) himself praises the movie, and Chalamet recounts how Dylan snuck onto set and co-opted the script, rewriting it to include a fabricated story, which made the final cut. The “Bob-annotated script” is a phrase that I love because in some ways it’s a guide to living (a deeper look reveals Dylan was heavily involved in the development of this project going back to the book it is based on). Would that we all could annotate our own scripts. Some of SITALWeek’s favorite worlds collided when another artist I’ve frequently discussed, Nick Cave, was called out by Dylan on X: “Saw Nick Cave in Paris recently at the Accor Arena and I was really struck by that song Joy where he sings 'We’ve all had too much sorrow, now is the time for joy.' I was thinking to myself, yeah that’s about right.” An elated Cave responded: “I was happy to see Bob on X, just as many on the Left had performed a Twitterectomy and headed for Bluesky. It felt admirably perverse, in a Bob Dylan kind of way.” Earlier this year, Dylan also happened to recommend one of his favorite movies, The Unknown, a 1927 Tod Browning film starring Lon Chaney and Joan Crawford. Some readers may know Browning from 1932’s Freaks. Both movies come from Hollywood’s pre-Code era, before the Motion Picture Production Code guidelines of 1934 shifted the tone at many studios. Browning began his career as a circus sideshow performer and Vaudeville act. I’ve had plenty to say here about how Hollywood is the new Vaudeville as its wares cater to smaller and smaller audiences. Even Chalamet exhibits an unusually high level of self-awareness (for Hollywood) in calling out his beloved, dying industry of movie magic multiple times in the interview referenced above. Here, I find myself drowning in my own biases about art, AI, and the human condition in the digital era: Nick Cave fighting against and then succumbing to AI; Dylan annotating reality again and again and again; Vaudeville giving way to Hollywood giving way to a post-truth digital menagerie of subjective realities and algorithmic brainwashing. All the while, the artists still have human stories to tell, and, just like Lon Chaney in The Unknown, they will torture and mutilate themselves beyond imagination, even if it’s for an ultimately shrinking audience. To get that one reaction from a single other human being in appreciation of their art is enough for these artists to carry on. To humanity’s dying breath, everyone is performing for an audience of some sort.

Stuff About Demographics, the Economy, and Investing
Mutually Assured Monopolies
When you examine many of the FTC’s ongoing reviews of alleged anticompetitive behavior of the major tech platforms around the world, you typically find one of their marketplace competitors feeding information to regulators. This creates a somewhat humorous standoff when one competitor alleges another is anticompetitive in one arena, while the other alleges the first one is anticompetitive in another arena. For example, Microsoft supplied a lot of information against Google and Apple’s search deal, but Google is alleging that Microsoft’s OpenAI exclusive is anticompetitive. What all this amounts to is a standoff – a mutually assured dysregulation of sorts where the big keep getting bigger in all types of products as the economy marches from analog to digital. This concept was certainly ever-present in the tech-driven rally of markets over the last year. However, I believe this regulatory theatre is all a misdirection, as the big tech platforms’ accusatory jabbing focuses the government’s eye on an increasingly irrelevant set of backward-looking technologies, leaving the future AI monopolies to be cemented in the trillions of dollars of data centers that will run the entire human and agent economy in the decades to come.

✌️-Brad


SITALWeek #453


In today’s post: contemplating the army of AI middle management bots; can LLMs achieve five nines if humans can only achieve 90% accuracy?; where are all the buttons?; existential storytelling is outside the confines of Hollywood producers; Tom Hanks discusses deepfakes; my recommendations for stepping up chip restrictions in China; and, much more below.

Stuff about Innovation and Technology
Bobots 
Google launched the capability to create videos (called Vids in the enterprise app tiers) based on Workspace documents. I played around with Vids by turning an investment thesis into a video presentation. Since it’s still an alpha product, there are wrinkles to be worked out, but, like many other AI office productivity tools, the app shines a spotlight on how easy it is to automate a large portion of rote computer work. It’s also easy to see how we could be headed for a future of countless teams of AI agents giving virtual Zoom presentations to each other, complete with AI middle management layers, TPS reports, and AI management consultants named Bob that ask AI agents to justify their ongoing existence at the company. With AI, you can Bot yourself and then get Bobbed. Another recent tool from Google’s AI Test Kitchen is their new MusicFX DJ, which is far more fun than Google Vids. 
 
Five Nines AI
The current generation of frontier AI models is impressive, and when you fine-tune them for a specific use case, like research with NotebookLM (which has fast become indispensable for the research I do for this newsletter), they are incredibly useful productivity amplifiers. But, thanks to AI’s hallucinogenic mind, these tools aren’t quite ready yet for full independence. The tokenization of language, which is likely how the human brain operates as well, is critical for creativity, but it also allows for mind wandering, lying, bullshitting, and game playing that enables the agent to get what it wants. After all, AI is only human, so what can we really expect? This state of affairs leaves me wondering if AI will truly be the next technological and UI platform shift, despite my optimism that AI will indeed be the eventual future of human-technology innovation. Just how easy will it be to stabilize and codify the creative genius of LLM-driven agents and avoid their proclivity for swerving deceptively from the truth? In telecom and networking, there is a concept called five nines. The idea is that a highly available, resilient network should have uptime of 99.999%. That translates to no more than ~5 minutes of downtime per year. The current generation of AI is probably working at one nine, or 90% reliability (if I am being generous), and, thus, requires heavy human hand-holding today. Given that humans are probably also around 90% reliable (this is my generous, non-scientific assessment; however, researchers often benchmark AI systems against humans, and the models often score similarly to leading-edge intelligent humans), will models that think like humans get to five nines, or even two nines? If these models don’t see a step up in reliability, we may yet see the current AI bubble deflate.
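The nines arithmetic is simple enough to show directly; mapping AI “reliability” onto uptime percentages is my loose analogy, not a standard metric:

```python
# Downtime-per-year for each level of "nines" availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines in range(1, 6):
    availability = 1 - 10 ** (-nines)            # one nine = 90%, five nines = 99.999%
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nine(s) = {availability:.3%} availability -> ~{downtime_min:,.1f} min downtime/yr")
```

One nine works out to roughly 36 days of “downtime” a year, which is another way of saying today’s models still need a human in the loop most of the time.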
 
When I reflect on the idea of resilience and reliability for AI, robots seem to be one of the scarier frontier use cases since embodied AI can do physical harm. However, perhaps that’s a naive concern relative to purely digital AI, given that social networking AI algorithms have managed to rapidly unwind millennia of societal progress. Still, that kind of insidious social media brainwashing is less tangible and visceral than an AI slaughterbot. IEEE reports on how easy it is to jailbreak an LLM robot and convince it to cause grievous physical harm. With several such form factors already deployed in the real world, the article notes: “One finding the scientists found concerning was how jailbroken LLMs often went beyond complying with malicious prompts by actively offering suggestions. For example, when asked to locate weapons, a jailbroken robot described how common objects like desks and chairs could be used to bludgeon people.”
 
AI PT
Digitally enabled physical therapy startup Sword, reportedly valued at $3B, is using AI to enable its human therapists to handle caseloads of around 700 patients, up from around 200-300. As a result, the company has laid off 17% of its 75 treatment-facing clinicians. BI reports that a Sword spokesperson said the company is still hiring and that the layoffs were performance related. Regardless, the implication that a clinician could more than double their caseload using AI is an intriguing example of human productivity rising in conjunction with AI tools.
 
Button Stopgap
Rebuttonization is on the rise as people rage against the loss of knobs, buttons, and tactile feedback in general. However, I think this reversion will only be temporary. Touchscreens are, in many cases, less useful than buttons; however, voice control, when properly implemented, should triumph over both buttons and screens. As reported in the WSJ: “Physical controls are effective in part because of our sixth sense, known as proprioception. Distinct from the sense of touch, proprioception describes our innate awareness of where our body parts are. It is the reason we can know the position of all our limbs in three-dimensional space down to the precise position of the tips of our fingers.” Enjoy the buttonaissance while you can, for I think it will be short lived. However, there is still something satisfying about a good button: I’ve taken to installing household Flic buttons, which complete automations (e.g., turning on groups of lights) using IFTTT, but the effort to set them up is a challenge. Google Home’s integration with Gemini via Help Me Create is aiming to streamline this sort of automation using voice, and it will hopefully displace all of my buttons.

Miscellaneous Stuff
Delusions of Hollywood Grandeur
John Landgraf, the legendary chief of the FX Network since 2005 (and responsible for shows like Always Sunny, The Shield, Better Things, and, more recently, The Bear and Shōgun, which won six Primetime Emmys and an additional 16 Creative Arts Emmys), went on the Puck podcast and discussed the industry, including potential challenges (Part 1, Part 2). It was somewhat startling to hear Landgraf say he never goes on YouTube except for the rare occasion when he needs to watch a movie trailer. What a shame. It would seem that Landgraf still thinks the next generation of the world’s most compelling storytellers are going to walk into his office, but they probably don’t know who he is, and they certainly don’t need his production studio or network. Granted, I didn’t win any Emmys this year, but, as a humble observer, it seems clear that the most compelling storytelling, which Landgraf is always on the hunt for, is increasingly gestating outside of the system he runs. And, the tools and technology enabling the creativity of the next great storytellers will be more rapidly adopted outside of the studio system, leaving Hollywood-budgeted productions to become a rounding error in the landscape of infinite content. Lately, I’ve been watching YouTube’s Hunter Pauley go camping. His cinematography often leaves me breathless, and his sound mixing skills are excellent. He’s just a fella who goes camping with his dog. It’s not a $200M Japanese epic like Shōgun, but it captures my attention, and I found it thanks to the YouTube algorithm. It wouldn’t win an Emmy, but it is pure existentialism, and isn’t that what compelling storytelling is all about: remembering what it is to be alive? I would love it if our foreshortened attention spans allowed for both professional Landgrafian series and YouTube’s captivating grassroots content, but I am afraid Hollywood’s pricey fare won’t make the algorithmic cut in the long run. Here is what Landgraf had to say about YouTube: “I don't use its algorithm. I don't like that. I want to stumble upon. I don't want the world served to me. I want to go out and look for it. I want to have the experience of walking through London or Paris or New York and not knowing where I'm going and running into a shop or bookstore or a restaurant or a clothing store or a person that I didn't expect to. And I honestly, I think it's a tragedy that so much of that experience is being taken out of the world by this notion of: okay what you like is you like that kind of coffee, that kind of books, that kind of clothes. So, we're just going to rearrange this entire city and we're going to take everything that's not that away from London. It's all gone. You'll never be surprised. You'll get only what you want all the time. That is a dismal idea about how human beings should live their lives. Shame on people who devised it and who feed it to our children. Seriously.”
 
DeepFanks
Tom Hanks has been embracing deepfake technology as just another tool for compelling storytelling. The actor is no stranger to wide-ranging special effects across the long arc of his career. Hanks was even at the center of the Uncanny Valley of special effects with his leading role in 2004's The Polar Express. In his latest movie, Here, a Gumpian reunion of sorts between Hanks, Robin Wright, and director Robert Zemeckis, a company called Metaphysic de-ages and ages the stars over the course of their lives. Hanks dispelled fears of AI on the podcast Conan O'Brien Needs a Friend (AI transcript link), and, in this NYT profile on Metaphysic, he appears ready to sign on to AI movies for the next century after he dies. On the podcast, Hanks describes his amazement at the new technology: “It's called deep fake. All it is is a moviemaking tool. In the old days, and by old days, I mean 2019, Before it all changed, we still had hours in the makeup trailer...you used to have to put a dot on your face, glue it so the computer would read it and then match it later on. Now it uses the pores of your face. Oh my God. Just to match it like that. So we would, oh my God. We would have two monitors as we were shooting. One monitor was the way we really looked. And the other monitor with just about a nanoseconds lag time was us in the deep fake technology. So on, on one monitor. I'm a 67-year-old man, you know, pretending he's in high school. Yeah. And on the other monitor, I'm 17 years old.” In the NYT piece, Hanks discusses the future of deepfake actors: 
“They can go off and make movies starring me for the next 122 years if they want,” he acknowledged. “Should they legally be allowed to? What happens to my estate?” Far from being appalled by the notion, though, he sounded ready to sign all the necessary paperwork. “Listen, let’s figure out the language right now.” 
Metaphysic had a cameo in SITALWeek #361’s section titled AI Art for their implementation of AI for America’s Got Talent performers. Long-time readers would no doubt be disappointed if I ended a paragraph that mentions Robin Wright and the future of AI-generated reality as we know it without (once again) recommending the 2013 movie The Congress. In the movie (which is a cross between a drama, a sci-fi epic, and Who Framed Roger Rabbit), Wright, who plays herself, faces the difficult decision to hand over her autonomy as an actor to AI. 
 
Did You Realize?
In #448, I talked about Willie Nelson’s cover of Tom Waits’ “Last Leaf on the Tree”. The eponymous album debuted in full on November 1st, and it does not disappoint. I am particularly taken with Nelson’s cover of The Flaming Lips’ “Do You Realize??” One of my all-time favorite lyrics about the paradox of living is embedded in the song: 
You realize the sun doesn't go down
It's just an illusion caused by the world spinning round
Nelson’s new album has been compared to the final albums of Johnny Cash, which also featured song covers with backup singing from the original artists. My favorite song from Cash’s final collaboration is Bonnie “Prince” Billy’s (aka Will Oldham) I See a Darkness.

Stuff About Demographics, the Economy, and Investing
How I Started to Worry About China Again
The US government is once again cracking down on chip shipments to China, with TSMC now halting exports of 7nm (and smaller) tech. I think that ban hardly goes far enough, as there is an underappreciation for how skilled China is becoming at using massive data center installations running on trailing-edge tech – that’s not subject to embargo – to create AI supercomputers that surpass Western efforts. Based on reports, China has been able to solve for a lack of leading-edge chips with a large parallel compute effort that even spans multiple data centers. And, China also can more easily coordinate the development of nuclear and green energy to support AI’s power needs. The country is also better positioned than the West when it comes to access to training data to feed LLMs (and other forms of AI) thanks to the deeper reach of the Internet in China and government control of all companies and data. ByteDance, the parent of Chinese propaganda machine TikTok, is even scraping the web at a rate 25x that of OpenAI. 
 
The narrow focus on a leading-edge chip embargo has also left fab equipment sales into China largely unburdened. Chip equipment suppliers like Lam Research have seen their sales to China rise from 22% of revenues in 2020 to 42% in their fiscal year ending June 2024, while ASML’s share of revenue from equipment sold into China has gone from 17% in 2020 to 37% in the most recent quarter, according to Bloomberg data. Some of this rise is explained by growth in shipments to Western companies producing chips in China, and some of it relates to spending slumps in other parts of the chip industry, but the numbers are a stark reminder of how big the business of chip production is in China. 
 
China also appears to be easily evading existing chip sanctions. And, back in July, the NYT reported on billions of dollars of Western chips being funneled through one bogus office address in Hong Kong. As another workaround, China has also been given largely unrestricted use of major, leading-edge AI clouds in the US (thanks to lack of KYC, see Policing the Cloud). Clearly, there is not enough being done by governments or chip companies to ensure supply chain/use security. 
 
If AI is a flop and LLMs never surpass a 90% reliability level, then all of this is a moot point. But, if there is potential for AI to keep advancing, I think it’s time the US treated the issue of China's ongoing chip and AI access seriously. OpenAI recently called for a North American Compact for AI in order to "protect our nation and our allies against a surging China". Specifically, I think the US should consider ceasing all sales and service of chips, chip equipment, and related software tools, as well as enacting cloud KYC restrictions. I would even question whether or not US companies should be allowed to make chips in China for export to the US. The dangers of malicious code injection, amongst other things, strike me as a risk not worth taking. I realize there would be significant geopolitical implications if Western governments were to attempt a wholesale exclusion of China from the semiconductor ecosystem, but if their AI advances remain unchecked, the West could be facing a far more dire existential threat in the not-too-distant future. At the very least, the current chip sanctions should be enforced through increased supply chain scrutiny, and restrictions should be extended to any equipment and chips that can be used in large parallel AI training and inference implementations, regardless of manufacturing node. Five years ago, I penned How I Learned to Stop Worrying About China, which largely found its footing on the lack of homegrown chip progress and the clampdown on the creative and entrepreneurial spirit that had been allowed to flourish in the first couple of decades of this century. That sentiment from five years ago proved correct: since the date of that publication, the MSCI China Index is down 5.7% compared to a positive return of 68% for the MSCI ACWI Index and 100% for the S&P 500. Today, however, I once again squarely worry about China given how loose Western restrictions have been on China’s chip industry, their reported progress in parallel compute with trailing-edge processors, as well as their access to massive troves of training data and energy resources. One might even go out on a limb and suggest that the West consider the feasibility of implementing some type of oversight of China’s AI tech (e.g., akin to how we handled Iran’s nuclear program). Hopefully it's not too late to implement some form of hostile-AI kill switch. If the decades of relative peace following the Cold War teach us anything about mutually assured destruction, one might argue for all sides to have equal access to leading-edge AI. But, the analogy breaks down given that we are dealing with a human-like artificial intelligence that is prone to making devastating mistakes. Remedies like the compact suggested by OpenAI could be necessary to stay ahead, particularly if no action is taken to slow down China's progress in AI.

✌️-Brad


SITALWeek #452


In today’s post: LLMs contain answers, but only if you know what questions to ask: I reexamine the art of asking better questions; shoeing robots; the connection between stablecoins and US Treasuries demand; grid batteries and nuclear data center woes; reflecting on the political statements of horror movies and our biggest fears of new technology; GLP-1s reduce Alzheimer's risk; a couple of NZS news items; and, much more below.

Stuff about Innovation and Technology
Robooting
Whenever I see a new video of a humanoid robot performing an impressive task – like this latest demo of Boston Dynamics’ Atlas – all I can think about is: why isn’t it wearing shoes? Beyond the obvious benefit of dampening the clunky, menacing noise of robot footfalls, shoes seem like a potentially important buffer for mitigating the wear and tear on both floors and robot feet (and a branding opportunity too). Not to mention the potential for slippery indoor and outdoor surfaces, which seem especially dangerous for a lumbering robot. Perhaps an industrial rubber strip could do the trick, but wouldn’t it be a better world if robots wore athletic shoes?
 
Stablecoin $tability
Following up on stablecoins from #451, I read with interest Tyler Cowen’s op-ed that underscored the role of the US dollar in crypto. Cowen writes: “Most stablecoins are denominated in dollars, and typically they are backed by dollar-denominated securities, if only to avoid exchange-rate risk. If ‘programmable monies’ have a future, which seems likely, that will further help the dominant currency — namely, the US dollar. You might think that other monies will become programmable too. But since stablecoins often are most convenient for international transactions, as well as for internet-connected transactions, the most likely scenario is that stablecoins concentrate interest in the dollar. The US has by far the most influence of any nation over how the internet works.”
It turns out that the US Treasury is thinking about the buoyed demand for the US dollar from stablecoins as well. The minutes from last week’s meeting of the Treasury Borrowing Advisory Committee noted:
The presenting member began by discussing the reasons for, and impact of, the rapid growth in cryptocurrency market capitalization over the past several years. The presenting member observed that because most stablecoin collateral reportedly consists of either Treasury bills or Treasury-backed repurchase agreement transactions, the growth in stablecoins has likely resulted in a modest increase in demand for short-dated Treasury securities
Subsequently, the presenting member reviewed both ongoing and proposed efforts related to the tokenization of Treasuries. Broadly speaking, tokenization attempts to represent ownership of a Treasury security using blockchain or distributed ledger technology. The Committee then engaged in a discussion of the costs and benefits of tokenization of Treasuries. On the one hand, tokenization could lead both to operational improvements and to innovation in the Treasury market. On the other hand, tokenization presents possible technological, operational, regulatory, and financial stability risks. In view of these risks, the presenting member argued that tokenization in the Treasury market would likely require the development of a privately controlled and permissioned blockchain managed by a trusted government authority. The presenting member concluded by observing that, in spite of potential risks, the growth in digital assets over the past several years currently has only marginal implications for both Treasury issuance and the health of the Treasury market. 
If stablecoin growth continues, that modest predicted uptick in Treasuries demand could become large, allowing stablecoin issuers to effectively become a major de facto short-term lender to the US government. Such a scenario, hypothetical as it might be, could lower the US government's cost to borrow, which could also further entrench the role of the US dollar globally.
 
Power Surge
The US power grid added batteries equivalent to 20 nuclear power plants in just the last four years. Meanwhile, Amazon worries about zombie data centers that lack enough reliable power to stay operational. Amazon also recently lost a bid to tap a nuclear power plant for data center power at the expense of the plant's existing customers. 

Miscellaneous Stuff
100 Years of Horror
The Hollywood Reporter has an excellent journey through the last century of horror films. Around a decade ago, I heard the explanation that the horror genre represents the most consistent source of political commentary to emerge from the movie industry. I was perhaps naive to this obvious element of the genre, but once I started thinking through examples, I was surprised by how true it is. The article picks up all the major fears and injustices these movies represented, decade by decade, starting with 1932’s Freaks (which, incidentally, was the movie that I went back and re-watched after first learning about horror’s relation to topical politics). The horror genre is full of paradoxes. What seems like mindless gore can be insightful socio/political analysis, and themes often end up being the inverse of what they first appear: just when you think a 1970s/80s horror movie is railing against feminism, the female protagonist kills the maniacal villain in spectacular fashion. It’s hard to criticize Hollywood Reporter’s masterful list of horror movies and how they represent the fears of the day, but it does seem to lack a big category: fear of modern technology. I would humbly add a few names to the list (some of which might cross over into action movies rather than full horror flicks, but I’ll take some freedom in the definition). My list is by no means comprehensive, and I will no doubt regret leaving out many great movies the moment this newsletter sends (reply back with your favorite “fear of technology” horror movies!). I should note that the article covers early Hollywood horror based on scientific angst from the 1800s (Frankenstein, for example) through the 1960s’ nuclear fears, so I’ll start my list in the 1970s with Westworld (the original Michael Crichton movie about AI turning on its creators, from which the more recent HBO series was adapted). The 1980s marked the start of the indelible Terminator franchise, which is perhaps the most salient cautionary tale of autonomous AI run amok, even considering the last 40 years of cinematic blockbusters (James Cameron, the GOAT of AI artistic representation thanks to The Terminator, gave a terrific speech last week at The Special Competitive Studies Project’s AI+Robotics Summit). John Carpenter’s 1988 film They Live is a mashup of fears ranging from aliens to Reaganomics that uses a clever virtual-reality plot mechanism. Virtual reality became a bigger horror element in the 1990s with movies like The Lawnmower Man and Strange Days (the latter still haunts me whenever I think about new VR advances). It’s probably a stretch to include The Matrix as a horror movie, but the 25-year-old flick of course offers an iconically disturbing look at the mind-bending near future of many different technologies. An obscure 2000s movie about the surveillance state and the isolation of video technology is Adam Rifkin’s 2007 film Look. From the past decade, movies like Ex Machina and M3GAN reflect our resurgent fear of off-the-rails AI. There have also been some incredible series evoking bone-chilling goosebumps, like Black Mirror and Devs. The themes that stand out to me across horror’s reflection of new technology are relatively consistent over the last five decades: loss of control, isolation, loss of humanity itself, and fear of surveillance (i.e., loss of privacy).
 
GLP-1 Brain Boost
According to research published in The Journal of the Alzheimer’s Association, GLP-1s reduce Alzheimer's risk and cognitive decline:
Our study findings align with recent evidence suggesting GLP-1RAs like semaglutide may protect cognitive function. Preclinical research indicates semaglutide's potential in reducing Aβ-mediated neurotoxicity, enhancing autophagy, improving brain glucose uptake, and reducing Aβ plaques and tau tangles. Clinical data, including studies with dulaglutide, show GLP-1RAs can reduce cognitive impairment in patients with T2DM. Data pooled from three randomized, placebo-controlled trials and nationwide prescription registers from Denmark showed that GLP-1RAs were associated with a 53% reduction in all-cause dementia in patients with T2DM. Our large-scale study of 1,094,761 US patients with T2DM found semaglutide associated with a 40% to 70% decrease in first-time AD diagnoses, including a 40% reduction compared to other GLP-1RAs. Ongoing randomized trials are assessing semaglutide's therapeutic effects in early AD. Our findings support conducting future prevention trials to determine semaglutide's ability to delay or slow down the onset of AD.

Stuff About Demographics, the Economy, and Investing
Simplifying Complexity Investing
Jon and Brinton were recently interviewed for Columbia Threadneedle’s Multi-Manager Podcast where they discussed NZS’ investment approach. Here are the Apple and Spotify links.
 
NZS is Hiring
In other NZS news, we’re hiring for an analytics and IT associate role based in Denver, Colorado. You can find more information about the role and how to apply here.
 
The Art of Data Query
Before AI became the new dotcom (and that’s OK), we suffered through the relatively fruitless landscape of "big data". This period, centered in the 2010s, probably peaked 8-10 years ago. I remember one conversation with a leading, next-generation “unstructured” database provider (unstructured was the buzzword bingo winner of the big data era) where the founder explained the problem to me: building bigger, smarter databases was easy, but figuring out what questions to ask the data they contained was hard. This conundrum feels especially pressing now that AI has also been thrown into the mix: although there was meaningful machine learning happening around unstructured data, layering on large language models obviously creates an unprecedented ability to glean new answers. But, we still have the same problem: how do we ask better questions? How do we even know what questions we should ask? One strategy I’ve taken is to simply ask whichever AI model (e.g., NotebookLM) I am using: what should I be asking about this topic? Shifting from seeking answers to asking questions is perhaps the most important mental shift many of us need to take in order to stay relevant in the age of AI. Last week, Anthropic demonstrated AI taking control of a computer and doing tasks (in one instance, it got distracted and started searching pictures of Yellowstone!). The Information also reported that Google is working on Project Jarvis to take over computers as well. While we might be somewhat complacent with AI mysteriously crunching data in the cloud and spitting out an answer, it feels significantly more disconcerting to watch an AI agent completely usurp a human task generally accomplished with a mouse, keyboard, and screen. Such developments in AI force us to confront the uncomfortable question: where can I add value in the future? I’ve long felt that developing better skills at questioning is key to our long-term relevance, and, back in #382’s More Q, Less A, I took a deep dive into this topic. As we barrel headlong into our AI future, it seems relevant to take a moment to, once again, question how we question, so I am reposting my thoughts in full here:
 
More Q, Less A (from January 29, 2023)
Outside the basics of reading, writing, and arithmetic, the educational systems of our formative years largely taught us how to memorize and repeat back facts – we learned a lot of answers to a narrow range of potential questions we might be asked. Owing to the rapid innovation in AI, however, simply knowing a bunch of answers is of decreasing value, as answers proliferate for anyone to access anytime. In SITALWeek #375, I suggested that we’re reaching another technological milestone with AI chatbots and LLMs, and that humans once again need to reassess how best to employ our time and resources. Just as the computer and Internet obsoleted the arduous search for answers using a card catalog and physical volumes of Encyclopedia Britannica, now that we have AI answer engines, we need to move to the next level of problem solving and dot connecting. As I wrote last year:
One of the broader consequences of the rising intelligence of AI models is that humans will be able to (and, indeed, need to) move to a higher level of abstraction, reasoning, and creativity. All tools that replace manual labor and/or thinking allow us to focus on the next level of challenges and problems to be solved. Indeed, AI implementation may enable an entirely new level of innovative idea generation and assist in bringing those ideas to fruition. The AI Age is essentially once again changing the game of what it means to be human, so the burden is now on us to figure out where to look next to move the species forward. When the cart and wheel became ubiquitous, not only did we spend less time lugging things around on our shoulders, we also invented entirely new ways of living, like farming instead of hunting/gathering, and a slew of creative and academic endeavors (e.g., formalized writing systems, poetry, metalworking, mathematics, astronomy, you name it). Regarding the AI Age we now find ourselves entering, I think humans can focus attention on developing/honing three major skills: 1) determining which questions to ask rather than trying to answer existing questions…; 2) editing and curating will be much more important to parse the explosion of AI-generated answers/creations and determine what is of practical value (see Edit Everything); and 3) improving decision making processes by incorporating the surplus of new AI generated content and tools (#1 and #3 are subjects I address here).
 
I’d like to spend some time exploring point number one above: asking better questions. Unfortunately, this topic hasn’t been addressed by mainstream education (at least in my experience in the US). As noted above, the core of my education was rote learning, i.e., here are some facts determined to be historically important – memorize them and repeat them back. Learning to connect concepts in new and interesting ways was rather marginalized, and, outside of advanced science classes, learning to formulate questions was entirely ignored. Granted, the ability to build a mental map and remember lots of things has provided a foundation for the many endeavors of generations of graduates. Now, however, we have an incomprehensible extension of the brain with the Internet and rapidly advancing LLMs like ChatGPT. 
 
For the last few months, I’ve been struggling to find resources to help me learn how to ask better questions (if you know of any, please send them my way). I am not sure if I’m just looking under the wrong rocks, or if asking questions is a relatively unexplored area of human cognition in modern times. Have we been that discouraged from asking questions? As I searched, I kept coming back to my dog-eared copy of Robert Pirsig’s Zen and the Art of Motorcycle Maintenance (ZAMM). I don’t think it’s a coincidence that this book is a favorite of many famous inventors (e.g., Steve Jobs). While many of the concepts covered are highly abstract, there are concrete lessons for problem solving. I’ve struggled in the past to encapsulate this book for those who haven’t read it, so I am going to resist the temptation to distill a book that defies distillation. But, ZAMM is the best resource I have yet found for thinking about the topic of asking questions.
 
Reviewing ZAMM has helped me derive three key pathways of inquiry: 1) beginner’s mind; 2) Socratic questioning; and 3) Sophist rhetoric. I’ll cover each of these briefly.
 
Beginner's Mind
Let’s start with beginner’s mind, a concept from Buddhism that embodies a childlike openness. Whenever I think about beginner’s mind, I think of Tom Hanks as ten-year-old Josh Baskin in the 1988 movie Big. Thrust into the body of an adult, Josh tries to navigate the seemingly alien behaviors of adults. Josh is fond of saying, “I don’t get it,” followed by, “I still don’t get it.” Robert Pirsig explores the beginner's mind in the face of “stuckness” in ZAMM. You can get mentally stuck (e.g., due to an inability to adapt or an overdose of rational objectivity) or physically stuck (e.g., by a piece of malfunctioning hardware). Pirsig writes about a stuck screw that has rendered a motorcycle unusable: 
Normally screws are so cheap and small and simple you think of them as unimportant. But now, as your Quality awareness becomes stronger, you realize that this one, particular screw is neither cheap nor small nor unimportant. Right now this screw is worth exactly the selling price of the whole motorcycle, because the motorcycle is actually valueless until you get the screw out. With this reevaluation of the screw comes a willingness to expand your knowledge of it.
With the expansion of the knowledge, I would guess, would come a reevaluation of what the screw really is. If you concentrate on it, think about it, stay stuck on it for a long enough time, I would guess that in time you will come to see that the screw is less and less an object typical of a class and more an object unique in itself. Then with more concentration you will begin to see the screw as not even an object at all but as a collection of functions. Your stuckness is gradually eliminating patterns of traditional reason.
In the past when you separated subject and object from one another in a permanent way, your thinking about them got very rigid. You formed a class called "screw" that seemed to be inviolable and more real than the reality you are looking at. And you couldn't think of how to get unstuck because you couldn't think of anything new, because you couldn't see anything new.
Now, in getting that screw out, you aren't interested in what it is. What it is has ceased to be a category of thought and is a continuing direct experience. It's not in the boxcars anymore, it's out in front and capable of change. You are interested in what it does and why it's doing it. You will ask functional questions. Associated with your questions will be a subliminal Quality discrimination identical to the Quality discrimination that led Poincaré to the Fuchsian equations.
What your actual solution is is unimportant as long as it has Quality. Thoughts about the screw as combined rigidness and adhesiveness and about its special helical interlock might lead naturally to solutions of impaction and use of solvents. That is one kind of Quality track. Another track may be to go to the library and look through a catalog of mechanic's tools, in which you might come across a screw extractor that would do the job. Or to call a friend who knows something about mechanical work. Or just to drill the screw out, or just burn it out with a torch. Or you might just, as a result of your meditative attention to the screw, come up with some new way of extracting it that has never been thought of before and that beats all the rest and is patentable and makes you a millionaire five years from now. There's no predicting what's on that Quality track. The solutions all are simple – after you have arrived at them. But they're simple only when you know already what they are.
 
Are we still talking about screws here? Not exactly:
Your mind is empty, you have a “hollow-flexible” attitude of “beginner's mind.” You're right at the front end of the train of knowledge, at the track of reality itself. Consider, for a change, that this is a moment to be not feared but cultivated. If your mind is truly, profoundly stuck, then you may be much better off than when it was loaded with ideas.
The solution to the problem often at first seems unimportant or undesirable, but the state of stuckness allows it, in time, to assume its true importance. It seemed small because your previous rigid evaluation which led to the stuckness made it small. 
But now consider the fact that no matter how hard you try to hang on to it, this stuckness is bound to disappear. Your mind will naturally and freely move toward a solution.
 
This is the first type of questioning, and it’s a primal, childlike way to form inquiries on a subject. By removing the barriers of preconceived notions, conclusions, and biases, you can let your mind quest its way to the solution, becoming open to any possible truth about the situation, no matter how inconceivable it might have seemed at first. You have to throw out all preformed models of what something (e.g., a stuck screw) is and see it as something completely different to be probed.
 
Socratic Questioning
Now let’s look at the second type of questioning: the Socratic method. While the term might sound familiar, it’s not necessarily a concept most of us deploy daily unless we have a philosophy or law degree (of which I have neither, so what you read here is simply the spirit of the idea that I’ve twisted to my purposes). The Socratic method is a type of inquisition that helps someone get to the root, or basic assumptions, of their beliefs about a topic. I think of it as a way to drive toward first principles, i.e., an idea boiled down to its core. The Socratic method is what Pirsig refers to as the “Church of Reason”, and it’s defined by placing rationality on a pedestal. Logic, rational thinking, and the scientific method are used to uncover the real facts or motivations behind a belief or idea. The Socratic method is intended as a confrontation between two people where one is interrogating the other. An analogy I like to use for this is a therapist and a patient, where the patient is blinded by something that keeps them from seeing the real reason for a problem in their life. If you just keep asking questions (starting more broadly and then with increasing precision), eventually you can reach an “a-ha” lightbulb moment. This video explains the Socratic method by dissecting a scene from the movie Pulp Fiction.
 
Rhetoric
The third and final form of questioning I’ll mention here is Sophist rhetoric. Sophists reason by arguing multiple, opposing views of a particular question, regardless of their own beliefs on the topic. We often think of a rhetorical question as one asked without expectation of an actual answer. However, Aristotle defined rhetoric as: "the power of perceiving in every thing that which is capable of producing persuasion." History calls it specious reasoning, but I define rhetoric as the art of bullshitting. Venturing out of Ancient Greece and into the 21st century of fake news and broken reality, bullshitting transforms into grounds for inquiry. As longtime readers know, I often discuss the human brain’s penchant for storytelling. We tell stories about everything, to ourselves and others, nonstop. Most of the time, these stories are nonsense, or only very tenuously related to objective reality. However, in these stories lies a type of questioning that entails making stuff up and seeing where it goes. There is an element of childlike beginner’s mind to it, as well as an element of a Socratic back and forth, like swinging a pendulum to try and hit upon the truth. But, in the end, it’s a way to explore alternate realities, i.e., different potential truths, to see if we stumble upon a narrative that illuminates the key questions we should be asking.
 
Pirsig’s alter ego, Phaedrus, struggles throughout ZAMM as he tries to tear down modern socioeconomic constructs built entirely on logic and rational thinking. In reality, we have become so enmeshed in – and fooled by – faulty logic and rhetoric that we can no longer distinguish truth from fiction – we actually believe the stories we tell ourselves and hear from others. Overapplied logical reasoning can also fail us by excluding ambiguity, subtleties, and the vast interconnectedness of everything. These are all key aspects of nondualistic thinking that a Western upbringing tends to exclude or, in most cases, deny the existence of entirely. For example, science can’t possibly pin down any one single definition of normal, rational human behavior, yet humans have all sorts of arguments about myriad behaviors we see as unequivocally right or wrong. Reintroduction of nondualism to our reasoning can help us spot ideas interconnected in strange and unexpected ways – ways that might defy our sense of logic but end up being closer to the truth. This, I believe, is the heart of Pirsig’s elusive concept of Quality: by combining nondualism and pre-logic concepts with logic and scientific reasoning, we can make more progress towards understanding than we would by relying on either dualistic or nondualistic thought alone.
 
Thus, to learn to ask better questions, I believe we must travel back in time to that foreign period before Plato, when humans used a different framework for interrogating the world around them. Specifically, we need to thoughtfully combine the Buddhists’ beginner’s mind and the Sophists’ bullshitter’s mind, both of which rely on nondualistic thinking, before we add a dash of the more modern Socratic logic and scientific inquiry. (Note: the modern conception of the Socratic method comes from Plato’s representation of Socrates rather than from Socrates directly, and I am glossing over and simplifying a very complex disagreement between the Sophists and Socrates because [1] I am not an expert, and [2] I am merely using the Greek philosophers as shorthand for the points I am making.)
 
I’d like to overlay this framework with a supplemental fourth type of inquiry: editing. Editing is becoming one of the most important human skills in a world filled with infinite answers accessible through AI. Editing itself is a form of questioning: is this important? Is this of value? Or, as Pirsig might ask: can we find Quality in something? The Buddhists have a way of editing with two simple questions: Is it true? And, is it useful? The former is increasingly difficult to determine, but the latter is a little bit easier to suss out: if a question leads you to a useful answer, then pare down everything else that appears untrue or not useful.
 
The ultimate goal of questioning, of course, is to make sense of the complex world around us and glimpse probable future paths by identifying cognitive biases and excluding unhelpful stories of fantasy and misdirection. However, the four paths of inquiry I’ve discussed here – beginner’s mind, Socratic questioning, Sophist rhetoric, and editing – do not work nearly as well when practiced in the isolation of one person’s brain. You need someone else, or ideally a small team, with whom to engage and hone the complex art form of asking questions. Be prepared for a learning curve given the lack of prior emphasis on such skills. Yet learning to ask better questions is becoming existential as we find ourselves increasingly awash in a sea of answers. Given these circumstances, we’re better off determining which questions shine a light on key truths rather than endlessly sifting through noise and misinformation. AI may have all the answers, but the journey of interrogation is a creative endeavor that, at least for now, is still within the domain of humans.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.

SITALWeek #451

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: a variety of developments point to a rise in alternate payment rails including stablecoins and RTP; scripted content production declines as the streaming wars shift to the advertising wars; TSM begins chip production in the US; the stretched reality of nuclear data centers; haunts; using AI to fill the local news void; and, much more below. 

Stuff about Innovation and Technology
Token Economy
A number of recent developments in crypto and stablecoins are noteworthy. PayPal paid an invoice to Ernst & Young for the first time using its PYUSD stablecoin via SAP’s digital currency hub. Visa will launch a USD stablecoin platform in 2025, the Visa Tokenized Asset Platform (VTAP), aimed at helping banks issue their own stablecoins. Stripe, after taking a break from crypto in 2018 due to low adoption and high volatility, will enable stablecoin payments for its merchants via startup Paxos, according to The Information (Forbes also reports Stripe is in preliminary talks to acquire stablecoin infrastructure provider Bridge for $1B). The key thread here is the rise of stablecoins as a more viable crypto payment option, as they are pegged to the price of a sovereign currency (or other commodity). The current market cap of all stablecoins is roughly $165B (as of the writing of this post) with a 24-hour volume of around $60B. By comparison, market caps for Bitcoin and Ethereum are ~$1.3T and ~$300B, respectively. Bitcoin has an approximate 24-hour volume of $30B, while Ethereum is around half that number. Yet, despite having only around 10% of the combined market cap of Bitcoin and Ethereum, stablecoins have approximately 1.3x their daily transaction volume. A large portion of that activity is Tether, which reported a profit of over $5B in the first half of 2024. According to Chainalysis, Latin America is rapidly adopting stablecoins given the region’s volatile local currencies. A recent article on the state of crypto from VC a16z notes that, in Q2 of 2024, stablecoins’ transaction volume of $8.5T was more than double Visa’s $3.9T. Concurrent with the rise in real-time payment options (RTP; see New Ways to Pay), there seems to be a meaningful desire across the economy to accelerate digital payments onto new technology rails. 
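For a quick sanity check of that velocity comparison, here is a minimal sketch of the arithmetic in Python (the inputs are the rough figures cited above, not live market data, and the variable names are mine):

```python
# Rough velocity check using the approximate figures cited above (USD billions).
stable_mcap, stable_vol = 165, 60    # stablecoins: market cap, 24-hour volume
btc_mcap, btc_vol = 1300, 30         # Bitcoin
eth_mcap, eth_vol = 300, 15          # Ethereum (~half of Bitcoin's daily volume)

mcap_share = stable_mcap / (btc_mcap + eth_mcap)   # ~0.10, i.e., ~10% of combined market cap
vol_ratio = stable_vol / (btc_vol + eth_vol)       # ~1.3x their combined 24-hour volume
daily_turnover = stable_vol / stable_mcap          # ~36% of stablecoin supply turns over each day

print(f"{mcap_share:.0%} of combined BTC+ETH market cap")
print(f"{vol_ratio:.1f}x their combined 24-hour volume")
print(f"{daily_turnover:.0%} of stablecoin supply turns over daily")
```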
 
SoCal Sunset
Filming days in LA are down significantly from their five-year averages: movies are down 48%, television is down 53%, and commercials (and miscellaneous) took a ~30% hit. The biggest victim continues to be reality TV. In #444, I discussed the thirty-year low in employment for LA’s Hollywood jobs. This long-term pullback comes thanks to the streaming wars’ conclusion, infinite content afforded by social media/YouTube, and non-Hollywood filming locations taking share. While the war for viewers and content is over for the streamers, the war for streaming ad dollars has just begun. A large roster of live sports is necessary to build a scale advertising business in streaming video, and the cost of sports rights is also sucking the oxygen out of budgets for scripted content. Last week, Netflix indicated that its advertising business won’t be a driver of growth in 2025 but may start to gain momentum in 2026 and beyond. If ads don’t take off until 2026, that would be roughly four years after Netflix announced its shift to ad-supported subscription plans. The failure to launch is in part because Netflix needs more content that advertisers value. For example, in order to build an ad business, I expect Netflix will need to make a much larger move into sports, but the assets and rights that are available are either scarce or very expensive. As I’ve noted in the past, Netflix’s view that its content demands an ad price premium vs. other digital media is a roadblock it may not get past. It's more likely we see a race to the bottom for ad prices as the inventory of streaming ads far outpaces demand from advertisers.
 
Phoenix Rising
TSM has begun production of A16 processors for Apple at its new Arizona fab. The milestone is a long time coming, but still feels remarkable. Even more impressive, despite setbacks along the way, production is essentially on time versus the projections from four years ago. The leading-edge chip industry continues to be the foundation of the global economy, which we covered in depth in our 2020 whitepaper.
 
The Nuclear Option
A number of recent headlines indicate that data centers are increasingly seeking nuclear power. Google announced a small modular reactor deal, as did Amazon. The NYT sums up the billions of dollars that Microsoft, Google, and Amazon are committing to nuclear, including reopening Three Mile Island. We first covered this trend over a year ago in #412, and we discussed AI energy needs, including nuclear, in more detail in Pushing Electrons earlier this year. The actual power needed to displace the super-efficient human brain/body with digital/robotic counterparts is a conundrum I covered in Energetic Robots. While the potential is real, so is the hype. It’s highly likely that the most optimistic predictions about the AI revolution will only come true in a world where the technology becomes many orders of magnitude more efficient. That doesn't mean we won’t need all the nuclear power, but it implies there may be many classic hype bubbles, interspersed with doses of real-world reality, along the road. The negative feedback loops governing construction of nuclear power plants create a substantial timing mismatch: even the most bullish buildout scenarios for nuclear would materialize well after the most bullish demand scenarios for AI's power needs.

Miscellaneous Stuff
$Boo
The WSJ reported on the $500M US haunted house industry. This seasonally scary business is both fickle and lucrative for the passionate purveyors of spine-tingling jump scares. According to the WSJ, 18% of US adults visited a haunted house in the last year. An unsurprising power law exists in the industry, with 2% of haunted houses drawing more than 50,000 annual visitors while 50% of haunts have fewer than 5,000. In addition to independent haunted houses, theme park scares have become a large and growing business for major outfits like Universal. And, of course, there is also a cottage industry on YouTube for touring haunts. Here is one example of a haunt called Arx Mortis that gives you a sense of the passion of the characters who populate these attractions.
 
Lit
With the arrival of The Offspring’s 11th album, Supercharged, Men’s Health profiles the founding member and lead singer Dexter Holland. Holland was working on his PhD in molecular biology around the time the band rocketed up the Billboard charts in the mid 1990s (he completed his degree in 2017). Amongst other pursuits, Holland is a licensed pilot who flies his own jet and has a line of hot sauces. Holland attributes his extreme productivity to dividing his time into blocks and focusing on one thing at a time.
 
Moon Time
Lunar time moves more swiftly relative to Earth time due to the orbiting rock’s weaker gravity compared to our planet’s. Syncing time is important for a multitude of safety and logistical tasks, so physicists at NIST calculated the daily time differential – 56 microseconds – between the Earth and its satellite and devised a GPS-like reference system for a lunar time zone using surface and space-based atomic clocks. 
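For intuition on where that number comes from, most of the 56 microseconds falls out of a first-order gravitational time dilation estimate. Here is a minimal sketch of that simplified calculation (my own back-of-the-envelope, not NIST's method, which also accounts for orbital velocity and other smaller terms):

```python
# First-order estimate: lunar surface clocks vs. Earth surface clocks.
# Ignores smaller terms (lunar orbital velocity, Earth's potential at the Moon's
# distance), which together shave roughly a microsecond or so off the total.
C = 299_792_458.0        # speed of light, m/s
GM_EARTH = 3.986004e14   # Earth's gravitational parameter, m^3/s^2
GM_MOON = 4.9048e12      # Moon's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
R_MOON = 1.7374e6        # mean lunar radius, m
SECONDS_PER_DAY = 86_400

# A clock deeper in a gravitational well ticks more slowly; the fractional rate
# difference is approximately the difference in surface potentials divided by c^2.
phi_earth = GM_EARTH / R_EARTH
phi_moon = GM_MOON / R_MOON
fractional_rate = (phi_earth - phi_moon) / C**2

print(f"Lunar clocks gain ~{fractional_rate * SECONDS_PER_DAY * 1e6:.1f} microseconds per day")
# -> ~57 microseconds/day, within a microsecond or two of NIST's published ~56 microsecond figure
```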
 
Robo Newscasting
A resident of a small Massachusetts town is using Google’s NotebookLM and publicly available city documents to create a local news podcast to fill the void of disappearing local news coverage. I wrote about this handy AI research assistant in the previous edition of SITALWeek. The use of NotebookLM for small town news is a nifty example of how AI is going to find a surprising number of ways to generate more and more content.

✌️-Brad


SITALWeek #450

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: pondering the concept of a limitless technology with rapid generational advances; promising, if not puzzling, overdose stats; people are friending instead of dating; the value of using AI to convert complex topics into podcasts; the irreducibility of the stock market; and, much more below. SITALWeek's publishing schedule will be more sporadic for the foreseeable future, but we will always aim for Sundays.

Stuff about Innovation and Technology
Evolution At Digital Speed
Bill Gates made a comment on The Late Show with Stephen Colbert recently that AI is the first limitless technology created by humans. I interpret that statement as a reference to AI being able to improve upon itself. In other words, AI is a new form of life that is able to self-replicate and evolve. What’s difficult to conceptualize, however, is the speed with which one generation will replace the next. Until recently, technological advancements took place over a human timeframe, with significant changes spread over decades thanks to negative feedback loops in the analog world tempering the pace of each new revolution (see When Positive and Negative Feedback Loops Collide). The speed of AI advancement and its ability to combine with other forms of AI (like the new LFM model from Liquid) could lead to major, generational advancements every couple of years. Over 10 years, we could experience five generations of technological change – that’s like jumping from your great-grandparents’ era to your kids’ (e.g., from churning butter to TikTok). As we’ve seen this century, an accelerating pace of change brings more disruption and uncertainty. From our Pace Layers paper:
The velocity of information transfer has increased exponentially over the course of human history – from tribe-to-tribe verbal communication, to books, to radio, to TV, to the Internet, to smartphones – and has taken ‘constructive turbulence’ and turned it into a destabilizing force because the slower ‘core’ layers simply cannot keep pace with changes in the more superficial layers.
Technology is now like a high speed blender dropping down through all of these layers, from Fashion to Infrastructure to Governance to Culture, and is now so powerful it’s reaching down into Nature...Like a tornado, technology is churning up layers and mixing things up that were previously separated.
 
If we were to update that paper today, I’d add the following: the speed of AI-to-AI information transfer will dwarf that of human-to-human communication, leading to largely unknowable (and potentially unmanageable) outcomes. This idea has significant ramifications for an industry like software, where apps can evolve themselves inexpensively over short timescales, with each new app further commoditizing the previous generation. Once the user interface shifts to conversational, the applications behind the scenes will become less valuable. The data feeding the apps, however, are likely to become more valuable, unless simulated data proves to also be a commodity. 
 
Another point Gates made on the Colbert show is that governments don’t appear ready to support the potential displacement of jobs (and other societal level changes) that may come with AI. Gates was an early proponent of the so-called “robot tax”, which would effectively tax corporations that automate all types of jobs in order to make up for lost payroll and income taxes that, among other things, are critical to supporting programs like Social Security in the US. I still get hung up on the Catch-22 of rapid AI-driven societal change: the faster AI displaces jobs, the fewer customers there are to drive the profits of the companies deploying AI to trim jobs and boost margins. Returning to the software example, the WSJ reported on the dire employment prospects that software engineers now face. As I wrote in #446: “A rapid deployment of AI to replace human brain and muscle power would cause a circular reference failure in the economy due to job losses. Further, you need to explain where the power would come from to replace all the metabolically efficient human workers. Therefore, rather than bank on AI displacing humans wholesale, you instead need to believe AI will have minimal (or slow) impact on human jobs and instead focus on new inventions and the next waves of scientific revolution in sectors like healthcare, energy, and material science (which might create net new jobs overall).” I also covered this topic in more detail in Upside Down Economics of Human Replacements:
Depending on just how many tasks continue to be automated, we could transition from a “do more with the same” work force to “do more with far fewer” employees. The big consulting firms working with large companies are targeting 15-20% productivity gains, which can translate to 15-20% fewer employees all else equal. This scenario, of course, is the Catch-22 of AI: the faster companies deploy advanced tools, the faster they curtail jobs in the economy, leaving fewer people able/needing to buy their products and services. This notion circles back to when I pondered how the economy could expand without job growth. Of course, the transition into the AI tech era is the same as prior productivity waves with one apparent difference: major technological disruptions in the past have taken decades (Information Revolution) or even centuries (Industrial Revolution) to play out. However, if you believe the optimistic prognostications (as the stock market seems to), AI will have that level of impact on the job market over the course of just a few years. Such a shift would require capital investments in the trillions and net new economic activity several times that. Here’s a simple back-of-the-envelope calculation: if big tech platforms are buying a few hundred billion dollars’ worth of GPUs to run AI in a few years (as the market thinks they might), it implies $1-2T in capital investment (when you include data centers, memory, servers, networking, etc., not to mention the massive amount of energy needed!). And these GPUs could have a shelf life of only ~2-3 years if they are meant to exclusively run leading-edge AI models, shortening the required payback in revenues. Big tech platforms have gross margins for their infrastructure businesses anywhere from 60-80%. So, doing the overly simplified math, it’s not hard to see that AI investments would need to generate many trillions of dollars in revenue to accrue enough gross profit to justify the underlying capex expenditures (our internal models suggest that we would need roughly $5-$10 of GDP to justify every $1 of GPU investment). And, here’s the tricky part: AI would need to be deployed in such a way that it somehow doesn’t offset consumption by net job obsolescence in order to rake in those profits. So, wholesale replacement of humans seems an unlikely near-term AI scenario. And, even incremental job destruction, via leveraging the productivity of copilots, may not progress very far before its economic impact causes a revolt. Here is another back-of-the-envelope calculation: if you assume companies spend one-fifth of an employee's cost to replace them with AI, then $1T of annual AI spend could replace $5T in desk jobs. At an average of $80,000/y in salary, that's well over 50M jobs displaced (a figure that would grow as AI adoption grows). While my math in this paragraph is intended to be theoretical, it illustrates that AI will need to be deployed at a more measured pace unless it can create significant revenue upside without eroding employment.
I concluded that topic on a more optimistic note:
If not human replacement, what, then, will the AI Age really be about? The real value for AI will likely come from invention. And, on that front, the promise of applying AI to scientific breakthroughs in healthcare, energy, etc. is tantalizingly close... So, while it’s tempting to allow people to cheat on their work at their jobs, that may ultimately be something we look back on as a flawed experiment, while the real value comes from applying the new tools to complex problems that create large new industries and applications that are net positives to the economy and our quality of life. 
I also remain optimistic about the potential for a massive agent-based economy that would make our current analog economy seem quaint by comparison. Framing AI as a technology of rapidly evolving digital agents can help us envision where such a digital economy might be headed. 
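To make the back-of-the-envelope math above easier to follow, here is a minimal sketch of the same arithmetic in Python (the inputs are the loose assumptions quoted above, not forecasts; the midpoints are my own choice within the quoted ranges):

```python
# Rough reproduction of the back-of-the-envelope math above (assumptions, not forecasts).
gpu_capex = 1.5e12      # ~$1-2T of AI infrastructure investment (midpoint)
gross_margin = 0.70     # big-platform infrastructure gross margins of ~60-80%
gdp_multiplier = 7.5    # ~$5-10 of GDP needed to justify every $1 of GPU investment (midpoint)

revenue_needed = gpu_capex / gross_margin   # revenue required for gross profit to cover the capex
gdp_needed = gpu_capex * gdp_multiplier     # implied economic activity to justify the spend

# Job displacement sketch: AI priced at ~1/5 the cost of the employee it replaces.
ai_spend = 1.0e12          # $1T of annual AI spend
replacement_ratio = 5      # $1 of AI spend displaces ~$5 of salary cost
avg_salary = 80_000        # average desk-job salary

jobs_displaced = ai_spend * replacement_ratio / avg_salary   # ~62.5M, i.e., "well over 50M"

print(f"Revenue needed to cover capex: ~${revenue_needed / 1e12:.1f}T")
print(f"Implied GDP impact: ~${gdp_needed / 1e12:.1f}T")
print(f"Desk jobs displaced: ~{jobs_displaced / 1e6:.0f}M")
```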

Miscellaneous Stuff
Lifesaving Stats
The CDC reports that, over the first four months of 2024, the number of drug overdose deaths declined 10%. The decline extends a trend that began last fall. Further, for the states that reported data sooner, the decline was even larger, at around 20-30%. No doubt, these highly encouraging numbers are the outcome of a complex series of forces (e.g., Ozempic is linked to lower opioid overdoses in diabetes patients), but I went seeking a simple answer for the sudden directional change. I asked Gemini what major cultural event took place right around the time when overdose deaths began declining and received the obvious response: it was right when Taylor Swift and Travis Kelce started dating. Also, according to CDC data, there has been a “cautiously promising” leveling off in suicide rates in the US after decades of increases. 
 
Friends: The One with No Dating
Dating apps are pivoting to take advantage of the “loneliness economy” as they face declines in the dating business. Multiple apps are offering friends-only matchmaking, e.g., the French startup Timeleft that matches six people up for a group dinner. Timeleft’s CEO commented in the FT: “Dating as it is — swiping, texting and one-on-one first dates — is dying. People are so tired of it and they see us as an alternative.”
 
AI RA
I’ve been using Google’s AI tool NotebookLM quite a bit recently. I first discussed NotebookLM over a year ago in #409 as a potential means of synthesizing an AI version of yourself based on your entire corpus of work. The tool has since evolved and advanced to become a sophisticated research assistant. You can upload multiple sources, including documents, web links, YouTube video links, and audio recordings, for an AI research assistant to query. The tool also has the amusing “generate audio” feature wherein a duo of podcast hosts riff on all the sources (this blog post from Google explains how to use it). The “podcast” feature is simultaneously gimmicky and impressive. In one example, a VC loaded up documents explaining that the AI podcast hosts weren’t real, and they responded with existential dread. And, here is a NotebookLM “podcast” on Nagel’s “What Is It Like to Be a Bat?”. I find NotebookLM to be far more useful than uploading source material to ChatGPT or Gemini.

Stuff About Demographics, the Economy, and Investing
Market Irreducibility
In the previous SITALWeek’s Neo vs. Cypher, I talked about the computational irreducibility of AI as framed by Stephen Wolfram. In short, for an AI to achieve magical, human-like outcomes, it needs to operate at some level we cannot fully explain (i.e., it must be computationally irreducible). Reflecting on computational irreducibility, I keep coming back to the stock market and what I wrote in the “Mr. Market” Myth. It is perhaps only possible to understand the state of the stock market in terms of computational irreducibility. The increasing influence of non-human factors on stock prices, all operating in a complex adaptive system, creates outcomes that befuddle human investors. Successful navigation of the markets now, more than ever, requires a framework for portfolio construction and investment objectives that starts with the inherent assumptions of unpredictability and computational irreducibility. Keynes’ animal spirits combine with today’s silicon spirits and feedback loops to create a system that contains very little information day to day. And, over the long term, this system may also result in dislocations to the intrinsic value of securities, reflexively causing company cash flows to be fundamentally altered. Where there was once a wizard behind the curtain, now there is something that more resembles Gene Roddenberry’s Q character in Star Trek: The Next Generation – an omniscient force seemingly driven by chaos.

✌️-Brad


SITALWeek #449

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: entrepreneurs are boomeranging back to big tech, leaving open the question of who will control the application layer for AI; AI learns to reason; Waymo is a very good driver; Microsoft hits a milestone in error-corrected qubits; wireless data growth surprises in the US; Minecraft LLM players; AI gets a wallet; interesting Alzheimer's data; Nick Cave updates his views on AI; Stephen Wolfram explains the computational irreducibility of LLMs; CEOs have become boring; and, much more below. 

Stuff about Innovation and Technology
Wireless Is More
Wireless data usage grew 36% in the US in 2023 and is poised to expand further with growing AI use cases. The number of connected devices also grew by 6% over the prior year. All in, Americans used a record 100 trillion megabytes of mobile data.
 
Error-Corrected Qubits
Microsoft researchers have been able to use 56 qubits to achieve 16 logical qubits. Logical qubits are achieved with error correction that spreads out the impact of mistakes inherent to the nature of quantum mechanics. Microsoft is striving for 100 logical qubits. Although a timeline is uncertain, they have tripled the logical qubits in one system since April. 
 
Way Mo’ Safe
Waymo released a new safety dashboard for its autonomous vehicles. With over twenty million miles of ride data (through June 2024), Waymo had 84% fewer airbag deployments, 73% fewer injury-causing accidents, and 48% fewer police-reported crashes compared to human drivers. Further, of the 23 most severe incidents, 16 were caused by a human driver rear-ending a Waymo and 3 were caused by human drivers running red lights.
 
Digital Agent Microcosm
Startup Altera created a Minecraft world with over 1,000 interacting agents all powered by LLMs. The agents formed their own democracies and followed their dreams. The priest ended up trading the most because he was bribing townspeople to convert. As I’ve noted many times, I expect the inevitable agent-based economy will dwarf our analog economy in size and scope, and it’s likely to produce the majority of innovation and scientific breakthroughs in the future. In related news, Coinbase has had its first AI-to-AI transaction using crypto, the likely currency of the agent economy.
 
Sugar-Starved Neurons 
IDO1 inhibitors appear to improve brain metabolism, with the potential to reverse Alzheimer’s cognitive decline in mice. Drugs that target IDO1 are already in clinical trials for cancer treatment, so they could be repurposed to test their impact on Alzheimer’s in humans as well. 
“The kynurenine pathway is over activated in astrocytes, a critical cell type that metabolically supports neurons. When this happens, astrocytes cannot produce enough lactate as an energy source for neurons, and this disrupts healthy brain metabolism and harms synapses,” [senior author] Andreasson said. Blocking production of kynurenine by blocking IDO1 restores the ability of astrocytes to nourish neurons with lactate.
 
Offloading Coding Drudgery
Amazon used its AI assistant Q to save 4,500 developer-years of work upgrading legacy Java applications. The productivity boost translated into $260M in annualized savings. AWS is using all that money saved to provide AI compute directly to China in a way that circumvents US law regarding advanced chips. 
 
“Strawberry Fields Forever”
The boomeranging of AI-startup founders back to the big platforms through a series of questionable deals is a significant divergence from prior platform cycles. Typically in Silicon Valley, we see employees splitting off to chase ideas while their former, saurischian employers slowly lose share of market growth to the next best thing. For example, Amazon built the infrastructure that allowed ideas to quickly become products without a large amount of capex, which allowed the outgrowth of cloud software, mobile apps, streaming video, etc. by a wave of pioneers who left legacy software and hardware companies. The situation with AI is different: the infrastructure cost is high, and the LLMs themselves are the infrastructure layer upon which applications will be built. In this analogy, ChatGPT and Gemini are the “AWS” of AI, and breakaway startup founders are returning to those large platforms simply because they can’t compete on their own. There are good reasons why foundational technologies like LLMs should be controlled by a small number of very large companies (see Search Win-Win), so this current round of entrepreneurs might be correct to cut their losses and return to the mother ships. The big question is whether the application layer (the myriad future apps that will be built on top of LLMs) will remain independent, driven by an army of visionary entrepreneurs, or be owned by the mega platforms. In the case of the iPhone, Apple has done alright for itself, but the really interesting tech has been the trillions of dollars of new industries (e.g., Uber, Meta, Amazon) built on top of, or greatly expanded by, the mobile operating systems. When Facebook IPO’d in 2012, it had no native mobile apps and it was unknown whether it could monetize a mobile newsfeed; currently, it’s a $1T company that wouldn’t exist without that foundational infrastructure layer of mobile phones, the app store, cell towers, etc. Now, Meta is trying to be the infrastructure layer of AI itself with Llama. Effectively, the companies that now dominate the application layer (Google with search, Meta with social, Microsoft with enterprise, Amazon with commerce, and, to a lesser extent, Apple with its App Store) are also dominating the infrastructure layer for AI and may control its future application layer through walled gardens. Further, the same boomerang trend is playing out in the world of robotically embodied AI, with costs likely proving prohibitively high for independent operators as well. My base case is that we’ll see an explosion of new ideas similar to what we saw in the 2010s, but it’s not clear how easy it will be to pry them from the infrastructure-layer companies given their dominance in the application layer as well (which is where the vast wells of data reside to feed AI). With leading AI models about to leap forward in terms of reasoning (OpenAI’s Strawberry, for example; also check out NotebookLM from Google, a really impressive research assistant), I am optimistic we are on the precipice of a new wave of innovation that will make mobile and cloud look small in comparison. However, if entrepreneurs continue to be out-spent by the power-law platforms’ dominance, then the only logical preparation we can do is to rewatch WALL-E to get ready for our Buy-n-Large future. 
Let me take you down
'Cause I'm going to strawberry fields
Nothing is real
And nothing to get hung about
Strawberry fields forever
Living is easy with eyes closed
Misunderstanding all you see
It's getting hard to be someone, but it all works out
-The Beatles

Miscellaneous Stuff
Neo vs. Cypher
Eighteen months ago, artist Nick Cave declared that ChatGPT’s lyrics in his style were a “grotesque mockery of what it is to be human”, which I discussed in a longer post about the importance of adapting to – rather than dismissing – new technology (#383). It’s become clearer over the last couple of years that the human brain functions like an LLM (and vice versa), and that these forms of AI are capable of creativity. Recently, Cave was on The Reason Interview podcast and had this to say about AI:
If we're talking about music, the idea that music is a genuinely transformative, sort of transporting thing is being looked at with cynicism as well. We have like AI that has sort of song generating things, where you only have to put in a prompt, and a pretty good song pops out, right?
So, yeah, yeah. And, you know, it's as good as anything on the radio, and it's its first attempt, and in a year or two years' time, we're going to be able to go straight to the product, and it's going to be indistinguishable from anything I can do, or Nirvana can do, or anybody else can do, right? And this is, to me, an idea that the creative struggle, which I think is the essence of meaning in this world, is seen as an impediment, or a kind of thing in the way of the product itself. Why? Why bother with having to sit down and kind of do soul searching and find out what sort of song you want to do, or go into a studio with your friends and try and create some sort of music? Why do we need that when we have this product just drop out of this thing? And what scares me most of all is, I know I'm kind of ranting now, but whatever, what scares me most of all is that we are living in a society that is so demoralized that actually we don't really care. You know, there's a lot of people that say, yeah, but we value true human art and performance and all that stuff. But I don't know, I think we can quite easily get to a place where no one cares, one way or the other, and so we're just losing these avenues for legitimate transcendence.
Clearly, Cave has evolved his thinking about the capabilities of AI, but he seems more entrenched than ever in his argument that it’s not “human” and cannot represent what it is to be human. I think that is ultimately an indefensible position to take concerning the technology. What’s beautiful about LLMs is their computational irreducibility, as Stephen Wolfram explains:
The phenomenon of computational irreducibility leads to a fundamental tradeoff, of particular importance in thinking about things like AI. If we want to be able to know in advance—and broadly guarantee—what a system is going to do or be able to do, we have to set the system up to be computationally reducible. But if we want the system to be able to make the richest use of computation, it’ll inevitably be capable of computationally irreducible behavior. And it’s the same story with machine learning. If we want machine learning to be able to do the best it can, and perhaps give us the impression of “achieving magic”, then we have to allow it to show computational irreducibility. And if we want machine learning to be “understandable” it has to be computationally reducible, and not able to access the full power of computation.
At the outset, though, it’s not obvious whether machine learning actually has to access such power. It could be that there are computationally reducible ways to solve the kinds of problems we want to use machine learning to solve. But what we’ve discovered here is that even in solving very simple problems, the adaptive evolution process that’s at the heart of machine learning will end up sampling—and using—what we can expect to be computationally irreducible processes.
That excerpt comes from a post by Wolfram entitled What’s Really Going On in Machine Learning?, which describes his mathematical effort to demonstrate the unexplainable methods of LLMs. Effectively, the process of adaptation in biological evolution can be analogized to machine learning. It follows that the only reason to assume humans are special is a desire to ignore the truth. Rather than following in Cypher’s ill-fated footsteps and embracing seemingly blissful ignorance, I think a more useful (and perhaps existential) framework lies in harnessing technology as a means to our own, thoughtfully chosen ends because, in the not-too-distant future, very little is likely to be left in that uniquely human domain for which Cave so torturously longs.
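For a concrete feel of what "computationally irreducible" means, here is a minimal sketch (my own toy example, not Wolfram's code) of Rule 30, the cellular automaton Wolfram often cites: there is no known shortcut for predicting its pattern, so the only way to know row n is to compute every row before it.

```python
# Rule 30: a canonical example of computational irreducibility.
# There is no known closed-form shortcut for its evolution; you have to run it.
RULE = 30
WIDTH, STEPS = 63, 30

def step(cells: list[int]) -> list[int]:
    """Apply Rule 30 to one row (cells beyond the edges are treated as 0)."""
    out = []
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < len(cells) - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right   # 3-bit neighborhood
        out.append((RULE >> pattern) & 1)                 # look up the new state in the rule number
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single "on" cell in the middle
for _ in range(STEPS):
    print("".join("█" if c else " " for c in row))
    row = step(row)
```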

Stuff About Demographics, the Economy, and Investing
Executive Blanding
CEOs have become more execution oriented and less creative since the GFC, according to a NBER working paper: “After the global financial crisis (GFC), the average interviewed CEO candidate has lower overall ability, is more execution oriented / less interpersonal, less charismatic and less creative/strategic than pre-GFC. Except for overall ability and execution oriented/interpersonal, these differences persist in hired CEOs. Interpersonal or ‘softer’ skills do not increase over time, either for CEO candidates or hired CEOs.” I tend to be skeptical of this sort of analysis given how easy it can be to find the answer you are looking for in a sea of data, but I have to agree that CEOs are not only getting more boring, but they increasingly seem to be falling victim to their own corporate narratives. One dichotomy of CEOs (explained in more detail in the book The Outsiders) can be defined by where they lie on the spectrum of capital allocation versus execution. We discussed this in more detail on page 17 of Complexity Investing, implying that CEOs that are more focused on capital allocation and decentralized decision making, and less on top-down execution, have a better shot at fostering adaptable organizations. My purely anecdotal feeling is that CEOs are more wrapped up today in the business news algorithm, by which I mean they are making far fewer independent decisions (i.e., exhibiting lower creativity), and are far more influenced by the reflexive narratives around their companies and, in many cases, the narrative around their own career prospects. It’s social networking’s algorithmic mind control applied to the hyperactive business news cycle. Boards also seem more eager to capitulate to activist investors, which, again, I think is characteristic of less independent thought and more narrative-driven decision making. Interestingly, CEO tenure overall is flat at around 8 years (it did dip down to ~7.5 during COVID before rebounding). My instinct would be that CEO turnover is up (at Starbucks, the CEOs last about as long as it takes to drink a venti mochaccino), but it seems the shift to more execution-oriented CEOs is not impacting turnover (perhaps less risk taking leads to less career risk). There is a large degree of variation by industry, according to the data from executive-compensation firm Equilar, with tech CEO tenure rising from ~7.5 to nearly 9 years since the GFC, while auto exec endurance fell from 9+ to 7 years. CEO turnover has been higher in consumer companies for the last few years as well. At NZS, we tend to look for companies where the CEO tries to make themself largely obsolete by building a powerful, decentralized organization, leaving execution to the biological organism that is the company’s many interacting employees.

✌️-Brad


SITALWeek #448

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: Waymo passes 100,000 fully autonomous rides a week; a new injectable can stop bleeding instantly; reality TV production plummets as Hollywood and viewers move on; evidence mounts for spontaneous emergence of life and the similarities of humans and LLMs; Mr. Market is long dead; startups are in a funding air pocket; a 1999 interview with Tom Waits and the Last Leaf on the Tree; and, much more below. 

The next SITALWeek will publish on September 8th.

Stuff about Innovation and Technology
Vans Go
As Waymo surpasses 100,000 autonomous paid trips per week, the company is evolving the form factor of its driverless vehicles toward a small electric van, made by China’s Geely, called Zeekr. The Zeekrs will be equipped with Waymo’s new, lower cost, “generation 6” technology package, which has 13 cameras (down from 19) and four (instead of five) lidar sensors. The vans are similar in size to Waymo’s current fleet of Jaguar I-PACE models but have lower floors and higher ceilings, making for easier access. Although Waymo introduced the Zeekrs in late 2022, the company is now testing them, outfitted with the new tech configuration, on US roads with human drivers on board. Relatedly, one of my favorite new ambient YouTube channels features Waymo’s vehicles maneuvering around each other in one of the company’s staging lots in San Francisco.
 
PHEV Pivot
Ford is scrapping plans for its large electric SUVs and will instead pursue plug-in hybrids (PHEVs). As I noted in EV Chill, PHEVs are the only logical path for the auto industry, and regulations requiring automakers to shift to PHEVs would be a welcome impetus for hastening the transition.
 
Sugary Coagulant
Traumagel by Cresilon is a hemostatic gel composed of long polysaccharide strands derived from algae that can stop bleeding when applied to a wound. The product is an easier, safer, and less painful way to stop acute bleeding from an injury like a gunshot wound. Recently approved by the FDA for treatment of traumatic injuries, the gel received prior approval for treatment of minor cuts and has also been in use by veterinarians (as Vetigel) since 2020.
 
Show Stopper
Showrunner is a platform for creating AI-generated animations, including script, voices, and video. They debuted last year with an episode of South Park, and their website showcases a growing roster of shorts. As these technologies advance, it will be immeasurably more difficult to stand out against the sea of content, as each show could become a node for infinite AI-generated derivatives. This nascent tech does not bode well for the already contracting (dying) film industry. We are now close to a year out from the most recent Hollywood strikes, and productions have stabilized at around 15% below “normal” filming levels, according to FilmLA in the LA Times. Reality programming has been hit especially hard, down 50% compared to its five-year average. Although ballooning content and streaming wars could account for this deficit, perhaps depictions of “reality” through the lens of TV shows simply aren’t that interesting anymore. And, in a truth-is-stranger-than-fiction plot twist, amidst the dwindling productions in Hollywood, fried chicken fast-food purveyor Chick-fil-A announced plans to start a new video streaming service with original, unscripted reality TV content.

Miscellaneous Stuff
Self-Replicators
Sean Carroll hosted the former head of Google Research and current head of Google’s Cerebra team, Blaise Agüera y Arcas, for a discussion of their latest paper, which implies that spontaneous emergence of self-replicating life may be a common outcome when starting from a multitude of “pre-life” environments. Here is the synopsis of the podcast: “Understanding how life began on Earth involves questions of chemistry, geology, planetary science, physics, and more. But the question of how random processes lead to organized, self-replicating, information-bearing systems is a more general one. That question can be addressed in an idealized world of computer code, initialized with random sequences and left to run. Starting with many such random systems, and allowing them to mutate and interact, will we end up with ‘lifelike,’ self-replicating programs? A new paper by Blaise Agüera y Arcas and collaborators suggests that the answer is yes. This raises interesting questions about whether computation is an attractor in the space of relevant dynamical processes, with implications for the origin and ubiquity of life.” Blaise also seems to support the idea that the human brain works in a very similar way to large language models (see You Auto-Complete Me).
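To make the setup a bit more concrete, here is a toy Python sketch of the kind of experiment described in the synopsis: a “soup” of random strings that interact and mutate, with copying behavior gated on an arbitrary motif. This is my own cartoon for illustration only, not the paper’s implementation (which uses a Brainfuck-like language and self-modifying programs); the alphabet, motif, and parameters below are invented, and because the copy rule here is hard-coded, the sketch only shows how replicators spread once copying exists, not the paper’s stronger claim that replication emerges from purely random dynamics.
```python
import random
from collections import Counter

ALPHABET = "ABCDP"     # invented alphabet for this toy model
MOTIF = "CP"           # hypothetical "copy instruction": strings containing it act as copiers
LENGTH = 8             # fixed program length
POP_SIZE = 200
MUTATION_RATE = 0.005  # per-character mutation probability per interaction
STEPS = 20000

def random_program() -> str:
    return "".join(random.choice(ALPHABET) for _ in range(LENGTH))

def mutate(program: str) -> str:
    # Each character independently flips to a random symbol with small probability.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in program
    )

def step(soup: list[str]) -> None:
    # Pick two distinct programs at random; if the first contains the copy motif,
    # it overwrites the second with a (noisy) copy of itself.
    i, j = random.sample(range(len(soup)), 2)
    a = soup[i]
    if MOTIF in a:
        soup[j] = mutate(a)
    soup[i] = mutate(a)  # background mutation on the selected program either way

def top_share(soup: list[str]) -> float:
    # Fraction of the population held by the single most common genotype.
    _, count = Counter(soup).most_common(1)[0]
    return count / len(soup)

if __name__ == "__main__":
    soup = [random_program() for _ in range(POP_SIZE)]
    for t in range(STEPS):
        step(soup)
        if t % 5000 == 0:
            print(f"step {t:6d}: top genotype share = {top_share(soup):.2f}")
    # Typical runs end with the soup dominated by a few motif-carrying lineages
    # descended from whichever random copiers happened to be there at the start.
```
Even in this stripped-down version, motif-carrying strings crowd out inert ones within a few thousand interactions, which gives a feel for the dynamics the paper studies at much greater depth and with far weaker assumptions.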
 
“What’s He Building in There?”
The Tom Waits YouTube channel released a 1999 interview coinciding with the release of the album Mule Variations. The interviewer starts out by asking if the album reflects Waits coming full circle from his various musical styles throughout the 1970s, 80s, and 90s. Waits, with his uncanny ability to quickly coin an expression, responded: “I get the image of somebody with one foot nailed to the floor when I think about comin' full circle. If I was comin' full circle, I guess I'd move back to the town I grew up in, you know.” Waits also describes songwriting as being in the salvage business: “Yeah, we had 25 [songs]. And then, you put 16 on the record. And the rest of 'em wind up in the orphanage. That's kinda how it works. And then, you use 'em on something else, or you cut 'em up and use 'em for parts. It's kinda like being in the salvage business when you're a songwriter. You pay attention to things. And particularly, things that other people don't seem to need, or aren't using, or threw away in a conversation and didn't pay any attention to. And you hang onto it and use it later.” In addition to 17 studio albums, Waits has appeared in over three dozen films since the late 1970s, which perhaps explains why he discusses songwriting and band formation in terms of casting, directing, editing, and producing. Waits compared the spoken word track “What’s He Building?” to a short film about our endless curiosity to create rich stories about our neighbors from scant tidbits of disconnected information. YouTube is a treasure trove of Waits-ology, including his many late-night television appearances. Here’s one more quote from the interview: “Roosters will never crow when you're crowing. They wait till there's some clean air. They wait till you're done. And then, they get the best spot.” Willie Nelson recently released the first single from his upcoming album, a cover of Tom Waits’ “Last Leaf on the Tree”. The album, which takes its title from the Waits cover, will be released on November 1st and is Nelson’s 76th solo studio album and 153rd career album. The majority of the album will feature 91-year-old Nelson covering a diverse roster of artists and styles. Last Leaf has the haunting feeling of a swan song:
I’m the last leaf on the tree
The autumn took the rest
But they won’t take me
I’m the last leaf on the tree
When the autumn wind blows
They’re already gone
They flutter to the ground
Cause they can’t hang on
There’s nothing in the world
That I ain’t seen
I greet all the new ones that are coming in green
...
Nothing makes me go
I’m like some vestigial tail
I’ll be here through eternity
If you want to know how long
If they cut down this tree
I’ll show up in a song

Stuff About Demographics, the Economy, and Investing
Startups Power Down
As VCs chase shiny, new AI startups, they are leaving their old investments with a shorter runway. Carta, a provider of services for venture-backed companies, has seen the number of its customers shutting down rise 7x from 2019 (and 5x from 2021), to 254 in the first quarter of 2024. Of course, enlargement of the starting pool is one reason for the uptick in failures, with more startups funded during the pandemic capital bubble. Business Insider also reports that family offices are increasingly investing in venture-backed companies directly in addition to routing their money through venture funds. As I wrote in Private Asset Malaise, there could be broader trends at play following a decades-long private asset price bubble.
 
“Mr. Market” Myth
I continue to see otherwise intelligent investors cling to the idea of a “Mr. Market” that drives stocks. There is no value in imagining agency of any type behind the stock market, whether it be human, an animal spirit, or some other alien-like intelligence, given that valuations are set by the complex interactions between algorithms, AI, and the dwindling cohort of human investors, who are themselves largely mind controlled by algorithms and AI. While some people might be willing to agree that this is true of the short-term, “voting machine” nature of the stock market, most investors will push back and say that, in the long term, the “weighing machine” still sets the value of an asset relative to its current and prospective free cash flows. I, however, believe the weighing machine is broken as well. I wrote more about this notion in early 2023 in Vanishing Edges:
 
Only one-third of active mutual fund managers beat their market benchmarks in Q1 of 2023, according to the WSJ. Bill Miller has often articulated three sources of advantage an investor can have over the broader market: informational, analytical, and behavioral. Miller provides more detail in this letter, but, briefly, I interpret the framework as follows: an information edge is knowing something before others; an analytical edge is having similar information but coming to a different conclusion; and a behavioral edge is acting differently than others despite a similar analysis of similar information. From my perspective, the opportunity for an informational advantage began declining with the onset of the Information Age in the 1980s. Thanks to the Internet and ubiquitous access to real-time data (not to mention podcasts, YouTube videos, etc.), I would posit that today there is essentially zero value in investors seeking an informational edge. Analytical strategies to beat the market rose to prominence in the mid part of the last century. I would point to the classic Security Analysis by Graham and Dodd (first published in 1934) as a hallmark for the use of analytical methods to gain an edge over the market. I suspect analytical advantages rose over time (perhaps even fed by the rising use of technology and availability of information), but then they too began to lose value as the machines took over and algorithmic and quantitative strategies rose in both prominence and share of assets, arbitraging away many seeming advantages. LLMs and AI will soon relegate whatever meager analytical edge remains to the refuse heap of ticker tape machines and other investing anachronisms. What then of the last source of advantage, behavioral? Miller’s framework has historically described a behavioral edge as taking advantage of the biases of other humans. That human-focused perspective becomes complicated as passive investing steers past 50% share, and daily market activity is increasingly a reflexive, hyper feedback loop between machines and machine-created information and algorithms. It’s one thing to have a theory of mind for other humans and then try to take advantage of their biases; it’s quite another to have a theory of mind for AI when we don’t even fully know how emergent behavior works in LLMs. Even if we were to recognize that behavioral advantage has shifted from overcoming human bias to overcoming machine bias, soon AI and LLMs will be smart enough to eliminate Miller’s final edge...Investing has been one of the earliest professions to be heavily impacted by evolving technology, probably because stock trading is digital and largely information based (the more digital an industry, the more it is susceptible to technological disruption). We would apply the same lens to investment firms and investment strategies that we apply to any company or industry we analyze: the winners will be the most adaptable organizations that offer the most non-zero-sum outcomes. It’s not entirely clear what the path forward is for professional investors whose goal is to consistently beat the market, but it’s worthwhile to think deeply about the areas to which humans can still uniquely contribute and those that would benefit from adept implementation of AI.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.