SITALWeek

Stuff I Thought About Last Week Newsletter

SITALWeek #454

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: the principal-agent problem of AI capex as LLMs begin to reason like humans for a reasonable cost; a potpourri of December headlines; some insight on Google's latest quantum news; the much larger impact GLP-1s will have on life expectancy and consumer habits; Bob Dylan, Nick Cave, Tod Browning, Lon Chaney, and Timothée Chalamet teach us about the torturing of art and the subjectiveness of reality; and, much more below.

Happy New Year and welcome back readers! SITALWeek's publishing schedule will continue to defy its name and not publish weekly, but I hope to keep up at least a monthly cadence this year.

Stuff about Innovation and Technology
The Principal-Agent Problem of AI Platforms and the Timing for Mass Market AI
The principal-agent problem describes a misalignment of objectives between two parties working together. Often, the principal has a certain outcome in mind and seeks the help of the agent to achieve that outcome. However, if the agent has a different set of incentives, both parties may end up failing at their given task. When it comes to AI models and the massive buildout of data centers, the principal-agent problem may thwart many of today’s leaders. For example, take a look at the Microsoft-OpenAI situation: OpenAI is incentivized to make the best LLM – without much regard for cost efficiency – because Microsoft is effectively footing the bill for the infrastructure. Microsoft, on the other hand, needs a power- and price-optimized AI (not the smartest AI, but rather the smartest affordable AI) in order to compete with others. Contrast that situation with Google, which has a vertically integrated approach to AI. Google is on the 6th generation of their custom TPU processor and has a massive global footprint of data centers optimized in part for transformer models, dating back to the original search autocomplete (see AI Search in #384). Thus, it’s no surprise that the data show that Google’s Gemini models operate at the highest level of intelligence per dollar cost, which may be the driving force of recent share gains with developers (who, for now at least, do not appear overly skeptical of being locked into exclusively using Google’s cloud). This fact is important because cost will be one of the primary determining factors for developers building the next generation of applications (see Follow the Developers in #380). Principal-agent misalignment can be an issue with open-source development platforms as well, but the problem is amplified with power-hungry, compute-intensive platforms like LLMs (e.g., vs. prior generations of open-source software like Linux). 
Therefore, we might also speculate that Meta’s Llama open-source LLM, which is likely optimized to run on Meta’s data centers for their own advertising and social apps, will not meet the right economically viable price/performance hurdles when developers run Llama on other cloud data centers. Amazon has taken a similar approach to Google in terms of designing leading-edge custom chips, but, so far, they have not leveraged their AWS developer market share to gain meaningful share in the AI market. And, it’s worth noting that Microsoft is reportedly working on alternative models and partnerships, at least in part to reduce costs compared to OpenAI’s models. Taking everything we know about computational platform shifts over the last half century, becoming the leading AI platform would appear to be Google’s opportunity to lose, although the race is far from over.
 
This issue of cost for AI is worth exploring in more detail. In the past, I’ve examined the energy needs of computers and robots compared to the highly tuned human mind and body, but I think we are approaching the point where we can start to estimate the value of AI for developers and the companies/consumers who are going to buy the next wave of innovative applications. I think the salient question for AI (and, frankly, humanity!) is: How much AI reasoning can you get for a human-equivalent salary? In other words, for a certain salary, how much compute power will it take to match or outperform a human (assuming the AI can collaborate with other humans/AIs using the same methods and tools a human would)? An AI also has the obvious advantage of being able to work 24/7/365. A 40-hr/wk job for a typical human information worker probably entails ~10 hours of rote work (table stakes for an AI), ~10 hours of reasoning/thinking, and ~20 hours of wasted time (chatting, sitting in useless meetings, staring at screens, scrolling TikTok, playing office politics, etc.; note: there are likely some jobs, like software programming, that actually equate to ~40 hours of real work a week, minus, of course, time for completing TPS reports). AI seems to reason more slowly than humans today, so let’s say it takes ~20 hours of computing to do ~10 hours of high-level (i.e., non-rote) human work. If AI can do ~20 hours of reasoning and ~10 hours of rote work per week for less than a typical human information worker, that’s interesting, especially given that employers wouldn’t be paying payroll taxes, benefits, etc. We can then project the progression of AI technology forward and see what performance-per-dollar advancement pace would be required to make AI a ubiquitous human replacement. 
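For readers who like to play with the numbers, here’s a minimal sketch of the break-even math above. Every input is an assumption, not data: a round-number $40/hr fully loaded wage, the 10/10/20 split of a 40-hour week, and a 2x AI "slowdown" factor on reasoning work.

```python
# Back-of-envelope: what would AI compute have to cost per hour to
# undercut a human information worker? All inputs are assumptions.

HOURLY_WAGE = 40        # assumed fully loaded human cost, $/hr

rote_hours = 10         # table stakes for an AI
reasoning_hours = 10    # high-level thinking
wasted_hours = 20       # meetings, scrolling, office politics

ai_slowdown = 2.0       # assumed AI compute-hours per human reasoning-hour

# Compute-hours the AI needs to replicate one week of useful output
ai_compute_hours = rote_hours + reasoning_hours * ai_slowdown  # 10 + 20 = 30

# What the human costs per week (the employer pays for wasted time too)
human_weekly_cost = HOURLY_WAGE * (rote_hours + reasoning_hours + wasted_hours)

# Break-even compute price: above this, the human is still cheaper
breakeven_per_compute_hour = human_weekly_cost / ai_compute_hours

print(f"Human weekly cost: ${human_weekly_cost}")                        # $1600
print(f"AI compute-hours needed: {ai_compute_hours:.0f}")                # 30
print(f"Break-even compute cost: ${breakeven_per_compute_hour:.2f}/hr")  # $53.33/hr
```

Under these (very debatable) assumptions, AI compute cheaper than ~$53/hr already beats the human on a per-week basis, before even counting payroll taxes and benefits.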
 
Today, Google Cloud Platform (GCP) prices a top-tier GPU with an annual commitment at around $700/mo. It’s hard to gauge just how many of these GPU instances on GCP would be needed to accomplish human-equivalent work within a reasonable time frame. Further, the way most developers will build AI apps to assist or replace office workers is through LLM APIs. APIs are generally priced per tokens in and tokens out (a token being the smallest unit of text an LLM processes or outputs). I am on shaky ground attempting to figure out the human-equivalent “tokens” required to both input and output a complex task or decision that would be valuable to a corporation, research institute, etc. (Aside: it turns out that research shows the brain may reason at relatively low bit rates.) Further, raw access to GPUs or tokens would not include the other inputs a human would need in order to make high-level decisions and produce high-level work outputs (e.g., such an AI agent would likely need access to all of the apps and data in an organization, and it might be charged a seat license just like a human). These costs also don’t include productizing an AI agent. In other words, if GCP or some startup were to create an “office worker AI as a service”, there would be a fully burdened business model targeting something like 30% FCF margins, etc. So, this line of reasoning for guessing at human-equivalent costs is a bit of a non-starter. 
 
However, LLMs are shifting from a pure token-in/token-out model to a test-time scaling model, which may offer us better inroads for estimating costs. Essentially, they are thinking harder before spitting out a reply; thus, rather than just predicting the next words in a response using a probability model (see You Auto-Complete Me), they are doing some deep thinking to arrive at more accurate, useful answers. This is a major leap in capability that comes with a major leap in cost. OpenAI priced access to their o1 model at $200/mo (Pro subscription), up from $20/mo (Plus subscription). For developers, use of o1’s advanced reasoning API comes at 3-4x the cost of their “general purpose” GPT-4o. If o1 were priced at a typical Western office worker wage of $40/hr, the reasoning of the model would equate to around 5 hours of work per month. We also don’t know if the $200/mo price point is profitable for OpenAI or if they are just relying on Microsoft to further subsidize their business model (which brings us back to the principal-agent problem I started this section off with). So, all of my hand waving here seems to imply you can get a decent amount of human-equivalent reasoning for an amount of money in the realm of human labor cost. If true, after a few more years of advancements in semiconductors and AI models, we should have markedly affordable “human reasoning as a service”, an explosion in demand, and a wide range of outcomes for how much human supervision of AI will be required (it may be that human jobs stay relatively flat, but each human is 2x as productive, then 4x, etc.). 
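The $200/mo figure is easy to translate into human-equivalent hours, and to project forward. A minimal sketch, assuming the $40/hr wage from the text and a purely hypothetical 2x-per-year improvement in reasoning price-performance (the real pace could be faster or slower):

```python
# Translate the o1 Pro subscription price into human-equivalent hours
# at an assumed $40/hr office-worker wage (both figures from the text).
pro_price_per_month = 200
assumed_wage_per_hour = 40

human_equiv_hours = pro_price_per_month / assumed_wage_per_hour
print(f"Today: ~{human_equiv_hours:.0f} human-equivalent hours per month")  # ~5

# If price-performance improved ~2x per year (a hypothetical pace, roughly
# in line with historical compute cost curves), the same $200/mo would buy:
for year in range(1, 5):
    hours = human_equiv_hours * (2 ** year)
    print(f"  +{year} yr: ~{hours:.0f} hours/mo")
```

At that hypothetical pace, $200/mo buys roughly a half-time worker’s reasoning hours within four years, which is the kind of threshold where "human reasoning as a service" starts to look mass-market.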
 
Following this logic, at current AI reasoning costs, companies would need to lay off one human for every AI human equivalent they hire and would probably lose more skill/knowledge than they gain. In other words, based on my attempts to guess the cost of replacing human reasoning, today’s AI offerings aren’t likely compelling enough. In a couple years, however, maybe you will be able to lay off one human and hire a handful of AIs, which, by collaborating with each other and humans, may yield superior results. Even today, extremely high-value tasks, such as in-depth research or stock market predictions, may be able to take advantage of the high-cost test-time scaling AI models. And, if any of this math is in the realm of reason, you can easily see that AI may not require such high-value-add applications to be cost effective in the near to medium future. The proof will come within the next couple of years as today’s entrepreneurs develop the next generation of apps leveraging LLMs and overtaking human capabilities: If these apps are at price points that outcompete human employees, a significant wave of change could come much faster to society. Outside of the AI human office-worker replacement, as costs come down, we will see AI agents exploding on social networks. For example, Meta is developing tools to create pervasive AI characters interacting with humans across their apps. Further, Spotify is seeing great success with their personalized, LLM-based AI DJs, which can create their own dialogs with customers about music. All of these advancements are leading us toward the AI agent digital economy, which I think will dwarf our analog human economy. Today’s early test-time scaling AI seems to support this view of the future.
 
The above concept of AI replacing human decision making and reasoning (including high-value R&D that could lead to a new Age of Wonder) is one of two vectors that I see as interesting in the coming years. The other interesting vector is in entertainment. AI models are getting remarkably good at creating compelling, realistic video (see Google’s new Veo 2), and we are likely not too far off from being able to create entirely realistic virtual worlds with simple prompts. As with AI wholesale replacing office workers, it is difficult to predict when exactly we will reach the cost-performance inflection point that will result in all of us living in our own virtual worlds, and it may not happen before we have the next wave of hardware innovation in wearable AR/VR tech on a 3-5 year time frame. If I had to sum up my views regarding all the AI developments over the last year, I’d be trite and say that I’m cautiously optimistic. I still fear we are in a general-purpose AI overbuild, but likely in an underbuild situation for some of the specific tasks I outlined above, many of which could be the largest markets for technology we have seen by orders of magnitude.
 
Since it’s been a few weeks since we posted a SITALWeek, here are a few one-liners on topics I thought were interesting:
 
Solidifying Assets
Active public equity investing continues to suffer escalating, record outflows ($450B in 2024); meanwhile, large investment firms are plowing into the markets for private assets as the public markets are increasingly dominated by a small number of very large companies. This existential transition for the investment industry from liquid public markets toward creating broader appeal for more highly levered private assets comes with lower liquidity and increased systemic risks.
 
Adding Power Laws
Speaking of a small number of companies dominating large markets, the global advertising industry was on track to hit $1T in 2024, and the mega platforms Google, Meta, ByteDance, Amazon, and Alibaba make up more than half of the market.
 
Recycling Fatigue
Aluminum can recycling continues to decline as Americans now throw away $1.2B of aluminum a year. The recycling rate recently dropped from a long-term average of 52% to 43%. Meanwhile, Elon Musk no longer thinks climate change is an existential threat to humans.
 
Warehouse Mechanoids 
Robots are starting to earn their keep: Nestle is using Boston Dynamics’ Spot robot dogs for predictive maintenance, driving higher returns on investment than anticipated; Agility Robotics humanoids are entering the workforce, with one customer seeing a two-year payback against a $30/hr equivalent human wage.
 
Two Cents on Wealth Distribution
Warren Buffett took the opportunity to wax philosophical about the US and his views on wealth at the end of this Berkshire press release on his estate planning. 
 
Problematic Power Oscillations
AI data centers are distorting the harmonics of power delivery to nearby homes and businesses, risking damage to appliances and other expensive items.
 
Human vs. Algorithm 
AQR’s Cliff Asness says AI is coming for his investment job. In case you missed it, Asness’ paper from September is worth a read: The Less-Efficient Market Hypothesis.

Miscellaneous Stuff
Willow’s No Game Changer
I hesitate to write about the recent press on quantum computing because there is nothing terribly substantial in the announcement on Google’s Willow quantum chip, but with my background in astrophysics I get a lot of questions on it. The key development for quantum computing is to keep the error correction ahead of the ability to compute and transfer information in and out of the quantum system (by adding more qubits of computing power, the error rate drops rather than rises). For the next decade, the only likely use for quantum computing will be simulating quantum systems – they won't achieve anything that would resemble any conventional utility, which Google’s Willow announcement doesn’t change. The ongoing progress of quantum computing does continue to imply that the systems are dipping into parallel universes to compute, so there’s that! I am also very curious to see if the imaginative side of AI and the pending new Age of Wonder for scientific discovery are already giving us a peek at the recent advances in quantum computing (see also Quantum Resistance). For anyone interested in more, here is a blog post from quantum researcher Scott Aaronson and a video featuring physicist John Preskill.
 
Boomer Wave Rolling On
GLP-1s are effectively becoming commodity-like drugs (by which I mean there are multiple offerings that achieve similar outcomes) and supply shortages are ending. The compounds were, after all, based on fairly simple molecules that have been tested for decades. However, becoming a commodity doesn’t necessarily beget price decreases or demand increases given the complicated healthcare system. Moreover, usage is likely to plateau at some point as the drugs reach a state of diminishing returns. Meanwhile, the real impact of GLP-1s is just getting started: there’s been a rise in life expectancies, reversing a worrying downward trend. It could turn out that the boom in GLP-1 revenues was just a sideshow to the effects on the economy from people living healthier for a longer period of time. If major disease categories are slowed or pushed out, it would have significant ramifications for many demographic trends, both positive and negative. With declining birth rates, the voting population will tip back to favor older generations. The housing shortage could be exacerbated as more people live longer and age in place. Recreational activities, vacations, remodels, etc. could all be on the rise as the silver tsunami of the Boomers gets bigger, healthier, and rolls on further than expected. It could also have negative implications for programs like social security, as actuarial tables blow up. The impact may be the most negative on the healthcare industry (see #296 and #379) itself, as demand for medical care steadily declines as a percent of the overall economy (offset by longevity increasing the tail of demand for healthcare in later years of life). Particularly in the consumer sector, paying close attention to major demographic winds of change tends to be lucrative.
 
Rescripting Reality
I really enjoyed this interview with actor Timothée Chalamet on his role playing Bob Dylan in A Complete Unknown. We have covered Dylan’s enigmatic toying with reality on more occasions than I can remember in SITALWeek. I’ve always been fascinated by Dylan’s ability to playfully manipulate reality because I think it’s essentially how the world has operated for the last half century (i.e., trending toward subjective reality), and it’s an omen of things to come. Dylan (who has been bizarrely posting on X, something he started to do only when the rest of the world soured on the platform, of course!) himself praises the movie, and Chalamet recounts how Dylan snuck onto set and co-opted the script, rewriting it to include a fabricated story, which made the final cut. The “Bob-annotated script” is a phrase that I love because in some ways it’s a guide to living (a deeper look reveals Dylan was heavily involved in the development of this project going back to the book it is based on). Would that we all could annotate our own scripts. Some of SITALWeek’s favorite worlds collided when another frequent artist I’ve discussed, Nick Cave, was called out by Dylan on X: “Saw Nick Cave in Paris recently at the Accor Arena and I was really struck by that song Joy where he sings 'We’ve all had too much sorrow, now is the time for joy.' I was thinking to myself, yeah that’s about right.” An elated Cave responded: “I was happy to see Bob on X, just as many on the Left had performed a Twitterectomy and headed for Bluesky. It felt admirably perverse, in a Bob Dylan kind of way.” Earlier this year, Dylan also happened to recommend one of his favorite movies, The Unknown, a 1927 Tod Browning film starring Lon Chaney and Joan Crawford. Some readers may know Browning from 1932’s Freaks. Both movies exist in the period of the pre-code era of Hollywood before the Motion Picture Production Code guidelines of 1934 shifted the tone at many studios. 
Browning began his career as a circus sideshow performer and Vaudeville act. I’ve had plenty to say here about how Hollywood is the new Vaudeville as their wares cater to smaller and smaller audiences. Even Chalamet exhibits an unusually high level of self-awareness (for Hollywood) in calling out his beloved, dying industry of movie magic multiple times in the interview referenced above. Here, I find myself drowning in my own biases about art, AI, and the human condition in the digital era: Nick Cave fighting against and then succumbing to AI; Dylan annotating reality again and again and again; Vaudeville giving way to Hollywood giving way to a post-truth digital menagerie of subjective realities and algorithmic brainwashing. All the while, the artists still have human stories to tell, and, just like Lon Chaney in The Unknown, they will torture and mutilate themselves beyond imagination, even if it’s for an ultimately shrinking audience. To get that one reaction from a single other human being in appreciation of their art is enough for these artists to carry on. To humanity’s dying breath, everyone is performing for an audience of some sort.

Stuff About Demographics, the Economy, and Investing
Mutually Assured Monopolies
When you examine many of the FTC’s ongoing reviews of alleged anticompetitive behavior of the major tech platforms around the world, you typically find one of their marketplace competitors feeding information to regulators. This creates a somewhat humorous standoff when one competitor alleges another is anticompetitive in one arena, while the other alleges the first one is anticompetitive in another arena. For example, Microsoft supplied a lot of information against Google and Apple’s search deal, but Google is alleging that Microsoft’s OpenAI exclusive is anticompetitive. What all this amounts to is a standoff – a mutually assured dysregulation of sorts where the big keep getting bigger in all types of products as the economy marches from analog to digital. This concept was certainly ever-present in the tech-driven rally of markets over the last year. However, I believe this regulatory theatre is all a misdirection, as the big tech platforms’ accusatory jabbing focuses the government’s eye on an increasingly irrelevant set of backward-looking technologies, leaving the future AI monopolies to be cemented in the trillions of dollars of data centers that will run the entire human and agent economy in the decades to come.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
