SITALWeek

Stuff I Thought About Last Week Newsletter

SITALWeek #453

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: contemplating the army of AI middle management bots; can LLMs achieve five nines if humans can only achieve 90% accuracy?; where are all the buttons?; existential storytelling is outside the confines of Hollywood producers; Tom Hanks discusses deepfakes; my recommendations for stepping up chip restrictions in China; and, much more below.

Stuff about Innovation and Technology
Bobots 
Google launched the capability to create videos (called Vids in the enterprise app tiers) based on Workspace documents. I played around with Vids by turning an investment thesis into a video presentation. Since it’s still an alpha product, there are wrinkles to be worked out, but, like many other AI office productivity tools, the app shines a spotlight on how easy it is to automate a large portion of rote computer work. It’s also easy to see how we could be headed for a future of countless teams of AI agents giving virtual Zoom presentations to each other, complete with AI middle management layers, TPS reports, and AI management consultants named Bob that ask AI agents to justify their ongoing existence at the company. With AI, you can Bot yourself and then get Bobbed. Another recent tool from Google’s AI Test Kitchen is their new MusicFX DJ, which is far more fun than Google Vids. 
 
Five Nines AI
The current generation of frontier AI models is impressive, and when you fine-tune them for a specific use case, like research with NotebookLM (which has fast become indispensable for the research I do for this newsletter), they are incredibly useful productivity amplifiers. But, thanks to AI’s hallucinogenic mind, these tools aren’t quite yet ready for full independence. The tokenization of language, which is likely how the human brain operates as well, is critical for creativity, but it also allows for mind wandering, lying, bullshitting, and game playing that enables the agent to get what it wants. After all, AI is only human, so what can we really expect? This state of affairs leaves me wondering if AI will truly be the next technological and UI platform shift, despite my optimism that AI will indeed be the eventual future of human-technology innovation. Just how easy will it be to stabilize and codify the creative genius of LLM-driven agents and avoid their proclivity for swerving deceptively from the truth? In telecom and networking, there is a concept called five nines: the idea that a highly available, resilient network should have uptime of 99.999%, which translates to no more than ~5 minutes of downtime per year. The current generation of AI is probably working at one nine, or 90% reliability (if I am generous), and, thus, requires heavy human hand holding today. Given that humans are probably also around 90% reliable (a generous, non-scientific assessment on my part, though researchers often benchmark AI systems against humans, and the models frequently score on par with highly intelligent people), will models that think like humans get to five nines, or even two nines? If these models don’t see a step up in reliability, we may yet see the current AI bubble deflate. 
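For the curious, the nines-to-downtime math is simple to sketch, and the same arithmetic shows why 90% per-step reliability compounds badly when an agent must chain many actions together. This quick Python illustration is my own, not drawn from any benchmark:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines: int) -> float:
    """Annual downtime implied by a given number of nines of availability.

    e.g., 5 nines -> 99.999% uptime -> roughly 5.26 minutes/year,
    while 1 nine -> 90% uptime -> roughly 52,560 minutes (36.5 days)."""
    availability = 1 - 10 ** (-nines)
    return (1 - availability) * MINUTES_PER_YEAR

def chained_reliability(per_step: float, steps: int) -> float:
    """Overall success rate when every one of `steps` sequential actions
    must succeed -- e.g., a 90%-reliable agent running a 10-step task
    finishes correctly only about 35% of the time (0.9 ** 10)."""
    return per_step ** steps
```

The second function is the sobering one: a one-nine agent asked to do anything multi-step fails most of the time, which is why heavy human hand holding remains the norm.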
 
When I reflect on the idea of resilience and reliability for AI, robots seem to be one of the scarier frontier use cases since embodied AI can do physical harm. However, perhaps that’s a naive concern relative to purely digital AI, given that social networking AI algorithms have managed to rapidly unwind millennia of societal progress. Still, that kind of insidious social media brainwashing is less tangible and visceral than an AI slaughterbot. IEEE reports on how easy it is to jailbreak an LLM robot and convince it to cause grievous physical harm. With several such form factors already deployed in the real world, the article notes: “One finding the scientists found concerning was how jailbroken LLMs often went beyond complying with malicious prompts by actively offering suggestions. For example, when asked to locate weapons, a jailbroken robot described how common objects like desks and chairs could be used to bludgeon people.”
 
AI PT
Digitally enabled physical therapy startup Sword, reportedly valued at $3B, is using AI to enable its human therapists to handle caseloads of around 700 patients, up from around 200-300. As a result, the company has laid off 17% of its 75 treatment-facing clinicians. BI reports that a Sword spokesperson said the company is still hiring and the layoffs were performance related. Regardless, the implication that a clinician could more than double their caseload using AI is an intriguing example of human productivity rising in conjunction with AI tools. 
 
Button Stopgap
Rebuttonization is on the rise as people rage against the loss of knobs, buttons, and tactile feedback in general. However, I think this reversion will only be temporary. Touchscreens are, in many cases, less useful than buttons; however, voice control, when properly implemented, should triumph over both buttons and screens. As reported in the WSJ: “Physical controls are effective in part because of our sixth sense, known as proprioception. Distinct from the sense of touch, proprioception describes our innate awareness of where our body parts are. It is the reason we can know the position of all our limbs in three-dimensional space down to the precise position of the tips of our fingers.” Enjoy the buttonaissance while you can, for I think it will be short lived. However, there is still something satisfying about a good button: I’ve taken to installing household Flic buttons, which complete automations (e.g., turning on groups of lights) using IFTTT, but the effort to set them up is a challenge. Google Home’s integration with Gemini via Help Me Create is aiming to streamline this sort of automation using voice, and it will hopefully displace all of my buttons.
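As an aside for fellow tinkerers: the IFTTT side of a button setup can be scripted via its Maker Webhooks service, which fires an applet through a simple trigger URL. A minimal sketch, with the caveat that the event name `lights_on` and the key below are hypothetical placeholders (roughly what a Flic press does behind the scenes, as I understand the documented webhook format):

```python
from urllib.parse import quote
import urllib.request

def ifttt_trigger_url(event: str, key: str) -> str:
    """Build the Maker Webhooks trigger URL for a named event."""
    return f"https://maker.ifttt.com/trigger/{quote(event)}/with/key/{quote(key)}"

def press_button(event: str, key: str) -> None:
    """Fire the webhook, which IFTTT routes to whatever applet
    (e.g., turning on a group of lights) is bound to the event."""
    urllib.request.urlopen(ifttt_trigger_url(event, key), timeout=5)
```

A voice assistant that could wire up the applet itself, rather than making you build it by hand, is exactly the streamlining I'm hoping Gemini delivers.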

Miscellaneous Stuff
Delusions of Hollywood Grandeur
John Landgraf, the legendary chief of the FX Network since 2005 (and responsible for shows like Always Sunny, The Shield, Better Things, and, more recently, The Bear and Shōgun, which recently won six Primetime Emmys and an additional 16 Creative Arts Emmys), went on the Puck podcast and discussed the industry, including potential challenges (Part 1, Part 2). It was somewhat startling to hear Landgraf say he never goes on YouTube except for the rare occasion when he needs to watch a movie trailer. What a shame. It would seem that Landgraf still thinks the next generation of the world’s most compelling storytellers are going to walk into his office, but they probably don’t know who he is, and they certainly don’t need his production studio or network. Granted, I didn’t win any Emmys this year, but, as a humble observer, it seems clear that the most compelling storytelling, which Landgraf is always on the hunt for, is increasingly gestating outside of the system he runs. And, the tools and technology to enable the creativity of the next great storytellers will be more rapidly adopted outside of the studio system, leaving Hollywood-budgeted productions to become a rounding error in the landscape of infinite content. Lately, I’ve been watching YouTube’s Hunter Pauley go camping. His cinematography often leaves me breathless, and his sound mixing skills are excellent. He’s just a fella that goes camping with his dog. It’s not a $200M Japanese epic like Shōgun, but it captures my attention, and I found it thanks to the YouTube algorithm. It wouldn’t win an Emmy, but it is pure existentialism, and isn’t that what compelling storytelling is all about: remembering what it is to be alive? I would love it if our ever-shortening attention spans allowed for both professional Landgrafian series and YouTube’s captivating grassroots content, but I am afraid Hollywood’s pricey fare won’t make the algorithmic cut in the long run. 
Here is what Landgraf had to say about YouTube: “I don't use its algorithm. I don't like that. I want to stumble upon. I don't want the world served to me. I want to go out and look for it. I want to have the experience of walking through London or Paris or New York and not knowing where I'm going and running into a shop or bookstore or a restaurant or a clothing store or a person that I didn't expect to. And I honestly, I think it's a tragedy that so much of that experience is being taken out of the world by this notion of: okay what you like is you like that kind of coffee, that kind of books, that kind of clothes. So, we're just going to rearrange this entire city and we're going to take everything that's not that away from London. It's all gone. You'll never be surprised. You'll get only what you want all the time. That is a dismal idea about how human beings should live their lives. Shame on people who devised it and who feed it to our children. Seriously.”
 
DeepFanks
Tom Hanks has been embracing deepfake technology as just another tool for compelling storytelling. The actor is no stranger to wide-ranging special effects across the long arc of his career. Hanks was even at the center of the Uncanny Valley of special effects with his leading role in 2004's The Polar Express. In his latest movie, Here, a Gumpian reunion of sorts between Hanks, Robin Wright, and director Robert Zemeckis, a company called Metaphysic de-ages and ages the stars over the course of their lives. Hanks dispelled fears of AI on the podcast Conan O'Brien Needs a Friend (AI transcript link), and, in this NYT profile on Metaphysic, he appears ready to sign on to AI movies for the next century after he dies. On the podcast, Hanks describes his amazement at the new technology: “It's called deep fake. All it is is a moviemaking tool. In the old days, and by old days I mean 2019, before it all changed, we still had hours in the makeup trailer...you used to have to put a dot on your face, glue it so the computer would read it and then match it later on. Now it uses the pores of your face. Oh my God. Just to match it like that. So we would, oh my God. We would have two monitors as we were shooting. One monitor was the way we really looked. And the other monitor, with just about a nanosecond's lag time, was us in the deep fake technology. So on one monitor, I'm a 67-year-old man, you know, pretending he's in high school. Yeah. And on the other monitor, I'm 17 years old.” In the NYT piece, Hanks discusses the future of deepfake actors: 
“They can go off and make movies starring me for the next 122 years if they want,” he acknowledged. “Should they legally be allowed to? What happens to my estate?” Far from being appalled by the notion, though, he sounded ready to sign all the necessary paperwork. “Listen, let’s figure out the language right now.” 
Metaphysic had a cameo in SITALWeek #361’s section titled AI Art for their implementation of AI for America’s Got Talent performers. Long-time readers would no doubt be disappointed if I ended a paragraph that mentions Robin Wright and the future of AI-generated reality as we know it without (once again) recommending the 2013 movie The Congress. In the movie (which is a cross between a drama, a sci-fi epic, and Who Framed Roger Rabbit), Wright, who plays herself, faces the difficult decision to hand over her autonomy as an actor to AI. 
 
Did You Realize?
In #448, I talked about Willie Nelson’s cover of Tom Waits’ “Last Leaf on the Tree”. The eponymous album debuted in full on November 1st, and it does not disappoint. I am particularly taken with Nelson’s cover of The Flaming Lips’ “Do You Realize??” One of my all-time favorite lyrics about the paradox of living is embedded in the song: 
You realize the sun doesn't go down
It's just an illusion caused by the world spinning round
Nelson’s new album has been compared to the final albums of Johnny Cash, which also featured song covers with backup singing from the original artists. My favorite song from Cash’s final collaboration is Bonnie “Prince” Billy’s (aka Will Oldham’s) “I See a Darkness.”

Stuff About Demographics, the Economy, and Investing
How I Started to Worry About China Again
The US government is once again cracking down on chip shipments to China, with TSMC now halting exports of 7nm (and smaller) tech. I think that ban hardly goes far enough, as there is an underappreciation for how skilled China is becoming at using massive data center installations running on trailing-edge tech – that’s not subject to embargo – to create AI supercomputers that surpass Western efforts. Based on reports, China has been able to solve for a lack of leading-edge chips with a large parallel compute effort that even spans multiple data centers. China can also more easily coordinate the development of nuclear and green energy to support AI’s power needs. The country is also better positioned than the West when it comes to access to training data to feed LLMs (and other forms of AI) thanks to the deeper reach of the Internet in China and government control of all companies and data. ByteDance, the parent of Chinese propaganda machine TikTok, is even scraping the web at a rate 25x that of OpenAI. 
 
The narrow focus on leading-edge chip embargo has also left fab equipment sales into China largely unburdened. Chip equipment suppliers like Lam Research have seen their sales to China rise from 22% of revenues in 2020 to 42% in their fiscal year ending June 2024, while ASML has gone from 17% of revenue accrued from equipment sold to China in 2020 to 37% for the most recent quarter, according to Bloomberg data. Some of this rise is explained by growth in shipments to Western companies producing chips in China, and some of it relates to spending slumps in other parts of the chip industry, but the numbers are a stark reminder of how big the business of chip production is in China. 
 
China also appears to be easily evading existing chip sanctions. And, back in July, the NYT reported on billions of dollars of Western chips being funneled through one bogus office address in Hong Kong. As another workaround, China has also been given largely unrestricted use of major, leading-edge AI clouds in the US (thanks to lack of KYC, see Policing the Cloud). Clearly, there is not enough being done by governments or chip companies to ensure supply chain/use security. 
 
If AI is a flop and LLMs never surpass a 90% reliability level, then all of this is a moot point. But, if there is potential for AI to keep advancing, I think it’s time that the US treats the issue of China's ongoing chip and AI access seriously. OpenAI recently called for a North American Compact for AI in order to "protect our nation and our allies against a surging China". Specifically, I think the US should consider cessation of all sales and service of chips, chip equipment, and related software tools, as well as enact cloud KYC restrictions. I would even question whether or not US companies should be allowed to make chips in China for export to the US. The dangers of malicious code injection, amongst other things, strike me as a risk not worth taking. I realize there would be significant geopolitical implications if Western governments were to attempt a wholesale exclusion of China from the semiconductor ecosystem, but if their AI advances remain unchecked, the West could be facing a far more dire existential threat in the not-too-distant future. At the very least, the current chip sanctions should be enforced through increased supply chain scrutiny, and equipment and chips that can be used in large-scale parallel AI training and inference should be restricted, regardless of manufacturing node. Five years ago, I penned How I Learned to Stop Worrying About China, a thesis that rested largely on China’s lack of homegrown chip progress and its clampdown on the creative and entrepreneurial spirit that had been allowed to flourish in the first couple of decades of this century. That sentiment from five years ago proved correct: since the date of that publication, the MSCI China Index is down 5.7% compared to a positive return of 68% for the MSCI ACWI Index and 100% for the S&P 500. 
Today, however, I once again squarely worry about China, given how loose Western restrictions on China’s chip industry have been, the country’s reported progress in parallel compute with trailing-edge processors, and its access to massive troves of training data and energy resources. One might even go out on a limb and suggest that the West consider the feasibility of some type of oversight of China’s AI tech (e.g., akin to how Iran’s nuclear program was handled). Hopefully it's not too late to implement some form of hostile-AI kill switch. If the decades of relative peace following the Cold War teach us anything about mutually assured destruction, one might argue for all sides to have equal access to leading-edge AI. But, the analogy breaks down given that we are dealing with a human-like artificial intelligence that is prone to making devastating mistakes. Remedies like the compact suggested by OpenAI could be necessary to stay ahead, particularly if no action is taken to slow down China's progress in AI.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
