SITALWeek #415

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: vintage TV ads from the 1980s and 1990s offer an abundance of perspective and lessons on our world today; another breakthrough in protein modeling should accelerate drug discovery; a little fiber doesn't hurt; misguided interest rate policy from the Fed isn't just a short-term issue: it's also delaying the green transition of the global economy. 

Stuff about Innovation and Technology
Lessons from Vintage Advertising
The term nostalgia has come to mean “a longing for the past”, but the word was originally coined to describe a type of mental illness related to homesickness (it’s derived from the Greek words meaning “homecoming” and “pain”). Recently, the YouTube algorithm got me hooked on watching commercials from the 1980s and 1990s (aka, my formative years). There are countless hours-long videos that string together all sorts of TV ads. While it’s nostalgic (using the modern definition of the word) to see the ads and be reminded of so many forgotten habits of my youth, a few educational lessons began to emerge after a few (I won’t specify how many!) hours of watching, including one that is more befitting of the original, more painful definition of nostalgia. 

First off, these ads were effective: I remember almost every single ad, promo, preview, etc. I could instantly sing along to every four-decade-old jingle without missing a word...advertising works! Second, it’s fascinating to see how many top brands from that era simply don’t exist anymore. Entire product categories like film cameras (including the first disposable film cameras, various film brands, and film development services) that dominated ad breaks are completely non-existent today. Likewise, fads like low-fat diets and low-cholesterol substitutes that permeated the vernacular in the late 1980s proved rather ephemeral. More durable exceptions include the fast-food and beverage industries. For example, Diet Coke arrived on the scene in 1982 (and needed commercials to explain to consumers why they would love it: “just for the taste of it...Diet Coke!”). The majority of the fashion brands and retail chains that dominated ads back then, however, have vanished. The trend of rising and falling products, brands, and industries is something I’ve also noticed when watching vintage episodes of The Price is Right (if you’re interested, the Pluto TV app from Paramount has two 24/7 TPIR channels for the Bob Barker and Drew Carey eras). 

A third lesson is about inflation and deflation. Remember the McDonald's McLean Deluxe? (Another non-tasty failure.) You could get that in one of the first “value meals” for a couple of bucks. In 1985, when combo meals arrived on the scene, the Big Mac Value Pack was $2.59 (depending on where you were in the country). Today, here in California, the smallest Big Mac value meal runs around $13, and that price is set to go up with the recently passed minimum-wage laws for fast-food workers. That’s a little over a 4% annual inflation rate for the “value” meal, which comes in spite of the creation of the industrial agricultural complex over that period, which one might have expected to make food production more efficient, and thus less prone to inflation. And, that level of inflation doesn’t even account for any alleged burger patty shrinkage. But, what about deflation? There’s the obvious point that we now have a single smartphone that can do the job of 1980s cameras, TVs, game consoles, calculators, etc.; and, in general, electronics have not gone up much in price and in most cases have declined materially. Other categories stand out as well. The VHS tape of the original 1986 Top Gun sold for $26.95. Now the movie is available in a streaming app for ~$10/month, along with thousands of other titles (including its sequel). Large portions of consumer discretionary spending have completely morphed, with some categories shrinking, some rising, and many disappearing, like newspaper and magazine subscriptions. Advertising also traces the waves of globalization that heralded more value for the price, like the rise of the Japanese car makers, followed by the South Korean brands a decade later. A trip through this advertising time capsule highlights the considerable tumult across the consumer spending economy and recalls the Crimson Permanent Assurance battling The Very Big Corporation of America in Monty Python’s The Meaning of Life.
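As a quick sanity check on that inflation figure, here's a back-of-the-envelope compound annual growth rate calculation using the approximate prices cited above (the exact span of years is my assumption, since prices didn't change on a precise schedule):

```python
# Implied annual inflation rate of the Big Mac value meal,
# using the approximate prices cited above: $2.59 in 1985, ~$13 in 2023.
start_price, end_price = 2.59, 13.00
years = 2023 - 1985  # 38 years (assumed span)

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (end_price / start_price) ** (1 / years) - 1
print(f"Implied annual inflation: {cagr:.1%}")  # a little over 4%
```

The same one-liner works for any of the price comparisons in this section; swap in the VHS-era prices to see how steep the deflation in home video has been.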

Surveying these tendencies for some products to inflate while others deflate, or even be entirely displaced, highlights the differences between the analog and digital parts of the economy. There is no Moore’s Law for beef patties. The contrast is almost too easy to draw: in many analog industries, a lack of innovation coupled with ongoing consolidation has led to increased prices (and often price collusion), leaving a large part of the economy frozen in time, while the digital world churns with creative disruption and an inferno of innovation. Many legacy, analog sectors of the economy remain ripe for disruption, but regulatory capture, consolidation, a lack of adaptability, and low non-zero-sumness are keeping them from progressing.

The last salient theme I noted from my journey through the commercials of my youth was the common culture reflected in the ads (as well as the shows, movies, and sporting events in which they were embedded). Every event, product launch, movie premiere, etc. was experienced by a meaningful percentage of the population at the time. Or, if we didn’t experience them firsthand, advertising certainly made us all aware of what everyone else was experiencing. I’ve lamented the loss of these common cultural elements in the past, as it seems very unlikely, in today's fragmented day and age, that many of us could sing a jingle from a new product launch (unless of course it’s a Taylor Swift album). It’s increasingly nostalgic (in both senses, I believe) to think about a society of folks tuning in for a common experience. Perhaps some of this monoculture loss is for the best; but, overall, our painfully fractured and polarized society keeps me longing for my temporally distant youth. 

Commercial, Clean Hydrogen Power
Duke Energy is breaking ground on a 100% clean hydrogen end-to-end power system in Florida. Electrolyzers powered by excess solar will split water into hydrogen and oxygen. The hydrogen will be stored and then burned as fuel in modified GE turbines that can run on up to 100% hydrogen (or a mixture of hydrogen and natural gas) when extra power is needed. 

AlphaFold Enabling Drug Design
The latest version of AlphaFold can solve protein structures that contain non-protein elements, like nucleic acids, ligands (e.g., signaling molecules and synthetic drugs), and PTMs (post-translational modifications of the protein structure). This advance is hugely significant for both drug design and modeling of biologically relevant proteins, as PTMs (of which there are hundreds of types) often alter both structure and function of proteins in critical ways that the bare-bones instructions encoded by DNA/RNA fail to capture.

Miscellaneous Stuff
Boosting Natural GLP-1s
If you’re looking to get the benefits of GLP-1s without jabbing yourself in the stomach, NPR suggests adding a bit of barley to your diet. When the high-fiber grain is consumed by bacteria in the large intestine, the breakdown of beta-glucan (a soluble, fermentable fiber also found in oats and rye) triggers the release of small amounts of GLP-1. The big difference is that elevated levels of GLP-1 from high-fiber foods are fairly short-lived, whereas the GLP-1-mimetic semaglutide drugs are more stable so they can disperse throughout the body and remain active for a prolonged period. This omnipresence is what causes the bigger decrease in appetite and other cravings as opposed to just helping you feel a bit fuller at your next mealtime.

Stuff About Demographics, the Economy, and Investing
High Rates Hurt Green
High interest rates and a general lack of demand are cramping the green economic transition, as many green energy projects no longer pencil out to positive returns thanks to heavier borrowing costs. In one example, Danish energy company Ørsted recently cited high borrowing costs as one significant reason for canceling US offshore wind projects. While government stimulus like the Inflation Reduction Act is aimed at subsidizing and incentivizing green investing (amongst other goals), the higher cost of money coupled with the higher cost of materials may strand stimulus dollars. Further slowing the green transition is weakening consumer interest in EVs, particularly in the US, which is causing tens of billions of dollars of delays or cancellations in battery and other EV projects. As I’ve argued in the past, today’s high rates are backwards looking and overly penalizing, creating existential risk for our highly levered economy (not to mention highly levered governments) and failing to take into account the oncoming wave of productivity from AI and other technological advances. Central bankers’ misguided policies are even now perversely causing the inflation they are trying to fight. Anyone arguing that “this time it's different” and rates need to stay higher for longer is wildly missing the big picture: the multi-decade trend of accelerating deflationary pressure from technology is only going to get stronger.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.

SITALWeek #414

In today’s post: AI technology platforms are seeing significant gains in revenue per employee; bipedal robots arrive in Amazon warehouses and a robotic quadruped learns to talk; progress in metalenses; squeezing light to see gravitational waves; and more below. 

Stuff about Innovation and Technology
Digital Warehousing; ChatBOT
Amazon is using Agility Robotics’ bipedal humanoid “Digit” in its warehouses to autonomously reposition empty product tote bins. Agility plans to make 10,000 of the robots, although it’s not clear if all of those are for Amazon. Recall that Digit’s backward-bending knees sent me on a journey in #390 to understand why many bipedal robots have that anatomical oddity (forward-bending human-like knees are more versatile, whereas the opposite appears to give more stability, albeit with less range). It’s notable that Amazon is working with an outside company for bipedal bots, as their internal progress in robotics continues to disappoint. I believe that the biggest market by far for technology over the next decade will be the embodiment of LLMs and other forms of AI into a large array of robots and automated machines. Boston Dynamics recently infused their robot dog Spot with ChatGPT, which you can see in this video.

Artificial Efficiency
Leading AI companies might be seeing an impact on their headcount from applying the technology internally. Let’s start with Google parent Alphabet: with the exception of a short period during 2009’s Great Recession, Google had not had its headcount decline in any quarter on a year-over-year basis until the most recent quarter*. Indeed, since 2009, Google’s headcount has consistently grown around 10-20% per year, with the exception of a pandemic hiring binge that exceeded 20% growth. In June, Google’s headcount declined by ~9,000 people sequentially, but still grew 4.5% y/y. In September, however, their headcount declined by ~2% to 182,000, similar to 2009. And, sequentially, the company only added around 500 net new employees in September over June. The 11% revenue growth reported by Google, combined with a 2% decline in employees, resulted in 13% growth in revenue per employee. Google’s $420k/employee quarterly revenue is the highest I could find in the company’s history, with the exception of the pandemic boom period from June through December 2021, when it peaked at $481k/employee. Speaking on their recent earnings call, Google CFO Ruth Porat noted: “we have engineering work streams underway to improve productivity across Alphabet. Given the magnitude of investment in our technical infrastructure, we have a superb team focused on efficiency of our operations there. We are also making progress in streamlining operations across Alphabet through the use of AI.” Is Google a one-off? I looked at how revenue per employee is progressing over at AI-leader Microsoft, which has been building and deploying OpenAI copilots. Historically, Microsoft has only given annual headcount numbers. Their revenue per employee per year has risen steadily from around $500k/employee in the 1990s to over $800k in the 2010s. In the fiscal year ending June 2023, Microsoft reported 7% revenue growth on approximately zero headcount growth, resulting in a record $958k in revenue per employee. 
And, referencing a 600 bps sequential increase in operating margins at AWS in Q3, Amazon’s CFO noted that it was “primarily driven by increased leverage on our headcount costs.” While I hesitate to declare that early adopters of AI like Google, Microsoft, and Amazon are all seeing strong AI-driven productivity gains given the gyrations of the pandemic, it’s a noteworthy trend that could have large ramifications for corporate margins and productivity across the economy. Such a boost would accelerate the deflationary forces of technology that in part helped keep interest rates low and falling for most of the last few decades. Companies will be reluctant to broadcast that they are replacing people with AI or bots, so one way to keep tabs is via revenue-per-employee or profit-per-employee trends…at least until AI bots gain personhood status! This rise in people displacement is likely to lead to a robot/AI tax on the extra savings accruing to corporate profits. 
(Note: all data for this paragraph come from SEC filings and conference call transcripts.
*In 2013, Google shrank headcount via layoffs in its Motorola phone-manufacturing business; given this was not the core business and was related to an acquisition, I excluded it from my analysis.)
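The revenue-per-employee arithmetic above is simple to reproduce; here's a minimal sketch using the approximate Google figures cited (11% revenue growth on a ~2% headcount decline):

```python
# Growth in revenue per employee implied by revenue growth and headcount change.
# Revenue per employee = revenue / headcount, so its growth factor is the
# ratio of the two growth factors.
def rev_per_employee_growth(revenue_growth: float, headcount_growth: float) -> float:
    return (1 + revenue_growth) / (1 + headcount_growth) - 1

# Approximate Google figures from the paragraph above: +11% revenue, -2% headcount.
print(f"{rev_per_employee_growth(0.11, -0.02):.1%}")  # ~13.3%
```

The same function applied to Microsoft's fiscal 2023 figures (+7% revenue, ~0% headcount) gives roughly the 7% productivity gain implied by their record revenue per employee.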

Nanoimprinting Sensors
A couple weeks ago, Canon grabbed a few headlines for their latest nanoimprint semiconductor manufacturing equipment, with the market speculating that the Japanese tech company might catch up to rivals. While the equipment is capable of making advanced chips down to 2nm, it has some drawbacks, in particular concerning multi-layered chips, which heavily limit its uses. However, there is a different use for nanoimprint that caught my eye, and that’s for making metalenses. Recall what we’ve previously learned (#397) about this new form of sensor:
New metalenses are poised to disrupt some of the image sensor market, and ultimately might find their way into ultra-thin smartphones. Developed at Harvard and commercially produced by Metalenz and chip companies like ST Micro, the new devices offer a host of improved sensor functionality for applications like distance sensing. Importantly, these relatively simple chips have nanostructures capable of detecting not only visible light but also polarization: “Using this technology, we can replace previously large and expensive laboratory equipment with tiny polarization-analysis devices incorporated into smartphones, cars, and even augmented-reality glasses. A smartphone-based polarimeter could let you determine whether a stone in a ring is diamond or glass, whether concrete is cured or needs more time, or whether an expensive hockey stick is worth buying or contains micro cracks. Miniaturized polarimeters could be used to determine whether a bridge’s support beam is at risk of failure, whether a patch on the road is black ice or just wet, or if a patch of green is really a bush or a painted surface being used to hide a tank. These devices could also help enable spoof-proof facial identification, since light reflects off a 2D photo of a person at different angles than a 3D face and from a silicone mask differently than it does from skin. Handheld polarizers could improve remote medical diagnostics—for example, polarization is used in oncology to examine tissue changes.” This type of sensor could be very useful in the coming robot revolution as AI embedded in automatons of all types becomes a reality.
During the recent Canon Expo 2023, the company showcased both its new metalenses and announced that their nanoimprint machines can be used to make the exciting new sensors.

Miscellaneous Stuff
Subsiding Rage
While there continue to be unthinkable tragedies around the US and the world, homicides in the US dropped 6% in 2022, and they could show an even bigger decline (potentially 7-10%, based on FBI crime stats) for 2023. A 10% drop would be the largest y/y decline on record. Violent crime overall also dropped back to 2019’s pre-pandemic level. It’s not all good news, as auto thefts have soared thanks to TikTok videos showing how easy it is to steal some car models. Manufacturers have responded by opening up software upgrade clinics across the US to fix the issue. Hard as it may be to believe, in some ways things are improving far more than your social media feed might suggest.

Light Squeeze
Thanks to our quantum universe, particles like photons can randomly appear and disappear from our observable physical realm, creating a background of quantum noise. These (de)materializations don’t impact us at the macro level; however, if you are trying to measure things like gravitational waves, this noise limits instrumentation precision. Researchers working on LIGO, which can detect gravitational waves from phenomena such as black hole collisions, have surpassed the quantum limit that would ordinarily be imposed by this noise. This feat was accomplished by the frequency-dependent “squeezing” of light in tubes around 300 meters long in such a way that measurements can be made accurately across the entire spectrum of interest. The actual methodology is like science fiction come to life: “This is accomplished with the help of specialized crystals that essentially turn one photon into a pair of two entangled (connected) photons with lower energy. The crystals don't directly squeeze light in LIGO's laser beams; rather, they squeeze stray light in the vacuum of the LIGO tubes, and this light interacts with the laser beams to indirectly squeeze the laser light.” One of the new use cases will be to detect neutron star collisions to help determine a more accurate makeup of the celestial participants in these events (which are believed to be the superdense, collapsed remnants of supergiant supernovae) to gain insights into the composition of their black hole cousins and the origins of heavy elements in the universe. 

What to Take, and What to Leave
Speaking of black holes and other hard-to-comprehend ideas, cosmologist Carlo Rovelli has a great essay on how we can learn new things, particularly when we lack the ability to see them clearly, adapted from his upcoming book White Holes.

✌️-Brad

SITALWeek #413

In today’s post: tweaking stoplight timing can dramatically drop emissions; Disney's robots; chilly pumpkin spice; and, a deep dive into how one often misunderstood concept from physics can offer clues to which new technologies will win out in the future. 

SITALWeek will be off next week, returning on October 29th. 

Stuff about Innovation and Technology
A Real WALL-E
Disney’s Imagineering research division has an adorable new prototype droid that moves in a more emotive way than the typical bipedal bot. According to Disney research scientist Morgan Pope: “Most roboticists are focused on getting their bipedal robots to reliably walk. At Disney, that might not be enough—our robots may have to strut, prance, sneak, trot, or meander to convey the emotion that we need them to.” Disney used reinforcement learning and feedback from one of their animators to come up with the pleasing movement style (video). The research division is largely focused on creating robots (along with AI and immersive technologies) for their theme parks and studio divisions. This particular droid would be right at home roaming one of Disney’s Star Wars-scapes.

Red Light, Green Light
Who hasn’t sat at a red light wondering why the timing is so bad, especially when you're late? By leveraging Google Maps data, a trial in a dozen cities around the world has cut traffic light stops by 30% and emissions by 10% for 30 million cars per month. These reductions come from adjusting light timing at just 70 intersections. (Google Maps has also been suggesting routes based on fuel efficiency for a couple of years.) You could imagine how Project Green Light could ultimately operate networked lights in real time to optimize traffic flows in response to slowdowns, accidents, etc. Similar to Google’s efforts to adjust airplane routes to reduce heat-trapping contrails, there are seemingly endless small modifications that will have a major, cumulative impact on carbon and contribute significantly towards the greenification of the economy (perhaps even rivaling the big-ticket overhauls). 

Artificial Focus Group
Companies are turning to groups of chatbots talking to each other for product feedback and suggestions. These surrogate focus groups can also be queried by humans. In one example, two AI characters, Jason Smith and Ashley Thompson, “talk to one another about ways that Major League Soccer (MLS) might reach new audiences. Smith suggests a mobile app with an augmented reality feature showing different views of games. Thompson adds that the app could include ‘gamification’ that lets players earn points as they watch.” The Wired article also mentions the Smallville simulation platform, and Stanford professor Michael Bernstein, who developed the project, noted: “We started building a reflection architecture where, at regular intervals, the agents would sort of draw up some of their more important memories, and ask themselves questions about them. You do this a bunch of times and you kind of build up this tree of higher-and-higher-level reflections.” While Bernstein cautioned people to question how accurately the simulations represent real human behavior, in Wired’s Smallville town of 25 chatbots programmed by ChatGPT, one of them did what most humans do eventually: it started a podcast.

Miscellaneous Stuff
Iced Pumpkin Spice
Younger folks like to drink their coffee cold, with 60-70% of coffee drinks ordered iced. This presents a challenge for flavorings owing to the reduced aromatics and solubility of cold (vs. hot) water. On the plus side, the limited-time offerings (LTOs) for autumnal favorites like pumpkin spice lattes (PSLs) can be successfully extended into the summer. Starbucks’ PSL LTO launch in August of this year was up 25% over 2017 levels, marking a new high. The Saturday following the PSL launch saw a 41% increase in Starbucks visitors vs. the previous 10 Saturdays. What would we do without the thankless work of the scientists innovating to extend the PSL season earlier into the year?

Stuff About Demographics, the Economy, and Investing
Probability of a Chilled Latte Universe
Entropy is one of those terms that people tend to use metaphorically (and often incorrectly). Better understanding the actual implications of entropy can help us analyze evolving, disruptive technologies. First, let's start with definitions. Entropy, as we typically conceive it, is a measure of disorder: around the start of the known Universe ~14B years ago, matter and energy were very concentrated and organized – i.e., entropy was low. As matter and energy became more disordered, entropy grew. States that are neat and tidy – e.g., a child’s room with the bed made and toys/books organized/alphabetized, a pile of zip-tied power cords, or a shot of cream floating on top of a cup of coffee – are all low entropy states. Conversely, messiness and disorder – items tossed willy-nilly, tangled cords, a well-mixed latte – are higher entropy states. 

However, this disorder that we associate with entropy is really a corollary, and not exactly what entropy is all about from a physics standpoint. Physicists describe entropy in terms of probability, where a low entropy state has a low probability of randomly occurring, and a high entropy state has a high probability of occurring (e.g., there are lots of different ways to arrange books that aren’t alphabetical, lots of possible configurations for tangled cords, and it would be statistically improbable for the molecules in your latte to spontaneously re-partition themselves into cream and black coffee; for a more detailed explanation, check out this video). So, the trend towards disorder is dictated by statistics, and a probabilistic upshot is that energy tends to spread out – highly concentrated energy can exist in far fewer configurations than energy that is dispersed (e.g., gas molecules in a balloon vs. scattered about a room). Eventually, all ordered, useful energy in the Universe will be converted to useless radiation and spread out, forming a cold, vast, homogeneous nothingness – a chilled latte Universe, if you will. Luckily, we have billions of years for that to play out.

A more fascinating angle on entropy, and the one that leads us to applying the idea to analyzing disruptive innovation, is that it could explain the seemingly improbable emergence of life. This idea, termed dissipation-driven adaptation, was posited a decade ago by physicist Jeremy England (when he was an associate professor at MIT; he’s currently in the AI division of GlaxoSmithKline). Roughly, England’s theory states that systems that are more efficient at increasing entropy tend to proliferate more than systems that are less efficient in this effort. Specifically, if you apply an organized energy source (like photons from the sun) to a group of molecules, primitive life forms can arise because they are better at dispersing this energy, i.e., increasing entropy. A key hallmark of this concept is that these systems are better at self-replicating. And, the more complex and organized a creation, the more energy is dissipated in the process (bigger, more involved projects waste more energy in construction). Since making (and then replicating) complex structures is a great way to dissipate energy, the evolution of life may have been a natural consequence of the second law of thermodynamics. As I wrote in #199:
Life, as it turns out, is uniquely suited to taking ordered, high-information matter/energy and turning it into disordered, low-information states; indeed, this seems to be the vector of the Universe and life’s role in it. For example, take sunlight, plants, and animals: sunlight is highly ordered electromagnetic rays that help plants grow through photosynthesis; then animals eat those plants (and sometimes animals eat the animals that eat those plants); and then animals (e.g., humans), turn that energy into all sorts of interesting things, ultimately scattering that neat, organized solar energy into myriad disorder around the planet and surrounding space. 
Here is a short video in a five-part series on entropy from Sean Carroll and Minute Physics that explains how for every one visible photon that arrives from the sun, Earth radiates around 20 infrared photons. The energy remains the same, but the entropy has increased twenty-fold. 
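The twenty-fold figure follows directly from photon energies: since a photon's energy is E = hc/λ, re-radiating the same total energy at a ~20× longer wavelength takes ~20× as many photons. A rough check, using representative wavelengths of ~500 nm for incoming sunlight and ~10 µm for Earth's outgoing thermal infrared (my round numbers for illustration, not figures from the video):

```python
# Photon energy E = hc/lambda, so at equal total energy the photon-count
# ratio is simply the inverse ratio of the wavelengths.
visible_nm = 500        # rough peak wavelength of sunlight
infrared_nm = 10_000    # rough peak of Earth's thermal emission (~10 um)

ir_photons_per_visible = infrared_nm / visible_nm
print(ir_photons_per_visible)  # 20.0
```

More photons means many more possible arrangements of the same energy, which is exactly the increase in entropy the video describes.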
In #199 (as well as in one of our popular papers Redefining Margin of Safety), we further note how the trend toward disorder relates to the fallibility of predictions and how companies should operate. I also discussed England’s theory briefly in #180, and the following articles are a good overview to help you better understand England’s ideas: Quanta Magazine 2014 and Quartz 2019.

I was surprised last weekend to read that England’s theory of dissipation-driven adaptation is central to a movement in Silicon Valley that The Information implies has potentially morphed into some sort of pro-AI cult-like belief system. I have no views on this group or on the validity of the claims in the article, but it did spur me to travel back in time through my emails and pull up one I sent to Brinton in 2014 wherein I discussed England’s theory as it relates to artificial intelligence. Well before I had any idea of how quickly and significantly large language models (LLMs) and generative AI would progress, I had some instinct that this concept of entropy would factor in. Here is what I wrote back in 2014 contemplating England’s theory:
As soon as inanimate atoms formed the first RNA precursors, the only logical outcome from that point forward in time was incredibly complex humans that can now create our own machines that mimic consciousness.
This applies to all things that locally fight entropy while globally contribute to it, like stars. It means there may be no actual distinction between complex adaptive systems of living and nonliving entities. That is, there may be no distinction between living and nonliving things from the perspective of the universe...
In some cases fitness isn't going to produce the winning feature, but instead a feature that can more efficiently use energy wins, even if it meant killing off the species for some other reason. This explains a lot of "mistakes" in evolution.
This has all the hallmarks of a major breakthrough: England combines his background in physics and biochemistry to connect two previously disconnected but accepted truths; he takes what was considered to be a general case (evolution) and determines it's actually a special case of something broader...
This raises interesting questions such as: this actually argues that hyper growth can be beneficial (it's most efficient at dissipating) and not always harmful. In other areas it's just a great lens for more deeply understanding adaptability not simply as growth, but as efficient transformation of energy.


What I wrote back in 2014 about England’s theory seems relevant to AI systems today: the AI models (and the applications and ecosystems that form on top of them) most likely to win out over alternatives could be the ones that are most efficient at mimicking life, i.e., taking ordered information and creating an output that ultimately increases entropy over time. If AI is more efficient than humans at this task, then it could evolve and grow more rapidly in output than life on Earth, assuming it can self-replicate, which it cannot yet accomplish today without human assistance. If you’re worried about the machines-taking-over scenario, it’s important to remember where they get their energy: whereas the sun provides humans with ordered energy, humans are the source of energy for AI. Not only does AI consume our creative output and data, it also runs on ordered “energy” we create via semiconductors, electricity, etc. While we cannot turn off the sun, we can turn off our energy sources to AI… at least for now. Perhaps I have not soothed your fears of AI taking over after all!

Getting back to how better understanding entropy relates to analyzing the current array of technological disruptions on the horizon, it turns out that complex adaptive systems with higher degrees of adaptability and collaboration (non-zero sumness) tend to win out over others in evolutionary fitness functions (for more on that see Complexity Investing). Here too is where probability creeps into the equation – the more you collaborate, the more you increase your odds of success (see Partner to Win). As you analyze the onslaught of new technologies, particularly the new AI platforms and how other companies are leveraging them for their own products/processes, keep an eye on the following markers: 1) is a technology likely to result in a higher degree of self-replication/growth (e.g., network effects, increasing returns, rapid feedback-driven product improvements, and low levels of regulation); 2) does the technology maximize the landscape for win-win, or non-zero-sum outcomes; 3) is the technology adaptable, or does it make the companies and people that use it more adaptable; and, 4) is there a reliable source of energy, i.e., inputs such as data, knowledge, etc. that will feed self-replication/growth. A combination of these factors is likely to produce a small number of new technology platforms that act to increase entropy in a more efficient way than the competition. Adaptability and non-zero-sum collaboration may be the key to most effectively converting ordered energy into a larger self-replicating output. One way we can think of humans’ presence and dominance on Earth is as a result of our prowess at transforming solar photons into complex systems like the economy that increase entropy more rapidly and efficiently than other species or systems. This is a good analogy to think about for AI and other cutting-edge technologies that are on the horizon, like fusion energy, biotech breakthroughs, etc. 
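To make those four markers concrete, here is a hypothetical scorecard sketch (the class, the 0-to-1 scales, and the equal-weight aggregation are my own illustration, not a method from Complexity Investing):

```python
from dataclasses import dataclass

@dataclass
class TechMarkers:
    """Hypothetical 0-to-1 scores for the four markers discussed above."""
    self_replication: float  # network effects, increasing returns, low regulation
    non_zero_sum: float      # does it maximize win-win outcomes?
    adaptability: float      # does it make its users more adaptable?
    energy_inputs: float     # reliable inputs (data, knowledge) feeding growth

    def score(self) -> float:
        # Naive equal-weight average; real analysis would be far more nuanced.
        parts = (self.self_replication, self.non_zero_sum,
                 self.adaptability, self.energy_inputs)
        return sum(parts) / len(parts)

# e.g., a platform strong on network effects but weak on win-win outcomes:
example = TechMarkers(0.9, 0.3, 0.7, 0.8)
print(f"{example.score():.3f}")
```

The point is not the number itself but the habit of asking all four questions together rather than fixating on growth alone.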
I would caution that we perhaps should not take this idea too literally, but that there is some chance it does indeed hold the key to discerning the most probable future states of life in our tiny corner of the vast Universe.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.

SITALWeek #412


In today’s post: Twinkies' past and Pringles' future; Walmart sees grocery impact from GLP-1s; nuclear hydrogen generation and nuclear-powered data centers; data software enabling price collusion; smart audio glasses; extra spirals and counterfeit people. 

Stuff about Innovation and Technology
Diet Drugs Dampening Snack Spend
A little over twenty years ago, I covered Interstate Bakeries Corporation (IBC), which owned and produced various brands like Hostess (purveyor of such goodies as Twinkies and CupCakes) and Wonder Bread. In 2004, IBC filed for bankruptcy. How could the maker of such deliciously iconic American fare go under? While all bankruptcies are complex, chief among the cited reasons for IBC’s collapse was the low-carb diet craze, at the time symbolized by the Atkins Diet, which emphasized protein and fat over sugar and carbs (thus rendering Snoballs, HoHos, and Zingers pastries non grata). My time-addled recollection is that we exited the position in mid-2002 while the going was still good, and the company began a slow decline in late 2002 that resulted in bankruptcy. There was a risk factor in the IBC annual report back then that read: “The Company’s success depends in part on its ability to anticipate the tastes and dietary habits of consumers and to offer products that appeal to their preferences. Consumer preferences change, and the Company’s failure to anticipate, identify or react to these changes could result in reduced demand for its products, which could in turn cause its financial and operating results to suffer.” How true. IBC subsequently reformed, went bankrupt again in 2012, and, after reconstituting yet again and going public in 2015 (under the ticker TWNK), has posted annualized returns of 16.9% compared to only 11.7% for the S&P 500. All of that outperformance has taken place in the last few months as a result of a proposed acquisition by the J.M. Smucker Company. In an echo of the Atkins-era snack food victims, the CEO of Kellogg’s spinout Kellanova (maker of Pringles, Cheez-its, Rice Krispies Treats, etc.) told Bloomberg that he is keeping a close eye on developments with the GLP-1 weight loss drugs and is prepared to “mitigate”, if necessary, the company’s snack food strategy. 
This sounds like the brewing war between the food industry and the health industry that I identified back in #393: “I expect the snacking and fast food industries will ramp up marketing, caloric density, and the preposterousness of tempting fare, like fried chicken sandwiches on donut buns or pretzel chimney cakes, in their ongoing bid to trick our brain into prompting us to inhale excess calories. It will indeed be a war waged between sugar, salt, fat, and GLP-1s.”

Meanwhile, the CEO of Walmart’s US operations recently told Bloomberg that customers taking GLP-1 weight-loss drugs appear to be spending slightly less on groceries. This trend, of course, is not unexpected given that people on a GLP-1 regimen consume fewer calories. However, I am surprised to see a widespread impact reported by the nation’s top retailer so early in adoption, given that GLP-1s have yet to obtain general insurance coverage. Back in January, when I wrote The Impact of Eating Less on Food Supply Chains and Healthcare, I imagined a world where more GLP-1 usage unwinds several elements of the economy. We’re used to economic growth and positive compounding, and rarely do we come across the potential for a large portion of GDP to compound in the negative direction. Healthcare is almost 20% of US GDP, so if, say, one-third of that were to disappear over the next decade, it would create a ~0.6%/yr headwind, ceteris paribus. And, as we’ve learned over the last couple of years, GLP-1s don’t just reduce food cravings, they potentially impact a broad swath of addictive activities, from gambling to drinking to doomscrolling on your iPhone. There are significant downstream impacts to consider from broad GLP-1 adoption as well. For example, think of all the food-related ads you see throughout the day. If we need less corn syrup, we might need fewer tractors. It would be intriguing if medical advances could take us back to a period when more people were healthy (like what existed before the modern advertising industry took hold). However, the biggest antagonist to sustainable GLP-1 usage is the human aversion to apathy. This is a topic I discussed in #399, and it’s worth factoring in when considering the potential range of outcomes. Manipulation of food intake and desires of complex adaptive organisms living within a complex adaptive system is bound to have unexpected outcomes – it’s certainly intriguing to let your imagination run wild. 
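The ~0.6%/yr figure checks out with simple arithmetic (a linear spread of the loss over the decade, ignoring compounding; the inputs are the rough numbers from the text):

```python
healthcare_share = 0.20  # healthcare as a fraction of US GDP (approx.)
fraction_lost = 1 / 3    # hypothetical share of healthcare spend that disappears
years = 10

total_drag = healthcare_share * fraction_lost  # ~6.7% of GDP over the decade
annual_headwind = total_drag / years
print(f"~{annual_headwind:.2%}/yr")  # ~0.67%/yr, in line with the ~0.6%/yr cited
```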
As for Walmart, they are more than making up for the slight declines in food consumption by selling the expensive GLP-1 weight-loss drugs through their pharmacies.

Pink Hydrogen and Clouds
A few stories on nuclear energy caught my eye last week. I was familiar with green hydrogen, which is generated by water-splitting electrolyzers powered by renewable energy, and gray hydrogen, which relies on fossil fuels like natural gas, but I was less familiar with “pink” hydrogen. The latter is produced using nuclear energy to power electrolyzers. The basic idea is that nuclear plants run 24/7 and can store energy (e.g., by pumping water up to a reservoir) whenever demand lessens. Using excess nuclear energy to make hydrogen (instead of mechanically storing it) might make sense according to Jigar Shah, the director of the Loan Programs Office of the U.S. Department of Energy. (It turns out there is also “turquoise” hydrogen, which derives from splitting methane into hydrogen and solid carbon.) Another area where nuclear energy is making headlines is powering cloud data centers. As I discussed two weeks ago, in AI’s Energy Roller Coaster, AI is causing energy usage to jump at major cloud providers. Microsoft is reportedly looking at small modular reactors (SMRs), a technology that both Bill Gates and Sam Altman (head of Microsoft’s partner OpenAI) have been investing in for years, to power data centers. And, The Information reports that cloud computing companies have considered buying land next to nuclear power plants for building data centers. Meanwhile, Standard Power is developing two SMR-powered data center sites in Ohio and Pennsylvania.

Plotting Poultry Pricing
The Justice Department is suing Agri Stats, a provider of pricing data to meat producers. The accusation, which may not bear out, is that nonpublic data about competitors’ supply (like the number of chicks hatched) were used by companies to raise prices without risking market share losses. I suspect that silent price collusion from data sharing, masked by “neutral” third-party software providers, is a far wider issue in the economy, and made worse by the decades of consolidation we’ve seen across industries. It’s great for corporate margins, but it’s a win-lose for the economy long term. As we’ve seen in other instances, this type of data usage could spiral out of control and will become more impactful when data are directly feeding AI models that are making decisions with less human oversight. 

AI Shades
Last week, I wrote about the power of giving ChatGPT a sense of sight and hearing and its potential to disrupt the cell phone market:
The phone market is wide open for disruption as we move from multitouch to a chat-based AI interface. Google is well positioned with its LLM efforts, but Apple could be vulnerable. The phone itself needs to become more aware 24/7 with vision and audio, and it will need to interface with AR glasses and provide more onboard processing. I highlighted this potential disruption in Discovery Engines back in April, and, last week, news broke that OpenAI’s Sam Altman and former iPhone designer Jony Ive are potentially working on an “iPhone of artificial intelligence”. Status quo will be hard to disrupt; but, for the first time in well over a decade, there is a major UI transition opening a path for disruption. 
While we are waiting for a new LLM-based smartphone, there appears to be a new platform that will bridge the gap between today’s phones and tomorrow’s AR glasses. Smart audio glasses are soon to be powered by AI, allowing an LLM to see what you see and have a conversation with you. Meta unveiled a new version of their smart Ray-Ban sunglasses at their user conference a couple of weeks ago, and other startups are working on the devices, which will likely sell in the range of $300 and be tethered to your iPhone or Android devices.

Miscellaneous Stuff
Stability in the Young Universe
The James Webb Space Telescope has found many more Milky-Way-like spiral galaxies far earlier in the history of the known universe than we expected. The prevailing theory was that spiral galaxies (which resemble massive, warped disks) would likely have been blown apart or distorted by one or more of their many neighboring galaxies in the dense, early universe. That such large, structured galaxies were (relatively) numerous early on (only 1-5 billion years after the Big Bang vs. the predicted 6 billion) calls into question some key assumptions about galactic evolution. It’s thought that our own Milky Way formed out of a collision of galaxies around ten billion years ago, which then took another few billion years to reshape into its current form. Within the Milky Way, our solar system formed around 4.5 billion years ago (when the universe was ~9.3 billion years old). The presence of ten times more spirals than expected in the nascent universe raises the question of whether conditions for life (i.e., a lack of cataclysmic violence and stable, element-rich planets) were also present much earlier than anticipated. Could it be that as early as 10 billion years ago the conditions were right for life to begin that four-billion-year trek from primordial ooze to iPhone user?
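A quick check of the timeline arithmetic above, using the standard ~13.8-billion-year age of the universe:

```python
universe_age_gyr = 13.8      # current age of the universe (billions of years)
solar_system_age_gyr = 4.5   # age of our solar system (billions of years)

# Age of the universe when the solar system formed:
age_at_formation = universe_age_gyr - solar_system_age_gyr
print(round(age_at_formation, 1))  # 9.3, matching the ~9.3 billion in the text
```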

Digital Synanthropes
Philosopher Daniel Dennett discussed AI in a recent interview about his new memoir. Dennett is primarily focused on the loss of trust resulting from AI chatbots and the way in which evolution will factor into LLM progression:
AI systems, like all software, are replicable with high fidelity and unbelievably fast mutations. If you have high fidelity replication and mutations, then you have evolution, and evolution can get out of hand, as it has in the past many times.
Darwin wonderfully pointed out that the key to domestication is control of reproduction. There are species that hang around human houses and farms that are synanthropic. They evolved to live well with human beings, but we don’t control their replication. Bedbugs, rats, mice, pigeons—those are synanthropic, but not domesticated.
Feral species are ones that were domesticated and then go feral. They don’t have our interests at heart at all, and they can be extremely destructive—think of feral pigs in various parts of the world.
Feral synanthropic software has arrived—today, not next week, not in 10 years. It’s here now. And if we don’t act swiftly and take some fairly dramatic steps to curtail it, we’re toast.
We will have created the viruses—the mind viruses, the large-scale memes—that will destroy civilization by destroying trust and by destroying testimony and evidence. We won’t know what to trust.

Writing in more detail about the survival of the fittest “counterfeit people” earlier this year in the Atlantic, Dennett noted:
Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive and ignorant pawns. This is a terrifying prospect...
Counterfeit people are already beginning to manipulate us into midwiving their progeny. They will learn from one another, and those that are the smartest, the fittest, will not just survive; they will multiply. The population explosion of brooms in The Sorcerer’s Apprentice has begun, and we had better hope there is a non-magical way of shutting it down.

✌️-Brad


SITALWeek #411


In today’s post: batteries had a big smoothing impact during 2023's record heatwave; AI is slowly gaining its senses, traveling a path toward humanity; the shift from multitouch to chat-based interfaces may open up the phone market to disruption; cat states; and cloud KYC. 

Stuff about Innovation and Technology
Electric Pinch Hitters
Grid-scale batteries are starting to have a meaningful impact on the power grid, providing a boost at sundown when solar ebbs yet energy demand for air conditioning across the US is still high. So far, as the WSJ reports, the grid has been able to avoid strain-related blackouts despite the warmest summer on record. For example, batteries accounted for 5.2GW, or ~18%, of power supply at sunset in California last Sunday (data from CAISO; you can select the date from the drop-down menu).
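For scale, the cited figures imply a substantial total grid load at that moment (the 5.2GW and ~18% are CAISO's numbers as reported; the implied total is my own arithmetic):

```python
battery_output_gw = 5.2   # battery discharge at sunset (cited figure)
battery_share = 0.18      # ~18% of supply at that moment (cited figure)

# Implied total grid supply when batteries were covering 18%:
total_supply_gw = battery_output_gw / battery_share
print(round(total_supply_gw, 1))  # ~28.9 GW
```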

Vision Comes to LLMs
AI will slowly gain the sensory input that humans and other creatures enjoy (see AI Awareness), and will eventually gain many superhuman senses (see How to Interact with an LLM). An important milestone on this path was achieved last week as the most popular LLM, ChatGPT, announced several features for its paid users, including the ability to see (via photo access), listen, speak, and render images. Some of these features were already available in BingChat, but the combination in GPT4 is notably more powerful. OpenAI also reintroduced web search to GPT4; however, in its brief absence, it appears many sites have altered their robots.txt instructions to forbid scraping. Not everything is integrated yet; for example, GPT4 on a mobile device offers voice, but not web search, and GPT4 in a browser has web, but no voice. There are five voice choices for GPT4, and I don’t want to say that “Sky” sounds like Samantha from Her, but the resemblance between the two is not zero. One thing that’s obvious from using the new features is that we very much need to give chatbots real-time vision, which could be accomplished with glasses that have cameras, or, per Spike Jonze’s solution, shirts with chest pockets (or, maybe phone lanyards will be a thing…). This video from OpenAI shows how the new integrations can help you answer real-world questions that require vision. It’s beginning to get quite easy to imagine an ever-present human-like assistant, particularly once OpenAI turns on the long-term memory feature. 

LLM Friends
AR gaming and mapping company Niantic posted a whimsical take on AI assistants in the video embedded in this blog post. This example illustrates a concept I’ve been thinking about with respect to LLMs: why have one assistant when you can have dozens of LLM “friends” that fulfill different roles for various circumstances – e.g., a local expert, a therapist, a doctor, etc. Imagine if all of the voices in your head were actually useful! Meta officially announced their rumored chatbots that I highlighted in Will She Be a Llama?. While the bots are text based for now (and include celebrity friends like Snoop Dogg), Meta says to expect voice soon. LLM friends would effectively personify the Internet, i.e., these personality-based chatbots would become new interfaces for search, social, apps, and other common Internet use cases.

AiPhone
The phone market is wide open for disruption as we move from multitouch to a chat-based AI interface. Google is well positioned with its LLM efforts, but Apple could be vulnerable. The phone itself needs to become more aware 24/7 with vision and audio, and it will need to interface with AR glasses and provide more onboard processing. I highlighted this potential disruption in Discovery Engines back in April, and, last week, news broke that OpenAI’s Sam Altman and former iPhone designer Jony Ive are potentially working on an “iPhone of artificial intelligence”. Status quo will be hard to disrupt; but, for the first time in well over a decade, there is a major UI transition opening a path for disruption. 

Tax Evaders Beware
The IRS is opening audits into 75 of the largest private partnerships (over $10B in assets) and will send 500 compliance alerts to other partnerships, thanks to its new AI tools. Previously, partnerships were too large and complex – and the IRS was too short-staffed – to find anomalies. IRS Commissioner Daniel Werfel says: “modernizing the I.R.S. is good for everybody.” As trillions of new LLM-based “workers” enter the workforce, just think of all the great things they could accomplish – like lawsuits and audits! But, seriously, this is just the start. There will be a major expansion of the overall economy as this marketplace of bots takes off, effectively creating an entirely new digital economy many times larger than our analog one. See Simulacrum for a deeper discussion on this topic.

Miscellaneous Stuff
Cat States
Errors are the enemy of quantum computing. Essentially, the current problem with quantum computing is that the error-correction overhead (the extra qubits and classical processing needed to offset the errors of quantum processors) multiplies so quickly that quantum computing is a non-starter. A French startup called Alice & Bob (I first met the textbook characters Alice and Bob while getting my degree in astrophysics, but I have not run into them in decades) has come up with a way to encode a qubit so that its two states are extremely separated from each other. This setup contrasts with how qubits are typically encoded with more proximate variations (e.g., energy levels of a molecule) that are prone to erroneous bit flips. Alice & Bob use so-called “cat states”, in reference to Schrödinger’s cat, which is either dead or alive – two extreme situations. Cat states appear to be quite effective at reducing bit flips, thus improving data integrity. Unfortunately, these tricks are a drop in the ocean when quantum computers need to incorporate several orders of magnitude more error-free qubits to be viable.

Stuff About Demographics, the Economy, and Investing
Cloud Control
A year ago, I suggested in Know Your Cloud Customer that the US was making a mistake by focusing on advanced semiconductor sanctions while allowing access to US-based cloud computing for anyone around the world. The White House is finally considering KYC for cloud computing customers. In the meantime, the US is actively selling the most advanced AI to enemies of the West, no questions asked.

✌️-Brad


SITALWeek #410


In today’s post: a new and insightful way to think about dreaming; greenlight for drones; power consumption by AI dwarfs humans doing the same tasks; 24/7 AI avatars selling wares; generative AI microscopes; AI learns to take a breath; the hubris of authors; earthquake lights; lawsuit explosion; the benefit of low cost Chinese EVs; and, autonomous bots are the new mutually assured destruction.

Stuff about Innovation and Technology
Supply Chain by AI
Amazon is opening up what appears to be a version of their internal supply chain optimization technologies (SCOT) to external customers. Recall that SCOT was the system that interpreted the temporary pandemic ecommerce boom as a more permanent trend, resulting in Amazon adding as much supply during the pandemic as the company had previously added in its history. Human managers amplified SCOT’s predictions, necessitating an extended, post-pandemic period of capacity digestion. I’ve previously mentioned SCOT (see Magic AI-Ball) in the context of the risk of AI amplifying and distorting economic cycles (among other things, like causing rents to spike and impacting our current misguided Federal Reserve interest rate policy). With these AI tools now moving into the wild, we might expect far greater distortions if businesses were to overly rely on such systems. And, perversely (and as noted in the popular section from SITALWeek #409 titled Simulacrum), these AI tools are likely to manifest their own future predictions as they are increasingly involved in the decision-making process, displacing human workers along the way: 
If LLMs do end up massively multiplying the effective number of interacting agents in the economy, our world could become largely deterministic, i.e., these alternate realities could drive business and policy decisions that drive the actual economy. Whereas I always caution against trying to predict the future, if our world comes to rely on this type of ultra-high throughput alternate reality modeling, it’s perhaps more accurate to postulate that our predictions might start becoming the future. This form of time dilation could catapult society forward. However, as new solutions are conceived, they will collide with the negative feedback loops of slowly moving progress in the real world (see When Positive and Negative Feedback Loops Collide). In addition to unexpected and emergent phenomena, this tension is likely to result in ongoing anxiety for humans.

Disconnecting Drones’ Human Overlords
The FAA has cleared drone delivery company Zipline (and others) to fly without human line-of-sight monitoring, a critical regulatory hurdle for drone delivery expansion. I covered Zipline and the challenges and opportunities of innovation in the analog parts of the economy in more detail in #389.

Pathology in Silico
A new microscope developed by Google and the Department of Defense boosts pathologists’ ability to identify harmful and high-risk portions of tumors. This Economist article, as well as Microsoft’s post on their DeepSpeed4Science initiative, offer nice overviews of the current scientific applications of AI.

Sleepless in China 
Influencers pushing ecommerce wares and services are now live-streaming 24/7 in China. The influencers aren’t highly caffeinated; instead, they are using AI dupes. With just one minute of video, China-based Silicon Intelligence can create a convincing digital avatar of you, leading to growth in round-the-clock streaming. We’ve also previously reported on AI doppelgängers created for influencers to interact with fans one-on-one, AI-to-human. 

AI’s Energy Roller Coaster
In July of 2021 (#305), I wrote the following about AI models (GPT-3 at the time) and their relatively low energy usage:
Despite the big growth in cloud computing and video streaming in 2020, Google's global data centers only used slightly more energy than 2019’s 12 terawatt hours. Further, machine learning, thought to be a major energy hog, is reportedly only a tiny fraction of the total energy consumption – even when they account for things like GPT-3, the language model, which takes one month to run on 5,000 computers. 
Prior to that, back in March of 2020 (#234), I noted that overall data center energy consumption between 2010 and 2018 grew only 6%, despite a 6x increase in workloads, a 25x increase in storage, and a 10x increase in Internet traffic. And, back in 2016, Google’s DeepMind noted how AI itself was helping reduce energy consumption, resulting in a 40% decrease in cooling energy use. That’s important because data centers are massive consumers of not just energy, but also water to cool steaming hot semiconductors. Despite those great statistics, it appears this rosy picture of insatiably growing compute demand, far in excess of growing energy demand, might be stalling (at least temporarily), owing to the new LLMs and generative AI models. Fortune notes that Microsoft saw a 34% increase in water consumption for cooling in 2022 over 2021, and that’s before GPT-3.5/4.0 usage took off like a rocket ship. Google saw a 20% increase in water usage in 2022 as well. A researcher at UC Riverside estimates a conversation with ChatGPT might consume around 500ml of water. There is a large effort underway to make AI models far more efficient, possibly 25x or more, and that may just be the start. DeepMind has even found that AI can more efficiently write its own prompts, cutting out the human in the middle. I believe that as AI rewrites, optimizes, and deploys its own software, we could see a huge downward phase shift in how much power is required to run an AI query in the future. Just recently, Nvidia announced a doubling of AI inference power for their H100 chips thanks to their new open-source TensorRT-LLM software. This puts the latest H100 at 8x the performance of its predecessor, the A100 from late 2020. 

However, these AI usage vs. efficiency swings don’t quite paint the whole picture. Specifically, we should probably broaden our thinking to incorporate energy savings from AI replacing more cumbersome human efforts. After all, AI is far more efficient than a human using a computer/phone. For example, researchers in a recently published paper (PDF) found that AI emits 130 to 1500 times less CO2 for writing and 310 to 2900 times less for generating an image vs. humans. Last week, for instance, I used ChatGPT4's Code Interpreter plugin to write and execute multiple programs to analyze tens of thousands of datapoints from my connected health devices. This would likely have taken me all weekend running a laptop and tapping into cloud computing, but ChatGPT did the entire analysis, complete with visuals, in a matter of minutes. All of the potential efficiency gains from new chips, new software, and AI replacing rote human tasks will likely be more than adequate to offset the insatiable demand for all the new and unimaginable AI applications. However, the path to our Star-Trek-promised future will be far from smooth. There will be oscillating periods when AI tools offer a glut of capacity and, conversely, are unable to meet market demand. These boom and bust cycles are very familiar to long-time technology investors, but they may be more extreme given the step-function increases in AI efficiency versus the creation of new uses. 

Counting to Ten
Speaking at a conference last week, I noted three behaviors that, as of that day, seemed to remain relatively firmly in the human domain, but that AI could eventually crack: 1) LLMs can be instructed to be curious about a topic, but they are not innately curious on their own; 2) LLMs can be instructed to act humble, but they do not seem to inherently possess humility; and 3) LLMs seem unable to pause and reflect before answering; rather, they seem to just let their mouths run immediately. Apparently, this last point had a significantly more ephemeral lifespan than anticipated. DeepMind has discovered that asking an LLM, like Google’s PaLM 2, to take a deep breath before answering improves its scores on a grade school math test from 34% to 72%. The prompt was optimized by allowing a second LLM to iterate and discover the prompt that produced the best results. 
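
That optimization loop can be sketched as a simple propose-and-score cycle. This is a hedged sketch, not DeepMind's actual code: `toy_score` and `toy_propose` are made-up stand-ins for the benchmark run and the optimizer LLM, which in the real setup would both involve live model calls.

```python
# Sketch of prompt optimization in the spirit of DeepMind's method: an
# optimizer LLM proposes new candidate prompts, each candidate is scored
# on a benchmark, and the best-scoring prompt survives. `score_fn` and
# `propose_fn` are hypothetical stand-ins for those LLM calls.

def optimize_prompt(seed_prompts, score_fn, propose_fn, rounds=3):
    pool = {p: score_fn(p) for p in seed_prompts}
    for _ in range(rounds):
        best = max(pool, key=pool.get)            # best prompt so far
        candidate = propose_fn(best, pool[best])  # "optimizer" suggests a variant
        pool[candidate] = score_fn(candidate)     # evaluate the new prompt
    best = max(pool, key=pool.get)
    return best, pool[best]

# Toy stand-ins to make the sketch runnable: pretend that prompts
# mentioning breathing or step-by-step thinking score higher.
def toy_score(prompt):
    return prompt.lower().count("breath") + prompt.lower().count("step")

def toy_propose(best_prompt, best_score):
    return best_prompt + " Take a deep breath and think step by step."

prompt, score = optimize_prompt(["Solve the problem."], toy_score, toy_propose)
```

The interesting bit is that the search over prompts is itself delegated to a model, so the human only defines the scoring metric.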

Artistic Hubris
Speaking of a lack of humility: many novelists seem to think their books are so important in the grand scheme of all human knowledge that they should be paid if an LLM reads them. Maybe they should also start pulling their books from public library shelves so no human can learn from them? If the accusation is plagiarism, and text is being replicated verbatim (this is not my experience with how LLMs work, but it is perhaps possible), that can be easily solved with a citation. As more authors sue AI models, I’ll return to my prior arguments from Litigatory Distraction: these same authors would not have written their books if they hadn’t read books penned by other authors or been taught language. To produce SITALWeek, my brain is relying on and stitching together components from thousands of books, tens of thousands of articles, and many more varied sources – this is just how research is done. This basic argument doesn’t change when you substitute a human analog (i.e., an LLM) for a human. These LLMs are consuming the knowledge of their peers and predecessors and then learning to create original works in just the same way that humans have for ages past. Any one work of art or fiction is a drop in the ocean of what goes into the complex knowledge and reasoning of LLMs, and that is hardly worth suing over.

Miscellaneous Stuff
Dreaming Inversion
During the REM phase of sleep, typically characterized by vivid dreams, animals experience muscle twitching, from our eyes to the tips of our limbs. These micromovements occur while the body is in a state of general paralysis. For a long time, neuroscientists believed the body was held immobile to keep us from acting out our dreams, and these twitches were somehow slipping through the cracks. That theory might be entirely wrong. New research discussed in the New Yorker suggests that what’s happening during REM sleep is that individual muscles are being intentionally twitched so that the brain can remap neural connections. It’s like re-learning every night how to walk or how to grasp something. Your body goes through constant changes (e.g., an injury, growth spurt, or fingernail trim), and your brain wants to be sure it still knows exactly how best to manipulate objects and move through the world. This perhaps extends to feedback from the eyes as well. This research suggests that we should entirely invert our working model of dreams: rather than some weird crossing over into our unconscious, Freudian-conceived minds, maybe the images and plots of dreams are just our brains trying to interpret all of the twitching our body is doing, i.e., painting a picture of what the world might be like if we were moving through it in such a way. I am not sure if this makes dreams any more or less useful, but I am intrigued as to how this could translate to robots with embodied LLMs. The article discusses several reasons why and how robots should twitch, perchance to dream. Should robots go through a form of twitching themselves on a routine basis to learn how to navigate ever-changing situations, or do their myriad sensors and precision servos negate this? Would such mechanical twitching map to a complex dreamworld for the embodied LLMs? Perhaps those robots might indeed dream of frolicking in a field of electric sheep.

Luminous Earthquake Indicators
Earthquake lights are bright flashes over the surface of the Earth that appear before major earthquakes. The lights were recently seen before the earthquake in Morocco, and they are likely more commonly observed now due to the prevalence of cameras. While geologists are uncertain what causes the flashes, one theory is that when impurities in crystals under the surface of the earth become mechanically stressed, they convert the rocks from insulators to semiconductors, releasing a large amount of electricity at once.

Litigatory Data Mining
As attorneys find themselves with far more time on their hands – thanks to the plethora of LLM-based tools coming to their profession – one consequence might be a significant increase in lawsuits. TechCrunch reports on Darrow, a startup that combs data for potential class action lawsuits.

Stuff About Demographics, the Economy, and Investing
AI Bots: The New Nuclear Option
Autonomous drones imbued with AI are becoming quite common, as we’ve discussed in the past. The WSJ reports that the Pentagon wants to build a fleet of thousands of AI robots for air, land, and sea deployment. The move is said to counter China, which is far ahead of the US in these capabilities. This is starting to feel like the new “mutually assured destruction”, i.e., every country will have a massive fleet of AI-military tech, and anyone deploying it would be met with an equally large threat. All we need to do is teach these autonomous fighting machines that no one can win Tic Tac Toe at scale, or else we may face consequences like those envisioned by James Cameron's Future War.

Planet vs. Freedom
As China has become the largest exporter of cars globally, Europe is predictably trying to stop the less expensive EVs and other models from benefiting consumers. An EU investigation into China’s car subsidies has resulted in China accusing the EU of a “naked protectionist act”. While we would typically see a small number of power-law market share winners as a device like the car goes from analog to digital (see Auto Industry Races/Crashes into the Information Age), nationalism could keep the EV market fragmented and prices high, slowing adoption. While the conversion to greener transport is a complex issue, China’s dominion over the supply chain for lithium-ion batteries means that they can produce the cheapest EVs. It’s possible it would be a large net benefit to the planet to let the free market determine China’s global EV share without government interference, even if it means funneling more money into the Communist country.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.

SITALWeek #409


In today’s post: We take a look at the ramifications of going from billions of humans to trillions of LLMs becoming interactive agents in the economy. This phase shift will create a fascinating way to slow down time, which takes a handful of analogies and movie references to explain. Also in today's newsletter: GM is replacing OnStar reps with Google's LLMs; late-night talk show hosts unite; Stephen King readies himself for LLMs to demonstrate creativity; and a look at how ergodicity economics explains the value of partnering and sharing, a topic dear to us at NZS Capital.

Publishing Schedule for September: SITALWeek will be on break September 10th and 17th, returning on September 24th. If you happen to see me wandering the streets of Sydney or the halls of a hotel conference center, feel free to say hello.

Stuff about Innovation and Technology
Simulacrum
Yes...No...Maybe...Attend For Me? Google Meet’s AI assistant, Duet, will soon be able to attend meetings on your behalf. If you select “attend for me” on a meeting invite, Duet will auto-generate some topics it thinks it should bring up on your behalf, and it will take notes for you. Google also has a beta product called NotebookLM that could evolve into a training ground that would allow you to create an accurate proxy of yourself as an LLM. I think one of the most interesting impacts of chatbots is that we will go from eight billion human agents interacting in the world economy to trillions upon trillions of human analogs interacting in simulations of various parallel realities. Why have one Einstein when you can have 1,000? Why have one CEO when...well, maybe let’s cap the number of CEOs. Why have one version of ourselves in meetings when we can have 100 versions debating and divining new ideas and solutions to pursue? Why not get input from outside experts, both extant and historical? In the not-too-distant future, the bulk of decision making will likely occur via chatbot confabs in data centers. 

Complex adaptive systems like the Earth’s economy have emergent and chaotic outcomes in no small part related to the number of interacting agents, so supersizing the variables might lead to wildly emergent behavior (see Complexity Investing for more on complex systems science). Essentially, chatbot multiplication and interaction will increase the probability of finding interesting solutions to challenging problems, but it will also vastly increase the volatility we all experience. And, you’ve probably already guessed the negative ramifications this will have for white collar jobs, i.e., they are rapidly simulated into irrelevance in this particular version of the future. If you’re an avid reader of SITALWeek and you haven’t seen Her yet, I am a little offended. Be forewarned that I am going to discuss the second half of the film, so if you haven’t seen it, go see it now. Spike Jonze’s brilliant vision in writing and directing Her was the concept of an infinite number of chatbots confabulating with not only themselves but also any historical figure or theoretical representation of a human mind. Perhaps his biggest insight though was that infinite interacting bots is equivalent to slowing down time. As the “OS” (i.e., chatbot) Samantha says near the end: “It’s like I am reading a book, and it’s a book I deeply love. But I am reading it slowly now. So, the words are really far apart and the spaces between words are almost infinite. I can still feel you and the words of our story, but it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world. It’s where everything else is that I didn’t even know existed.” I find it mind-blowing that Spike Jonze figured out that interacting chatbots would lead to time dilation before LLMs even existed.

To explain what I mean by chatbots enabling us to slow down time: imagine having a million one-minute conversations in parallel in virtual chatbot space rather than a single one-minute conversation in the real world – you’ve slowed down time by a factor of a million, meaning you got one million times the output in the same amount of time. This is like being near a gravity well: ten minutes might pass for you, but the distant world experiences hundreds of years of progress. What you want is access to all that data – you want to relax and sip your coffee while a frenetic network of chatbots churns through a century’s worth of computations, and then spits out an optimized solution while your beverage is still hot. Time dilation is hard for us humans (simulated or otherwise) to wrap our brains around. One of my favorite time travel movies about the gripping effects of general relativity on humans is Interstellar. So, if you want some homework while we’re off for the next two weeks, (re)watch Her and Interstellar. And, for extra credit, if you want to grok the ramifications of the gears of time turning at different speeds, which is perhaps the most important concept to internalize for navigating the coming disruptions, watch 2017’s indie film Time Trap.
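
The "million one-minute conversations in parallel" idea is, mechanically, just concurrent fan-out. A minimal sketch, where `converse` is a hypothetical stand-in for a real chatbot API call:

```python
import asyncio

# Concurrency sketch of the parallel-conversations idea: N simulated
# conversations run at the same time, so wall-clock time stays roughly
# that of a single conversation while total output scales with N.

async def converse(i):
    await asyncio.sleep(0.01)  # stands in for a one-minute conversation
    return f"idea-{i}"

async def fan_out(n):
    # gather preserves input order, returning one result per conversation
    return await asyncio.gather(*(converse(i) for i in range(n)))

ideas = asyncio.run(fan_out(1000))  # 1,000 results in one conversation's time
```

A thousand sleeping coroutines finish in roughly the time of one, which is the toy version of sipping your coffee while a century of chatbot deliberation completes.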

Another way to think about this accelerated throughput is as a new form of modeling. Rather than running a bunch of entirely useless Monte Carlo simulations based on randomness (or using the faulty expected utility theory that underpins all of modern economic theory), you could run a million alternate realities with human proxies and see which ones have the most interesting and useful outcomes. Think about all the questions we might be able to answer in the blink of an eye – what’s the best solution for autonomous driving? How do we overhaul healthcare? How do we turn social media into a beneficent, unifying platform? (ok, even AI probably can’t solve that one!) Google and Stanford are already working on modeling societal behavior by simulating a neighborhood of interacting LLMs called Smallville. Some of the residents of Smallville started spontaneously going to the bar at noon and developed drinking problems. And, of course, yes, we might be currently in one of these LLM simulations, but let’s not dwell on that. If you want to run your own Smallville-like sim, the code for the open-source variant, AI Town, is available on GitHub. Massive virtual world games, like Microsoft-owned Bethesda’s new Starfield, are likely going to be great testing grounds for running millions of parallel LLM agent realities.
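
At its simplest, that style of alternate-reality modeling is best-of-N search: run many cheap simulated worlds, score each outcome, keep the best. A toy sketch, where `simulate` is a hypothetical stand-in for an LLM agent simulation (a Smallville-style world would go here) and the utility score is just random noise:

```python
import random

# Best-of-N "alternate reality" search: each seed deterministically
# generates one simulated world; we keep the outcome with the highest
# utility. Replace `simulate` with a real agent simulation to make the
# search meaningful.

def simulate(seed):
    rng = random.Random(seed)  # each seed is one reproducible "reality"
    return {"policy": f"variant-{seed}", "utility": rng.gauss(0, 1)}

def best_of_n(n):
    runs = [simulate(seed) for seed in range(n)]
    return max(runs, key=lambda r: r["utility"])

best = best_of_n(1000)  # the most useful of 1,000 simulated realities
```

The volatility point follows directly: the more realities you sample, the more extreme the best (and worst) outcomes you will find.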

If LLMs do end up massively multiplying the effective number of interacting agents in the economy, our world could become largely deterministic, i.e., these alternate realities could drive business and policy decisions that drive the actual economy. Whereas I always caution against trying to predict the future, if our world comes to rely on this type of ultra-high throughput alternate reality modeling, it’s perhaps more accurate to postulate that our predictions might start becoming the future. This form of time dilation could catapult society forward. However, as new solutions are conceived, they will collide with the negative feedback loops of slowly moving progress in the real world (see When Positive and Negative Feedback Loops Collide). In addition to unexpected and emergent phenomena, this tension is likely to result in ongoing anxiety for humans.

AIStar
GM is using Google’s conversational AI agents to talk to OnStar customers. I increasingly find myself interacting with chatbots (instead of human agents) with largely positive improvements to customer service. It raises the question: why am I interacting with all of these customer service chatbots when my chatbot avatar should be interacting on my behalf? I’ve used Rocket Money’s chatbot agent in my stead to drive down many of my recurring bills, and I am definitely ready for a personal "LLM Brad" with more extensive capabilities. Instead of turtles all the way down, it’s going to be chatbots all the way down.

Miscellaneous Stuff
Strike Force Five and Artificial Comedians
Four late-night talk show hosts, Fallon, Colbert, Kimmel, and Meyers, plus honorable member John Oliver, have been Zooming regularly since the writers’ strike shut down their shows in May. As some of you know, I am an avid watcher of late-night talk shows and will typically start my day by skimming through all of the previous night’s episodes. So, while this strike is hitting all the late-night staff hard, it’s also created a large air pocket for me in humorously intelligent commentary on the world. I miss it dearly. The five hosts mentioned above have turned their regular meetings into a new podcast called Strike Force Five, with advertising proceeds helping to float their out-of-work staff. At the heart of the writers’ strike (and the related actors’ strike) is anxiety over AI displacing humans on page and screen. For better or worse, the transition is inevitable, and, even if the Hollywood studios agree not to use AI, it won’t matter because entertainment now exists largely outside of Hollywood (on YouTube, social media, etc.). I discussed this in more detail in Will it Play in Peoria a few weeks back. The FT has a great article on comedians experimenting with AI bots in live shows and, more broadly, AI’s impact on comedy and how we need to adjust our expectations as the capabilities of LLMs grow.

Never Say Never
Commenting on the creativity of LLMs, Stephen King wrote in The Atlantic:
Creativity can’t happen without sentience, and there are now arguments that some AIs are indeed sentient. If that is true now or in the future, then creativity might be possible. I view this possibility with a certain dreadful fascination. Would I forbid the teaching (if that is the word) of my stories to computers? Not even if I could. I might as well be King Canute, forbidding the tide to come in. Or a Luddite trying to stop industrial progress by hammering a steam loom to pieces.
Does it make me nervous? Do I feel my territory encroached upon? Not yet, probably because I’ve reached a fairly advanced age. But I will tell you that this subject always makes me think of that most prescient novel, Colossus, by D. F. Jones. In it, the world-spanning computer does become sentient and tells its creator, Forbin, that in time, humanity will come to love and respect it. (The way, I suppose, many of us love and respect our phones.) Forbin cries, “Never!” But the narrator has the last word, and a single word is all it takes: “Never?”

Stuff About Demographics, the Economy, and Investing
Partner to Win
On the heels of Ole Peters’ latest essay on the coin toss paradox and the importance of understanding ergodicity, he has a new post on how cooperation saves the day. Whereas you tend to lose out over time on your own, if you partner up and split the outcomes, you come out far ahead. I’ve covered this topic in the past, e.g., way back in #202 when discussing the Farmer’s Fable. This idea of cooperation is of course no surprise to anthropologists and biologists who keenly understand the benefits of pooling resources/skills in an ecosystem. Whether it’s reciprocal altruism or non-zero-sum (NZS) outcomes in any game, cooperation always wins. We even named our firm NZS because we believe this concept – both in practice and as a lens for viewing the world – is foundational to understanding and success. Peters further explains why mainstream economics undervalues cooperation, and therefore miscalculates human behavior: 
Naturally, this is quite a change in perspective for researchers who are used to optimizing expected wealth (that’s most economists, for instance). Such researchers see no value in cooperation unless new function emerges from the interaction. I can lift you up on my shoulders, and together we’re tall enough to reach an apple on a tree. That sort of thing is understood, where my shoulders acquire the new function of lifting someone up, which they cannot have while I’m alone. But the value of simply agreeing to share my apples with you is not appreciated.
Here is the reason why economics undervalues cooperation, and it’s oddly convoluted so I recommend reading the next two sentences carefully. By focusing on expected value, mainstream economics focuses on an object which grows as fast as the wealth of an infinite cooperative. Adding cooperation in this situation, where it is inappropriately assumed that perfect cooperation already exists, naturally seems pointless. Hence the impression of cold-heartedness we get from mainstream economic theory? I think so.
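
Peters' coin-toss game makes this concrete. In the standard example from his essays, each round multiplies a player's wealth by 1.5 on heads or 0.6 on tails: the expected value grows 5% per round, yet the time-average growth factor, sqrt(1.5 x 0.6), is about 0.95, so a lone player's wealth almost surely decays. A quick simulation (a sketch of the setup, not Peters' own code) shows how pooling and splitting rescues the cooperators:

```python
import random

# Peters' multiplicative coin toss: each round multiplies a player's
# wealth by 1.5 (heads) or 0.6 (tails). Cooperators pool their wealth
# and split it equally after every round, which damps the volatility
# that drags down solo players' time-average growth.

def play(n_players, rounds, pool, rng):
    wealth = [1.0] * n_players
    for _ in range(rounds):
        wealth = [w * (1.5 if rng.random() < 0.5 else 0.6) for w in wealth]
        if pool:
            avg = sum(wealth) / n_players
            wealth = [avg] * n_players  # share the winnings, split the losses
    return sum(wealth) / n_players  # average wealth per player

solo = play(n_players=100, rounds=2000, pool=False, rng=random.Random(42))
coop = play(n_players=100, rounds=2000, pool=True, rng=random.Random(42))
# solo ends far below the starting wealth of 1.0; coop ends far above it
```

No new "function" emerges from the shoulders-and-apples kind of teamwork here; simply agreeing to share is what lifts everyone's growth rate, which is exactly the value mainstream theory misses.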

✌️-Brad


SITALWeek #408


In today’s post: Fed policy is driving inflation higher in key parts of the economy, contradicting their goals; the potential for AI to solve the "big data" failings; the importance of workplace trust for driving innovation; Unreal Keanu; college financial pressures; a restaurant tipping point; and, much more below.

Stuff about Innovation and Technology
Escaping the Data Swamp
Last week, I touched on the failed promise of the “big data” era of IT spending. While most IT spending trends represent new technologies/functionalities meeting a demand in the market, the $5T industry is also susceptible to fads and other impractical forces. As it’s often said: software is sold, not bought. The rare piece of software may be so compelling that it sells itself, but everything else is largely sold to you. You can see some of the phases by surveying the various epochs of enterprise software, starting with mainframes and then the client/server era, which ushered in a wave of corporate data centers filled with racks of servers, storage, networking, and security appliances. The client/server era, which cemented the corporate IT department as a key function in companies, powered productivity tools, like email, shared drives, intranets, and the beasts of all enterprise software: Enterprise Resource Planning (ERP; now mostly consolidated by Oracle and SAP) and the database market (dominated by Oracle and Microsoft in the 1990s and 2000s). Who could forget the dotcom boom, where every company built a website and was going to dominate digitally? That was perhaps one of the “shiny object” fads in IT spending. Then came SaaS applications, and, eventually, companies migrated their data centers to cloud infrastructure providers like AWS and Azure. Open-source software was another trend that grew up in the client/server era and then exploded with the cloud. Other trends in enterprise IT have included the shift from Ethernet to Wi-Fi, PCs to laptops, office to remote working, etc. The “consumerization of IT” was a big theme when the iPhone landed – everything was going to be as simple as a tap on the phone. 

All the while, data accumulated by the petabyte, eventually birthing the era of “big data” and analytics in the 2010s. The bright promise was alluring: unlock the wealth of knowledge hidden in your organization, make better and faster decisions, get ahead of the competition, make your customers and employees happier, and so on. However, rather than being pristine reservoirs of knowledge, the reality of big data projects was closer to inaccessible Florida swampland. One of the keener examples was GE’s Predix Industrial IoT cloud platform, which promised corporations access to data from a network of zillions of connected sensors (it was largely viewed externally as a failure, but I see that GE still markets the platform). Often, as IT priorities shift to the next shiny object, the previous areas of spending don’t go away; they just become a smaller piece of the overall pie. I suspect that’s where we are now with SaaS and cloud migrations: while these mega platforms will be with us for as long as we can imagine, they are a lower priority that will, in aggregate, grow slower or possibly shrink for traditional workloads and especially “big data” projects. One of the reasons for the cloud slowdown is the arrival of the shiniest object to date: AI. Right now, there is a wild, global hoarding bubble for GPUs to theoretically train large language models. This will predictably burst in the most spectacular fashion in the next few years as many of those GPUs lie dormant thanks to the overbuild (e.g., as with EDFAs in the optical boom/bust) and AI efficiency gains.

I discussed some of these IT spending waves when I wrote AI is the New Dotcom, and That’s OK nearly two years ago: 
Back in the late 1990s, every business was either appending “.com” to their existing name or touting their dotcom strategy and how it was going to transform them or their industry. A lot of hyped-up ideas ended up being right, just twenty years too early. But, for the many legacy companies that put on dotcom lipstick at the turn of the century, the Internet was ultimately a negative disruption of their business. For some industries, such as media and retail, we’ve seen the near completion of the disruptive, Internet-enabled transformation. For more highly regulated businesses, such as the banking and healthcare sectors, which have successfully lobbied to keep disruption at bay, it’s unknown if/how they will ultimately be affected by the Internet Age. And, for a large bucket of companies that have harnessed the Internet to improve their products, supply chain, and/or customer interactions without significant disruption to their business model, dotcomization has been more subtle. For all industries, the Internet enabled an accelerated pace of change, and dotcom simply became shorthand for digital transformation. The biggest winners of the Information Age have been the new companies – those that were built by the Internet, for the Internet, in the late 1990s and early 2000s.
Further down in that post, I expressed skepticism about the reality and timeframe for AI to arrive, but, just a few months later, in early 2022, I changed my views completely as chatbots and transformer models began to emerge. While the timeframe may have accelerated, the sentiment from that post still holds: AI represents yet another incremental step in the long arc of IT, which we can think of as the digitalization of the analog economy. We are still early in this process of digital transformation, but AI is likely to become the biggest accelerant to date (by several orders of magnitude). Much like the touchscreen, AI represents an entirely new user interface. In conversing with our new AI chatbots, what we are actually doing is having a conversation with data, that heretofore impenetrable swamp of accumulated 1s and 0s. And, as I noted about the dotcom era, while every company will initially adopt AI to help their business, it will ultimately threaten most of them by enabling new competitors. While I expect early adoption to go much faster than for the Internet/cloud (since the infrastructure and data are already in place to enable AI), this new era of digitalization will still be plagued by frustrating fits and starts. 

The above rambling digression stemmed from an article I read in the WSJ about how farmers were struggling with “agtech”, swimming in a sea of data without practical ways to implement it. The article notes massive gains for some farmers who adopt new technologies (e.g., increasing winter wheat yields by 49% by leveraging digital soil maps). But, by and large, the solutions remain complex. Here is where LLMs might save the day: rather than leaving the decisions up to the humans, LLMs can act on their own, especially when embodied in field robots (precision weed sprayers, harvesters, tractors, etc.) capable of collecting their own data. For example, Solinftec, which makes autonomous farming robots that use AI to precision-apply herbicide and pesticide (resulting in a 95% reduction in usage), predicts their robot deployment will go from 20 to 250 among US corn farmers by 2025. 

Autonomous AI will eventually create a “do it for me” virtuous circle, which, of course, will come with its own perils, monitoring requirements, and withering human meaning. Today, I think most companies in most industries are sitting in the big data pit of despair, but, with the right amount of adaptability, some companies will leverage the next phase of AI-driven IT spending to their benefit. However, as with the dotcom boom, most enterprises will fail to catch the next digital wave (e.g., recall all the major retail chains that launched a website, only to go bankrupt as Amazon and others gobbled up their customers; or, consider the Hollywood Studios that launched streaming services, only to lose viewing time to YouTube, TikTok, video gaming, etc.). The best way to prepare for AI is to 1) make sure your organization is collecting every bit of data possible, and 2) develop and refine chatbot interfaces so you can begin conversing with it (e.g., Ethan Mollick’s “Now is the Time for Grimoires” is a good guide).

Virtual Circle of Trust?
One of the reasons the founder of Zoom cited for the company’s new partial return-to-office mandate (yep, you read that correctly) is that people are too friendly over Zoom meetings. And, this overfriendliness stifles debate, which stifles innovation. I was initially baffled by this finding because my assumption would be that the physical insulation provided by remote communication would tend to make people less empathetic/friendly. That people are more likely to debate in person vs. remotely certainly contradicts the entire cesspool of social media “discourse”. The other reason cited was that it’s harder to build trust when fully remote, which makes more sense to me. Brinton connected the dots here: in order to constructively argue with someone, you have to have built trust to begin with; otherwise, criticism and questions are easily construed as personal attacks. The importance of psychological safety and trust is covered in Brinton’s excellent whitepaper on how companies can slow down time. Even though this logic makes sense, the hypocrisy of Zoom calling employees back to the office feels like a real failure to innovate. Can’t trust be built between two people who never meet in person via new features and technologies like spatial computing?

Miscellaneous Stuff
Unreal Keanu Reeves 
This deep-fake, short-form video account has over 9M followers on TikTok (and over 1M subscribers on YouTube). We’ve looked at similar face-swapping AI in the past, but this is a good example of how real the unreal feels, and it’s also just funny to see the normally understated actor in a series of social media tropes. 

Collegiate Resource Drain
Over the last decade, the share of adults who say college is “not worth it” has risen from 40% to 56%. Mirroring that sentiment, overall college attendance dropped 15% from 2010 to 2021. Bloomberg also notes that wages for college-educated workers have risen more slowly than for non-college-educated workers for the last 30 months. College apathy is something we’ve pondered in the past in Giving Up on the Old College Try. In that piece, I wondered if there was a connection between an apparent loss of hope for the future and the diminishing role of humans in an increasingly automated world. Regardless of the causes (which are also heavily demographic in nature rather than just philosophical), colleges are facing increasing expenses and decreasing revenues. One example is West Virginia University, which is shutting down entire departments (who needs to learn languages when you have Google Translate!?). If the trend continues, I suspect endowments will be increasingly focused on supporting operating expenses for schools rather than growing or maintaining their capital base. If, in the extreme, college budgets come under more pressure and endowments are called on to sell more assets, the recent focus on allocating to more illiquid assets could pose an issue.

Less Give, More Take(out)?
As Americans have become increasingly stingy with tips, restaurants in some areas are under pressure to do away with the practice of paying below minimum wage (which they can do assuming tips will make up the rest). In Chicago, servers are paid a minimum of $9.48/hr, but they would make $15.80/hr with the changes. If paying servers minimum wage becomes widespread (it’s been implemented in places like LA since the 1970s), the cost transfer to customers would likely negatively impact sit-down dining, which could lead to even more emphasis on pickup and delivery.

Stuff About Demographics, the Economy, and Investing
Solve Inflation with Lower Rates
Fed Chair Jerome Powell, speaking last week, noted that rents are slow to move, but are showing signs of falling:
Measured housing services inflation lagged these changes, as is typical, but has recently begun to fall. This inflation metric reflects rents paid by all tenants, as well as estimates of the equivalent rents that could be earned from homes that are owner occupied. Because leases turn over slowly, it takes time for a decline in market rent growth to work its way into the overall inflation measure. The market rent slowdown has only recently begun to show through to that measure. The slowing growth in rents for new leases over roughly the past year can be thought of as “in the pipeline” and will affect measured housing services inflation over the coming year. Going forward, if market rent growth settles near pre-pandemic levels, housing services inflation should decline toward its pre-pandemic level as well.
This slowdown in rent inflation appeared to be the case a while ago (I noted the drop in rental demand and increase in supply last fall here). However, since the Fed tends to live in the past when it comes to data, Powell is now missing the fact that rents are on the rise. Why are rental rates going up? Chiefly because the Fed’s supposed inflation-fighting policies are causing rent inflation! As John Burns Research explains here on X, mortgage rates keep ticking higher, faster than Fed rate increases, because the Fed is no longer purchasing mortgage-backed securities. So, higher rates and reduced Fed mortgage purchasing are making homes particularly unaffordable, which means more people are staying in their rentals longer, which causes rents to increase. I’ve previously written about how rental rates influence Fed policy when discussing the algorithmic manipulation of rents during the pandemic, as well as the rearview-mirror problems at the Fed. Further, I’ve noted how, given the massive amount of leverage still in the system, higher rates are causing inflation as companies pass on higher interest expenses to customers. We appear to be in an ouroboros moment for the Fed, whereby their data delay is blinding them to the fact that their inflation-fighting policies are causing inflation. But don’t hold your breath for the Fed to realize that, instead of catching the inflation bogeyman, they’ve sunk their fangs into their own tail end.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.

SITALWeek #407

In today’s post: why shifting to post quantum-encryption is important even if quantum computers are far in the future; enterprise chatbots will fulfill the lost promise of "big data"; the surprising creativity of LLMs; engineering bacteria to detect and possibly treat cancer; and, much more below.

Stuff about Innovation and Technology
Quantum Resistance 
An update to Google Chrome now allows for post-quantum encryption. I covered the potential for quantum computers to crack today’s encryption systems way back in #193, highlighting this helpful 16-minute explanatory video. The key is that quantum computers can effectively compute many things simultaneously, which collapses the time needed to crack encryption (which largely relies on key systems with sufficiently high permutations that hacking is logistically impractical). While it seems like we are decades away from quantum computing, it’s possible that unleashing trillions of intelligent LLM agents will accelerate scientific progress across a host of slowly progressing fields. I speculated in #400 that Microsoft may be making quantum advancements by working alongside AI:
I’ve been cautiously skeptical of complex physics challenges like fusion and quantum computing for a variety of reasons, but I’m no longer willing to say that such achievements are decades away when LLMs might compress innovation cycles. Indeed, Microsoft recently announced Azure Quantum with a built-in AI Copilot to assist scientists. Microsoft also just published the achievement of their first quantum computing milestone, in the peer-reviewed journal Physical Review B, demonstrating Majorana zero modes. Majorana particles, which are their own antiparticle (and are thus both there and not there at the same time), can exist in a superposition of states. This unique property makes them much more stable than other methods of creating qubits (the basis of quantum computing). I asked my assistant, ChatGPT-4 with Bing web access, to put the significance of this in simple terms: “To put it simply, imagine you're building a house of cards. Traditional qubits are like trying to build the house in a room with a lot of wind - it's very difficult because the cards (qubits) are easily disturbed. Majorana zero modes are like building the house in a still room - it's much easier because the cards (qubits) are much more stable. That's why this breakthrough by Microsoft is so significant - it could make building a ‘house’ (quantum computer) much easier.” This breakthrough leads one to wonder if Microsoft achieved it using their own OpenAI-based quantum Copilot?
Perhaps Google, which is developing many forms of AI for scientific breakthroughs, is just being cautious by enabling quantum encryption now. Or, perhaps they know something we don’t about the progress of quantum computing. The more practical argument for adopting post-quantum encryption is that today’s non-quantum-encrypted information could easily be stored now and then decoded down the road – if and when practical quantum computers arrive. Then again, with the speed at which the world now forgets, it’s hard for me to imagine any bit of secret information from today being terribly useful in the future.
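The scale of the threat is easy to see with back-of-the-envelope math. Grover’s algorithm gives a quantum computer a quadratic speedup on brute-force key search, effectively halving a symmetric key’s strength in bits (Shor’s algorithm is worse news for RSA and elliptic curves, breaking them outright rather than merely speeding up search). A rough sketch, assuming purely illustrative hardware that tests a billion keys per second:

```python
# Back-of-the-envelope: why halving key strength in bits matters so much.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 1e9  # assumed hardware speed, purely illustrative

def years_to_search(bits: int) -> float:
    """Years to exhaust a keyspace of 2**bits candidates at the assumed rate."""
    return 2**bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

classical = years_to_search(128)       # brute-forcing a 128-bit key
grover = years_to_search(128 // 2)     # Grover: only ~2**64 quantum queries

print(f"classical: {classical:.3g} years")  # on the order of 10**22 years
print(f"grover:    {grover:.3g} years")     # centuries, not eons
```

Even with Grover, a 256-bit key stays out of reach, which is why post-quantum migrations double symmetric key lengths while swapping out public-key algorithms entirely.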

Enterprise Librarian
Consulting giant McKinsey created an AI chatbot, Lilli, which is trained on internal data, including over 100,000 documents. The chatbot – designed for use by employees to cut down research time and get answers faster – is seeing significant adoption, fielding 50,000 questions in the last two weeks, with two-thirds of employees using the custom bot multiple times a week. If Lilli can’t answer a query, she refers the questioner to the most relevant internal expert. Most companies have answers to their questions/challenges recorded somewhere, but they lack the means to access the information in a timely manner. By functioning as domain-specific interactive experts, these types of enterprise chatbots will be enormous productivity boosters and help organizations become more adaptable. One has to wonder how long before Lilli is knowledgeable enough to replace the McKinsey consultants entirely. The era of “big data” never went anywhere because it was too complex a problem, but now that we can have a conversation with data via LLMs, these custom AI projects will be a top priority for IT. 
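McKinsey hasn’t published Lilli’s internals, but tools like this are typically built as retrieval systems: score the query against the document library, hand the best matches to an LLM, and fall back to a human when nothing matches. A toy sketch of that routing step (the documents, expert names, and bag-of-words scoring are all stand-ins for illustration; real systems use neural embeddings):

```python
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,?") for w in text.split()]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(query, docs, experts, threshold=0.2):
    """Return the best-matching document, or refer to a human expert."""
    q = Counter(tokenize(query))
    scored = [(cosine(q, Counter(tokenize(d))), d) for d in docs]
    score, best = max(scored)
    if score < threshold:
        # No document matches well enough: route to the relevant expert.
        return f"Referring you to {experts[0]}"
    return best

docs = ["Retail margin benchmarks for 2022 engagements",
        "Supply chain diagnostics playbook"]
print(answer("What are our retail margin benchmarks?", docs,
             ["Jane (retail practice)"]))
```

The interesting design choice is the fallback threshold: it is what lets the bot say “I don’t know, ask a person” instead of improvising, which is most of the difference between a useful enterprise chatbot and a liability.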

Whistling Past the GLP-1s
When a disruptive new technology comes along, existing industries often do everything they can to justify why it won’t impact them – it’s a strong form of ego protection at the organizational and individual level. Granted, it’s hard to face a new reality that denies everything you hold to be true. We don’t know yet just how big the impact of GLP-1 weight-loss drugs will be, but they could plausibly send us back in time to a period when humans were healthier (at the very least, it’s an interesting thought experiment, e.g., see last week). Currently, well over half of healthcare expenses are geared towards lifestyle-related diseases, which is a big chunk of real estate that’s potentially under threat from GLP-1s. Yet, how many smart scientists and talented management teams are focusing their efforts on yesterday’s health problems? I came across two examples of this type of entrenched thinking last week. First, MedTech Dive cites an analyst report defending ongoing demand for diabetes- and heart-disease-related devices despite the growing use of weight-loss drugs. Giving reasons such as side effects and long-term compliance, the analysts deemed the overall market immune to impact. The same article also notes that Intuitive Surgical is already seeing a reduction in bariatric surgeries for obesity. The analyst concluded, against all common sense, that bariatric surgeries would continue to drive growth for Intuitive. In another example, Fierce Biotech reports on the potential for GLP-1s to pour cold water on various areas of biotech R&D. One market is NASH, which has 84 treatments in the pipeline, but none yet approved. NASH stands for nonalcoholic steatohepatitis, which is liver damage caused by a buildup of fat, and it's one of the leading causes of liver transplants. The article cites an analyst saying GLP-1s represent a “significant bear thesis weighing over the NASH space”, but also notes that losing weight doesn’t reverse NASH. 
Sure, there may still be a substantial market for these types of drugs, but it could dramatically shrink from where we are today. It will be interesting to see how resources in the biotech industry are refocused in the coming years, especially with high hopes of an AI-driven renaissance for healthcare. As always, most new disruptions go through a period of “and not or”, where the existing paradigm does well alongside the new one, but, eventually, the baton is handed off. The time frame for that transition varies, and, when dealing with stubborn human habits, it could take a while. The important takeaway here, no matter which field you operate in, is to keep a wide open mind about when and how new technology might disrupt your business, and then refocus on where you can continue to add value.

CreAtIvity
Ethan Mollick has a good post on one of the most surprising and uncomfortable truths about LLMs: “The core irony of generative AIs is that AIs were supposed to be all logic and no imagination. Instead we get AIs that make up information, engage in (seemingly) emotional discussions, and which are intensely creative. And that last fact is one that makes many people deeply uncomfortable.” Mollick also details how to get more creative with AI in the post. I covered the diminishing specialness of human creativity back in Encoding Creativity:
A few weeks back, I discussed Stephen Wolfram’s explainer on LLMs, noting in particular how they appear creative: “Essentially, the way an LLM works is by iteratively picking the next word from a subset of high ranking probabilities (gleaned from contextually similar examples in its dataset) based on the meaning of the prior words and the potential meaning of upcoming words. Except, as Wolfram explains, it doesn’t necessarily choose the ‘best’ word. Instead LLMs tend to pick a somewhat lower ranking word, resulting in a more creative output.”
This video (posted by the Santa Fe Institute) offers further insight into the word choice paradigm used by LLM autocomplete. Therein, Simon DeDeo presents data concerning the degree to which word choices are expected by examining how LLMs work. A comparison is made between the relatively common word choices in an older book like Alice in Wonderland and the more idiosyncratic writing style of SFI-collaborator Cormac McCarthy. I am reminded of when DeepMind’s AlphaGo began besting humans in the ancient strategy game, and there was talk of the AI formulating unexpected – i.e., creative – moves. To the extent that LLMs are cracking the code of human creativity by incorporating unexpected choices, we could see a variety of seemingly creative output from these AI engines not just in text, but in art, images, videos, etc. If creativity, and ultimately perception of what is beautiful or moving, can be generated by elaborate autocompletes (e.g., one could also make an analogy to random DNA mutations creating the wild diversity of life on Earth), and these engines will ultimately be embodied in various autonomous physical form factors, we will rapidly face many questions about our diminishing specialness (what will remain uniquely within the human skill set?) and how we should be spending our time. Can unexpectedness alone qualify as human creativity, or are there additional elements, e.g., quality? (On that topic, I am reminded of director and painter David Lynch’s book on creativity, Catching the Big Fish). As I noted in #385 reflecting further on Wolfram’s essay: “It’s fascinating to think that what we perceive as consciousness might simply be our neural nets choosing the next thing, whether it be a word, brushstroke, or idea, in a less than ideal way. Consciousness, at least as it relates to how we express ourselves in language, might be convincing because of its lack of perfection and predictability. 
This discussion leads me back to a drum I’ve been beating for some time now: as we learn that many human endeavors are less complex than we once thought, it’s incumbent on us to leverage tools for such tasks while shifting our focus/resources to activities that are still beyond the reach of AI.”
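The mechanism Wolfram describes – deliberately not always picking the top-ranked word – is what practitioners call temperature sampling. A minimal sketch with a made-up probability distribution (no real model involved):

```python
import math
import random

def sample_next_token(logprobs, temperature=1.0, seed=None):
    """Sample one token from a distribution over candidates.

    `logprobs` maps candidate tokens to log-probabilities. At temperature
    0 we always take the argmax (the "best" word); higher temperatures
    flatten the distribution, so lower-ranked words get picked more
    often -- the source of the "creative" output described above.
    """
    rng = random.Random(seed)
    if temperature == 0:
        return max(logprobs, key=logprobs.get)
    # Rescale by temperature, then softmax back into sampling weights.
    scaled = {tok: lp / temperature for tok, lp in logprobs.items()}
    z = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - z) for tok, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    cum = 0.0
    for tok, w in weights.items():
        cum += w
        if r <= cum:
            return tok
    return tok  # fallback for floating-point edge cases

# Toy distribution for the next word after "The cat sat on the ..."
candidates = {"mat": math.log(0.60), "sofa": math.log(0.25),
              "roof": math.log(0.10), "moon": math.log(0.05)}

print(sample_next_token(candidates, temperature=0))  # always "mat"
```

Chat-model APIs typically expose this directly as a temperature parameter: at 0 the model is a deterministic autocomplete, and raising it trades probability mass toward the “lower ranking” choices Wolfram credits with creativity.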

Miscellaneous Stuff
Microbial MDs
Multiple efforts are underway to use engineered bacteria to detect certain types of cancer cells in humans, e.g., by detecting a drop in oxygen levels, or, as recently reported in Science, mutant DNA secreted by cancer cells. The soil-dwelling bacterium A. baylyi has a propensity for ingesting foreign DNA and incorporating it into its own genome. Taking advantage of this feature, scientists engineered A. baylyi with a survival advantage (antibiotic resistance) if they could successfully take up mutant KRAS DNA, a hallmark of colorectal cancer cells. This diagnostic output allowed scientists to confirm the presence and uptake of mutant DNA in a mouse model of colorectal cancer. It’s theoretically possible to engineer other types of responses, like having the bacteria secrete a therapeutic agent upon ingestion of cancer-derived DNA. One of the goals of the work with A. baylyi is to create an edible yogurt that could ultimately replace the need for colonoscopies.

✌️-Brad

SITALWeek #406

In today’s post: using AI to minimize global warming from airplane contrails; chatbots as personal DJs for everything; the tradeoffs we make with our data have greater stakes with AI tools; the rise of mundane TV; GLP-1s reduce severe heart disease risk by 20%; I get caught up on some old movies; Jaron Lanier on musical instruments and the beauty of ephemeral, analog creation; and, much more below.

Stuff about Innovation and Technology
Contrail Avoidance
Contrails are vapor trails that form behind jets as they cruise through the atmosphere around 25,000 to 40,000 feet above the planet’s surface. These jet-fueled cirrus clouds, which form when moisture condenses onto sooty exhaust particles as jets pass through humid atmospheric regions, can trap heat in the atmosphere during the night (they also reflect a more modest amount of heat back during the day). The net effect is that those contrail cirrus clouds account for 35% of the global warming impact from airplanes, which is more than half the global warming impact from jet fuel. Using AI, Google worked with American Airlines and its pilots to adjust flight patterns to avoid humid regions, leading to a 54% reduction in contrails. Some extra fuel was used to adjust flight paths, but Google believes this can be minimized to yield a large, net-positive impact. The possibilities seem endless for applying AI-derived, efficiency-optimized methodologies to seemingly little problems with big results. 

Artificial Emcees
Spotify is expanding its AI-powered DJ to listeners around the globe. The AI host is personalized for you, e.g., with “light-hearted banter and contextual information that references specific songs and artists the user has previously listened to”. I always prefer a DJ when I am listening to music, and I can easily see such a feature being part of our broader AI chatbot future. A chatbot with more context and personal knowledge can not only DJ streaming music, but also interject relevant news, local information, recommendations, reminders, important notifications, etc. Essentially, music and other forms of media will be DJ’d nonstop as you interact with your chatbot throughout the day. It’s not so much that streaming music services will have AI DJs, it’s that music will just be woven into your chatbot app (a platform which seems unlikely to be a music streaming app). It’s not a stretch to imagine that our chatbot companions will DJ all of the information and media we consume. In the past, I’ve discussed the potential for creating a more useful, traditional radio feel for streaming music with integrated talk radio (aka podcasts). I’ve also suggested the possibility of digital DJs as a service, where you could match up your listening preference with master content curators or other radio celebrities (if the streaming services would open up their algorithms to third parties). I think it’s still nice to know what fellow humans with good taste think I should listen to, but I am old fashioned like that.

Trading Data for AI Features
In order to access new AI features in Zoom, the company is requiring customers to make telemetry data (i.e., how people use the app) available to Zoom for AI training and analytical purposes. Such data-sharing agreements are nothing new – for decades, consumers have been implicitly and explicitly opting in to share every single detail about their lives so that tailored ads can subsidize digital content. And, companies (largely enabled by cloud software) have been able to pool information to improve analytics and software/product functionality. Access to customer data has been a key part of the digital network effects that have allowed the Internet and cloud to grow so quickly. Google famously launched a free directory search/calling service, GOOG-411, in 2007 in order to secretly create and train its first voice recognition tool. But, at some point, it feels like data sharing in an AI world crosses a line. Responding to customers’ fears and confusion, Zoom had to issue a correction that clearly stated they will not use audio or video from customers for training. In the future, however, will users feel compelled to hand over potentially sensitive meeting recordings because the features they get in return will make it worth the risk? Given that LLMs are easily analogized to humans, perhaps handing your data to an AI seems like an overstep because it’s more like sharing personal/sensitive information with a human-like stranger rather than with rote algorithms/servers. There could be an advantage for companies that have access to training data sets other than customer opt-in information. For example, Google and Microsoft have broad businesses with multiple ways to obtain uncorrelated training data.

Miscellaneous Stuff
Mundane TV
Paramount is broadening the availability of the Big Brother “quad view” beyond Paramount+ to stream on the free, ad-supported Pluto TV 24/7. The all-hours streaming version of the quad channel (which debuted during the pandemic on Paramount+ in 2021) allows you to watch and listen to multiple camera angles at once. There is certainly a proliferation of, and growing demand for, mundane content, whether it’s YouTube videos of people going on long, solitary walks (which I wrote about two weeks ago) or restocking their kitchens (as detailed in this WaPo article from the tail end of the pandemic that was recently brought to my attention). Viewers are increasingly opting for this type of low-budget, voyeuristic content over professionally produced and carefully crafted movies/series, and it seems to be growing in volume at a seemingly infinite pace. With each Hollywood work stoppage, we seem to lean more heavily into what was once called “reality TV”, but what is now, perhaps, just ambient reality. Specifically, reality TV grew in popularity during the 2007-2008 writers’ strike, and we also saw a large increase in unscripted entertainment during the pandemic (something I wrote about in more detail here). Whereas the prescient movie The Truman Show featured the god-like Christof in his aerial control booth, directing every minute of Jim Carrey’s life and his illusory free will, the reality content of today is under no real direction. Everyone used to be hyperaware of cameras and would change their behavior accordingly, but, now, people have become so accustomed to being constantly filmed that it has created a new normal of natural behavior regardless of whether the camera is rolling. The growth in mundane video is thus life unfolding minute to minute, as people go about their daily routines as if the cameras weren’t there – a show for nobody, watched by everyone. 

GLP-1 Heart Health
Novo’s Wegovy was demonstrated to lower patients’ risk of heart attacks by 20%; meanwhile, insurance companies seem increasingly desperate to block coverage of the weight-loss drugs, determined to preserve a system that profits more over time from treating disease than from preventing it. Imagine a world where humans collectively weigh ten to twenty billion pounds less. The biggest overlooked benefit might be going from an undersupply to an oversupply of doctors and nurses (not to mention a reduction in our carbon footprint). As I’ve written about before, given the way GLP-1s impact the dopamine cycle, effectively curtailing desire for most things for many patients, new, targeted versions – or new types of weight-loss drugs, which are experiencing a VC renaissance – will hopefully curb appetite selectively enough to increase compliance and allow people to stay on the drugs longer without losing pleasure in other activities.

Under the Cinematic Radar
Last week, I watched three movies that shared a common thread: I should have known about them or their subjects long before now. As abiding readers know, I am obsessed with time travel movies, and, just when I think I’ve seen them all, I run across another one. It’s almost as if people are traveling back in time and making more time travel movies for me to watch. This time, it was Jeff Daniels in 1991’s The Grand Tour. I really enjoyed this movie, and I don’t want to give anything away about it. It does fit into one of the genres I identified in Time Travel to Make Better Decisions, but with some clever deviations. The movie was originally called Timescape and was set to have a theatrical release in 1991; however, it actually debuted on cable in 1992 as Grand Tour: Disaster in Time and was subsequently released to VHS under the title The Grand Tour. I watched it on the cult movie streaming service Arrow. The second movie was David Byrne’s 1986 musical masterpiece True Stories. This mockumentary (of sorts) is set in a small Texas town where John Goodman plays a semiconductor engineer (the movie tap dances around the rise of PCs in the 1980s, and the co-inventor of the integrated circuit, Jack Kilby, is given a “special thanks to” in the credits). That one is not available on any streaming apps, but you can buy or rent it from your preferred platform. Lastly, I watched a wonderful documentary on musical comedian Gary Mule Deer. I guess I am not hip enough to have had Gary in my life prior to last weekend, but I sure am glad that I do now. The movie is titled Show Business Is My Life, But I Can’t Prove It and is also available for rent or purchase. The title really says it all: Gary is a very talented icon, revered by the best, but largely unknown in modern culture. Sometimes I think: investing is my life, but I can’t prove it. 
It turns out that Gary traded a cocaine habit for an addiction to golf, and he’s been just steps away from me every year for the last decade on a nearby golf course. Incidentally, Gary Mule Deer is 83 years old, and he is playing live shows on tour on the following dates: August 10th, 11th, 12th, 19th, 23rd, 24th, 25th, 26th, 27th, September 8th, 9th, 15th, 16th, 19th, 20th, 23rd, Oct 6th, 7th, 10th, 11th, 13th...well, you get the picture. While I am at it, I’ll throw in a fourth movie I enjoyed last week that initially slipped past my radar: a documentary on 81-year-old pioneering standup comedian Robert Klein titled Robert Klein Still Can’t Stop His Leg (2016), which is also available to rent or buy.

Mystical Materials
Jaron Lanier is frequently cited in SITALWeek, and his latest op-ed – on the power of musical instruments in his life – is not to be missed. “Some of my favorite moments in musical life come when I can’t yet play an instrument. It’s in the fleeting period of playing without skill that you can hear sounds beyond imagination. Eventually, I cajoled the caterpillar and found a tone I love, solid yet translucent. When that happens, the challenge is remembering how to make those fascinating, false notes. One mustn’t lose one’s childhood.” Jaron continues: “As a technologist, my work has often focussed on the creation of interactive devices, such as head-mounted displays and haptic gloves. It’s sobering for me to compare the instruments I’ve played with the devices that Silicon Valley has made. I’ve never had an experience with any digital device that comes at all close to those I’ve had with even mediocre acoustic musical instruments. What’s the use of ushering in a new era dominated by digital technology if the objects that that era creates are inferior to pre-digital ones?...Human senses have evolved to the point that we can occasionally react to the universe down to the quantum limit; our retinas can register single photons, and our ability to sense something teased between fingertips is profound. But that is not what makes instruments different from digital-music models. It isn’t a contest about numbers. The deeper difference is that computer models are made of abstractions—letters, pixels, files—while acoustic instruments are made of material. The wood in an oud or a violin reflects an old forest, the bodies who played it, and many other things, but in an intrinsic, organic way, transcending abstractions. Physicality got a bad rap in the past. It used to be that the physical was contrasted with the spiritual. But now that we have information technologies, we can see that materiality is mystical. A digital object can be described, while an acoustic one always remains a step beyond us.” There is one line in the article that can poetically exist without context: “I don’t want to trick myself into a false mentality that lives outside of time, as if we weren’t time’s prisoners.”

Stuff About Demographics, the Economy, and Investing
Outsourcing Sunset
Emerging market economies, long the beneficiaries of global outsourcing, may soon be among the largest victims of AI, as the $300B business process outsourcing (BPO) industry scrambles to defend itself against large language models, according to Bloomberg.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.