SITALWeek

Stuff I Thought About Last Week Newsletter

SITALWeek #395

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: a close look at the Hollywood writers' strike and its broader ramifications for AI job displacement; how integrating reward and punishment into AI feedback loops could advance models far more rapidly; the value of surprise; speaking of surprises, apparently mind reading is becoming a reality; and much more below.

Stuff about Innovation and Technology
AI Goes to Hollywood
Hollywood’s Writers Guild chose to go on strike last week after failed negotiations with the major studios. Among the complaints are concerns regarding the way streaming has negatively impacted residuals and reduced the size/duration of writers’ rooms, as well as the threat posed by AI’s expanding role. AI’s impact on content production has been a frequent topic in SITALWeek, as I see the movie industry as a bellwether for how AI might impact other industries both inside and outside of the creative arts. Whether it’s de-aging, voice-overs, AI scripting, virtual world sets, or, ultimately, actors selling rights to their entire persona (as in the 2013 sci-fi movie The Congress), it’s an industry ripe for disruption. Hollywood has always been an early tech adopter, and the industry long benefited from steady growth in viewing as screens proliferated and mobile devices expanded consumers’ screen time. However, recent developments have left us awash in a sea of infinite content with offerings that far outnumber the professional gems produced by Hollywood and other studios. Indeed, I think it’s fair to say that scripted content is becoming a niche industry that is at risk of shrinking its share of viewing even further with the current work stoppage. Even if the stoppage is prolonged (which seems likely given the negotiation gulf between the writers and the studios), we consumers have nothing to worry about because, as Matt Belloni from Puck joked recently: “the algorithms will take care of us”. There’s probably a lot of truth in that quip given the massive amount of content across YouTube, TikTok, podcasts, video games, unscripted shows, streaming services’ library content, and already-scripted shows in production. Just last week, The Information reported that 45% of US YouTube viewing is now taking place on TV screens, up from less than 30% in 2020.

The writers’ desire to draw up some rules for how AI will be used by Hollywood makes sense, but it would be a mistake to believe AI won’t have a large – and potentially negative – impact on the demand for their services. This could be bad timing for the strike, as The Hollywood Reporter points out, because AI can cross the picket line (which raises an awkward futuristic thought experiment: should AI be allowed to unionize?). I’d like to think that uniquely human sparks of creativity and connection (between writers, directors, actors, editors, composers, etc.) are responsible for the magic of great movies and TV shows, but I’m also open to the possibility that advanced AI will be able to match (or maybe even surpass) the quality of human creativity with time. Further, the future of storytelling might end up being more interactive and personal, requiring not so much a human writer as a God-like AI engine that creates virtual worlds, stories, or even movie adaptations of favorite books customized for each viewer on demand. In a recent interview, Joe Russo, the co-director of Avengers: Endgame, and Donald Mustard, the Chief Creative Officer at Epic Games (maker of the Unreal Engine that is also used for virtual stages to shoot movies and TV shows), discussed this future of AI and storytelling:
Russo: So potentially, what you could do with [AI] is obviously use it to engineer storytelling and change storytelling. So you have a constantly evolving story, either in a game or in a movie, or a TV show. You could walk into your house and say to the AI on your streaming platform: “Hey, I want a movie starring my photoreal avatar and Marilyn Monroe's photoreal avatar. I want it to be a rom-com because I've had a rough day,” and it renders a very competent story with dialogue that mimics your voice. It mimics your voice, and suddenly now you have a rom-com starring you that's 90 minutes long. So you can curate your story specifically to you.
That's one thing that it can do, but it can also, on a communal level, populate the world of the game, have intelligence behind character choice, you know, the computer-run characters in the game that can make decisions learn your play style, make it a little harder for you, make it a little easier for you, curate the story...How quickly we get there, I don't know, but that's where it's going.

Mustard: ...we're really not that far off from where some of the real-time engines like Unreal Engine...[are] very close to where you could be almost perfect photorealistic, real-time rendering...Or, on the fly, you could be like, “Yep, I wanna star as myself in a movie,” and you could watch it.

The writers’ strike is just one example of the discord we will see as workers and employers adapt to rapidly evolving technology. Every profession needs to be ready to deal with the impacts of AI on jobs and business models by identifying places where people can still add value or work alongside AI. A good starting point with any disruptive force is to ask as many questions as possible in order to plumb the limits of possible scenarios and outcomes. An important part of that process is having an open mind and holding your previous beliefs very loosely, a topic I explored in more detail in More Q, Less A.

AI Pile Drivers, Back Office, and TSA
A trio of stories got me thinking about how quickly some jobs will be taken over by AI and automation. Built Robotics’ RPD 35 is a fully autonomous pile-driving machine that can set the stage for large-scale solar installations by putting piles into the ground 3-5x faster, and with more accuracy, than human operators and crews (the bot can carry 200 15-foot beams and drive one eight feet into the ground every 78 seconds). Also in the news, IBM’s CEO indicated the company is pausing hiring for 7,800 roles that it thinks can be replaced by AI. Lastly, the TSA is rolling out automated facial recognition tools so travelers can self-scan IDs for authentication without interacting with an agent. I am reminded of the significant transfer of Western jobs to India-based IT and business process outsourcing (BPO) firms a couple of decades ago. That was a painful but gradual enough process that our job markets adapted, despite some permanent job losses. Many of those offshored jobs are in customer service and back-office functions, two areas that now appear set for rapid disruption by AI.

Miscellaneous Stuff
Surprise and Reward
Most people believe that their thoughts and actions are reactions to the surrounding environment, but, in reality, our brain predicts what will happen from second to second and then adjusts our behavior based on whether or not the predictions are correct. We are rewarded for correct predictions by a sense of order and balance, but we also need to push our limits a bit (and risk being wrong) in order to gain more experience on which to base future predictions. Recently, cognitive philosopher Andy Clark and cosmologist Sean Carroll had an insightful, high-level discussion on Carroll’s podcast about many of the reasons for (and consequences of) this prediction-based modus operandi and how this lens helps us understand human behavior. One part of their discussion that I connected with was the importance of surprise. Here is an excerpt of the conversation:
Sean Carroll: So couldn't we just say that there are two things going on? We want to minimize prediction error, but we also want to survive, so there's a constraint, we want to survive, and under that constraint, it's actually useful to go out and be surprised sometimes so we can update our predictive model...I don't know how mathematically that will work out, but it does seem a little bit intuitive to me.
Andy Clark: ...it looks as if very often, the correct move for a prediction-driven system is to temporarily increase its own uncertainty so as to do a better job over the long time scale of minimizing prediction errors, and that looks like the value of surprise, actually...I think we artificially curate environments in which we can surprise ourselves. I think, actually, this is maybe what art and science is to some extent, at least, we're curating environments in which we can harvest the kind of surprises that improve our generative models, our understandings of the world in ways that enable us to be less surprised about certain things in the future.
There is an obvious connection between this predictive model of the brain and large language models, which are themselves a form of predictive autocomplete. As we learned from Wolfram, one of the interesting things about LLMs that makes them appear more creative is that they don’t always choose the most obvious next word, i.e., they have a built-in element of surprise (see You Auto-Complete Me for more). Clark suggests in the podcast that LLMs suffer from not having a proper reward feedback loop, i.e., they aren’t rewarded for providing useful answers, and there are no consequences for bad behavior (lies, insults, etc.). It would therefore be interesting to add reward-based training and operation to LLMs. This is indeed a key idea advanced by Karl Friston, one of the original proponents of the prediction model of the brain (see also #271 and #272). I’ve also previously covered an Australian company working on neural nets composed of mouse neurons, which are rewarded with predictable signals and punished with unpredictable ones (#370). I think we could achieve some very interesting human-like AI by combining embodied LLMs (having a physical form is key to accessing environmental inputs and having parameters for seeking balance) with reward/punishment-based reinforcement learning. I suspect it will become especially critical to reward robots for good behavior as LLMs and AI enter the physical world and interact with humans.
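To make the two ideas above concrete, here is a toy Python sketch – emphatically not how any production model works – showing (1) temperature-based sampling, the knob that lets an LLM pick something other than the most obvious next word, and (2) a crude reward/punishment loop that nudges word scores based on feedback. The word list and scores are invented for illustration.

```python
import math
import random

# Hypothetical next-word scores a language model might assign after
# "The cat sat on the ..." (numbers invented for illustration).
logits = {"mat": 4.0, "sofa": 3.1, "roof": 2.5, "moon": 0.5}

def softmax(scores, temperature=1.0):
    """Turn raw scores into probabilities. Temperature > 1 flattens the
    distribution (more surprise); < 1 sharpens it (more predictability)."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word(temperature=0.8):
    words = list(logits)
    probs = softmax(list(logits.values()), temperature)
    return random.choices(words, weights=probs, k=1)[0]

# 1) Built-in surprise: at higher temperatures, the most likely word
#    ("mat") is chosen less often.
for t in (0.2, 0.8, 1.5):
    picks = [sample_next_word(t) for _ in range(1000)]
    print(f"T={t}:", {w: picks.count(w) for w in logits})

# 2) A crude reward loop: nudge a word's score up when feedback is
#    positive and down when it is negative.
def feedback(word, reward, learning_rate=0.5):
    logits[word] += learning_rate * reward

feedback("moon", -1.0)  # punish a nonsensical completion
feedback("mat", +1.0)   # reward a sensible one
print("after feedback:", logits)
```

In practice, reward-based training (e.g., reinforcement learning from human feedback) adjusts billions of model weights via gradient updates rather than a lookup table of word scores, but the feedback principle – reward useful outputs, penalize bad behavior – is the same.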

Programmed vs. Free-Thinking Bots
Speaking of human-robot interactions, Lex Fridman’s interview with Boston Dynamics CEO Robert Playter was interesting. I will say, however, that I tend to disagree with Playter’s focus, as he voices more interest in pre-programmed, rote robotic tasks than in embodying AI in autonomous form factors capable of learning on the fly. Boston Dynamics has put its AI efforts into a separate division that is now run by BD’s founder. I think the biggest near-term advancements are more likely to come from a deep integration of AI and robotics, where AI can learn by interacting with the physical world (see AI Awareness).

Mind Reading is Real!?
Scientists have successfully used an early version of GPT to read human thoughts and translate them into text, an incredible breakthrough published in Nature Neuroscience. The technique is not a general-purpose mind-reading machine due to the nature of the human brain and fMRI tools (see Whole-Brain Signaling for more on that topic); rather, it needs to be tuned and trained on each person. But, once trained, it can effectively translate what you are thinking into decipherable text output. Intriguingly, the output is not word for word, but rather captures the spirit of the thoughts. This too reminds me of how LLMs’ predictive autocomplete often chooses different words to communicate the same thing. Functional MRI machines are sophisticated and clearly not portable, but I suspect that, with enough training and GPT/hardware advances, something more akin to a wearable device could eventually voice your thoughts by reading your brain activity. There are obvious dangers to such a technology, but this advance is truly mind blowing to me, and it seems to reveal a lot about just how easy it might be to decipher (and replicate in silico) the complexity of the human brain.

Deep Water
I enjoyed the 2022 David Bowie doc Moonage Daydream, which was released on HBO Max last week. That led me to also watch David Bowie: The Last Five Years (also currently available on HBO Max). That film contained a Bowie quote that feels appropriate in many ways right now, especially as it relates to the speed of AI disruption and the importance of surprise and randomness to finding the right questions to ask: “If you feel safe in the area you’re working in, you’re not working in the right area. Always go a little further into the water than you feel you’re capable of being in. Go a little bit out of your depth. And when you don’t feel that your feet are quite touching the bottom, you’re just about in the right place to do something exciting.”

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
