Intrigue! Sour interpersonal relationships! Backstabbing! A coup! A mutiny! An existential threat to humanity! No, this month’s commentary is not a brainstorming session to create a blockbuster movie script. Instead, it hopes to provide a different reflection on the recent boardroom fiasco at OpenAI.
Let’s put the movie-making machinery on hold for a moment and discuss something much more mundane—but more disturbing—for sophisticated investors looking to improve their portfolios by allocating to strategies that seek to benefit from advances in AI.
OpenAI—the poster child of many of these investment portfolio-related advances in AI use—barely averted a major crisis that could have seen the exodus of 90% of the company’s staff (to be eagerly embraced by Microsoft). This leaves one to wonder: if that had come to pass, would we have had “ClosedAI” instead? The legal “what ifs” are mind-boggling had 90% of the company’s staff taken the train to Seattle. Who would own what in terms of IP? What value does that IP have if, as appears to have been the case here, it existed only in the heads of the staff and not actually on the balance sheet? Scarily for its investors (with the exception of Microsoft, of course), during the debacle the company’s apparent valuation ranged anywhere from $0 to $86 billion.
The bigger issue this reveals is not actually to do with OpenAI per se, but rather the questions around the potential impact that a large-scale deprecation/legacy/no-legacy event at a major AI data-based entity could have due to some crazy circumstance, such as a boardroom tiff.
This is yet another illustration of a topic with which readers of Plotinus’ AI Insights will be familiar—data security. [See our April 2023 Commentary, Could Data Vulnerability Endanger One of Your Current or Planned Investment Portfolio Strategies? And no, we were not then presaging this near-walk-out situation at OpenAI.] This is a data security issue not in the cybersecurity sense, but in the sense that data must have long-term stability and utility as part of a long-term investment process. In other words, the data needs to remain secure to use over the long term in order for an investment strategy built on it to remain effective over the long term. The OpenAI situation shows us that this data vulnerability issue should be extended to include the backend AI software components used in running an investment portfolio’s methodology. In a “ClosedAI” scenario, what would all the newly fledged AI-driven processes that depended on a ChatGPT backend do, were the company behind it to fall by the wayside?
As investors begin to realize the benefits of using AI-driven investment strategies, not many of them are thinking about issues like this. They should be.
A Ghost of Christmas Future Glimpsed
Now try to imagine what a “ClosedAI” scenario, or even a “90% WeakenedAI” scenario, might inflict on a money manager using, say, AI-driven sentiment analysis signal detection and dependent on backend OpenAI architecture for providing such inputs. Suddenly, that technology looks awfully vulnerable. It is not that the system would have stopped processing data (it is artificial intelligence, after all), but what would be the effect of depending on a company struggling to keep afloat, having fallen from the starry heights of the ChatGPT launch of November 2022?
This would inevitably take the form of a reduced product, as there would likely be less money available for technological improvements, updates, and, crucially, timely renewal of prompting. Many people are familiar with ChatGPT’s propensity to ace tests it had seen before as part of its training data but flunk the ones for which the data was unfamiliar. OpenAI, to its credit, has been very quick to acknowledge that its technology is a work in progress and not a finished masterpiece. In the “90% WeakenedAI” scenario there are obviously fewer Michelangelos; in the “ClosedAI” version there are none.
Let’s return to the unfortunate money manager. Sentiment-related investing, as we know, is all about timing: assessing information in a timely, sensitive, and accurate manner. Imagine that six months hence a new figure appears running for significant public office and suddenly becomes one of the most relevant names in politics, having previously been an unknown. Yes, we can remember a time when very few people had ever heard of Barack Obama. Imagine a similar scenario with a newly appointed CEO of a highly relevant company. Imagine the outbreak of an unexpected war. Imagine a new poster-child AI company that most people had never heard of until a sudden technological breakthrough is announced. And so on. Now imagine the impact on a sentiment analysis engine whose AI backend is simply no longer on point because the “90% WeakenedAI” behind it can’t afford the cost of keeping its development and prompt refreshing up to speed. This is the kind of thing that would barely be noticeable to those using ChatGPT for fun, but it could devastate the edge of a money manager’s AI-driven sentiment signals engine.
What might happen next? Money managers using such data inputs would face a major problem: do they retool their strategies with Bard, or with Microsoft + “90% used to be OpenAI”, as their new backend AI architecture? If they do, how do they explain this to their investors? Changing backend AI service provider effectively means starting from scratch. Continuity of the investment process is lost.
This may be an imaginary scenario right now, but two weeks ago was anyone seriously contemplating the potential collapse of OpenAI? The fact that it almost faced a staff mutiny that would have destroyed it should give investors who have allocated to, or are evaluating, money managers whose strategies are heavily reliant on such Big Data a new investment strategy risk to take into account.
Due diligence questioning by prospective investors about how an AI-driven investment strategy’s long-term viability depends on its data, and on its use of external AI software, is as important to the due diligence process as whether or not the actual investment strategy works, if not more so. As we have commented on previous occasions, a perfectly good AI investment strategy can evaporate overnight if its input data is insecure.
Similarly, those running AI-driven strategies that use backend large language models must recognize the potential problem of relying on a single provider. The portfolio managers would have to be able to demonstrate to investors that they have built-in redundancy and that switching to an alternative provider would not have any discernible impact on the implementation of the investment strategy.
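To make the redundancy point concrete, the sketch below shows, in Python, one shape such plumbing could take. The provider classes and the keyword scoring are hypothetical placeholders of ours, not any vendor’s actual API; the point is simply that the strategy consumes a normalized sentiment score and can fail over between interchangeable backends.

```python
from abc import ABC, abstractmethod
from typing import List, Optional

class SentimentProvider(ABC):
    """An interchangeable backend that turns text into a sentiment score."""

    @abstractmethod
    def score(self, text: str) -> float:
        """Return a normalized sentiment score in [-1.0, 1.0]."""

class PrimaryLLMProvider(SentimentProvider):
    """Placeholder for a manager's primary LLM backend (hypothetical)."""

    def score(self, text: str) -> float:
        # A real implementation would call the vendor's API and map its
        # output onto the strategy's [-1, 1] scale. Here we simulate an outage.
        raise ConnectionError("primary backend unavailable")

class BackupLLMProvider(SentimentProvider):
    """Placeholder for an alternative vendor or an in-house model."""

    def score(self, text: str) -> float:
        # Trivial keyword stand-in so the sketch runs end to end.
        positive = sum(w in text.lower() for w in ("beat", "growth", "upgrade"))
        negative = sum(w in text.lower() for w in ("miss", "lawsuit", "downgrade"))
        total = positive + negative
        return 0.0 if total == 0 else (positive - negative) / total

class RedundantSentimentEngine:
    """Tries each provider in order, so a backend failure degrades
    gracefully instead of silently stalling the signal pipeline."""

    def __init__(self, providers: List[SentimentProvider]):
        self.providers = providers

    def score(self, text: str) -> Optional[float]:
        for provider in self.providers:
            try:
                return provider.score(text)
            except Exception:
                continue  # log the failure and fall through to the next backend
        return None  # all backends down: surface as a risk event, not as a zero

if __name__ == "__main__":
    engine = RedundantSentimentEngine([PrimaryLLMProvider(), BackupLLMProvider()])
    print(engine.score("Analysts upgrade the stock after the earnings beat"))  # 1.0
```

The design point is that the strategy only ever sees the normalized score, so backends are swappable in principle. The hard part, which this sketch deliberately leaves out, is demonstrating that scores from different providers are statistically interchangeable; without that evidence, changing providers really does mean starting from scratch, as noted above.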
The remedy to these investment-process risk exposures is generally that smaller is better. More controlled use of data is preferable, as it can ensure better end-to-end security and usability. Better still is in-house AI technology, which ensures internal control and removes the vulnerability to events at other AI companies to which backend users are exposed. That is what Plotinus built for itself and uses in running its investment strategies. ■
© 2023 Plotinus Asset Management. All rights reserved.