Sophisticated investors are (hopefully) already reaping the benefits of the AI wave by picking winners among the tech stocks that have enjoyed an AI-driven surge in value since the beginning of this year. It is rather more difficult, however, to take the next step: searching for AI opportunities among money managers deploying the technology. There is a very good reason for this.
Are Your AI-Related Investments in Closed-System ‘Standardized’ Opportunities?
It is easy to grasp the AI chatbot conceptually, and thus it is easy to see the upside potential for companies that can sell the capability or use this type of AI directly to automate run-of-the-mill standardized applications and activities (up to and including Hollywood screenwriting, if the WGA union is to be believed!). The keyword to note here is ‘standardized.’ Where a closed system is possible, continuous standardization of input data helps improve predictive accuracy. A closed system, in this context, is one in which both the inputs and the outputs are limited or clearly definable.
Large Language Model AI has a compounding advantage: it creates standardization. Each time a suggested output is accepted by the human end user, the user is accepting a progressively more standardized response. This, in turn, improves the accuracy of the next prompted suggestion, which helps the next, as the process is endlessly repeated. What ensues is a slow, dulling regimentation of output. In many cases this blandness is no disadvantage when the point is to get a task done more cheaply, quickly, and efficiently with AI than without it.
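For readers who like to see the mechanics, the feedback loop can be sketched in a few lines of Python. Everything here, the starting distribution, the acceptance rule, the number of rounds, is an assumption chosen purely for illustration, not a model of any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of the acceptance feedback loop described above. "Outputs" are
# points drawn around the consensus of everything accepted so far; each
# accepted output pulls the next suggestion toward that consensus.
accepted = list(rng.normal(loc=0.0, scale=1.0, size=20))  # diverse start

for step in range(200):
    center = np.mean(accepted)
    spread = np.std(accepted)
    suggestion = rng.normal(loc=center, scale=spread)
    # Assumed acceptance rule: the user keeps suggestions that sit close
    # to what has come before, and discards the outliers.
    if abs(suggestion - center) < spread:
        accepted.append(suggestion)

print(f"initial spread: 1.00, final spread: {np.std(accepted):.2f}")
# The spread of accepted output shrinks round after round:
# standardization compounds.
```

Because each accepted suggestion narrows the pool that generates the next one, the diversity of output contracts steadily, which is the regimentation described above.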
Very clever, yes, but can this style of AI be applied systematically to trade the markets? Can the language of the market be read and analyzed to produce actionable trades, just as human language prompts are read and analyzed to produce answers to questions?
In theory, yes; in practice, it is a whole lot more complicated.
The World of Securities Trading Is Not a Closed System
Traditional trading opportunities arise from a discovered or anticipated mispricing. If you expect a stock to rise to a higher price, you buy; if you think it will fall, you sell. There must, however, be opportunity, in other words, dispute. A will sell to B only if A and B hold conflicting opinions about the direction of the market. There is no right answer, only opinion.
Consider the fear of AI standardization of market data that has begun to be recognized around the company investor relations call with shareholders. As the call has become machine readable and machine interpretable, participating company executives have responded by standardizing it, turning it into a corporate version of Alan Greenspan’s Fed Speak. The ultimate consequence is that the initial advantage of having an AI system rapidly read the tea leaves of the tone and tenor of the call is lost once the company plans the event knowing it will be machine read. The information companies may be willing to communicate at the call may become so standardized that there is nothing to read. All the right things are said in the right order with the right tone. The trading advantage is lost. Why? AI standardization kills the AI trading opportunity.
Most forms of AI are honed to pursue correctness: the more correct the answers provided, the better the AI system. Once again, fine in theory, in a closed system. But the world of securities trading is not a closed system. It is an opinionated melee of attempted one-upmanship. The assumption held by some is that an AI model trained on market data can deliver increasingly accurate predictions, that the model will constantly improve, and that its predictions can therefore be trusted with ever more confidence. Unfortunately, this is not the golden goose it first appears to be, but an increasingly fragile, self-confirmatory system, which in the long run can end up as a dead goose with no golden eggs.
Heavily Data-Dependent AI Models Are Deceptively Skewed Towards the Most Recent Past
This highlights a deeper issue. The clamor for AI systems to achieve higher predictive accuracy requires more and more data. This has reached the point where, in the absence of such data, it is feasible to have the AI create versions of the data to satisfy the system’s need to be fed. To return to a point we have made on previous occasions, the growth of usable, storable, analyzable data is exponential. This means that systems which are heavily data dependent are deceptively skewed towards the most recent past. The reason is straightforward once you stop to think about it: there is far less long-term historical data to work with than there is recent data, and far less still compared with the data that will be generated in the future.
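A back-of-envelope calculation makes the skew concrete. The 40% annual growth rate and the 30-year window below are illustrative assumptions, not measured values; the point survives under any sustained exponential growth:

```python
# If data volume grows exponentially at rate g per year, the most recent
# years dominate any "train on everything" dataset.
g = 0.40          # assumed annual growth rate of data volume
years = 30        # assumed length of the full history

volume = [(1 + g) ** t for t in range(years)]   # data produced each year
total = sum(volume)
recent = sum(volume[-5:])                       # the most recent 5 years

print(f"share of all data from the last 5 years: {recent / total:.0%}")
# At 40% annual growth, roughly four fifths of the entire 30-year history
# was generated in the last five years, so a model trained uniformly on
# "all the data" is effectively a model of the recent past.
```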
AI systems trained on past (human) data must now contend with making predictions from AI-contaminated data (human generated plus AI generated). One can imagine the complexity when the human-generated data is itself the consequence of AI-influenced standardization: it will begin to resemble, or be indistinguishable from, the AI-generated data. The challenge investors face today parallels what George Orwell wrote allegorically in Animal Farm (1945): “The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.”
Again, in a closed system this may not be much of an issue. Yet, in the context of trading the market, you can imagine how this could embed layer upon layer of vulnerability, which could collapse with just one brush with contradictory empirical reality.
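The contamination loop can also be sketched in miniature. The blend proportions and the under-dispersion factor below are our own assumptions for illustration; this is not a description of any real model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy contamination loop: a "model" estimates the center and spread of its
# training data, then much of its own output is mixed back into the next
# generation's training set.
human = rng.normal(0.0, 1.0, size=1000)   # original human-generated data
data = human.copy()

for generation in range(10):
    mu, sigma = data.mean(), data.std()
    # Assumed flaw: the model slightly under-disperses its own output.
    synthetic = rng.normal(mu, sigma * 0.9, size=800)
    # Next generation trains on a blend that is mostly machine output.
    data = np.concatenate([rng.choice(human, 200), synthetic])
    print(f"gen {generation}: spread = {data.std():.2f}")

# The spread contracts sharply over the first generations and settles well
# below the human baseline: the system grows ever more self-confirming, and
# ever more exposed when reality departs from its narrowed consensus.
```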
Humility for Portfolio Managers and More Educated Decision Making by Prospective Allocators
So what is the solution to this problem?
In short, it is humility. Money managers deploying AI should accept predictive imperfection as the robust choice over a suspect and highly vulnerable perfection. That is the harder path to follow, and it includes being able to take the blows that come with incorrect predictive signals, rather than riding the easy highway of pursuing ever-greater AI predictive accuracy in ways that may lead to blow-ups such systems never see coming.
AI data generation is a very powerful tool, and it is something we at Plotinus use in our AI-based derived-data modeling, which is not heavily data dependent; but the tool must be used correctly. In fact, in our case only two real-time data feeds are used, from which our applied math theory approach is run via AI modeling.
It is dangerous (from the perspective of trading) simply to have an AI system fill a data vacuum to encourage and shape a more accurate answer amid a blur in which human-generated and AI-generated data cannot be distinguished. Instead, there must be a clear understanding of how and why AI data is being created and how it is being used. If this is the approach taken, then robust, though imperfect, AI trading systems can be built that will, over the long term, produce the improved returns that investors seek.
These thoughts may help the investor make more educated decisions when researching various AI-driven investment strategy offerings. ■
© 2023 Plotinus Asset Management. All rights reserved.