Is the propensity to benchmark the success of AI against games misleading and restricting the technology’s development?
As societal acceptance of Artificial Intelligence as the next significant shift in technology gains ground, we look at the narrow perspective from which its success is heralded in the media: as a demonstration of how it outsmarts its human equivalents.
Chess [1], Jeopardy [2], Go [3], Poker [4], Dota 2 [5] and many, many more.
Everyone has heard of AI’s recent successes in beating humans as part of the continuous “them against us—machines against humans” discourse that seems to satiate a sci-fi cultural appetite.
The common thread linking these successes is that they are all games. So why is it that, as we determine the impact this technology will have on the real world around us, we assign the success of that technology (itself artificial) to a completely artificial human construct: the game?
Playing God
Artificial “Intelligence” is being made, god-like, in the image of its creator, reflecting a particularly narrow, limited and unenlightened definition of intelligence.
So it is no coincidence that we find ourselves in a situation where thinking on artificial intelligence is unfortunately prone to being straitjacketed and channeled. An assumed (and incorrect) definition of intelligence requires verification, and such validation requires a suitably controlled environment for testing.
Ludo, Ludic, Ludicrous
Games are strictly defined, rules-based artifices. Their origins can be traced throughout human development, where they have clearly performed a role. There is, though, a central feature of any game: detachment from reality. This is particularly relevant in the context of developing artificial intelligence, which must deal with its essential artificiality before it can be brought to bear sensibly on the real world's non-artificial circumstances.
To envisage this conflict, just look at how ludicrously unrealistic the concept of realistic gaming can be. In first-person perspective gaming, you can hit the wall at 180 mph only to start all over again with a new life; compare that with the equivalent real-world scenario. The counter-argument is that this infinite-lives approach to machine learning is exactly what allows us to remove the restrictions of reality and build systems that learn progressively, comprehending all conceivable situations. Yes, this is logical, but beyond the logic there is also an unanswered, disquieting question: according to whose conception?
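To make the infinite-lives idea concrete, here is a minimal sketch of a learner that "crashes" thousands of times at no real cost because its toy world simply resets. Everything in it (the crash-speed environment, the candidate speeds, the simple trial-and-error value update) is our own illustrative assumption, not a description of any particular system:

```python
import random

# Toy "infinite lives" learner: each life, pick a speed; crashing just
# resets the episode, so failure carries no real-world consequence.
CRASH_SPEED = 180               # crashing at or above this ends a "life"
ACTIONS = [100, 140, 180, 220]  # candidate speeds the learner can try
value = {a: 0.0 for a in ACTIONS}  # running estimate of each speed's payoff
counts = {a: 0 for a in ACTIONS}

def run_one_life(speed):
    """One episode: faster safe driving scores higher; a crash is penalized."""
    if speed >= CRASH_SPEED:
        return -100.0           # the crash costs nothing real: we just reset
    return speed / 10.0

for life in range(10_000):      # "infinite lives" in miniature
    if random.random() < 0.1:   # occasionally explore a random speed
        action = random.choice(ACTIONS)
    else:                       # otherwise exploit the best estimate so far
        action = max(ACTIONS, key=lambda a: value[a])
    reward = run_one_life(action)
    counts[action] += 1         # incremental average of observed rewards
    value[action] += (reward - value[action]) / counts[action]

print({a: round(v, 1) for a, v in value.items()})
# The learner settles on 140 mph after thousands of consequence-free
# crashes. Reality, of course, offers no such reset button.
```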
Why Even Measure the Success of AI in the Context of Games?
Measurement—the structured parameters of a game allow comparisons to be made, but in a very restricted sense.
Control—AI is a technology in development, and it is flawed and vulnerable in many respects, as anything in development is. It would be disingenuous to deny those flaws, so the best way to remove focus from those yet-to-be-solved problems is to avoid encountering them. Games provide artificial controls: the ‘what if’ scenarios that may occur can be defined to a high degree, which helps reduce error.
Human Misconception—human society is universally locked into a particular vision and definition of what constitutes intelligence. This is, conveniently, a self-fulfilling ‘Creator’ definition, which could ironically be approximated as:
Academic = Intelligent, Nerd = Intelligent, Chess Grandmaster = Intelligent
Within a context, the above shorthand for intelligence can be considered correct, but only if one recognizes that it is a definition within a context and is thus limited by that context.
When designing an intelligent system, it is imperative that we understand the criteria we need to measure against in order to determine whether we have succeeded. In using a curtailed definition of intelligence (whilst failing to recognize the curtailment), we also automatically adopt the standard testing mechanisms that go with it.
Where to Look for Alternatives?
Thankfully, a vast wealth of alternative, non-humanly defined intelligence sits under our noses: nature. Nature is not dependent on human thinking, nor on its ensuing definitions, yet it is awash with ways of doing things that, from a human perspective, can be considered systems. Whether or not we wish to arrogantly define these as ‘intelligent’ systems is beside the point; they work outside of human definition and yet have all the components we require to understand them as systems: purpose, functional elements, dependencies, outcomes, consequences and so on.
Let the Sunshine In
To begin to explore the limitations of defining intelligence, ask: is a plant an intelligent being? Stop there, before being dragged into a philosophical meander on the nature and essence of being, and reduce and refine the question: is heliotropic motion in a plant intelligent? Interestingly, the answer will depend on the philosophical perspective one has adopted toward the broader first question, but in this instance we shall apply our earlier facetious definition of intelligence, so we can firmly answer:
No!
The plant won’t be writing a thesis, inventing software or beating you at chess; hence it is certainly not intelligent.
If, however, we were playing a game of who can photosynthesize the most, the plant would beat the crap out of the academic, the nerd and the grandmaster combined, in all its glorious non-intelligence. So, in the direct task at hand, it is deploying an intelligent system sans brain or intelligence.
The plant is never playing a game: it is 100% real all the time, with failure always having direct consequences. There are further dimensions to explore in this type of natural scenario, such as what constitutes information processing and processing power, or its exemplary energy efficiency, to mention a few. Hence we conclude that we have a lot to learn from the real world to help us develop intelligent systems, which are artificial because we are their creators. We believe that the way forward requires redefining and broadening what we mean by intelligence in order to build specific, task-based “Nimble AI” systems that work in curtailed circumstances. This is the approach we have used over the last six years to build the technology we deploy in our area of expertise.
Developing AI for Asset Management
Asset management is not a game (despite the temptation to disparagingly classify it as such): there are very real consequences to winning and losing, and a very real responsibility on the part of the manager to ensure that the consequences of loss for an investor are mitigated. Bearing this in mind, we are of the opinion that asset management is a field that benefits immensely from the deployment of AI, in a very specific, task-based manner in which understanding of the problem to be solved and the implications of failure are inherently built into the system.
There is much debate on the future of AI and its integration into, or hybridization with, human intelligence. Perhaps this misses the point: useful AI is already based on our human definition of usefulness. Instead, this hybridization should be understood in terms of how the problem-solving process is approached from the outset, with success measured not by the results of a game but by whether the system is intelligibly solving a clear, humanly understandable process. This, in our opinion, is imperative, particularly in relation to asset management, where the “black box” must be avoided at all costs in the investment process.
In AI investment strategies, as in any other form of asset management, it is the investment decisions of the manager that determine the success or failure of a strategy. Thus it is vital that an AI manager such as ourselves be capable of explaining the investment process, with an investment philosophy, approach and methodology that are understandable to investors. Only then can the investor understand how the AI technology has been given clear deployment criteria, paving the way for successful investing.
Religiously Scientific
In moving away from the game mentality as a way to illustrate the power of AI, our firm has adopted the approach of taking on the real world and whatever it throws at our technology: embracing a rigorously scientific baptism of fire rather than a gently refined, optimized, tailored and choreographed P&L transition from model to reality. The essence of this approach is that error is not something to be feared and avoided but something to be embraced and understood. Asset management, in our opinion, demands more than AI solutions in which the general inexplicability of the system’s inner workings holds court; such an approach does not bode well for identifying what went wrong when, at some point down the road, it stops working. We, on the other hand, believe in explaining the investment process and in being able to see and show what the raw-mined stone looks like, so that investors can get a sense and feel for the polished gem. ■
Notes:
[1] The Guardian, “Deep Blue Win a Giant Win For Computerkind.” May 12, 1997.
[2] The Guardian, “IBM’s Watson Wins Jeopardy Clash.” February 17, 2011.
[3] Reuters, “Google’s AlphaGo Clinches Series Win Over Chinese Go Master.” May 25, 2017.
[4] Forbes, “Facebook’s New ‘Superhuman’ AI Can Beat the World’s Top Players.” July 11, 2019.
[5] MIT Technology Review, “Military Artificial Intelligence Can Be Easily Fooled.” October 21, 2019.
© 2019 Plotinus Asset Management LLC. All rights reserved.