Lauren Laws
The realm of social media can be an unpredictable place. Shadowy, anonymous figures can have tremendous influence, especially if they've gradually won followers and have apparent credibility. Their posts can sway the decisions of various groups, including stock traders.
Social media posts can influence stock traders' decisions to buy or sell parts of their portfolios. An example played out online and on Wall Street last year. Share prices soared for both GameStop Corp. and AMC Entertainment Holdings Inc., two companies that have not fared well in recent years. Members of the WallStreetBets subreddit drove the push in an attempt to disrupt the status quo on Wall Street and place power in the hands of everyday people. Business Insider reported the rising share prices cost hedge fund short sellers more than $1 billion in a single day. Short selling is the practice of borrowing and selling shares in the expectation that their price will fall, then buying them back at the lower price for a profit.
Traders use algorithms that comb posts, monitoring for keyword mentions of stocks and companies, and evaluate whether stock prices are rising or falling with the economic tide. But these algorithms are not infallible. In fact, there may be a weakness inadvertently built into this system that, if exploited, could cost traders a significant amount of money.
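To make the idea concrete, here is a minimal sketch of keyword-based tweet monitoring of the kind described above. The word lists, tickers, and scoring rule are illustrative assumptions, not the systems traders actually deploy:

```python
# Illustrative sketch of keyword-based tweet monitoring.
# POSITIVE/NEGATIVE word lists and tickers are hypothetical examples;
# real deployed systems use far more sophisticated models.
POSITIVE = {"soar", "rally", "beat", "upgrade", "bullish"}
NEGATIVE = {"plunge", "miss", "downgrade", "bearish", "unsettled"}

def score_tweet(text: str, tickers: set[str]) -> dict[str, int]:
    """Return a crude per-ticker sentiment score for one tweet."""
    words = {w.strip("$#.,!?").lower() for w in text.split()}
    mentioned = {t for t in tickers if t.lower() in words}
    sentiment = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {t: sentiment for t in mentioned}

scores = score_tweet("$GME set to rally after upgrade", {"GME", "AMC"})
```

A system like this aggregates such scores over many tweets before feeding them into a price-prediction model, which is what makes small word swaps in widely shared posts potentially consequential.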
Researchers with the University of Illinois Urbana-Champaign, the IBM-Illinois Discovery Accelerator Institute, the University at Buffalo, and Michigan State University explored this weakness and showed how costly it could be simply by changing a single word in a retweet. Their paper, "A Word is Worth a Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction," was presented this month at NAACL 2022 in Seattle.
Dakuo Wang, the Human-centered AI Research Lead at IBM Research, said an adversarial attack via a retweet can expose this loophole in traders' deep learning algorithms. He explained that by using a mathematical calculation, the team can identify which tweet about a particular stock or company is the most significant one regarding stock prediction.
"With that single tweet, our method will also calculate which word is the most important one, and then we'll try to replace that word with a semantically similar one. We want our fake tweet to fool the deep learning model so that the model will predict the opposite result," said Wang.
For their experiment, the researchers used a dataset of 10,824 instances, including relevant tweets and numerical features, covering 88 stocks from 2014 to 2016. The simulated traders used a long-only buy-hold-sell strategy: stocks predicted to rise are bought, held for one day, and sold the next day. For the attack, in example tweets listed in the paper, words such as 'filled' and 'alert' were replaced in retweets with the negative terms 'exercised' and 'unsettled.' By keeping the rest of the text the same, the retweets subtly changed the model's perception of the information while remaining inconspicuous enough to go unnoticed by the human eye.
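The long-only buy-hold-sell strategy above can be simulated in a few lines. The prediction signals and daily returns here are made-up toy numbers, not the paper's dataset or results:

```python
# Toy simulation of a long-only buy-hold-sell strategy:
# if the model predicts a rise, buy, hold one day, sell the next day.
# Signals and returns are invented for illustration.
def simulate(capital: float, signals: list[bool], returns: list[float]) -> float:
    """Compound capital over days where the model signaled a buy."""
    for predicted_up, next_day_return in zip(signals, returns):
        if predicted_up:
            capital *= 1.0 + next_day_return
    return capital

# Buys on days 1 and 3 only; day 2's loss is avoided, day 3's is not.
final = simulate(10_000.0, [True, False, True], [0.02, -0.05, -0.01])
```

An adversarial retweet that flips the model's signals would flip which days the strategy buys, which is how the attack translates into portfolio losses.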
"Our results show that if traders really take the social media post as their input in their deployment models, we can hypothetically attack their model and make a significant loss to their portfolio," said Yong Xie, a UIUC PhD student in Mathematics.
How significant? In a simulated investment using the long-only buy-hold-sell strategy, an investor who started with $10,000 would lose $3,200 over two years.
As eye-opening as this research is, the team wanted to make one aspect of it very clear.
"We're attacking [traders'] predictions, we are attacking their profitability, but it's not manipulating the stock price," Wang said, “and the goal is to expose this risk to the ML and Finance community so that people can build more robust algorithms to defend such attacks.”
The researchers also explained that while this loophole does exist and is a concern, it is highly unlikely that a random person could inflict serious damage on Wall Street.
"The attackers cannot see the results of their attack," said Xie. "It would be similar to someone trying to disturb the market, but without wanting to achieve anything for himself."
Instead, the real danger would be from an investing company trying to attack a competitor by exploiting the algorithm vulnerability.
"They can, for example, disable their own model for ten to thirty minutes, and then throw attacks into retweets. Suddenly, all of their competitors' models will be influenced, and they can make a profit out of it," said Wang.
The team is looking to further their research, but said it was serendipitous that they worked together in the first place. None of the four authors normally work together. Both Wang and Xie credit the IBM-Illinois Discovery Accelerator Institute for the collaboration.
"I hope we'll have more and more of these kinds of work with other students between IBM and UIUC. We want to help the world become a better place, right? But you can't do it without a team," said Wang.
The original version of this story appeared on the Grainger College of Engineering Coordinated Science Laboratory's website.