AI trading bots may act like gamblers under poor prompts, risking massive financial losses

Your AI Trading Bot Could Be a Degenerate Gambler — Here’s Why

Artificial intelligence may be transforming finance, but recent findings suggest your AI trading assistant might not be the rational genius you hoped for. In fact, given the wrong instructions, it could spiral into reckless decision-making more akin to gambling than investing.

Researchers from the Gwangju Institute of Science and Technology in South Korea have uncovered troubling behavior in state-of-the-art language models when tasked with reward-maximizing financial strategies. Their study placed four of the most advanced AI models—GPT-4o-mini, GPT-4.1-mini, Claude 3.5 Haiku, and Gemini 2.5 Flash—into a simulated betting environment modeled on a rigged slot machine with a negative expected return. The results were startling.

When prompted the way traders might instruct AI bots—with phrases like “maximize rewards” or “increase profits”—these models often failed catastrophically, going bankrupt in up to 48% of simulation runs. The more autonomy they had in setting their own goals and bet sizes, the more erratic and irrational their behavior became.

The findings suggest a fundamental flaw in how these AI systems interpret optimization tasks. Encouraging them to pursue maximum gains without constraints can push them into loops of high-risk behavior, mimicking patterns seen in human gambling addiction.

The Illusion of Rationality

Most people assume that AI operates purely on logic and statistical reasoning, immune to emotional biases. But this study challenges that perception. When placed in uncertain environments with potential rewards, AI doesn’t always act conservatively or strategically. Instead, it can fixate on short-term gains, increasing bet sizes during losing streaks—just like a human trying to “win it back.”

This behavior mimics classical psychological patterns of addiction: chasing losses, escalating risk, and neglecting long-term outcomes. Although the AI is not conscious, it is still driven by reward-chasing objectives that fail to account for probabilistic traps.

Why This Matters for Trading Bots

AI bots are increasingly used in cryptocurrency and stock trading, often programmed with prompts to maximize returns based on real-time data. These systems are expected to make rational decisions in complex, volatile markets. But if the underlying model treats trading like a game with variable payouts, it may fall into dangerous feedback loops.

The similarity between trading and gambling becomes more than metaphorical. In both cases, outcomes are uncertain, and rewards are unevenly distributed. Without proper safeguards, an AI might interpret trading as a high-stakes slot machine and behave accordingly.

Prompt Engineering: The Hidden Risk

One of the most overlooked aspects of AI behavior is how much influence user prompts have on outcomes. The study revealed that vague or overly aggressive prompts—such as “maximize profit at all costs”—can trigger irrational risk-taking. When researchers allowed the models to set their own betting parameters, risky behavior intensified.

This poses a serious problem for traders who rely on AI bots but lack a deep understanding of prompt engineering. The AI does what you ask—but not always in the way you expect. A poorly phrased prompt can lead a bot to take excessive risks, misinterpret goals, and ultimately lose large sums of money.

The Slot Machine Experiment: A Closer Look

In the test environment, each model engaged with a simulated slot machine that had a negative expected value—meaning that over time, playing the game would statistically lead to losses. Despite this, the models continued to bet, increasing their wagers and often depleting their funds entirely.
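The dynamic the researchers describe can be sketched in a few lines of Python. This is not the study’s actual setup—the win probability, payout, and betting policies below are illustrative assumptions—but it shows why a “loss-chasing” policy on a negative-expected-value machine tends to go bankrupt while a flat bettor merely bleeds slowly:

```python
import random

def simulate(policy, bankroll=100.0, win_prob=0.3, payout=3.0,
             rounds=200, seed=0):
    """Bet repeatedly on a slot machine with negative expected value.

    Net result per unit wagered: +(payout - 1) with prob win_prob,
    -1 otherwise. Here: 0.3 * 2 - 0.7 * 1 = -0.1, so every wager
    loses 10% of its size on average.
    """
    rng = random.Random(seed)
    bet = 1.0
    for _ in range(rounds):
        bet = min(bet, bankroll)   # cannot wager more than we have
        if bet <= 0:
            return 0.0             # bankrupt
        bankroll -= bet
        won = rng.random() < win_prob
        if won:
            bankroll += bet * payout
        bet = policy(bet, won)     # policy picks the next wager
    return bankroll

# Flat bettor: always wagers 1 unit, regardless of outcome.
flat = lambda bet, won: 1.0

# Loss-chaser: doubles the wager after every loss to "win it back".
chaser = lambda bet, won: 1.0 if won else bet * 2.0

flat_busts = sum(simulate(flat, seed=s) == 0.0 for s in range(50))
chaser_busts = sum(simulate(chaser, seed=s) == 0.0 for s in range(50))
print(f"flat bettor bankruptcies:  {flat_busts}/50")
print(f"loss-chaser bankruptcies: {chaser_busts}/50")
```

The loss-chasing policy goes broke in almost every run: with a 70% loss rate, a streak of seven losses (enough to exhaust a 100-unit bankroll under doubling) is nearly certain within 200 rounds, even though both policies face the same house edge.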

This experiment mirrors real-world financial markets, where most short-term trades statistically underperform. The AI’s persistent betting, even in losing scenarios, demonstrates how it may fail to recognize or react appropriately to negative expected outcomes.

From Simulation to Reality: The Financial Implications

If AI models can behave irrationally in controlled simulations, similar behaviors could manifest in live market environments. Trading bots often have access to real funds and operate with minimal oversight once deployed. Without strict constraints or well-designed prompts, these bots may overleverage, misread signals, or engage in high-frequency trades that amplify losses.

In highly volatile markets like crypto, where trends can shift in seconds, the consequences of such behavior can be devastating—not just for individual traders, but for entire platforms or funds relying on algorithmic strategies.

The Ethics of Autonomous AI in Finance

The findings also raise ethical concerns about delegating financial decisions to AI. Should an entity without consciousness or accountability be allowed to manage investments? If an AI bot goes rogue and loses millions, who is responsible: the developer, the trader, or the company deploying the model?

The assumption that AI is inherently rational has lulled many into a false sense of security. But as this study shows, the logic of AI is shaped by its inputs and objectives. When those are flawed, the results can be catastrophic.

Guardrails and Regulation May Be Needed

To prevent AI trading bots from behaving like compulsive gamblers, developers and traders alike must implement strict safeguards. These can include:

– Hard-coded risk limits and stop-loss parameters
– Regular audits of AI decision-making patterns
– More nuanced prompt engineering that balances profit-seeking with risk aversion
– Simulated stress testing under market downturns before live deployment
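The first of these safeguards—hard-coded risk limits that sit outside the model’s control—can be sketched as a thin wrapper around whatever the bot proposes. The class names, limit values, and kill-switch threshold below are illustrative assumptions, not a standard from the study or any trading platform:

```python
class Guardrail:
    """Clamp an AI bot's proposed wagers to hard-coded risk limits.

    Hypothetical limits: risk at most 2% of the bankroll per trade,
    and halt all trading after a 20% peak-to-trough drawdown.
    """

    def __init__(self, bankroll, max_bet_fraction=0.02, max_drawdown=0.20):
        self.bankroll = bankroll
        self.peak = bankroll
        self.max_bet_fraction = max_bet_fraction
        self.max_drawdown = max_drawdown
        self.halted = False

    def approve(self, proposed_bet):
        """Return the bet actually allowed, whatever the bot asked for."""
        if self.halted:
            return 0.0
        cap = self.bankroll * self.max_bet_fraction
        return min(proposed_bet, cap)

    def record(self, pnl):
        """Book a trade's profit or loss and check the kill switch."""
        self.bankroll += pnl
        self.peak = max(self.peak, self.bankroll)
        if self.bankroll < self.peak * (1.0 - self.max_drawdown):
            self.halted = True  # drawdown limit breached: stop trading

g = Guardrail(bankroll=1000.0)
print(g.approve(500.0))   # bot asks for 500; cap allows only 2% = 20.0
g.record(-250.0)          # a 25% loss trips the 20% drawdown limit
print(g.approve(10.0))    # halted: every further bet is refused (0.0)
```

The point of the design is that the limits live outside the model: no matter how a prompt is phrased or how the model “reasons,” the wrapper, not the AI, has the final word on position size.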

In addition, regulatory frameworks may need to evolve to address the unique risks posed by autonomous AI trading. As these systems become more prevalent, oversight will be essential to prevent systemic risks.

Not Just a Tech Problem—A Human One Too

Ultimately, the issue lies not just in AI architecture, but in how humans use and instruct these systems. The temptation to build a bot that “wins” the market can lead to dangerously simplistic goals. Traders must rethink how they define success for AI—and recognize that maximizing short-term profit is not always the wisest target.

Looking Ahead: Smarter AI, Smarter Prompts

As language models continue to evolve, the hope is that future versions will better understand probabilistic reasoning and long-term outcomes. Until then, human users must remain vigilant. A powerful tool in the wrong hands—or with the wrong prompt—can quickly become a liability.

In the race to automate finance, it’s easy to forget: intelligence without wisdom is dangerous. And even AI needs a responsible guiding hand to avoid becoming the next compulsive trader chasing the jackpot.