AI in finance still needs human oversight, says Eliza Labs founder Shaw Walters

Shaw Walters, the founder of Eliza Labs, has voiced strong skepticism about allowing AI agents to independently manage personal or institutional finances—at least with the technology available today. Speaking at the Token2049 conference in Singapore, Walters emphasized that although AI has made impressive strides, it remains fundamentally ill-suited for autonomous financial decision-making.

According to Walters, the real value of artificial intelligence in finance today lies not in replacing human judgment, but in augmenting it. AI excels at parsing vast sets of market data, identifying patterns, and executing trades at speeds no human could match. However, expecting an AI agent to act as a self-sufficient money manager is still premature.

“You probably don’t want to hand an AI a pile of cash and expect it to return profits,” Walters cautioned. He explained that AI agents are most effective as tools to interpret and structure quantitative information, not as independent decision-makers. These agents can serve as a bridge between raw data and actionable insights, enabling traders and analysts to make better-informed choices—but they shouldn’t be running the show.

Eliza Labs is taking a pragmatic approach to integrating AI into finance. In January, the company launched ElizaOS, an open-source operating system built on the Solana blockchain. The platform is designed to help developers build, deploy, and manage AI-driven agents and simulations. A key component of ElizaOS is its “marketplace of trust,” a mechanism that translates informal or speculative online chatter—often referred to as shill posts—into structured, testable trading strategies.

Walters sees this as a way to bridge the gap between social sentiment and real-world trading models. Instead of blindly trusting AI decisions, users can test and simulate strategies derived from social media buzz, giving them a way to assess potential outcomes before committing real capital.
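The idea of turning social chatter into a strategy you can test before committing capital can be sketched in a few lines. The code below is a minimal, hypothetical illustration of that workflow, not ElizaOS's actual implementation: every name and heuristic here is an assumption made for the example. It scores posts for net bullishness, then backtests a trivial "go long when sentiment is high" rule against historical prices.

```python
import re
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    followers: int  # crude proxy for a post's reach

# Toy keyword lists standing in for a real sentiment model
BULLISH = {"moon", "buy", "pump", "bullish"}
BEARISH = {"dump", "sell", "rug", "bearish"}

def sentiment_score(posts):
    """Net bullishness in [-1, 1], weighted by follower count."""
    total = weight = 0.0
    for p in posts:
        words = set(re.findall(r"[a-z]+", p.text.lower()))
        s = len(words & BULLISH) - len(words & BEARISH)
        total += s * p.followers
        weight += p.followers
    return max(-1.0, min(1.0, total / weight)) if weight else 0.0

def backtest(scores, prices, threshold=0.2):
    """Hold for one period whenever the score clears the threshold;
    returns the cumulative return of the simulated strategy."""
    ret = 1.0
    for score, p0, p1 in zip(scores, prices, prices[1:]):
        if score > threshold:
            ret *= p1 / p0
    return ret - 1.0
```

The point of the simulation step is exactly what Walters describes: the signal derived from "shill posts" is evaluated against historical data first, so a user sees how the rule would have performed before any real money is at stake.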

One of the major concerns Walters raised is the lack of true contextual understanding in today’s AI models. Financial markets are influenced by a web of interconnected variables—economic indicators, geopolitical events, regulatory changes, and even public sentiment. Current AI models may be adept at crunching numbers and spotting correlations, but they struggle with causation and nuanced interpretation of news or policy shifts.

Moreover, AI agents often lack the kind of ethical and fiduciary frameworks that human advisors must adhere to. They are not yet equipped to weigh long-term financial goals, risk tolerances, and changing life circumstances in a way that aligns with a client’s best interests. Without these capabilities, entrusting them with full financial control could lead to irresponsible or even dangerous outcomes.

Another issue Walters highlighted is accountability. When an AI makes a poor investment decision, who is held responsible? The developer? The platform? The user? This legal and moral ambiguity makes it risky to deploy AI agents in unsupervised financial roles.

Despite these limitations, Walters is optimistic about the future. He believes that with the right checks and balances, AI can play a transformative role in finance. The key, he argues, is to use AI as a co-pilot rather than an autopilot. When paired with human oversight, AI can help detect market inefficiencies, analyze sentiment data in real time, and execute trades with precision.

Expanding on this, Walters pointed out that regulatory bodies are still catching up with the pace of AI development. In many jurisdictions, there are no clear guidelines for how autonomous financial agents should operate. Without a regulatory framework, widespread adoption of AI money managers is not just risky—it’s potentially illegal.

Looking ahead, Walters envisions a hybrid model where AI agents and human advisors collaborate. In this setup, AI would handle data-heavy tasks, such as market scanning and trade execution, while humans would retain control over strategic decision-making and ethical considerations. This model would combine the strengths of both parties—speed and scale from AI, and judgment and accountability from humans.
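The hybrid model described above amounts to an approval gate: the AI side proposes trades, but nothing executes until a human signs off. The sketch below is a hypothetical illustration of that pattern under my own assumptions; none of these class or function names come from ElizaOS or any real trading API.

```python
from dataclasses import dataclass, field

@dataclass
class TradeProposal:
    symbol: str
    side: str        # "buy" or "sell"
    size: float
    rationale: str   # model's explanation, surfaced to the human reviewer

@dataclass
class CoPilotDesk:
    executed: list = field(default_factory=list)

    def review(self, proposal, approve):
        """approve is a callable standing in for the human decision.
        Only approved proposals reach execution."""
        if approve(proposal):
            self.executed.append(proposal)  # a real system would route to a broker
            return True
        return False
```

For example, a reviewer policy might cap position size: `desk.review(proposal, lambda p: p.size <= 100)`. The design keeps judgment and accountability on the human side, while the AI is limited to surfacing proposals and rationales.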

He also emphasized the importance of user education. As AI tools become more accessible, there’s a risk that less experienced investors may put too much trust in systems they don’t fully understand. Part of Eliza Labs’ mission is to ensure that users are equipped with the knowledge and tools needed to use AI responsibly.

In conclusion, while AI is already reshaping the financial landscape, its current role should be limited to that of an assistant rather than a decision-maker. Walters and Eliza Labs are advocating for a thoughtful, measured integration of artificial intelligence into finance—one that enhances human capabilities without replacing them. Until AI can grasp context, ethics, and accountability, the smart money will continue to rely on human judgment.