Americans Are Surrounded by AI, but Most Still Distrust It, New Survey Finds

More than half of Americans have interacted with artificial intelligence in just the last few months, yet the technology ranks among the least popular institutions and figures tested in a new national poll.

A survey of 1,000 registered voters, conducted for NBC News by Hart Research Associates and Public Opinion Strategies between February 27 and March 3, found that AI has quietly become part of everyday life in the United States. Over 50% of respondents said they had used some kind of AI platform in the previous two to three months, whether through chatbots, image generators, recommendation systems, or productivity tools.

But familiarity has not translated into affection.

AI ranks near the bottom in public approval

When voters were asked how they feel about a range of political figures, institutions, and countries, artificial intelligence scored poorly.

Only 26% of registered voters said they view AI positively. A significantly larger group, 46%, said they view it negatively. That leaves AI with a net favorability rating of -20 points.

To put that in perspective, AI is less popular than several deeply polarizing political figures and institutions:

– Immigration and Customs Enforcement (ICE): -18 net favorability
– Republican Party: -14
– President Donald Trump: -12
– Vice President Kamala Harris: -17
– California Governor Gavin Newsom: -18

Only two entries in the list fared worse than AI:

– The Democratic Party: -22
– Iran: -53

In other words, among the entities tested in the poll, artificial intelligence sits near the very bottom of public esteem, despite the fact that most people are already using it.
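The arithmetic behind these rankings is simple: net favorability is the share viewing an entity positively minus the share viewing it negatively. A minimal sketch, using only the figures reported in this article, confirms both AI's -20 rating and the claim that exactly two entries fared worse:

```python
def net_favorability(positive_pct: int, negative_pct: int) -> int:
    """Net favorability = % viewing positively minus % viewing negatively."""
    return positive_pct - negative_pct

# AI's split from the poll: 26% positive, 46% negative.
ai_net = net_favorability(26, 46)
print(ai_net)  # -20

# Net ratings the article reports for the other entries tested.
others = {
    "ICE": -18,
    "Republican Party": -14,
    "Donald Trump": -12,
    "Kamala Harris": -17,
    "Gavin Newsom": -18,
    "Democratic Party": -22,
    "Iran": -53,
}

# Entries rated worse than AI: only two, matching the article.
worse = [name for name, net in others.items() if net < ai_net]
print(worse)  # ['Democratic Party', 'Iran']
```

Note that net favorability ignores respondents who are neutral or undecided, which is why the positive and negative shares for AI (26% + 46%) do not sum to 100%.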

A striking contradiction: high use, low trust

The core finding of the poll reveals a sharp disconnect:

Usage is growing fast: More than half of Americans have used AI tools recently. That includes obvious platforms like chatbots and image generators, but also less visible forms of AI embedded in search engines, email, social media feeds, navigation apps, and streaming recommendations.

But trust and comfort are lagging: Even as AI becomes routine, a clear majority of voters believe its risks outweigh its benefits, and overall sentiment remains decisively negative.

This tension suggests that AI in the U.S. is moving along two different tracks at once: rapid adoption driven by convenience and productivity, and rising anxiety driven by fears about safety, control, and the future.

Why do Americans feel so uneasy about AI?

The poll numbers don’t spell out the reasons behind the negativity, but recent public debates and behavioral trends point to several likely sources:

1. Job loss and economic anxiety
Many workers worry that AI systems will automate tasks they currently perform, from customer service and data analysis to writing, design, and basic programming. Even people who use AI to boost productivity may fear that the same tools could one day make their roles redundant or justify layoffs.

2. Misinformation and deepfakes
Highly realistic fake videos, audio clips, and images have already begun to surface, fueling concern that AI will make it far easier to spread disinformation, manipulate voters, impersonate individuals, or commit fraud. In an election year, those fears are especially intense.

3. Privacy and data concerns
AI tools are frequently trained on enormous datasets that may include personal information, social media posts, search histories, or user-generated content. Many people are unsure what data is being collected, how it’s stored, and who has access to it.

4. Lack of control and “black box” decisions
When algorithms make decisions about credit, hiring, medical triage, or policing, it’s often unclear how those decisions were reached. This “black box” nature of many AI systems raises questions about fairness, bias, and accountability.

5. Cultural unease and science-fiction baggage
Popular culture has spent decades telling stories in which AI goes rogue, displaces humans, or concentrates power in the hands of a few. Even if these scenarios are exaggerated, they shape how people interpret real-world advances.

How Americans are actually using AI

The same respondents who express skepticism often rely on AI without labeling it as such. Common use cases include:

Everyday productivity: Writing assistance, grammar correction, summarizing long texts, drafting emails or presentations.
Education and learning: Homework help, explanations of complex topics, language practice, and test preparation.
Creative projects: Generating images, brainstorming ideas, writing scripts or lyrics, and experimenting with new artistic styles.
Shopping and entertainment: Personalized recommendations on e-commerce sites, music and video platforms, and social feeds.
Navigation and planning: Route optimization, travel suggestions, and smart calendar scheduling.

For many users, these tools are convenient and helpful, sometimes indispensable. Yet the underlying system, “AI” in the abstract, still inspires wariness.

The perception gap: AI as tool vs. AI as concept

One reason for the contradiction in the poll may be the split between specific tools people like and the broader concept of AI they mistrust.

– When interacting with a particular app that helps them finish a task quickly, users might feel positive and satisfied.
– When asked instead about “artificial intelligence” as a whole (an abstract, powerful, largely invisible technology), their mental image shifts toward risk, loss of control, and worst-case scenarios.

This difference between concrete experience and abstract perception helps explain how AI can be widely used yet broadly disliked at the same time.

Political implications: AI as a new fault line

The favorability comparisons in the survey are not accidental. By placing AI alongside parties, politicians, and foreign countries, the poll positions it as a potential political issue, not just a tech trend.

Several consequences may follow:

Regulation pressure: As distrust grows, voters may demand tougher rules on data use, transparency, safety testing, and corporate accountability.
Campaign messaging: Political candidates could use AI either as a symbol of innovation and economic growth or as a warning sign of corporate overreach and job loss.
Partisan framing: Different parties may try to claim the role of “protector” against AI’s harms or “champion” of its benefits, further polarizing perceptions of the technology.

Given that AI is now less popular than both major U.S. political parties, there is room for leaders across the spectrum to appeal to public concern by promising stronger oversight.

What would ease public fears?

While the poll itself focuses on attitudes rather than solutions, several steps are likely to influence how Americans feel about AI in the coming years:

1. Clear and enforceable rules
Regulations around data privacy, model training, deepfake labeling, and liability for harm could reassure users that AI is not operating in a legal vacuum.

2. Transparency and explainability
If companies can show, at least in broad terms, how AI systems make decisions, what data they use, and how they’re tested for bias and safety, distrust may soften.

3. Visible benefits in everyday life
AI applications that clearly and tangibly improve healthcare, education, public safety, or accessibility, without obvious downsides, can shift perceptions from fear to cautious optimism.

4. Education and digital literacy
Teaching people how AI works, what its limitations are, and how to identify AI-generated content can reduce both unrealistic expectations and exaggerated fears.

5. Worker protection and upskilling
Policies that help workers adapt through retraining, education, and new job opportunities will be critical to counter the fear that AI will simply “replace” humans.

Why the poll matters right now

The survey’s timing is significant. AI adoption has accelerated dramatically in the last two years, with generative tools moving from niche experiments to mainstream platforms. At the same time, public debate about AI safety and ethics has surged.

This poll captures a moment when:

– AI is no longer theoretical; it is embedded in search engines, office tools, entertainment platforms, and customer service.
– Regulatory frameworks remain incomplete and uneven.
– Public understanding is catching up to the reality of how deeply AI systems are woven into daily life.

The result is a population that uses AI constantly, often without thinking about it, yet remains broadly uncomfortable with what it represents.

The road ahead: normalization or backlash?

Public attitudes toward new technologies can shift quickly. Historically, innovations such as the internet, social media, and smartphones went through their own cycles of hype, fear, normalization, and regulation.

AI may follow a similar path, but with higher stakes. Its reach spans more sectors, its speed of improvement is faster, and its capacity to alter information, labor, and power structures is greater.

The NBC News poll suggests that AI is entering a critical phase:

– Adoption is already high.
– Skepticism is already entrenched.
– Policy, governance, and public literacy are still catching up.

How companies, governments, and educators respond in the next few years will likely determine whether AI remains one of the least trusted forces in American life or gradually becomes seen as a manageable, largely beneficial part of the modern world.

For now, the message from voters is unequivocal: Americans are living in an AI-powered society, whether they like it or not, and most of them, so far, do not.