AI-Made Products: Sales Drop, How to Communicate It

Communication

March 26, 2026 · Updated April 18, 2026 · 10 min read

In short: in 2026, consumers openly distrust products marketed as "created by AI". The Edelman Trust Barometer measures a double-digit trust gap between those who see AI as an opportunity and those who see it as a threat; Pew Research documents that most adults are more concerned than excited; Kantar BrandZ signals a negative impact on awareness and preference when an AI framing replaces a human one. Structured transparency + "human+AI" positioning reduce the gap.

In 2026, communicating that a product was "created by AI" is no longer a marketing innovation shortcut: it's a risky positioning choice with measurable impacts on sales, trust and reputation. Data from Edelman, Pew Research and Kantar converge on the same point: consumers don't reject AI technology per se, but they reject the framing that removes the human from the creative process.

This guide synthesizes 2024-2026 evidence, concrete cases (Mattel Barbie, Coca-Cola Christmas, Dove "AI-free"), EU AI Act obligations and an operational framework for choosing between full disclosure, concealment and hybrid positioning. For the flip side on AI content originality, see also AI content marketing: all the same.

[Image: human hand and robotic component collaborating on product manufacturing — a human+AI metaphor]

The data: AI-made = sales and trust decline

The three benchmark 2025-26 surveys converge. The 2025 edition of the Edelman Trust Barometer finds trust in companies developing AI stuck at around 52% globally, with developed countries below 40% and a gap of more than 30 percentage points versus emerging countries. Pew Research, in its "How the US Public and AI Experts View AI" report (April 2025), documents that 51% of US adults are more concerned than excited, against just 11% who feel the opposite; among AI experts the proportion flips.

Kantar, in its BrandZ and Media Reactions study series, reports that fully AI-generated ads record lower perceived-quality and brand-differentiation scores than human-led ones, with a particularly marked effect in highly emotional categories (fashion, food, beauty, luxury). This is not a niche perception: it is the baseline against which any brand declaring itself "AI-made" communicates today.

Why consumers distrust: authenticity, job loss, uncanny valley

Three psychological mechanisms explain the effect.

Perceived authenticity. When a product is labeled as created by AI, consumers attribute less human effort to the process. Less effort means less authenticity, and less authenticity means lower willingness to pay. This causal chain is documented in studies published in the British Food Journal and Journal of Retailing and Consumer Services and is particularly relevant in categories where craftsmanship is part of the value.

Job fears. Pew Research reports that over 60% of US adults fear a negative impact of AI on employment in the next twenty years. When a brand showcases AI as the "creator", it amplifies an already present social fear: the consumer doesn't just buy a product, they participate in a system perceived as hostile to their own group.

Narrative uncanny valley. AI-generated content that is too perfect, too uniform or marred by small anatomical errors triggers measurable aesthetic discomfort. Kantar showed in 2025 that AI-generated advertising images score 20-30% lower on "distinctiveness" than traditional photography, even when consumers don't explicitly identify the images as AI-generated.

Case studies 2024-26: Mattel, Coca-Cola, Dove

Mattel & Barbie AI. In 2025 Mattel announced a partnership with OpenAI to integrate AI in Barbie products. The initial framing, centered on technology rather than on educational value for girls and parents, generated mixed coverage and a wave of criticism on privacy risks and loss of "human play". Subsequent communication rebalanced the message toward "AI as a tool under educational supervision".

Coca-Cola Christmas 2024. The holiday spot made with generative AI divided the public: part of the press called it "soulless", and consumers complained about a loss of authenticity compared to the brand's traditional Christmas spots. The exposure still generated brand awareness, but at a reputational cost that Kantar estimates in lost "brand love" points among boomer and Gen X audiences.

Dove "The Code" (2024). In the opposite direction, Dove published a campaign with an explicit commitment to never use AI-generated images to represent real beauty. The campaign strengthened the "Real Beauty" positioning and earned significant media, proving that "AI-free" is today a differentiating lever in identity-based categories.

Disclose, hide or hybrid: comparison with trust impact

The strategic decision reduces to four scenarios. None is universally better: the choice depends on category, target and regulatory context.

| Strategy | Willingness to buy | Trust impact | Reputational risk | Reference source |
|---|---|---|---|---|
| Full AI disclosure | Low in identity categories, medium in utility/B2B | Negative on authenticity, positive on transparency | Low if consistent with a tech brand | Pew Research 2025 |
| Hide AI involvement | High until it emerges | Collapses if discovered (breach of trust) | High: EU AI Act fines + backlash | EU AI Act Art. 50 |
| Hybrid (human+AI) | Higher than human-only in many tests | Positive if human control is explicit | Low | Reith & Grohs 2025 |
| AI-free claim | High in identity categories (beauty, food) | Very positive, differentiating | Low if verifiable | Dove "The Code" 2024 |

How to communicate AI without losing trust

Operationally, the companies that navigate this terrain best adopt five convergent practices.

1. AI is always the tool, never the creator. Linguistic construction matters. "Our team used AI to optimize X" produces radically different reactions from "created by AI". The winning formula always keeps a human subject as the protagonist.

2. Explicit human-in-the-loop. It's not enough to say "we used AI": the consumer must perceive that a human guided, verified and controlled the process. For sensitive products (health, finance, editorial content) this is almost a regulatory requirement as well as a reputational one.

3. Structured disclosure. A "how we work" page or a dedicated FAQ section explaining when and how AI is used builds trust without turning AI into the main claim. The information is available for those who seek it, but doesn't dominate commercial communication (a minimal markup sketch follows this list).

4. Brand consistency. A tech-first brand (SaaS, fintech, martech) can afford a more prominent AI framing than an artisanal or identity-based brand. The key is alignment between historical positioning and declared AI use.

5. Social proof as counterweight. When for strategic reasons AI must be communicated, always accompany it with reviews, case studies and satisfaction data. Social proof is one of the few levers that can compensate for the AI-label aversion bias.
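
To make practice 3 concrete, here is a minimal sketch of how a "how we work" disclosure section could also be exposed as machine-readable FAQ markup using schema.org FAQPage structured data. The questions, answers and page are hypothetical placeholders, not a prescribed template.

```python
import json

# A minimal sketch, assuming a hypothetical "how we work" disclosure page:
# build schema.org FAQPage JSON-LD for an AI-disclosure section.
# The questions and answers below are placeholders, not real policy text.
disclosure_faq = [
    ("Do you use AI in your products?",
     "Our team uses AI tools to draft and optimize variants; every asset "
     "is reviewed and approved by a human editor before publication."),
    ("Who is responsible for the final content?",
     "A named human editor retains editorial responsibility for everything "
     "we publish (human-in-the-loop)."),
]

json_ld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in disclosure_faq
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the
# disclosure page so the information is machine-readable as well as visible.
print(json.dumps(json_ld, indent=2))
```

The design choice mirrors the principle above: the disclosure lives in a dedicated, findable place (and in structured data) rather than in the main commercial claim.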

EU AI Act: transparency obligations from 2025-26

The EU AI Act (Regulation 2024/1689), in force since 1 August 2024, phases in obligations progressively through 2027. For marketing and communication, three sets of provisions are particularly relevant.

Art. 50: obligation to inform users when they interact with an AI system (e.g. a chatbot) and to label synthetic audio, video and text content that constitutes a deepfake. For AI-generated texts that inform the public on issues of general interest, labeling is required unless documented human editorial review is performed (a minimal decision sketch follows these three points).

Prohibitions (Art. 5), applicable since February 2025: banned practices include subliminal manipulation, exploitation of vulnerabilities, social scoring and some forms of biometric categorization. Marketing campaigns that use AI to exploit specific vulnerabilities (age, disability, socio-economic situation) fall under the prohibitions.

GPAI (general-purpose AI) systems: from 2 August 2025, obligations apply to foundation model providers and, cascading down, documentation obligations apply to those who integrate them. For brands: ask your AI provider for compliance documentation before using its models in commercial communication.
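
As a minimal decision sketch for the Art. 50 text-labeling point above: the record fields and the rule below are a simplified, illustrative approximation of the summary in this article (AI-generated public-interest texts need a label unless a documented human editorial review was performed), not the regulation's literal test and not legal advice.

```python
from dataclasses import dataclass

@dataclass
class ContentRecord:
    """Simplified, hypothetical record of how a published text was produced."""
    title: str
    ai_generated: bool             # text generated or substantially drafted by AI
    public_interest_topic: bool    # informs the public on a matter of general interest
    human_editorial_review: bool   # documented review with human editorial responsibility

def needs_ai_label(record: ContentRecord) -> bool:
    """Rough approximation of the Art. 50 text-labeling logic summarized above:
    AI-generated public-interest texts need a label unless a documented human
    editorial review was performed. Illustrative only, not legal advice."""
    if not (record.ai_generated and record.public_interest_topic):
        return False
    return not record.human_editorial_review

# Example: a reviewed AI-assisted article falls under the human-review exemption.
post = ContentRecord(
    title="Local election guide",
    ai_generated=True,
    public_interest_topic=True,
    human_editorial_review=True,
)
print(needs_ai_label(post))  # False: documented review, so no mandatory label
```

Tracking these flags per asset is also what makes the "documented human editorial review" exemption defensible if a regulator or journalist asks.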

A well-designed disclosure strategy isn't just a reputational choice: it's a compliance asset. On managing reputational crises linked to improper AI use, see crisis communication: 5 disasters 2025; on AI's impact on advisory advertising, see ChatGPT and advertising.

Do you need AI-friendly PR and communication?

Deep Marketing supports brands and companies in designing communication strategies that integrate AI without losing trust: structured disclosure, human+AI framing, management of reputational crises linked to generative content, EU AI Act compliance. Request a consultation or discover our press office and event management service.

Frequently Asked Questions

Do consumers want to know if a product is made by AI?

Yes, the majority want to know, but don't want AI to be the main claim. Pew Research 2025 documents that over 70% of US adults consider it important to be informed when a company uses AI in its products or services; at the same time, Edelman shows that trust collapses when AI is communicated as an autonomous creator. The solution is structured disclosure: clear information available for those who seek it, without turning AI into the protagonist of commercial positioning.

Why do AI-made products sell less?

For three combined mechanisms: lower perceived authenticity (less human effort = less value), social fear of AI's impact on jobs and aesthetic uncanny valley that reduces the perceived distinctiveness of creative assets. Effects add up in identity categories (fashion, beauty, food, luxury) and attenuate in utility and B2B categories, where AI is read as positive technical competence.

How to communicate AI use in products?

Three operational rules: always use a human subject in the sentence ("the team used AI to" instead of "created by AI"), make human control of the process explicit (human-in-the-loop) and structure disclosure in dedicated sections ("how we work" page, FAQ, technical documentation) rather than making it the main campaign claim. Accompany with social proof (reviews, certifications, case studies) to compensate for the aversion bias.

EU AI Act: what do I need to communicate?

According to Art. 50 of EU Regulation 2024/1689, users must be informed when they interact with an AI system (chatbot, virtual assistants) and synthetic deepfake audio/video content must be labeled. AI-generated texts on topics of public interest must be labeled unless documented human editorial review is performed. From February 2025, subliminal manipulation and exploitation of vulnerabilities via AI are also prohibited. Sanctions reach up to 7% of global turnover for the most serious violations.

Is AI disclosure mandatory in marketing?

Partially. In the EU, the AI Act requires disclosure for chatbots, deepfakes and synthetic content of public interest without human review. Outside these cases, there is no general obligation today to label every marketing asset that used AI. However, the global regulatory direction is clear: FTC in the US, advertising codes of conduct (IAP in Italy, ASA in UK) and sector guidelines are converging toward proactive disclosure as best practice, to protect both consumer and brand.

Is "AI-free" a sustainable positioning?

Yes, in identity and premium categories, as demonstrated by the Dove "The Code" campaign and by artisanal fashion and food brands. However, it requires real verifiability: an AI-free claim contradicted by third-party investigations generates severe reputational crises. The positioning is less suited to categories where production efficiency and technical optimization are part of perceived value (software, logistics, financial services).
