The Dilemma of Carbon‐Conscious Consumers: A Multi‐Study Investigation of Carbon Transparency in AI Use

Summary

This paper examines how consumers react when AI services disclose their carbon footprint. Across three complementary studies (qualitative grounded theory, an online experiment, and biometric validation), the authors develop and test a dual-pathway model: carbon transparency activates ethics awareness, which influences engagement through (1) a cognitive route — perceived brand responsibility — and (2) an affective route — moral emotions (guilt and pride). Disclosures increased perceived brand responsibility and elicited measurable guilt/pride responses; both pathways, separately and together, mediated the link between disclosure and intended engagement. Age moderated these effects: older participants showed stronger emotional and responsibility-related responses. The findings are validated behaviourally (intent measures) and physiologically (skin-conductance and facial-expression analysis).

Key Points

  • Carbon transparency in AI interfaces functions as a moral signal, not just an informational cue.
  • Exposure to carbon disclosures increases perceived brand responsibility (cognitive pathway).
  • Disclosures evoke moral emotions — chiefly guilt and pride — which independently boost engagement intentions (affective pathway).
  • The dual-pathway (parallel) mediation — brand responsibility and moral emotions — explains how transparency leads to higher engagement.
  • Biometric data (skin-conductance responses, SCR, and facial micro-expressions) corroborate self-reports, showing embodied moral responses to carbon information.
  • Age moderates the effect: older consumers tend to respond more strongly on both cognitive and emotional dimensions.
  • Practical implications: interface-level carbon labels, third-party certification badges and emotionally attuned UX can strengthen trust and drive ethical engagement; policy could standardise carbon disclosure for AI services.

Why should I read this?

Quick and frank: if you work on AI products, sustainability, UX or marketing, this paper tells you exactly why a little carbon label can change how people feel about your service — and not just what they think. It shows the emotional and credibility pay-offs (and gives biometric proof). Saves you time by cutting through abstract debates and offering a tested model you can use in design, comms or policy work.

Author style

Punchy: the authors make a clear, original move — they extend CSR signalling into intangible, algorithmic spaces and back it with multi-method evidence. If you care about credible, ethical AI or customer trust in digital services, the detailed mechanisms and the biometric validation make this more than an academic curiosity — it’s actionable.

Content summary

The paper begins by noting that AI’s environmental costs are largely invisible to users, creating a moral disconnect. Study 1 (grounded theory; N=28 interviews) found that carbon disclosures prompt surprise, concern and moral discomfort, and revealed two core dimensions: cognitive appraisal (brand credibility/responsibility) and affective response (guilt/pride). From this the authors propose a dual-pathway model.

Study 2 (between-subjects experiment; N=352) manipulated a carbon disclosure on a fictional AI app. Results showed the disclosure significantly increased perceived brand responsibility and reported moral emotions; both predicted engagement intention. Parallel mediation analyses confirmed that brand responsibility and moral emotions jointly mediated the disclosure → engagement link. Moderated-mediation tests showed stronger indirect effects for older participants.
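For readers unfamiliar with parallel mediation, here is a minimal sketch of the kind of analysis Study 2 describes (a Hayes-style model with two mediators), estimated on simulated data. The variable names, effect sizes, and bootstrap procedure are illustrative assumptions, not the authors' actual data or code.

```python
# Minimal parallel-mediation sketch on SIMULATED data (not the paper's data).
# X  = disclosure condition (0/1), M1 = perceived brand responsibility,
# M2 = moral emotions, Y = engagement intention.
import numpy as np

rng = np.random.default_rng(0)
n = 500

X = rng.integers(0, 2, n).astype(float)
M1 = 0.5 * X + rng.normal(0, 1, n)           # cognitive pathway (a1, assumed)
M2 = 0.4 * X + rng.normal(0, 1, n)           # affective pathway (a2, assumed)
Y = 0.6 * M1 + 0.5 * M2 + 0.1 * X + rng.normal(0, 1, n)

def ols(y, *cols):
    """Return OLS coefficients (intercept first) via least squares."""
    A = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a1 = ols(M1, X)[1]                  # X -> M1 path
a2 = ols(M2, X)[1]                  # X -> M2 path
b = ols(Y, X, M1, M2)               # Y regressed on X, M1, M2
b1, b2, c_prime = b[2], b[3], b[1]

ind1, ind2 = a1 * b1, a2 * b2       # indirect effect of each pathway
print(f"indirect via responsibility: {ind1:.3f}")
print(f"indirect via moral emotions: {ind2:.3f}")
print(f"direct effect (c'):          {c_prime:.3f}")

# Percentile bootstrap CI for the total indirect effect
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    a1b = ols(M1[i], X[i])[1]
    a2b = ols(M2[i], X[i])[1]
    bb = ols(Y[i], X[i], M1[i], M2[i])
    boot.append(a1b * bb[2] + a2b * bb[3])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"total indirect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A bootstrap confidence interval that excludes zero for each indirect effect is the usual evidence that both the cognitive and affective pathways carry the disclosure → engagement link, which is the pattern the paper reports.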

Study 3 (biometric validation; N=78) added physiological measures (SCR) and facial-expression analysis. The disclosure increased electrodermal arousal and produced facial markers consistent with guilt and pride. Biometric indices predicted engagement intention, providing convergent, non-self-report evidence for the affective pathway.

Context and relevance

This research matters because AI is being rapidly embedded into consumer life while its energy footprint remains opaque. The study connects CSR/signalling and moral-emotion theory to digital services, showing that simple interface-level transparency can convert invisible infrastructure costs into moral meaning and behavioural intent. For product managers, designers and policymakers, the findings offer evidence that transparency can be both ethically important and strategically valuable — it builds trust and motivates engagement, but it must be credible (certification, clear metrics) to avoid scepticism or perceived greenwashing.

Policy-wise, the authors argue for formalising carbon disclosure in AI governance (eco-labelling, standardised metrics, independent certification) and for adding environmental ethics as a core pillar of responsible AI alongside fairness, privacy and accountability.

Source

Source: https://onlinelibrary.wiley.com/doi/10.1002/mar.70143?af=R