Do’s and Don’ts of Using AI: A Director’s Guide
Summary
This Harvard Law Forum piece offers practical guidance for company directors who use AI tools in their corporate roles. It flags key risks: accidental disclosure of confidential information, discoverability of AI chats, loss of attorney-client privilege when transcribing conversations with counsel, and the hazard of relying on unverified AI outputs. The article stresses that AI should augment, not replace, human judgement, and recommends that boards set clear policies on approved tools, acceptable uses and required disclosures.
Key Points
- Directors should not upload or input confidential corporate data into public AI tools unless the company has validated the tool and ensured inputs won’t be used for model training.
- Information shared with chatbots may be discoverable in litigation or regulatory reviews, and could therefore be disclosed even if the chat history has been deleted.
- Using third-party recording or transcription tools for board meetings or communications with counsel risks exposing privileged or sensitive material.
- AI outputs can be inaccurate, outdated or biased — verify sources and don’t assume correctness without human review.
- AI is a support tool; directors must retain human oversight to meet duties of care and loyalty and to avoid delegating critical decisions to models.
- Boards should work with management to craft clear AI usage policies covering approved tools, permitted uses and disclosure requirements.
Content Summary
As AI becomes embedded in business workflows, individual directors are increasingly using chatbots and transcription tools for convenience. The article highlights specific pitfalls for directors: accidentally exposing trade secrets or personal data to AI vendors; potential discovery of AI chats by regulators or adversaries; the danger of recording privileged conversations; and the risk of relying on hallucinated or stale AI outputs. Practical advice includes using only company-approved AI tools for confidential material, avoiding AI for board minutes or counsel communications, verifying AI-generated facts, and keeping humans firmly in the decision loop.
Context and Relevance
This guidance sits at the intersection of corporate governance, legal risk and AI adoption. With regulators and litigants beginning to treat electronic AI records like other corporate records, directors face heightened exposure if they treat chatbots as private. The article is timely given the rapid uptake of generative AI across boardrooms and the growing need for formalised policies to manage confidentiality, privilege and decision-making standards.
Why should I read this?
Short version: if you’re a director and you’ve ever thought about asking ChatGPT to summarise board papers — don’t, not without checks. This piece spells out the sensible do’s and don’ts in plain terms so you don’t accidentally leak secrets, lose privilege, or rely on dodgy AI outputs. It saves you from tripping over basic risks while you learn the tech.
Source
Source: https://corpgov.law.harvard.edu/2025/09/14/dos-and-donts-of-using-ai-a-directors-guide/