AI for NEDs: The Cheat Sheet Every Board Member Should Have

Artificial Intelligence has entered the boardroom vocabulary faster than most directors expected. This cheat sheet gives non-executive directors and other board members a clear, neutral understanding of the key terms they are likely to encounter in AI discussions.

Artificial Intelligence (AI): Computer systems that perform tasks normally requiring human intelligence, such as recognising patterns or generating text. Boards should understand AI as a capability applied across functions, not a single technology.

Machine Learning (ML): A subset of AI where systems learn from data rather than following explicit rules. Most commercial AI solutions use ML. Boards should ask how models are trained and validated.
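
To make the contrast concrete, here is a minimal sketch (assuming Python with scikit-learn and a synthetic, purely illustrative dataset) of a model learning a decision rule from labelled examples rather than being programmed with one, then being checked against data it has never seen:

```python
# Minimal sketch: a model learns from labelled examples, then is
# validated on held-out data it never saw during training.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)   # "training"
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The held-out test is the point of the board question above: validation means performance on data the model did not train on, not performance on the training data itself.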

Generative AI (GenAI): AI that creates new content such as text, images or code. It can drive innovation but introduces accuracy, copyright and reputational risks.

Large Language Model (LLM): The type of model behind tools like ChatGPT. Trained on massive datasets, it works by predicting the next words in a sequence. Boards should ensure oversight of accuracy, privacy and usage rights.
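
A real LLM does this at vastly greater scale and sophistication, but the core idea of next-word prediction can be shown in a toy Python sketch (illustrative only, not how production models are built):

```python
# A toy next-word predictor (not a real LLM): count which word most
# often follows each word in a tiny corpus, then predict accordingly.
from collections import Counter, defaultdict

corpus = "the board reviews the risk and the board approves the plan".split()
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

print(successors["the"].most_common(1))  # most likely word after "the"
```

The governance point: the model has no notion of truth, only of what text is statistically likely, which is why oversight of accuracy matters.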

Neural Network: The mathematical framework that underpins most modern AI. Boards do not need technical detail but should recognise that complexity can limit transparency and explainability.

Algorithm: The set of rules or calculations a computer follows to reach a result. Algorithms are the foundation of AI decision-making and must be tested for fairness and reliability.

Training Data: The data used to teach an AI system. If it is incomplete or biased, outputs will be flawed. Boards should expect management to understand and document data sources.

Inference: The stage when an AI model is used to make predictions or generate results. Inference costs scale with usage and can affect margins.
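
The margin point can be made with simple arithmetic. A hypothetical Python sketch, using an assumed per-call cost, shows how inference spend grows linearly with adoption:

```python
# Illustrative arithmetic only: inference spend scales with usage.
cost_per_query = 0.002  # assumed cost in USD per model call (hypothetical)
for monthly_queries in (100_000, 1_000_000, 10_000_000):
    spend = monthly_queries * cost_per_query
    print(f"{monthly_queries:>10,} queries/month -> ${spend:>10,.2f}")
```

Unlike training, which is largely a one-off cost, this line item grows with every new user.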

Data Governance: The policies controlling how data is collected, stored and shared. It underpins trust, compliance and defensibility. Weak governance is a common cause of AI failure.

Data Provenance: The documented origin and ownership of data. Boards should ensure that data rights are clear and auditable to avoid legal or ethical exposure.

Model Drift: The decline in model performance as data or user behaviour changes. Boards should confirm that performance is monitored and that models are retrained periodically.
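
A minimal sketch of what such monitoring can look like in practice, using invented numbers: accuracy is tracked against the level measured at deployment, and a breach of tolerance triggers retraining:

```python
# Hypothetical monitoring: flag when accuracy drops more than a
# tolerance below the level measured when the model was deployed.
baseline_accuracy = 0.92   # assumed accuracy at deployment
tolerance = 0.05           # assumed acceptable degradation

monthly_accuracy = {"Jan": 0.91, "Feb": 0.90, "Mar": 0.85, "Apr": 0.82}
for month, accuracy in monthly_accuracy.items():
    drifted = (baseline_accuracy - accuracy) > tolerance
    print(f"{month}: accuracy={accuracy:.2f} retrain={'YES' if drifted else 'no'}")
```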

Evaluation: The process of measuring how accurately and safely an AI system performs. Boards should expect regular evaluation as part of operational governance.

Bias: Systematic unfairness in AI outcomes caused by skewed data or design. Bias poses reputational, ethical and legal risks. Boards should seek evidence of testing and mitigation.
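
One simple form of bias testing compares outcome rates across groups. The sketch below uses invented approval data; a material gap is not proof of unfairness, but it is the kind of evidence boards should expect to see examined:

```python
# Hypothetical disparity check: compare approval rates across groups.
# 1 = approved, 0 = declined; the data here is invented for illustration.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}
rates = {group: sum(vals) / len(vals) for group, vals in outcomes.items()}
gap = abs(rates["group_a"] - rates["group_b"])
print(f"approval rates: {rates}  disparity: {gap:.2f}")
```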

Explainability: The ability to understand how an AI system reached its conclusion. Lack of explainability limits trust and compliance. Boards should ensure decisions can be justified to regulators and customers.
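
Explainability techniques range from simple to sophisticated. At the simple end, a sketch (assuming scikit-learn and illustrative data) inspects which inputs a linear model weights most heavily, one way a conclusion can be traced back to its drivers:

```python
# Minimal sketch: inspect the learned weights of a simple model to
# see which inputs drive its decisions (illustrative data only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

for name, weight in zip(["feature_1", "feature_2", "feature_3", "feature_4"],
                        model.coef_[0]):
    print(f"{name}: weight={weight:+.2f}")
```

More complex models, including neural networks and LLMs, are far harder to interpret in this way, which is why explainability sits alongside transparency as a governance concern.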

Transparency: How openly a company discloses its AI methods, data and safeguards. Transparency builds confidence with investors, regulators and customers.

Accountability: Clarity about who is responsible for AI outcomes. Boards must ensure there is defined accountability at executive and operational levels.

Governance Framework: A structured approach to managing AI risk, such as the NIST AI Risk Management Framework. Alignment with recognised standards signals maturity and readiness for scale.

Ethical AI: The practice of ensuring AI aligns with social, legal and moral expectations. Boards should view this as a component of corporate responsibility, not just compliance.

Regulatory Landscape: The emerging set of AI-specific laws, including the EU AI Act and UK guidance. Boards should track these to ensure the company remains compliant as the rules evolve.

AI Strategy: The company’s plan for how AI supports its objectives and how risks will be managed. Boards should ensure AI investment aligns with the wider business strategy and delivers measurable value.