AI Glossary
A clear, accessible glossary of artificial intelligence terms. Learn key concepts like deep learning, transformers, and reinforcement learning — your quick reference to understanding today’s AI revolution.
Artificial intelligence has become part of everyday life — from virtual assistants to generative tools like ChatGPT. This glossary explains the key concepts, terms, and methods shaping the modern AI landscape in clear, accessible language.
Looking for technical definitions?
See the AI Model Glossary for a deeper dive into model architectures, training methods, and evaluation metrics.
| Term | Definition |
|---|---|
| Artificial Intelligence (AI) | The field of computer science focused on creating systems that can perform tasks that normally require human intelligence, such as learning, reasoning, and problem-solving. |
| Machine Learning (ML) | A subset of AI where algorithms learn from data and improve performance over time without being explicitly programmed. |
| Deep Learning | A type of machine learning using neural networks with many layers (deep architectures) to model complex data patterns. It powers modern AI systems like ChatGPT and image recognition. |
| Neural Network | A system of algorithms modeled loosely on the human brain that identifies patterns and relationships in data through interconnected “neurons.” |
| Transformer Architecture | A breakthrough neural network design introduced by Google in 2017 that enables parallel processing of text and long-range context understanding — the foundation of GPT models. |
| Generative Pre-trained Transformer (GPT) | A model family by OpenAI trained on massive text datasets to generate and understand human-like language. “Generative” means it creates text; “Pre-trained” means it learned from general data before fine-tuning. |
| Large Language Model (LLM) | A machine learning model trained on vast amounts of text to generate human-like responses. LLMs power chatbots, writing assistants, and reasoning tools. |
| Parameters | The numerical values that define a neural network’s learned knowledge. More parameters generally mean greater model complexity and capability. GPT-3, for example, has 175 billion parameters. |
| Training Data | The collection of text, code, and other information used to teach an AI model language patterns and facts during training. |
| Token | A unit of text (a word, part of a word, or punctuation mark) used by language models to process and generate content. Models like GPT handle input and output as sequences of tokens. |
| Prompt | The input text or question given to an AI model to generate a response. Crafting effective prompts is known as “prompt engineering.” |
| Prompt Engineering | The practice of designing and refining prompts to elicit more accurate, creative, or useful responses from AI systems. |
| Fine-Tuning | The process of adapting a pre-trained model to specific tasks or domains using additional targeted data. |
| Reinforcement Learning from Human Feedback (RLHF) | A training technique that improves model responses by using human reviewers’ preferences to guide what’s considered “good” or “bad” output. It’s what made ChatGPT conversational. |
| Multimodal AI | An AI system capable of processing multiple types of input — such as text, images, audio, or video — simultaneously. GPT-4 is an example. |
| Context Window | The amount of text (measured in tokens) an AI model can “remember” or consider in a single request or conversation. Larger context windows enable more coherent long-form responses. |
| Hallucination | When an AI confidently produces incorrect or fabricated information. Reducing hallucinations is a major goal in LLM development. |
| Inference | The process of generating output (responses, predictions, etc.) from a trained AI model based on user input. |
| Model Weights | The internal values learned during training that determine how the model processes information — essentially, its “knowledge.” |
| API (Application Programming Interface) | A software interface that allows developers to connect and use AI models within their own applications and services. |
| Open Source AI | AI models or code that are made publicly available for inspection, modification, and reuse — often associated with transparency and community-driven innovation. |
| Closed Source AI | Proprietary AI models, such as the GPT models behind ChatGPT, whose underlying code and training data are not publicly accessible, typically for commercial or security reasons. |
| Mixture of Experts (MoE) | An AI architecture that routes each query to specialized sub-models (“experts”), activating only part of the network per task. This improves efficiency and scalability without running the full model on every input. |
| Retrieval-Augmented Generation (RAG) | A method that lets models access external databases or documents during inference to provide more factual, up-to-date responses. |
| Autonomous Agent | An AI system that performs multi-step tasks independently, using reasoning and planning with little or no continuous human input — an active and rapidly developing area of AI. |
| Constitutional AI | Anthropic’s approach to AI safety in which models are trained to follow a written “constitution” of ethical principles, reducing reliance on human feedback for judging harmful outputs. |
| Alignment | Ensuring that an AI system’s goals and outputs align with human values, ethics, and intentions — one of the most significant challenges in advanced AI research. |
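To make the Token and Context Window entries above concrete, here is a deliberately simplified Python sketch. Real models use subword tokenizers (such as byte-pair encoding), not whitespace splitting; this toy version only illustrates the idea of turning text into tokens and trimming them to fit a context window.

```python
# Illustrative sketch only: real LLM tokenizers split text into subword
# units, not whitespace-separated words. The function names here are
# invented for this example.

def tokenize(text: str) -> list[str]:
    """Naive tokenizer: one token per whitespace-separated word."""
    return text.split()

def fit_context_window(tokens: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent tokens that fit in the window."""
    return tokens[-max_tokens:]

prompt = "Transformers process text as sequences of tokens"
tokens = tokenize(prompt)
print(len(tokens))                    # 7 tokens in this toy scheme
print(fit_context_window(tokens, 3))  # ['sequences', 'of', 'tokens']
```

In practice a single word often maps to more than one token, which is why token counts usually exceed word counts.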
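The Retrieval-Augmented Generation entry can also be sketched in a few lines. This is a minimal illustration under simplifying assumptions: retrieval here is plain keyword overlap over a hand-made document list, whereas production RAG systems typically use vector embeddings and pass the assembled prompt to a real language model.

```python
# Toy RAG pipeline: retrieve the most relevant document for a query,
# then prepend it to the prompt. Documents and function names are
# invented for illustration.

DOCUMENTS = [
    "The transformer architecture was introduced by Google in 2017.",
    "GPT-3 has 175 billion parameters.",
    "RLHF uses human preferences to guide model outputs.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the query."""
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so a model can ground its answer."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many parameters does GPT-3 have?", DOCUMENTS))
```

The key design idea is that the model never needs the facts in its weights: relevant text is fetched at inference time and injected into the prompt, which is what makes responses more current and verifiable.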
🧠 Part of the Bold Outlook AI Learning Series
Explore more guides, insights, and explainers on artificial intelligence, machine learning, and emerging technologies at BoldOutlook.com/AI.