Artificial Intelligence: A-Z glossary 

 
Starting a career in artificial intelligence is exciting, but the sheer amount of technical jargon can feel overwhelming, especially if you're new to the field. As an MA Artificial Intelligence conversion student, you'll need a solid grasp of key AI terms to understand important concepts, engage in discussions, and apply AI effectively in your future career.

Our glossary provides clear and concise definitions of some of the terms you could encounter, helping you build a strong foundation in AI from the start.  

A 

Algorithm  

A step-by-step procedure or set of rules designed to perform a specific task or solve a problem. In AI, algorithms process data and make decisions based on patterns and inputs. 
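
As a quick illustration, the short Python sketch below spells out a simple algorithm, finding the largest number in a list, as an explicit sequence of steps (the function name and sample data are purely illustrative):

```python
def find_largest(numbers):
    """Step-by-step procedure for finding the largest value in a list."""
    largest = numbers[0]           # Step 1: assume the first item is the largest
    for value in numbers[1:]:      # Step 2: examine each remaining item in turn
        if value > largest:        # Step 3: keep whichever value is bigger
            largest = value
    return largest                 # Step 4: report the result

print(find_largest([3, 41, 7, 12]))  # prints 41
```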

Array  

A data structure that stores a collection of items, typically of the same type, in an ordered manner. Arrays allow quick access to elements using their index, making them efficient for storing and manipulating sequences of data. 
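
As a quick sketch, here is indexed access using a Python list as a stand-in for an array (the temperature values are made up for illustration):

```python
temperatures = [18.5, 19.2, 21.0, 20.4]   # an ordered collection of values of the same type

print(temperatures[0])    # index 0 retrieves the first element: 18.5
print(temperatures[-1])   # negative indices count back from the end: 20.4

temperatures[2] = 21.5    # elements can be updated in place via their index
print(len(temperatures))  # the array holds 4 elements
```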

Artificial Intelligence (AI)  

The simulation of human intelligence by machines, particularly computer systems. AI encompasses various tasks such as learning, reasoning, problem-solving, perception, and language understanding. 

Artificial Neural Network (ANN)  

A computing system inspired by the structure and function of biological neural networks. ANNs are used in machine learning for tasks like image recognition and natural language processing. 

B

Backpropagation

A supervised learning algorithm used to train neural networks. It adjusts the weights of connections in the network by minimising the error between predicted and actual outputs. 
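
A minimal sketch of the idea for a single sigmoid neuron, assuming NumPy is available; the input, target, and learning rate are illustrative values chosen for this example only:

```python
import numpy as np

x, target = np.array([0.5, -1.0]), 1.0     # one training example (illustrative)
w, b, lr = np.array([0.1, 0.2]), 0.0, 0.5  # initial weights, bias, learning rate

for _ in range(1000):
    z = w @ x + b                    # forward pass
    pred = 1 / (1 + np.exp(-z))      # sigmoid activation
    error = pred - target            # gradient of 0.5 * (pred - target) ** 2
    dz = error * pred * (1 - pred)   # chain rule back through the sigmoid
    w -= lr * dz * x                 # propagate the gradient to the weights...
    b -= lr * dz                     # ...and to the bias

print(round(float(pred), 3))         # the prediction moves close to the target of 1.0
```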

Bayesian Network  

A probabilistic graphical model that represents variables and their dependencies as a directed graph. It uses probabilities to handle uncertainty, helping to make predictions or decisions based on available data.

Big Data  

Very large datasets that can be analysed computationally to reveal patterns, trends, and associations. Big data is often a foundation for training AI models. 

C

Chatbot

A software application designed to simulate human-like conversations, often used in customer service, information retrieval, or task automation. Chatbots use natural language processing to understand and respond to users. 

Computer Vision  

A field of AI that enables machines to interpret and understand visual data from the world. Applications include object detection, facial recognition, and medical imaging. 

D

Deep Learning

A subset of machine learning that uses neural networks with many layers (deep networks). It is particularly effective for tasks like image and speech recognition. 

Dataset  

A structured collection of data used for training and testing AI models. A dataset’s quality and size will significantly impact AI performance. 

E

AI Ethics

The branch of study addressing moral and ethical questions related to AI development and deployment. Topics include privacy, bias, accountability, and the impact of AI on jobs and society. 

Edge Computing  

A computing paradigm that processes data near its source rather than in centralised servers. This enables real-time analytics and faster decision-making in AI systems. 

F

Feature Engineering

The process of selecting, modifying, or creating features (input variables) to improve a machine learning model’s performance. It is a critical step in the AI workflow. 

Federated Learning  

A decentralised approach to training machine learning models where data remains on local devices, enhancing privacy and security. 

G

Generative AI

A branch of AI focused on creating new content, such as text, images, or music, that resembles the data it was trained on. Examples include GPT models and generative adversarial networks (GANs).

Gradient Descent  

An optimisation algorithm used to minimise the loss function in machine learning models. It iteratively adjusts model parameters to reduce prediction errors. 
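
A minimal sketch: using gradient descent to minimise the simple function f(x) = (x - 3)², whose gradient is 2(x - 3); the starting point and learning rate are arbitrary choices for illustration:

```python
x = 0.0              # arbitrary starting guess
learning_rate = 0.1

for step in range(100):
    gradient = 2 * (x - 3)         # derivative of (x - 3) ** 2
    x -= learning_rate * gradient  # take a small step against the gradient

print(round(x, 4))                 # converges towards the minimum at x = 3
```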

H

Hyperparameter

A configuration variable set before training an AI model. Examples include learning rate, number of layers, and batch size. 

Human-in-the-Loop (HITL)  

A model training approach that incorporates human input at various stages to improve accuracy and decision-making. 

I

Inference

The phase where an AI model applies its learned patterns to make predictions or decisions based on new input data. Inference happens after the training process. 

Internet of Things (IoT)  

A network of interconnected devices that communicate and share data. AI is often integrated with IoT to enable intelligent decision-making in real time.

J

Joint Probability

A statistical measure representing the likelihood of two events occurring simultaneously. It is used in probabilistic models and Bayesian networks. 
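
A short sketch estimating a joint probability from a handful of made-up observations about weather and traffic:

```python
# Each record notes whether it rained and whether traffic was heavy (toy data)
records = [
    ("rain", "heavy"), ("rain", "heavy"), ("rain", "light"),
    ("dry", "light"), ("dry", "heavy"), ("dry", "light"),
]

# P(rain AND heavy traffic) = matching records / total records
joint = sum(1 for r in records if r == ("rain", "heavy")) / len(records)
print(joint)  # 2 out of 6 observations, roughly 0.33
```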

K

Knowledge Graph

A data structure that represents relationships between entities in a network. It is used for search engines, recommendation systems, and semantic understanding. 

Kernel Trick  

A mathematical technique used in support vector machines to solve nonlinear problems by transforming data into a higher-dimensional space. 
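
A worked sketch of the idea in one dimension: the polynomial kernel k(x, y) = (xy + 1)² gives the same answer as explicitly mapping each point into a higher-dimensional space and taking a dot product there (the mapping below is the standard one for this kernel):

```python
import math

def feature_map(x):
    # Explicit map into 3-dimensional space: (1, sqrt(2) * x, x ** 2)
    return (1.0, math.sqrt(2) * x, x ** 2)

def kernel(x, y):
    # The same quantity computed directly in the original 1-D space
    return (x * y + 1) ** 2

x, y = 2.0, 3.0
phi_x, phi_y = feature_map(x), feature_map(y)
explicit = sum(a * b for a, b in zip(phi_x, phi_y))

print(round(explicit, 6), kernel(x, y))  # both equal 49.0
```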

L

Language Model

An AI system designed to understand and generate human language. Examples include GPT and BERT models. 

Loss Function  

A mathematical function that quantifies the difference between predicted and actual outputs. The goal of model training is to minimise loss. 
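
A minimal sketch of one common loss function, mean squared error, computed by hand for a few illustrative predictions:

```python
actual    = [3.0, 5.0, 2.0]   # true values (illustrative)
predicted = [2.5, 5.0, 3.0]   # model outputs (illustrative)

# Mean squared error: the average of the squared differences
mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
print(round(mse, 4))  # (0.25 + 0.0 + 1.0) / 3 ≈ 0.4167
```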

M

Machine Learning (ML)

A subset of AI that enables systems to learn and improve from data without explicit programming. It includes techniques such as supervised, unsupervised, and reinforcement learning. 

Model Overfitting  

A condition where a model performs well on training data but poorly on unseen data. It occurs when the model learns noise or irrelevant details from the training data. 

N

Natural Language Processing (NLP)

A branch of AI focused on the interaction between computers and human language. Applications include translation, sentiment analysis, and text summarisation. 

Neural Network  

A computing architecture made up of interconnected nodes (neurons) that process data in layers, loosely inspired by the structure of the human brain. Neural networks are fundamental to deep learning.

O 

Optimisation  

The process of adjusting model parameters to improve performance. It involves techniques like gradient descent and stochastic gradient descent. 

Overfitting  

See "Model Overfitting." 

P

Preprocessing

The preparation and cleaning of data before feeding it into an AI model. This includes tasks like normalisation, encoding, and handling missing values. 
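
A brief sketch of two typical preprocessing steps, filling in a missing value and min-max normalisation, applied to a made-up list of ages:

```python
ages = [23, 35, None, 41, 29]   # raw data with one missing value

# Handle the missing value by substituting the mean of the known entries
known = [a for a in ages if a is not None]
mean_age = sum(known) / len(known)
ages = [a if a is not None else mean_age for a in ages]

# Min-max normalisation: rescale every value into the range [0, 1]
lo, hi = min(ages), max(ages)
normalised = [(a - lo) / (hi - lo) for a in ages]
print(normalised)
```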

Predictive Analytics 

The use of AI and statistical methods to predict future outcomes based on historical data. 

Q 

Quantum Computing  

A cutting-edge field of computing that uses quantum mechanics principles to process information. It has potential applications in accelerating AI computations. 

Q-Learning  

A type of reinforcement learning algorithm that learns the value of actions in a given state to maximise cumulative reward. 
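
A minimal sketch of the core Q-learning update for a single step; the states, actions, reward, and learning parameters are arbitrary illustrative values:

```python
# Q-table: the estimated value of each (state, action) pair, initially zero
q = {("s0", "left"): 0.0, ("s0", "right"): 0.0,
     ("s1", "left"): 0.0, ("s1", "right"): 0.0}

alpha, gamma = 0.1, 0.9   # learning rate and discount factor

# The agent takes "right" in state s0, receives reward 1.0, and lands in s1
state, action, reward, next_state = "s0", "right", 1.0, "s1"

best_next = max(q[(next_state, a)] for a in ("left", "right"))
q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

print(q[("s0", "right")])  # 0.1 after this single update
```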

R

Reinforcement Learning (RL)

An area of machine learning where an agent learns to make decisions by interacting with an environment and receiving rewards or penalties. 

Regularisation  

Techniques used to prevent overfitting in machine learning models. Examples include L1 and L2 regularisation.
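
A small sketch of how L2 regularisation adds a penalty on large weights to an ordinary loss value; the weights, data loss, and penalty strength below are illustrative numbers only:

```python
weights = [0.8, -1.5, 0.3]   # model parameters (illustrative)
data_loss = 0.42             # loss measured on the training data (illustrative)
lam = 0.01                   # regularisation strength, a hyperparameter

# L2 regularisation penalises the sum of squared weights,
# discouraging the model from leaning too heavily on any one connection.
l2_penalty = lam * sum(w ** 2 for w in weights)
total_loss = data_loss + l2_penalty
print(round(total_loss, 4))  # 0.42 + 0.01 * 2.98 = 0.4498
```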

S

Scalability

The ability of an AI system to handle increasing amounts of data or complexity efficiently.  

Supervised Learning  

A machine learning approach where a model is trained on labelled data to predict outcomes. It is used for tasks like classification and regression.

Support Vector Machine (SVM) 

A machine learning algorithm used for classification and regression tasks. It works by finding the best boundary (or hyperplane) that separates data points into different classes while maximising the margin between them. 
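
A short sketch of fitting an SVM classifier, assuming the scikit-learn library is installed; the four two-dimensional points and their labels are made up for illustration:

```python
from sklearn.svm import SVC

# Toy data: four 2-D points with binary labels
X = [[0, 0], [1, 1], [4, 4], [5, 5]]
y = [0, 0, 1, 1]

model = SVC(kernel="linear")  # find the separating hyperplane with the widest margin
model.fit(X, y)

print(model.predict([[0.5, 0.5], [4.5, 4.5]]))  # expected output: [0 1]
```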

T

Tensor

A multidimensional array used in AI computations, particularly in deep learning frameworks like TensorFlow and PyTorch. 
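
A quick sketch using NumPy arrays to illustrate tensors of increasing rank (deep learning frameworks such as TensorFlow and PyTorch apply the same idea with their own tensor types):

```python
import numpy as np

scalar = np.array(5.0)                 # rank-0 tensor: a single number
vector = np.array([1.0, 2.0, 3.0])     # rank-1 tensor: a 1-D array
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])        # rank-2 tensor: a 2-D array
batch  = np.zeros((32, 28, 28))        # rank-3 tensor, e.g. 32 greyscale images

print(scalar.ndim, vector.shape, matrix.shape, batch.shape)
# 0 (3,) (2, 2) (32, 28, 28)
```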

Transfer Learning  

A machine learning technique where a pre-trained model is adapted to solve a new but related task, saving time and resources. 

U

Unsupervised Learning

A machine learning approach where a model is trained on unlabelled data to find hidden patterns or groupings. Examples include clustering and dimensionality reduction.
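
A brief sketch of one unsupervised technique, clustering, assuming the scikit-learn library is installed; note that the points carry no labels and the grouping is discovered from the data alone:

```python
from sklearn.cluster import KMeans

# Unlabelled 2-D points that happen to form two groups (illustrative)
X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # each point is assigned to one of the two discovered clusters
```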

Underfitting  

A condition where a model fails to capture the underlying trends in data, leading to poor performance on both training and test datasets. 

V 

Validation Set  

A subset of data used to tune model hyperparameters and prevent overfitting. It helps evaluate model performance during training. 

Variance  

A measure of how much a model's predictions vary for different data points. High variance can lead to overfitting. 

W

Weights

Parameters in a neural network that determine the strength of connections between neurons. Adjusting weights is the key process in training a model. 

Word Embedding  

A representation of text where words are mapped to continuous vectors in a multidimensional space. Examples include Word2Vec and GloVe. 
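
A toy sketch of the idea: each word is mapped to a vector, and words used in similar contexts end up with similar vectors. The three-dimensional vectors below are invented for illustration rather than taken from a real model such as Word2Vec:

```python
import math

# Invented 3-dimensional embeddings (real models use hundreds of dimensions)
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1: similar words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower: unrelated words
```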

X 

Explainable AI (XAI)  

AI systems designed to provide understandable and interpretable outputs. XAI focuses on transparency and accountability in decision-making processes. 

Y 

Yield  

The output or result generated by an AI model. Yield is often used to evaluate the efficiency and accuracy of algorithms. 

Z 

Zero-shot Learning  

A machine learning method where a model generalises to new classes or tasks without being explicitly trained on them. It relies on relationships and attributes learned from other tasks. 

Z-Score  

A statistical measurement that describes how many standard deviations a data point is from the mean. It is often used in anomaly detection.
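
A short sketch computing z-scores for a small made-up sample, assuming NumPy is available; the final value is flagged because it sits far from the mean:

```python
import numpy as np

values = np.array([10.0, 12.0, 11.0, 10.5, 9.5, 11.5, 10.0, 30.0])  # the last value looks unusual

z_scores = (values - values.mean()) / values.std()
print(np.round(z_scores, 2))

# Values whose |z| exceeds a chosen threshold (here 2) are often flagged as anomalies
print(values[np.abs(z_scores) > 2])  # only the 30.0 is flagged
```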

Getting into Artificial Intelligence as a non-STEM professional 

We’ve designed our 100% online MA Artificial Intelligence courses to help you get familiar with the fundamentals of AI technology and apply them to your work. With three different specialist pathways to choose from, you can tailor your studies to fit your chosen sector and career goals. You’ll also benefit from world-class techno-social research from the prestigious Web Science Institute. 

View all online AI courses