Many people are rightly concerned about the rapid, and in some cases exponential, impact that AI is having on work and life. There is no field, whether retail trade, manufacturing, military operations, law enforcement, education, health care, or the creative arts, where AI's impact is not already being felt, and the technology may become as indispensable to modern life as the smartphone is today.

The vocabulary used to describe AI is growing rapidly as more than 10,000 developers around the world work on brand-name applications covering nearly every aspect of how people act and interact with one another and with technology.

Below is a basic glossary of about 100 of the most commonly used AI terms from recent papers.

Basic Glossary of AI

• Action language: A language for specifying state transition systems, commonly used to create formal models of the effects of actions on the world. Action languages are widely used in artificial intelligence and robotics.
• Adaptive learning: A type of machine learning that allows the model to adapt to changes in the data. Adaptive learning models are often used in applications where the data is constantly changing, such as financial trading or fraud detection.
• Adversarial machine learning: A subfield of machine learning concerned with inputs crafted to fool models and with defenses against such attacks. Training techniques in this area often pit two models against each other in a game-like setting in order to produce more robust and secure systems.
• Affordance: The set of possible actions that can be performed on an object. For example, a doorknob affords turning.
• Algorithmic bias: Bias that is introduced into a machine learning model as a result of the way the data is collected or the way the model is trained. Algorithmic bias can lead to unfair or discriminatory outcomes.
• Attention: A mechanism in neural networks that allows the model to focus on specific parts of the input data. Attention is often used in natural language processing tasks, such as machine translation and text summarization.
• Autoencoder: A type of neural network that is trained to reconstruct its own input. Autoencoders can be used for dimensionality reduction, denoising, and image compression.
• Backpropagation: A method in machine learning for training artificial neural networks. Backpropagation works by calculating the gradient of the loss function with respect to the model parameters, and then updating the parameters in the direction of the negative gradient.
• Bayesian network: A probabilistic graphical model that represents the joint probability distribution of a set of variables. Bayesian networks can be used for a variety of tasks, such as classification, prediction, and decision-making.
• Beam search: A decoding algorithm used in natural language processing tasks such as machine translation and text summarization. Beam search works by keeping only a fixed number of the most promising partial outputs at each step, extending them, and pruning the rest (a short code sketch appears after this glossary).
• Bias: A systematic error in a machine learning model that causes it to favor one output over another. Bias can be introduced into a model in a number of ways, such as through the data that is used to train the model, the way the model is designed, or the way the model is used.
• Big data: Datasets that are too large or complex to be processed using traditional data processing applications. Big data is often used in machine learning applications, as it can be used to train models that are more accurate and robust.
• Black box model: A machine learning model that is not transparent, meaning that it is not possible to understand how the model makes its predictions. Black box models can be difficult to interpret and debug, but they can be very powerful.
• Boltzmann machine: A type of artificial neural network that is inspired by the statistical mechanics of physical systems. Boltzmann machines can be used for a variety of tasks, such as classification, prediction, and dimensionality reduction.
• Bounding box: An imaginary box drawn around an object in an image or video. Bounding boxes are used to identify objects in images and videos, and to track the movement of objects over time.
• Chatbot: A computer program that is designed to simulate conversation with human users. Chatbots are often powered by natural language processing and machine learning.
• Classification: The task of assigning a label to an input. For example, a classification model might be used to classify images as cats or dogs.
• Clustering: The task of grouping data points together based on their similarity. For example, a clustering model might be used to group customers together based on their purchase history (a short code sketch appears after this glossary).
• Cognitive computing: A type of artificial intelligence that is inspired by the human brain. Cognitive computing systems are able to learn and reason in a way that is similar to humans.
• Convolutional neural network (CNN): A type of artificial neural network that is commonly used for image processing tasks. CNNs are able to learn to recognize patterns in images, such as edges and shapes.
• Curse of dimensionality: The problem that occurs when the number of dimensions of a dataset increases, making it more difficult to find patterns in the data. The curse of dimensionality can make machine learning tasks more difficult and time-consuming.
• Data mining: The process of extracting knowledge from data. Data mining is often used in machine learning to find patterns in data that can be used to make predictions or decisions.
• Data science: The field of study that deals with the collection, analysis, and interpretation of data. Data science is a broad field that encompasses a variety of techniques, including machine learning, statistics, and data visualization.
• Deep reinforcement learning: A type of machine learning that combines deep learning with reinforcement learning. Deep reinforcement learning is used to train agents to learn how to behave in complex environments by trial and error.
• Deep learning: A type of machine learning that uses artificial neural networks to learn from data. Deep learning is a powerful technique that can be used for a variety of tasks, such as image recognition, natural language processing, and speech recognition.
• Decision tree: A type of machine learning model that uses a tree-like structure to make predictions. Decision trees are often used for classification tasks, such as predicting whether a customer will churn or not.
• Dimensionality reduction: A technique for reducing the number of dimensions in a dataset. Dimensionality reduction can be used to make data more manageable and to improve the performance of machine learning models.
• Ensemble learning: A technique for combining multiple machine learning models to improve their performance. Ensemble learning models are often more accurate than individual models.
• Epistemic uncertainty: A type of uncertainty that arises because the model does not know everything about the world. Epistemic uncertainty can be caused by limited training data, the complexity of the problem, or the limitations of the model itself, and it can often be reduced by gathering more data.
• Emotion recognition: The task of identifying emotions in human faces, speech, or text. Emotion recognition is a challenging task, but it is becoming increasingly important as AI systems are used in more and more applications.
• Empirical risk minimization: A principle in machine learning that states that the best model is the one that minimizes the error on the training data. Empirical risk minimization is often used to train machine learning models, but it can lead to overfitting.
• Expert system: A computer program that is designed to mimic the reasoning ability of a human expert. Expert systems are often used in domains where there is a lot of expertise, such as medicine or finance.
• Feature engineering: The process of transforming raw data into features that are more suitable for machine learning models. Feature engineering is an important step in the machine learning process, and it can have a significant impact on the performance of the models.
• Federated learning: A type of machine learning that allows multiple devices to train a machine learning model without sharing their data. Federated learning is used to protect the privacy of the data, and it is becoming increasingly popular as more and more data is collected on personal devices.
• Fuzzy logic: A type of logic that deals with uncertainty. Fuzzy logic is often used in AI systems to deal with the fact that the world is not always black and white.
• Generative adversarial network (GAN): A type of machine learning model that consists of two neural networks that compete with each other. GANs are used to generate realistic data, such as images or text.
• Generative model: A type of machine learning model that can generate new data. Generative models are often used to create synthetic data, such as images or text.
• Gradient descent: An iterative method for minimizing a function by repeatedly stepping in the direction of the negative gradient. Gradient descent is often used to train machine learning models (a short code sketch appears after this glossary).
• Hallucination: In the context of AI, a hallucination is output that is presented as if it were factual but is not grounded in the input or in reality. Hallucinations are a common failure mode of generative models, including large language models, and can occur even in carefully trained systems.
• Heuristic: A rule of thumb that is used to solve a problem. Heuristics are often used in AI systems to make decisions when there is no clear optimal solution.
• Hidden Markov model (HMM): A statistical model that is used to represent sequences of events. HMMs are often used in speech recognition and natural language processing.
• Imbalanced data: A dataset in which the number of examples in one class is significantly larger than the number of examples in other classes. This can make it difficult for machine learning models to learn to classify the minority classes accurately.
• Importance sampling: A technique for estimating properties of one probability distribution using samples drawn from another, with each sample reweighted to correct for the mismatch. In machine learning it can be used, for example, to give rare but important examples more influence when the data is imbalanced.
• Incremental learning: A type of machine learning in which the model is updated with new data as it becomes available. This is in contrast to batch learning, where the model is trained on all of the data at once.
• Inductive bias: The assumptions that a machine learning model makes about the data it is trained on. These assumptions help the model to learn more efficiently and generalize better to new data.
• Information retrieval: The process of retrieving relevant information from a collection of documents. This can be done by using keywords, phrases, or other search terms.
• Instance-based learning: A type of machine learning in which the model learns from the individual instances in the training data. This is in contrast to model-based learning, where the model learns a general rule from the data.
• Intent detection: The task of identifying the intent of a user's query or conversation. This can be used to improve the accuracy of machine learning models that are used for tasks such as chatbots and virtual assistants.
• Interpretability: The ability to understand how a machine learning model makes its predictions. This is important for ensuring that the model is making fair and unbiased decisions.
• Joint probability distribution: A mathematical function that describes the probability of two or more events occurring together. This can be used to represent the uncertainty in the data and to make more accurate predictions.
• Kernel method: A machine learning algorithm that uses a kernel function to map the data into a higher-dimensional space. This can be used to improve the performance of machine learning models on non-linear problems.
• Knowledge extraction: The process of extracting knowledge from data. This can be done by using machine learning algorithms to identify patterns and relationships in the data.
• Knowledge graph: A knowledge base that is represented as a graph. The nodes in the graph represent entities, such as people, places, or things. The edges in the graph represent relationships between the entities.
• Learning rate: A hyperparameter that controls how quickly a machine learning model learns. A higher learning rate causes the model to learn more quickly, but it may also make training unstable or cause it to overshoot a good solution.
• Linear regression: A machine learning algorithm that predicts a continuous value based on a set of independent variables.
• Logistic regression: A machine learning algorithm that predicts a binary value (e.g., spam or not spam) based on a set of independent variables (a short code sketch covering both linear and logistic regression appears after this glossary).
• Long short-term memory (LSTM): A type of artificial neural network that is used for sequence modeling. LSTMs are able to learn long-term dependencies in the data, which makes them well-suited for tasks such as natural language processing and speech recognition.
• Machine learning: A type of artificial intelligence that allows computers to learn without being explicitly programmed. Machine learning algorithms are trained on data and then use that data to make predictions or decisions.
• Markov decision process (MDP): A mathematical model that is used to represent decision-making problems. MDPs are used in a variety of applications, such as robotics, finance, and game theory.
• Maximum likelihood estimation: A statistical method for estimating the parameters of a probability distribution. Maximum likelihood estimation is used in a variety of machine learning algorithms, such as logistic regression and linear regression.
• Mean squared error (MSE): A measure of the error between a model's predictions and the actual values. MSE is often used to evaluate the performance of machine learning models.
• Natural language processing (NLP): The field of computer science that deals with the interaction between computers and human (natural) languages. NLP is used in a variety of applications, such as machine translation, text summarization, and sentiment analysis.
• Neural network: A type of machine learning algorithm that is inspired by the human brain. Neural networks are able to learn complex patterns in the data and make accurate predictions.
• Object detection: The task of identifying and locating objects in images or videos. Object detection is used in a variety of applications, such as self-driving cars and security systems.
• Overfitting: This is a phenomenon in machine learning where a model learns the training data too well and is unable to generalize to new data. This can happen when the model is too complex or when there is not enough training data.
• Parameter tuning: This is the process of adjusting the parameters of a machine learning model to improve its performance. The parameters are the values that control the behavior of the model, such as the learning rate or the number of hidden layers.
• Partial least squares regression: This is a statistical method that is used to predict a response variable from a set of predictor variables. It is a generalization of linear regression that can handle cases where the predictor variables are correlated.
• Perception: This is the ability of an artificial intelligence system to interpret sensory information, such as images or sounds.
• Planning: This is the ability of an artificial intelligence system to formulate and execute plans to achieve a goal.
• Predictive analytics: This is the use of data analysis to predict future events. It is often used in business to make decisions about marketing, finance, and operations.
• Principal component analysis (PCA): This is a statistical method that is used to reduce the dimensionality of a dataset. It does this by finding a set of new variables that are uncorrelated and that explain as much of the variance in the original dataset as possible (a short code sketch appears after this glossary).
• Reinforcement learning: This is a type of machine learning where an agent learns to behave in an environment by trial and error. The agent is rewarded for taking actions that lead to desired outcomes and penalized for taking actions that lead to undesired outcomes.
• Recurrent neural network (RNN): This is a type of neural network that is capable of processing sequential data. RNNs are often used for tasks such as speech recognition and natural language processing.
• Regularization: This is a technique used to prevent overfitting in machine learning models. It does this by adding a penalty to the loss function that discourages the model from becoming too complex.
• Risk assessment: This is the process of identifying, evaluating, and controlling risks. It is often used in business to assess the risks associated with new projects or investments.
• Rule-based system: This is a type of artificial intelligence system that uses rules to make decisions. The rules are typically expressed in a formal language, such as a programming language or a knowledge representation language.
• Scalability: This is the ability of a system to handle increasing amounts of data or traffic. Scalability is an important consideration for artificial intelligence systems, as they often need to be able to handle large amounts of data.
• Self-driving car: This is a car that is capable of driving itself without human intervention. Self-driving cars use a variety of sensors, such as cameras, radar, and lidar, to perceive their surroundings and make decisions about how to move.
• Semi-supervised learning: This is a type of machine learning where the training data is a mixture of labeled and unlabeled data. The labeled data is used to train the model, while the unlabeled data is used to improve the model's performance.
• Sentiment analysis: This is the process of identifying and extracting the sentiment, or opinion, from text. Sentiment analysis is often used in social media analysis and customer feedback analysis.
• Sequence labeling: This is the task of assigning a label to each element in a sequence. For example, in speech recognition, the labels might be phonemes or words.
• Shared task: This is a common task and dataset on which multiple teams or organizations compete or collaborate. Shared tasks are often used to benchmark the performance of different machine learning algorithms.
• Short-term memory: This is a type of memory that can store information for a short period of time. Short-term memory is often used by artificial intelligence systems to keep track of the current state of the world.
• Simulated annealing: This is a metaheuristic algorithm that is used to search for the global optimum of a function. Simulated annealing works by gradually "cooling" the search: early on it explores freely, occasionally accepting worse states so that it can escape local optima, and it becomes more selective as the temperature falls.
• Social intelligence: This is the ability of an artificial intelligence system to understand and interact with other agents in a social environment. Social intelligence is often used in chatbots and virtual assistants.
• Speech recognition: This is the task of converting speech into text. Speech recognition is often used in voice assistants and dictation software.
• Spoken language understanding: This is the task of understanding the meaning of spoken language. Spoken language understanding is often used in chatbots and virtual assistants.
• Support vector machine (SVM): A machine learning algorithm that can be used for both classification and regression tasks. SVM works by finding the hyperplane that best separates the data points into two classes (a short code sketch appears after this glossary).
• Supervised learning: A machine learning task where the algorithm is given labeled data, i.e., data with the correct output for each input. The algorithm learns from this data to predict the output for new inputs.
• System identification: The process of inferring the structure and parameters of a system from observed data. It can be framed as a supervised learning task, where the goal is to learn a model that predicts the system's output for new inputs.
• Transfer learning: A machine learning technique where a model trained on one task is reused as the starting point for a model on a new task. This can be helpful when there is limited data available for the new task, or when the new task is similar to the old task.
• Unsupervised learning: A machine learning task where the algorithm is not given labeled data. The algorithm learns from the data by finding patterns and relationships between the data points.
• Utility function: A function that maps from states to real numbers. The utility function represents the value of a state to an agent.
• Variance: A measure of how spread out a set of data is. A high variance indicates that the data points are spread out over a large range of values, while a low variance indicates that the data points are clustered together.
• Visualization: The process of representing data in a way that makes it easier to understand. Visualization can be used to explore data, identify patterns, and communicate findings.
• Weak AI: A type of artificial intelligence that is designed to perform specific tasks rather than to exhibit general, human-like intelligence. Weak (or narrow) AI describes most systems in use today and is often illustrated by early examples such as chess-playing programs.
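
A Few Glossary Terms in Code

The short Python sketches below illustrate a handful of the terms above. They are minimal, illustrative sketches rather than production code: they assume Python 3 with the NumPy and scikit-learn libraries installed where noted, and every dataset, number, and variable name in them is invented for demonstration.

Beam search. This sketch keeps only the two highest-scoring partial sentences at each step; the tiny vocabulary and the per-step probabilities are made up.

```python
import math

# Hypothetical per-step log-probabilities over a tiny vocabulary.
step_log_probs = [
    {"the": math.log(0.6), "a": math.log(0.3), "old": math.log(0.1)},
    {"cat": math.log(0.5), "dog": math.log(0.4), "the": math.log(0.1)},
    {"sat": math.log(0.7), "ran": math.log(0.2), "slept": math.log(0.1)},
]

def beam_search(steps, beam_width=2):
    """Keep only the `beam_width` best partial sequences at every step."""
    beams = [([], 0.0)]  # each beam is (sequence so far, cumulative log-probability)
    for probs in steps:
        candidates = []
        for seq, score in beams:
            for word, log_p in probs.items():
                candidates.append((seq + [word], score + log_p))
        # Prune: keep only the highest-scoring candidates for the next step.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams

for sequence, score in beam_search(step_log_probs):
    print(" ".join(sequence), round(score, 3))
```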
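
Clustering. This sketch groups invented "customers" into two clusters with scikit-learn's KMeans; the spend and visit figures are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two invented groups of customers described by (monthly spend, monthly visits).
low_spenders = rng.normal(loc=[20.0, 2.0], scale=3.0, size=(50, 2))
high_spenders = rng.normal(loc=[80.0, 12.0], scale=5.0, size=(50, 2))
X = np.vstack([low_spenders, high_spenders])

# Group the customers into two clusters based on the similarity of their features.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster centers:\n", model.cluster_centers_)
print("labels of the first five customers:", model.labels_[:5])
```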
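
Gradient descent and the learning rate. This sketch fits a one-variable line by repeatedly stepping in the direction of the negative gradient of the mean squared error, the same idea backpropagation applies inside neural networks. The data, the learning rate, and the number of steps are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=100)  # "true" slope 3, intercept 2

w, b = 0.0, 0.0          # model parameters, started from scratch
learning_rate = 0.01     # the learning-rate hyperparameter from the glossary

for step in range(2000):
    predictions = w * x + b
    errors = predictions - y
    # Gradient of the mean squared error with respect to each parameter.
    grad_w = 2.0 * np.mean(errors * x)
    grad_b = 2.0 * np.mean(errors)
    # Step in the direction of the negative gradient.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned slope {w:.2f} and intercept {b:.2f}")  # close to 3 and 2
```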
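
Linear and logistic regression. This sketch fits both models with scikit-learn (an assumed dependency); the "size versus price" and "suspicious words versus spam" examples are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Linear regression: predict a continuous value (an invented price-from-size example).
size = rng.uniform(50, 200, size=(100, 1))
price = 1000.0 * size[:, 0] + 20000.0 + rng.normal(scale=5000.0, size=100)
linear_model = LinearRegression().fit(size, price)
print("estimated price per unit of size:", round(linear_model.coef_[0], 1))

# Logistic regression: predict a binary value (an invented spam example).
suspicious_words = rng.integers(0, 20, size=(200, 1))
is_spam = (suspicious_words[:, 0] + rng.normal(scale=2.0, size=200) > 10).astype(int)
logistic_model = LogisticRegression().fit(suspicious_words, is_spam)
print("message with 15 suspicious words classified as spam?",
      bool(logistic_model.predict([[15]])[0]))
```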
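
Principal component analysis. This sketch builds three highly correlated synthetic features and uses scikit-learn's PCA to compress them into two uncorrelated components.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
signal = rng.normal(size=(200, 1))
# Three features that are mostly redundant copies of one underlying signal.
X = np.hstack([
    signal,
    2.0 * signal + rng.normal(scale=0.1, size=(200, 1)),
    -1.0 * signal + rng.normal(scale=0.1, size=(200, 1)),
])

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("shape after reduction:", X_reduced.shape)  # (200, 2)
```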
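
Support vector machine and supervised learning. This sketch trains a linear SVM on scikit-learn's bundled iris dataset, splitting the labeled data into a training set and a held-out test set.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Labeled data: flower measurements (inputs) and species (correct outputs).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The SVM looks for the separating hyperplane with the largest margin.
classifier = SVC(kernel="linear").fit(X_train, y_train)
print("accuracy on unseen test data:", round(classifier.score(X_test, y_test), 3))
```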

Author's Bio: 

Wm. Hovey Smith is the author of more than 20 books on outdoor, health, and business topics. He has also written two novels, the most recent of which is "The Goldfarb Chronicles." That novel, along with his other novel "Blood Ties" and his business books "Create Your Own Job Security," "Make Your Own Job," and "Real Wealth," was featured on a Broadway billboard last week.