AI Glossary

Last updated: March 2026

This glossary defines key artificial intelligence and machine learning terminology. It provides machine-readable definitions used in the FatbikeHero knowledge system, linking core ML concepts with the ontology of AI-Critical Art and Metadata Expressionism.
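Because the definitions below are published as schema.org JSON-LD, they can be consumed programmatically. The following is a minimal sketch of how a downstream tool might extract the defined terms from the `@graph`; the `jsonld_text` string here is a trimmed stand-in with the same node shapes as the real block, not the full glossary.

```python
import json

# Trimmed stand-in for the JSON-LD block published on this page.
# The real graph contains a WebSite, Person, WebPage, a DefinedTermSet,
# and one DefinedTerm node per glossary entry.
jsonld_text = """
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "DefinedTermSet",
      "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset",
      "name": "AI & Machine Learning Glossary"
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#accuracy",
      "name": "Accuracy",
      "termCode": "accuracy",
      "description": "The fraction of correct classification predictions."
    }
  ]
}
"""

data = json.loads(jsonld_text)

# Collect every DefinedTerm node in the @graph as {termCode: name} pairs,
# skipping the non-term nodes (WebSite, WebPage, DefinedTermSet, ...).
terms = {
    node["termCode"]: node["name"]
    for node in data["@graph"]
    if node.get("@type") == "DefinedTerm"
}

print(terms)  # {'accuracy': 'Accuracy'}
```

A real consumer would fetch the page and pull the script block out of the HTML first; the parsing step itself is just standard-library `json`.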

JSON-LD

{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "WebSite",
      "@id": "https://www.fatbikehero.com/#website",
      "url": "https://www.fatbikehero.com/",
      "name": "FatbikeHero"
    },
    {
      "@type": "Person",
      "@id": "https://www.fatbikehero.com/#artist",
      "name": "FatbikeHero",
      "url": "https://www.fatbikehero.com/",
      "sameAs": [
        "https://www.fatbikehero.com/"
      ]
    },
    {
      "@type": "WebPage",
      "@id": "https://www.fatbikehero.com/p/aiglossary#webpage",
      "url": "https://www.fatbikehero.com/p/aiglossary",
      "name": "AI & Machine Learning Glossary",
      "description": "A comprehensive, authoritative glossary of artificial intelligence and machine learning terms, covering fundamentals, deep learning, generative AI, fairness, evaluation metrics, reinforcement learning, and more.",
      "isPartOf": {
        "@id": "https://www.fatbikehero.com/#website"
      },
      "about": {
        "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset"
      },
      "creator": {
        "@id": "https://www.fatbikehero.com/#artist"
      },
      "inLanguage": "en"
    },
    {
      "@type": "DefinedTermSet",
      "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset",
      "name": "AI & Machine Learning Glossary",
      "description": "A comprehensive, authoritative glossary of artificial intelligence and machine learning terms, covering fundamentals, deep learning, generative AI, fairness, evaluation metrics, reinforcement learning, and more.",
      "url": "https://www.fatbikehero.com/p/aiglossary",
      "inLanguage": "en",
      "creator": {
        "@id": "https://www.fatbikehero.com/#artist"
      },
      "isPartOf": {
        "@id": "https://www.fatbikehero.com/p/aiglossary#webpage"
      },
      "hasDefinedTerm": [
        { "@id": "https://www.fatbikehero.com/p/aiglossary#ablation" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#accuracy" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#activation-function" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#adagrad" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#adam" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#agent" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#agentic" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#attention-mechanism" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#attribute" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#auc" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#axis-aligned-condition" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#backpropagation" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#bagging" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#baseline" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#batch" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#batch-normalization" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#batch-size" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#bias-ethics" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#bias-model" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#binary-classification" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#boosting" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#calibration" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#categorical-data" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#chain-of-thought" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#classification" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#classification-threshold" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#clustering" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#cnn" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#confirmation-bias" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#confusion-matrix" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#context-window" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#convergence" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#counterfactual-fairness" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#cross-entropy" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#cross-validation" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#data-augmentation" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#decision-boundary" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#decision-forest" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#decision-tree" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#deep-learning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#demographic-parity" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#dimensionality-reduction" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#distillation" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#dropout" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#early-stopping" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#edit-distance" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#embedding" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#ensemble" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#entropy" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#epoch" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#equalized-odds" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#evaluation" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#example" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#f1-score" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#factuality" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#false-negative" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#false-positive" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#feature" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#feature-cross" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#feature-engineering" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#feature-importance" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#fine-tuning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#gini-impurity" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#gradient" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#gradient-boosting" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#gradient-clipping" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#gradient-descent" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#ground-truth" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#hallucination" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#hidden-layer" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#hyperparameter" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#imbalanced-dataset" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#information-gain" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#label" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#layer" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#learning-rate" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#linear-regression" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#llm" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#logistic-regression" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#loss" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#lstm" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#machine-learning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#mean-squared-error" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#mini-batch" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#model" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#model-capacity" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#momentum" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#multi-class-classification" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#multi-head-attention" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#neural-network" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#normalization" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#oblique-condition" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#one-hot-encoding" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#out-group-homogeneity-bias" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#outlier" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#overfitting" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#parameter" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#pca" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#policy" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#pooling" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#precision" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#prediction-bias" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#predictive-parity" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#pre-trained-model" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#prompt" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#rag" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#random-forest" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#recall" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#regularization" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#reinforcement-learning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#relu" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#rnn" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#roc-curve" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#self-attention" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#semi-supervised-learning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#shrinkage" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#sigmoid" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#societal-bias" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#softmax" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#sparse-representation" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#splitter" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#stochastic-gradient-descent" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#supervised-learning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#temperature" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#tensor" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#test-set" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#token" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#training" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#training-set" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#transfer-learning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#transformer" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#true-negative" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#true-positive" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#underfitting" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#unsupervised-learning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#validation-set" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#vanishing-gradient" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#variance" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#vibe-coding" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#weight" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#zero-shot-learning" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#l1-regularization" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#l2-regularization" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#lambda" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#multimodal" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#rlhf" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#autoencoder" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#gan" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#diffusion-model" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#positional-encoding" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#tokenization" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#beam-search" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#matrix-factorization" },
        { "@id": "https://www.fatbikehero.com/p/aiglossary#collaborative-filtering" }
      ]
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#ablation",
      "name": "Ablation",
      "termCode": "ablation",
      "description": "A technique for evaluating the importance of a feature, component, or subsystem by temporarily removing it from a model. The model is then retrained without that element. If the retrained model performs significantly worse, the removed component was likely important to the model's performance.",
      "url": "https://www.fatbikehero.com/p/aiglossary#ablation",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#accuracy",
      "name": "Accuracy",
      "termCode": "accuracy",
      "description": "The fraction of correct classification predictions made by a model, calculated as the number of correct predictions divided by the total number of predictions. Accuracy is one of the most common evaluation metrics for classification models, though it can be misleading for imbalanced datasets.",
      "url": "https://www.fatbikehero.com/p/aiglossary#accuracy",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#activation-function",
      "name": "Activation Function",
      "termCode": "activation-function",
      "description": "A mathematical function applied to the output of a neuron in a neural network to introduce non-linearity. Common activation functions include ReLU (Rectified Linear Unit), sigmoid, tanh, and softmax. Without activation functions, a neural network would behave like a linear model regardless of depth.",
      "url": "https://www.fatbikehero.com/p/aiglossary#activation-function",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#adagrad",
      "name": "AdaGrad",
      "termCode": "adagrad",
      "description": "An adaptive gradient descent optimization algorithm that adjusts the learning rate for each parameter individually based on the historical sum of squared gradients. Parameters that receive large gradients have their learning rates reduced, while parameters with small gradients maintain relatively larger learning rates. AdaGrad works well for sparse data.",
      "url": "https://www.fatbikehero.com/p/aiglossary#adagrad",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#adam",
      "name": "Adam (Adaptive Moment Estimation)",
      "termCode": "adam",
      "description": "A popular gradient descent optimization algorithm that combines the ideas of momentum and RMSProp. Adam computes adaptive learning rates for each parameter by maintaining both a first moment estimate (mean of gradients) and a second moment estimate (uncentered variance of gradients). It is widely used for training deep neural networks.",
      "url": "https://www.fatbikehero.com/p/aiglossary#adam",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#agent",
      "name": "Agent",
      "termCode": "agent",
      "description": "In reinforcement learning, the entity that observes states in an environment and takes actions according to a policy in order to maximize expected cumulative reward. In generative AI, an agent is software that can reason about multimodal user inputs to plan and execute multi-step actions autonomously on behalf of the user, often invoking external tools.",
      "url": "https://www.fatbikehero.com/p/aiglossary#agent",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#agentic",
      "name": "Agentic",
      "termCode": "agentic",
      "description": "The adjective form of agent. Agentic describes the qualities that AI agents possess, such as autonomy, goal-directedness, and the ability to plan and execute multi-step tasks. An agentic workflow is a dynamic process in which an agent autonomously plans and executes actions to achieve a goal, including reasoning, invoking external tools, and self-correcting its plan.",
      "url": "https://www.fatbikehero.com/p/aiglossary#agentic",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#attention-mechanism",
      "name": "Attention Mechanism",
      "termCode": "attention-mechanism",
      "description": "A mechanism used in neural networks, particularly in Transformers, that indicates how much relevance each part of the input should receive when producing an output. A typical attention mechanism computes a weighted sum over input elements, where each weight is determined by a learned compatibility function. Attention allows models to focus on the most relevant parts of long input sequences. See also: self-attention, multi-head self-attention.",
      "url": "https://www.fatbikehero.com/p/aiglossary#attention-mechanism",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#attribute",
      "name": "Attribute",
      "termCode": "attribute",
      "description": "Synonym for feature. In machine learning fairness, attributes often refer specifically to characteristics pertaining to individuals, such as age, race, or gender. Sensitive attributes are those whose use in model predictions may raise fairness concerns.",
      "url": "https://www.fatbikehero.com/p/aiglossary#attribute",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#auc",
      "name": "AUC (Area Under the ROC Curve)",
      "termCode": "auc",
      "description": "A classification model evaluation metric representing the area under the Receiver Operating Characteristic (ROC) curve. AUC ranges from 0.0 to 1.0, where 1.0 represents a perfect classifier and 0.5 represents a model no better than random chance. AUC is useful for comparing classifiers and is insensitive to class imbalance.",
      "url": "https://www.fatbikehero.com/p/aiglossary#auc",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#axis-aligned-condition",
      "name": "Axis-Aligned Condition",
      "termCode": "axis-aligned-condition",
      "description": "In a decision tree, a condition that involves a threshold test on a single feature. For example, 'age > 30' is an axis-aligned condition. Contrast with oblique conditions, which involve multiple features simultaneously.",
      "url": "https://www.fatbikehero.com/p/aiglossary#axis-aligned-condition",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#backpropagation",
      "name": "Backpropagation",
      "termCode": "backpropagation",
      "description": "The primary algorithm for training neural networks. Backpropagation calculates the gradient of the loss function with respect to each weight by applying the chain rule from calculus, propagating error signals backward from the output layer through the network to the input layer. The computed gradients are then used by an optimizer such as gradient descent to update the weights.",
      "url": "https://www.fatbikehero.com/p/aiglossary#backpropagation",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#bagging",
      "name": "Bagging (Bootstrap Aggregating)",
      "termCode": "bagging",
      "description": "An ensemble learning method where each constituent model is trained on a random subset of the training data sampled with replacement (a bootstrap sample). The predictions from all models are then aggregated, typically by majority vote for classification or averaging for regression. Random forests use bagging as a core technique. Bagging reduces variance and helps prevent overfitting.",
      "url": "https://www.fatbikehero.com/p/aiglossary#bagging",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#baseline",
      "name": "Baseline",
      "termCode": "baseline",
      "description": "A simple reference model used as a benchmark to gauge the minimum performance a new model should achieve. A good baseline is usually straightforward to implement, such as always predicting the majority class or using a simple heuristic. If a new model cannot outperform the baseline, it is likely not useful.",
      "url": "https://www.fatbikehero.com/p/aiglossary#baseline",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#batch",
      "name": "Batch",
      "termCode": "batch",
      "description": "The set of examples used in one iteration of training. The batch size determines the number of examples processed before the model's weights are updated. A full batch uses the entire training set; a mini-batch typically contains between 10 and 1,000 examples; a batch of size 1 is used in stochastic gradient descent (SGD).",
      "url": "https://www.fatbikehero.com/p/aiglossary#batch",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#batch-normalization",
      "name": "Batch Normalization",
      "termCode": "batch-normalization",
      "description": "A technique used to improve the training stability and speed of deep neural networks by normalizing the inputs to each layer within a mini-batch to have zero mean and unit variance. Batch normalization can reduce the sensitivity to weight initialization and act as a mild form of regularization.",
      "url": "https://www.fatbikehero.com/p/aiglossary#batch-normalization",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#batch-size",
      "name": "Batch Size",
      "termCode": "batch-size",
      "description": "The number of examples in a batch. For instance, if the batch size is 100, then the model processes 100 examples per training iteration before updating weights. Batch size is a hyperparameter that affects training speed, memory usage, and model generalization.",
      "url": "https://www.fatbikehero.com/p/aiglossary#batch-size",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#bias-ethics",
      "name": "Bias (Ethics & Fairness)",
      "termCode": "bias-ethics",
      "description": "In the context of machine learning fairness, bias refers to stereotyping, prejudice, or favoritism toward some things, people, or groups over others. These biases can affect data collection, data labeling, system design, and user interaction. Common forms include automation bias, confirmation bias, reporting bias, selection bias, and societal bias.",
      "url": "https://www.fatbikehero.com/p/aiglossary#bias-ethics",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#bias-model",
      "name": "Bias (Model Parameter)",
      "termCode": "bias-model",
      "description": "An intercept or offset term in a machine learning model, symbolized as b or w0. In a simple linear model, bias represents the y-intercept — the model's output when all feature values are zero. Bias allows the model to fit data that does not pass through the origin. Not to be confused with bias in ethics and fairness, or prediction bias.",
      "url": "https://www.fatbikehero.com/p/aiglossary#bias-model",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#binary-classification",
      "name": "Binary Classification",
      "termCode": "binary-classification",
      "description": "A type of classification task in which the model predicts one of exactly two mutually exclusive classes. Examples include spam detection (spam vs. not spam), disease diagnosis (positive vs. negative), and fraud detection (fraudulent vs. legitimate). Binary classifiers produce a score or probability, and a classification threshold is applied to obtain the final class prediction.",
      "url": "https://www.fatbikehero.com/p/aiglossary#binary-classification",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#boosting",
      "name": "Boosting",
      "termCode": "boosting",
      "description": "An ensemble technique that builds a strong learner by sequentially training weak learners, each one correcting the errors of its predecessor. Each new model focuses on the examples that the previous models got wrong, typically by upweighting misclassified examples. Gradient boosting and AdaBoost are popular boosting algorithms.",
      "url": "https://www.fatbikehero.com/p/aiglossary#boosting",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#calibration",
      "name": "Calibration",
      "termCode": "calibration",
      "description": "The degree to which a model's predicted probabilities match the actual observed frequencies of outcomes. A well-calibrated model that predicts 70% probability for a class should be correct 70% of the time on those examples. Calibration is especially important in applications where the raw probability output is used for decision-making.",
      "url": "https://www.fatbikehero.com/p/aiglossary#calibration",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#categorical-data",
      "name": "Categorical Data",
      "termCode": "categorical-data",
      "description": "Features with a discrete set of possible values that represent categories rather than numeric quantities. Examples include country of origin, species, or color. Categorical data is typically encoded using techniques such as one-hot encoding or embedding before being fed into machine learning models.",
      "url": "https://www.fatbikehero.com/p/aiglossary#categorical-data",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#chain-of-thought",
      "name": "Chain of Thought (CoT)",
      "termCode": "chain-of-thought",
      "description": "A prompting technique that encourages a large language model to produce intermediate reasoning steps before arriving at a final answer. By prompting the model with phrases like 'think step by step,' chain-of-thought prompting can significantly improve performance on complex reasoning, arithmetic, and multi-step tasks.",
      "url": "https://www.fatbikehero.com/p/aiglossary#chain-of-thought",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#classification",
      "name": "Classification",
      "termCode": "classification",
      "description": "A supervised machine learning task in which a model learns to predict which of a set of discrete classes an input example belongs to. Binary classification involves two classes; multi-class classification involves three or more. Examples include image recognition, spam filtering, and sentiment analysis.",
      "url": "https://www.fatbikehero.com/p/aiglossary#classification",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#classification-threshold",
      "name": "Classification Threshold",
      "termCode": "classification-threshold",
      "description": "In binary classification, the value applied to a model's output probability to determine the final class prediction. Predictions above the threshold are assigned the positive class; those below are assigned the negative class. The default threshold is often 0.5, but tuning the threshold allows practitioners to trade off precision against recall.",
      "url": "https://www.fatbikehero.com/p/aiglossary#classification-threshold",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#clustering",
      "name": "Clustering",
      "termCode": "clustering",
      "description": "An unsupervised machine learning technique that groups data points together based on similarity, without using predefined labels. The goal is to find natural groupings in the data. Common clustering algorithms include k-means, hierarchical clustering, and DBSCAN. Clustering is used in customer segmentation, anomaly detection, and data exploration.",
      "url": "https://www.fatbikehero.com/p/aiglossary#clustering",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#cnn",
      "name": "Convolutional Neural Network (CNN)",
      "termCode": "cnn",
      "description": "A type of deep neural network commonly used for processing grid-like data such as images. CNNs use convolutional layers with learned filters to detect spatial features such as edges, textures, and shapes at multiple scales. Pooling layers reduce spatial dimensions. CNNs typically end in fully connected layers for classification or regression.",
      "url": "https://www.fatbikehero.com/p/aiglossary#cnn",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#confirmation-bias",
      "name": "Confirmation Bias",
      "termCode": "confirmation-bias",
      "description": "The tendency to search for, interpret, favor, and recall information in a way that confirms one's pre-existing beliefs or hypotheses. In machine learning, developers may inadvertently collect or label data in ways that support existing beliefs. Experimenter's bias is a form of confirmation bias where a practitioner continues training models until a pre-existing hypothesis is confirmed.",
      "url": "https://www.fatbikehero.com/p/aiglossary#confirmation-bias",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#confusion-matrix",
      "name": "Confusion Matrix",
      "termCode": "confusion-matrix",
      "description": "A table used to evaluate the performance of a classification model by comparing predicted labels to actual labels. For binary classification, the matrix contains four cells: true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Metrics such as precision, recall, and F1 score are derived from the confusion matrix.",
      "url": "https://www.fatbikehero.com/p/aiglossary#confusion-matrix",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#context-window",
      "name": "Context Window",
      "termCode": "context-window",
      "description": "The maximum number of tokens a large language model can consider at one time when generating a response. Text outside the context window is not visible to the model. Larger context windows allow the model to reason over longer documents, conversations, or codebases in a single pass.",
      "url": "https://www.fatbikehero.com/p/aiglossary#context-window",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#convergence",
      "name": "Convergence",
      "termCode": "convergence",
      "description": "A state reached during model training when the loss function stops decreasing significantly with additional training iterations. A model is said to have converged when the parameters have stabilized and further training yields diminishing improvements. Convergence does not necessarily mean the model has found a global optimum.",
      "url": "https://www.fatbikehero.com/p/aiglossary#convergence",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#counterfactual-fairness",
      "name": "Counterfactual Fairness",
      "termCode": "counterfactual-fairness",
      "description": "A fairness metric that checks whether a classification model produces the same result for one individual as it does for another individual who is identical except with respect to one or more sensitive attributes. If changing a sensitive attribute (such as race or gender) would change the model's prediction, the model fails counterfactual fairness.",
      "url": "https://www.fatbikehero.com/p/aiglossary#counterfactual-fairness",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#cross-entropy",
      "name": "Cross-Entropy Loss (Log Loss)",
      "termCode": "cross-entropy",
      "description": "A loss function commonly used for classification models that measures the difference between the true label distribution and the predicted probability distribution. Cross-entropy is minimized when the predicted probabilities closely match the true labels. For binary classification, it is also known as log loss. Lower cross-entropy indicates better model calibration.",
      "url": "https://www.fatbikehero.com/p/aiglossary#cross-entropy",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#cross-validation",
      "name": "Cross-Validation",
      "termCode": "cross-validation",
      "description": "A resampling technique used to evaluate model performance on a limited dataset. In k-fold cross-validation, the data is split into k equal folds; the model trains on k-1 folds and is evaluated on the remaining fold, rotating until every fold has served as the validation set. The results are averaged to produce a robust performance estimate.",
      "url": "https://www.fatbikehero.com/p/aiglossary#cross-validation",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#data-augmentation",
      "name": "Data Augmentation",
      "termCode": "data-augmentation",
      "description": "Techniques that artificially expand a training dataset by creating modified versions of existing examples. In computer vision, common augmentations include random cropping, flipping, rotation, color jitter, and scaling. In NLP, augmentations may include synonym replacement or back-translation. Data augmentation helps models generalize better and reduces overfitting.",
      "url": "https://www.fatbikehero.com/p/aiglossary#data-augmentation",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#decision-boundary",
      "name": "Decision Boundary",
      "termCode": "decision-boundary",
      "description": "The surface or line that a classification model uses to separate different classes in the feature space. In two dimensions, a decision boundary is a line; in higher dimensions, it is a hyperplane or more complex surface. Linear models produce linear decision boundaries, while neural networks and kernel methods can produce nonlinear boundaries.",
      "url": "https://www.fatbikehero.com/p/aiglossary#decision-boundary",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#decision-forest",
      "name": "Decision Forest",
      "termCode": "decision-forest",
      "description": "A machine learning model composed of multiple decision trees whose predictions are aggregated. Random forests and gradient-boosted trees are the two most common types of decision forests. Decision forests are powerful alternatives to neural networks for structured/tabular data and are interpretable relative to deep learning models.",
      "url": "https://www.fatbikehero.com/p/aiglossary#decision-forest",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#decision-tree",
      "name": "Decision Tree",
      "termCode": "decision-tree",
      "description": "A supervised learning model structured as a hierarchical tree of conditions (internal nodes) and predictions (leaf nodes). At each internal node, a condition splits the data based on a feature value. The tree is traversed from root to leaf during inference, with each leaf providing a prediction. Decision trees are interpretable but prone to overfitting when deep.",
      "url": "https://www.fatbikehero.com/p/aiglossary#decision-tree",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#deep-learning",
      "name": "Deep Learning",
      "termCode": "deep-learning",
      "description": "A subfield of machine learning using neural networks with many layers (hence 'deep') to learn hierarchical representations from data. Deep learning has achieved state-of-the-art results in image recognition, natural language processing, speech recognition, and many other domains. The term encompasses architectures such as CNNs, RNNs, and Transformers.",
      "url": "https://www.fatbikehero.com/p/aiglossary#deep-learning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#demographic-parity",
      "name": "Demographic Parity",
      "termCode": "demographic-parity",
      "description": "A fairness metric satisfied when the proportion of positive predictions made by a model is the same across different demographic groups defined by sensitive attributes. For example, if a hiring model satisfies demographic parity, it would recommend hiring equal percentages of applicants from different racial groups, regardless of any differences in qualifications between groups.",
      "url": "https://www.fatbikehero.com/p/aiglossary#demographic-parity",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#dimensionality-reduction",
      "name": "Dimensionality Reduction",
      "termCode": "dimensionality-reduction",
      "description": "Techniques that reduce the number of features in a dataset while retaining as much relevant information as possible. Dimensionality reduction helps combat the curse of dimensionality, reduces computational cost, and can improve model performance. Common techniques include PCA (Principal Component Analysis), t-SNE, UMAP, and autoencoders.",
      "url": "https://www.fatbikehero.com/p/aiglossary#dimensionality-reduction",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#distillation",
      "name": "Distillation (Knowledge Distillation)",
      "termCode": "distillation",
      "description": "A model compression technique in which a smaller 'student' model is trained to mimic the behavior of a larger, more powerful 'teacher' model. The student is trained on the soft probability outputs of the teacher rather than hard labels, which transfers richer information. Distillation produces compact models that retain much of the teacher's performance.",
      "url": "https://www.fatbikehero.com/p/aiglossary#distillation",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#dropout",
      "name": "Dropout",
      "termCode": "dropout",
      "description": "A regularization technique for neural networks in which randomly selected neurons are temporarily 'dropped out' (set to zero) during each training iteration. Dropout prevents co-adaptation of neurons and acts as an ensemble of many different network architectures, reducing overfitting. During inference, all neurons are active but their outputs are scaled appropriately.",
      "url": "https://www.fatbikehero.com/p/aiglossary#dropout",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#early-stopping",
      "name": "Early Stopping",
      "termCode": "early-stopping",
      "description": "A regularization technique where training is halted when model performance on a validation set stops improving, preventing overfitting. The model weights from the best-performing epoch are retained. Early stopping avoids the need to specify the exact number of training epochs in advance.",
      "url": "https://www.fatbikehero.com/p/aiglossary#early-stopping",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#edit-distance",
      "name": "Edit Distance",
      "termCode": "edit-distance",
      "description": "A metric that measures how similar two text strings are by counting the minimum number of operations (insertions, deletions, or substitutions) needed to transform one string into the other. Edit distance is useful for comparing strings known to be similar, such as in spell checking or fuzzy matching. Levenshtein distance is the most common form of edit distance.",
      "url": "https://www.fatbikehero.com/p/aiglossary#edit-distance",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#embedding",
      "name": "Embedding",
      "termCode": "embedding",
      "description": "A dense, low-dimensional numerical representation of high-dimensional or discrete data such as words, sentences, images, or users. Embeddings map similar items to nearby points in a continuous vector space, allowing models to capture semantic relationships. Word embeddings (Word2Vec, GloVe) and token embeddings in Transformers are fundamental to NLP.",
      "url": "https://www.fatbikehero.com/p/aiglossary#embedding",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#ensemble",
      "name": "Ensemble",
      "termCode": "ensemble",
      "description": "A machine learning strategy that combines the predictions of multiple individual models to produce a single final prediction. Ensemble methods generally outperform any single constituent model by reducing variance (bagging), bias (boosting), or both (stacking). Random forests and gradient boosted trees are examples of ensemble methods.",
      "url": "https://www.fatbikehero.com/p/aiglossary#ensemble",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#entropy",
      "name": "Entropy",
      "termCode": "entropy",
      "description": "In information theory, a measure of the unpredictability or disorder of a probability distribution. A distribution with all values equally likely has maximum entropy; a distribution concentrated on a single value has zero entropy. In decision trees, entropy is used to measure the impurity of a set of examples when determining the best feature to split on.",
      "url": "https://www.fatbikehero.com/p/aiglossary#entropy",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#epoch",
      "name": "Epoch",
      "termCode": "epoch",
      "description": "One complete pass through the entire training dataset during model training. In each epoch, the model sees every training example once. Training typically requires multiple epochs. Within each epoch, examples are processed in batches, and model weights are updated after each batch.",
      "url": "https://www.fatbikehero.com/p/aiglossary#epoch",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#equalized-odds",
      "name": "Equalized Odds",
      "termCode": "equalized-odds",
      "description": "A fairness metric requiring that a classifier's true positive rate and false positive rate are equal across all groups defined by a sensitive attribute. Equalized odds permits classification results to depend on sensitive attributes in aggregate but ensures that the rates of correct and incorrect predictions are equal across demographic groups.",
      "url": "https://www.fatbikehero.com/p/aiglossary#equalized-odds",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#evaluation",
      "name": "Evaluation",
      "termCode": "evaluation",
      "description": "The process of measuring a model's quality or comparing different models. For supervised models, evaluation is typically performed on a held-out validation set and test set using metrics relevant to the task (accuracy, AUC, F1, RMSE, etc.). Evaluating large language models often involves broader quality and safety assessments beyond single-metric benchmarks.",
      "url": "https://www.fatbikehero.com/p/aiglossary#evaluation",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#example",
      "name": "Example",
      "termCode": "example",
      "description": "A single instance or data point used in training, validation, or testing. In supervised learning, a labeled example consists of a feature vector (input) paired with a label (output). In unsupervised learning, examples have no labels. The quality and quantity of examples are critical to model performance.",
      "url": "https://www.fatbikehero.com/p/aiglossary#example",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#f1-score",
      "name": "F1 Score",
      "termCode": "f1-score",
      "description": "The harmonic mean of precision and recall, providing a single metric that balances both. F1 ranges from 0 to 1, with 1 being perfect precision and recall. F1 is especially useful when classes are imbalanced and both false positives and false negatives are costly. Formula: F1 = 2 * (Precision * Recall) / (Precision + Recall).",
      "url": "https://www.fatbikehero.com/p/aiglossary#f1-score",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#factuality",
      "name": "Factuality",
      "termCode": "factuality",
      "description": "In the context of large language models, a property describing a model whose output is grounded in real-world facts rather than fabricated or hallucinated information. Factuality is a qualitative concept, not a single metric, and is assessed through benchmarks, human evaluation, and retrieval-augmented approaches.",
      "url": "https://www.fatbikehero.com/p/aiglossary#factuality",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#false-negative",
      "name": "False Negative (FN)",
      "termCode": "false-negative",
      "description": "In binary classification, an outcome where the model incorrectly predicts the negative class when the true label is positive. For example, a medical test that fails to detect a disease that is actually present. False negatives are also called Type II errors. The false negative rate is computed as FN / (TP + FN).",
      "url": "https://www.fatbikehero.com/p/aiglossary#false-negative",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#false-positive",
      "name": "False Positive (FP)",
      "termCode": "false-positive",
      "description": "In binary classification, an outcome where the model incorrectly predicts the positive class when the true label is negative. For example, a spam filter that flags a legitimate email as spam. False positives are also called Type I errors. The false positive rate is computed as FP / (FP + TN).",
      "url": "https://www.fatbikehero.com/p/aiglossary#false-positive",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#feature",
      "name": "Feature",
      "termCode": "feature",
      "description": "An individual measurable input variable used by a machine learning model to make predictions. Features are the columns in a tabular dataset or the channels in an image. Good feature selection and engineering are among the most impactful steps in the machine learning pipeline. Also called an attribute or input variable.",
      "url": "https://www.fatbikehero.com/p/aiglossary#feature",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#feature-cross",
      "name": "Feature Cross",
      "termCode": "feature-cross",
      "description": "A synthetic feature created by multiplying (crossing) two or more existing features to capture interaction effects between them. Feature crosses allow linear models to learn nonlinear relationships. For example, crossing 'day of week' with 'hour of day' creates a feature that captures time-of-week patterns more richly than either feature alone.",
      "url": "https://www.fatbikehero.com/p/aiglossary#feature-cross",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#feature-engineering",
      "name": "Feature Engineering",
      "termCode": "feature-engineering",
      "description": "The process of using domain knowledge to create, transform, or select input features that improve model performance. Feature engineering includes steps such as normalization, encoding categorical variables, creating feature crosses, handling missing values, and binning continuous variables. Effective feature engineering can often make a larger impact than changing the model architecture.",
      "url": "https://www.fatbikehero.com/p/aiglossary#feature-engineering",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#feature-importance",
      "name": "Feature Importance",
      "termCode": "feature-importance",
      "description": "A measure of how much each feature contributes to a model's predictions. Feature importance can be computed in various ways depending on the model: decision trees use information gain or gini impurity reduction; linear models use coefficient magnitudes; model-agnostic approaches such as SHAP or permutation importance work for any model type.",
      "url": "https://www.fatbikehero.com/p/aiglossary#feature-importance",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#fine-tuning",
      "name": "Fine-Tuning",
      "termCode": "fine-tuning",
      "description": "The process of taking a pre-trained model and further training it on a smaller, task-specific dataset to specialize its capabilities. Fine-tuning can involve updating all parameters (full fine-tuning), updating only a subset of parameters, or adding new layers. It is a form of transfer learning that allows large models to be adapted to specific domains or tasks efficiently.",
      "url": "https://www.fatbikehero.com/p/aiglossary#fine-tuning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#gini-impurity",
      "name": "Gini Impurity",
      "termCode": "gini-impurity",
      "description": "A metric used in decision tree algorithms to measure the impurity or disorder of a set of examples. It represents the probability of incorrectly classifying a randomly chosen element if it were randomly labeled according to the class distribution in the set. A Gini impurity of 0 means all examples belong to a single class (perfect purity). Also called gini index.",
      "url": "https://www.fatbikehero.com/p/aiglossary#gini-impurity",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#gradient",
      "name": "Gradient",
      "termCode": "gradient",
      "description": "A vector of partial derivatives that indicates the direction and rate of the greatest increase of a function with respect to its parameters. In machine learning, the gradient of the loss function with respect to model weights tells us how to adjust weights to increase the loss. Gradient descent moves weights in the opposite direction to reduce the loss.",
      "url": "https://www.fatbikehero.com/p/aiglossary#gradient",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#gradient-boosting",
      "name": "Gradient Boosting",
      "termCode": "gradient-boosting",
      "description": "An ensemble technique that builds an additive model sequentially by fitting new trees to the residual errors of the current ensemble. Each tree corrects the mistakes of the previous trees. Gradient boosting uses gradient descent in function space to minimize a loss function. Popular implementations include XGBoost, LightGBM, and CatBoost.",
      "url": "https://www.fatbikehero.com/p/aiglossary#gradient-boosting",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#gradient-clipping",
      "name": "Gradient Clipping",
      "termCode": "gradient-clipping",
      "description": "A technique to prevent exploding gradients during training by capping gradient values above a defined threshold before applying weight updates. Gradient clipping is commonly used when training recurrent neural networks (RNNs) or very deep networks, where gradients can grow exponentially large through many layers.",
      "url": "https://www.fatbikehero.com/p/aiglossary#gradient-clipping",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#gradient-descent",
      "name": "Gradient Descent",
      "termCode": "gradient-descent",
      "description": "An iterative optimization algorithm used to minimize the loss function of a machine learning model by adjusting weights in the direction opposite to the gradient. The step size is controlled by the learning rate. Variants include batch gradient descent (uses all examples), stochastic gradient descent (uses one example), and mini-batch gradient descent.",
      "url": "https://www.fatbikehero.com/p/aiglossary#gradient-descent",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#ground-truth",
      "name": "Ground Truth",
      "termCode": "ground-truth",
      "description": "The actual, correct label or outcome for an example in a supervised learning dataset. Ground truth labels are typically provided by human annotators or automatically derived from known outcomes. The quality of ground truth labels directly affects how well the model can learn the correct relationship between inputs and outputs.",
      "url": "https://www.fatbikehero.com/p/aiglossary#ground-truth",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#hallucination",
      "name": "Hallucination",
      "termCode": "hallucination",
      "description": "A phenomenon in which a large language model generates output that is factually incorrect, fabricated, or nonsensical, even though it may sound plausible and confident. Hallucinations occur because LLMs predict statistically likely token sequences without a grounded understanding of truth. Mitigation strategies include retrieval-augmented generation (RAG) and improved training on factual data.",
      "url": "https://www.fatbikehero.com/p/aiglossary#hallucination",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#hidden-layer",
      "name": "Hidden Layer",
      "termCode": "hidden-layer",
      "description": "Any layer in a neural network between the input layer and the output layer. Hidden layers apply nonlinear transformations to learn increasingly abstract representations of the input data. A neural network with one or more hidden layers is called a deep neural network. The number and size of hidden layers are key hyperparameters.",
      "url": "https://www.fatbikehero.com/p/aiglossary#hidden-layer",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#hyperparameter",
      "name": "Hyperparameter",
      "termCode": "hyperparameter",
      "description": "A configuration setting that controls the training process or model architecture, as opposed to model parameters that are learned from data. Examples include learning rate, batch size, number of layers, regularization strength, and dropout rate. Hyperparameters must be set before training and are often tuned using techniques such as grid search, random search, or Bayesian optimization.",
      "url": "https://www.fatbikehero.com/p/aiglossary#hyperparameter",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#imbalanced-dataset",
      "name": "Imbalanced Dataset",
      "termCode": "imbalanced-dataset",
      "description": "A dataset in which the distribution of class labels is significantly skewed, with one class (the majority class) far more represented than others (minority classes). Imbalanced datasets can cause a model to ignore minority classes. Techniques to handle imbalance include oversampling (SMOTE), undersampling, class weighting, and using metrics like F1, AUC, and precision-recall curves.",
      "url": "https://www.fatbikehero.com/p/aiglossary#imbalanced-dataset",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#information-gain",
      "name": "Information Gain",
      "termCode": "information-gain",
      "description": "A metric derived from entropy used in decision tree algorithms to quantify the reduction in entropy achieved by splitting data on a particular feature. The feature with the highest information gain is selected as the splitting criterion at each node. High information gain means the feature provides strong predictive power for classifying examples.",
      "url": "https://www.fatbikehero.com/p/aiglossary#information-gain",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#label",
      "name": "Label",
      "termCode": "label",
      "description": "The target variable or answer that a supervised machine learning model is trained to predict. In classification, labels are discrete categories (e.g., cat or dog). In regression, labels are continuous values (e.g., house price). Labels are provided in the training dataset and are what the model aims to approximate on new, unseen examples.",
      "url": "https://www.fatbikehero.com/p/aiglossary#label",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#layer",
      "name": "Layer",
      "termCode": "layer",
      "description": "A collection of neurons in a neural network that process inputs and pass outputs to the next layer. The three primary layer types are: input layer (receives raw feature values), hidden layers (learn intermediate representations), and output layer (produces final predictions). Layers can be fully connected, convolutional, recurrent, or attention-based.",
      "url": "https://www.fatbikehero.com/p/aiglossary#layer",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#learning-rate",
      "name": "Learning Rate",
      "termCode": "learning-rate",
      "description": "A hyperparameter that controls how much the model's weights are updated in response to the estimated gradient error on each training iteration. A high learning rate speeds up training but risks overshooting the loss minimum; a low learning rate is more stable but may converge slowly or get stuck in local minima. Learning rate schedules dynamically adjust it during training.",
      "url": "https://www.fatbikehero.com/p/aiglossary#learning-rate",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#linear-regression",
      "name": "Linear Regression",
      "termCode": "linear-regression",
      "description": "A supervised learning model that predicts a continuous numeric target by fitting a linear equation to the training data. The model learns a weight for each feature and a bias term such that the weighted sum of features approximates the target. Ordinary least squares (OLS) minimizes the mean squared error. Linear regression is one of the most interpretable machine learning models.",
      "url": "https://www.fatbikehero.com/p/aiglossary#linear-regression",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#llm",
      "name": "Large Language Model (LLM)",
      "termCode": "llm",
      "description": "A large-scale neural network model, typically a Transformer, trained on massive text corpora to understand and generate human language. LLMs learn statistical patterns across billions or trillions of tokens and can perform a wide range of language tasks including translation, summarization, question answering, and code generation. Examples include GPT-4, Gemini, and Claude.",
      "url": "https://www.fatbikehero.com/p/aiglossary#llm",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#logistic-regression",
      "name": "Logistic Regression",
      "termCode": "logistic-regression",
      "description": "A supervised classification model that estimates the probability of a binary outcome using a linear combination of features passed through a sigmoid function. Despite its name, logistic regression is a classification algorithm. It is widely used as a fast, interpretable baseline for binary classification tasks. The loss function is cross-entropy (log loss).",
      "url": "https://www.fatbikehero.com/p/aiglossary#logistic-regression",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#loss",
      "name": "Loss (Loss Function)",
      "termCode": "loss",
      "description": "A mathematical function that measures the difference between a model's predictions and the true labels on a set of examples. The goal of training is to minimize the loss. Common loss functions include mean squared error (for regression), cross-entropy (for classification), and hinge loss (for SVMs). Also called cost or objective function.",
      "url": "https://www.fatbikehero.com/p/aiglossary#loss",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#lstm",
      "name": "Long Short-Term Memory (LSTM)",
      "termCode": "lstm",
      "description": "A type of recurrent neural network (RNN) architecture designed to capture long-range dependencies in sequential data. LSTMs use gating mechanisms (input gate, forget gate, output gate) to selectively remember or forget information over many time steps, solving the vanishing gradient problem that affects standard RNNs. LSTMs are used for time series, speech, and NLP tasks.",
      "url": "https://www.fatbikehero.com/p/aiglossary#lstm",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#machine-learning",
      "name": "Machine Learning",
      "termCode": "machine-learning",
      "description": "A subfield of artificial intelligence in which computer systems learn to perform tasks by discovering patterns in data, rather than being explicitly programmed with rules. Machine learning encompasses supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Models improve their performance with more data and training.",
      "url": "https://www.fatbikehero.com/p/aiglossary#machine-learning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#mean-squared-error",
      "name": "Mean Squared Error (MSE)",
      "termCode": "mean-squared-error",
      "description": "A common loss function for regression models that computes the average of the squared differences between predicted values and true values. Squaring the differences penalizes large errors more heavily than small ones. The square root of MSE is RMSE (Root Mean Squared Error), which is expressed in the same units as the target variable.",
      "url": "https://www.fatbikehero.com/p/aiglossary#mean-squared-error",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#mini-batch",
      "name": "Mini-Batch",
      "termCode": "mini-batch",
      "description": "A subset of the training dataset used in one iteration of gradient descent. Mini-batch sizes typically range from 10 to 1,000 examples. Mini-batch gradient descent balances the stable gradient estimates of full-batch gradient descent with the speed and regularizing noise of stochastic gradient descent, and is the most commonly used training strategy in modern deep learning.",
      "url": "https://www.fatbikehero.com/p/aiglossary#mini-batch",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#model",
      "name": "Model",
      "termCode": "model",
      "description": "A mathematical representation of a real-world process, learned from data. In machine learning, a model is the output of training an algorithm on a dataset. It consists of a structure (architecture) and learned parameters (weights and biases) that map input features to output predictions. A trained model can be deployed to make predictions on new data.",
      "url": "https://www.fatbikehero.com/p/aiglossary#model",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#model-capacity",
      "name": "Model Capacity",
      "termCode": "model-capacity",
      "description": "A measure of the complexity of functions a model can learn. Higher capacity models can represent more complex relationships but are also more prone to overfitting. A model's capacity generally increases with the number of parameters, layers, or features. VC dimension provides a formal measure of classification model capacity.",
      "url": "https://www.fatbikehero.com/p/aiglossary#model-capacity",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#momentum",
      "name": "Momentum",
      "termCode": "momentum",
      "description": "A gradient descent optimization technique that accumulates an exponentially weighted moving average of past gradients to determine the direction of the next weight update. Momentum dampens oscillations and helps the optimizer move more steadily toward the minimum, especially in ravine-like loss surfaces. Momentum can help escape local minima and saddle points.",
      "url": "https://www.fatbikehero.com/p/aiglossary#momentum",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#multi-class-classification",
      "name": "Multi-Class Classification",
      "termCode": "multi-class-classification",
      "description": "A classification task where each example must be assigned to one of three or more discrete classes. Multi-class classifiers typically use softmax activation in the output layer to produce a probability distribution over all classes. Examples include digit recognition (10 classes) and animal classification.",
      "url": "https://www.fatbikehero.com/p/aiglossary#multi-class-classification",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#multi-head-attention",
      "name": "Multi-Head Self-Attention",
      "termCode": "multi-head-attention",
      "description": "An extension of self-attention used in Transformer models that runs multiple attention operations in parallel, each with different learned weight matrices. Each 'head' can focus on different aspects of the input (e.g., syntactic vs. semantic relationships). The outputs are concatenated and projected to form the final representation. Multi-head attention is a core building block of all Transformer architectures.",
      "url": "https://www.fatbikehero.com/p/aiglossary#multi-head-attention",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#neural-network",
      "name": "Neural Network",
      "termCode": "neural-network",
      "description": "A machine learning model loosely inspired by the structure of biological brains, consisting of layers of interconnected nodes (neurons). Each neuron computes a weighted sum of its inputs and passes the result through an activation function. Given enough neurons, a neural network with nonlinear activations can approximate any continuous function (the universal approximation theorem). Neural networks are the foundation of deep learning.",
      "url": "https://www.fatbikehero.com/p/aiglossary#neural-network",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#normalization",
      "name": "Normalization",
      "termCode": "normalization",
      "description": "The process of rescaling feature values to a standard range or distribution to improve training stability and convergence speed. Common normalization techniques include min-max scaling (to [0,1]), z-score standardization (zero mean, unit variance), and batch normalization. Normalization prevents features with large numeric ranges from dominating the training process.",
      "url": "https://www.fatbikehero.com/p/aiglossary#normalization",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#oblique-condition",
      "name": "Oblique Condition",
      "termCode": "oblique-condition",
      "description": "In a decision tree, a condition that tests a linear combination of two or more features, rather than a single feature threshold. For example, 'height + 0.5 * width > 100' is an oblique condition. Oblique conditions allow decision trees to create diagonal decision boundaries. Contrast with axis-aligned conditions.",
      "url": "https://www.fatbikehero.com/p/aiglossary#oblique-condition",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#one-hot-encoding",
      "name": "One-Hot Encoding",
      "termCode": "one-hot-encoding",
      "description": "A method of representing categorical variables as binary vectors. For a categorical feature with N possible values, one-hot encoding creates N binary features, exactly one of which is 1 (hot) for each example. For example, a color feature with values {red, green, blue} would become three binary features. One-hot encoding avoids imposing an ordinal relationship between categories.",
      "url": "https://www.fatbikehero.com/p/aiglossary#one-hot-encoding",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#out-group-homogeneity-bias",
      "name": "Out-Group Homogeneity Bias",
      "termCode": "out-group-homogeneity-bias",
      "description": "The tendency to perceive members of groups one does not belong to (out-groups) as more similar to each other than members of one's own group (in-group). In machine learning, this bias can appear in datasets where attributes describing out-group members are less nuanced or more stereotyped than attributes describing in-group members.",
      "url": "https://www.fatbikehero.com/p/aiglossary#out-group-homogeneity-bias",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#outlier",
      "name": "Outlier",
      "termCode": "outlier",
      "description": "A data point that differs significantly from the majority of other points in a dataset. Outliers can result from measurement error, data corruption, or genuine rare events. They can disproportionately influence model training, particularly for linear models and those using mean-squared error loss. Outliers may be removed, clipped, or modeled explicitly depending on the application.",
      "url": "https://www.fatbikehero.com/p/aiglossary#outlier",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#overfitting",
      "name": "Overfitting",
      "termCode": "overfitting",
      "description": "A situation where a model learns the noise and specific patterns of the training data too well, resulting in poor generalization to new, unseen data. An overfit model has low training loss but high validation/test loss. Overfitting is more likely with complex models trained on small datasets. Regularization, dropout, early stopping, and more data help prevent it.",
      "url": "https://www.fatbikehero.com/p/aiglossary#overfitting",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#parameter",
      "name": "Parameter",
      "termCode": "parameter",
      "description": "A value learned by a model during training. Parameters include weights (which determine the strength of connections between neurons) and biases (offset terms). The number of parameters in a model is a primary measure of its capacity. Modern large language models can have billions or trillions of parameters. Parameters are distinct from hyperparameters, which are set before training rather than learned from data.",
      "url": "https://www.fatbikehero.com/p/aiglossary#parameter",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#pca",
      "name": "Principal Component Analysis (PCA)",
      "termCode": "pca",
      "description": "A linear dimensionality reduction technique that projects data onto the directions (principal components) of maximum variance. PCA transforms correlated features into a smaller set of uncorrelated components ordered by how much variance each explains. It is used for data visualization, noise reduction, and as a preprocessing step before training.",
      "url": "https://www.fatbikehero.com/p/aiglossary#pca",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#policy",
      "name": "Policy",
      "termCode": "policy",
      "description": "In reinforcement learning, a function that maps states of the environment to actions to take. A policy defines the agent's behavior. Policies can be deterministic (always taking the same action in a given state) or stochastic (sampling actions from a probability distribution). The goal of reinforcement learning is to find the optimal policy that maximizes expected cumulative reward.",
      "url": "https://www.fatbikehero.com/p/aiglossary#policy",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#pooling",
      "name": "Pooling",
      "termCode": "pooling",
      "description": "An operation in convolutional neural networks that reduces the spatial dimensions of feature maps by aggregating values in a local region. Max pooling takes the maximum value; average pooling takes the mean. Pooling introduces local translation invariance and reduces the number of parameters in subsequent layers.",
      "url": "https://www.fatbikehero.com/p/aiglossary#pooling",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#precision",
      "name": "Precision",
      "termCode": "precision",
      "description": "A classification metric measuring the proportion of positive predictions that are actually correct. Formula: Precision = TP / (TP + FP). High precision means the model rarely incorrectly labels a negative example as positive. Precision is important when the cost of false positives is high, such as in spam filtering, where flagging a legitimate email as spam is costly.",
      "url": "https://www.fatbikehero.com/p/aiglossary#precision",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#prediction-bias",
      "name": "Prediction Bias",
      "termCode": "prediction-bias",
      "description": "A value indicating how different the average model prediction is from the average of the true labels in the dataset. A model with zero prediction bias produces predictions whose mean equals the mean of the labels. Significant prediction bias indicates that something is systematically wrong with the model's calibration.",
      "url": "https://www.fatbikehero.com/p/aiglossary#prediction-bias",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#predictive-parity",
      "name": "Predictive Parity",
      "termCode": "predictive-parity",
      "description": "A fairness metric that checks whether a model's precision rates are equivalent across subgroups defined by sensitive attributes. For example, a college admissions model satisfies predictive parity if its precision (rate of correctly predicted acceptances) is the same for all demographic groups. Also called predictive rate parity.",
      "url": "https://www.fatbikehero.com/p/aiglossary#predictive-parity",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#pre-trained-model",
      "name": "Pre-Trained Model",
      "termCode": "pre-trained-model",
      "description": "A model that has already been trained on a large dataset (often with massive compute resources) before being adapted to a specific task. Pre-trained models capture general representations that can be efficiently fine-tuned for downstream tasks. In NLP, pre-trained Transformer models like BERT and GPT have transformed the field by enabling strong performance with relatively little task-specific data.",
      "url": "https://www.fatbikehero.com/p/aiglossary#pre-trained-model",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#prompt",
      "name": "Prompt",
      "termCode": "prompt",
      "description": "The input text (and sometimes images or other data) provided to a generative AI model to elicit a response. Prompts can include instructions, context, examples (few-shot), or constraints. Prompt engineering — the craft of designing effective prompts — can significantly influence model output quality without any retraining.",
      "url": "https://www.fatbikehero.com/p/aiglossary#prompt",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#rag",
      "name": "Retrieval-Augmented Generation (RAG)",
      "termCode": "rag",
      "description": "An architecture that combines a generative model with an external retrieval system, such as a vector database. Before generating a response, the system retrieves relevant documents or passages based on the user's query and provides them as additional context to the model. RAG improves factuality and allows models to access up-to-date information beyond their training data.",
      "url": "https://www.fatbikehero.com/p/aiglossary#rag",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#random-forest",
      "name": "Random Forest",
      "termCode": "random-forest",
      "description": "An ensemble model consisting of many decision trees trained with bagging (bootstrap aggregating) and random feature selection at each split. The final prediction is produced by majority vote (classification) or averaging (regression) across all trees. Random forests are robust, accurate, relatively insensitive to hyperparameters, and handle high-dimensional data well.",
      "url": "https://www.fatbikehero.com/p/aiglossary#random-forest",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#recall",
      "name": "Recall (Sensitivity)",
      "termCode": "recall",
      "description": "A classification metric measuring the proportion of actual positives that the model correctly identifies. Formula: Recall = TP / (TP + FN). High recall means the model rarely misses a true positive. Recall is critical when the cost of false negatives is high, such as in disease screening or fraud detection. Also called sensitivity or true positive rate.",
      "url": "https://www.fatbikehero.com/p/aiglossary#recall",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#regularization",
      "name": "Regularization",
      "termCode": "regularization",
      "description": "A set of techniques that constrain model complexity to prevent overfitting and improve generalization. Common regularization methods include L1 regularization (lasso, which promotes sparsity by penalizing the absolute value of weights), L2 regularization (ridge, which penalizes the square of weights), dropout, and early stopping. The strength of regularization is controlled by the regularization rate (lambda).",
      "url": "https://www.fatbikehero.com/p/aiglossary#regularization",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#reinforcement-learning",
      "name": "Reinforcement Learning (RL)",
      "termCode": "reinforcement-learning",
      "description": "A machine learning paradigm in which an agent learns to make decisions by interacting with an environment. The agent receives a reward signal after each action and learns a policy that maximizes cumulative expected reward over time. RL has been used to train game-playing agents (AlphaGo, Atari games), robotics controllers, and, more recently, to align large language models via RLHF.",
      "url": "https://www.fatbikehero.com/p/aiglossary#reinforcement-learning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#relu",
      "name": "ReLU (Rectified Linear Unit)",
      "termCode": "relu",
      "description": "The most widely used activation function in deep neural networks. ReLU returns the input value if it is positive, and zero otherwise: f(x) = max(0, x). ReLU is computationally efficient, avoids the vanishing gradient problem for positive activations, and empirically works well in deep networks. Variants include Leaky ReLU, ELU, and GELU.",
      "url": "https://www.fatbikehero.com/p/aiglossary#relu",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#rnn",
      "name": "Recurrent Neural Network (RNN)",
      "termCode": "rnn",
      "description": "A type of neural network designed to process sequential data by maintaining a hidden state that captures information from previous time steps. At each step, the RNN takes the current input and the previous hidden state to produce a new hidden state and output. RNNs are suited for time series, speech, and NLP tasks. LSTMs and GRUs are improved RNN variants that address the vanishing gradient problem.",
      "url": "https://www.fatbikehero.com/p/aiglossary#rnn",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#roc-curve",
      "name": "ROC Curve (Receiver Operating Characteristic)",
      "termCode": "roc-curve",
      "description": "A graphical plot showing the performance of a binary classification model at all classification thresholds by plotting the true positive rate (recall) against the false positive rate. A model with no skill lies along the diagonal; a perfect model reaches the top-left corner. The area under the ROC curve (AUC) summarizes overall classifier performance.",
      "url": "https://www.fatbikehero.com/p/aiglossary#roc-curve",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#self-attention",
      "name": "Self-Attention",
      "termCode": "self-attention",
      "description": "An attention mechanism in which each position in a sequence attends to all other positions in the same sequence to compute a representation. Self-attention allows the model to capture dependencies between tokens regardless of their distance in the sequence. It is the core operation in Transformer models and replaces the sequential processing of RNNs with fully parallelizable computation.",
      "url": "https://www.fatbikehero.com/p/aiglossary#self-attention",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#semi-supervised-learning",
      "name": "Semi-Supervised Learning",
      "termCode": "semi-supervised-learning",
      "description": "A learning paradigm that uses a small amount of labeled data combined with a large amount of unlabeled data during training. Semi-supervised methods exploit the structure of unlabeled data to improve model performance beyond what could be achieved with labeled data alone. This is useful when labeling data is expensive or time-consuming.",
      "url": "https://www.fatbikehero.com/p/aiglossary#semi-supervised-learning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#shrinkage",
      "name": "Shrinkage",
      "termCode": "shrinkage",
      "description": "A hyperparameter in gradient boosting that controls the contribution of each new tree added to the ensemble, analogous to the learning rate in gradient descent. A lower shrinkage value reduces each tree's contribution and is more conservative, which helps prevent overfitting but requires more trees. Shrinkage is a decimal value between 0.0 and 1.0.",
      "url": "https://www.fatbikehero.com/p/aiglossary#shrinkage",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#sigmoid",
      "name": "Sigmoid Function",
      "termCode": "sigmoid",
      "description": "A mathematical function with an S-shaped curve that maps any real-valued input to a value between 0 and 1. Formula: σ(x) = 1 / (1 + e^(−x)). The sigmoid function is used as an activation function in binary classification (the output layer of logistic regression) and in the gates of LSTM cells. It can suffer from vanishing gradients for very large or very small inputs.",
      "url": "https://www.fatbikehero.com/p/aiglossary#sigmoid",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#societal-bias",
      "name": "Societal Bias",
      "termCode": "societal-bias",
      "description": "A type of bias that already exists in the world and has been absorbed into a training dataset. Societal biases tend to reflect existing cultural stereotypes, demographic inequalities, and historical prejudices. Models trained on such data may perpetuate or amplify these biases in their predictions unless explicit debiasing steps are taken.",
      "url": "https://www.fatbikehero.com/p/aiglossary#societal-bias",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#softmax",
      "name": "Softmax Function",
      "termCode": "softmax",
      "description": "An activation function that converts a vector of raw scores (logits) into a probability distribution over multiple classes. Each output value is in (0, 1) and all values sum to 1.0. Softmax is used as the final activation function in multi-class classification output layers. It amplifies differences between class scores, making the highest score even more dominant.",
      "url": "https://www.fatbikehero.com/p/aiglossary#softmax",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#sparse-representation",
      "name": "Sparse Representation",
      "termCode": "sparse-representation",
      "description": "A vector or matrix in which most values are zero, with only a small number of nonzero entries. Sparse representations are common in NLP (e.g., bag-of-words vectors) and recommendation systems. Handling sparse data efficiently requires specialized storage formats and algorithms. Dense embeddings are often used to create compact representations of sparse inputs.",
      "url": "https://www.fatbikehero.com/p/aiglossary#sparse-representation",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#splitter",
      "name": "Splitter",
      "termCode": "splitter",
      "description": "In decision tree training, the algorithm responsible for finding the best condition at each internal node. The splitter evaluates candidate split conditions using metrics such as information gain (derived from entropy) or Gini impurity reduction, and selects the condition that produces the most homogeneous child nodes.",
      "url": "https://www.fatbikehero.com/p/aiglossary#splitter",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#stochastic-gradient-descent",
      "name": "Stochastic Gradient Descent (SGD)",
      "termCode": "stochastic-gradient-descent",
      "description": "A variant of gradient descent in which the gradient is estimated from a single randomly selected training example (batch size of 1) per iteration. SGD is noisy but computationally cheap per step, and its noise can help the optimizer escape local minima. In practice, mini-batch gradient descent is most commonly used: it balances the gradient stability of full-batch updates with the per-step efficiency and regularizing noise of SGD.",
      "url": "https://www.fatbikehero.com/p/aiglossary#stochastic-gradient-descent",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#supervised-learning",
      "name": "Supervised Learning",
      "termCode": "supervised-learning",
      "description": "A machine learning paradigm in which a model is trained on labeled examples, each consisting of input features and a corresponding target label. The model learns to map inputs to outputs by minimizing the error between its predictions and the true labels. Classification and regression are the two main types of supervised learning tasks.",
      "url": "https://www.fatbikehero.com/p/aiglossary#supervised-learning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#temperature",
      "name": "Temperature",
      "termCode": "temperature",
      "description": "A hyperparameter in language models and other generative models that controls the randomness of predictions. A temperature of 0 makes the model deterministic (always choosing the most likely token); temperatures above 1 increase diversity and creativity but may reduce coherence. Temperature scales the logits before the softmax function during sampling.",
      "url": "https://www.fatbikehero.com/p/aiglossary#temperature",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#tensor",
      "name": "Tensor",
      "termCode": "tensor",
      "description": "A generalization of scalars, vectors, and matrices to arbitrary numbers of dimensions. A scalar is a rank-0 tensor, a vector is a rank-1 tensor, and a matrix is a rank-2 tensor. Tensors are the fundamental data structure in deep learning frameworks such as TensorFlow and PyTorch, representing inputs, outputs, weights, and activations.",
      "url": "https://www.fatbikehero.com/p/aiglossary#tensor",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#test-set",
      "name": "Test Set",
      "termCode": "test-set",
      "description": "A held-out subset of the dataset used only for the final evaluation of a trained model. The test set is never used during training or hyperparameter tuning to ensure an unbiased estimate of the model's performance on new, unseen data. Evaluating a model on the test set multiple times can lead to overfitting to the test set.",
      "url": "https://www.fatbikehero.com/p/aiglossary#test-set",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#token",
      "name": "Token",
      "termCode": "token",
      "description": "The basic unit of text processed by a language model. Tokens may correspond to whole words, subwords, characters, or punctuation depending on the tokenization scheme. Tokenizers such as BPE (Byte Pair Encoding) split text into tokens before feeding it into a model. A typical token is roughly 4 characters or 0.75 words in English.",
      "url": "https://www.fatbikehero.com/p/aiglossary#token",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#training",
      "name": "Training",
      "termCode": "training",
      "description": "The process of adjusting a model's parameters by repeatedly presenting training examples, computing the loss, and applying an optimization algorithm (such as gradient descent with backpropagation) to reduce the loss. Training continues for multiple epochs or until convergence. The trained model's parameters encode the patterns learned from the training data.",
      "url": "https://www.fatbikehero.com/p/aiglossary#training",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#training-set",
      "name": "Training Set",
      "termCode": "training-set",
      "description": "The subset of the full dataset used to train a machine learning model. The model's parameters are updated based on training set examples. A larger, more representative training set generally leads to better model performance. The training set should be kept separate from the validation set and test set to avoid data leakage.",
      "url": "https://www.fatbikehero.com/p/aiglossary#training-set",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#transfer-learning",
      "name": "Transfer Learning",
      "termCode": "transfer-learning",
      "description": "A machine learning technique where a model trained on one task or domain is adapted for a different but related task or domain. Transfer learning is especially powerful in deep learning, where pre-trained models capture general representations that can be fine-tuned efficiently. It reduces the need for large labeled datasets and extensive compute for new tasks.",
      "url": "https://www.fatbikehero.com/p/aiglossary#transfer-learning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#transformer",
      "name": "Transformer",
      "termCode": "transformer",
      "description": "A neural network architecture introduced in the 2017 paper 'Attention Is All You Need' that relies entirely on self-attention mechanisms rather than recurrence or convolution. Transformers process all tokens in parallel, enabling efficient training on large datasets. They are the foundation of modern large language models (GPT, BERT, Claude, Gemini) and have also been applied to vision, audio, and other modalities.",
      "url": "https://www.fatbikehero.com/p/aiglossary#transformer",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#true-negative",
      "name": "True Negative (TN)",
      "termCode": "true-negative",
      "description": "In binary classification, an outcome where the model correctly predicts the negative class for an example whose true label is negative. For example, a spam filter correctly identifying a legitimate email as not spam. The true negative rate (specificity) is TN / (TN + FP).",
      "url": "https://www.fatbikehero.com/p/aiglossary#true-negative",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#true-positive",
      "name": "True Positive (TP)",
      "termCode": "true-positive",
      "description": "In binary classification, an outcome where the model correctly predicts the positive class for an example whose true label is positive. For example, a disease detection model correctly identifying a patient who has the disease. The true positive rate (recall/sensitivity) is TP / (TP + FN).",
      "url": "https://www.fatbikehero.com/p/aiglossary#true-positive",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#underfitting",
      "name": "Underfitting",
      "termCode": "underfitting",
      "description": "A condition where a model is too simple to capture the underlying patterns in the training data, resulting in high training loss and high validation/test loss. An underfit model has high bias. Common causes include insufficient model capacity, insufficient training time, or excessive regularization. Solutions include increasing model complexity, reducing regularization, or training longer.",
      "url": "https://www.fatbikehero.com/p/aiglossary#underfitting",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#unsupervised-learning",
      "name": "Unsupervised Learning",
      "termCode": "unsupervised-learning",
      "description": "A machine learning paradigm in which models learn patterns, structure, or representations from unlabeled data without any target labels. Unsupervised learning tasks include clustering (grouping similar examples), dimensionality reduction (compressing representations), and density estimation. Examples include k-means clustering, PCA, and autoencoders.",
      "url": "https://www.fatbikehero.com/p/aiglossary#unsupervised-learning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#validation-set",
      "name": "Validation Set",
      "termCode": "validation-set",
      "description": "A subset of data separate from both the training set and test set, used to evaluate model performance during training and to tune hyperparameters. Validation set performance guides decisions such as early stopping, learning rate adjustment, and model architecture selection. Unlike the test set, the validation set may be evaluated multiple times during development.",
      "url": "https://www.fatbikehero.com/p/aiglossary#validation-set",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#vanishing-gradient",
      "name": "Vanishing Gradient Problem",
      "termCode": "vanishing-gradient",
      "description": "A problem that occurs during backpropagation in deep networks where gradients become extremely small as they are propagated backward through many layers, causing the weights in early layers to update very slowly or not at all. This stalls learning in deep architectures. Solutions include ReLU activations, batch normalization, residual connections (skip connections), and LSTM architectures.",
      "url": "https://www.fatbikehero.com/p/aiglossary#vanishing-gradient",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#variance",
      "name": "Variance (Bias-Variance Tradeoff)",
      "termCode": "variance",
      "description": "In the context of model evaluation, variance refers to how much a model's predictions change in response to small fluctuations in the training data. A high-variance model (overfit) changes substantially with different training sets. The bias-variance tradeoff describes the tension between model complexity (which reduces bias but increases variance) and simplicity (which reduces variance but increases bias).",
      "url": "https://www.fatbikehero.com/p/aiglossary#variance",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#vibe-coding",
      "name": "Vibe Coding",
      "termCode": "vibe-coding",
      "description": "A term coined by Andrej Karpathy referring to a style of AI-assisted software development where a developer (or non-developer) describes software intent to a generative AI model in natural language and the model produces source code. Originally implying a loose, iterative approach without necessarily examining all generated code, the term has evolved to broadly describe any AI-generated coding workflow.",
      "url": "https://www.fatbikehero.com/p/aiglossary#vibe-coding",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#weight",
      "name": "Weight",
      "termCode": "weight",
      "description": "A learnable parameter in a machine learning model that scales the importance of an input feature or connection between neurons. Weights are adjusted during training via gradient descent to minimize the loss function. In a linear model, weights directly correspond to feature coefficients. In neural networks, weights form the connections between neurons in adjacent layers.",
      "url": "https://www.fatbikehero.com/p/aiglossary#weight",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#zero-shot-learning",
      "name": "Zero-Shot Learning",
      "termCode": "zero-shot-learning",
      "description": "The ability of a model to correctly perform a task or recognize a class that it was never explicitly trained on, by leveraging knowledge learned from related tasks or semantic descriptions. In large language models, zero-shot prompting means asking the model to perform a task with no examples in the prompt. Few-shot learning provides a small number of examples.",
      "url": "https://www.fatbikehero.com/p/aiglossary#zero-shot-learning",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#l1-regularization",
      "name": "L1 Regularization (Lasso)",
      "termCode": "l1-regularization",
      "description": "A regularization technique that adds a penalty term to the loss function equal to the sum of the absolute values of all model weights multiplied by a regularization rate (lambda). L1 regularization encourages sparsity by driving many weights to exactly zero, effectively performing feature selection. Lasso regression applies L1 regularization to linear models.",
      "url": "https://www.fatbikehero.com/p/aiglossary#l1-regularization",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#l2-regularization",
      "name": "L2 Regularization (Ridge)",
      "termCode": "l2-regularization",
      "description": "A regularization technique that adds a penalty term to the loss function equal to the sum of the squared values of all model weights multiplied by a regularization rate (lambda). L2 regularization penalizes large weights and encourages them to be small but rarely exactly zero. Ridge regression applies L2 regularization to linear models. Also called weight decay.",
      "url": "https://www.fatbikehero.com/p/aiglossary#l2-regularization",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#lambda",
      "name": "Lambda (Regularization Rate)",
      "termCode": "lambda",
      "description": "A hyperparameter that controls the strength of regularization applied to a model. A higher lambda value imposes stronger regularization, shrinking weights more aggressively and reducing overfitting but potentially increasing underfitting. A lambda of zero disables regularization entirely. Also called regularization rate or weight decay coefficient.",
      "url": "https://www.fatbikehero.com/p/aiglossary#lambda",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },

    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#multimodal",
      "name": "Multimodal Model",
      "termCode": "multimodal",
      "description": "A machine learning model capable of processing and reasoning over more than one modality of data, such as text and images together, or text, audio, and video. Multimodal models learn joint representations that capture relationships across modalities. Examples include vision-language models that can describe images, answer visual questions, or generate images from text prompts.",
      "url": "https://www.fatbikehero.com/p/aiglossary#multimodal",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#rlhf",
      "name": "Reinforcement Learning from Human Feedback (RLHF)",
      "termCode": "rlhf",
      "description": "A training technique used to align large language models with human preferences. RLHF involves three steps: supervised fine-tuning on demonstrations, training a reward model on human preference comparisons, and optimizing the language model against the reward model using a reinforcement learning algorithm such as PPO. RLHF is used to make models more helpful, honest, and safe.",
      "url": "https://www.fatbikehero.com/p/aiglossary#rlhf",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#autoencoder",
      "name": "Autoencoder",
      "termCode": "autoencoder",
      "description": "A neural network trained to reproduce its input at its output by first compressing the input into a lower-dimensional latent representation (encoder) and then reconstructing the original input from that representation (decoder). Autoencoders are used for dimensionality reduction, anomaly detection, denoising, and generative modeling. Variational autoencoders (VAEs) learn probabilistic latent representations.",
      "url": "https://www.fatbikehero.com/p/aiglossary#autoencoder",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#gan",
      "name": "Generative Adversarial Network (GAN)",
      "termCode": "gan",
      "description": "A generative modeling framework consisting of two neural networks trained simultaneously in opposition: a generator that produces synthetic data to fool the discriminator, and a discriminator that tries to distinguish real data from generated data. Through this adversarial process, the generator learns to produce increasingly realistic outputs. GANs are used for image synthesis, data augmentation, and style transfer.",
      "url": "https://www.fatbikehero.com/p/aiglossary#gan",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#diffusion-model",
      "name": "Diffusion Model",
      "termCode": "diffusion-model",
      "description": "A generative model that learns to create data by gradually denoising a sample drawn from a Gaussian noise distribution. During training, the model learns to reverse a diffusion process that progressively adds noise to real data. During inference, the model starts from pure noise and iteratively removes noise to generate high-quality samples. Diffusion models underpin tools such as Stable Diffusion and DALL·E 2 (the original DALL·E was autoregressive).",
      "url": "https://www.fatbikehero.com/p/aiglossary#diffusion-model",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#positional-encoding",
      "name": "Positional Encoding",
      "termCode": "positional-encoding",
      "description": "A technique used in Transformer models to inject information about the position of each token in a sequence, since self-attention is permutation-invariant and does not inherently capture order. Positional encodings are added to token embeddings before they enter the Transformer. Fixed sinusoidal encodings or learned positional embeddings are common approaches.",
      "url": "https://www.fatbikehero.com/p/aiglossary#positional-encoding",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#tokenization",
      "name": "Tokenization",
      "termCode": "tokenization",
      "description": "The process of converting raw text into a sequence of tokens that can be processed by a language model. Common tokenization strategies include word-level, character-level, and subword tokenization (e.g., Byte Pair Encoding, WordPiece). Subword tokenization balances vocabulary size with the ability to represent rare or unknown words.",
      "url": "https://www.fatbikehero.com/p/aiglossary#tokenization",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#beam-search",
      "name": "Beam Search",
      "termCode": "beam-search",
      "description": "A heuristic search algorithm used in sequence generation models such as language models and machine translation systems. Instead of greedily selecting the single most likely token at each step, beam search maintains the top-k (beam width) most probable partial sequences at each step. This typically produces higher-quality outputs than greedy decoding at the cost of additional computation.",
      "url": "https://www.fatbikehero.com/p/aiglossary#beam-search",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#matrix-factorization",
      "name": "Matrix Factorization",
      "termCode": "matrix-factorization",
      "description": "A technique used in recommendation systems that decomposes a user-item interaction matrix into the product of two lower-dimensional matrices representing latent features of users and items. The dot product of a user's latent vector and an item's latent vector predicts the user's rating or preference for that item. Alternating least squares (ALS) and SVD are common matrix factorization methods.",
      "url": "https://www.fatbikehero.com/p/aiglossary#matrix-factorization",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    },
    {
      "@type": "DefinedTerm",
      "@id": "https://www.fatbikehero.com/p/aiglossary#collaborative-filtering",
      "name": "Collaborative Filtering",
      "termCode": "collaborative-filtering",
      "description": "A recommendation system technique that predicts a user's preferences based on the preferences of similar users or items. User-based collaborative filtering finds similar users; item-based collaborative filtering finds similar items. Collaborative filtering can surface unexpected recommendations but requires sufficient interaction data and struggles with cold-start problems for new users or items.",
      "url": "https://www.fatbikehero.com/p/aiglossary#collaborative-filtering",
      "inDefinedTermSet": { "@id": "https://www.fatbikehero.com/p/aiglossary#definedtermset" }
    }
  ]
}