Objective functions define what a machine learning model is trying to optimise. They quantify the difference between predicted outcomes and actual outcomes, and guide the algorithm in adjusting the model’s parameters to minimise (or maximise) that difference.
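For instance, Mean Squared Error, a common objective for regression, can be written in a few lines of plain Python (an illustrative sketch, not a library implementation):

```python
def mean_squared_error(y_true, y_pred):
    """Average squared difference between actual and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# A lower objective value means the predictions are closer to the actual outcomes.
error = mean_squared_error([3.0, 5.0, 7.0], [2.5, 5.0, 8.0])  # ≈ 0.417
```

During training, the algorithm adjusts the model's parameters so that this value shrinks.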

Key aspects include:

  • Purpose: In supervised learning, objective functions measure the error between predicted and true values. In unsupervised learning, they might measure the compactness of clusters or the reconstruction error in dimensionality reduction.
  • Types of tasks:
    • Regression: Mean Squared Error (MSE) and Mean Absolute Error (MAE) measure the difference between predicted and actual continuous values.
    • Classification: Cross-Entropy Loss compares predicted class probabilities with true labels, while Hinge Loss penalises predictions that fall on the wrong side of the decision margin.
    • Ranking: Pairwise ranking loss and Normalised Discounted Cumulative Gain (NDCG) assess the order of predicted items relative to their true order.
    • Clustering: Sum of Squared Errors (SSE) measures the compactness of clusters, while the Silhouette Score also accounts for how well-separated the clusters are.
    • Dimensionality reduction: Reconstruction error (for example, in Principal Component Analysis) captures how much information is lost when data is projected into fewer dimensions.
    • Reinforcement learning: Cumulative reward guides the learning of policies that maximise long-term returns.
  • Optimisation: Training a model means optimising the objective function, typically through iterative algorithms like gradient descent. See Machine learning optimisation techniques.
  • Regularisation: Objective functions can include regularisation terms that penalise model complexity, helping to prevent overfitting.
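The classification losses listed above can be sketched in plain Python (illustrative implementations; production libraries add vectorisation and stronger numerical safeguards):

```python
import math

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    # y_true holds labels in {0, 1}; p_pred holds predicted probabilities of class 1.
    # Probabilities are clipped by eps to avoid log(0).
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for t, p in zip(y_true, p_pred)) / len(y_true)

def hinge_loss(y_true, scores):
    # y_true holds labels in {-1, +1}; scores are raw model outputs.
    # Correct predictions with a margin of at least 1 incur zero loss.
    return sum(max(0.0, 1.0 - t * s) for t, s in zip(y_true, scores)) / len(y_true)
```

Confident predictions that agree with the true labels drive both losses towards zero, while confident mistakes are penalised heavily.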
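Gradient descent, the iterative optimisation mentioned above, can be sketched for a one-parameter linear model (the learning rate and step count here are arbitrary choices for a toy example):

```python
def gradient_descent_1d(xs, ys, lr=0.01, steps=500):
    """Fit y ≈ w * x by minimising MSE with gradient descent (toy sketch)."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of (1/n) * sum((w*x - y)^2) with respect to w.
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # step opposite the gradient to reduce the objective
    return w

# With noiseless data y = 3x, the fitted weight converges towards 3.
w = gradient_descent_1d([1.0, 2.0, 3.0], [3.0, 6.0, 9.0])
```

Each step moves the parameter in the direction that decreases the objective, which is exactly what "optimising the objective function" means in practice.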
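A regularised objective can be sketched by adding an L2 (ridge-style) penalty to MSE; the penalty weight `lam` is an illustrative hyperparameter, not a recommended value:

```python
def ridge_objective(y_true, y_pred, weights, lam=0.1):
    """MSE plus an L2 penalty on the model weights.

    The penalty term lam * sum(w^2) discourages large weights,
    trading a little training error for a simpler, less overfit model.
    """
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    penalty = lam * sum(w * w for w in weights)
    return mse + penalty
```

With `lam = 0` this reduces to plain MSE; larger values of `lam` push the optimiser towards smaller weights.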