
Training and Evaluating Models

Last updated July 29, 2024

Once you've chosen the right model for your machine learning task, it's time to train it on your data and evaluate its performance to ensure it meets your requirements. MOSTLY AI provides a streamlined process for training and evaluating models, making it easy to assess how well they work.

Training Your Model

  • Navigate to the "Models" Section: Within your project's workspace, go to the "Models" section.
  • Select a Model: Pick the model that fits your problem and data characteristics (e.g., Linear Regression, Logistic Regression, Decision Tree).
  • Configure Training Parameters: Set the relevant training parameters for the chosen model:
      • Epochs: The number of complete passes the model makes through the training dataset.
      • Batch Size: The number of training examples used in each weight update.
      • Learning Rate: Controls how much the model adjusts its weights at each update. A high learning rate can make training unstable, while a low one can make training slow.
  • Start Model Training: Initiate the training process. MOSTLY AI displays progress, including training metrics such as loss and accuracy.
  • Monitor Training Progress: Watch the training metrics to understand how the model is performing, and look for signs of overfitting or underfitting.

Evaluating Your Model

  • Split Data: Before training, your data is typically split into three sets:
      • Training Set: Used to train the model.
      • Validation Set: Used during training to monitor performance and prevent overfitting.
      • Test Set: Used to evaluate the final model's performance on unseen data.
  • Assess Performance: Evaluate your model's performance on the test set using metrics appropriate to your task:
      • Regression Metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), R-squared (R²).
      • Classification Metrics: Accuracy, Precision, Recall, F1-score, Receiver Operating Characteristic (ROC) curve, Area Under the Curve (AUC).
      • Clustering Metrics: Silhouette score, Davies-Bouldin index.
  • Iterative Improvement: If your model's performance isn't satisfactory, iterate on the process by:
      • Tuning Hyperparameters: Adjusting training parameters such as learning rate, epochs, or regularisation strength.
      • Feature Engineering: Creating new features or transforming existing ones to improve model accuracy.
      • Trying Different Models: Experimenting with different model types.
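The split-and-evaluate workflow above can be sketched as follows. This is a generic scikit-learn example, not MOSTLY AI-specific code; the 60/20/20 split ratio and LogisticRegression are illustrative assumptions.

```python
# Three-way split (train / validation / test) and test-set evaluation
# with the classification metrics listed above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# First hold out the test set, then carve a validation set from the rest:
# 1000 -> 800 train+val / 200 test -> 600 train / 200 val / 200 test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Final assessment on unseen data only
y_pred = model.predict(X_test)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1       :", f1_score(y_test, y_pred))
print("roc auc  :", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The validation set (here `X_val`) is reserved for decisions made *during* model development, so the test-set numbers remain an honest estimate of performance on unseen data.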
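The hyperparameter-tuning step can likewise be automated with a cross-validated grid search. Again this is a hedged, generic sketch using scikit-learn; the parameter grid and scoring metric are arbitrary choices for illustration.

```python
# Cross-validated search over regularisation strength, one example of
# the "Tuning Hyperparameters" step. Values in param_grid are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# C is the inverse regularisation strength: smaller C = stronger regularisation
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      param_grid, cv=5, scoring="f1")
search.fit(X, y)

print("best C:", search.best_params_["C"])
print("best cross-validated F1:", round(search.best_score_, 4))
```

Each candidate value is scored by 5-fold cross-validation on the training data, so the test set stays untouched until the final model is chosen.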

By understanding the training and evaluation process, you can build models that perform well on your data and achieve your machine learning objectives within MOSTLY AI.
