Model Evaluation
How can we assess the quality of a machine learning model?
How can we detect underfitting or overfitting?
How can we adjust the algorithm to achieve a good fit?
This section introduces key concepts and tools for evaluating and improving your models.
Learning outcomes:
- Understand the purpose of splitting data into training, validation, and test sets (sketched in code after this list)
- Become familiar with cross-validation techniques
- Learn common metrics for regression and classification
- Understand how the ROC curve is constructed step by step (see the second sketch below)
- Grasp the concepts of bias and variance and the tradeoff between them
- Define overfitting and underfitting and identify when they occur
- Know strategies for dealing with high bias or high variance
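
As a preview of the first two outcomes, here is a minimal sketch of a train/validation/test split followed by k-fold cross-validation. It assumes scikit-learn and its bundled breast-cancer dataset; both are illustrative choices, not something prescribed by this section.

```python
# A minimal sketch of data splitting and cross-validation.
# scikit-learn and the breast-cancer dataset are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)

# First split off a held-out test set, then carve a validation set
# out of the remaining data (roughly 60/20/20 overall).
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The validation set guides model and hyperparameter choices;
# the test set is used only once, for the final performance estimate.
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Alternatively, 5-fold cross-validation reuses the non-test data
# for both fitting and evaluation, averaging over the folds.
scores = cross_val_score(
    LogisticRegression(max_iter=5000), X_trainval, y_trainval, cv=5
)
print("5-fold CV accuracy:", scores.mean())
```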
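For the ROC outcome, the sketch below shows the construction step by step on a small hand-made example: sweep a decision threshold over the predicted scores and record the false-positive and true-positive rates at each step. The tiny `y_true`/`y_score` arrays are made up for illustration; `roc_curve` and `roc_auc_score` are scikit-learn functions.

```python
# A minimal sketch of how a ROC curve is built (toy data, scikit-learn).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])          # true labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.6])  # model scores

# Each threshold yields one (FPR, TPR) point on the curve.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold {thr:.2f}: FPR={f:.2f}, TPR={t:.2f}")

# The area under the curve summarizes ranking quality in one number.
print("AUC:", roc_auc_score(y_true, y_score))
```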