Creating Confidence Intervals for Machine Learning Classifiers
A guide to creating confidence intervals for evaluating machine learning models, covering multiple methods to quantify performance uncertainty.
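Since the guide covers multiple interval methods without naming them here, the following is a minimal sketch of two standard approaches often used for this purpose: a normal-approximation (binomial) interval and a bootstrap percentile interval around test-set accuracy. The dataset, model, and split are illustrative assumptions, not taken from the original.

```python
# Sketch: two common ways to put a 95% confidence interval around test accuracy.
# Dataset (breast cancer), model (logistic regression), and split are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
correct = (clf.predict(X_test) == y_test).astype(float)  # per-example 0/1 correctness
acc = correct.mean()
n = correct.shape[0]

# Method 1: normal approximation, acc +/- z * sqrt(acc * (1 - acc) / n).
z = 1.96  # z-score for ~95% coverage
half_width = z * np.sqrt(acc * (1 - acc) / n)
print(f"Normal approx. 95% CI: {acc:.3f} +/- {half_width:.3f}")

# Method 2: bootstrap percentile interval, resampling the test-set correctness.
rng = np.random.default_rng(0)
boot_accs = np.array([
    rng.choice(correct, size=n, replace=True).mean()
    for _ in range(2000)
])
lower, upper = np.percentile(boot_accs, [2.5, 97.5])
print(f"Bootstrap 95% CI: [{lower:.3f}, {upper:.3f}]")
```

Both intervals quantify uncertainty from the finite test set; the bootstrap version extends more readily to metrics other than accuracy.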