Overview
In this article the authors develop an intrinsic measure for quantifying heterogeneity in training data for supervised learning. The measure is the variance of a random variable defined through the pairwise influences of training points. This variance is shown to capture data heterogeneity and can therefore be used to assess whether a sample is a mixture of distributions. The authors prove that the data itself contains the key information needed to support a partitioning into blocks. Several proof-of-concept studies quantify the connection between variance and heterogeneity on EMNIST image data and on synthetic data. The authors establish that the variance is maximal for equal mixes of distributions, and show how variance-based data purification followed by conventional training over blocks can lead to significant increases in test accuracy.
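The article does not spell out its exact influence definition here, but the idea of a variance over pairwise influences can be illustrated with a minimal sketch. The approximation below, influence of point i on point j as the dot product of their per-example loss gradients under a linear model, is a common first-order surrogate and an assumption of this sketch, not the authors' construction; the function name and setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_influence_variance(X, y, w):
    """Variance of grad_i . grad_j over all ordered pairs i != j.

    Assumes a linear model y_hat = X @ w with squared loss, so the
    per-example gradient is (x_i . w - y_i) * x_i.
    """
    residuals = X @ w - y                  # shape (n,)
    grads = residuals[:, None] * X         # shape (n, d), one gradient per point
    influences = grads @ grads.T           # (n, n) matrix of pairwise dot products
    n = len(y)
    off_diag = influences[~np.eye(n, dtype=bool)]  # drop self-influence terms
    return off_diag.var()

# Homogeneous sample: all labels come from one linear source.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y_pure = X @ w_true + 0.1 * rng.normal(size=200)

# Heterogeneous sample: an equal mix of two conflicting sources.
y_mixed = y_pure.copy()
y_mixed[100:] = X[100:] @ (-w_true) + 0.1 * rng.normal(size=100)

w_pure = np.linalg.lstsq(X, y_pure, rcond=None)[0]
w_mixed = np.linalg.lstsq(X, y_mixed, rcond=None)[0]   # averaged fit

v_pure = pairwise_influence_variance(X, y_pure, w_pure)
v_mixed = pairwise_influence_variance(X, y_mixed, w_mixed)
print(v_pure, v_mixed)   # the mixed sample shows much larger influence variance
```

On the homogeneous sample the fitted model leaves only small residuals, so pairwise influences are small and tightly concentrated; on the equal mixture the averaged fit leaves large, conflicting residuals, and the influence variance spikes, consistent with the article's claim that the variance peaks for evenly mixed distributions.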
Key Takeaways
- Data heterogeneity—not model capacity—is a primary driver of poor generalization in supervised learning. When training data are drawn from mixed distributions, even highly expressive models converge toward averaged predictions that suppress underlying structure and reduce test accuracy.
- The variance of influence provides an intrinsic, model-aware measure of data heterogeneity. By aggregating pairwise influence across training points, influence variance reliably detects mixed distributions and peaks when data are evenly split across underlying sources.
- Reducing influence variance through data stratification measurably improves prediction accuracy while enabling simpler models. A two-stage approach—variance-based data purification followed by conventional training—outperforms monolithic training on heterogeneous data and offers a path toward more efficient, lower-energy machine learning systems.
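The two-stage approach in the last takeaway can be sketched on synthetic data. The article's variance-based purification is replaced here by a plain hard-EM split (alternating least-squares fits and nearest-model reassignment), purely to illustrate why partitioning heterogeneous data into blocks and training per block beats monolithic training; the data generation, block count, and iteration budget are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 300, 3
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true
y[n // 2:] = -X[n // 2:] @ w_true        # second, conflicting source
y += 0.05 * rng.normal(size=n)

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Monolithic training averages the two sources away.
w_mono = fit(X, y)
mse_mono = np.mean((X @ w_mono - y) ** 2)

# Stage 1: purification -- partition into two blocks by alternating
# per-block least-squares fits with nearest-model reassignment.
z = rng.integers(0, 2, size=n)           # random initial block labels
ws = [fit(X[z == k], y[z == k]) for k in (0, 1)]
for _ in range(10):
    errs = np.stack([(y - X @ w) ** 2 for w in ws])  # shape (2, n)
    z = errs.argmin(axis=0)              # reassign each point to its better block
    for k in (0, 1):
        if np.any(z == k):               # guard against an empty block
            ws[k] = fit(X[z == k], y[z == k])

# Stage 2: conventional training per block (the final per-block fits).
mse_blocks = np.mean([((X[z == k] @ ws[k] - y[z == k]) ** 2).mean()
                      for k in (0, 1)])
print(mse_mono, mse_blocks)
```

With an even mix of two opposed linear sources, the monolithic fit collapses toward the averaged prediction and its error stays near the signal power, while the per-block fits recover each source almost exactly, mirroring the accuracy gains the authors report for purification followed by block-wise training.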