Measuring the Trade-off Between Model Complexity and Ecological
Abstract
The current obsession with massive neural architectures has led to a crisis of resource waste in the tech sector. Our study investigates a more sustainable path by testing whether high-performance results truly require high-energy hardware. We put “slim” algorithms—specifically K-Nearest Neighbors and Naive Bayes—up against heavy-duty Deep Learning systems to see whether similar outcomes can be achieved with a fraction of the electricity. Our data indicates that for many standard classification problems, the industry’s reliance on power-hungry GPUs is misplaced. By switching to CPU-centered statistical models, we found that electricity requirements can be cut by a factor of ten while incurring only a marginal loss in output quality. We argue for a fundamental change in how the industry judges “success.” By adopting Resource-Weighted Accuracy (RWA) as a primary metric, we can move toward a future where a model’s efficiency is considered just as important as its precision.
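The abstract names Resource-Weighted Accuracy (RWA) but does not define it here. The sketch below is purely illustrative and assumes one plausible form: a weighted blend of accuracy and an energy-efficiency term normalized against the cheapest model measured. The function name, the blending weight `alpha`, and the specific numbers are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch of an RWA-style metric; the paper's actual
# definition is not given in this abstract.

def resource_weighted_accuracy(accuracy: float,
                               energy_joules: float,
                               baseline_energy_joules: float,
                               alpha: float = 0.5) -> float:
    """Blend accuracy with an energy-efficiency term.

    baseline_energy_joules: energy of the cheapest model under comparison,
    so efficiency is 1.0 for that model and < 1.0 for costlier ones.
    alpha: weight given to efficiency relative to raw accuracy (assumed).
    """
    if energy_joules <= 0:
        raise ValueError("energy_joules must be positive")
    efficiency = min(baseline_energy_joules / energy_joules, 1.0)
    return (1 - alpha) * accuracy + alpha * efficiency

# Illustrative comparison: a CPU model using ~10x less energy can
# outscore a slightly more accurate GPU model under this weighting.
slim = resource_weighted_accuracy(accuracy=0.93, energy_joules=50.0,
                                  baseline_energy_joules=50.0)
deep = resource_weighted_accuracy(accuracy=0.95, energy_joules=500.0,
                                  baseline_energy_joules=50.0)
```

Under these assumed numbers the slim model scores 0.965 against the deep model's 0.525, capturing the abstract's claim that efficiency should weigh as heavily as precision.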
Copyright (c) 2026 Abitha S, Krishna Karthik, C.J. Preethi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

