Datahugging shields proprietary AI models from research that could disprove them
Researchers have identified a practice dubbed "datahugging," in which AI companies restrict access to their proprietary models and training data, blocking independent verification and potentially shielding flawed systems from scrutiny. This lack of transparency hinders the scientific research needed to uncover biases or inaccuracies in commercial AI systems.