Datahugging shields proprietary AI models from research that could disprove them
Datahugging is the practice by which companies restrict access to their proprietary AI models and training data, blocking the independent research that could disprove or challenge their claimed capabilities. The practice creates barriers to scientific verification and transparency in AI development.