Computer scientists at Stanford say they have “developed the first system for automatically synthesizing sounds to accompany physics-based computer animations.” The system “simulates sound from first physical principles,” and, most impressively, unlike other AI, “no training data is required.”
Why it’s hot:
While most AI to date requires extensive training before it can synthesize a usable output, this system requires none. It’s not the first AI to work without human assistance, but a future that might have seemed years off is arriving fast. If AI can construct sound from visuals using physical principles alone, you have to wonder how long it will be before it can reconstruct physical objects from sound.
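To give a sense of what “sound from first physical principles” means, the classic physics-based approach is modal sound synthesis: an impact sound is modeled as a sum of exponentially damped sinusoids whose frequencies and decay rates come from the object’s vibration modes. The sketch below is illustrative only; the mode values are made up and are not taken from the Stanford system.

```python
import numpy as np

SAMPLE_RATE = 44_100

def synthesize_impact(modes, duration=0.5, sample_rate=SAMPLE_RATE):
    """Sum damped sinusoids: each mode is (frequency_hz, damping, amplitude)."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    sound = np.zeros_like(t)
    for freq, damping, amp in modes:
        # Each vibration mode rings at its own frequency and dies off
        # exponentially at its own damping rate.
        sound += amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    return sound

# Hypothetical modes loosely resembling a small struck metal object.
modes = [(523.0, 8.0, 1.0), (1318.0, 12.0, 0.6), (2093.0, 20.0, 0.3)]
audio = synthesize_impact(modes)
```

In a full physics-based pipeline, the mode frequencies and dampings would be computed from the simulated object’s geometry and material rather than hand-picked as they are here.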