As a follow-up to an often-mentioned topic on Hot Sauce, YouTube is desperately trying to court brands that find the network's AI unsafe, promoting fringe ideas or content that is dangerous to children.
YouTube’s original answer to its child-algorithm problem was an app that whitelisted sources, so parents could put children in front of YouTube without fear of what they might find (not a great parenting technique, but that’s neither here nor there…). Originally, YouTube incentivized views at all costs, and content makers turned to AI to auto-generate videos that were harming children’s cognition.
YouTube was losing significant revenue over brand safety, so it has hired a whole team to support this initiative, and it finally has actual humans watching the videos.
Why it’s hot
AI can be dangerous. Keeping humans in the loop, rather than letting AI run wild, is key to success, even if it means slower revenue growth. If you don’t treat brand safety and human safety as priorities, you may lose out on far more revenue down the line. Is money the only way to keep big tech and AI accountable?