“The problem is that literally anyone can watch these videos: kids, adults, it doesn’t matter,” he says. Matt first saw a fractal wood burning video shared by a friend on Facebook and was so intrigued that he “started watching YouTube videos, and they’re endless.”
Matt was electrocuted when a piece of the casing around the jumper wires he was using came loose and his palm touched metal. “I truly believe that if my husband had been fully aware [of the dangers], he wouldn’t have been doing it,” Schmidt says. Her plea is simple: “When it comes to something that has the ability to kill someone, there should always be a warning… YouTube needs to do a better job, and I know it can, because it censors all kinds of people.”
After Matt’s death, medical professionals at the University of Wisconsin wrote an article titled “Shocked Through the Heart and YouTube Is to Blame.” Citing Matt’s death and four fractal wood burning injuries they had personally dealt with, they called for “a warning label to be inserted before users can access video content” about the crafting technique. “While it is not possible, or even desirable, to flag every video that represents a potentially risky activity,” they wrote, “it seems practical to apply a warning label to videos that can lead to instant death when imitated.”
Matt and Caitlin Schmidt had been best friends since they were 12 years old. He leaves behind three children. Schmidt says her family has suffered “pain, loss and devastation” and will carry a lifetime of grief. “Now we’re the cautionary tale,” she says, “and I wish with everything in my life we weren’t.”
YouTube told MIT Technology Review that its community guidelines prohibit content intended to encourage dangerous activity or that carries an inherent risk of physical harm. Warnings and age restrictions apply to graphic videos, and a combination of technology and human staff enforces the company’s guidelines. Dangerous videos banned by YouTube include challenges that pose an imminent risk of injury, pranks that cause emotional distress, drug use, glorification of violent tragedies, and instructions on how to kill or harm. However, videos may depict dangerous acts if they contain sufficient educational, documentary, scientific, or artistic context.
YouTube first introduced the ban on dangerous pranks and challenges in January 2019, a day after a blindfolded teenager crashed into a car while taking part in the so-called “Bird Box Challenge.”
When approached by MIT Technology Review, YouTube removed “a number” of fractal wood burning videos and restricted others. But the company did not say why it moderates against pranks and challenges but not hacks.
It would certainly be a challenge to do so. Each 5-Minute Crafts video contains numerous hacks, one after another, many of which are just plain weird but not harmful. And the ambiguity of hack videos, an ambiguity not present in challenge videos, can be difficult for human moderators to judge, let alone AI. In September 2020, YouTube reinstated human moderators who had been “taken offline” during the pandemic after determining that its AI had been overzealous, doubling the number of incorrect takedowns between April and June.