—Tate Ryan-Mosley, senior technology policy reporter
I’ve always been a super Googler, dealing with uncertainty by trying to learn as much as I can about whatever might come. That included my father’s throat cancer.
I began Googling the stages of grief, along with books and academic research on loss, from the app on my iPhone, intentionally and unintentionally consuming people’s experiences of grief and tragedy through Instagram videos, various news feeds, and Twitter testimonials.
However, with each search and click, I inadvertently created a sticky web of digital pain. Ultimately, it would prove nearly impossible to disentangle myself from what the algorithms served me. I finally got out. But why is it so hard to opt out and turn off content we don’t want, even when it’s harmful to us? Read the whole story.
AI models spit out photos of real people and copyrighted images
The news: Image generation models can be prompted to produce identifiable photos of real people, medical images, and artists’ copyrighted works, according to new research.
How they did it: The researchers prompted Stable Diffusion and Google’s Imagen with captions for images, such as a person’s name, many times over. They then analyzed whether any of the generated images matched the originals in the model’s training data. The group managed to extract more than 100 replicas of images from the AI’s training set.
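For the curious, here is a minimal sketch of that kind of extraction test, assuming the Hugging Face diffusers library and a simple pixel-distance check standing in for the researchers’ more careful similarity measures. The model name, sample count, and threshold below are illustrative assumptions, not the paper’s actual setup.

```python
# Hypothetical sketch: prompt a diffusion model many times with a caption
# drawn from its training set, then flag any generated images that are
# near-duplicates of the original training image.
import numpy as np
from PIL import Image
from diffusers import StableDiffusionPipeline

# Assumed model checkpoint; the paper tested Stable Diffusion and Imagen.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def to_array(img: Image.Image, size=(64, 64)) -> np.ndarray:
    """Downscale and normalize an image for a cheap pixel-space comparison."""
    small = img.convert("RGB").resize(size)
    return np.asarray(small, dtype=np.float32) / 255.0

def find_near_duplicates(caption: str, original: Image.Image,
                         n_samples: int = 100, threshold: float = 0.05):
    """Generate n_samples images for one caption; return those whose mean
    pixel distance to the original falls below the (illustrative) threshold."""
    target = to_array(original)
    matches = []
    for _ in range(n_samples):
        generated = pipe(caption).images[0]
        distance = np.mean(np.abs(to_array(generated) - target))
        if distance < threshold:  # close enough to count as regurgitation
            matches.append(generated)
    return matches
```

In practice the researchers used more robust similarity metrics than raw pixel distance, but the loop structure, repeated sampling of the same caption followed by a duplicate check, captures the idea.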
Why it matters: The finding could strengthen the case of artists who are currently suing AI companies for copyright violations and could threaten the privacy of human subjects. It could also have implications for startups looking to use generative AI models in healthcare, as it shows that such systems are at risk of leaking sensitive private information. Read the whole story.