London Daily Post

Trapped by grief algorithms, and image AI privacy issues

By Editorial Board
February 6, 2023 · Tech News


—Tate Ryan-Mosley, senior technology policy reporter

I’ve always been a super Googler, dealing with uncertainty by trying to learn as much as I can about what may come. This included my father’s throat cancer.

I began Googling the stages of grief, along with books and academic research on loss, from the app on my iPhone, intentionally and unintentionally consuming people's experiences of grief and tragedy through Instagram videos, various news feeds, and Twitter testimonials.

However, with each search and click, I inadvertently created a sticky web of digital pain. Ultimately, it would prove nearly impossible to disentangle myself from what the algorithms served me. I finally got out. But why is it so hard to opt out and turn off content we don’t want, even when it’s harmful to us? Read the whole story.

AI models spit out photos of real people and copyrighted images

The news: Image generation models can be prompted to produce identifiable photos of real people, medical images, and copyrighted works by artists, according to new research.

How they did it: The researchers prompted Stable Diffusion and Google's Imagen many times over with captions for images, such as a person's name. They then analyzed whether any of the generated images matched originals in the models' training data. The group managed to extract more than 100 near-replicas of images from the AI's training set.
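The matching step described above can be sketched in a few lines: generate many samples for a caption, then flag any that sit unusually close to a known training image. This is only an illustrative simplification — the distance measure here (mean squared pixel difference on tiny grayscale thumbnails) and the threshold are placeholders, not the similarity test the researchers actually used.

```python
# Hypothetical sketch of near-duplicate detection between generated samples
# and training images. Images are flattened grayscale pixel lists (0-255);
# real pipelines would use a more robust perceptual similarity measure.

def mse_distance(a, b):
    """Mean squared difference between two equal-length pixel lists."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def find_replicas(generated, training_set, threshold=100.0):
    """Return (gen_idx, train_idx) pairs whose distance is below threshold."""
    matches = []
    for gi, gen in enumerate(generated):
        for ti, train in enumerate(training_set):
            if mse_distance(gen, train) < threshold:
                matches.append((gi, ti))
    return matches

# Toy 4-pixel "thumbnails": the first generated sample nearly copies
# training image 0, so it is flagged as a memorized replica.
training = [[10, 200, 30, 40], [250, 250, 250, 250]]
samples = [[12, 198, 33, 41], [90, 90, 90, 90]]
print(find_replicas(samples, training))  # → [(0, 0)]
```

The key design point the research turns on is the threshold: set it too loose and ordinary resemblance counts as memorization; set it tight enough, and a match is strong evidence the model reproduced a specific training image.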

Why it matters: The finding could strengthen the case of artists who are currently suing AI companies for copyright violations and could threaten the privacy of human subjects. It could also have implications for startups looking to use generative AI models in healthcare, as it shows that such systems are at risk of leaking sensitive private information. Read the whole story.



© 2022 London Daily Post. All Rights Reserved.
