London Daily Post

Trapped by grief algorithms, and image AI privacy issues

By Editorial Board
February 6, 2023
in Tech News
Reading Time: 2 mins read


—Tate Ryan-Mosley, senior technology policy reporter

I’ve always been a super Googler, dealing with uncertainty by trying to learn as much as I can about what may come. This included my father’s throat cancer.

I began Googling the stages of grief, and books and academic research on loss, from the app on my iPhone, intentionally and unintentionally consuming people’s experiences of grief and tragedy through Instagram videos, various news feeds, and Twitter testimonials.

However, with each search and click, I inadvertently created a sticky web of digital pain. Ultimately, it would prove nearly impossible to disentangle myself from what the algorithms served me. I finally got out. But why is it so hard to opt out and turn off content we don’t want, even when it’s harmful to us? Read the whole story.

AI models spit out photos of real people and copyrighted images

The news: Image generation models can be prompted to produce identifiable photos of real people, medical images, and copyrighted works by artists, according to new research.

How they did it: The researchers prompted Stable Diffusion and Google’s Imagen with captions for images, such as a person’s name, many times. They then analyzed whether any of the generated images matched the original images in the models’ training data. The group managed to extract more than 100 replicas of images from the AI’s training set.
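For readers curious about the mechanics, the sketch below is a minimal illustration of this kind of extraction test, not the researchers’ actual code. It assumes the open-source diffusers and imagehash Python packages; the caption, image file, sample count, and similarity threshold are hypothetical placeholders, and the paper’s own matching criterion is more rigorous than the perceptual-hash comparison used here.

```python
# Illustrative sketch of a training-data extraction test:
# repeatedly prompt a text-to-image model with a caption believed to be in its
# training set, then flag generations that are near-duplicates of the original
# training image. Assumes the `diffusers`, `torch`, `Pillow`, and `imagehash`
# packages and a CUDA GPU.

import torch
from diffusers import StableDiffusionPipeline
from PIL import Image
import imagehash

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

caption = "A portrait of Jane Doe"            # hypothetical training caption
original = Image.open("training_image.png")   # hypothetical training image
original_hash = imagehash.phash(original)

matches = []
for seed in range(500):                       # sample the same prompt many times
    generator = torch.Generator("cuda").manual_seed(seed)
    generated = pipe(caption, generator=generator).images[0]
    # Perceptual-hash Hamming distance as a cheap stand-in for the paper's
    # similarity test; small distances indicate a near-replica.
    if imagehash.phash(generated) - original_hash <= 5:
        matches.append((seed, generated))

print(f"{len(matches)} near-replicas of the training image found")
```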

Why it matters: The finding could strengthen the case of artists who are currently suing AI companies for copyright violations and could threaten the privacy of human subjects. It could also have implications for startups looking to use generative AI models in healthcare, as it shows that such systems are at risk of leaking sensitive private information. Read the whole story.


