AI = Good = Bad?

In this blog post, I will show how AI can save lives and how it can make them worse.

 

First, I will present three examples of how AI has helped people.

 

1. Preventing Accidents

Around 1.35 million people die in vehicle-related accidents each year. The !important app aims to minimize the risk of accidents involving certified connected vehicles. The AI-enabled app creates a virtual protection zone around pedestrians, wheelchair users, cyclists, and motorcyclists. If a connected vehicle gets too close to an !important user, its brakes are triggered automatically.
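The core idea is a simple proximity check. Here is a minimal sketch of how such a "protection zone" could work; the names (`PROTECTION_RADIUS_M`, `should_brake`) and the radius value are my own assumptions, not !important's actual API or parameters.

```python
import math

# Assumed size of the protection zone around a vulnerable road user, in metres.
PROTECTION_RADIUS_M = 10.0

def distance_m(a, b):
    """Straight-line distance between two (x, y) positions in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def should_brake(vehicle_pos, user_pos, radius=PROTECTION_RADIUS_M):
    """Trigger the brakes when a connected vehicle enters the zone."""
    return distance_m(vehicle_pos, user_pos) <= radius
```

A real system would of course use GPS coordinates, vehicle speed, and heading rather than flat (x, y) positions, but the trigger condition boils down to a check like this.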

2. Detecting Medical Conditions

A voice-based digital assistant listens in on emergency calls and analyzes what callers say to determine their current medical state. Trials of the system in Copenhagen found that it decreased cases of undetected out-of-hospital cardiac arrest by 43%. The company is already working on ways its solution can be used to diagnose other ailments.

 

3. Improving Pharmaceutical Solutions

Owkin uses machine learning to predict disease evolution, improve treatment, and enhance drug development. It leverages an extensive amount of data from its hospital partners to find ways to help patients recover more quickly while minimizing side effects.


So we can see that AI can save a lot of lives and, all in all, is changing our lives for the better... but now let's look at some cases where it made lives worse.


1. UK lost thousands of COVID cases by exceeding spreadsheet data limit

Nearly 16,000 coronavirus cases went unreported between Sept. 25 and Oct. 2, 2020. Test results were collated in Excel's legacy XLS format, which caps each worksheet at 65,536 rows (modern XLSX allows 1,048,576), so rows past the limit were simply dropped. The "glitch" didn't prevent individuals who got tested from receiving their results, but it did stymie contact-tracing efforts, making it harder for the UK National Health Service to identify and notify people who had been in close contact with infected individuals.
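The fix is straightforward once you know the limit exists: split the data into chunks that each fit a worksheet instead of silently losing the overflow. A minimal sketch (the function name and structure are mine, not part of the actual NHS pipeline):

```python
XLS_MAX_ROWS = 65_536       # legacy .xls worksheet limit
XLSX_MAX_ROWS = 1_048_576   # modern .xlsx worksheet limit

def split_into_sheets(rows, max_rows=XLS_MAX_ROWS):
    """Split a list of records into chunks that each fit one worksheet,
    rather than truncating everything past the row limit."""
    return [rows[i:i + max_rows] for i in range(0, len(rows), max_rows)]
```

The deeper lesson, though, is probably not to use a spreadsheet as a database for a national contact-tracing system in the first place.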


2. Dataset trained Microsoft chatbot to spew racist tweets

Microsoft released Tay, an AI chatbot, on Twitter in March 2016 as an experiment in "conversational understanding." Within 16 hours, the chatbot posted more than 95,000 tweets. Those tweets rapidly turned overtly racist, misogynistic, and anti-Semitic. Microsoft quickly suspended the service for adjustments and ultimately pulled the plug on the bot. The chatbot's predecessor, Xiaoice, had successfully had conversations with more than 40 million people in the two years prior to Tay's release. "We had made a critical oversight for this specific attack," said Peter Lee, corporate vice president, Microsoft Research.

I remember this one; it was very funny at the time.


3. Target analytics violated privacy

Target's marketing department wanted to determine which customers were pregnant. The retail giant collected data from shopper codes, credit cards, surveys, and more, and used it to generate a "pregnancy prediction" score for each customer. The company could then target high-scoring customers with coupons and marketing messages. After the backlash, Target didn't back away from its targeted marketing; it just mixed in ads for things it knew pregnant women wouldn't buy, so the targeting would be less obvious.


References:

https://www.forbes.com/sites/serenitygibbons/2020/08/25/5-life-saving-applications-of-artificial-intelligence/?sh=68dca2b61c58

https://www.cio.com/article/3586802/5-famous-analytics-and-ai-disasters.html
