Technology
Skillbuilder: Social Awareness

Smart Machines & Tough Choices

Effort: 10 minutes
Earns: +10 Points

Winner: (1) $100 e-gift card


AI has transformed industries from healthcare to entertainment, but it raises significant ethical concerns. For example:

Autonomous Decisions: Self-driving cars must make split-second decisions that could risk lives. Who is responsible when something goes wrong: the programmer, the company, or the car itself?

Bias in Algorithms: AI systems have been shown to replicate or even amplify biases, such as when facial recognition software struggled to identify people of color accurately. How can we ensure fairness in AI applications?

Robot Rights: As robots become more lifelike, some argue they deserve rights, like being free from abuse or exploitation, while others say rights should remain exclusive to humans.

Liability in Errors: AI-powered medical tools that misdiagnose conditions pose a challenge: is the doctor, the programmer, or the company accountable?

Surveillance and Privacy: AI-powered surveillance tools can enhance national security but risk violating personal freedoms, as seen in China’s widespread use of AI to monitor citizens.

Choose a Scenario: Select one AI-related scenario from the list below:

1. A self-driving car is about to hit an obstacle.

2. A robot assists with classroom teaching but struggles with fairness.

3. An AI app recommends who should get a scholarship based on their profile.

4. A smart home assistant overhears a private conversation.

Create 3 AI Laws: Based on your chosen scenario, quickly write down THREE rules you think should guide how AI behaves in that situation. Use these questions to help:

a. What is the AI allowed to do?
b. How can we make sure it’s fair and safe?
c. Who should be responsible if something goes wrong?

Sponsored by