Beneficial and Harmful Effects

Questions 1 - 10
1

Case study (Artificial Intelligence in Healthcare, ~505 words): A regional hospital system deploys an AI tool called ScanAid to help radiologists review chest X-rays and CT scans. ScanAid uses machine learning, meaning it learns patterns from large sets of labeled medical images to estimate the likelihood of conditions such as pneumonia or a collapsed lung. When a new scan arrives, the system highlights areas it deems suspicious and assigns a risk score. The hospital integrates ScanAid into its workflow so that high-risk cases move to the front of the review queue.

The benefits appear quickly. In the emergency department, clinicians receive faster preliminary flags, which helps prioritize patients who may need urgent treatment. Administrators report shorter average turnaround times for imaging results, and radiologists say the tool reduces “missed findings” on busy nights by acting as a second set of eyes. This aligns with real-world momentum: the U.S. Food and Drug Administration has authorized hundreds of AI-enabled medical devices, many aimed at imaging, reflecting a broad belief that computing can improve efficiency and decision support.

Yet the hospital also confronts serious risks. ScanAid requires large quantities of patient data for training and ongoing monitoring. Even if names are removed, data can sometimes be re-identified when combined with other information, raising privacy concerns. The system’s vendor stores model updates in the cloud, and a security audit warns that misconfigured access controls could expose sensitive scans. In addition, clinicians discover performance gaps: the tool flags fewer abnormalities in patients from a smaller rural clinic that uses older imaging machines. A quality team suspects the training data underrepresents those machine types, a form of algorithmic bias (systematic error that disadvantages certain groups or settings). Finally, some physicians worry about “automation bias,” where staff may over-trust a computer’s score and overlook contradictory clinical evidence.

The hospital responds by requiring human sign-off on all diagnoses, conducting bias tests across equipment types, and encrypting data in transit and at rest. Still, leaders acknowledge that the same computing innovation that boosts speed and consistency also introduces new ethical dilemmas around data security, fairness, and accountability.

Based on the case study, what are two benefits and two risks associated with AI diagnostics as described in the passage?

Benefits: it replaces radiologists entirely and guarantees perfect accuracy; Risks: patients stop needing imaging and hospitals lose electricity.

Benefits: stronger Wi‑Fi and lower cafeteria prices; Risks: louder MRI machines and increased snowfall near the rural clinic.

Benefits: faster triage and fewer missed findings; Risks: privacy exposure through cloud storage and biased performance across equipment types.

Benefits: reduced privacy because data is never stored; Risks: slower emergency care because high-risk cases are hidden from clinicians.

Explanation

This question tests understanding of the dual impact of computing innovations (AP CSP standard: Impacts of Computing, 7.1). Computing technologies can have both beneficial and harmful effects on society. In this passage, the ScanAid case study describes how the tool provides faster triage and reduces missed findings while also creating privacy risks through cloud storage and exhibiting biased performance across equipment types. The correct choice pairs exactly those two benefits (faster triage and fewer missed findings) with those two risks (privacy exposure through cloud storage and biased performance across equipment types), all explicitly mentioned in the passage. The choice claiming the tool replaces radiologists entirely and guarantees perfect accuracy makes extreme claims not supported by the text: the passage emphasizes that human sign-off is still required and never suggests perfect accuracy or that patients stop needing imaging. To help students: Encourage critical analysis of both benefits and risks when evaluating computing innovations, discuss real-world examples, and encourage students to consider multiple perspectives. Watch for: Students selecting answers with absolute claims, or with benefits and risks not mentioned in the passage.
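The bias tests across equipment types mentioned in the ScanAid case study can be sketched as a simple flag-rate audit. This is an invented illustration, not the hospital's actual procedure; the machine-type names, data values, and function names are all hypothetical.

```python
# Hypothetical sketch of a bias audit: compare the AI tool's flag rate
# across imaging-machine types. All data values are invented.

def flag_rate(results):
    """Fraction of scans the tool flagged as suspicious (1 = flagged)."""
    return sum(results) / len(results)

def audit_by_machine(scans):
    """scans maps machine type -> list of 1 (flagged) / 0 (not flagged)."""
    return {machine: flag_rate(results) for machine, results in scans.items()}

scans = {
    "new_ct": [1, 1, 0, 1, 0, 1, 1, 0],  # modern machines at the main hospital
    "old_ct": [0, 0, 1, 0, 0, 0, 1, 0],  # older machines at the rural clinic
}

rates = audit_by_machine(scans)
gap = abs(rates["new_ct"] - rates["old_ct"])
# A large gap suggests the model underperforms on machine types that were
# underrepresented in its training data -- the algorithmic bias the
# quality team suspects in the passage.
```

A real audit would also control for differences in patient populations between sites, so a raw flag-rate gap is only a first signal, not proof of bias.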

2

Case study (Artificial Intelligence in Healthcare): A regional hospital network deploys an AI system to assist radiologists in detecting early signs of lung cancer on CT scans. The innovation uses machine learning, meaning the software learns patterns from thousands of labeled images and then estimates the probability that a new scan contains suspicious nodules. The hospital integrates the tool into its workflow: the AI highlights areas of concern, and a clinician makes the final diagnosis. A 2023 U.S. FDA announcement notes a growing number of AI-enabled medical devices cleared for clinical use, reflecting real-world adoption. The network also pilots a personalized-medicine feature that combines lab results, medication history, and wearable data (like heart rate trends) to recommend follow-up tests.

Benefits: The hospital reports shorter wait times for scan reviews and more consistent “second-look” screening in busy weeks. In one month, the AI flags subtle nodules that a fatigued overnight team might miss, enabling earlier follow-up for several patients. Administrators note efficiency gains: fewer repeat scans, faster triage, and improved access for rural clinics that can upload images for review.

Harms: The same system introduces risks. Because it requires large datasets, the hospital stores sensitive imaging and health records in a centralized repository; a ransomware attempt forces the network to temporarily disconnect systems, delaying non-urgent appointments. Ethical concerns also emerge: the training data underrepresents certain demographic groups, and clinicians worry the model may be less accurate for them (algorithmic bias, meaning uneven performance across groups). Finally, some staff fear “automation complacency,” where clinicians may over-trust the AI’s highlights and overlook unmarked abnormalities.

Based on the case study, what are two benefits and two risks associated with AI in healthcare as described in the passage?

Lower electricity use and fewer hospital buildings; risks include eliminating all clinician jobs and guaranteeing perfect diagnoses.

Higher accuracy for every demographic and complete privacy; risks include slower workflows and more repeat scans.

Faster triage and earlier detection; risks include data breaches/ransomware and biased accuracy from underrepresented training data.

Improved cafeteria logistics and parking efficiency; risks include social media addiction and online shopping fraud.

Explanation

This question tests understanding of the dual impact of computing innovations (AP CSP standard: Impacts of Computing, 7.1). Computing technologies can have both beneficial and harmful effects on society. In this passage, the case study describes how AI in healthcare enables faster triage and earlier detection while also creating risks of data breaches and biased accuracy. The correct choice pairs those two benefits (faster triage and earlier detection) with two risks taken directly from the passage (data breaches/ransomware and biased accuracy from underrepresented training data). The choice claiming higher accuracy for every demographic and complete privacy contradicts the passage's discussion of the ransomware attempt and of algorithmic bias. To help students: Encourage careful reading to identify the specific benefits and harms mentioned in a passage, and discuss how AI systems can perpetuate bias through their training data. Watch for: Students confusing aspirational goals with actual outcomes, or missing subtle risks like algorithmic bias.

3

A company uses an AI tool to screen job applications. If the AI were trained on the company's hiring data from the past 30 years, during which certain demographic groups were historically underrepresented, what is a likely harmful effect?

The tool may overlook qualified candidates who use unconventional formatting on their résumés.

The tool will reduce the amount of time that human resources staff spend reading through unqualified applications.

The tool can process applications 24/7, allowing the company to respond to applicants more quickly.

The tool may perpetuate past biases by favoring candidates from demographic groups that were historically hired more often.

Explanation

The correct answer is that the tool may perpetuate past biases by favoring candidates from demographic groups that were historically hired more often. This describes algorithmic bias, a significant harmful effect in which a computing innovation reflects and amplifies existing human biases present in its training data. The choices about reducing HR staff reading time and processing applications 24/7 are the intended beneficial effects. Overlooking candidates with unconventional résumé formatting is a potential flaw or limitation, but perpetuating historical bias is a more systemic and harmful societal effect.

4

A medical research institute develops a sophisticated computer simulation to model the spread of a new virus. What is the primary beneficial reason for using a simulation instead of studying the real-world phenomenon directly?

Simulations require no data from the real world to be created, as they are based purely on theoretical algorithms.

Simulations are less complex than the real-world phenomena they model, which means they provide no useful information.

Simulations allow for the investigation of phenomena that would be too dangerous, slow, or expensive to experiment with in the real world.

Simulations perfectly replicate the real world in every detail, guaranteeing that their predictions are always 100% accurate.

Explanation

The correct answer is that simulations allow for the investigation of phenomena that would be too dangerous, slow, or expensive to experiment with in the real world. This is the core benefit of simulations: they provide a safe, fast, and cost-effective way to study complex systems, and it would be unethical and dangerous to intentionally spread a virus to study it. The claim that simulations perfectly replicate the real world is incorrect; simulations are abstractions and are never perfectly accurate. The claim that they require no real-world data is incorrect; simulations are built upon real-world data and observations. And the claim that being less complex makes them useless is backwards; their simplification is precisely what makes them useful for analysis.
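The safe-experimentation benefit described above can be made concrete with a toy compartmental model. This SIR-style sketch is an invented illustration, not the institute's actual model; the parameter values (`beta`, `gamma`) and the intervention comparison are hypothetical.

```python
# Minimal SIR-style epidemic sketch: susceptible/infected/recovered
# fractions of a population, updated once per simulated day.
# All parameters are invented for illustration.

def simulate(days, beta, gamma, s=0.99, i=0.01, r=0.0):
    """Run the model for `days` steps; beta = contact rate, gamma = recovery rate."""
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        history.append((s, i, r))
    return history

# The point of simulation: test an intervention at zero real-world risk.
baseline = simulate(days=100, beta=0.3, gamma=0.1)
distancing = simulate(days=100, beta=0.15, gamma=0.1)  # halved contact rate
peak_base = max(i for _, i, _ in baseline)
peak_dist = max(i for _, i, _ in distancing)
# Lowering the contact rate reduces the infection peak -- a conclusion
# reached without exposing anyone to a real virus.
```

Note how the model illustrates the explanation's other points too: it is an abstraction (three numbers stand in for an entire population), and a real version would be calibrated against real-world data.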

5

A genealogy website that analyzes users' DNA shares their genetic data with pharmaceutical companies for medical research. Which of the following describes how this practice could be viewed as both beneficial and harmful?

It is beneficial for the website which profits from the data, and beneficial for the pharmaceutical companies which get research data.

It is harmful because it violates medical ethics, but beneficial because it allows users to find new relatives.

It is harmful to users who do not read the privacy policy, but beneficial to users who understand where their data is going.

It is beneficial for medical research which could lead to new cures, but harmful as it risks the genetic privacy of users.

Explanation

The correct answer is the choice pairing the benefit to medical research (which could lead to new cures) with the harm to users' genetic privacy. This presents a clear trade-off: the data sharing contributes to a broad societal benefit but creates a potential harm for the individuals contributing the data. The choice calling the practice beneficial for both the website and the pharmaceutical companies only lists benefits. The choice framing harm and benefit around whether users read the privacy policy focuses on user awareness rather than the effect itself. The choice pairing a medical-ethics violation with finding new relatives disconnects the harm from the benefit, whereas the core issue is the data sharing itself.

6

A school district moves much of its coursework onto an online learning platform. Which of the following is a potential harmful societal effect of relying on this platform?

Students who are absent due to illness can use the platform to catch up on missed work more easily.

The platform requires regular software updates to maintain security and add new features for all users.

It can worsen the educational gap for students from low-income families who may lack reliable Internet access or suitable devices.

Teachers can use the platform to record lectures in advance, which gives them more flexibility in their schedules.

Explanation

The correct answer is that the platform can worsen the educational gap for students from low-income families who may lack reliable Internet access or suitable devices. This describes a harmful effect related to the 'digital divide,' where a technology that benefits some can disadvantage others who lack the necessary resources, thereby increasing societal inequality. The choices about absent students catching up and teachers recording lectures in advance are beneficial effects of the platform. The need for regular software updates is a maintenance requirement, not a harmful societal effect.

7

A smart thermostat collects data about when residents are home and automatically adjusts the temperature to save energy. Which of the following describes a potential harmful effect resulting from the thermostat's method of operation?

The data collected could reveal patterns of when a home is unoccupied, creating a security risk if the data is stolen.

The thermostat helps the household reduce its monthly energy bill, saving the residents money.

The device's software can be updated over the Internet to add new energy-saving features.

The automatic adjustments make the home more comfortable for its residents without requiring manual changes.

Explanation

The correct answer is that the collected data could reveal patterns of when the home is unoccupied, creating a security risk if the data is stolen. The data collection that enables the beneficial energy-saving feature also creates a potential harm: if this data falls into the wrong hands, it could be used to determine when the house is empty, making it a target for burglary. The choices about lower energy bills and a more comfortable home are the intended beneficial effects. Over-the-Internet software updates are a feature of the device, which is generally beneficial, not a harmful effect of its operation.

8

An online retail company uses an algorithm to suggest products to customers based on their previous purchases. A beneficial effect of this is that customers may find products they like more easily. Which of the following is a potential harmful effect of this same computing innovation?

The company's sales may increase because customers are more likely to purchase the suggested items.

The algorithm helps the company manage its inventory by predicting which products will be popular among certain customers.

The website may load faster for users because the algorithm pre-selects a smaller set of products to display.

The algorithm may limit a customer's exposure to new or different types of products, reinforcing their existing preferences.

Explanation

The correct answer is that the algorithm may limit a customer's exposure to new or different types of products, reinforcing their existing preferences. This describes a potential harmful effect known as a 'filter bubble,' where the user is only shown content that aligns with their past behavior, reducing their exposure to diverse options. The choices about increased sales and better inventory management describe beneficial effects for the company. Faster page loading describes a potential technical benefit, not a harmful effect on the user's range of choice.
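The filter-bubble effect described above can be seen in a toy recommender. This is a hypothetical sketch; the catalog, category names, and `recommend` helper are all invented for illustration, not taken from any real system.

```python
# Toy purchase-history recommender: suggest items only from the
# categories a customer has bought from most often. Invented data.

from collections import Counter

def recommend(purchase_categories, catalog, k=2):
    """Return up to k items drawn from the customer's top-k past categories."""
    counts = Counter(purchase_categories)
    top = [cat for cat, _ in counts.most_common(k)]
    return [item for item, cat in catalog if cat in top][:k]

catalog = [
    ("running shoes", "sports"), ("yoga mat", "sports"),
    ("novel", "books"), ("headphones", "electronics"),
]
purchases = ["sports", "sports", "books"]

suggestions = recommend(purchases, catalog)
# Every suggestion comes from a category already purchased;
# "headphones" (electronics) is never surfaced: a filter bubble in miniature.
```

The benefit and the harm come from the same mechanism: restricting suggestions to past behavior makes them relevant, and that same restriction is what narrows the customer's exposure.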

9

In some cities, ride-sharing applications have been observed to increase traffic congestion and decrease public transit use. These effects are best described as which of the following?

Unintended harmful effects of a technology designed for convenience.

The stated and intended purposes of the ride-sharing applications.

Technical limitations of the software used in the applications.

Beneficial effects that improve the overall transportation infrastructure.

Explanation

The correct answer is that these are unintended harmful effects of a technology designed for convenience. The primary purpose of ride-sharing apps is to provide convenient transportation, not to increase traffic or decrease public transit use; these negative consequences were not part of the original design. They were not the apps' stated and intended purposes. They are not technical limitations of the software, because the issue is societal impact, not a technical flaw. And they are not beneficial effects, because they are described as harmful.

10

Case study (Smart Devices): A city installs “smart” streetlights with motion sensors and networked controllers. The lights dim when streets are empty and brighten when pedestrians or cars approach, aiming to reduce energy costs. A central dashboard collects status data (bulb health, power usage) and sends maintenance alerts. Similar systems appear in many municipalities as part of smart-city initiatives, and vendors often claim measurable energy savings from adaptive lighting.

Benefits: The city reports lower electricity bills and quicker repairs because crews replace failing lights before outages occur. Residents also report feeling safer on well-lit routes that brighten as they walk.

Harms: The networked design introduces new security risks: if attackers access the control system, they could disrupt lighting patterns. Privacy concerns arise because motion data, while not always personally identifying, can still reveal patterns of movement in specific neighborhoods. The city also becomes reliant on the vendor for software updates; when support is delayed, known vulnerabilities remain unpatched longer than desired.

Considering the described technology, what are two benefits and two risks associated with smart devices as described in the passage?

Improved test scores and cheaper groceries; risks include misinformation spread and declining local bookstores.

Elimination of all crime and perfect cybersecurity; risks include streetlights working only during daytime and never at night.

Lower energy costs and proactive maintenance; risks include hacking of network controls and privacy concerns from movement-pattern data.

Guaranteed anonymity and no vendor dependence; risks include higher electricity use and slower repairs due to fewer sensors.

Explanation

This question tests understanding of the dual impact of computing innovations (AP CSP standard: Impacts of Computing, 7.1). Computing technologies can have both beneficial and harmful effects on society. In this passage, the case study describes how smart streetlights lower energy costs and enable proactive maintenance while creating risks of network-control hacking and privacy concerns from movement patterns. The correct choice pairs exactly those two benefits (lower energy costs through adaptive dimming and proactive maintenance through status monitoring) with those two risks (hacking of network controls that could disrupt lighting and privacy concerns from movement-pattern data) as described in the passage. The choice claiming elimination of all crime and perfect cybersecurity makes absurd absolute claims, and its "risk" of streetlights working only during daytime is logically contradictory to their purpose; likewise, the choice promising guaranteed anonymity and no vendor dependence contradicts the passage's stated privacy concerns and vendor reliance. To help students: Encourage thinking about how seemingly innocuous data like movement patterns can raise privacy concerns, and discuss how infrastructure connectivity creates new attack vectors. Watch for: Students dismissing privacy concerns from aggregate data, or not recognizing infrastructure as a potential cyber target.
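The adaptive-lighting and maintenance-alert behavior in the case study can be sketched as two small control functions. This is a hypothetical illustration; the brightness thresholds, wattage limit, and field names are all invented, not taken from any real smart-city system.

```python
# Sketch of adaptive streetlight logic: dim when the street is empty,
# brighten on motion, and flag lamps drawing abnormal power.
# All thresholds are invented for illustration.

def brightness(motion_detected, dim_level=20, bright_level=100):
    """Percent brightness chosen from the motion sensor reading."""
    return bright_level if motion_detected else dim_level

def status_report(lamp_id, power_draw_watts, max_watts=60):
    """Maintenance alert when a lamp draws abnormal power (e.g. failing bulb)."""
    return {"lamp": lamp_id, "alert": power_draw_watts > max_watts}

# The dual impact in miniature: the motion readings that drive the energy
# savings are also a timestamped record of movement in the neighborhood,
# and the networked status channel is a potential attack surface.
level = brightness(motion_detected=True)
report = status_report("lamp-7", power_draw_watts=75)
```

The design point mirrors the explanation: the same telemetry that enables proactive maintenance and adaptive dimming is what creates the privacy and security risks.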
