Texas High School ELA Question of the Day

Test your knowledge with a hand-picked multiple-choice question.

Debates over artificial intelligence often lurch between moratoria and boosterism, but a system that rates risk and assigns obligations accordingly would better protect the public without freezing innovation. When algorithms screen tenants, score job applicants, or prioritize emergency services, errors can quietly harden inequity. The remedy is not to ban statistical tools, which can outperform human judgment, but to require high-risk systems to earn their way into sensitive domains. Pre-deployment impact assessments should map foreseeable harms; audit trails must make consequential decisions reproducible; and affected people need accessible avenues to contest outcomes. Low-risk uses—like auto-captioning videos—should face lighter-touch rules. This is how we regulate bridges and medicines: the higher the stakes, the tighter the guardrails. Critics worry that audits will expose trade secrets; however, firms can disclose model behavior without revealing code, and independent validators can hold nondisclosure agreements. The core principle is proportionality married to due process: regulate uses, not math, and ensure that those subject to automated judgments are not trapped by them.

Which option best states the author's central claim?
