Q1. Scenario: A facial recognition system used by police has high error rates for darker skin tones. What is the source of bias, and how can it be mitigated?
Bias arises from unrepresentative training data (mostly light-skinned faces), so the model's error rate is much higher for underrepresented groups. Mitigation: collect demographically diverse datasets, evaluate per-group error rates with fairness metrics, re-sample or re-weight underrepresented groups, or apply adversarial debiasing techniques. Independent audits before and after deployment add accountability.
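The "fairness metrics" step above can be made concrete: compute the error rate separately for each demographic group and compare. A minimal sketch on invented toy data (the groups, labels, and predictions are placeholders, not a real benchmark):

```python
# Measure per-group error rates so disparities are visible at a glance.
def per_group_error_rates(groups, y_true, y_pred):
    """Return {group: error_rate} for a list of labeled examples."""
    totals, errors = {}, {}
    for g, t, p in zip(groups, y_true, y_pred):
        totals[g] = totals.get(g, 0) + 1
        errors[g] = errors.get(g, 0) + (t != p)
    return {g: errors[g] / totals[g] for g in totals}

# Toy evaluation set: group A is well represented, group B is not.
groups = ["A", "A", "A", "A", "B", "B"]
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1]   # model gets every group-B example wrong

rates = per_group_error_rates(groups, y_true, y_pred)
print(rates)  # {'A': 0.0, 'B': 1.0} -> large gap signals biased performance
```

A large gap between groups (here 0.0 vs. 1.0) is exactly the kind of disparity that re-sampling or collecting more group-B data aims to close.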
Q2. Scenario: An AI hiring tool screens resumes. It learns from past hires (mostly male). How does it become biased, and what are the legal/ethical concerns?
The model learns to discriminate against female candidates because the historical hiring data encodes past bias; it may pick up proxies for gender even when gender itself is removed. Legal concerns: potential violation of anti-discrimination laws (disparate-impact liability). Ethical: perpetuating and amplifying inequality. Solutions: remove gender and gender-correlated proxy features, apply fairness constraints during training, and audit selection rates by group.
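One standard audit for hiring outcomes is the "four-fifths rule" used by US regulators as a rough screen for disparate impact: each group's selection rate should be at least 80% of the highest group's rate. A minimal sketch on synthetic decisions (the numbers are invented for illustration):

```python
# Four-fifths (80%) rule check on hiring decisions, grouped by gender.
def selection_rates(outcomes):
    """outcomes: {group: list of 0/1 hire decisions} -> {group: rate}."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def passes_four_fifths(outcomes):
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    # Every group's selection rate must reach 80% of the highest rate.
    return all(r >= 0.8 * highest for r in rates.values())

outcomes = {
    "male":   [1, 1, 1, 0, 1, 1, 0, 1],   # 6/8 = 75% selected
    "female": [1, 0, 0, 0, 1, 0, 0, 0],   # 2/8 = 25% selected
}
print(passes_four_fifths(outcomes))  # False: 0.25 < 0.8 * 0.75
```

Failing this screen does not prove illegal discrimination on its own, but it is the kind of red flag that should trigger a deeper audit of the tool.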
Q3. Scenario: Explain the concept of "explainable AI" (XAI). Why is it important in a loan approval system?
XAI provides human-understandable reasons for a model's decisions. In loan approval, regulations require transparency: customers have the right to know why they were denied (e.g., adverse action notices under US credit law), and lenders must show decisions are not discriminatory. Without explainability, trust, accountability, and the ability to contest errors all suffer.
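A simple way to see what "human-understandable reasons" means: with a linear scoring model, each feature's contribution to the score doubles as the explanation. A minimal sketch with hand-set, invented weights and features (not a real underwriting model):

```python
# Explainable loan decision: per-feature contributions are the "reasons".
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "late_payments": -0.6}
BIAS = 0.2
THRESHOLD = 0.0

def decide_and_explain(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    approved = score >= THRESHOLD
    # Features sorted most-negative first: the adverse factors a customer
    # could be told about in a denial notice.
    reasons = sorted(contributions, key=contributions.get)
    return approved, score, reasons

applicant = {"income": 0.4, "debt_ratio": 0.9, "late_payments": 0.5}
approved, score, reasons = decide_and_explain(applicant)
print(approved)    # False
print(reasons[0])  # 'debt_ratio' -- the top adverse factor
```

Deep models need post-hoc explanation methods (e.g., feature-attribution techniques) to produce a comparable reason list, which is why inherently interpretable models are often preferred in regulated lending.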
Q4. Scenario: A self-driving car must choose between harming the occupant or a pedestrian. How do you program ethics? What is the trolley problem in AI?
The trolley problem asks whose lives to prioritize when harm is unavoidable. There is no consensus on how to program this; approaches include utilitarian (minimize total expected harm), rule-based (e.g., never actively sacrifice the occupant), or learning from aggregate human judgments, as in MIT's Moral Machine experiment. It remains a moral dilemma with no perfect answer; in practice, engineering effort focuses on making such forced choices vanishingly rare.
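The contrast between the two policies can be shown in a deliberately oversimplified sketch. The actions and harm estimates below are invented; the point is only that the policies can disagree on the same inputs:

```python
# Two decision policies from the answer above, applied to the same options.
def utilitarian(options):
    """options: {action: expected_harm}; pick the least total harm."""
    return min(options, key=options.get)

def occupant_first(options, protects_occupant):
    """Among actions that protect the occupant, pick the least harmful."""
    allowed = {a: h for a, h in options.items() if protects_occupant[a]}
    return min(allowed, key=allowed.get)

options = {"swerve": 1.0, "brake_straight": 1.5}       # expected harm
protects_occupant = {"swerve": False, "brake_straight": True}

print(utilitarian(options))                        # 'swerve'
print(occupant_first(options, protects_occupant))  # 'brake_straight'
```

The two policies choose different actions here, which is precisely the dilemma: the disagreement is ethical, not technical, so no amount of better code resolves it.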
Q5. Scenario: A generative AI model creates realistic fake news images. What are the risks, and what technical and policy solutions exist?
Risks: misinformation at scale, impersonation and fraud, erosion of trust in authentic media. Technical solutions: digital watermarking of generated content, detection models, and provenance tracking (e.g., C2PA-style content credentials). Policy solutions: regulations requiring labeling of synthetic content and criminalizing malicious deepfakes.
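The watermarking idea can be illustrated with a toy least-significant-bit scheme: embed a known bit pattern in pixel values at generation time, then check for it later. Real watermarks are robust to compression and editing; this sketch only shows the embed-then-detect workflow, and the tag and pixel values are invented:

```python
# Toy LSB watermark: embed a fixed 8-bit tag, then detect it.
MARK = [1, 0, 1, 1, 0, 1, 0, 0]  # arbitrary tag for illustration

def embed(pixels, mark=MARK):
    """Overwrite the LSB of the first len(mark) pixel values with the tag."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels, mark=MARK):
    """True if the tag appears in the expected positions."""
    return [p & 1 for p in pixels[: len(mark)]] == mark

image = [200, 13, 77, 54, 91, 120, 33, 64, 250, 7]
marked = embed(image)
print(detect(marked))  # True
print(detect(image))   # False for this unmarked content
```

The weakness is also visible: flipping those low bits (re-encoding, cropping) destroys the tag, which is why production schemes spread the signal redundantly and why provenance metadata complements rather than replaces detection.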
