Q1. Scenario: You are asked to explain to your team why AI has experienced multiple "winters" and summers. What are the key events that led to the AI boom in the 2010s?
AI went through winters (mid-1970s and late 1980s) caused by overpromised results, insufficient computing power, and the limitations of symbolic AI. The 2010s boom was driven by deep learning, big data, and GPU computing, with breakthroughs such as AlexNet's 2012 ImageNet win leading to practical applications.
Q2. Scenario: A startup claims they have achieved human-level AI. Based on history, what questions would you ask to verify, and why is the Turing test insufficient?
The Turing test only checks conversational mimicry, not understanding. You would ask for evidence of generalization across unrelated tasks, reasoning about novel situations, and learning from minimal examples. History shows that shallow mimicry can fool observers (e.g., ELIZA in 1966), so surface-level conversation proves little.
Q3. Scenario: Compare the Dartmouth conference (1956) to the ImageNet challenge (2012) in terms of goals, techniques, and impact on AI research.
Dartmouth aimed to create thinking machines using symbolic logic and search; ImageNet popularized deep learning (CNNs) for perception. Dartmouth shaped AI theory, while ImageNet drove practical, data-driven AI across computer vision and, later, NLP.
Q4. Scenario: Why did expert systems (rule-based AI) fall out of favor? What lessons from that era apply to modern AI?
Expert systems were brittle, required manual rule extraction from human experts, and scaled poorly to large or changing domains. Lessons: avoid overreliance on handcrafted knowledge, embrace learning from data, and beware of maintenance costs.
Q5. Scenario: How did the invention of backpropagation (Rumelhart, 1986) and later deep learning (Hinton, 2006) change the trajectory of AI?
Backpropagation enabled training multi-layer neural networks by propagating error gradients backward through the layers, overcoming the limitations of single-layer perceptrons. Deep learning (2006) introduced greedy layer-wise pretraining, making very deep networks feasible to train. Together these led to modern AI breakthroughs in image, speech, and language.
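The mechanics of backpropagation can be shown concretely. Below is a minimal sketch (my own illustration, not from the source) of a two-layer network trained with backpropagation on XOR, the classic task a single-layer perceptron cannot solve. The network size, learning rate, and loss choice are illustrative assumptions.

```python
import numpy as np

# Tiny 2-4-1 network learning XOR (illustrative hyperparameters).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))  # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: apply the chain rule layer by layer
    d_out = (out - y) * out * (1 - out)      # gradient at output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # gradient at hidden layer

    # Gradient-descent updates (learning rate 0.5)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.ravel())  # predictions should move toward [0, 1, 1, 0]
```

The key point for the question: without backpropagation there is no principled way to assign error credit to the hidden layer, which is exactly what stalled neural-network research after the perceptron critiques of the late 1960s.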
