This seminar examines artificial intelligence from a philosophical perspective, focusing on questions concerning intelligence, understanding, consciousness, risk, and moral responsibility. Beginning with contemporary philosophical reflections on Large Language Models, the seminar introduces students to ongoing debates about whether current AI systems merely simulate intelligence or instantiate genuinely new forms of cognitive agency. Foundational concepts are developed through classic and contemporary texts on the nature of intelligence, including discussions of the Turing Test, general intelligence, and the limits of computational models. The course then explores more advanced questions concerning the possibility of artificial consciousness and the ethical implications that would follow if such systems were capable of subjective experience. In its second half, the seminar turns to normative issues, examining existential and systemic risks posed by AI, the responsibility gap associated with autonomous systems, and the challenges of attributing responsibility in human–AI interaction. Finally, the course situates AI within the theory of the Extended Mind, analyzing how AI technologies may function as cognitive extensions of human agents and how this reframes traditional accounts of agency and moral responsibility.
- Instructor: Mehmet Özketen