Never has there been a more cutting-edge topic in Stoa Team Policy debate. In almost every other resolution we have debated, whether focused on military commitments, foreign aid, food safety, or trade, we have drawn from a century of established policies. Debaters could research what worked before, which policies needed to be updated or repealed, and how other countries approached the same issues. However, this is not true of artificial intelligence.
To date, the federal government has no cohesive policy for AI technology. Some agencies have issued general guidance, but such guidance does not constitute binding law. As a result, Affirmative teams have a blank slate to work with, which makes this topic both exciting and intimidating! There are plenty of important questions to ask about AI:
Should there be ethical standards for how AI is developed and used?
What kind of decisions should AI be allowed to make? Even end-of-life decisions?
How will product liability laws apply to AI technology in consumer products, like autonomous vehicles?
Should the government subsidize and encourage more AI research?
Affirmative teams will find fascinating cases in all these and other areas! Negative teams also have excellent questions to ask on this topic:
Is it too early to create regulations for AI technology?
Would it be better first to see how states approach AI before enacting national policy?
Should we encourage increased AI technology funding when the results are so unpredictable?
AI technology is an incredible tool that can produce wonderful results, but it also has the potential for problematic effects. What, then, should the government do about it? We look forward to hearing your ideas!
Your Stoa Debate Committee