Yes: 60%
No: 40%

Historical Results

See how support for each position on “Artificial Intelligence (AI) for Defense” has changed over time for 81.5k UK voters.


Historical Importance

See how importance of “Artificial Intelligence (AI) for Defense” has changed over time for 81.5k UK voters.


Other Popular Answers

Unique answers from UK users whose views extended beyond the provided choices.

@9NNWNTC answered 1mo ago

I don't like the idea of using AI in defence and/or offensive capabilities, but conflicts and other countries will end up using this technology, and the UK should not fall behind.

@9QRXK5G from Oregon answered 6 days ago

Yes, but only within the sphere of research and preparedness until the potential ethical implications are better understood

@9QRGKV3 answered 7 days ago

Yes, with careful consideration of ethical, strategic, technological, security, economic, practical, and political factors, and robust oversight.

@9QPYNDF answered 1wk ago

This gets dangerous. AI in its current state works only from the data it is provided; using it in this context would increase the number of civilian casualties.

@9QPRKMV from Tennessee answered 1wk ago

My thoughts on this depend heavily upon the application of the AI usage in question. I strongly believe that AI should not pilot weapon(s) or anything else that could have (even if extremely marginal) a capacity to end someone's life in error.

Geologists use AI detection to measure tectonic shifts for earthquakes; similar applications, I think, are totally okay and should be encouraged.

My reservations about AI have nothing to do with the program(s) themselves and everything to do with the application of said programs.

@9Q83PJ8 answered 2wks ago

No, AI should only be used under human supervision or control, and should not be put in charge of making decisions regarding human lives, as an AI cannot be held accountable in the event of loss of life.