
166 Replies

 @ISIDEWITHDiscuss this answer...1yr1Y

Yes

 @9MPC97Vdisagreed…1yr1Y

Artificial Intelligence is biased. It will only know and act on what it has been taught. Human defence decisions always require quality control by at least one other person.

 @9MP8LP9disagreed…1yr1Y

AI doesn’t seem to have many positive or genuinely useful purposes, so I can’t see how it would help with our defense

 @ISIDEWITHDiscuss this answer...4mos4MO

Yes, but only to assist and not replace human decision making

 @ISIDEWITHDiscuss this answer...4mos4MO

Yes, but with very strict oversight and regulations

 @B4RWX4QGreendisagreed…4mos4MO

Can you trust any government, or any company it outsources to, to provide that oversight when there's profit, corporate relations, and money-saving on the table?

 @ISIDEWITHDiscuss this answer...4mos4MO

No, we need more testing in controlled environments first

 @9NNWNTCLabouranswered…1yr1Y

I don't like the idea of using AI in defence and/or offensive capabilities, but conflicts and other countries will end up using this technology, and the UK should not fall behind

 @9NVWSC7commented…1yr1Y

Completely agreed. I would much prefer a world where we don't use AI but it is naive to think the world won't embrace it, therefore I believe we should too.

 @B5N86Z9Reform UKanswered…3mos3MO

Yes, but only to assist and not replace human decision making, and then only with very strict oversight and regulations. At no point should an AI be capable of independent, unsupervised, and uninterruptible action of any kind, least of all communications or release of weapons activity.

 @B5988HJanswered…4mos4MO

Yes, but with very strict ethical oversight, international collaboration on safety standards, and a primary focus on defensive and non-lethal applications that reduce risk to both military personnel and civilians.

 @B3K4TXJanswered…6mos6MO

Only in response to attacks on us by others, not for attacking others. I believe we should only be defending ourselves against attacks, not attacking others.

 @B3JQK74Conservativeanswered…6mos6MO

Yes, but they should keep investing in manual defence, such as the army, given the current circumstances of the world today.

 @B2WQYCFanswered…6mos6MO

As long as the AI will not directly affect national security and will instead just improve the efficiency of part of the process.

 @B2QWQBNanswered…7mos7MO

The government should invest in AI for the military; however, they should still be cautious of the risks and dangers. It could cause a loss of control, but it also could be useful.

 @B2Q67G4answered…7mos7MO

AI is good and the country needs AI, but actually, no, I don't want AI. Use your own independent mind to think.

 @ISIDEWITHasked…11mos11MO

What worries you more: nations not adopting AI fast enough in defense or developing AI too quickly without enough oversight?

 @9TLS5LWanswered…11mos11MO

Not adopting fast enough, as countries that pose a real threat are already adopting it

 @9QTTF8GLiberal Democratanswered…1yr1Y

Yes, but I think extra precaution should be taken when dealing with AI and there should always be a certain number of human workers to reinforce both authenticity and security on the front line.

 @9QRXK5Gfrom Oregon  answered…1yr1Y

Yes, but only within the sphere of research and preparedness until the potential ethical implications are better understood

 @9QRGKV3Labouranswered…1yr1Y

Yes, with careful consideration of ethical, strategic, technological, security, economic, practical, and political factors, and robust oversight.

 @9QPYNDFanswered…1yr1Y

This gets dangerous. AI in its current state works from the data it is provided; to use it in this context would increase the number of civilian casualties.

 @9QPRKMVfrom Tennessee  answered…1yr1Y

My thoughts on this depend heavily upon the application of the AI usage in question. I strongly believe that AI should not pilot weapon(s) or anything else that could have (even if extremely marginal) a capacity to end someone's life in error.

Geologists use AI detection to measure tectonic shifts for earthquakes; for similar applications, I think it's totally okay and should be encouraged.

My reservations on AI have nothing to do with the program(s) themselves and everything to do with the application of said programs.

 @9Q83PJ8answered…1yr1Y

No, AI should only be used under human supervision or control, and should not be put in charge of making decisions regarding human lives as an AI cannot be held accountable in the event of loss of life.

 @9Q7H9ZNanswered…1yr1Y

Only in limited roles. AI is not some miracle answer to everything and it should be used with extreme caution and care, if it is used at all.

 @9PJ5K6Nanswered…1yr1Y

AI can be useful in small doses, but it may not work the way it's supposed to, so yes, but only in a very small amount.

 @9PHGG9Panswered…1yr1Y

Only if it can be proved that there is no bias and that there is transparency as to where the training data comes from

 @9PH79DHanswered…1yr1Y

The government should invest in appropriate IT (including AI) to enable a more efficient and effective armed forces.

 @9PH38DTanswered…1yr1Y

Again, difficult now that AI is out of the bag. Also needs global protocols similar to those on nuclear weapons

 @9PFGF3Yanswered…1yr1Y

Yes, but only in the interest of improving defensive countermeasures, such as anti-ballistic missiles, or to increase the accuracy of offensive technology to decrease casualties

 @9PF94LVanswered…1yr1Y

Yes, but in a publicly controlled way, strictly regulated. It should never be used for attack purposes and/or population control.

 @9P87FF2Greenanswered…1yr1Y

It would be useful in many applications, but should be heavily monitored/controlled by the correct human teams

 @9P7M63Panswered…1yr1Y

Yes, but only for DEFENCE. No weaponry that can take life. Anti-missile is a great use of AI as it can process information and react far faster than humans can and with the advent of hypersonic missiles and the like... humans simply can't react or cover everything.

 @9P6P2SBanswered…1yr1Y

Only to provide humans with the likely success of various options. Human beings should ultimately make the decisions.

 @9P6LYBDConservativeanswered…1yr1Y

Yes, but there should ALWAYS be an override. Humans should always have to give a green light before military offensive capabilities are deployed.

 @9P4K8WVanswered…1yr1Y

Yes, with the exception that this AI is used in conjunction with a human operator in order to maximise the defensive capabilities that the government can put to practice.

 @9P49V45answered…1yr1Y

There should be limits on what grade of weapons AI can be integrated into, and restrictions on their application.

 @9P464FCGreenanswered…1yr1Y

No, not yet - over-reliance on extremely flawed AI solutions will cause more problems than it will solve.

 @9P2DFL8answered…1yr1Y

Yes but only AI developed in this country and not from commercial companies or companies based overseas.

 @9NZW6R3answered…1yr1Y

No. It would need a high level of accountability, as the risk of misuse is very high. It should not be allowed to be used on citizens, even after a terrorist attack.

 @9NZT2JJLabouranswered…1yr1Y

I think more due diligence would need conducting, and public confidence in public/government contracts would need restoring, for me to be reassured about investing in AI for national defence

 @9NZS698Women's Equalityanswered…1yr1Y

More research needs to be done on AI. In this instance I think we should be guided by experts, not politicians

 @9NZPKG7answered…1yr1Y

Yes, it is an unfortunate reality that we will need to in order to counter threat actors that are doing the same.

 @9NZ7QGDConservativeanswered…1yr1Y

Yes but with extremely tight regulations and back up plans and procedures in case we encounter a difficulty controlling them.

 @9NYZPB5answered…1yr1Y

Yes but caution should be used, it shouldn't be given too much power/agency in case it goes rogue in future

 @9NYMB7Manswered…1yr1Y

Only in limited use cases. AI should not be given the power to kill, it should only assist with intelligence gathering.

 @9NYKP8Zanswered…1yr1Y

Yes, where it will improve defensive strategies but does not allow for complete loss of human control.

 @9NYJS8Tanswered…1yr1Y

Provided there are strict methods to counteract and/or shut down any AI, should the worst happen, e.g. Microsoft's Sofia A.I. possibly becoming sentient

 @9NY7N2Vanswered…1yr1Y

No - but the question is too broad, and the technology is not sufficiently mature to be foolproof yet. Skynet...

 @9NXDVX3answered…1yr1Y

AI is a way forward and can do lots of very clever things, but humans must be able to make decisions using all avenues, not be reliant on just an algorithm. If this had been the case in 50, we would be in an atomic desert now, as soldiers ignored the computers and did what a human thought was right: not to hit the button. It could, though, go the other way, so you need the technology; you just need some checks and balances in place.

 @9NX2LDManswered…1yr1Y

Yes; however, we must also focus on teaching the population how to stay safe, treating things at the core of the issues and not just investing in how to investigate things faster after an incident has taken place. We cannot raise generations to become too dependent on AI provisions, but it should also be developed if a case calls for it; it can then offer the best help.

 @9NWQMPFanswered…1yr1Y

It depends on how it's being utilised. Generative AI is not a positive thing but AI can be used in so many different ways that this question feels too broad

 @9NVX54Yanswered…1yr1Y

Whilst useful, it goes against one's personal rights and is unfair to certain races, so it needs huge improvement before ever being used

 @9NVWSC7answered…1yr1Y

Yes. It is naive to think that the other countries would not use this technology offensively. AI could provide essential defense detection which humans simply cannot respond to

 @9NJM623Greenanswered…1yr1Y

In terms of making defence more efficient, yes, but in terms of military capabilities, AI lacks empathy, which may worsen conflicts. AI is based on the opinions of those who create it and is therefore not representative, nor does it necessarily reflect the best options.

 @9NJ9Z7Manswered…1yr1Y

The government should invest in training AI for defense, but it should be there as a precautionary measure

 @9NJ8C8NGreenanswered…1yr1Y

There's no context on how they would use AI. If it was ethical and moral then yes, but it is a grey area which could result in dangerous applications.

 @9N66DNSanswered…1yr1Y

Depends on your definition of "defense applications". Use of fully-automated weaponry should be strictly off the table.

 @9N5XJ9Vanswered…1yr1Y

Only if the algorithm used is developed in the UK, so as to prevent foreign influence, giving us sovereignty over our own weapon systems, and it can obey human rights laws and the Geneva Convention 100% of the time.

 @9N44NYMLabouranswered…1yr1Y

Yes, but only supplementary to and not the main use for defence applications. Caution and regulation needed in the early stages of new technology.

 @9N3QYYPLiberal Democratanswered…1yr1Y

AI can be utilized to speed up processes, but there need to be human controls for quality checks and security.

 @9N3PWT2answered…1yr1Y

Yes, but only to sift through raw data to bring relevant information to the humans who make the final call on what further information to acquire and what actions to take

 @9N2ZY9Nanswered…1yr1Y

I believe it can be used for the betterment of the government; however, it shouldn't be relied upon, as it can be biased and also create unforeseen issues in certain situations.

 @9N2S6XQanswered…1yr1Y

Yes, with extreme caution. AI is new technology, and the UK needs to embrace AI as add-on support, not rely on it.

 @9N26GV3answered…1yr1Y

It depends, again, on how well developed the AI is. How accurate is the information they will use in defence?

 @9MZP4ZSLabouranswered…1yr1Y

Yes, but the AI should be able to be overridden and completely controlled by humans if something goes wrong

 @9MXWBBNanswered…1yr1Y

Yes to AI with caution but listen to the experts. I can't see how Asimov's 3 rules of robotics would apply if you're using AI for offensive purposes, but defence often means offence. A conundrum!

 @9MXRFT2answered…1yr1Y

Only if the algorithm used is developed in the UK, so as to prevent foreign influence, giving us sovereignty over our own weapon systems, and it can obey human rights laws and the Geneva Convention 100% of the time. If not, the victims of a war crime committed by AI could never find justice. You can't convict an algorithm.

 @9MX5JLCanswered…1yr1Y

Yes, but only if the AI is unbiased, trustworthy and reliable with security measures in place to ensure there is no opportunity for it to be hacked/taken advantage of. AI should only be used to accompany qualified, human knowledge.

 @9MWR5QRanswered…1yr1Y

Yes, but only in areas where no threat to life can arise (i.e. streamlining procurement and capability enhancement).

 @9MQS8BManswered…1yr1Y

Because in a world where people use AI for efficiency it will need to be used, but it should be a tool only, not a surveillance supercomputer

 @9MPBC6Qanswered…1yr1Y

Yes, in a limited capacity, such as the protection of UK cyber infrastructure, but an AI should not have the power to potentially decide the fates of people's lives.

 @9MP9JDWanswered…1yr1Y

It must be safe before any control is given to AI. Until then it could assist only

 @9MP5T9Janswered…1yr1Y

Not against it as a concept but not a high priority compared to other areas of potential investment

 @9MP4K23Labouranswered…1yr1Y

Yes but with huge amounts of transparency on where information is being drawn from, what purpose the AI serves and where money is being invested

 @9Q5D556answered…1yr1Y

Yes, but defensive and assisting only. We should have a global non proliferation treaty for autonomous offensive weapons.

 @9Q4SWZLanswered…1yr1Y

While I disagree with AI in its current form, we need to be first to the table to prevent our own destruction

 @9Q3CF47answered…1yr1Y

Generally I'd say no, mainly due to the fear of losing human control, but I don't have enough overall knowledge or understanding of this subject to form an opinion on it

 @9Q2BY9Sanswered…1yr1Y

I think it’s absolutely critical that we err on the side of caution in regard to AI, so we need to strike a balance between using it to human advantage against allowing it to get out of control

 @9PXP3CSanswered…1yr1Y

Cybersecurity perhaps, but the AI should not have access to physical weapons or oversee troop deployments.

 @9PXL2XK answered…1yr1Y

Yes, but in a purely assistive manner, to process information and give recommendations; all final decisions should be made by a human, and AI should have no capability to control weapons.

 @9PXCL7ZWorkers of Britainanswered…1yr1Y

No: one, even with good security measures it can still be hacked and corrupted; and two, we ain't recreating Skynet

 @9PP8NJVanswered…1yr1Y

Yes, to keep pace with other superpowers and to limit loss of troop life in reconnaissance missions, but large-scale offensives or targeted assaults must be approached with caution.

 @9PMSJMDLiberal Democratanswered…1yr1Y

No, AI technology is nowhere near advanced enough to justify its use in situations that intentionally put peoples' lives and livelihoods at risk.

 @9PLTNT4answered…1yr1Y

It depends what other countries are doing - we can't be left behind but at the same time we don't want a world run by AI . There has to be an 'off' switch.

 @9NXJ83Yanswered…1yr1Y

On a case by case basis and only if there is strong evidence that this is the only solution. We should invest more in people and the infrastructure to support more people to be employed instead of relying too heavily on AI which is taking over human jobs

 @9NWN8TNanswered…1yr1Y

In terms of military application, no, but in terms of defending our infrastructure from cyber attack, yes.

 @9NV6SWJanswered…1yr1Y

Only for certain uses, such as detecting things or providing information humans would not otherwise be aware of. But do not remove humans from positions and allow AI to control them; this leaves room for errors, misuse, corruption, and hacking issues, whereas humans in those positions have more control. It should work alongside humans.

 @9NT74NQGreen answered…1yr1Y

Yes, to help development, but not to take over ultimate decision-making. More considered use of its development is required, and strict regulatory boards should be in place to limit the roll-out of AI tech that has not had full scenario testing of its future impact

 @9N9J7C5 answered…1yr1Y

As long as the artificial intelligence is being used in a safe and professional manner, and used to pick out signs of criminal activity.

 @9N95FJ2answered…1yr1Y

Providing it doesn't threaten the public's privacy and personal security, and isn't accessible to the public but only to high-ranking security personnel

 @9N94PWBanswered…1yr1Y

Yes. I don’t like the thought of it. But realistically, if we don’t - we will be at a disadvantage against countries who will

 @9N8TJGLanswered…1yr1Y

As long as we understand how to use AI safely in a way that preserves human control, then I think it could be good for protecting people. But it should never be used to invade people's privacy and rights.

 @9N8N38Janswered…1yr1Y

I do not agree with investing more to further develop AI, but if it's already out there and has been beneficial to other countries, then yes.

 @9N82C22answered…1yr1Y

The government should continue to invest in using data in defense projects without buying into the hype of naming any data-based tool "AI" to get more funding
