

153 Replies

@ISIDEWITH answered… 8mo

Yes

@9MPC97V disagreed… 8mo

Artificial intelligence is biased. It will only know and act on what it has been taught. Human defence decisions always require quality control by at least one other person.

@9MP8LP9 disagreed… 8mo

AI doesn’t seem to have many positive or genuinely useful purposes, so I can’t see how it would help with our defence.

@9NNWNTC (Labour) answered… 8mo

I don't like the idea of using AI in defence and/or offensive capabilities, but conflicts and other countries will end up using this technology, and the UK should not fall behind.

@9NVWSC7 commented… 7mo

Completely agreed. I would much prefer a world where we don't use AI but it is naive to think the world won't embrace it, therefore I believe we should too.

@ISIDEWITH asked… 4mo

What worries you more: nations not adopting AI fast enough in defense or developing AI too quickly without enough oversight?

@9TLS5LW answered… 4mo

Not adopting fast enough, as countries posing a real threat are already adopting it.

@9QTTF8G (Liberal Democrat) answered… 6mo

Yes, but I think extra precaution should be taken when dealing with AI and there should always be a certain number of human workers to reinforce both authenticity and security on the front line.

@9QRXK5G from Oregon answered… 6mo

Yes, but only within the sphere of research and preparedness until the potential ethical implications are better understood

@9QRGKV3 (Labour) answered… 7mo

Yes, with careful consideration of ethical, strategic, technological, security, economic, practical, and political factors, and robust oversight.

@9QPYNDF answered… 7mo

This gets dangerous. AI in its current state works from the data it is provided; to use it in this way would increase the number of civilian casualties.

@9QPRKMV from Tennessee answered… 7mo

My thoughts on this depend heavily upon the application of the AI in question. I strongly believe that AI should not pilot weapons or anything else that could have a capacity (even an extremely marginal one) to end someone's life in error.

Geologists use AI detection to measure tectonic shifts for earthquakes; in similar applications I think it's totally okay and encouraged.

My reservations about AI have nothing to do with the programs themselves and everything to do with the application of said programs.

@9Q83PJ8 answered… 7mo

No, AI should only be used under human supervision or control, and should not be put in charge of making decisions regarding human lives as an AI cannot be held accountable in the event of loss of life.

@9Q7H9ZN answered… 7mo

Only in limited roles. AI is not some miracle answer to everything and it should be used with extreme caution and care, if it is used at all.

@9PJ5K6N answered… 7mo

AI can be useful in small doses, but it cannot be relied on to work the way it's supposed to, so yes, but only in a very small amount.

@9PHGG9P answered… 7mo

Only if it can be proved that there is no bias, and that there is transparency as to where the training data comes from.

@9PH79DH answered… 7mo

The government should invest in appropriate IT (including AI) to enable a more efficient and effective armed forces.

@9PH38DT answered… 7mo

Again, difficult now that AI is out of the bag. It also needs global protocols similar to those on nuclear weapons.

@9PFGF3Y answered… 7mo

Yes, but only in the interest of improving defensive countermeasures such as anti-ballistic missiles, or to increase the accuracy of offensive technology to decrease casualties.

@9PF94LV answered… 7mo

Yes, but in a publicly controlled, strictly regulated way; it should never be used for attack purposes and/or population control.

@9P87FF2 (Green) answered… 7mo

It would be useful in many applications but should be heavily monitored/controlled by the correct human teams.

@9P7M63P answered… 7mo

Yes, but only for DEFENCE. No weaponry that can take life. Anti-missile is a great use of AI as it can process information and react far faster than humans can and with the advent of hypersonic missiles and the like... humans simply can't react or cover everything.

@9P6P2SB answered… 7mo

Only to provide humans with the likely success of various options. Human beings should ultimately make the decisions.

@9P6LYBD (Conservative) answered… 7mo

Yes, but there should ALWAYS be an override. Humans should always have to give a green light before military offensive capabilities are deployed.

@9P4K8WV answered… 7mo

Yes, with the exception that this AI is used in conjunction with a human operator in order to maximise the defensive capabilities that the government can put to practice.

@9P49V45 answered… 7mo

There should be limits on what grade of weapons AI can be integrated into, and restrictions on their application.

@9P464FC (Green) answered… 7mo

No, not yet - over-reliance on extremely flawed AI solutions will cause more problems than it will solve.

@9P2DFL8 answered… 7mo

Yes but only AI developed in this country and not from commercial companies or companies based overseas.

@9NZW6R3 answered… 7mo

No. It would need a high level of accountability, as the risk of misuse is very high. It should not be allowed to be used on citizens, even after a terrorist attack.

@9NZT2JJ (Labour) answered… 7mo

I think more due diligence would need to be conducted, and public confidence in public/government contracts restored, for me to be reassured about investing in AI for national defence.

@9NZS698 (Women's Equality) answered… 7mo

More research needs to be done on AI. In this instance, I think we should be guided by experts and not politicians.

@9NZPKG7 answered… 7mo

Yes, it is an unfortunate reality that we will need to in order to counter threat actors that are doing the same.

@9NZ7QGD (Conservative) answered… 7mo

Yes, but with extremely tight regulations and back-up plans and procedures in case we encounter difficulty controlling them.

@9NYZPB5 answered… 7mo

Yes, but caution should be used; it shouldn't be given too much power/agency in case it goes rogue in future.

@9NYMB7M answered… 7mo

Only in limited use cases. AI should not be given the power to kill, it should only assist with intelligence gathering.

@9NYKP8Z answered… 7mo

Yes, where it will improve defensive strategies but does not allow for complete loss of human control.

@9NYJS8T answered… 7mo

Provided there are strict methods to counteract and/or shut down any AI should the worst happen, e.g. Sofia, the Microsoft A.I., possibly becoming sentient.

@9NY7N2V answered… 7mo

No - but the question is too broad, and the technology is not sufficiently mature to be foolproof yet. Skynet...

@9NXDVX3 answered… 7mo

AI is a way forward and can do lots of very clever things, but humans must be able to make decisions using all avenues, not be reliant on just an algorithm. If this had been the case in '50, we would be in an atomic desert now, as soldiers ignored the computers and did what a human thought was right: not to hit the button. It could go the other way too, though, so you need the technology; you just need some checks and balances in place.

@9NX2LDM answered… 7mo

Yes; however, we must also focus on teaching the population how to stay safe, treating the core of the issues rather than just investing in investigating things faster after an incident has taken place. We cannot raise generations to become too dependent on AI provisions, but it should also be developed so that, if a case calls for it, it can offer the best help.

@9NWQMPF answered… 7mo

It depends on how it's being utilised. Generative AI is not a positive thing, but AI can be used in so many different ways that this question feels too broad.

@9NVX54Y answered… 7mo

Whilst useful, it goes against one’s personal rights and is unfair to certain races, so it needs huge improvement before ever being used.

@9NVWSC7 answered… 7mo

Yes. It is naive to think that other countries would not use this technology offensively. AI could provide essential defence detection which humans simply cannot match.

@9NJM623 (Green) answered… 8mo

In terms of making defence more efficient, yes; but in terms of military capabilities, AI lacks empathy, which may worsen conflicts, etc. AI is based on the opinions of those who create it and is therefore not representative, nor does it necessarily reflect the best options.

@9NJ9Z7M answered… 8mo

The government should invest in training AI for defense, but it should be there as a precautionary measure

@9NJ8C8N (Green) answered… 8mo

There is no context on how they would use AI. If it were ethical and moral then yes, but it is a grey area which could result in dangerous applications.

@9N66DNS answered… 8mo

Depends on your definition of "defense applications". Use of fully-automated weaponry should be strictly off the table.

@9N5XJ9V answered… 8mo

Only if the algorithm that is used is developed in the UK, so as to prevent foreign influence, giving us sovereignty over our own weapon systems, and so that it can obey human rights laws and the Geneva Convention 100% of the time.

@9N44NYM (Labour) answered… 8mo

Yes, but only supplementary to and not the main use for defence applications. Caution and regulation needed in the early stages of new technology.

@9N3QYYP (Liberal Democrat) answered… 8mo

AI can be utilised to speed up processes, but there need to be human controls for quality checks and security.

@9N3PWT2 answered… 8mo

Yes, but only to sift through raw data to bring relevant information to the humans who make the final call on what further information to acquire and what actions to take.

@9N2ZY9N answered… 8mo

I believe it can be used for the betterment of the government; however, it shouldn't be relied upon, as it can be biased and also create unforeseen issues in certain situations.

@9N2S6XQ answered… 8mo

Yes, with extreme caution. AI is new technology, and the UK needs to embrace AI as add-on support, not rely on it.

@9N26GV3 answered… 8mo

It depends again on how well developed the AI is. How accurate is the information they will use in defence?

@9MZP4ZS (Labour) answered… 8mo

Yes, but the AI should be able to be overridden and completely controlled by humans if something goes wrong.

@9MXWBBN answered… 8mo

Yes to AI, with caution, but listen to the experts. I can't see how Asimov's three laws of robotics would apply if you're using AI for offensive purposes, but defence often means offence. A conundrum!

@9MXRFT2 answered… 8mo

Only if the algorithm that is used is developed in the UK, so as to prevent foreign influence, giving us sovereignty over our own weapon systems, and so that it can obey human rights laws and the Geneva Convention 100% of the time. If not, the victims of a war crime committed by AI could never find justice. You can't convict an algorithm.

@9MX5JLC answered… 8mo

Yes, but only if the AI is unbiased, trustworthy and reliable with security measures in place to ensure there is no opportunity for it to be hacked/taken advantage of. AI should only be used to accompany qualified, human knowledge.

@9MWR5QR answered… 8mo

Yes, but only in areas where there is no threat to life (i.e. streamlining procurement and capability enhancement).

@9MQS8BM answered… 8mo

Because in a world where people use AI for efficiency it will need to be used, but it should be a tool only, not a surveillance supercomputer.

@9MPBC6Q answered… 8mo

Yes, in a limited capacity such as the protection of UK cyber infrastructure, but an AI should not have the power to potentially decide the fates of people's lives.

@9MP9JDW answered… 8mo

It must be safe before any control is given to AI. Until then, it should assist only.

@9MP5T9J answered… 8mo

Not against it as a concept, but not a high priority compared to other areas of potential investment.

@9MP4K23 (Labour) answered… 8mo

Yes, but with huge amounts of transparency on where information is being drawn from, what purpose the AI serves, and where money is being invested.

@9Q5D556 answered… 7mo

Yes, but defensive and assisting only. We should have a global non-proliferation treaty for autonomous offensive weapons.

@9Q4SWZL answered… 7mo

While I disagree with AI in its current form, we need to be first to the table to prevent our own destruction

@9Q3CF47 answered… 7mo

Generally I'd say no, mainly due to the fear of losing human control, but I don't have enough overall knowledge or understanding of this subject to form an opinion on it.

@9Q2BY9S answered… 7mo

I think it’s absolutely critical that we err on the side of caution in regard to AI, so we need to strike a balance between using it to human advantage and allowing it to get out of control.

@9PXP3CS answered… 7mo

Cybersecurity perhaps, but the AI should not have access to physical weapons or oversee troop deployments.

@9PXL2XK answered… 7mo

Yes, but in a purely assistive manner, to process information and give recommendations; all final decisions should be made by a human, and AI should have no capability to control weapons.

@9PXCL7Z (Workers of Britain) answered… 7mo

No: one, even with good security measures it can still be hacked and corrupted; and two, we aren't recreating Skynet.

@9PP8NJV answered… 7mo

Yes, to keep pace with other superpowers and to limit loss of troop life in reconnaissance missions, but large-scale offensives or targeted assaults must be treated with caution.

@9PMSJMD (Liberal Democrat) answered… 7mo

No, AI technology is nowhere near advanced enough to justify its use in situations that intentionally put peoples' lives and livelihoods at risk.

@9PLTNT4 answered… 7mo

It depends what other countries are doing; we can't be left behind, but at the same time we don't want a world run by AI. There has to be an 'off' switch.

@9NXJ83Y answered… 7mo

On a case-by-case basis, and only if there is strong evidence that this is the only solution. We should invest more in people, and in the infrastructure to support more people in employment, instead of relying too heavily on AI, which is taking over human jobs.

@9NWN8TN answered… 7mo

In terms of military application, no, but in terms of defending our infrastructure from cyber attack, yes.

@9NV6SWJ answered… 7mo

Only for certain uses, such as detecting or providing information humans would not otherwise be aware of. But do not remove human positions and allow AI to control them; this leaves room for errors, misuse, corruption and hacking issues, whereas humans in those positions have more control. AI should work alongside humans.

@9NT74NQ (Green) answered… 7mo

Yes, to help development, but not to take over ultimate decision-making. More considered use of its development is required, with strict regulatory boards in place to limit the roll-out of AI tech that has not had full scenario testing of its future impact.

@9N9J7C5 answered… 8mo

As long as the artificial intelligence is being used in a safe and professional manner, and used to pick out signs of criminal activity.

@9N95FJ2 answered… 8mo

Providing it doesn't threaten the public's privacy or personal security, and isn't accessible to the public but only to high-ranking security personnel.

@9N94PWB answered… 8mo

Yes. I don’t like the thought of it, but realistically, if we don’t, we will be at a disadvantage against countries that will.

@9N8TJGL answered… 8mo

As long as we understand how to use AI safely, in a way that preserves human control, I think it could be good for protecting people. But it should never be used to invade people's privacy and rights.

@9N8N38J answered… 8mo

I do not agree with investing more to develop AI further; but if it's already out there and has been beneficial for other countries, then yes.

@9N82C22 answered… 8mo

The government should continue to invest in using data in defence projects, without buying into the hype of naming any data-based tool "AI" to get more funding.

@9N786F6 answered… 8mo

Not if it’s going to be used as Israel has demonstrated it can be used: with AI justifying bombing targets at the cost of civilian lives.

@9N6TPW7 answered… 8mo

I do think AI could massively support this; however, there is still a long way to go before we can rely on it solely, so it’s more complicated.

@9MY77LF answered… 8mo

Current AI applications for military purposes have been proven to be ineffective and dangerous. They are not yet fit for purpose.

@9MY48WG answered… 8mo

Yes, all governments of all nations should improve defence technology! Otherwise the invaders will invade.

@9MY3G7X answered… 8mo

No, in general. But... The use of machines to do human killing is obscene. If humans must continue to kill then it should be without the 'distance' and moral barrier of 'computer says'. AI should be subject to stringent international control but humans lie and cheat so that is not going to happen. Morally we need a block on AI warfare but the argument will be that it's to save lives (and make it easier to steal land and properties). At the least there should be an agreement to use AI only for passive defence from incoming threats... which will ensure that the best defended will become the most arrogant and belligerent. There are no wins without a metanoia in human nature.

@9MXZ8JD (Labour) answered… 8mo

Yes, provided the algorithm that is used is developed in the UK, to prevent foreign influence, and obeys human rights laws and the Geneva Convention.

@9MRDXWB answered… 8mo

Yes, but only to keep up with other nations; AI should never make the decision to kill or harm a life.

@9MRBW3Y answered… 8mo

Yes, but up to a certain level; we can't afford to be left behind if other countries are taking advantage of AI.

@9MR4KV5 answered… 8mo

Where the effects are potentially lethal, these should be reviewed by ministers who can be held accountable for the decision.
