
64 Replies

@ISIDEWITH Discuss this answer… 9mos

No

@9LRZN5Z disagreed… 8mos

It removes an important element of human discretion and empathy when deciding whether or not to attack and could result in mass casualties if not used properly.

@ISIDEWITH Discuss this answer… 9mos

Yes

@9LNDC7S disagreed… 8mos

AI should not be allowed to become as powerful as it potentially could be, and how it advances over time should be strictly controlled.

@9LND3XN disagreed… 8mos

Have you not seen Terminator? AI has no heart, no soul, no hesitation. A human has the luxury of feeling compassion.

@9LRZN5Z agreed… 8mos

Other countries will develop this technology, so it would be foolish not to keep up with them; AI is the future whether we like it or not.

@9LPCHHW disagreed… 8mos

AI does not have the ability to really think like a human does. An incorrect decision made by an AI model in a war could have catastrophic consequences.

@9MV4FB5 answered… 7mos

Yes, but only so long as it is under ultimate human control, the AI is not autonomous in deciding when to intervene, it is used solely to ensure efficiency and accuracy, and there is an absolute failsafe that can stop the AI going rogue.

@9QB522R answered… 6mos

Yes, as long as it does not include weapons of mass destruction and is pre-authorised by multiple layers of human agreement.

@9ZCBQKS answered… 2mos

Only if it has been thoroughly tested; then it is up to the military themselves whether they want to use AI.

@9XFDYGT answered… 2mos

AI can be liable to tactical errors and accidental attacks on civilians, and can also be hacked by an enemy force and rendered redundant.

@9Q5SXJ6 answered… 6mos

AI should be utilised only to allow faster decision-making, but the ultimate choice should always require human discretion.

@9R8FL9W answered… 5mos

Until it can be effectively controlled, AI should not control military weapons for the foreseeable future.

@9QJWJ69 answered… 6mos

Absolutely not; AI does not have empathy, which is needed for the often ad hoc decisions that must be made when using military weapons.

@9QJDCLD answered… 6mos

Yes, but the military would still be responsible for casualties caused by the weapons, and there should be a human deactivation control point.

@9Q953CS answered… 6mos

Yes, but under the authorisation of humans. Weapons can't be allowed to get to a stage where they decide what to kill, as we'll all be doomed then.

@9Q872WS answered… 6mos

Yes, but only guided. I believe the actions should be able to be overridden by the human if necessary.

@9Q83PJ8 answered… 6mos

No, a machine cannot be held accountable and shouldn't make decisions that could lead to a loss of life.

@9PW6SQD (Liberal Democrat) answered… 6mos

AI should only be used for defence, within clear guidelines, with details corroborated by human intelligence.

@9PMSJMD (Liberal Democrat) answered… 6mos

No, AI technology is nowhere near advanced enough to justify its use in situations that intentionally put people's lives and livelihoods at risk.

@9PLXMS2 answered… 6mos

As long as there is a human “in the loop”, AI tracking is fine; launch/shoot authority should be both human and auditable.

@9PLG6LG answered… 6mos

The UK should invest in the development of such weapons but only use them in retaliation to use of similar weapons against the nation.

@9PH38DT answered… 6mos

The use of AI by the military should be governed by global protocols, similar to those for nuclear weapons.

@9P5CKDP answered… 7mos

I believe AI can be very useful in war, but I'm also afraid. In the wrong hands, with criminal minds, this worries me so much. How do humans control AI in the future?

@9P3YW2X answered… 7mos

AI should only be considered when civilian presence is minimal to none, and never otherwise; the greater good cannot be served if we are killing innocent people along the way.

@9NW6H5N answered… 7mos

Yes, but the end decision should be made by a human with all relevant details and information available, taking into account the AI's suggestion.

@9NWVV74 answered… 7mos

Yes, in a smaller proportion of systems, but only if reliance on the AI can be shown to be more accurate and less negatively biased than human guidance, and only if it can be proven to be just as secure from cyber attacks.

@9NHFJLN (Independent) answered… 7mos

Only when the technology is proven to be reliable and efficient at working alongside human personnel.

@9NFMDQV answered… 7mos

I doubt most people can even begin to understand the tactical, strategic and technological implications of such a question.

@9NC3Z7X (Conservative) answered… 7mos

No, we should not put so much trust in computerisation. Everything can go wrong, and it should be left to human intuition and training.

@9NC379S (Independent) answered… 7mos

With our current quality of artificial intelligence, no; but once it has become more competent, I don't see why not.

@9MY48WG answered… 7mos

Yes, as long as the government maintains control of artificial intelligence! You wouldn't want to see robots exterminating humans!

@9MY3HGG answered… 7mos

Information should be gathered using AI and double-checked by a committee, and no weapon should be used unless it's by humans.

@9MWVX59 (Liberal Democrat) answered… 7mos

In moderation, after proper testing. However, all weapons should still have a human to ensure no mistakes are made.

@9MWTGL9 answered… 7mos

Yes, but there needs to be close monitoring, with the ability for a human to override its actions where necessary (to prevent unnecessary casualties).

@9MR25Y6 (Green) answered… 7mos

Individual ammunition guidance systems are OK, but AI should not have control over launch decision-making or choice of target.

@9MCPFV6 (Labour) answered… 8mos

Probably, yes, as we don't seem to have the class of government, or number of adequately trained military personnel required to actually fight a war effectively; perhaps, machines can do a better job.

@9MCK3XP answered… 8mos

Yes, we need an updated army, even if that results in the phasing out of soldiers or support personnel for AI.

@9M3KWNY answered… 8mos

Yes, once the error (i.e. collateral casualty) rate has been proven to be as low as or lower than human guidance.

@9M346Q4 answered… 8mos

That depends: is AI being used to select targets, or to assist the flight path and keep the weapon on track? No to selecting targets, but yes to assisting flight paths and manoeuvres.

@9M2PBRJ (Liberal Democrat) answered… 8mos

Only if it definitely improves the accuracy of 'hit locations' to keep innocent civilians safe from war.

@9M26MKV answered… 8mos

Implement only once it is considered better than human decision-making, and only guided by human intervention.

@9LPGZ6D answered… 8mos

I still feel it is too early to use AI; it is quite unpredictable and not accurate. Once the technology improves, it would be fine to use.

@9LM9LMM answered… 8mos

The AI is only as good as those who train it. If AI could guarantee that no aid workers would ever be killed, as we have seen in Gaza, then sure; but I, as someone who works in AI and software development, am hesitant, fearing it would instead be used only for malicious reasons and not for preventative ones.

@9LLTW74 answered… 8mos

AI is not intelligent; it is effectively a statistical model. If it is the best way to provide target/image recognition, sure, why not?

@9LG86S4 answered… 9mos

Only if there is some level of human oversight to make sure it's not making decisions on its own.

@9LF6SNB answered… 9mos

Yes, but only to improve the speed of human decision-making, never letting the AI make the final decision.

@9LDNRSG answered… 9mos

It depends on how the AI is being used. If it is determining targets no, but if it is just streamlining identifying the enemy that is okay.

@9PYWKF8 answered… 6mos

Only if it can be demonstrated that doing so will not lead to greater or excessive civilian casualties.

@9PX5PK5 answered… 6mos

No, because resources, including money, technology, people's intelligence and time, should not be spent on making weapons at all.

@9PQ889D (Rejoin EU) answered… 6mos

I would like AI to be used for better purposes, but if other countries use it for military purposes we have to as well.

@9PNMMNM answered… 6mos

On the fence: the biggest part of me doesn't want to see AI used at all, and particularly not for military purposes. But I'm also conscious that we can't stop other countries from developing AI for military use, and I wouldn't want to place our own country at a disadvantage. Of course, the downside of this is that it could lead to an AI arms race.

@9PJ9F9M answered… 6mos

Not a proven technology in these terms, but if it were proven viable beyond doubt I would support its use.

@9PH5LY2 answered… 6mos

Yes, provided testing can demonstrate a 99% accuracy rate, and that all uses involve human operators to authorise or monitor engagement.

@9P4PPSV answered… 7mos

It depends on the technical level at which it is implemented, but it should not be used to directly control and fire weapons.

@9NWJFW9 answered… 7mos

It depends on how others are using weaponry; we should never be behind the times if it puts us at a disadvantage.

@9NP3LMM answered… 7mos

Yes, however the United Kingdom should do so only on a thorough cost-benefit basis and should contribute to the development of international regulations to prevent an AI arms race.

@9NLTYMF answered… 7mos

For lower-grade weapons, like certain missiles, they should be; but humans need to guide the higher-grade weapons.

@9NKPXTJ answered… 7mos

Yes, but the AI should be monitored when in use and be able to be overridden by those in control of the weapon, in case faults or mistakes occur.

@9P7M63P answered… 6mos

No, it's unfathomably dangerous and morally wrong to allow an AI control of a weapons system that is designed to take life. Anti-missile systems should have AI, due to the advent of faster payloads such as hypersonics that humans are incapable of reacting to fast enough, but allowing an AI to take a life is wildly stupid.

@Sum_Wun (Liberal Democrat) answered… 7mos

No weapons, but weapons platforms, yes; with the ultimate decision to bring lethal force resting with an overseeing human.

@B26YXZC (Independent) answered… 1 day

They definitely shouldn't, as this is profoundly dangerous and damaging but we likely can't do anything to stop them.
