Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Russia, the United States and China have all recently invested billions of dollars in secretly developing AI weapons systems, sparking fears of an eventual “AI Cold War.” In April 2024, +972 Magazine published a report detailing the Israel Defense Forces’ intelligence-based program known as “Lavender.” Israeli intel…
@ISIDEWITH9mos9MO
No
@9LRZN5Z8mos8MO
It removes an important element of human discretion and empathy when deciding whether or not to attack and could result in mass casualties if not used properly.
@ISIDEWITH9mos9MO
Yes
@9LNDC7S8mos8MO
AI should not be allowed to become as powerful as it potentially could in the future, and how it advances over time should be strictly controlled.
@9LND3XN8mos8MO
Have you not seen Terminator? AI has no heart, no soul, no hesitation. A human has the luxury of feeling compassion.
@9LRZN5Z8mos8MO
Other countries will develop this technology, so it would be foolish not to keep up with them; AI is the future whether we like it or not.
@9LPCHHW8mos8MO
AI does not have the ability to really think like a human does. An incorrect decision made by an AI model in a war could have catastrophic consequences.
@9MV4FB57mos7MO
Yes, but only so long as it is under ultimate human control: the AI is not autonomous in deciding when to intervene, it is there solely to ensure efficiency and accuracy, and there is an absolute failsafe that can stop the AI from going rogue.
@9QB522R6mos6MO
Yes, as long as it does not include weapons of mass destruction and is pre-authorised by multiple layers of human agreement.
@9ZCBQKS2mos2MO
Only if they have been thoroughly tested; then it is up to the military themselves whether they want to use AI.
@9XFDYGT2mos2MO
AI is liable to tactical errors and accidental attacks on civilians, and can also be hacked by an enemy force and rendered useless.
@9Q5SXJ6 6mos6MO
AI should be utilised only to allow faster decision making, but the ultimate choice should always require human discretion
@9R8FL9W5mos5MO
Until it can be effectively controlled, AI should not control military weapons for the foreseeable future.
@9QJWJ696mos6MO
Absolutely not; AI does not have empathy, which is needed for the often ad hoc decisions that have to be made when using military weapons.
@9QJDCLD6mos6MO
Yes, but the military is still responsible for casualties caused by the weapons, and there should be a human deactivation control point.
@9Q953CS6mos6MO
Yes, but under the authorisation of humans. Weapons can't be allowed to get to a stage where they decide what to kill, as we'll all be doomed then.
@9Q872WS6mos6MO
Yes, but only guided. I believe the actions should be able to be overridden by the human if necessary.
@9Q83PJ86mos6MO
No, a machine cannot be held accountable and shouldn't make decisions that could lead to a loss of life.
@9PW6SQDLiberal Democrat6mos6MO
AI should only be used for defence, within clear guidelines and with details corroborated by human intelligence.
@9PMSJMDLiberal Democrat6mos6MO
No, AI technology is nowhere near advanced enough to justify its use in situations that intentionally put people's lives and livelihoods at risk.
@9PLXMS26mos6MO
As long as there is a human “in the loop”, AI tracking is fine; launch/shoot authority should be both human and auditable.
@9PLG6LG6mos6MO
The UK should invest in the development of such weapons but only use them in retaliation for the use of similar weapons against the nation.
@9PH38DT6mos6MO
The use of AI by the military should be governed by global protocols, similar to those for nuclear weapons.
@9P5CKDP7mos7MO
I believe AI can be very useful in war, but I'm also afraid. In the wrong hands, with criminal minds, this worries me so much. How will humans control AI in the future?
@9P3YW2X7mos7MO
AI should only be considered when civilian casualties would be minimal to none, and never instead of human judgement; the greater good cannot be served if we are killing innocent people along the way.
@9NW6H5N7mos7MO
Yes, but the end decision should be made by a human with all relevant details and information available, taking into account the AI's suggestion.
@9NWVV747mos7MO
Yes, in a smaller proportion of systems, but only if reliance on the AI can be shown to be more accurate and less negatively biased than human guidance, and only if it can be proven to be just as secure from cyber attacks.
@9NHFJLNIndependent7mos7MO
Only when the technology is proven to be reliable and efficient at working alongside human personnel.
@9NFMDQV7mos7MO
I doubt most people can even begin to understand the tactical, strategic and technological implications of such a question.
@9NC3Z7XConservative7mos7MO
No, we should not put so much trust in computerisation. Everything can go wrong, and it should be left to human intuition and training.
@9NC379SIndependent7mos7MO
With the current quality of artificial intelligence, no, but once it has become more competent I don't see why not.
@9MY48WG7mos7MO
Yes, as long as the government maintains control of artificial intelligence! You wouldn't want to see robots exterminating humans!
@9MY3HGG7mos7MO
Information should be gathered using AI and double-checked by a committee, and no weapon should be used unless it's by humans.
@9MWVX59Liberal Democrat7mos7MO
In moderation, after proper testing. However, all weapons should still have a human to ensure no mistakes are made.
@9MWTGL97mos7MO
Yes, but there needs to be close monitoring, with the ability for a human to override its actions where necessary (to prevent unnecessary casualties).
Individual ammunition guidance systems are OK, but AI should not have control over launch decision-making or choice of target.
Probably, yes, as we don't seem to have the class of government, or the number of adequately trained military personnel, required to actually fight a war effectively; perhaps machines can do a better job.
@9MCK3XP8mos8MO
Yes, we need an updated army, even if that results in the phasing out of soldiers or support personnel for AI.
@9M3KWNY8mos8MO
Yes, once the error (i.e. collateral casualty) rate has been proven to be as low as or lower than that of human guidance.
@9M346Q48mos8MO
That depends: is AI being used to select targets, or to assist the flight path and keep the weapon on track? No to selecting targets, but yes to assisting flight paths and manoeuvres.
@9M2PBRJLiberal Democrat8mos8MO
Only if it definitely improves the accuracy of 'hit locations' to keep innocent civilians safe from war.
@9M26MKV8mos8MO
Implement only once it is considered better than human decision-making, and only guided by human intervention.
@9LPGZ6D8mos8MO
I still feel it is too early to use AI; it is quite unpredictable and not accurate. When the technology improves, it would be fine to use.
@9LM9LMM8mos8MO
The AI is only as good as those who train it. If AI could guarantee no aid workers would ever be killed, as we have seen in Gaza, then sure; but as someone who works in AI and software development, I am hesitant that it would instead be used only for malicious reasons and not preventative ones.
@9LLTW748mos8MO
AI is not intelligent; it is effectively a statistical model. If it is the best way to provide target/image recognition, sure, why not.
@9LLFG2L8mos8MO
Yes, but they should be regularly checked by humans.
@9LL83SZLiberal Democrat8mos8MO
Yes, but it should follow strict rules and regulations.
@9LG86S49mos9MO
Only if there's some level of human oversight to make sure it's not making decisions on its own.
@9LF6SNB9mos9MO
Yes, but only to improve the speed of human decision-making, never letting the AI make the final decision.
@9LDNRSG9mos9MO
It depends on how the AI is being used. If it is determining targets, no; but if it is just streamlining identifying the enemy, that is okay.
@9PYWKF86mos6MO
Only if it can be demonstrated that doing so will not lead to greater or excessive civilian casualties.
@9PX5PK56mos6MO
No, because resources, including money, technology, people's intelligence and time, should not be spent on making weapons at all.
I would like AI to be used for better purposes, but if other countries use it for military purposes we have to as well.
@9PNMMNM6mos6MO
On the fence: the biggest part of me doesn't want to see AI used at all, but particularly not for military purposes. However, I'm also conscious that we can't stop other countries from developing AI for military use, and I wouldn't want to place our own country at a disadvantage. Of course, the downside of this is that it could lead to an AI arms race.
@9PJ9F9M6mos6MO
Not a proven technology in these terms, but if it were proven viable beyond doubt I would support its use.
@9PH5LY26mos6MO
Yes, provided testing can demonstrate a 99% accuracy rate, and that all utilisations involve human operators to authorise or monitor engagement.
@9P4PPSV7mos7MO
It depends on the technical level at which it is implemented, but it should not be used to directly control and fire weapons.
@9NWJFW97mos7MO
It depends on how others are using weaponry; we should never be behind the times if that puts us at an unfair disadvantage.
@9NP3LMM7mos7MO
Yes, however the United Kingdom should do so only on a thorough cost-benefit basis and should contribute to the development of international regulations to prevent an AI arms race.
@9NLTYMF7mos7MO
For lower-grade weapons, like certain missiles, they should be, but humans need to guide the higher-grade weapons.
@9NKPXTJ7mos7MO
Yes, but the AI should be monitored when in use and be able to be overridden by those in control of said weapon, in case faults or mistakes occur.
@9P7M63P6mos6MO
No, it's unfathomably dangerous and morally wrong to allow an AI control of a weapons system that is designed to take life. Anti-missile systems should have AI, due to the advent of faster payloads such as hypersonics that humans are incapable of reacting to quickly enough, but allowing an AI to take a life is wildly stupid.
@Sum_WunLiberal Democrat 7mos7MO
Not weapons, but weapons platforms, yes, with the ultimate decision to use lethal force resting with an overseeing human.
@B26YXZCIndependent1 day1D
They definitely shouldn't, as this is profoundly dangerous and damaging, but we likely can't do anything to stop them.