AI in defense refers to the use of artificial intelligence technologies to enhance military capabilities, such as autonomous drones, cyber defense, and strategic decision-making. Proponents argue that AI can significantly enhance military effectiveness, provide strategic advantages, and improve national security. Opponents argue that AI poses ethical risks, threatens the loss of human control, and can lead to unintended consequences in critical situations.
@ISIDEWITH 9mos
Yes
@9MPC97V 9mos
Artificial intelligence is biased: it will only know and act on what it has been taught. Human defence decisions always require quality control by at least one other person.
@9MP8LP9 9mos
AI doesn’t seem to have many positive or genuinely useful purposes, so I can’t see how it would help with our defence.
I don’t like the idea of using AI in defensive and/or offensive capabilities, but conflicts and other countries will end up using this technology, and the UK should not fall behind.
@9NVWSC7 8mos
Completely agreed. I would much prefer a world where we don't use AI but it is naive to think the world won't embrace it, therefore I believe we should too.
@B2WQYCF 7 days
As long as the AI will not directly affect national security and instead will just improve the efficiency of part of the process.
@B2QWQBN 2wks
The government should invest in AI for the military; however, they should still be cautious of the risks and dangers. It could cause a loss of control, but it could also be useful.
@B2Q67G4 2wks
AI is good and the country needs AI, but actually no, I don’t want AI; use your own independent mind to think.
@ISIDEWITH 5mos
What worries you more: nations not adopting AI fast enough in defense or developing AI too quickly without enough oversight?
@9TLS5LW 5mos
Not adopting fast enough, as countries that pose a real threat are already adopting it.
@9QTTF8G Liberal Democrat 7mos
Yes, but I think extra precaution should be taken when dealing with AI and there should always be a certain number of human workers to reinforce both authenticity and security on the front line.
@9QRXK5G 7mos
Yes, but only within the sphere of research and preparedness until the potential ethical implications are better understood
Yes, with careful consideration of ethical, strategic, technological, security, economic, practical, and political factors, and robust oversight.
@9QPYNDF 8mos
This gets dangerous. AI in its current state works only from the data it is provided, so using it in this context would increase the number of civilian casualties.
@9QPRKMV 8mos
My thoughts on this depend heavily upon the application of the AI usage in question. I strongly believe that AI should not pilot weapon(s) or anything else that could have (even if extremely marginal) a capacity to end someone's life in error.
Geologists use AI detection to measure tectonic shifts for earthquakes; for similar applications I think it's totally okay and should be encouraged.
My reservations on AI have nothing to do with the program(s) themselves and everything to do with the application of said programs.
@9Q83PJ8 8mos
No, AI should only be used under human supervision or control, and should not be put in charge of making decisions regarding human lives as an AI cannot be held accountable in the event of loss of life.
@9Q7H9ZN 8mos
Only in limited roles. AI is not some miracle answer to everything and it should be used with extreme caution and care, if it is used at all.
@9PJ5K6N 8mos
AI can be useful in small doses, but it can fail to work the way it's supposed to, so yes, but only in a very small amount.
@9PHGG9P 8mos
Only if it can be proved that there is no bias and that there is transparency as to where the training data comes from
@9PH79DH 8mos
The government should invest in appropriate IT (including AI) to enable a more efficient and effective armed forces.
@9PH38DT 8mos
Again, difficult now that AI is out of the bag. Also needs global protocols similar to those on nuclear weapons
@9PFGF3Y 8mos
Yes, but only in the interest of improving defensive countermeasures such as anti-ballistic missiles, or to increase the accuracy of offensive technology to decrease casualties.
@9PF94LV 8mos
Yes, but in a publicly controlled way, strictly regulated; it should never be used for attack purposes and/or population control.
It would be useful in many applications but should be heavily monitored/controlled by the correct human teams.
@9P7M63P 8mos
Yes, but only for DEFENCE. No weaponry that can take life. Anti-missile is a great use of AI as it can process information and react far faster than humans can and with the advent of hypersonic missiles and the like... humans simply can't react or cover everything.
@9P6P2SB 8mos
Only to provide humans with the likely success of various options. Human beings should ultimately make the decisions.
@9P6LYBD Conservative 8mos
Yes, but there should ALWAYS be an override. Humans should always have to give a green light before military offensive capabilities are deployed.
@9P4K8WV 8mos
Yes, with the exception that this AI is used in conjunction with a human operator in order to maximise the defensive capabilities that the government can put to practice.
@9P49V45 8mos
There should be limits on what grade of weapons AI can be integrated into, and restrictions on their application.
No, not yet - over-reliance on extremely flawed AI solutions will cause more problems than it will solve.
@9P2DFL8 8mos
Yes but only AI developed in this country and not from commercial companies or companies based overseas.
@9NZW6R3 8mos
No. It would need a high level of accountability, as the risk of misuse is very high. It should not be allowed to be used on citizens, even after a terrorist attack.
I think more due diligence would need conducting, and public confidence in public/government contracts would need restoring, for me to be reassured about investing in AI for national defence.
@9NZS698 Women's Equality 8mos
More research needs to be done on AI. In this instance I think we should be guided by experts and not politicians.
@9NZPKG7 8mos
Yes, it is an unfortunate reality that we will need to in order to counter threat actors that are doing the same.
@9NZ7QGD Conservative 8mos
Yes but with extremely tight regulations and back up plans and procedures in case we encounter a difficulty controlling them.
@9NYZPB5 8mos
Yes but caution should be used, it shouldn't be given too much power/agency in case it goes rogue in future
@9NYMB7M 8mos
Only in limited use cases. AI should not be given the power to kill, it should only assist with intelligence gathering.
@9NYKP8Z 8mos
Yes, where it will improve defensive strategies but does not allow for complete loss of human control.
@9NYJS8T 8mos
Provided there are strict methods to counteract and/or shut down any AI should the worst happen, e.g. the Sofia Microsoft A.I. possibly becoming sentient.
@9NY7N2V 8mos
No - but the question is too broad, and the technology is not sufficiently mature to be foolproof yet. Skynet...
@9NXDVX3 8mos
AI is a way forward and can do lots of very clever things, but humans must be able to make decisions using all avenues, not just be reliant on an algorithm. If this had been the case in 50, we would now be in an atomic desert, as soldiers ignored the computers and did what a human thought was right: not to hit the button... It could go the other way, though, so you need the technology; you just need some checks and balances in place.
@9NX2LDM 8mos
Yes; however, we must also focus on teaching the population how to stay safe, treating things at the core of the issues and not just investing in how to investigate things faster after an incident has taken place. We cannot raise generations to become too dependent on AI provisions, but it should also be developed if a case calls for it; it can then offer the best help.
@9NWQMPF 8mos
It depends on how it's being utilised. Generative AI is not a positive thing but AI can be used in so many different ways that this question feels too broad
@9NVX54Y 8mos
Whilst useful, it goes against one’s personal rights and is unfair to certain races, so it needs huge improvement before ever being used.
@9NVWSC7 8mos
Yes. It is naive to think that the other countries would not use this technology offensively. AI could provide essential defense detection which humans simply cannot respond to
In terms of making defence more efficient, yes; but in terms of military capabilities, AI lacks empathy, which may worsen conflicts, etc. AI is based on the opinions of those who create it; it is therefore not representative and does not necessarily reflect the best options.
@9NJ9Z7M 9mos
The government should invest in training AI for defense, but it should be there as a precautionary measure
No context on how they would use AI. If it was ethical and moral then yes, but it is a grey area which could result in dangerous applications.
@9N66DNS 9mos
Depends on your definition of "defense applications". Use of fully-automated weaponry should be strictly off the table.
@9N5XJ9V 9mos
Only if the algorithm that is used is developed in the UK, so as to prevent foreign influence, giving us sovereignty over our own weapon systems, and if it can obey human rights laws and the Geneva Convention 100% of the time.
Yes, but only supplementary to and not the main use for defence applications. Caution and regulation needed in the early stages of new technology.
@9N3QYYP Liberal Democrat 9mos
AI can be utilized to speed up processes, but there need to be human controls for quality checks and security.
@9N3PWT2 9mos
Yes, but only to sift through raw data to bring relevant information to humans, who make the final call on what further information to acquire and what actions to take.
@9N2ZY9N 9mos
I believe it can be used for the betterment of the government; however, it shouldn't be relied upon, as it can be biased and can also create unforeseen issues in certain situations.
@9N2S6XQ 9mos
Yes, with extreme caution. AI is new technology, and the UK needs to embrace AI as add-on support, not rely on it.
@9N26GV3 9mos
It depends again on how well developed the AI is. How accurate is the information it will use in defence?
Yes, but the AI should be able to be overridden and completely controlled by humans if something goes wrong.
@9MXWBBN 9mos
Yes to AI with caution but listen to the experts. I can't see how Asimov's 3 rules of robotics would apply if you're using AI for offensive purposes, but defence often means offence. A conundrum!
@9MXRFT2 9mos
Only if the algorithm that is used is developed in the UK, so as to prevent foreign influence, giving us sovereignty over our own weapon systems, and if it can obey human rights laws and the Geneva Convention 100% of the time. If not, then the victims of a war crime committed by AI could never find justice.
You can't convict an algorithm.
@9MX5JLC 9mos
Yes, but only if the AI is unbiased, trustworthy and reliable with security measures in place to ensure there is no opportunity for it to be hacked/taken advantage of. AI should only be used to accompany qualified, human knowledge.
@9MWR5QR 9mos
Yes, but only in areas where no threat to life can arise (i.e. streamlining procurement and capability enhancement).
@9MQS8BM 9mos
Because in a world where people use AI for efficiency, it will need to be used, but it should be a tool only, not a surveillance supercomputer.
Yes but a human must review and agree the decision made
@9MQPK8V 9mos
within v. strict boundaries and with objective oversight
They probably already do, can’t put the genie back in the bottle
@9MPBC6Q 9mos
Yes, in a limited capacity such as the protection of UK cyber infrastructure, but an AI should not be given the power to decide the fate(s) of the lives of people.
@9MP9JDW 9mos
It must be safe before any control is given to AI. Until then it could assist only
@9MP5T9J 9mos
Not against it as a concept but not a high priority compared to other areas of potential investment
Yes but with huge amounts of transparency on where information is being drawn from, what purpose the AI serves and where money is being invested
@9Q5D556 8mos
Yes, but defensive and assisting only. We should have a global non-proliferation treaty for autonomous offensive weapons.
@9Q4SWZL 8mos
While I disagree with AI in its current form, we need to be first to the table to prevent our own destruction
@9Q3CF47 8mos
Generally I'd say no, mainly due to the fear of losing human control, but I don't have enough overall knowledge or understanding of this subject to form an opinion on it.
@9Q2BY9S 8mos
I think it’s absolutely critical that we err on the side of caution in regard to AI, so we need to strike a balance between using it to human advantage against allowing it to get out of control
@9PXP3CS 8mos
Cybersecurity perhaps, but the AI should not have access to physical weapons or oversee troop deployments.
@9PXL2XK 8mos
Yes, but in a purely assistive manner, to process information and give recommendations; all final decisions are made by a human, and AI should have no capability to control weapons.
@9PXCL7Z Workers of Britain 8mos
No, because one, even with good security measures it can still be hacked and corrupted, and two, we aren't recreating Skynet.
@9PP8NJV 8mos
Yes, to keep pace with other superpowers and to limit loss of troop life in reconnaissance missions, but large-scale offensives or targeted assaults must be handled with caution.
@9PMSJMD Liberal Democrat 8mos
No, AI technology is nowhere near advanced enough to justify its use in situations that intentionally put peoples' lives and livelihoods at risk.
@9PLTNT4 8mos
It depends what other countries are doing; we can't be left behind, but at the same time we don't want a world run by AI. There has to be an 'off' switch.
@9NXJ83Y 8mos
On a case-by-case basis, and only if there is strong evidence that this is the only solution. We should invest more in people, and in the infrastructure to support more people being employed, instead of relying too heavily on AI, which is taking over human jobs.
@9NWN8TN 8mos
In terms of military application, no, but in terms of defending our infrastructure from cyber attack, yes.
@9NV6SWJ 8mos
Only for certain uses, such as detecting or providing information humans would not otherwise be aware of. But do not remove human positions and allow AI to take them over; that leaves room for errors, misuse, corruption, and hacking issues, whereas humans in the position have more control. AI should work alongside humans.
Yes, to help development, but not to take over ultimate decision-making. More considered use of its development is required, with strict regulatory boards in place to limit the rollout of AI tech that has not had full scenario testing of its future impact.
@9N9J7C5 9mos
As long as the artificial intelligence is being used in a safe and professional manner, and used to pick out signs of criminal activity.
@9N95FJ2 9mos
Providing it doesn't threaten the public's privacy or personal security, and isn't able to be accessed by the public, being accessible only by high-ranking security personnel.
@9N94PWB 9mos
Yes. I don’t like the thought of it. But realistically, if we don’t - we will be at a disadvantage against countries who will
@9N8TJGL 9mos
As long as we understand how to use AI safely in a way that will preserve human control, then I think it could be good for protecting people. But it should never be used to invade people's privacy and rights.
@9N8N38J 9mos
I do not agree with investing more to further develop AI, but if it's already out there and has been beneficial for other countries, then yes.
@9N82C22 9mos
The government should continue to invest in using data in defence projects, without buying into the hype of naming any data-based tool "AI" to get more funding.
@9N786F6 9mos
Not if it's going to be used as Israel has demonstrated it can be: using AI to justify bombing targets and demolishing civilian areas.
@9N6TPW7 9mos
I do think AI could massively support this; however, there is still a long way to go before we can rely on it solely, so it's more complicated.
@9MY77LF 9mos
Current AI applications for military purposes have been proven to be ineffective and dangerous. They are not yet fit for purpose.
@9MY48WG 9mos
Yes, all governments of all nations should improve defence technology! Otherwise the invaders will invade.
@9MY3G7X 9mos
No, in general. But... The use of machines to do human killing is obscene. If humans must continue to kill then it should be without the 'distance' and moral barrier of 'computer says'. AI should be subject to stringent international control but humans lie and cheat so that is not going to happen. Morally we need a block on AI warfare but the argument will be that it's to save lives (and make it easier to steal land and properties). At the least there should be an agreement to use AI only for passive defence from incoming threats... which will ensure that the best defended will become the most arrogant and belligerent. There are no wins without a metanoia in human nature.
Yes provided the algorithm that is used is developed in the UK, to prevent foreign influence and obey human rights laws and the Geneva convention.
@9MTN2WZ 9mos
It depends on what AI they are investing in and what its purpose is.
@9MRFK9X 9mos
Only if the AI is proven to be trustworthy and reliable
@9MRDXWB 9mos
Yes, but only to keep up with other nations; AI should never make the decision to kill or harm a life.