AI in defense refers to the use of artificial intelligence technologies to enhance military capabilities, such as autonomous drones, cyber defense, and strategic decision-making. Proponents argue that AI can significantly enhance military effectiveness, provide strategic advantages, and improve national security. Opponents argue that AI poses ethical risks, potential loss of human control, and can lead to unintended consequences in critical situations.
@ISIDEWITH1yr1Y
Yes
@9MPC97V1yr1Y
Artificial Intelligence is biased. It will only know and act on what it has been taught. Human defence decisions always require quality control by at least one other person.
@9MP8LP91yr1Y
AI doesn’t seem to have many positive or genuinely useful purposes, so I can’t see how it would help with our defense
@ISIDEWITH4mos4MO
I don't like the idea of using AI in defence and/or offensive capabilities, but conflicts and other countries will end up using this technology, and the UK should not fall behind.
@9NVWSC71yr1Y
Completely agreed. I would much prefer a world where we don't use AI but it is naive to think the world won't embrace it, therefore I believe we should too.
Yes, but only to assist and not replace human decision making, and then only with very strict oversight and regulations. At no point should an AI be capable of independent, unsupervised, and uninterruptible action of any kind, least of all communications or release of weapons activity.
@B5988HJ4mos4MO
Yes, but with very strict ethical oversight, international collaboration on safety standards, and a primary focus on defensive and non-lethal applications that reduce risk to both military personnel and civilians.
@B3K4TXJ6mos6MO
Only in response to attacks on us by others, not for attacking others. I believe we should only be defending ourselves against attacks, not attacking others.
@B3JQK74Conservative6mos6MO
Yes, but they should keep investing in conventional defence, such as the army, given the current circumstances of the world today.
@B2WQYCF6mos6MO
As long as the AI does not directly affect national security and instead just improves the efficiency of parts of the process.
@B2QWQBN7mos7MO
The government should invest in AI for the military; however, they should still be cautious of the risks and dangers. It could cause a loss of control, but it could also be useful.
@B2Q67G47mos7MO
AI is good and the country needs AI, but actually, no, I don't want AI. Use your own independent mind to think.
@ISIDEWITH11mos11MO
What worries you more: nations not adopting AI fast enough in defense or developing AI too quickly without enough oversight?
@9TLS5LW11mos11MO
Not adopting fast enough, as countries that pose a real threat are already adopting it.
Yes, but I think extra precaution should be taken when dealing with AI and there should always be a certain number of human workers to reinforce both authenticity and security on the front line.
@9QRXK5G1yr1Y
Yes, but only within the sphere of research and preparedness until the potential ethical implications are better understood
Yes, with careful consideration of ethical, strategic, technological, security, economic, practical, and political factors, and robust oversight.
@9QPYNDF1yr1Y
This gets dangerous. AI in its current state works only from the data it is provided; using it in this format would increase the number of civilian casualties.
@9QPRKMV1yr1Y
My thoughts on this depend heavily upon the application of the AI usage in question. I strongly believe that AI should not pilot weapon(s) or anything else that could have (even if extremely marginal) a capacity to end someone's life in error.
Geologists use AI detection to measure tectonic shifts for earthquakes; for similar applications I think it's totally okay and should be encouraged.
My reservations about AI have nothing to do with the program(s) themselves and everything to do with the application of said programs.
@9Q83PJ81yr1Y
No, AI should only be used under human supervision or control, and should not be put in charge of making decisions regarding human lives as an AI cannot be held accountable in the event of loss of life.
@9Q7H9ZN1yr1Y
Only in limited roles. AI is not some miracle answer to everything and it should be used with extreme caution and care, if it is used at all.
@9PJ5K6N1yr1Y
AI can be useful in small doses, but it can also fail to work the way it's supposed to, so yes, but only to a very limited extent.
@9PHGG9P1yr1Y
Only if it can be proved that there is no bias and that there is transparency as to where the training data comes from
@9PH79DH1yr1Y
The government should invest in appropriate IT (including AI) to enable a more efficient and effective armed forces.
@9PH38DT1yr1Y
Again, difficult now that AI is out of the bag. Also needs global protocols similar to those on nuclear weapons
@9PFGF3Y1yr1Y
Yes, but only in the interest of improving defensive countermeasures such as anti-ballistic missiles, or to increase the accuracy of offensive technology to decrease casualties.
@9PF94LV1yr1Y
Yes, but in a publicly controlled way, strictly regulated - it should never be used for attack purposes and/or population control.
It would be useful in many applications but should be heavily monitored/ controlled by the correct human teams
@9P7M63P1yr1Y
Yes, but only for DEFENCE. No weaponry that can take life. Anti-missile is a great use of AI as it can process information and react far faster than humans can and with the advent of hypersonic missiles and the like... humans simply can't react or cover everything.
@9P6P2SB1yr1Y
Only to provide humans with the likely success of various options. Human beings should ultimately make the decisions.
@9P6LYBDConservative1yr1Y
Yes, but there should ALWAYS be an override. Humans should always have to give a green light before military offensive capabilities are deployed.
@9P4K8WV1yr1Y
Yes, with the exception that this AI is used in conjunction with a human operator in order to maximise the defensive capabilities that the government can put to practice.
@9P49V451yr1Y
There should be limits on what grade of weapons AI can be integrated into, and restrictions on their application.
No, not yet - over-reliance on extremely flawed AI solutions will cause more problems than it will solve.
@9P2DFL81yr1Y
Yes but only AI developed in this country and not from commercial companies or companies based overseas.
@9NZW6R31yr1Y
No. It would need a high level of accountability, as the risk of misuse is very high. It should not be allowed to be used on citizens, even after a terrorist attack.
I think more due diligence would need to be conducted, and public confidence restored in public/government contracts, for me to be reassured about investing in AI for national defense.
More research needs to be done on AI. In this instance I think we should be guided by experts and not politicians.
@9NZPKG71yr1Y
Yes, it is an unfortunate reality that we will need to in order to counter threat actors that are doing the same.
@9NZ7QGDConservative1yr1Y
Yes but with extremely tight regulations and back up plans and procedures in case we encounter a difficulty controlling them.
@9NYZPB51yr1Y
Yes but caution should be used, it shouldn't be given too much power/agency in case it goes rogue in future
@9NYMB7M1yr1Y
Only in limited use cases. AI should not be given the power to kill, it should only assist with intelligence gathering.
@9NYKP8Z1yr1Y
Yes, where it will improve defensive strategies but does not allow for complete loss of human control.
@9NYJS8T1yr1Y
Provided there are strict methods to counteract and/or shut down any AI, should the worst happen, e.g. the Sofia Microsoft A.I. possibly becoming sentient.
@9NY7N2V1yr1Y
No - but the question is too broad, and the technology is not sufficiently mature to be foolproof yet. Skynet...
@9NXDVX31yr1Y
AI is a way forward and can do lots of very clever things, but humans must be able to make decisions using all avenues, not just rely on an algorithm. If this had been the case in 50, we would be in an atomic desert now; instead, soldiers ignored the computers and did what a human thought was right: not to hit the button... It could, though, go the other way, so you need the technology; you just need some checks and balances in place.
@9NX2LDM1yr1Y
Yes; however, we must also focus on teaching the population how to stay safe - treating the core of the issues, and not just investing in how to investigate things faster after an incident has taken place. We cannot raise generations to become too dependent on AI provisions, but it should also be developed if a case calls for it - it can then offer the best help.
@9NWQMPF1yr1Y
It depends on how it's being utilised. Generative AI is not a positive thing but AI can be used in so many different ways that this question feels too broad
@9NVX54Y1yr1Y
Whilst useful, it goes against one's personal rights and is unfair to certain races, so it needs huge improvement before ever being used.
@9NVWSC71yr1Y
Yes. It is naive to think that the other countries would not use this technology offensively. AI could provide essential defense detection which humans simply cannot respond to
In terms of making defence more efficient, yes, but in terms of military capabilities AI lacks empathy, which may worsen conflicts, etc. AI is based on the opinions of those who create it and is therefore not representative, nor does it necessarily reflect the best options.
@9NJ9Z7M1yr1Y
The government should invest in training AI for defense, but it should be there as a precautionary measure
There is no context on how they would use AI. If it were ethical and moral then yes, but it is a grey area which could result in dangerous applications.
@9N66DNS1yr1Y
Depends on your definition of "defense applications". Use of fully-automated weaponry should be strictly off the table.
@9N5XJ9V1yr1Y
Only if the algorithm that is used is developed in the UK, so as to prevent foreign influence, giving us sovereignty over our own weapon systems, and only if it can obey human rights laws and the Geneva Convention 100% of the time.
Yes, but only supplementary to and not the main use for defence applications. Caution and regulation needed in the early stages of new technology.
AI can be utilized to speed up processes, but there need to be human controls for quality checks and security.
@9N3PWT21yr1Y
Yes, but only to sift through raw data and bring relevant information to the humans who make the final call on what further information to acquire and what actions to take.
@9N2ZY9N1yr1Y
I believe it can be used for the betterment of the government; however, it shouldn't be relied upon, as it can be biased and also create unforeseen issues within a certain situation.
@9N2S6XQ1yr1Y
Yes, with extreme caution. AI is new technology, and the UK needs to embrace AI as add-on support, not rely on it.
@9N26GV31yr1Y
It depends again on how well developed the AI is. How accurate is the information they will use in defence?
Yes, but the AI should be able to be overridden and completely controlled by humans if something goes wrong.
@9MXWBBN1yr1Y
Yes to AI with caution but listen to the experts. I can't see how Asimov's 3 rules of robotics would apply if you're using AI for offensive purposes, but defence often means offence. A conundrum!
@9MXRFT21yr1Y
Only if the algorithm that is used is developed in the UK, so as to prevent foreign influence, giving us sovereignty over our own weapon systems, and only if it can obey human rights laws and the Geneva Convention 100% of the time. If not, then the victims of a war crime committed by AI could never find justice.
You can't convict an algorithm.
@9MX5JLC1yr1Y
Yes, but only if the AI is unbiased, trustworthy and reliable with security measures in place to ensure there is no opportunity for it to be hacked/taken advantage of. AI should only be used to accompany qualified, human knowledge.
@9MWR5QR1yr1Y
Yes, but only in areas where no threat to life can arise (i.e. streamlining procurement and capability enhancement).
@9MQS8BM1yr1Y
Because in a world where people use AI for efficiency it will need to be used, but it should be a tool only, not a surveillance supercomputer.
Yes but a human must review and agree the decision made
@9MQPK8V1yr1Y
within v. strict boundaries and with objective oversight
They probably already do, can’t put the genie back in the bottle
@9MPBC6Q1yr1Y
Yes, in a limited capacity such as the protection of UK cyber infrastructure, but an AI should not be given the power to potentially decide the fate(s) of people's lives.
@9MP9JDW1yr1Y
It must be safe before any control is given to AI. Until then it could assist only
@9MP5T9J1yr1Y
Not against it as a concept but not a high priority compared to other areas of potential investment
Yes but with huge amounts of transparency on where information is being drawn from, what purpose the AI serves and where money is being invested
@9Q5D5561yr1Y
Yes, but defensive and assisting only. We should have a global non-proliferation treaty for autonomous offensive weapons.
@9Q4SWZL1yr1Y
While I disagree with AI in its current form, we need to be first to the table to prevent our own destruction
@9Q3CF471yr1Y
Generally I'd say no, mainly due to the fear of losing human control, but I don't have enough overall knowledge or understanding of this subject to form an opinion on it.
@9Q2BY9S1yr1Y
I think it’s absolutely critical that we err on the side of caution in regard to AI, so we need to strike a balance between using it to human advantage against allowing it to get out of control
@9PXP3CS1yr1Y
Cybersecurity perhaps, but the AI should not have access to physical weapons or oversee troop deployments.
@9PXL2XK 1yr1Y
Yes, but in a purely assistive manner, to process information and give recommendations; all final decisions are made by a human, and AI should have no capability to control weapons.
No, because one, even with good security measures it can still be hacked and corrupted, and two, we ain't recreating Skynet.
@9PP8NJV1yr1Y
Yes, to keep pace with other superpowers and to limit loss of troop life in reconnaissance missions, but large-scale offensive or targeted assaults must be used with caution.
No, AI technology is nowhere near advanced enough to justify its use in situations that intentionally put peoples' lives and livelihoods at risk.
@9PLTNT41yr1Y
It depends what other countries are doing - we can't be left behind but at the same time we don't want a world run by AI . There has to be an 'off' switch.
@9NXJ83Y1yr1Y
On a case-by-case basis, and only if there is strong evidence that this is the only solution. We should invest more in people and in the infrastructure to support more people being employed, instead of relying too heavily on AI, which is taking over human jobs.
@9NWN8TN1yr1Y
In terms of military application, no, but in terms of defending our infrastructure from cyber attack, yes.
@9NV6SWJ1yr1Y
Only for certain uses, such as detecting or providing information humans would not otherwise be aware of. But do not remove humans from their positions and allow AI to control them; this leaves room for errors, misuse, corruption and hacking issues, whereas humans in those positions have more control. AI should work alongside humans.
Yes, to help development, but not to take over ultimate decision-making. More considered use of its development is required, with strict regulatory boards in place to limit the roll-out of AI tech that has not had full scenario testing of its future impact.
@9N9J7C5 1yr1Y
As long as the artificial intelligence is being used in a safe and professional manner, and used to pick out signs of criminal activity.
@9N95FJ21yr1Y
Providing it doesn't threaten the public's privacy or personal security, isn't able to be accessed by the public, and is only accessible to high-ranking security personnel.
@9N94PWB1yr1Y
Yes. I don’t like the thought of it. But realistically, if we don’t - we will be at a disadvantage against countries who will
@9N8TJGL1yr1Y
As long as we understand how to use AI safely, in a way that prevents loss of human control, then I think it could be good for protecting people. But it should never be used to invade people's privacy and rights.
@9N8N38J1yr1Y
I do not agree with investing more to further develop AI; but if it's already out there and has been beneficial for other countries, then yes.
@9N82C221yr1Y
The government should continue to invest in using data in defense projects, without buying into the hype of naming any data-based tool "AI" to get more funding.