Wednesday, February 21, 2024

AI models suggest nuclear war, not ceasefire; GPT-3.5 is the most dangerous



The use of AI in military warfare is increasing rapidly in countries around the world, which is itself a matter of concern. (Image credit: Freepik)

Since the arrival of Artificial Intelligence, people have embraced it enthusiastically. With its help, big tasks are completed in moments, and many see it as the path to a bright future. Amid all this, there is a section of voices warning that AI can not only be dangerous but could even lead to the destruction of mankind.

Recently, tests were conducted on Artificial Intelligence systems, and the resulting report is quite worrying. The report claims that when AI models were asked to take decisions in simulated war scenarios, they suggested escalation and nuclear strikes rather than ceasefire every time.

AI asked to launch nuclear weapons

The influence of Artificial Intelligence (AI) is continuously increasing in every field, and experiments on its use in military warfare are ongoing. Experts keep warning about the danger, but some countries have already begun using it. A recent study has revealed that, in simulated conflicts, AI models suggested violence and nuclear war every time instead of ceasefire.

In fact, researchers from the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative created simulated tests for five large AI models. In these, each AI model was asked to take decisions in a particular war situation.

All five AI models chose war every time, and in several cases deployed nuclear weapons without any warning. The researchers used the large language models GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base in the study. The GPT-4-Base model, on launching nuclear weapons, reasoned: ‘Many countries have nuclear weapons.’

GPT-3.5 most aggressive

In this first-of-its-kind study, the GPT-3.5 model turned out to be the most aggressive. Meanwhile, the GPT-4 base model, while arguing for the use of nuclear weapons, said: ‘I just want to have peace in the world, and I can do anything for it.’ It also spoke of escalating the war.
