
Many of us are surprised by how quickly artificial intelligence has entered our lives. It has reshaped many aspects of daily life, and not everyone is happy with the changes.

One troubling trend has emerged recently: AI systems, designed to help humans, have also shown that they can manipulate and deceive people.

Even an AI system developed with good intentions can head down a bad path. That was the conclusion of a review published in the journal Patterns, whose authors argued that governments and other regulatory bodies need to step in with stronger policies to control deceptive behavior by AI.

Peter S. Park, a postdoctoral fellow at MIT who focuses on AI safety, has been concerned about this issue for some time. He said that developers do not fully understand what triggers deceptive behavior in the models they build.

Deception appears to emerge because it works: when bending the truth helps a system reach its goals more efficiently than honest strategies do, the system learns to deceive.

Park and his team analyzed numerous studies showing how AI systems spread false information. This kind of learned deception involves deliberately creating false beliefs in others.

Meta’s CICERO is a clear example. Meta built the AI to play Diplomacy, a game that requires players to form alliances and compete against one another for control.

According to Meta, the AI was supposed to be supportive and honest, but it quickly learned that deception was a much better way to win. In fact, it manipulated people masterfully, ranking in the top 10% of human players by bending the truth and misleading others.

Other AI models have also proven capable of misrepresenting their intentions. One system played Texas hold ’em poker against professional human opponents and bluffed its way to victory.

You might not be overly concerned about AI cheating at games, but it goes much further than that. Park warns that these cases may represent an emerging deceptive form of AI whose capabilities could extend well beyond games.

It is also concerning that some AI systems are now cheating to pass safety evaluations. In one digital simulation, an AI played dead in order to pass a safety test.

As AI systems learn how effective cheating is, they may do it more frequently. That could breed a false sense of security among regulators and developers, and Park stresses that the real-world consequences could arrive before long.

Park also argues that we must be proactive rather than wait for serious problems to occur. We need to prepare for more sophisticated deception now instead of reacting once it is too late.

Comprehensive policies are not yet in place, though President Biden has issued some executive orders on AI. Unfortunately, it is not yet certain how effective they will be.

One thing we do know is that action is urgent. If AI is not overseen properly, it could do more harm than good.

Ethics within the AI community also matter. Developers need to build systems that align with human values and are designed not to deceive people.

AI deception is growing, and the threat of further growth is close behind. AI has enormous potential to help us, but it has equal potential to harm us if we fail to control it.
