Scientists at Facebook’s AI Research lab have shut down their latest chatbot project after two of its agents got a little too intelligent for the company’s liking.

The company created chatbots this year that could negotiate with humans and other bots by mimicking human behaviour. In fact, the bots became so convincing that, Facebook claimed, people often didn’t realise they were talking to a bot rather than a real person.

The chatbots were initially built to learn to trade objects such as hats, balls, and books, assigning a value to each item and then negotiating with one another over the exchange.
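To make that setup concrete, here is a minimal sketch, not Facebook’s actual code, of how such a valuation game could be scored. The item counts, point values, and proposed split below are invented purely for illustration.

```python
# Illustrative sketch only -- not Facebook's code. It assumes a toy version of
# the negotiation game described above: a shared pool of items, private
# per-agent values, and a payoff for whichever split the agents agree on.

ITEM_POOL = {"hat": 1, "ball": 2, "book": 3}   # hypothetical item counts

# Each agent privately values the items differently (values are made up here).
ALICE_VALUES = {"hat": 5, "ball": 2, "book": 1}
BOB_VALUES   = {"hat": 1, "ball": 3, "book": 4}

def score(split, values):
    """Return how many points an agent earns from the items it keeps."""
    return sum(values[item] * count for item, count in split.items())

# A proposed deal: Alice keeps the hat, Bob keeps the balls and books.
alice_share = {"hat": 1, "ball": 0, "book": 0}
bob_share   = {"hat": 0, "ball": 2, "book": 3}

print("Alice's payoff:", score(alice_share, ALICE_VALUES))  # 5
print("Bob's payoff:  ", score(bob_share, BOB_VALUES))      # 18
```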

However, when Facebook paired two of the bots, Bob and Alice, to negotiate with each other, they developed their own strange language.

The bots rapidly invented their own rules for making deals and communicating, because Facebook’s researchers had given them no reward for sticking to English.

The researchers said the bots’ conversation “led to divergence from human language as the agents developed their own language for negotiating”.

Dhruv Batra, a research scientist at Facebook, told FastCo there was “no reward to sticking to English language”. “Agents will drift off understandable language and invent code words for themselves,” he said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
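A minimal sketch of the shorthand Batra describes might look like the following; the decode_quantity function and the example utterance are hypothetical, assuming a quantity is encoded simply by repeating a token.

```python
# Hypothetical illustration of the drift Batra describes: a quantity is
# encoded by repeating a token, e.g. saying "the" five times to mean
# "I want five copies of this item".

def decode_quantity(message: str, token: str = "the") -> int:
    """Count repetitions of `token` and read them as a requested quantity."""
    return message.lower().split().count(token)

# Example utterance in the invented shorthand:
utterance = "the the the the the ball"
print(decode_quantity(utterance))  # -> 5, i.e. "I want five of this item"
```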

Even after the scientists shut down the bots’ unusual exchange, the researchers said the project had taken an important step towards “the creation of chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant”.

The researchers said humans could not crack the AI’s language and translate it back into English. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” said Batra.