OpenAI has rolled out an update to its Gym platform, offering new environments for developers to test and implement reinforcement learning (RL) algorithms.
Gym was announced in 2016 as a toolkit to help companies test the robots they’ve created, offering a number of virtual exercises for the bots to practise their intelligence with. These range from teaching them to play console games to using everyday objects we humans take for granted.
The upgrade to OpenAI’s Gym means robots can now try their hand at more sophisticated tasks, such as playing an instrument or picking up objects from a variety of surfaces (the floor, a table or a shelf, for example). It also trains them to carry out more complicated manoeuvres, such as picking up a pen and writing, teaching them through repeated actions in much the same way that human muscle memory develops.
OpenAI’s Gym platform uses the MuJoCo physics simulator to recreate these environments, which OpenAI says is a more efficient way of getting robots to practise than using a physical environment, which would need to be configured specifically for robots.
“Just as a real gym has different ‘environments’ – like a treadmill, a bench press, an exercise bike, and so on – the OpenAI Gym has environments for AI agents such as ‘make a toy figure walk’ or ‘make a car run up a slope’,” Peter Welinder, a researcher at OpenAI, told The Register.
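The "environment" abstraction Welinder describes is a small, uniform interface: an agent repeatedly observes the environment, picks an action, and receives a reward. A minimal sketch of that loop is below; the `SlopeEnv` class is a hypothetical stand-in (not part of Gym itself) that mimics Gym's `reset()`/`step()` convention for a "make a car run up a slope" style task.

```python
class SlopeEnv:
    """Hypothetical, Gym-style toy environment for illustration:
    a 'car' must climb from position 0.0 to position 1.0 up a slope."""

    def reset(self):
        # Start each episode at the bottom of the slope.
        self.position = 0.0
        return self.position  # the observation

    def step(self, action):
        # action: throttle in [-1, 1]; the slope drags the car back slightly.
        self.position += 0.1 * action - 0.02
        done = self.position >= 1.0
        reward = 1.0 if done else 0.0
        # Gym's step() returns (observation, reward, done, info).
        return self.position, reward, done, {}

env = SlopeEnv()
obs = env.reset()
done = False
steps = 0
while not done and steps < 100:
    obs, reward, done, info = env.step(1.0)  # always full throttle
    steps += 1
print(steps, done)
```

The point of the shared interface is that any agent written against `reset()` and `step()` can be dropped into any environment, whether that's a toy slope, an Atari game, or a simulated robot arm.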
Gym uses a reward system to encourage the robots to learn. Under a dense reward scheme, the closer the robot gets to the goal, the higher the reward; under the sparse scheme used by the new robotics environments, the robot receives no reward at all until the task is successfully completed.
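The difference between the two reward schemes can be sketched as a single goal-based reward function. This is an illustrative reconstruction, not Gym's actual source: the 0.05 success threshold and the function names here are assumptions made for the example.

```python
import math

def distance(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def compute_reward(achieved, goal, threshold=0.05, sparse=True):
    """Illustrative goal-based reward.

    sparse=True:  -1 every step until the goal is reached, 0 on success
    sparse=False: negative distance to the goal (a 'shaped' reward that
                  grows as the robot gets closer)
    The 0.05 threshold is a hypothetical value chosen for this sketch.
    """
    d = distance(achieved, goal)
    if sparse:
        return 0.0 if d <= threshold else -1.0
    return -d

# Far from the goal: no credit yet under the sparse scheme.
far = compute_reward((0.0, 0.0, 0.0), (0.5, 0.0, 0.0))
# Within the success threshold: the task counts as completed.
near = compute_reward((0.49, 0.0, 0.0), (0.5, 0.0, 0.0))
```

Sparse rewards are harder to learn from, since the robot gets no feedback until it succeeds, but they avoid the need to hand-tune a distance-based signal for every task.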