ZSL London Zoo has employed machine learning technology to analyse photos of its residents and of animals in the wild, captured by automated camera traps around the world.
The camera traps use sensors that detect heat and motion, triggering the shutter when an animal walks past. Scientists then use the resulting images to develop conservation plans for each species.
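The article does not describe the trap hardware in detail, but a minimal sketch of a heat- and motion-triggered camera along these lines, assuming a Raspberry Pi with a PIR sensor (via gpiozero) and a camera module (via picamera), gives a sense of how such a trigger loop works; the pin number and filenames here are illustrative.

```python
# Minimal sketch of a motion-triggered camera trap, assuming a Raspberry Pi
# with a PIR heat/motion sensor and a camera module. The actual hardware
# ZSL deploys is not described in the article.
from datetime import datetime

from gpiozero import MotionSensor
from picamera import PiCamera

pir = MotionSensor(4)          # PIR sensor on GPIO pin 4 (assumed wiring)
camera = PiCamera()

while True:
    pir.wait_for_motion()      # block until the sensor detects heat/motion
    filename = datetime.utcnow().strftime("trap_%Y%m%dT%H%M%S.jpg")
    camera.capture(filename)   # trigger the shutter and save the image
    pir.wait_for_no_motion()   # avoid re-triggering on the same animal
```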
Although the camera technology has been in use for some time, the zoo found that hundreds of images were being taken every day and scientists were unable to analyse every single one, because there were simply not enough hours in the day. In some cases it was taking nine months to produce a report, by which time the situation in an animal's natural habitat may have changed completely.
So ZSL decided to adopt Google's Cloud AutoML Vision tool to analyse the pictures automatically, rather than taking on extra staff to turn the images into insights.
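The article does not show ZSL's integration code, but classifying a single camera-trap image against a trained AutoML Vision model follows a pattern like the sketch below, assuming the google-cloud-automl Python client; the project ID, region and model ID are placeholders rather than ZSL's.

```python
# Sketch of classifying one camera-trap image with a trained Cloud AutoML
# Vision model via the google-cloud-automl client. IDs are hypothetical.
from google.cloud import automl

PROJECT_ID = "my-conservation-project"   # placeholder, not ZSL's project
MODEL_ID = "ICN1234567890"               # placeholder trained model ID

prediction_client = automl.PredictionServiceClient()
model_name = automl.AutoMlClient.model_path(PROJECT_ID, "us-central1", MODEL_ID)

with open("trap_20200101T120000.jpg", "rb") as f:
    payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

response = prediction_client.predict(
    request=automl.PredictRequest(
        name=model_name,
        payload=payload,
        params={"score_threshold": "0.5"},  # drop low-confidence labels
    )
)

# Each result is a predicted label (e.g. a species) with a confidence score.
for result in response.payload:
    print(f"{result.display_name}: {result.classification.score:.2f}")
```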
“Previously, AI and machine learning technologies have been pretty inaccessible to an organisation like us,” said Sophie Maxwell, head of conservation technology at ZSL. “You need a data scientist, really, to write your own model, and the pre-trained models that are commercially available tend to be pretty basic – they can distinguish a cloud from a cup, but not deliver the intricate level of detail we need, like identifying a particular species within a group.”
She explained that ZSL had instead worked closely with Google to develop the usability of the AutoML Vision tool. It has now developed its own ZSL Instant Detect technology, which not only monitors the animals themselves but also risks such as poachers, and alerts rangers on site to take action.
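The internals of Instant Detect are not described in the article, but the alerting step it implies can be pictured as a simple rule on top of the classifier's output, along the lines of this sketch; the label names, threshold and notify_rangers() hook are assumptions made purely for illustration.

```python
# Illustrative alert rule layered on top of image-classification output.
# Label names, threshold and the notification hook are assumptions; the
# article does not describe how Instant Detect raises alerts in practice.
THREAT_LABELS = {"human", "vehicle"}
ALERT_THRESHOLD = 0.8

def notify_rangers(message: str) -> None:
    # Placeholder for whatever channel a real deployment would use
    # (e.g. a satellite or radio uplink in the field).
    print(f"ALERT: {message}")

def check_detections(detections: list[tuple[str, float]], site: str) -> None:
    for label, score in detections:
        if label in THREAT_LABELS and score >= ALERT_THRESHOLD:
            notify_rangers(f"possible {label} at {site} (score {score:.2f})")

check_detections([("human", 0.93), ("elephant", 0.88)], site="camera-07")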
“We’re working to build models based on existing data that we hold, so that for locations where we regularly do camera trapping, such as Borneo or Costa Rica, we have the models ready when new data comes in, so we can compare this year’s data with last year’s data, for example,” she added. “Over time we’ll have custom models for particular locations, particular species sets, for particular environments – such as forest, savannah or Antarctica. So far, the results are looking really good.”
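The kind of year-on-year comparison Maxwell describes could be as simple as counting classified detections per species per season, as in this sketch; the record fields and figures are invented for illustration and are not ZSL data.

```python
# Illustrative year-over-year comparison of per-species detection counts
# produced by the classification step. The records below are made up.
from collections import Counter

records = [
    # (year, site, species) as emitted after classification
    (2019, "Borneo", "orangutan"),
    (2019, "Borneo", "clouded leopard"),
    (2020, "Borneo", "orangutan"),
    (2020, "Borneo", "orangutan"),
]

counts = Counter((year, species) for year, site, species in records if site == "Borneo")

for species in sorted({s for _, s in counts}):
    last_year, this_year = counts[(2019, species)], counts[(2020, species)]
    print(f"{species}: {last_year} -> {this_year} ({this_year - last_year:+d})")
```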