How do you annotate training data when you can't give the task to humans? By using AI to mimic how humans would do it. That's the idea behind Annotell's winning entry in Zenseact's Edge AnnotationZ Challenge.
Annotell, like Zenseact a partner of AI Sweden, specializes in software tools that help companies annotate their training data by supporting the human annotators. The company entered, and won, the Zenseact Edge AnnotationZ Challenge with its approach to annotating on the edge.
A key factor was using data on how humans approach the annotation of images. Roland Meertens, Perception expert at Annotell, explains how:
“For example, humans change the view of the image by zooming, or changing the contrast or the colors. Humans also trace objects over multiple frames. If an object has been identified in the sequence before the frame that's being annotated, it's possible to understand what's in that frame as well, even if you only see a small part of the object.”
Jonatan Kallus, Machine learning engineer at Annotell, explains the last part further:
“If you have a complete point cloud from the LIDAR and the object is nearby, it's easy to understand what it is. But if it's only part of an object and/or further away, recognizing it becomes much harder. This is where the frame sequence can provide valuable information: If you have already detected a car in previous frames, you can calculate its approximate location in the key frame and, based on the available sensor data, find exactly where it is.”
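The idea of propagating an earlier detection into the key frame can be sketched as a simple constant-velocity extrapolation. The function name, the track representation, and the motion model below are illustrative assumptions for this article, not Annotell's actual implementation:

```python
# Illustrative sketch (not Annotell's actual method): given an object's
# detected positions in earlier frames, estimate where it should be in the
# key frame by assuming it moves at constant velocity between frames.

def predict_position(track, key_frame_index):
    """Extrapolate an (x, y) position for the key frame.

    track: list of (frame_index, x, y) detections from earlier frames,
           sorted by frame_index.
    """
    if len(track) < 2:
        # With a single observation, the best guess is the last known position.
        return track[-1][1:]
    (f0, x0, y0), (f1, x1, y1) = track[-2], track[-1]
    dt = f1 - f0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # velocity per frame
    steps = key_frame_index - f1
    return (x1 + vx * steps, y1 + vy * steps)

# A car seen at frames 10 and 12, moving 1.0 unit per frame along x:
track = [(10, 5.0, 2.0), (12, 7.0, 2.0)]
print(predict_position(track, 15))  # → (10.0, 2.0)
```

The predicted position would then serve as a starting point for a local search in the key frame's sensor data, which is where the exact location is finally determined.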
Based on these observations, Roland Meertens, Jonatan Kallus, Clément Dardenne, and Luca Caltagirone developed a method that imitates human annotation, allowing for automated annotation on the edge.