Computer vision technology (a subset of artificial intelligence) is often assumed to be capable of seeing more than the human eye. In fact, because most computer vision neural networks are trained on data labeled by humans, this is not the case. A person may need a few extra seconds to zoom in on a photo to find what the computer flags, but the computer will not find something that is imperceptible to people.
Therefore, the value of computer vision lies in two things: reducing time and ensuring completeness. The time objective can be critical. A driverless car must recognize a traffic light, crosswalk, or pedestrian within milliseconds to take appropriate action. It must also recognize every pedestrian in its vicinity, leaving no possibility of blind spots.
When computer vision is used in an asynchronous workflow such as inspection (as opposed to the real-time driverless car scenario), it achieves both objectives by minimizing data overload. These benefits are magnified as the amount of data the AI processes grows.
For inspections of larger assets such as skyscrapers and bridges, a single survey can produce thousands of images or hours of video, which can take days to analyze. The risk is illustrated by the I-40 bridge in Arkansas, where a significant red flag was missed during review of drone inspection footage. When owners or engineers use drone inspection methods without AI, they are often so overburdened by the analysis that they return to traditional methods.
Drone inspection is otherwise ideal: it produces more complete results and a comprehensive record of the asset being inspected, delivering data in the form of photogrammetry, orthomosaics, and 3D models. Still, significant risk remains if that data is not paired with AI technology that identifies red flags automatically and reduces the burden of analysis overload.
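To make the idea of automated red-flag triage concrete, here is a minimal sketch (not taken from any specific product) of how it might look in Python with torchvision: a detection model, assumed here to have been fine-tuned on defect labels such as cracks and corrosion, scans a folder of drone photos and flags only those with high-confidence detections for human review. The model path, class count, image folder, and confidence threshold are illustrative assumptions.

```python
"""Sketch: triaging drone inspection images with a defect-detection model.

Assumes a Faster R-CNN model fine-tuned on defect labels (e.g. crack,
corrosion) has been saved to MODEL_PATH; path, class count, and threshold
are hypothetical.
"""
from pathlib import Path

import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

MODEL_PATH = "defect_detector.pt"    # hypothetical fine-tuned weights
IMAGE_DIR = Path("inspection_photos")  # hypothetical drone image folder
SCORE_THRESHOLD = 0.5                # flag anything above 50% confidence

# Build the architecture and load the (assumed) fine-tuned weights.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=3  # background + crack + corrosion (assumed)
)
model.load_state_dict(torch.load(MODEL_PATH, map_location="cpu"))
model.eval()

image_paths = sorted(IMAGE_DIR.glob("*.jpg"))
flagged = []  # images a human should review first
with torch.no_grad():
    for path in image_paths:
        # Read as uint8 CHW tensor, convert to float in [0, 1].
        image = convert_image_dtype(read_image(str(path)), torch.float)
        prediction = model([image])[0]  # dict with boxes, labels, scores
        if (prediction["scores"] > SCORE_THRESHOLD).any():
            flagged.append(path)

print(f"{len(flagged)} of {len(image_paths)} images flagged for review")
```

The point of a triage step like this is not to replace the engineer's judgment but to shrink the review set from thousands of images to the handful that most likely contain a red flag.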