Computer Vision

What is Computer Vision?

Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From an engineering perspective, it seeks to understand and automate tasks that the human visual system can do. It involves the development of a theoretical and algorithmic basis for achieving automatic visual understanding. Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information.

Where did the term "Computer Vision" come from?

The field of computer vision began in the late 1960s at universities pioneering artificial intelligence. It was meant to mimic the human visual system as a stepping stone to endowing robots with intelligent behavior. Early studies in the 1970s formed the foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, and motion estimation. The 1990s saw the first time statistical learning techniques were used in practice to recognize faces in images.
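The edge extraction mentioned above is still a good first illustration of how vision algorithms work on raw pixels. A minimal sketch, using the classic Sobel operator (one of the edge detectors from that early era) on a NumPy array standing in for a grayscale image; the function name and the toy image are illustrative, not from any particular library:

```python
import numpy as np

def sobel_edges(img):
    """Estimate the gradient magnitude of a 2-D grayscale image
    by sliding 3x3 Sobel kernels over every interior pixel."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                 # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude per pixel

# A toy image: dark left half, bright right half.
img = np.zeros((5, 8))
img[:, 4:] = 1.0
mag = sobel_edges(img)
# The response is strongest along the vertical boundary between
# the two halves and zero in the flat regions.
```

Real implementations (e.g., in image-processing libraries) use fast separable convolutions rather than this explicit double loop, but the arithmetic is the same.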

How is "Computer Vision" used today?

Computer vision is used in a wide range of applications, from self-driving cars to medical imaging. In medicine, it is used for tasks such as detecting tumors or other malignant changes. In industry, it is used for quality control, where products are automatically inspected for defects. It is also used in autonomous vehicles for navigation and obstacle detection. The advancement of deep learning techniques has reinvigorated the field, with the accuracy of deep learning algorithms on several benchmark computer vision data sets surpassing prior methods.
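The deep learning methods mentioned above are built from stacked convolutional layers. A minimal sketch of one such layer's forward pass (convolution, ReLU nonlinearity, max pooling) in plain NumPy; the kernel here is random where a trained network would use learned weights, and the function names are illustrative:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid cross-correlation of a 2-D array with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Zero out negative responses; the standard nonlinearity."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, truncating any odd remainder."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# One layer of a convolutional network applied to a random "image";
# during training the kernel values would be learned, not random.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
features = max_pool(relu(conv2d(img, rng.random((3, 3)))))
```

Frameworks such as PyTorch or TensorFlow provide these operations as optimized building blocks; stacking many such layers and fitting the kernels to labeled data is what drives the benchmark results described above.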

Related Terms