Computer vision & images

Introduction

In this lesson we present an exemplar for teaching students about the Artificial Intelligence (AI) area of computer vision, and in particular the technique of image classification. Recall from Unit 1 that computer vision is the ability of machines to recognise objects in images or videos by mimicking human vision. We teach the machine using many examples of images, either labelled or unlabelled. We discussed some popular examples of computer vision, such as face tagging on social media photos, automatic recognition of number plates, and the vision used by self-driving cars.

This video shows how computer vision technology helps users recognise tree species from photos of their leaves. The project was developed by researchers from Columbia University, the University of Maryland, and the Smithsonian Institution. The app, 'Leafsnap', allows users to take images of leaves; a computer vision algorithm then detects the shape of each leaf. Based on what it recognises, Leafsnap provides a set of suggestions for the name of the tree, its scientific name and other available details. The app also allows the community to provide labels to improve the tree classification process. It also offers options to share the knowledge with friends by creating challenges to guess the name of a tree by looking at its leaves, flowers, fruits, petiole, seeds or bark.


Taking the idea of classifying species of leaves, this lesson looks at the classification of flower species.

Key AI Concepts

In this lesson we cover the key AI concepts of computer vision, image classification and supervised learning. The students will also learn how to label images into categories and how to train an AI algorithm. 

Recall the earlier video about supervised and unsupervised machine learning techniques by "ML TidBits" in Unit 1. This is a great video to use in the classroom or to support your explanation of our worked example, because it also references the classification of flowers.

To understand how computer vision "sees" images, we encourage you to review our computer vision lessons in Unit 1: Overview. Additionally, if you would like to support students' understanding of how computers process images, you might like to incorporate the unplugged resources developed by Tracy Camp and Cyndi Rader from the Colorado School of Mines. They have produced a wonderful CS Unplugged lesson plan that explores edge detection. The CS Unplugged site is a great source of additional unplugged teaching resources for Computer Science, containing unit plans, lessons and programming challenges.
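If you would like a plugged-in companion to the edge-detection activity, the short Python sketch below (our own illustration, not part of the CS Unplugged materials) shows one way a computer can "see" an edge: it slides a small Sobel kernel over a tiny greyscale image and reports large values wherever pixel brightness changes sharply.

```python
# A minimal sketch of edge detection, the idea behind the unplugged activity.
# It convolves a tiny greyscale "image" with Sobel kernels to highlight where
# pixel brightness changes sharply (i.e. an edge). Requires only NumPy.
import numpy as np

# A 6x6 greyscale image: a dark square on a bright background.
image = np.full((6, 6), 200.0)
image[2:5, 2:5] = 30.0

# Sobel kernels respond to horizontal and vertical brightness changes.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def convolve(img, kernel):
    """Slide the 3x3 kernel over the image and sum the weighted pixels."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

gx = convolve(image, sobel_x)
gy = convolve(image, sobel_y)
edges = np.sqrt(gx ** 2 + gy ** 2)   # large values mark edge pixels
print(np.round(edges).astype(int))
```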

Digital Technologies Curriculum

In these activities, students work with digital data and develop their understanding of how AI systems analyse image data. There is an opportunity to address additional content descriptors by expanding the lesson to include the design of algorithms (representing what the AI algorithm or model is doing) and the evaluation of solutions (evaluating the current project and possible future extensions). Students will be programming (in Python) and will be exposed to following (tracing) algorithms, as well as having opportunities to design, modify and create algorithms, depending on their year level and ability.

This lesson could address some of the following content descriptors in the 7-10 Australian Curriculum: Digital Technologies:
  • Analyse and visualise data using a range of software to create information, and use structured data to model objects or events (ACTDIP026, Years 7-8); Analyse and visualise data to create information and address complex problems, and model processes, entities and their relationships using structured data (ACTDIP037, Years 9-10).
  • Design algorithms represented diagrammatically and in English, and trace algorithms to predict output for a given input and to identify errors (ACTDIP029, Years 7-8); Design algorithms represented diagrammatically and in structured English and validate algorithms and programs through tracing and test cases (ACTDIP040, Years 9-10).
  • Implement and modify programs with user interfaces involving branching, iteration and functions in a general-purpose programming language (ACTDIP030, Years 7-8); Implement modular programs, applying selected algorithms and data structures including using an object-oriented programming language (ACTDIP041, Years 9-10).
  • Evaluate how student solutions and existing information systems meet needs, are innovative, and take account of future risks and sustainability (ACTDIP031, Years 7-8); Evaluate critically how student solutions and existing information systems and policies take account of future risks and sustainability and provide opportunities for innovation and enterprise (ACTDIP042, Years 9-10).
If you decide to also undertake Worked Example 2 and have students build an intelligent computer vision camera, they are also developing their knowledge of digital systems (hardware and software) and systems thinking skills. 

Worked example 1: Flower Classification Project

This activity demonstrates how to build a simple image classification project through programming in Python. Students implement an AI algorithm that is able to classify Iris flower species.

Prior to this activity, you might use some of the real world examples, previously mentioned, to contextualise the project. You might also have your students consider why this activity (of identifying plant life) might be useful to your local community or school and how this application could be used or expanded upon (e.g. by monitoring plant life in a school garden or partnering with a local park for tourism). 


(image source: https://medium.com)

Iris flower classification is a popular AI problem and is often considered the 'hello world' example of AI programming. The techniques developed to classify Iris species can be extended and applied to other problems, such as classifying fruits, vegetables, animals and shapes. In this example, we will use the famous Iris flower dataset, introduced by the British statistician and biologist Ronald Fisher in 1936 and based on data collected by Edgar Anderson.

Requirements

In this activity, we will use an executable document called Colaboratory (Colab). Colab, offered by Google, is a free environment for implementing AI algorithms in Python without needing to set anything up. Users code online in a web browser (Google Chrome preferred) and run the program to see the output. You will need a Google account to use Colab.

To familiarise yourself with Colab, we recommend starting by viewing this introductory tutorial.

Dataset

Download the dataset used in this activity from here.
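Once you have downloaded the dataset and uploaded it to your Colab session, you can take a quick look at it with pandas. In the sketch below, the file name 'iris.csv' and the 'species' column name are assumptions; adjust them to match the file you downloaded.

```python
# A quick way to peek at the dataset after uploading it to Colab.
# NOTE: 'iris.csv' and the 'species' column name are assumed; change them
# to match the file you actually downloaded.
import pandas as pd

iris = pd.read_csv('iris.csv')
print(iris.head())                      # first rows: four measurements plus the species label
print(iris['species'].value_counts())   # should show 50 examples of each species
```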

Instructions on developing AI project

Recall the example we discussed in Unit 1 on classifying kangaroos and wombats. We discussed the steps of collecting data, labelling, feature extraction, training, and testing. We will follow the same workflow to develop this AI project to classify Iris flower species.


(image: AI project workflow by CSER)
  1. DATA: The first step of developing an AI project is to collect or find the required data. In this example of classifying Iris flowers, we reuse the popular Iris flower dataset. It includes 150 data examples, 50 from each Iris flower species: Iris setosa, Iris virginica and Iris versicolor. If the collected data contained any missing values or noise, we would need to apply cleaning and pre-processing steps. The dataset used in this project has already been processed, so we will not need to perform any additional steps.

  2. LABELLING: We provide labels on the data that tell the machine about the attributes or features in each data example and how to group the data. In this project, we treat the category (or species) of Iris flower (setosa, virginica, and versicolor) as the label.

  3. FEATURE EXTRACTION: Features are the attributes that describe what is in the data. In this example, we consider the measurements of the flowers (sepal length, sepal width, petal length, petal width) in centimetres as features. 

  4. BUILD MODEL: We train the AI algorithm (or model) using the features (e.g. sepal length, sepal width, petal length, and petal width) and labels (e.g. Iris setosa, Iris virginica, Iris versicolor). Existing AI algorithms (e.g. Support Vector Machines, Logistic Regression, Neural Networks) can be used for this step (see the code sketch after these steps).

  5. TESTING: Testing can be done by measuring the effectiveness (e.g. accuracy) of the model on a test dataset and/or by predicting the label (i.e. species) from the features (e.g. sepal length) provided to the model.

To undertake the activity, please download the full instruction sheet we have created on how to implement the project step-by-step.
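For readers who would like a preview of what the instruction sheet walks through, here is a minimal sketch of steps 1-5 in Python. It uses scikit-learn (pre-installed on Colab) and the copy of the Iris dataset bundled with scikit-learn, so it runs without the downloaded CSV; the instruction sheet remains the guide for the classroom activity.

```python
# A minimal sketch of the workflow above (not a substitute for the instruction sheet).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# 1-3. DATA, LABELS and FEATURES: 150 examples, 4 measurements each,
#      labelled 0, 1, 2 for setosa, versicolor and virginica.
iris = load_iris()
X, y = iris.data, iris.target

# Hold some examples back for testing so the model is judged on unseen flowers.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# 4. BUILD MODEL: train a Support Vector Machine (any classifier listed above would do).
model = SVC()
model.fit(X_train, y_train)

# 5. TESTING: measure accuracy on the held-out data and predict one new flower.
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
sample = [[5.1, 3.5, 1.4, 0.2]]   # sepal length/width, petal length/width (cm)
print("Predicted species:", iris.target_names[model.predict(sample)[0]])
```

Swapping SVC() for another classifier, such as LogisticRegression(), is a simple way for students to compare algorithms within the same workflow.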

Assessment

We have demonstrated how to develop a basic classification model. In the real world, however, we may not have features as straightforward as sepal length or sepal width and may instead need to detect features from image pixels. In that case we have to use more advanced and complex algorithms, such as Neural Networks, to train our models. This demo shows how the same Iris flower classification problem has been implemented using a Neural Network algorithm.

To use this demo as a classroom activity, try to predict the species by adjusting the length and width of the sepals and petals (you can use the option 'Load hosted pre-trained model'). Invite students to evaluate and compare: How accurate is the Neural Network model? Is it performing better than the AI algorithm you just developed in Colab?
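One way to make the comparison concrete in the same Colab notebook is to train a small neural network on the same data and compare its test accuracy with the model students built earlier. The sketch below is our own illustration using scikit-learn and is separate from the hosted demo.

```python
# A small neural network on the same Iris data, for comparison with the model
# trained earlier (this is a separate scikit-learn model, not the hosted demo).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# One hidden layer of 10 neurons; max_iter raised so training converges.
nn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=42)
nn.fit(X_train, y_train)
print("Neural network accuracy:", nn.score(X_test, y_test))
```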


Worked Example 2: Building a Computer Vision Kit

Having students build their own intelligent camera that can see and recognise objects using Machine Learning is an excellent way to engage students in learning about digital systems and the inner workings of the devices that they are using for computer vision projects.

Activity

In this activity, we build a computer vision kit using the Google AIY Vision Kit, which contains a Raspberry Pi, a camera and accessories that all fit inside a cardboard cube. The Google AIY Vision Kit lets you build your own intelligent camera that can see and recognise objects using AI. You may undertake this activity using the kit, or with a combination of purchased materials and those available to you in the school (see the Google AIY Kit website for the list of materials required). Students could design and build the cardboard box or use a 3D printer to create a more robust enclosure.

Google AIY Vision Kit materials

To start, let us have a look at an overview of how the AIY Vision Kit works in the video below.



This web page provides comprehensive guidelines on how to build the vision kit. If you need more guidance, we recommend watching this video tutorial. Once you have built the kit and successfully connected to it, you will be able to try out the demos included with the kit.

Experiment with the kit


(image source: https://www.roboticgizmos.com/)

The following video demonstrates how you can experiment with the kit to recognise and classify various objects.



Alternative Computer Vision Projects

If you or your students have built the AIY Vision Kit but you aren't interested in using our worked example classifying flowers, the Google AI Experiments page has some excellent computer vision projects that can be used to test a vision kit! We have selected some examples below, but you can browse the full list of Google AI Experiments projects to find others that use computer vision.

The "Teachable Machine
Teachable Machine is an experiment that allows anyone to explore how machine learning works. Students can teach a machine using the camera within the browser without needing to do any coding. However, for those keen to look at the code, you can access it via the website using the GitHub link. 


"Move Mirror"
Move Mirror lets students explore poses identified within images. Simply use a camera and move around, and the computer matches your pose with pictures of people in the same pose in real time. The image database contains more than 80,000 pictures of people doing various activities, such as dancing, weight lifting, cooking, walking and more.
Move Mirror

Do you have a favourite computer vision project? Let us know in the AI Community discussion forum using the hashtags #AIsecondary and #ComputerVision!

Assessment

As an assessment activity for the creation of a vision kit, have students create a model of their digital system, supported by annotations and a verbal or written explanation. In their model, can they identify:
  • What are the hardware and software components?
  • How does the user interact?
  • How is the data transmitted, stored and processed?
  • Where are the inputs and outputs?
  • What is the workflow process - how does the system work?

Summary

In this lesson we have introduced you to two aspects of computer vision: developing a program that trains a computer to recognise objects using a general-purpose programming language, and building a digital system (an intelligent camera).

We have demonstrated activities that teach students in secondary years about classification techniques in computer vision within the context of classifying flowers. We encourage you and your students to consider what other natural objects you could classify in your environments. If you know of any other training datasets that could be used for such activities, please let us and your peers know in our AI Community!