Computer Vision: Feature Extraction (Unplugged)


In this lesson we will explore the Artificial Intelligence (AI) area of Computer Vision and the image processing technique of feature extraction. As we've seen previously, these techniques are used in a whole range of AI systems that enable machines to recognise objects in the world. Here, we are going to explore feature extraction using Australian animals. 

A real-world AI computer vision technology relevant to this lesson is the family of phone and tablet applications used to identify plant and wildlife species, such as "iNaturalist". iNaturalist is a community site that was launched in 2008 by three Masters students. It has since grown into a community of over 150,000 people worldwide who have captured 5.3 million photos representing 117,000 species! By uploading and labelling these images (identifying features using metadata) and tagging where they were taken (GPS data), users are inadvertently building one of the world’s largest datasets of species and helping train AI systems to automatically identify species through the application.

"Seek by iNaturalist" is a child-friendly version of iNaturalist built on the same dataset but designed for the classroom. While iNaturalist requires users to be 13 years and over, Seek by iNaturalist does not, because it does not actually post observations to iNaturalist or require registration. However, it still provides tools such as automated species identification and nature journaling, as well as the experience of participating. We tested it below on a couple of images of Australian animals and it seems to be working! 

[Image: Seek by iNaturalist identifying Australian animals]

Digital Technologies and Key AI Concepts

The outcome of this lesson is for students to have an understanding of "labels" and "features" that we might use to train an AI to see objects in images. Feature extraction is the conversion of data in the original format (for example, an image) into a series of quantitative (e.g. numeric) or qualitative (e.g. text description) features that can be used to distinguish and compare different objects in the original data. Let's recall our image that represents features and labels (see below).

Recall that humans train an AI system using supervised learning by providing labels that identify the objects and tagging "features" of each object. Alternatively, an AI system might discover these on its own by looking for patterns in the images using an unsupervised learning approach. We will touch on both approaches in the activities. 
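As a concrete illustration of what labelled feature data might look like, here is a minimal sketch in Python. The animals, feature names and values are illustrative examples only, not part of the lesson materials; the point is that each record mixes quantitative (numeric) and qualitative (text) features alongside the label.

```python
# Illustrative feature records: each animal is described by quantitative
# features (e.g. number of legs), qualitative features (e.g. body covering)
# and a label. Feature values here are simplified for illustration.
animals = [
    {"label": "kangaroo", "legs": 2, "has_tail": True,  "covering": "fur"},
    {"label": "emu",      "legs": 2, "has_tail": True,  "covering": "feathers"},
    {"label": "koala",    "legs": 4, "has_tail": False, "covering": "fur"},
]

def describe(animal):
    """Turn one feature record into a short qualitative description."""
    tail = "a tail" if animal["has_tail"] else "no tail"
    return f"{animal['label']}: {animal['legs']} legs, {tail}, covered in {animal['covering']}"

for a in animals:
    print(describe(a))
```

This is the same idea students practise on paper: converting what they see in an image into a small set of comparable features.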

Students will also build their skills and knowledge in Digital Technologies through exploring different types of data and sorting data to discover patterns. In particular it addresses the following content descriptors relating to data for F-4:
  • Recognise and explore patterns in data and represent data as pictures, symbols and diagrams (ACTDIK002, F-2)
  • Recognise different types of data and explore how the same data can be represented in different ways (ACTDIK008, Years 3-4)
Additionally, if extending the sorting and classification of image data to include algorithms (students describing algorithmic processes or creating algorithmic representations), the lesson can also provide opportunities for students to:
  • Follow, describe and represent a sequence of steps and decisions (algorithms) needed to solve simple problems (ACTDIP004, F-2)
  • Define simple problems, and describe and follow a sequence of steps and decisions (algorithms) needed to solve them (ACTDIP010, Years 3-4)
As we have used Australian animals, the activities in this lesson also provide an excellent opportunity to address content descriptors in the Australian Curriculum: Science (Biological Sciences). In these activities, students develop their understanding that "Living things have a variety of external features" (ACSSU017, Year 1) and that "Living things can be grouped on the basis of observable features and can be distinguished from non-living things" (ACSSU044, Year 3).

Worked Examples

We present examples of ways students can learn about identifying features for objects in images. We use the context of animals as an example, however, you could replace this with other topics such as plant life, insects (minibeasts), dinosaurs, sea creatures, transport, story characters and more!

Unplugged - Sorting and Organising Data by Features

In the first example, students can simply explore ways of sorting and classifying animals into groups based on their features. In the second part, they extract data from the features and represent their data. 

The teacher could use a storybook about animals that emphasises animal features, such as having eyes, ears, a nose or a tail, as the basis for this activity and to facilitate some initial discussion around animal features. A couple of examples could be:
  • What Do You Do with a Tail Like This? by Steve Jenkins - a book that compares and contrasts unique animal features.
  • Heads and Tails by John Canty - a riddle guessing book about animals based on their features.
After reading the book, students work in pairs or individually to find as many ways as they can to group images of animals according to their features. These could be images provided by the teacher or ones they have found themselves. Students decide on one particular grouping to present to the class. Encourage students to talk with their partner to justify their reasons for grouping the animals the way they have and to discuss the patterns, similarities and differences in animal features that they notice. Considering algorithms, students can use algorithmic language that incorporates branching (decisions) to explain their decision process. For example, "IF the animal has feathers THEN it goes into GROUP A (or BIRDS)".
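The branching rule quoted above can be sketched as a tiny sorting function. This is a hedged illustration only; the animals and features are invented examples, not part of the lesson materials.

```python
# Sort animals into groups by asking yes/no questions about their features,
# mirroring the classroom rule "IF the animal has feathers THEN it goes
# into GROUP A (BIRDS)". Features and animals are illustrative.
def sort_animal(features):
    if features.get("has_feathers"):
        return "BIRDS"
    elif features.get("has_fur"):
        return "MAMMALS"
    else:
        return "OTHER"

groups = {}
for name, feats in {
    "emu": {"has_feathers": True},
    "kangaroo": {"has_fur": True},
    "crocodile": {},
}.items():
    groups.setdefault(sort_animal(feats), []).append(name)

print(groups)  # {'BIRDS': ['emu'], 'MAMMALS': ['kangaroo'], 'OTHER': ['crocodile']}
```

Each branch corresponds to one decision in the students' spoken "IF ... THEN ..." explanation.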

In machine learning, this process of looking for patterns and letting groupings emerge mirrors an unsupervised learning technique: the machine discovers structure in the data without being given labels. 

What makes an AI system different to a traditional program is that we can give the AI system an image it has not seen before and, ideally, it will correctly identify it from what it learned in training. What happens when we have grouped our animals and are then presented with a brand new animal we haven't seen before? Students could be given an additional unseen animal and see whether they can fit it within their groupings, or whether they need to change their groupings to accommodate the new image!

The class can re-group and the teacher invites students to explain and justify the different ways they decided to group the animals based on the animal features. 

As an extension to the above activity, and for older year levels, students learn that data from images can be extracted and represented as quantitative and qualitative data that could be used to train an AI system to recognise objects. In this next part, students document data about their curated and grouped images. As a further extension, students can represent this as a graph or create a summary about patterns they notice in the data (e.g. most of the animals have legs, more animals have tails than ears, etc.). 
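The pattern summaries suggested above (e.g. "more animals have tails than ears") amount to tallying a feature across the dataset. A minimal sketch, with an invented dataset whose values are illustrative only:

```python
# Tally features across a small, illustrative dataset to surface
# patterns such as "more animals have tails than visible ears".
animals = [
    {"name": "kangaroo", "has_tail": True,  "visible_ears": True},
    {"name": "emu",      "has_tail": True,  "visible_ears": False},
    {"name": "snake",    "has_tail": True,  "visible_ears": False},
]

tails = sum(a["has_tail"] for a in animals)
ears = sum(a["visible_ears"] for a in animals)
print(f"Tails: {tails}/{len(animals)}, visible ears: {ears}/{len(animals)}")
```

The same counts students make by hand on their sorted images could feed directly into a bar graph.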

To train an AI system, this data would be used by humans to tag the image data in a supervised machine learning approach. 

Activity: Plugged Extension

As a fun "plugged" extension to the previous activity of exploring features, the Google AI Experiment "Safari Mixer" is an AI tool that lets you invent a brand new animal by combining the head, body and legs of other animals. It will then combine the parts and bring the creation to life! This activity can be played on a device with a microphone or using a Google Assistant or Google Home device. 

[Image: Safari Mixer]

Leading into the next lesson idea, students could use these invented animals, each with its own distinctive set of features, for a board game of "Guess the Animal".

Unplugged Feature Extraction - Guess the Animal

After students have learned about features and labels, they could implement what they have learned into a "Guess Who" style game using animals (or the type of data they have been working with). This familiar board game is a great example of guessing a character (or animal if replaced) by their features or characteristics. For example: "Does your animal have ears?" or "Does your animal have a tail?".

Students can create their own animal cards to insert into a Guess Who board game, or simply play the game on the floor with a divider between the two students so that they cannot see the cards being turned over. Linking with Digital Technologies, there is also the opportunity for students to create algorithm flowcharts that demonstrate ideal guessing processes for an animal (or character) using decisions/branching, iteration and conditions. 
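The guessing process students capture in their flowcharts can be sketched as a loop in which each yes/no question about a feature filters the remaining candidates. The animals and features below are invented for illustration, not taken from the lesson materials.

```python
# "Guess the Animal" question strategy: each yes/no question about a
# feature narrows the pool of candidates (decision + iteration).
candidates = {
    "kangaroo": {"has_tail": True,  "has_feathers": False},
    "emu":      {"has_tail": True,  "has_feathers": True},
    "koala":    {"has_tail": False, "has_feathers": False},
}

def ask(feature, answer, pool):
    """Keep only the animals whose feature value matches the answer."""
    return {name: f for name, f in pool.items() if f[feature] == answer}

# Suppose the secret animal has a tail but no feathers:
pool = ask("has_tail", True, candidates)   # kangaroo and emu remain
pool = ask("has_feathers", False, pool)    # only kangaroo remains
print(list(pool))  # ['kangaroo']
```

A good question is one that splits the pool roughly in half, which is exactly the strategy students discover when refining their flowcharts.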


In terms of understanding AI concepts and Digital Technologies, we would be looking for students to demonstrate the following:
  • Can the student correctly identify features of an object? 
  • Can the student correctly find relevant and appropriate data for the activity (e.g. images of Australian animals)?
  • Can the student extract features from an image and describe them using quantitative (numeric) and qualitative (text) data? 
  • Can students correctly sort and classify objects based on their features?
  • Can students articulate how an AI algorithm or computer "sees" an image by using feature extraction or through finding patterns?
  • Can the student articulate their decision-making process for sorting and classifying images using algorithmic terminology or a representation (e.g. a flowchart)?
Summative assessments for this lesson sequence could involve students preparing visualisations for their image groupings on a presentation slide or poster, as well as the production of any tables and visualisations about the features extracted from images they found. An additional final project could be to assess their board game, along with any instructions and algorithms that demonstrate how decisions are made when selecting characters or animals. 

Assessment data to monitor student learning through these activities could be collected using checklists and/or think-aloud discussions with students. For ideas on how to generate assessments and templates, visit the Digital Technologies Hub Assessment site.


In this lesson we have unpacked how you might teach students about the fundamentals of features and labels for image processing. While we can tell the AI algorithm what to see, how does it know what these features look like? How can it detect them in an image? In the next lesson, we are going to explore how AI systems use data representation and edge detection to "see" images and unpack some ways that you might introduce this in the primary classroom.