Chapter 18. Automatic Identification of Learning Objects in Order to Support Learning Styles


*Georgina Flores Becerra, **Carolina Medina Ramírez, Instituto Tecnológico de Puebla

***Omar Flores Sánchez, Universidad Autónoma Metropolitana-Iztapalapa

Abstract

In this work we propose a method to automate the selection of the learning object (lo) that best supports the user's learning style. For our proposal, we have considered that the lo's have a certain number of sections and didactic resources, and we have contemplated three learning styles: kinesthetic, auditory and visual. To achieve the automatic identification process, we represent the lo's, their sections and their didactic resources through fuzzy relationship matrices. The A matrix relates the learning styles to a set of didactic resources, while the B matrix relates the didactic resources to a set of lo's, indicating the percentage of use of each didactic resource across the sections of each lo.

Introduction

Currently, only a certain sector of the student population can access multiple digital resources to support their learning processes. However, given the huge amount of content on the Web, searching for and retrieving the resource that best supports a student would require a large amount of time. Instead, we present an automatic way to select learning objects appropriate to the learning style of students in order to achieve more effective support.

A learning object (lo) can be defined as any digital or non-digital entity to present certain knowledge (ieee, 2020) to make it easier to be learnt (Wiley, 2002). A lo should be reusable and self-contained, with a clear educational purpose, and must have at least three editable components: content, learning activities, and metadata (an external structure of information) to facilitate its storage and retrieval (Chiappe et al., 2007).

It is not enough to have a lot of digital educational resources; it is also necessary to automate the selection of the resources that best suit the needs of the subject who requires them. In order to have an impact for the benefit of students, it is important that lo's are designed to support their learning styles. Learning styles are cognitive, affective and physiological features, which serve as relatively stable indicators of how students perceive, interact with and respond to their learning environments (Alonso et al., 1994).

Neuro-Linguistic Programming (nlp) proposes a model of three learning styles based on the way we perceive information through our senses: (i) the visual system, used when we remember information that is presented to us through abstract and concrete images; (ii) the auditory system, used when we remember spoken information better (it is easier to remember a conversation than a note on the blackboard); and (iii) the kinesthetic system, used when we remember information by interacting with or manipulating it (Burón, 1996). In this work we have considered that a lo is a digital entity that is structured in sections, each of which presents its contents through didactic resources (Figure 18.1). A didactic resource is any fact, place, object, person, process or instrument that helps the teacher and the students to achieve the learning objectives (Bedmar, 2009).

A Model to Select LO’s

Considering that there is a set of learning objects that address the same topic using different teaching strategies and, therefore, different didactic resources, in this work we propose a method to automate the selection of the learning object that best supports the user's learning style. For our proposal, we have considered that the lo's have a certain number of sections, and that they handle a certain number of didactic resources, such as texts, videos, animations, audios, word searches and puzzles. We have also considered three learning styles: kinesthetic, auditory and visual. Through the metadata of the lo's, it is possible to extract information automatically about the resources students use.

We also represent the lo's, their sections and their didactic resources through matrices. First, the A matrix represents the learning styles versus a set of didactic resources, indicating the percentage to which each didactic resource supports a given learning style. Second, the B matrix represents the set of didactic resources versus a set of lo's, indicating the percentage of sections of each lo that use a specific didactic resource.

For example, based on the nlp, we have defined that the learning resources support the learning styles in the following proportions: a video supports the visual and auditory styles at 60 and 40%, respectively, while an audio supports completely the auditory style, and so on, as seen in the A matrix of Figure 18.2.
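As a sketch, the A matrix can be encoded as a NumPy array. Only the video (0.6 visual / 0.4 auditory), audio (1.0 auditory), text (1.0 visual) and puzzle (0.8 kinesthetic) entries are stated in the text; the remaining values below are illustrative assumptions, not the exact values of Figure 18.2.

```python
import numpy as np

# Rows: learning styles; columns: didactic resources.
styles = ["visual", "auditory", "kinesthetic"]
resources = ["text", "video", "animation", "audio", "word_search", "puzzle"]

# Fuzzy relation A: degree to which each didactic resource supports each style.
# Entries not stated in the text (animation, word search, and the puzzle/visual
# residue) are illustrative assumptions.
A = np.array([
    #  text  video  anima  audio  wordS  puzzle
    [  1.0,  0.6,   0.8,   0.0,   0.0,   0.2 ],  # visual
    [  0.0,  0.4,   0.2,   1.0,   0.0,   0.0 ],  # auditory
    [  0.0,  0.0,   0.0,   0.0,   1.0,   0.8 ],  # kinesthetic
])
```

Note that each column describes how one resource splits its support across the three styles, which is the shape the max-product composition below expects.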

On the other hand, suppose there are four lo's on the same topic, each with five sections that use videos, audios, texts, animations, puzzles and word searches. Suppose that lo 1 uses two texts, two videos and an animation; lo 2 uses three texts, an animation and a word search, and so on, as shown in Table 18.1. Then the proportions of didactic resources are computed in the B matrix of Figure 18.2. For example, lo 2 has no videos, audios or puzzles; it has three texts in sections 1, 2 and 5 (3/5 = 0.6), one word search in section 3 (1/5 = 0.2), and an animation in section 4 (1/5 = 0.2). The same computations are made for all lo's.
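These proportions are just per-lo counts divided by the number of sections, so the B matrix of Table 18.1 can be computed directly from the section lists:

```python
import numpy as np

resources = ["text", "video", "animation", "audio", "word_search", "puzzle"]

# Section contents of each LO, as listed in Table 18.1.
los = {
    "lo1": ["text", "video", "video", "text", "animation"],
    "lo2": ["text", "text", "word_search", "animation", "text"],
    "lo3": ["video", "puzzle", "video", "audio", "puzzle"],
    "lo4": ["video", "word_search", "puzzle", "audio", "word_search"],
}

def resource_proportions(sections, resources):
    """Fraction of sections using each didactic resource (one column of B)."""
    return np.array([sections.count(r) / len(sections) for r in resources])

# B: didactic resources (rows) versus LOs (columns).
B = np.column_stack([resource_proportions(s, resources) for s in los.values()])

# Column for lo 2: three texts (0.6), one animation (0.2), one word search (0.2),
# and zeros for videos, audios and puzzles.
```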

The A and B matrices are called fuzzy relationship matrices, and we can apply the max-product composition (Ross, 2010) to obtain a new matrix, which we call C, that represents the relationship between learning styles and lo's. Each component (i, j) of the C matrix is computed by taking the maximum of the element-by-element products of the ith row of matrix A and the jth column of matrix B. Figure 18.2 shows how to compute one element of C.
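With NumPy broadcasting, the max-product composition is a one-liner. The tiny example below uses the video/audio support values stated in the text for the visual and auditory rows; the B proportions are made up for illustration.

```python
import numpy as np

def max_product(A, B):
    """Fuzzy max-product composition: C[i, j] = max over k of A[i, k] * B[k, j]."""
    # A is (styles x resources) and B is (resources x lo's); broadcasting forms
    # every product A[i, k] * B[k, j], then the max is taken over k (axis 1).
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

# 2 styles x 2 resources, 2 resources x 2 LOs (illustrative B values).
A = np.array([[0.6, 0.0],    # visual support of {video, audio}
              [0.4, 1.0]])   # auditory support of {video, audio}
B = np.array([[0.4, 0.0],    # proportion of video sections in {lo 1, lo 2}
              [0.2, 0.6]])   # proportion of audio sections in {lo 1, lo 2}
C = max_product(A, B)
# C[0, 0] = max(0.6 * 0.4, 0.0 * 0.2) = 0.24
```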

In our example, the C matrix shows that lo 1 supports the visual style in greater proportion and, to a lesser extent, the auditory and kinesthetic styles; lo 2 supports the visual style; lo 3 supports the three styles more or less equally; and lo 4 supports the kinesthetic style in greater proportion. These results are acceptable: lo 1 is more visual since it has more videos and texts; lo 2 is more visual because it has more texts (which support the visual style at 100%); lo 3 is balanced because its videos support the visual style at 60%, its puzzles support the kinesthetic style at 80%, and its audios support the auditory style at 100%. Finally, lo 4 supports the kinesthetic and auditory styles because it uses videos (which support the visual and auditory styles), audios (which support the auditory style), word searches and puzzles (both of which support the kinesthetic style).

Table 18.1. The didactic resources in the sections of a set of LO's

     | Section 1   | Section 2   | Section 3   | Section 4 | Section 5
LO 1 | Text        | Video       | Video       | Text      | Animation
LO 2 | Text        | Text        | Word Search | Animation | Text
LO 3 | Video       | Puzzle      | Video       | Audio     | Puzzle
LO 4 | Video       | Word Search | Puzzle      | Audio     | Word Search

Results

Some results obtained from a set of experiments are presented in order to observe the behavior of the max-product composition. The A matrix has remained fixed with the values of Figure 18.2, while the B matrix has varied in the number of lo's and the number of sections.

In Table 18.2, the max-product composition computes that lo 1 supports the kinesthetic style, because it has more puzzles and word searches (shaded in gray); lo 2 and lo 4 support the visual style, because they have more animations and videos; and lo 3 supports the auditory style, because it has audios and an animation. The results of the experiments in Tables 18.3 and 18.4 can be read in the same way.
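The whole pipeline for this first experiment can be sketched in a few lines. The A values below are the same illustrative assumptions as before (only some of its entries are fixed by the text), yet with them the composition reproduces the Results 1 classifications:

```python
import numpy as np

styles = ["Visual", "Auditory", "Kinesthetic"]
resources = ["text", "video", "anima", "audio", "wordS", "puzzle"]

# Illustrative A matrix: only the video, audio, text and puzzle entries are
# stated in the text; the rest are assumptions.
A = np.array([
    [1.0, 0.6, 0.8, 0.0, 0.0, 0.2],   # visual
    [0.0, 0.4, 0.2, 1.0, 0.0, 0.0],   # auditory
    [0.0, 0.0, 0.0, 0.0, 1.0, 0.8],   # kinesthetic
])

# Section contents of LO 1 to LO 4, as listed in Table 18.2.
los = [
    ["audio", "puzzle", "puzzle", "text", "wordS"],
    ["anima", "anima", "video", "audio", "video"],
    ["anima", "audio", "wordS", "text", "audio"],
    ["anima", "wordS", "anima", "audio", "text"],
]

# B: resource proportions per LO; C: max-product composition of A and B.
B = np.array([[s.count(r) / len(s) for s in los] for r in resources])
C = np.max(A[:, :, None] * B[None, :, :], axis=1)

# Each LO is assigned the style with the largest membership in its column of C.
labels = [styles[i] for i in C.argmax(axis=0)]
print(labels)   # → ['Kinesthetic', 'Visual', 'Auditory', 'Visual']
```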

Table 18.2. Experiment results 1 of classification

     | Sect1 | Sect2  | Sect3  | Sect4 | Sect5 | Results 1
LO 1 | Audio | Puzzle | Puzzle | Text  | WordS | Support Kinesthetic
LO 2 | Anima | Anima  | Video  | Audio | Video | Support Visual
LO 3 | Anima | Audio  | WordS  | Text  | Audio | Support Auditory
LO 4 | Anima | WordS  | Anima  | Audio | Text  | Support Visual

Table 18.3. Experiment results 2 of classification

     | Sect1  | Sect2  | Sect3  | Sect4  | Sect5  | Results 2
LO 1 | Puzzle | Anima  | Text   | WordS  | Puzzle | Kinesthetic
LO 2 | Text   | Text   | Puzzle | Anima  | Puzzle | Visual
LO 3 | Audio  | Text   | Video  | Anima  | Audio  | Auditory
LO 4 | Video  | Anima  | Video  | Anima  | Anima  | Visual
LO 5 | Puzzle | Puzzle | Text   | WordS  | Audio  | Kinesthetic
LO 6 | Text   | WordS  | Anima  | Puzzle | Puzzle | Kinesthetic

Table 18.4. Experiment results 3 of classification

     | Sect1  | Sect2  | Sect3  | Sect4  | Sect5 | Results 3
LO 1 | WordS  | Video  | Video  | Audio  | Anima | Visual
LO 2 | Puzzle | Video  | WordS  | Audio  | WordS | Kinesthetic
LO 3 | Anima  | Text   | Text   | Text   | WordS | Visual
LO 4 | Text   | Puzzle | Anima  | Anima  | Anima | Visual
LO 5 | Audio  | Video  | WordS  | WordS  | Audio | Auditory
LO 6 | Audio  | Video  | Puzzle | Puzzle | WordS | Kinesthetic
LO 7 | Text   | WordS  | Audio  | WordS  | Anima | Kinesthetic
LO 8 | Video  | Audio  | Puzzle | Audio  | WordS | Auditory

Based on the results obtained, we can develop a software system that automatically selects the objects that most effectively support learning styles by applying fuzzy composition.

References

Alonso, C. M., Gallego, D. J., & Honey, P. (1994). Los estilos de aprendizaje: Procedimientos de diagnóstico y mejora. Mensajero.

Bedmar, J. (2009). Recursos didácticos en el proceso de enseñanza-aprendizaje: Temas para la educación. Revista Digital para Profesionales de la Enseñanza, 5.

Burón, J. (1996). Enseñar a aprender: Introducción a la metacognición. Mensajero.

Chiappe, A., Segovia, Y., & Rincón, H. Y. (2007). Toward an Instructional Design Model Based on Learning Objects. Educational Technology Research and Development, 55, 671-681.

IEEE Standard (2020). ieee 1484.12.1-2020 - ieee Approved Draft Standard for Learning Object Metadata. Learning Technology Standards Committee. https://standards.ieee.org/standard/1484_12_1-2020.html

Ross, T. J. (2010). Fuzzy Logic with Engineering Applications (3rd ed.). Wiley.

Wiley, D. (Ed.) (2002). The Instructional Use of Learning Objects. Agency for Instructional Technology (ait) and the Association for Educational Communications and Technology (aect).