Author: Mirjam Neelen
I went to the EARLI special interest group (SIG) 27 conference to present a poster on the DEVELOP project. SIG 27 deals with online and more objective measures of learning processes. The idea is that learning processes are important to understand because learning is an ongoing process leading to an outcome. The same goes for assessment; that requires considering the process as well. In other words, it shouldn’t be so much about “what” but more about “how” and “why”.
The conference focuses on the state-of-the-art as well as innovative approaches to measure the process of learning. It also looks at innovative solutions to analyse data. The main question for the DEVELOP poster was: How can you objectively measure learning processes if you 1) don’t have control over the objectives, content, and quality of the learning interventions (after all, DEVELOP will work with a trial partner’s learning interventions) and 2) don’t know if the user has completed a learning intervention?
I’m far from an experienced “academic conference participant” but I was very interested in this particular one because of the theme. I was eager to discover what is happening in this space and I was curious how big the gap is between the types of evaluations and measurements that currently take place in the workplace learning space and this academic community that focuses on measuring learning processes in an objective manner. I wasn’t 100% sure what these “objective measures” included, although things like “eye-tracking” and “fMRI” crossed my mind. In the light of the DEVELOP project, I was hoping to find out if there are possible objective learning process measures that we might be able to implement in the project.
Structure of the conference
First of all, I found the formats quite interesting. Although the conference offered traditional paper and poster presentations, there was also a flip-the-paper format in which contributors delivered a 5-minute pitch on their research and then the participants picked the one they were most interested in. This way, the conference audience broke into smaller groups, which allowed for a lot of interaction. My favourite format was the no-or-not-perfect-data sessions, in which the presenter shared the study at hand in about 10 minutes, followed by a 10-minute discussion. The fact that it was so clear beforehand that the study was far from perfect enabled very fruitful and constructive interactions.
It’s really not possible to give an overview of all the sessions because there were around 90 in total and I attended about one third of them. Let me first paint the overall picture and then give some examples.
There was a strong focus on self-regulated learning (SRL), which can be defined as “the active, constructive process whereby learners set goals for their learning and attempt to monitor, regulate and control their cognition, motivation, and behaviour, guided and constrained by their goals and the contextual features in the environment” (Endedijk et al., 2006, p. 3), and on how to measure these learning processes objectively. The objective measures included eye-tracking; physiological data (e.g. fMRI, facial expressions, and wearables that measure things like electrodermal activity, heart rate, temperature, and acceleration of movement); wearables that measure social interaction in offline settings, such as sociometric badges; and online log files. Often, these objective forms of measurement were combined with more subjective ones, such as questionnaires or interviews, or video recordings of participants who were, for example, collaborating on a task. An interesting research method in this context is cued retrospective reporting, because having more objective measures allows you to ask more focussed questions based on them.
The domains in which these types of measurements were conducted varied widely. There were sessions on reading, writing, physics, and math in primary and secondary school settings, but also ones in the adult learning space, for example on collaborative writing, team work in general, civil engineering, and so forth.
As Paul Kirschner summarised, learning is cognitive (which includes processes that fall under what others would call “metacognitive”), affective/emotional, motivational, and interactional. All these aspects of learning were covered during the conference as well.
A few overarching points stood out:
- First, context is critical. Without it, there is no way to interpret your data or to extract any meaning.
- Second, it’s about combining situational variables and data.
- Last, it’s about aligning all elements, which is very challenging.
I will give some examples of the sessions that I attended. The first was from Professor Guoying Zhao from the University of Oulu, who discussed reading hidden emotions through micro facial expressions and heart rate estimation. It was a very detailed, technical, and fascinating talk. It’s quite incredible what they can see with their technology: emotions that remain hidden to the human eye.
The second was from Professor Roger Azevedo from North Carolina State University on multimodal, multichannel process data to measure and foster SRL in real-time with advanced technologies. Azevedo presented the major theoretical, methodological, and analytical challenges that come with using this type of data.
He also discussed multimodal, multichannel data recently used to detect, track, and model SRL processes while learning with several advanced learning technologies (ALTs). Lastly, he outlined the learning principles designed to enhance ALTs’ capability to provide real-time, intelligent support of learners’ SRL processes. Azevedo gave many examples, such as using personalised eye-tracking visual attention feedback on a construction site to identify which hazard stimuli did or did not receive attention. This information can also be provided as feedback to workers to communicate search process deficiencies, trigger self-reflection processes, and improve subsequent hazard search performance (Jeelani, Albert, & Azevedo, 2016).
The other sessions that I attended covered a wide range of domains; however, I would like to focus on the ones that discussed studies in a workplace learning context. One example of a no-or-not-perfect-data session is Wijga and Endedijk’s study. It focussed on using process mining to develop a dynamic model of self- and social regulation in relation to team performance in the workplace.
I was particularly interested in this session because although we all seem to accept that the regulation of learning in teams contributes to enhancing team performance, we’re not really sure how it happens and how to facilitate or support it. Wijga and Endedijk tracked ICT teams for several months. They videotaped team meetings, and individual team members kept diaries as well. The researchers aim to extract a dynamic model of the relation between the quantity and quality of self- and social regulation of teams in relation to team performance.
Self-regulation includes four flexibly sequenced phases of recursive cognition. These phases are:
- task perception,
- goal setting and planning,
- enactment,
- and adaptation (Winne & Hadwin, 2008).
Social regulation can be defined as ‘the processes by which multiple others regulate their collective activity. From this perspective, goals and standards are co-constructed. Socially shared regulation is collective regulation where the regulatory processes and products are shared’ (Hadwin & Oshige, 2011, pp. 253–254).
Workplace social interaction
I was also very interested in Endedijk, de Laat, and Ufkes’ flip-the-paper session, in which Endedijk discussed their study monitoring social interaction in a team in a healthcare context in order to identify to what extent this could be used as a proxy for informal social learning. Endedijk and colleagues continuously tracked the location of team members using WiFi tags and sociometric badges. They then used social network measures, such as density, centrality, and degree of brokerage, to calculate the dynamics of social interaction patterns. Their research is still in its early phases; however, the idea is that they can qualify the content of the interaction so that they’ll hopefully find some evidence of informal social learning instances. This research could be wonderfully combined with other types of social network analysis, and ideally this would be analysed in the light of (team) performance or, in DEVELOP’s case, in the light of competency or career development.
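To make the network measures mentioned above a bit more concrete, here is a minimal sketch of how density, centrality, and a brokerage proxy could be computed with the networkx library. The team members and their interaction edges are invented purely for illustration; in the actual study, such a graph would be derived from the WiFi-tag and sociometric-badge data.

```python
# Sketch: computing the social network measures mentioned above
# (density, centrality, and a brokerage proxy) with networkx.
# The interaction edge list below is hypothetical.
import networkx as nx

# Hypothetical interactions observed between five team members
interactions = [
    ("Anna", "Ben"), ("Anna", "Carla"), ("Ben", "Carla"),
    ("Carla", "Dev"), ("Dev", "Emma"),
]
G = nx.Graph(interactions)

density = nx.density(G)                   # share of possible ties that actually exist
centrality = nx.degree_centrality(G)      # who interacts with the most people
brokerage = nx.betweenness_centrality(G)  # who bridges otherwise separate members

print(f"density: {density:.2f}")                               # → density: 0.50
print("most central:", max(centrality, key=centrality.get))    # → Carla
print("strongest broker:", max(brokerage, key=brokerage.get))  # → Carla
```

Here betweenness centrality stands in for “degree of brokerage”: a member who lies on many shortest paths between colleagues (Carla, in this toy example) is the one bridging otherwise disconnected parts of the team.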
Although I found many of the sessions very niche and theoretical, for the workplace learning ones I could envisage valuable practical implications (it might be because workplace learning is my own expertise, who knows ☺). The type of research that Azevedo, Endedijk, and Wijga were discussing can really help to determine what type of interactions in what type of contexts in the workplace actually support learning. If we’re able to identify patterns that way, we could also find ways to support the interactions that matter most for learning and/or performance.
Long story short, I left inspired and somewhat overwhelmed, and for a minute I even considered going for my PhD. Dark Finland playing tricks with me, I guess.
Endedijk, M., Brekelmans, M., Sleegers, P., & Vermunt, J.D. (2006). Measuring self-regulation in complex learning environments.
Hadwin, A.F., & Oshige, M. (2011). Self-regulation, co-regulation, and socially shared regulation: Exploring perspectives of social in self-regulated learning theory. Teachers College Record, 113, 240–264.
Jeelani, I., Albert, A., Azevedo, R., & Jaselskis, E.J., (2016). Development and Testing of a Personalized Hazard-Recognition Training Intervention. Journal of Construction Engineering and Management, DOI: 10.1061/(ASCE)CO.1943-7862.0001256. Retrieved from http://ascelibrary.org/doi/pdf/10.1061/(ASCE)CO.1943-7862.0001256
Winne, P.H., & Hadwin, A.F. (2008). The Weave of Motivation and Self-Regulated Learning. In D.H. Schunk & B.J. Zimmerman (Eds.), Motivation and Self-Regulated Learning: Theory, Research, and Application (pp. 297–314). New York, NY: Routledge.