1. Introduction
Although the world of MOOCs has only just “found its feet” (the first course called a MOOC took place in 2011), the courses already have their own identity. They have their defenders and their detractors, practice and theory, research that analyses the courses run so far, and experts who seek to anticipate what the future holds. The MOOC phenomenon is so new, and of such importance and impact, that it is in itself worthy of study (Adell, 2013).
In an earlier paper (Sánchez & Prendes, 2013), we reported an interesting study from the UK Department for Business Innovation and Skills which, following an exhaustive review of MOOCs, concluded that there are two trends: one involving those who embrace MOOCs enthusiastically and value their potential for learning, even going so far as to talk about a transformation of online education; the other comprising detractors, who are critical and sceptical of the phenomenon. The critical stances do, however, capture an idea that we believe is important: the pedagogical literature around the turn of the century valued the potential of personalising online education; in other words, one of the most outstanding advantages of online education is the possibility of communicating with students over distances, of having interaction and feedback on their individual learning processes. On this view, formative assessment is given more importance than summative assessment. Nevertheless, the massification inherent to MOOCs in some way diverges from this educational trend and, as the report cited states, represents a victory for the packaging of content.
Before the advent of MOOCs, online assessment had been widely studied in the literature. Besides being of scientific interest, the point was made that it was one of the didactic aspects of online teaching that had, in the main, been least susceptible to pedagogical changes and innovations (Dorrego, 2006; Prendes, 2007). MOOCs, while bringing the “massifying” nature to online courses, also mean that we have to consider alternative models for assessing online students’ learning.
It is true that there is a broad typology surrounding MOOCs, which can be grouped into two trends: xMOOCs and cMOOCs. These categories differ both in their origins and in the pedagogical model underpinning each: xMOOCs focus on content and the transmission of information, while cMOOCs are more constructivist and prioritise interaction among participants (Brown, 2013). The distinguishing characteristics are well expressed in the table below, drawn up by Yuan, Powell, and Olivier (2014).
| xMOOCs | | cMOOCs |
|---|---|---|
| Scalability | Massive | Network and connections |
| Open access and restricted licenses | Open | Open access and free licenses |
| Individual learning on a single platform | Online | Network learning through various platforms and services |
| Acquiring knowledge and skills | Course | Common practices, learning and understanding |

Table 1. Differences between xMOOCs and cMOOCs (Yuan, Powell & Olivier, 2014).
Scalability is a term used in computer science to refer to the ability of a system or network to handle a growing amount of work, and to the ability of the system to react and adapt without loss of quality (Bondi, 2000). This idea relates to xMOOCs in that massification here means participation in a training system that grows in terms of resources and people, while the massive nature of cMOOCs lies in their potential to establish learning communities and create new connections.
Moreover, cMOOCs are usually run on various platforms and do not usually have a single virtual environment, as occurs with xMOOCs. Many cMOOCs use blogs, wikis and open social networks to establish connection networks.
Basically, xMOOCs focus on content and its acquisition individually, while cMOOCs focus on community learning and the potential to learn by creating and joint collaboration, designing resources and generating the actual content.
Given this panorama, we will focus on an aspect that has always been of great relevance in any formative action: educational assessment and how MOOCs are addressing this very important aspect of the educational process.
2. A review of MOOCs and their methods of assessment
Assessment is not dealt with in the same way in the two large MOOC typologies described above. In xMOOCs, the potential lies in the content, where the formative action is focused, and so the main thing is to assess learning in relation to content, and this has been done mainly through multiple choice tests. These can be done on the same platform at the end of each week or unit, and there may also be a final test.
In contrast, in cMOOCs, peer assessment prevails, which means that one student assesses another. In order to facilitate and, to some extent, standardise the process, it is common to provide students with assessment rubrics so that they know what aspects to assess.
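To make the process concrete, a rubric of the kind provided to peer assessors could be represented as a set of criteria with maximum scores, each filled in by the assessing student. This is a minimal sketch with invented criteria and function names, not a description of any particular platform:

```python
# Illustrative sketch of rubric-based peer assessment.
# The criteria and their maximum scores are invented for this example.

RUBRIC = {
    "addresses the task set": 4,
    "clarity of argument": 3,
    "use of sources": 3,
}

def rubric_score(peer_scores: dict) -> int:
    """Validate one peer's scores against the rubric and return the total."""
    total = 0
    for criterion, maximum in RUBRIC.items():
        score = peer_scores[criterion]
        if not 0 <= score <= maximum:
            raise ValueError(f"score for '{criterion}' must be 0..{maximum}")
        total += score
    return total

print(rubric_score({"addresses the task set": 4,
                    "clarity of argument": 2,
                    "use of sources": 3}))  # 9
```

The point of closed criteria like these is precisely the standardisation mentioned above: every assessor scores the same aspects on the same scale.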
| xMOOCs | cMOOCs |
|---|---|
| Multiple choice tests at the end of each week | Assessment of tasks or resources created |
| Multiple choice final examination | Use of rubrics |
| Student recognition protocols (identity check): webcam, digital ID… | One fellow student provides feedback on another |
| A specific platform is developed to accommodate all the information | Developed on the web using various resources and telematic tools |

Table 2. Assessment in xMOOCs and cMOOCs.
Table 2 can be said to represent the original trend in understanding assessment. In short, in xMOOCs there is basically a summative assessment, while in cMOOCs the idea is to assess the process. Nevertheless, peer assessment is also being incorporated into the former as there are now platforms such as Coursera that allow multiple choice tests to be used alongside peer assessment.
We would also highlight that it is possible to run a MOOC without using any single specific platform, a strategy that is used mainly in cMOOCs, which start from a website but then develop through blogs, wikis, social networks and any other tools used by the learner community. Peer assessment is not so widely used in this case.
The assessment trends found in MOOCs do not differ much from trends in assessment more generally. To some extent, assessment based on objective multiple choice tests is a classic model widely used in teaching, as becomes clear if we consider the two models put forward by Escudero in 1998, long before MOOCs took off.
| From assessment of “before”... | ...to another version of assessment |
|---|---|
| Assessment of results, in particular those related to the students’ learning | Assessment of processes and of each of the components and subjects involved in the teaching |
| Quantitative | Qualitative |
| Hierarchical, bureaucratic top-down control | Social, democratic, participative control |
| Emphasis on decisions relating to design, methodology, tools | Emphasis on the underlying theory, on systems of criteria and values, on the importance of the contexts in which what is assessed happens |
| Decision taking and reasoning | Stimulating social debate and reflection |
| Grading, report preparation and administrative decisions | Emphasis on use and on social and educational implications |

Table 3. Two models of assessment (Escudero, 1998).
It is interesting to compare Tables 2 and 3 as a means of reflecting on the teaching model we are introducing when adopting a specific decision about the assessment of students.
2.1 Advantages and limitations of assessment through tests
Online assessment through multiple choice cloze tests is nothing new (Prendes, 2007): before MOOCs, such tests were already used, on-site or online, in most virtual campuses. The use of multiple choice tests need not be negative, provided they are incorporated into a broader, coherent methodology. Among the advantages of objective testing for the teacher is that it offers reliable measurement of an area of knowledge; for the student, within a wider framework and accompanied by appropriate feedback, it can reinforce the learning of content.
The problem arises when MOOCs focus exclusively on this aspect, i.e., when a test of this nature has to be passed to obtain a certificate.
On the web, taking the concept of the test to the extreme may lead to the mistaken idea that assessing educational content by test also indicates a whole range of educational parameters (quality of the course, teachers, content, etc.), and that other components such as skills can easily be assessed this way, when these really require another type of formative assessment. xMOOCs usually employ tests based on answers to previously seen content, and therefore foster rote and conceptual learning. Without additional strategies and assessment tools, we will be forgoing more recent, process-based assessment models that take into account not so much content as students’ skills.
This perspective of assessment based on a succession of tests corresponds to the model described in Table 3 as a “before” assessment model, and it is worth stressing that the “before” was stated by Escudero in 1998.
2.2 Advantages and limitations of peer assessment
Although peer assessment has characterised assessment in cMOOCs, it is true that more and more platforms associated with xMOOCs also include it.
On the plus side, we can point to recent research (Luo & Robinson, 2014) reporting that students like peer assessment on MOOCs. In this study, the students indicated they had received fair grades and useful feedback, which they considered to be a motivating element.
As regards the validity of scores given to students, Piech, Huang, Chen, Do, Ng, and Koller (2013) cite the study by Kulkarni, Pang-Wei, Chia, Papadopoulos, Koller, and Klemmer (pending publication), which states that, on average, the grades awarded by a group of facilitators are not significantly different from those given by students in peer assessment.
From a general perspective, students’ becoming assessors may bring various spin-offs. Assessing in itself is an important learning experience, although it does need to be clearly designed and explained to the students.
It is also true that peer assessment seeks to make assessment formative while taking into account the huge number of students enrolled on MOOCs. This may at first seem to be an enriching system but, as O’Toole (2013) states, rather than calling it peer assessment we should be calling it peer grading, since in many cases what ultimately happens is that the students are given closed assessment rubrics along with a series of instructions on how to apply them, which rather defeats the object.
Downes (2013) analyses another two difficulties associated with this method, against which it is difficult to argue:
- “The blind leading the blind”. An interesting name for a phenomenon that may arise when leaving assessment in the hands of the learners, as they may create their own rules on the basis of criteria that are not well founded.
- “The charlatan”. Some students are not experts but they believe they are, which may lead them to issue mistaken opinions, or even to erroneously consider another student to be an expert.
3. Back to the past or looking forward? Assessment alternatives in MOOCs
In 2013, The Chronicle carried out a study with over 100 MOOC teachers. Among other questions of interest, they were asked which assessment method they had used. We would highlight here a comment made by a teacher who had worked with the Coursera platform (Kolowich, 2013): “I would, of course, prefer to be able to read each piece of work carefully and follow its logic, but that is a technological issue that I believe Coursera will soon solve.” We do not agree that this is a technological problem, since the careful reading of students’ work when there is a huge number of students is a problem of teacher-student ratio, so a huge number of teachers would be needed to balance things out. Where technology can help to some extent is in facilitating grading, as we will see.
In the same vein, Sandeen (2013) explains that 74% of teachers stated they had used automatic marking and 34% had made use of peer assessment. Of the former, 67.1% had found automatic marking to be highly reliable and 30% said it was fairly reliable. Among the 34% who used peer assessment, the level of reliability was reported as being lower, with 34% claiming it was highly reliable and 71% describing it as fairly reliable.
The huge numbers of students mean that assessment is complicated, so any software that eases the task is welcome; however, MOOCs should not represent a step backwards in teaching. If we understand assessment as a qualitative aspect that forms part of the learning process, we need to seek alternatives that enable us to marry this to the high number of students on the courses.
Below, we look at some proposals that could be used on MOOCs as part of an appropriately designed strategy. Combining a variety of these can broaden our outlook and offer new opportunities.
3.1 Peer assessment 2.0
We have already spoken about peer assessment and its drawbacks. Even so, that does not mean that it should be automatically discarded. As Downes (2013) states, peer assessment may work very well for blog entries or discussion forums.
In order to avoid problematic issues with the model (a student not taking the assessment seriously, failing to understand the rubric, or not knowing how to apply it), Professor Duneier, in an article in The New York Times, explains that in the method he used each student had to assess five fellow participants in order to receive his or her own grade, which was the mean of the grades awarded by five fellow participants (cited in Lewin, 2012).
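The grading rule Duneier describes can be sketched in a few lines. This is a hypothetical illustration (the names and the five-review threshold encoding are our own): a student's grade is withheld until they have assessed five peers, and it is the mean of the five grades their own work received.

```python
# Sketch of the mean-of-five peer grading rule described above.
# Names and data are hypothetical; only the rule itself comes from the text.

REQUIRED_REVIEWS = 5

def final_grade(reviews_done: int, grades_received: list):
    """Return the mean peer grade, or None while the requirements are unmet."""
    if reviews_done < REQUIRED_REVIEWS or len(grades_received) < REQUIRED_REVIEWS:
        return None  # grade withheld: assess five peers, receive five grades
    return sum(grades_received) / len(grades_received)

print(final_grade(5, [8, 7, 9, 6, 10]))  # 8.0
print(final_grade(4, [8, 7, 9, 6, 10]))  # None: only four reviews completed
```

Making the student's own grade conditional on completing their reviews is what addresses the "not taking assessment seriously" problem.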
We give the name peer assessment 2.0 to a new peer assessment model that includes other approaches that may make the model more effective. One option is to understand peer assessment as yet another task within MOOCs, which may even be assessed by a fellow participant, in order to ensure higher reliability. This would imply designing procedures based on triangulation, which is highly characteristic of qualitative assessment.
Designing a mixed peer assessment approach based on peer assessment combined with assessment by experts who supervise the process may prove to be more effective. Similarly, tasks can be assigned in peer assessment according to the state of the learning and the participant’s context, instead of randomly (O’Toole, 2013).
3.2 Network-based grading
Network-based grading, as Downes (2013) explains, is a model in which people are not assessed on individual pieces of work (as mainly occurs in peer assessment) but according to a network metric, a classic approach in the assessment of collaborative tasks (see Prendes, 2003). Downes gives the example of Klout, an application that measures our influence in social networks by assigning a value between 0 and 100 based on various factors such as retweets, mentions and friends on Facebook (related to the idea of learning analytics, dealt with later), and which, since the beginning of 2014, has allowed the score, or influence, to be increased by creating content.
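A toy metric in this spirit, adapted to a MOOC, could weight a learner's network activity and cap the result on a 0–100 scale. The factors and weights below are illustrative assumptions of ours, not Klout's actual algorithm:

```python
# Toy influence metric in the spirit of Klout, adapted to a MOOC.
# The factors and weights are illustrative assumptions, not Klout's algorithm.

WEIGHTS = {
    "posts_created": 3.0,     # creating and sharing content weighs most
    "replies_received": 2.0,  # others engaging with the learner's contributions
    "connections": 1.0,       # size of the learner's network
}

def influence_score(activity: dict) -> float:
    """Combine activity counts into a single 0-100 score."""
    raw = sum(WEIGHTS[k] * activity.get(k, 0) for k in WEIGHTS)
    return min(100.0, raw)  # cap at 100, as in a 0-100 scale

print(influence_score({"posts_created": 10,
                       "replies_received": 8,
                       "connections": 12}))  # 58.0
```

Weighting content creation most heavily reflects the shift Downes notes: the score rises primarily through creating and sharing, not merely through being connected.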
Developing a similar idea for MOOCs so that the importance of our rating lies in how we create and share content would seem to be interesting indeed.
3.3 Portfolio
The possibilities of the portfolio in online assessment have already been outlined (Prendes & Sánchez, 2010). Among other things, portfolios enable an understanding of the students’ learning process as well as providing them with useful feedback. Downes (2013) explains how a portfolio in a MOOC can serve as a resource that a student has to present as proof of his or her learning. The combination of portfolio strategies with peer assessment and the semantic web is of considerable interest given its flexibility and the possibility of using a wide range of complementary techniques.
3.4 The mantle of the expert
O’Toole (2013) proposes the mantle of the expert as an alternative assessment in MOOCs. Designed by Dorothy Heathcote in 1985, this methodology starts by grouping students who are designated as expert assessors according to their area of knowledge. Together, they have to perform the assessment and, to do so, they negotiate their expectations with other groups of assessors. The teacher may act as a facilitator, aiding consensus and assuring compliance with minimum requirements and institutional rules of the course. Each group is responsible for its own section being developed by the other groups, and they can create a product to explain the whole process followed.
3.5 Semantic web
Codina (2003) explains that the semantic web is the name of an aspiration, an aim that, if totally fulfilled, will radically change the web as it is today. To speak of the semantic web implies that “the significance of the data published can be known by humans and applications alike” (Fernández-Breis, Prendes-Espinosa, Castellanos-Nieves, Martínez-Sánchez, Valencia-García, & Ruíz-Martínez, 2007). This means that information should be published so as to enable two types of interaction: between content and human users, and between content and applications.
In today’s web we move through information via links, obtaining various resources and jumping from one to another, but the computer cannot interpret that information. In the semantic web, we first give the computer a basis for the sense of the content, so that when we start to move around the information we do so within a more logical and organised structure of knowledge (Sánchez-Vera, 2010), and this opens up a wealth of educational applications (Prendes, 2013).
Codina’s aspiration is ever closer. There are now programs that allow a student to take assessment tests and, through the semantic web and ontologies, to receive assessment and feedback. One example is OeLE, which allows online examinations with open questions and gives students feedback on their learning process (Fernández-Breis et al., 2007; Sánchez-Vera, 2010).
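The underlying idea can be illustrated in a drastically simplified form: an open answer is compared against the concepts the marker expects, and the feedback reports which are missing. This sketch is our own illustration only; OeLE itself works with ontologies and semantic annotation, not naive keyword matching, and the expected concepts below are invented.

```python
# Drastically simplified illustration of assessing an open answer against
# expected concepts. OeLE uses ontologies; this toy uses keyword matching.

EXPECTED = {"formative", "feedback", "process"}  # invented expected concepts

def assess_answer(answer: str):
    """Return coverage (0..1) of expected concepts and the ones still missing."""
    words = set(answer.lower().split())
    found = EXPECTED & words
    return len(found) / len(EXPECTED), EXPECTED - found

coverage, missing = assess_answer(
    "Assessment should be a formative process with feedback")
print(coverage)  # 1.0 — all expected concepts are covered
```

The pedagogical point survives the simplification: the student receives feedback on the content of an open answer without a human marker reading each one.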
MOOCs have provided a new opportunity for these types of technologies. Sandeen (2013) explains how we are witnessing a push towards incorporating these technologies into the MOOC context, and some teachers who have participated in MOOCs indicate that consideration should be given to the idea of using automatic assessment technologies in them (Kolowich, 2013).
3.6 Learning analytics
Learning analytics has been defined as “measuring, collecting, analysing and communicating data about learners and their contexts with the purpose of understanding and optimising learning in the context in which it takes place” (University of Bristol, 2013). It is a growing field of study that, like MOOCs, is being tackled from various angles. Its potential lies in combining information from various sources in order to enhance learning conditions during the process and to consider various viewpoints as to what the student is doing (Booth, 2012). Data can be used from platforms and from actions that students have taken when using other tools and connecting with contacts. As with other phenomena, these analytics may have a quantitative or qualitative perspective, with emphasis on the type of information sought, how it is to be analysed and the use to be made of it.
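The combination of sources described above can be sketched as event aggregation: counting what each student does on each tool gives both the individual view and the group overview. The data and names here are invented for illustration:

```python
# Sketch of learning analytics as described above: combining event data
# from several sources (platform, forum, blog). All data are invented.

from collections import Counter

events = [
    ("ana", "platform", "video_viewed"),
    ("ana", "forum",    "post_created"),
    ("ben", "platform", "quiz_attempted"),
    ("ana", "blog",     "entry_published"),
    ("ben", "platform", "video_viewed"),
]

def activity_views(events):
    """Count events per (student, source) and per source overall."""
    individual = Counter((student, source) for student, source, _ in events)
    overall = Counter(source for _, source, _ in events)
    return individual, overall

individual, overall = activity_views(events)
print(individual[("ana", "forum")])  # 1 — one forum event for this student
print(overall["platform"])           # 3 — platform events across the group
```

Even this trivial aggregation yields the two perspectives mentioned: information of an individual nature about each student, and general trends within the group.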
We should not lose sight of this phenomenon, as it may provide interesting information that helps to ascertain what participants on MOOCs are doing. It may also help when it comes to performing assessment, by providing information of an individual nature about each student and also an overview of general trends within a specific group.
4. Conclusions: pedagogy-based technology
It is crucial to be aware that, depending on the stance we take towards assessment, some tools and methods will serve while others will not. If we consider assessment to be ascertaining the degree to which a student has assimilated a certain amount of information, then multiple choice may meet our aims. The problem arises when we consider that assessment makes little sense unless it forms part of the overall teaching process. If we understand assessment as part of learning, requiring procedural and formative strategies, then the mass nature of these courses does indeed become a problem. From this perspective, we need at least to know how to position MOOCs correctly.
MOOCs may be very useful and proffer important information that will enrich our personal learning environment (PLE). However, if we take assessment and feedback as key parts of the educational process, we can understand why MOOCs cannot easily replace a good online non-massive course, because the facilitator is essential for guiding and contextualising the students’ learning process. We therefore think that MOOCs are one of the many possibilities that give flexibility to the educational offering; while they can never be a substitute, they can indeed be a complement.
Sandeen (2013) considers that assessment has been central to the development of MOOCs from the outset, yet we firmly believe that many MOOCs focus on developing quality content or network learning and relegate assessment and certification to a secondary level. In this respect, MOOCs are a field for experimenting and innovating with massive online teaching strategies.
From the information collected here, there is clearly a need to foster research into assessment in MOOCs. Some interesting experiments already exist, such as the “wrapped MOOCs” described by Downes (2013): MOOCs whose content and development are shared by several institutions but whose students are assessed at the institution to which they belong, which makes the assessment more contextualised.
Finally, it should be pointed out that MOOCs do not cease to be online training courses and therefore share many of the challenges in network education put forward by Ridway, McCusker, and Pead (2004):
- To re-establish the credibility of online assessment, since there are areas in which they consider it not to be reliable.
- To build a capable system. Programs and systems are needed that provide wider assessment than just designing tests.
- To design appropriate tasks, since there are few real experts in creating suitable online assessment tests.
- To establish technical standards. Agreed guidelines need to be drawn up to cover students’ needs and the procedures to be followed.
- To improve ICT infrastructure.
- To pay closer attention to assessment processes.
- To make the sector more professional, with high-quality expert professionals in online assessment.
- To encourage participation by schools, since most experiments are carried out in higher education.
- To provide tools to deal with the problem of plagiarism.
- The issue of equity. Online assessment technologies need to be usable and accessible.