
MOOCs: Game Changer or Passing Fad?

Beyond objective testing and peer assessment: alternative ways of assessment in MOOCs

María del Mar Sánchez-Vera

University of Murcia, Spain

María Paz Prendes-Espinosa

University of Murcia, Spain

Submitted in: June 2014

Accepted in: October 2014

Published in: January 2015

Recommended citation

Sánchez-Vera, M. M., & Prendes-Espinosa, M. P. (2015). Beyond objective testing and peer assessment: alternative ways of assessment in MOOCs. RUSC. Universities and Knowledge Society Journal, 12(1), 119-130. doi: http://dx.doi.org/10.7238/rusc.v12i1.2262

Abstract

MOOCs (massive open online courses) are an online teaching proposal that, in its short life, has already produced two very different models: cMOOCs and xMOOCs. Both are analysed in this paper from the perspective of assessing students’ learning. While assessment in xMOOCs is usually limited to multiple choice tests and sometimes the delivery of tasks, in cMOOCs assessment aims to foster interaction from a formative assessment standpoint, and is usually based on peer assessment. Pedagogically, both models have their limitations. Multiple choice tests are mainly content bound, while peer assessment has its own difficulties and drawbacks, which we explain here. We put forward some useful ideas for making assessment in MOOCs more flexible (groups of experts, the semantic web, portfolios, learning analytics), in an attempt to address educational assessment not as an end in itself but as another part of the educational process.

Keywords

MOOCs, assessment, feedback, e-learning


1. Introduction

Although the world of MOOCs has only just “found its feet” (the first course to be called a MOOC took place in 2008), these courses already have an identity of their own. They have their defenders and their detractors, practice and theory, research analysing the courses run so far and experts trying to predict what the future holds. The MOOC phenomenon is so new, and has such importance and impact, that it is in itself worthy of study (Adell, 2013).

In an earlier paper (Sánchez & Prendes, 2013), we reported an interesting study by the UK Department for Business, Innovation & Skills which, following an exhaustive review of MOOCs, concluded that there are two camps: one formed by those who embrace MOOCs enthusiastically and value their potential for learning, even going so far as to talk about a transformation of online education; the other formed by detractors, critical and sceptical of the phenomenon. The critical stances do capture an idea that we believe is important: the pedagogical literature around the turn of the century valued the potential of personalising online education; in other words, one of the most outstanding advantages of online education is the possibility of communicating with students at a distance, of having interaction and feedback on their individual learning processes. From this perspective, formative assessment is given more importance than summative assessment. Nevertheless, the massification inherent to MOOCs in some ways diverges from this educational trend and, as the cited report states, represents a victory for the packaging of content.

Before the advent of MOOCs, online assessment had been widely studied in the literature. Besides its scientific interest, it was noted as one of the didactic aspects of online teaching that had, in the main, been least open to pedagogical change and innovation (Dorrego, 2006; Prendes, 2007). MOOCs, by bringing a massive scale to online courses, also force us to consider alternative models for assessing online students’ learning.

There is certainly a broad typology of MOOCs, but it can be categorised into two trends: xMOOCs and cMOOCs. The two differ both in their origins and in their underlying pedagogical model: xMOOCs focus on content and the transmission of information, while cMOOCs are more constructivist and prioritise interaction among participants (Brown, 2013). The distinguishing characteristics are well expressed in the table below, drawn up by Yuan, Powell, and Olivier (2014).

Table 1. MOOC typology (Yuan, Powell, & Olivier, 2014)

| xMOOCs |  | cMOOCs |
|---|---|---|
| Scalability | Massive | Network and connections |
| Open access and restricted licenses | Open | Open access and free licenses |
| Individual learning on a single platform | Online | Network learning through various platforms and services |
| Acquiring knowledge and skills | Course | Common practices, learning and understanding |

Scalability is a term used in computer science to refer to the ability of a system or network to handle a growing amount of work gracefully, reacting and adapting without loss of quality (Bondi, 2000). The link to xMOOCs is that massification here means participation in a training system that grows in resources and people, whereas the massive nature of cMOOCs lies in their potential to establish learning communities and create new connections.

Moreover, cMOOCs usually run across various platforms rather than in a single virtual environment, as occurs with xMOOCs. Many cMOOCs use blogs, wikis and open social networks to establish networks of connections.

Basically, xMOOCs focus on content and its individual acquisition, while cMOOCs focus on community learning and the potential to learn by creating and collaborating: designing resources and generating the content itself.

Given this panorama, we will focus on an aspect that has always been of great relevance in any formative action: educational assessment and how MOOCs are addressing this very important aspect of the educational process.

2. A review of MOOCs and their methods of assessment

Assessment is not dealt with in the same way in the two large MOOC typologies described above. In xMOOCs the potential lies in the content, on which the formative action is focused, so the main aim is to assess learning in relation to content; this has been done mainly through multiple choice tests. These can be taken on the platform at the end of each week or unit, and there may also be a final test.

In contrast, in cMOOCs, peer assessment prevails, which means that one student assesses another. In order to facilitate and, to some extent, standardise the process, it is common to provide students with assessment rubrics so that they know what aspects to assess.
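
Where rubrics are used, their role is easy to illustrate. The following minimal sketch shows how a closed rubric can be represented and applied so that every peer assessor scores the same aspects on the same scale; the criteria, weights and 0-4 scale are illustrative assumptions, not taken from any particular MOOC platform:

```python
# criterion -> weight (weights sum to 1.0, so the result stays on the 0-4 scale)
RUBRIC = {
    "addresses the task": 0.4,
    "uses course concepts": 0.4,
    "clarity of writing": 0.2,
}
SCALE = range(0, 5)  # each criterion is scored 0-4

def rubric_score(ratings):
    """Weighted overall score for one peer's ratings of one submission."""
    assert set(ratings) == set(RUBRIC), "every criterion must be rated"
    assert all(value in SCALE for value in ratings.values()), "ratings are 0-4"
    return sum(RUBRIC[c] * ratings[c] for c in RUBRIC)

# one peer's ratings of a fellow participant's task
print(round(rubric_score({"addresses the task": 3,
                          "uses course concepts": 4,
                          "clarity of writing": 2}), 2))  # 3.2
```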

Table 2. Assessment trends in xMOOCs and cMOOCs

| xMOOCs | cMOOCs |
|---|---|
| Multiple choice tests at the end of each week | Assessment of tasks or resources created |
| Multiple choice final examination | Use of rubrics |
| Student recognition protocols (identity check): webcam, digital ID… | One fellow student provides feedback on another |
| A specific platform is developed to accommodate all the information | Developed on the web using various resources and telematic tools |

Table 2 can be said to represent the original trend in how each model understands assessment. In short, xMOOCs basically apply summative assessment, while in cMOOCs the idea is to assess the process. Nevertheless, peer assessment is also being incorporated into the former, as platforms such as Coursera now allow multiple choice tests to be used alongside peer assessment.

We would also highlight that it is possible to run a MOOC without using any single specific platform, a strategy that is used mainly in cMOOCs, which start from a website but then develop through blogs, wikis, social networks and any other tools used by the learner community. Peer assessment is not so widely used in this case.

The assessment trends found in MOOCs do not differ much from classic trends in educational assessment. To some extent, assessment based on objective multiple choice tests is a classic model, widely used in teaching, if we consider the two models put forward by Escudero in 1998, long before MOOCs took off.

Table 3. Synthesis of assessment movements (Escudero, 1998)

| From assessment of “before”... | ...to another version of assessment |
|---|---|
| Assessment of results, in particular those related to the students’ learning | Assessment of processes and of each of the components and subjects involved in the teaching |
| Quantitative | Qualitative |
| Hierarchical, bureaucratic top-down control | Social, democratic, participative control |
| Emphasis on decisions relating to design, methodology, tools | Emphasis on the underlying theory, on systems of criteria and values, on the importance of the contexts in which what is assessed happens |
| Decision taking and reasoning | Stimulating social debate and reflection |
| Grading, report preparation and administrative decisions | Emphasis on use and on social and educational implications |

It is interesting to compare Tables 2 and 3 as a way of reflecting on the teaching model we introduce when we take a specific decision about the assessment of students.

2.1 Advantages and limitations of assessment through tests

Online assessment through closed multiple choice tests is nothing new (Prendes, 2007); before MOOCs, such tests were already used, on-site or online, mostly within virtual campuses. The use of multiple choice tests need not be negative, provided they are incorporated into a broader, coherent methodology. Among the advantages for the teacher, objective tests offer reliable measurement of an area of knowledge; for the student, within a wider framework and when accompanied by appropriate feedback, they can reinforce the learning of content.

The problem arises when MOOCs focus exclusively on this aspect, i.e., when a test of this nature has to be passed to obtain a certificate.

On the web, taking the test concept to the extreme may lead to the mistaken idea that testing educational content also indicates a whole range of educational parameters (the quality of the course, teachers, content, etc.), and that other components, such as skills, can easily be assessed the same way, when these really require another type of formative assessment. xMOOCs usually employ tests based on previously presented content, and therefore foster rote and conceptual learning. Without additional strategies and assessment tools, we forgo more recent, process-oriented assessment models that focus not so much on content as on students’ skills.

This perspective of assessment as a succession of tests corresponds to the model described in Table 3 as the “before” assessment model, and it is worth stressing that Escudero was already calling it the “before” in 1998.

2.2 Advantages and limitations of peer assessment

Although peer assessment has characterised assessment in cMOOCs, it is true that more and more platforms associated with xMOOCs also include it.

On the plus side, we can point to recent research (Luo & Robinson, 2014) reporting that students like peer assessment on MOOCs. In this study, the students indicated they had received fair grades and useful feedback, which they considered to be a motivating element.

As regards the validity of the scores given to students, Piech, Huang, Chen, Do, Ng, and Koller (2013) cite the study by Kulkarni, Pang-Wei, Chia, Papadopoulos, Koller, and Klemmer (pending publication), which found that, on average, the grades awarded by a group of facilitators do not differ significantly from those given by students in peer assessment.

From a general perspective, turning students into assessors may bring various benefits. Assessing is in itself an important learning experience, although it does need to be clearly designed and explained to the students.

It is also true that peer assessment seeks to provide formative assessment while coping with the huge number of students enrolled on MOOCs. This may at first seem an enriching system but, as O’Toole (2013) states, rather than peer assessment we should be calling it peer grading, since in many cases what ultimately happens is that students are given closed assessment rubrics along with a series of instructions on how to apply them, which rather defeats the purpose.

Downes (2013) analyses another two difficulties associated with this method, against which it is difficult to argue:

  • “The blind leading the blind”. An interesting name for a phenomenon that may arise when leaving assessment in the hands of the learners, as they may create their own rules on the basis of criteria that are not well founded.
  • “The charlatan”. Some students are not experts but they believe they are, which may lead them to issue mistaken opinions, or even to erroneously consider another student to be an expert.

3. Back to the past or looking forward? Assessment alternatives in MOOCs

In 2013, The Chronicle of Higher Education carried out a study of over 100 MOOC teachers. Among other questions of interest, they were asked which assessment method they had used. We would highlight a comment made by a teacher who had worked with the Coursera platform (Kolowich, 2013): “I would, of course, prefer to be able to read each piece of work carefully and follow its logic, but that is a technological issue that I believe Coursera will soon solve.” We do not agree that this is a technological problem: careful reading of students’ work when there is a huge number of students is a problem of teacher-student ratio, and a huge number of teachers would be needed to balance things out. Where technology can help to some extent is in facilitating grading, as we will see.

In the same vein, Sandeen (2013) explains that 74% of teachers stated they had used automatic marking and 34% had made use of peer assessment. Of the former, 67.1% had found automatic marking to be highly reliable and 30% said it was fairly reliable. Among the 34% who used peer assessment, the level of reliability was reported as being lower, with 34% claiming it was highly reliable and 71% describing it as fairly reliable.

The huge numbers of students mean that assessment is complicated, so any software that eases the task is welcome; however, MOOCs should not represent a step backwards in teaching. If we understand assessment as a qualitative aspect that forms part of the learning process, we need to seek alternatives that enable us to marry this to the high number of students on the courses.

Below, we look at some proposals that could be used on MOOCs as part of an appropriately designed strategy. Combining a variety of these can broaden our outlook and offer new opportunities.

3.1 Peer assessment 2.0

We have already spoken about peer assessment and its drawbacks. Even so, that does not mean that it should be automatically discarded. As Downes (2013) states, peer assessment may work very well for blog entries or discussion forums.

In order to avoid problems with the model (such as a student not taking assessment seriously, failing to understand the rubric or not knowing how to apply it), Professor Duneier, in an article in The New York Times, explains that in the method he used each student had to assess five fellow participants in order to receive his or her own grade, which was the mean of the grades awarded by those five fellow participants (cited in Lewin, 2012).
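
The logic of this scheme is simple enough to sketch in a few lines of Python. The data and the rule that withholds a student’s grade until his or her five reviews are complete are our own illustrative assumptions, not a description of any platform’s implementation:

```python
from statistics import mean

# peer_grades[author] = grades received from the five assigned reviewers
peer_grades = {
    "alice": [8, 7, 9, 8, 10],
    "bob":   [6, 7, 5, 6, 7],
}

# reviews_done[reviewer] = peer reviews this participant has completed
reviews_done = {"alice": 5, "bob": 3}

REQUIRED_REVIEWS = 5  # each student must assess five fellow participants

def final_grade(student):
    """Mean of the grades received, released only once the student has
    completed the required number of peer reviews."""
    if reviews_done.get(student, 0) < REQUIRED_REVIEWS:
        return None  # grade withheld: the student has not reviewed five peers
    grades = peer_grades.get(student, [])
    return mean(grades) if grades else None

print(final_grade("alice"))  # 8.4
print(final_grade("bob"))    # None: only 3 of 5 reviews completed
```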

We give the name peer assessment 2.0 to a peer assessment model that incorporates further approaches to make it more effective. One option is to treat peer assessment as yet another task within the MOOC, one which may itself be assessed by a fellow participant in order to ensure greater reliability. This would imply designing procedures based on triangulation, which is highly characteristic of qualitative assessment.

Designing a mixed approach, in which peer assessment is combined with assessment by experts who supervise the process, may prove more effective. Similarly, peer assessment tasks can be assigned according to the participant’s state of learning and context, instead of randomly (O’Toole, 2013).
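
The following sketch illustrates what such non-random assignment might look like, pairing each author with peers whose progress through the course is similar or slightly ahead. The progress scores and the pairing rule are illustrative assumptions:

```python
# participant -> progress through the course (0 = start, 1 = finished)
participants = {
    "ana": 0.9, "ben": 0.4, "carla": 0.7, "dan": 0.3, "eva": 0.8,
}

def assign_reviewers(author, n_reviewers=2):
    """Pick the peers whose progress is closest to, and preferably ahead of,
    the author's, so feedback comes from a comparable learning state."""
    others = [p for p in participants if p != author]
    others.sort(key=lambda p: (participants[p] < participants[author],
                               abs(participants[p] - participants[author])))
    return others[:n_reviewers]

print(assign_reviewers("ben"))  # ['carla', 'eva']
```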

3.2 Network-based grading

Downes (2013) explains network-based grading as a model in which people are not assessed on individual pieces of work (as mostly occurs in peer assessment) but according to a network metric, a classic approach in the assessment of collaborative tasks (see Prendes, 2003). Downes gives the example of Klout, an application that measures our influence on social networks by assigning a value between 0 and 100 based on various factors, such as retweets, mentions and friends on Facebook (related to the idea of learning analytics dealt with later), and which, since the beginning of 2014, has allowed the score, or influence, to be increased by creating content.

Developing a similar idea for MOOCs, so that our rating reflects how we create and share content, would seem interesting indeed.
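
By way of illustration only, such a metric might combine weighted activity counts into a 0-100 score, rewarding the creation and sharing of content most heavily. The factors and weights below are our own assumptions, not a published metric:

```python
# weights for each observable factor; creating content counts most
WEIGHTS = {
    "posts_created": 5.0,
    "resources_shared": 4.0,
    "replies_received": 2.0,  # how much others react to your contributions
    "connections": 1.0,
}

def participation_score(activity):
    """Weighted sum of activity counts, clamped to the 0-100 range."""
    raw = sum(WEIGHTS[factor] * activity.get(factor, 0) for factor in WEIGHTS)
    return min(100.0, raw)

student = {"posts_created": 8, "resources_shared": 3,
           "replies_received": 12, "connections": 20}
print(participation_score(student))  # 8*5 + 3*4 + 12*2 + 20*1 = 96.0
```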

3.3 Portfolio

The possibilities of the portfolio in online assessment have already been outlined (Prendes & Sánchez, 2008). Among other things, portfolios enable an understanding of the students’ learning process as well as providing them with useful feedback. Downes (2013) explains how a portfolio in a MOOC can serve as the resource a student presents as proof of his or her learning. Combining portfolio strategies with peer assessment and the semantic web is of considerable interest, given its flexibility and the wide range of complementary techniques it allows.

3.4 The mantle of the expert

O’Toole (2013) proposes the mantle of the expert as an alternative form of assessment in MOOCs. Devised by Dorothy Heathcote in 1985, this methodology starts by grouping students, who are designated expert assessors according to their area of knowledge. Together they have to perform the assessment and, to do so, they negotiate their expectations with the other groups of assessors. The teacher may act as a facilitator, aiding consensus and ensuring compliance with the minimum requirements and institutional rules of the course. Each group is responsible for its own section, which the other groups develop, and each can create a product to explain the whole process followed.

3.5 Semantic web

Codina (2003) explains that the semantic web is the name of an aspiration, an aim that, if fully realised, will radically change the web as we know it. To speak of the semantic web implies that “the significance of the data published can be known by humans and applications alike” (Fernández-Breis, Prendes-Espinosa, Castellanos-Nieves, Martínez-Sánchez, Valencia-García, & Ruíz-Martínez, 2007). This means that information should be published in a way that enables two kinds of interaction: between content and human users, and between content and applications.

On today’s web we move through information via links, jumping from resource to resource, but the computer cannot interpret that information. On the semantic web, we first give the computer a basis for the meaning of the content, so that when we move around the information we do so within a more logical and organised structure of knowledge (Sánchez-Vera, 2010), which opens up a wealth of educational applications (Prendes, 2013).

Codina’s aspiration is ever closer. There are now programs that allow a student to take assessment tests and, through the semantic web and ontologies, to receive assessment and feedback. One example is OeLE, which allows for online examinations using open questions and gives students feedback on their learning process (Fernández-Breis et al., 2007; Sánchez-Vera, 2010).
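
Although systems of this kind rely on full semantic web technologies (ontologies and natural language processing), the underlying idea can be caricatured in a few lines: compare the concepts an answer should cover, as defined by a domain model, against those detected in the student’s free-text answer, and return feedback on what is missing. The mini “ontology” and the keyword matching below are deliberately simplified assumptions, not a description of OeLE:

```python
# concepts the model answer should cover -> terms that signal each concept
QUESTION_CONCEPTS = {
    "formative assessment": ["formative", "feedback", "process"],
    "summative assessment": ["summative", "grade", "final"],
}

def assess(answer):
    """Mark each expected concept as covered if any of its terms appears."""
    text = answer.lower()
    return {concept: any(term in text for term in terms)
            for concept, terms in QUESTION_CONCEPTS.items()}

coverage = assess("Formative assessment gives feedback during the process.")
for concept, covered in coverage.items():
    print(f"{concept}: {'covered' if covered else 'missing - revise this idea'}")
```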

MOOCs have provided a new opportunity for these kinds of technologies. Sandeen (2013) explains that we are witnessing a push to incorporate them into the MOOC context, and some teachers who have taught MOOCs suggest that the use of automatic assessment technologies in them deserves consideration (Kolowich, 2013).

3.6 Learning analytics

Learning analytics has been defined as “measuring, collecting, analysing and communicating data about learners and their contexts with the purpose of understanding and optimising learning in the context in which it takes place” (University of Bristol, 2013). It is a growing field of study that, like MOOCs, is being tackled from various angles. Its potential lies in combining information from various sources in order to enhance learning conditions during the process and to consider various viewpoints on what the student is doing (Booth, 2012). Data can be drawn from platforms and from the actions students take when using other tools and connecting with contacts. As with other phenomena, these analytics may take a quantitative or a qualitative perspective, depending on the type of information sought, how it is analysed and the use made of it.
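
As a minimal illustration of this combination of sources, the sketch below aggregates activity events from a platform, a forum and a blog into an individual view per student and an overall group view. The event names, sources and data are illustrative assumptions:

```python
from collections import Counter, defaultdict

# (student, source, event) records as they might arrive from several tools
events = [
    ("alice", "platform", "video_viewed"),
    ("alice", "forum", "post"),
    ("alice", "forum", "reply"),
    ("bob", "platform", "quiz_attempt"),
    ("bob", "platform", "video_viewed"),
]

per_student = defaultdict(Counter)
for student, source, event in events:
    per_student[student][f"{source}:{event}"] += 1

# individual view: what each participant is doing, and where
for student, counts in per_student.items():
    print(student, dict(counts))

# group view: overall trends across the cohort
print(sum(per_student.values(), Counter()))
```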

We should not lose sight of this phenomenon, as it may provide interesting information that helps to ascertain what participants on MOOCs are doing. It may also help when it comes to performing assessment, by providing information of an individual nature about each student and also an overview of general trends within a specific group.

4. Conclusions: pedagogy-based technology

It is crucial to be aware that, depending on the stance we take towards assessment, some tools and methods will serve while others will not. If we consider assessment to be ascertaining the degree to which a student has assimilated a certain amount of information, then multiple choice tests may meet our aims. The problems arise when we consider that assessment makes little sense unless it forms part of the overall teaching process. If we understand assessment as part of learning, requiring procedural and formative strategies, then the massive scale of these courses does indeed become a problem. This is the perspective we should adopt or, at the very least, we need to know how to position MOOCs correctly.

MOOCs may be very useful and offer valuable information that enriches our personal learning environment (PLE). However, if we take assessment and feedback to be key parts of the educational process, we can understand why MOOCs cannot easily replace a good non-massive online course, in which the facilitator is essential for guiding and contextualising the students’ learning process. We therefore think that MOOCs are one of the many possibilities that give flexibility to the educational offering; while they can never be a substitute, they can indeed be a complement.

Sandeen (2013) considers that assessment has shaped the development of MOOCs from the outset; yet we firmly believe that many MOOCs focus on developing quality content or network learning and relegate assessment and certification to a secondary level. In this sense, MOOCs act as a field for experimenting and innovating with massive online teaching strategies.

From the information collected here, there is clearly a need to foster research into assessment in MOOCs. Some interesting experiments already exist, such as the “wrapped MOOCs” described by Downes (2013): MOOCs whose content and development are shared by several institutions but whose students are assessed at the institution to which they belong, which makes the assessment more contextualised.

Finally, it should be pointed out that MOOCs are still online training courses and therefore share many of the challenges of network education put forward by Ridgway, McCusker, and Pead (2004):

  • To re-establish the credibility of online assessment, since there are areas in which they consider it unreliable.
  • To build capable systems: programs and systems are needed that support broader assessment than simply designing tests.
  • To design tasks appropriately, as there are few real experts in creating suitable online assessment tests.
  • To establish technical standards: agreed guidelines need to be drawn up covering students’ needs and the procedures to be followed.
  • To improve ICT infrastructure.
  • To pay closer attention to assessment processes.
  • To make the sector more professional, with high-quality experts in online assessment.
  • To involve schools, since most experiments are done in higher education.
  • To develop tools that deal with the problem of plagiarism.
  • To address equity: online assessment technologies need to be usable and accessible.

5. References

  • Adell, J. (2013). Los MOOC, en la cresta de la ola. Retrieved from http://elbonia.cent.uji.es/jordi/2013/03/19/los-moocs-en-la-cresta-de-la-ola/
  • Barberá, E. (2006). Aportaciones de la tecnología a la e-Evaluación. RED: Revista de Educación a Distancia, Monograph VI, 1-13. Retrieved from http://www.um.es/ead/red/M6/barbera.pdf
  • Bali, M. (2014). MOOC pedagogy: gleaning good practice from existing MOOCs. MERLOT. Journal of Online Learning and Teaching, 10 (1), 44-56.
  • Booth, M. (2012). Learning Analytics: the new black. Educause Review Online. Retrieved from http://www.educause.edu/ero/article/learning-analytics-new-black
  • Bondi, A. (2000). Characteristics of scalability and their impact on performance. In Proceedings of the 2nd International Workshop on Software and Performance (WOSP 2000) (pp. 40-49). New York, USA: ACM.
  • Brown, S. (2013). MOOCs, OOCs, flips and hybrids: the new world of higher education. In Proceedings of ICICTE 2013 (pp. 237-247). North Carolina, USA: IEEE.
  • Codina, L. (2003). La web semántica: una visión crítica. El profesional de la información, 12(2), 149-152.
  • Daradoumis, T., Bassi, R., Xhafa, F., & Caballé, S. (2013). A review on massive e-learning (MOOC) design, delivery and assessment. Paper presented at the Eighth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing. Compiègne, France: IEEE.
  • Davis, H., Dickens, K., León-Urrutia, M., Sánchez-Vera, M. M., & White, S. (2014). MOOCs for universities and learners: an analysis of motivating factors. Paper presented at the 6th International Conference on Computer Supported Education. Retrieved from http://eprints.soton.ac.uk/363714/
  • Department for Business, Innovation & Skills (2013). The maturing of the MOOC: literature review of massive open online courses and other forms of online distance learning. Retrieved from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/240193/13-1173-maturing-of-the-mooc.pdf
  • Dorrego, E. (2006). Educación a distancia y evaluación del aprendizaje. RED. Revista de Educación a Distancia, 6, 1-23. Retrieved from http://www.um.es/ead/red/M6/dorrego.pdf
  • Downes, S. (2013). Assessment in MOOCs [Web log post]. Retrieved from http://halfanhour.blogspot.com.es/2013/05/assessment-in-moocs.html
  • Escudero-Muñoz, J. M. (1998). Evaluación Educativa: Aproximación general, orientaciones y ámbitos. In J. M. Escudero, M. T. González, & J. Del Cerro, Evaluación de programas, centros y profesores (pp. 4-22). Murcia: Diego Marín.
  • Fernández-Breis, J., Prendes-Espinosa, M. P., Castellanos-Nieves, D., Martínez-Sánchez, F., Valencia-García, R., & Ruíz-Martínez, J. (2007). Evaluación en e-learning basada en tecnologías de la Web semántica y procesamiento del lenguaje natural. Murcia: Diego Marín.
  • Fournier, H., Kop, R., & Durand, G. (2014). Challenges to research in MOOCs. MERLOT. Journal of Online Learning and Teaching, 10(1),1-15.
  • Kolowich, S. (2013). The professors who make the MOOCs. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/The-Professors-Behind-the-MOOC/137905/#id=overview
  • Lewin, T. (2012). College of the future could be come one, come all. The New York Times. Retrieved from http://www.nytimes.com/2012/11/20/education/colleges-turn-to-crowd-sourcing-courses.html?_r=0
  • Linn, R. (2011). Test-based accountability. The Gordon Commission on the Future of Assessment in Education. Retrieved from http://www.gordoncommission.org/rsc/pdf/linn_test_based_accountability.pdf
  • Luo, H., & Robinson, A. C. (2014). Is peer grading a valid assessment method for Massive Open Online Courses (MOOCs)? Paper presented at the 7th Annual International Symposium. Emerging Technologies for Online Learning. Retrieved from http://sloanconsortium.org/conference/2014/et4online/peer-grading-valid-assessment-method-massive-open-online-courses-moocs
  • O’Toole, R. (2013). Pedagogical strategies and technologies for peer assessment in Massively Open Online Courses (MOOCs). Unpublished discussion paper. University of Warwick, Coventry. Retrieved from http://wrap.warwick.ac.uk/54602/
  • Piech, C., Huang, J., Chen, Z., Do, C., Ng, A., & Koller, D. (2013). Tuned models of peer assessment in MOOCs. Paper presented at the International Conference on Educational Data Mining. Retrieved from http://arxiv.org/abs/1307.2579
  • Prendes, M. P. (2003). Aprendemos... ¿cooperando o colaborando? Las claves del método. In F. Martínez Sánchez, Redes de comunicación en la enseñanza. Las nuevas perspectivas del trabajo corporativo (pp. 93-128). Barcelona: Paidós.
  • Prendes, M. P. (2007). Internet aplicado a la educación: estrategias didácticas y metodologías. In J. Cabero, (Coord.), Nuevas tecnologías aplicadas a la educación, (pp. 205-222). Madrid: McGraw Hill.
  • Prendes, M. P. (2013). Las aplicaciones educativas de la Web Semántica. In I. Aguaded, & J. Cabero, (Coords.). Tecnologías y medios para la educación en la e-sociedad (pp. 289-315). Madrid: Alianza.
  • Prendes, M. P., & Sánchez, M. M. (2008). Portafolio electrónico: posibilidades para los docentes. Pixelbit. Revista de Medios y Educación, 1(32), 21-34. Retrieved from http://www.um.es/gite/mmarsanchez/Proyectos%20y%20publicaciones_files/2.pdf
  • Ridgway, J., McCusker, S., & Pead, D. (2004). Literature review of e-assessment. Futurelab. Retrieved from http://hal.archives-ouvertes.fr/docs/00/19/04/40/PDF/ridgway-j-2004-r10.pdf
  • Salinas, J. (1999). Enseñanza flexible, aprendizaje abierto. Las redes como herramientas para la formación. EDUTEC: Revista Electrónica de Tecnología Educativa, 10, 1-18. Retrieved from http://edutec.rediris.es/Revelec2/Revelec10/revelec10.html
  • Sánchez-Vera, M. M. (2010). Espacios Virtuales para la evaluación de aprendizajes basados en herramientas de Web Semántica [unpublished doctoral dissertation]. Universidad de Murcia, Murcia.
  • Sánchez-Vera, M. M., & Prendes, M. P. (2013). La participación del alumnado en los cursos masivos (MOOC). Paper presented at the II Congreso Internacional de Innovación Docente. Retrieved from http://digitum.um.es/xmlui/bitstream/10201/39138/1/C156_mooc.pdf
  • Sandeen, C. (2013). Assessment’s place in the new MOOC world. Research and Practice in Assessment, 8, 1-8.
  • Tempelaar, D., Cuypers, H., Van de Vrie, E., Heck, A., & Van der Kooij, H. (2013). Formative assessment and learning analytics. Paper presented at LAK 2013. Learning Analytics and Knowledge. Leuven, Belgium.
  • University of Bristol (2013). Learning analytics and technology enhanced assessment (TEA). Retrieved from http://www.bris.ac.uk/education/research/sites/tea/publications/learninganalytics.pdf
  • Valverde, J. (2014). MOOCs: una visión crítica desde las ciencias de la educación. Revista de currículum y formación del profesorado, 18(1), 1-19.
  • Yuan, L., Powell, S., & Olivier, B. (2014). Beyond MOOCs: Sustainable online learning in institutions. Retrieved from http://publications.cetis.ac.uk/2014/898

About the authors

María del Mar Sánchez-Vera

Senior lecturer in the Department of Didactics and School Organisation of the Faculty of Education at the University of Murcia and member of the Research into Educational Technologies Group (GITE)

PhD in Pedagogy. She has worked at the National Centre for Education, Information and Communication and has been a visiting researcher at the University of Cambridge, Liverpool John Moores University and the University of Southampton. She has participated in national and international projects related to the implementation of ICT in society and schools, which has allowed her to publish in high-impact journals and books in the educational technology field. Currently, she is interested in topics such as students’ and teachers’ digital competence, social networks in educational contexts, the flipped classroom and MOOCs.

Universidad de Murcia

Facultad de Educación

Campus Universitario de Espinardo

30100 Murcia

Spain

María Paz Prendes-Espinosa

Professor in the Department of Didactics and School Organisation of the Faculty of Education at the University of Murcia and director of the Research into Educational Technologies Group (GITE)

PhD in Educational Sciences. She is also secretary of the Association for the Development of Educational Technology and New Technologies in Education (EDUTEC). She has led and participated in numerous national and international research projects focusing on the study of the didactic image, textbooks, the design and evaluation of teaching resources, e-learning, collaborative networking, online tools for training, new technologies for special education, semantic web applications, social software and personal learning environments (PLEs). She is a guest expert at various national and international scientific events.

Universidad de Murcia

Facultad de Educación

Campus Universitario de Espinardo

30100 Murcia

Spain
