Dimensions of Perceived Service Quality in Higher Education Virtual Learning Environments

Virtual learning environments (VLEs) make intensive use of information and communication technologies (ICTs) to underpin the delivery of basic higher education institution functions (teaching, administrative procedures, materials distribution, etc.), and are a characteristic feature of today's educational context. The spread of e-learning and of competition in this field has given rise to growing concerns about the quality of this service. It is therefore vital to develop systems, models and scales that allow institutions to obtain valid, reliable and consistent measurements of the quality of educational services provided by means of VLEs. That is the basic aim of the work described in this article. Taking the literature on perceived service quality (PSQ) as its point of reference, and also a holistic approach to educational services, it presents a scale model that allows PSQ in such environments to be measured. The scale is formed by 24 items grouped into four dimensions: core business (teaching), facilitative or administrative services, support services and user interface.


Introduction
Today's higher education context is one of change, with new educational needs arising from the demands of the knowledge society, ever-growing numbers of universities, budget cutbacks, and information and communication technologies (ICTs) offering new opportunities for face-to-face and distance service provision. This context forces universities to re-examine their structures, strategies and processes and, in particular, to adopt competitive strategies that allow them to differentiate their offerings on the basis of higher levels of quality (DeShields et al., 2005). More attention is now being paid to perceived service quality (PSQ) from the university students' perspective (O'Neill & Palmer, 2004; Stodnick & Rogers, 2008). This encompasses diverse yet related issues, such as determining the dimensions that form part of the PSQ construct, designing the quality management model and dealing with the issues arising from its implementation.
However, in order to articulate this strategy, it is essential not only to ascertain the types of attribute that students take into account when assessing quality, but also to determine their relative importance (Nath & Zheng, 2004). So the need for assessment and conceptualisation is critical to e-services because of their intangible and impersonal nature on the one hand, and the difficulty in defining tangible indicators on the other (Zeithaml et al., 2002).
Moreover, a view held by many authors on the subject is that traditional PSQ models cannot be applied automatically to virtual environments, mainly because their features are very different (Cox & Dale, 2001). Usually, PSQ measurement scale items are connected with the personal interaction that takes place in traditional services (Bitner, 1990). In the absence or lack of physical interaction, the dimensions are assessed under different criteria (Long & McMellon, 2004; Ward et al., 2010) and therefore need "to be reformulated before they can be meaningfully used in an e-service context" (Riel et al., 2001, p. 363). However, there is very little literature on perceived e-service quality (PeSQ) (Sureschandar et al., 2001; Zeithaml & Bitner, 2003; Parasuraman et al., 2005). So "an important research priority is to examine the scales in the context of pure-service sites" (Parasuraman & Grewal, 2000, pp. 214 and 229).
As an extension to existing works on offline higher education (O'Neill & Palmer, 2004; Joseph et al., 2005; Marzo-Navarro et al., 2005; Palmer & Holt, 2009), this work takes a holistic approach that does not focus exclusively on the assessment of teaching. Rather, it includes the auxiliary university services that form part of a student's overall experience. The aim is to analyse the dimensions that have an impact on online students' perceptions of university service quality.

Assessment of perceived service quality in virtual learning environments
It is often assumed that managers are capable of developing programmes that can satisfy students. However, the literature on quality suggests the opposite: users' needs are the relevant issue. Ascertaining what these needs are is therefore the first and fundamental step of the process, since it is crucial to have prior knowledge of what is expected (Zeithaml et al., 2002). On the other hand, many PSQ initiatives are weighed down by too much emphasis on the technical dimensions or aspects of quality, such as academic performance and research activity, to the detriment of functional aspects that are more closely related to PSQ and satisfaction. Indeed, such aspects can be used to create a competitive advantage (O'Neill & Palmer, 2004; Udo et al., 2011).
A review of the literature on the assessment of PSQ in face-to-face education revealed that many approaches were inspired by SERVQUAL, SERVPERF or the authors' own item scales centred on the assessment of teaching quality. However, the service provided in a virtual learning environment (VLE) has two particular characteristics: it is pure, because it does not take the form of a one-off transaction (it requires prolonged interaction over a period of time), and complex, because it includes both teaching and supplementary services.
Given these characteristics, most of the research on e-services, which focuses on the analysis of websites and e-commerce, cannot be applied to the service provided in a VLE. The scarcity of studies on this topic is therefore worthy of note (Table 1).
The analysis shows that O'Neill and Palmer (2003) and Udo et al. (2011) applied a methodology that was specific to PSQ assessment in higher education VLEs, albeit limited to a particular university service (a library) and to e-learning on a degree course, respectively. The other works are of an exploratory nature (they used their own scales, analysed the level of satisfaction with specific services and/or did not allow the dimensions of the construct to be identified). That is why their conclusions are not comparable to those obtained in similar studies of face-to-face learning (De Lange et al., 2003; Ehlers, 2004).
Taking the limitations of these studies as the starting point, the aim of this work is to gain an understanding of students' overall experience of the service, which includes all of the teaching and non-teaching services offered, and to capture the particular features of VLEs. The reason for doing so is that, in the presence of these particular features, students are likely to assess PSQ differently from the way they do in face-to-face learning (Long & McMellon, 2004; Garza, 2010).

Empirical analysis: method and results
This section describes the research process that was followed to identify, rigorously and reliably, the dimensions and attributes that have an impact on online students' perceptions of university service quality. Section 4 discusses the content of these dimensions and their implications for management. Bearing in mind the attributes generated in the qualitative analysis process, and taking the e-SERVQUAL scale structure (Parasuraman et al., 2005; Zeithaml et al., 2002) as the benchmark, a questionnaire containing 33 items was designed. A questionnaire pre-test was conducted and, after the necessary adjustments had been made, a final total of 30 items was obtained. These were the items contained in the questionnaire sent by e-mail to the personal e-mail addresses of the 25,223 students enrolled on the UOC's undergraduate programmes. From this total, 1,870 valid responses were received, representing a sampling error of 2.18%. In addition, it was found that the characteristics of the students whose opinions were ultimately taken into account did not differ substantially from those of the study population. Nor were there significant biases arising from considering only those students who had decided to fill in the questionnaire voluntarily, as confirmed by the absence of significant differences between the earliest and the latest responses (Armstrong & Overton, 1977).
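The reported figures can be checked against the standard margin-of-error formula for a proportion with a finite population correction. The sketch below assumes simple random sampling, a 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5; these are the usual survey-reporting conventions, not figures stated in the text.

```python
import math

def sampling_error(n, N, z=1.96, p=0.5):
    """Margin of error for a proportion, with finite population correction.

    Assumes simple random sampling, 95% confidence (z = 1.96) and the
    worst-case p = 0.5 (hedged assumptions; not stated in the article).
    """
    se = math.sqrt(p * (1 - p) / n)        # standard error, infinite population
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

# Figures from the study: 1,870 valid responses out of 25,223 students.
e = sampling_error(1870, 25223)
print(f"sampling error = {e * 100:.2f}%")  # ≈ 2.18%
```

Under these assumptions the formula reproduces the 2.18% sampling error reported for the study.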

Quantitative analysis
In order to determine the dimensions of PSQ in online higher education, a factor analysis was performed, initially exploratory and subsequently confirmatory.

Factor analysis
After ensuring that the sample data were suitable for an exploratory factor analysis, the principal components method with orthogonal rotation was applied. This analysis highlighted four factors (see Table 3) that subsume 24 attributes of the initially designed scale and explain 60.3% of the variance. In order to validate this result, a confirmatory factor analysis was performed by means of structural equation modelling (Hair et al., 2004).
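The exploratory step (principal components extraction on the item correlation matrix, Kaiser retention criterion, orthogonal rotation) can be sketched in plain numpy. The six-item, two-factor data set, the random seed and the use of varimax specifically are hypothetical assumptions here, since the study does not publish its raw data or name the rotation variant:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a factor loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < d * (1 + tol):
            break
        d = s.sum()
    return loadings @ R

rng = np.random.default_rng(0)
n = 500
# Hypothetical data: two latent factors driving six observed items.
f = rng.normal(size=(n, 2))
X = np.column_stack([f[:, 0] + 0.3 * rng.normal(size=n) for _ in range(3)] +
                    [f[:, 1] + 0.3 * rng.normal(size=n) for _ in range(3)])

# Principal components on the correlation matrix.
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1.0                           # Kaiser criterion
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)
explained = eigvals[keep].sum() / eigvals.sum()
print(f"{keep.sum()} factors retained, {explained:.1%} of variance explained")
```

With this synthetic structure the procedure recovers the two planted factors; on the study's data the same steps yielded four factors explaining 60.3% of the variance.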
In order to study the proposed model's goodness of fit, a three-level assessment was performed: (1) test of overall model fit (absolute, incremental and parsimonious), (2) test of measurement model, and (3) test of structural model fit (Barrio & Luque, 2000).
The model fit analysis in Table 2, using various indices that are generally analysed in such cases (Barrio & Luque, 2000), confirmed a better fit of the four-factor overall model. As shown in Table 3 below, the statistical significance and reliability of each indicator, as well as the composite reliability and the variance extracted for each dimension, were at acceptable levels (Hair et al., 2004), thus confirming a good fit of the measurement model. These dimensions were connected with the design and focus of the programme, the didactic materials and resources, the development of learning and its evaluation, the staff's speed in responding (teaching, administrative and information technology [IT] staff), the appropriateness of the response, the staff's friendliness and accessibility, the simplicity of administrative procedures, the physical delivery of documentation, relationships with fellow students, the user interface, and the costs and benefits. The structural model fit was also satisfactory, as all the regressions of the items in relation to their latent variables, as well as the rest of the relationships established in the structural model, were significant (Table 3).
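The absolute and incremental fit indices analysed at this stage are typically computed from the model's chi-square statistic. A minimal sketch of two common indices (RMSEA and CFI) follows; the chi-square values used are purely hypothetical illustrations, as the article does not report them here:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (absolute fit)."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: target model vs. independence (baseline) model."""
    d_m = max(chi2_m - df_m, 0)
    d_b = max(chi2_b - df_b, d_m)
    return 1 - d_m / d_b if d_b > 0 else 1.0

# Hypothetical chi-square values, for illustration only.
r = rmsea(chi2=720.0, df=246, n=1870)
c = cfi(720.0, 246, 12000.0, 276)
print(f"RMSEA = {r:.3f}")   # < 0.05 is conventionally read as good fit
print(f"CFI   = {c:.3f}")   # > 0.95 is conventionally read as good fit
```

The conventional cut-offs in the comments are the general rules of thumb in the SEM literature, not thresholds taken from this article.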

Reliability, validity and explanatory power of the scale
Once the scale had been obtained, formed by 24 items grouped into four dimensions, its reliability and validity were confirmed by means of different analyses.

Reliability
In the exploratory factor analysis, Cronbach's alpha showed that the scale as a whole (0.93) and each of the dimensions were internally consistent. This conclusion was subsequently corroborated in the confirmatory analysis, as all of the items' factor loadings in relation to their latent variables were significant, and the composite reliability of each factor was higher than 0.70 in every case (Hair et al., 2004).
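Both statistics mentioned here are straightforward to compute. The sketch below implements Cronbach's alpha from raw item scores and composite reliability from standardised factor loadings; the simulated scores and example loadings are hypothetical, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """CR from standardised loadings; error variance taken as 1 - lambda^2."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
scores = latent + 0.5 * rng.normal(size=(300, 4))   # four parallel items
a = cronbach_alpha(scores)
cr = composite_reliability([0.8, 0.75, 0.7, 0.72])  # hypothetical loadings
print(f"alpha = {a:.2f}, CR = {cr:.2f}")
```

Because the four simulated items share a single latent cause, alpha comes out high, and the illustrative loadings give a CR above the 0.70 threshold cited in the text.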

Validity
Content, construct and predictive validity were assessed. Regarding content validity, the students' responses did not reveal any new attributes, though they did contain more detailed assessments of aspects already considered.
This was another piece of evidence that helped to corroborate the questionnaire's validity (Capelleras & Veciana, 2001). Construct validity was examined in three ways. (1) Convergent validity was confirmed by means of an analysis of variance (ANOVA), considering the PSQ level as a factor. In relation to the scale as a whole and to each factor, the differences between the means for each overall quality group were significant, both generally (Welch's and Brown-Forsythe's F tests) and in pairwise and inter-group multiple comparisons (Games-Howell test). In addition, a significant positive relationship was found between each factor and the overall quality group variable (Capelleras & Veciana, 2001): a very high assessment of overall quality corresponds to a very high assessment of the scale as a whole and of each factor, a high assessment of overall quality to a high assessment, and a low assessment of overall quality to a low assessment. (2) The discriminant validity of the scale was confirmed, as each item formed part of one, and only one, of the latent variables according to the confirmatory factor model and, by means of the Bonferroni test, the correlations between the different measured dimensions were found to be relatively weak. (3) In relation to nomological validity, ANOVA corroborated that the scale assessments were significantly different between students with high, intermediate and low PSQ assessments, and the correlation between overall quality and the scale was positive and significant (0.680). Finally, the (concurrent) predictive validity of the scale was confirmed, as a positive and significant Pearson correlation (0.68) was obtained between the scale and the PSQ variable (Capelleras & Veciana, 2001).
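Welch's F test, used above for the overall group comparisons because it does not assume equal group variances, is simple to implement when a library routine is unavailable. A sketch follows, with three hypothetical groups of student ratings standing in for the low/intermediate/high PSQ groups:

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's F test for equality of means under unequal variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                  # precision weights
    mw = (w * m).sum() / w.sum()               # weighted grand mean
    num = (w * (m - mw) ** 2).sum() / (k - 1)
    tmp = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    f = num / den
    df2 = (k ** 2 - 1) / (3 * tmp)             # Welch's adjusted denominator df
    return f, stats.f.sf(f, k - 1, df2)

rng = np.random.default_rng(2)
# Hypothetical rating groups with clearly different means.
low, mid, high = (rng.normal(mu, 1, 200) for mu in (3.0, 3.5, 4.2))
f, p = welch_anova(low, mid, high)
print(f"F = {f:.1f}, p = {p:.3g}")
```

With group means this far apart, the test rejects equality of means decisively, mirroring the significant between-group differences reported in the text.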

Explanatory power
In order to assess explanatory power, a multiple linear regression analysis was performed after verifying that the model's necessary validity conditions were fulfilled. The four identified dimensions were found to have a significant and positive impact on the variable to be explained, that is to say, on PSQ. In addition, while the relative importance of the first factor to overall quality was almost 37%, the fourth factor represented just 17% (less than half of the first), and the other two dimensions 24% and 22%, respectively. Finally, and assuming a linear relationship, the four-dimension scale was found to have limited explanatory power of PSQ (adjusted R² = 0.501).
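The regression step can be sketched as follows. The data are simulated so that four hypothetical standardised dimension scores carry roughly the relative weights reported above, which makes the adjusted R² land near the published 0.501 by construction rather than as a finding:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1870
# Hypothetical standardised scores for the four dimensions.
X = rng.normal(size=(n, 4))
# PSQ built with unequal weights echoing the reported shares, plus noise.
weights = np.array([0.37, 0.24, 0.22, 0.17])
y = X @ weights + rng.normal(scale=0.52, size=n)

# OLS fit with an intercept column.
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
r2 = 1 - resid.var() / y.var()
k = X.shape[1]
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Relative importance: each coefficient's share of the total (predictors
# are uncorrelated and unit-variance here, so this is a fair comparison).
share = np.abs(beta[1:]) / np.abs(beta[1:]).sum()
print(f"adjusted R^2 = {adj_r2:.3f}")
print("relative importance:", np.round(share, 2))
```

The first dimension dominates the importance shares, as it does in the study; with correlated real-world predictors, a dedicated relative-importance method would be preferable to raw coefficient shares.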

Discussion of results
According to the analysis, students' perceptions of a VLE's service quality form a multi-dimensional construct comprising four factors. In line with the conclusions drawn by Grönroos (1984, 1990), Grönroos et al. (2000), Liljander et al. (2002) and Riel et al. (2001, 2004), we can call these: core business (teaching), facilitative or administrative services, support services and user interface.
(1) The core business is what, by convention, we have called teaching. This factor subsumes ten indicators. Of these, the one that makes the biggest contribution to determining the latent variable is the one pertaining to the tutors' knowledge, experience and pedagogical capacity, followed closely by the feedback that students get from tutors on activities that students carry out, and thirdly, by the speed and efficiency of teaching-related query-solving. Then, in order of importance, come the contribution to learning of the activities carried out throughout the course (practicals, assignments, exercises, debates, etc.); the structure, objectives and characteristics of the programme; and the format and content of didactic materials and resources. Finally, the remaining indicators are student guidance (e.g., providing study techniques, and academic and professional guidance); the assessment system's consistency with programme objectives and activities carried out throughout the course; the friendliness and courteousness of lecturers in their dealings with students; and the assessment system itself.
(2) The factor reflecting facilitative or auxiliary services comprises six variables that are basically connected with aspects of an 'administrative' nature. As Grönroos (1990) pointed out, these are auxiliary yet essential services for securing the core business. So, in order of impact, this factor is shaped by the following indicators: the speed and efficiency of solving administrative queries, incidents and problems (enrolment, delivery of documentation); the ease of communicating problems, complaints and queries (phone service, online assistance); the speed and efficiency of solving IT queries, incidents and problems (connection, viruses, etc.); the simplicity and clarity of administrative procedures (e.g., enrolment and dossier management); the friendliness and courteousness of the institution's administrative staff in their dealings with students; and finally, the fulfilment of administrative documentation delivery deadlines (certificates, degree certificates, etc.). Within this latent variable, the indicators that appear to have greater importance than the attributes inherent to the service (administrative procedure simplicity, administrative staff friendliness and administrative deadline fulfilment) are those pertaining to administrative and IT problem-solving, and the ease of making complaints or, in other words, what Zeithaml et al. (2002) and Parasuraman et al. (2005) call responsiveness.
(3) When referring to support or supplementary services, we mean those whose provision, albeit not compulsory, does differentiate an institution's educational offering. Four items are considered as such: supplementary services (job bank, internships in firms or institutions, extracurricular activities); synchronous activities (such as videoconferences or chats); face-to-face activities (meetings, conferences, face-to-face tutorials, etc.); and finally, virtual spaces for student interaction, such as discussion forums and groups.
(4) Finally, there are four user interface indicators. Listed in order of relative importance, they are speed of navigation, of web-page loading, and of file uploads and downloads; the ability to connect to the campus quickly at all times; the robustness of the campus (whether it crashes when web pages are loading, or when files are being uploaded or downloaded); and finally, the simplicity and intuitiveness of campus navigation. Of these, the one that has the biggest impact is navigation speed, and the one that has the least impact is navigation simplicity and intuitiveness. Frequent use of the interface almost certainly has a kind of 'experience effect' that causes aspects pertaining to navigation speed, connectivity and robustness to acquire a prevalent importance over those pertaining to ease of use.
This dimension corresponds to what some authors (Zeithaml et al., 2002;Parasuraman et al., 2005) call reliability or system availability, that is to say, to the technical operation of the website, meaning that it is always available and operating properly.
The structural dimension that we have just presented is similar to the one obtained in studies on PSQ conducted in other contexts by authors such as Grönroos (1990), Grönroos et al. (2000), Liljander et al. (2002) and Riel et al. (2001, 2004). In addition, a higher degree of correlation was found between the first three dimensions (those representing what the institution offers) than between these and the fourth dimension pertaining to the user interface (which represents the way or means by which the service is provided) (Liljander et al., 2002). This is consistent with the well-known Grönroos model (1990), which postulates two basic types of quality dimension: technical quality, referring to the service outcome, that is to say, what the consumer actually receives; and functional quality, an expression of the service provision process itself.

Conclusions
Face-to-face and online students' perceptions of service quality have become a critical strategic aspect of differentiation in today's university system. Despite its importance, very few relevant theoretical contributions (Parasuraman et al., 2005) to the PSQ construct in VLEs have so far been made. This work partly fills that void. The four identified dimensions (core business, facilitative services, support services and user interface) are not analogous to those of the original SERVQUAL model (Parasuraman et al., 1988), though they are similar to those of other studies conducted in the field of e-services (Riel et al., 2001). These dimensions are, in essence, an extension of the Grönroos model (1990), because they can be grouped into two basic overarching dimensions, technical quality and functional quality, with the latter being especially adapted to the specific characteristics of virtual environments.
According to the research presented here, universities that operate online should bear in mind that, when it comes to assessing the quality of the e-service they provide, their students pay a great deal of attention to the teaching that a university offers. To be precise, students focus above all on the tutors' knowledge, experience and pedagogical capacity, and on the feedback they receive on the activities they carry out.

About the authors

M. Jesús Martínez-Argüelles is a senior lecturer in the field of business organisation in the Economics and Business Studies Department at the UOC. She is also the director of the Business Management and Administration undergraduate programme at the same university. Her main lines of research include the spread and use of information and communication technologies (ICTs), and how they impact on business organisation and strategy. She has taken part in research projects, has had articles published, and has presented papers on the impact of ICTs in higher education, particularly from an e-learning management perspective, and on teleworking and its implications for organisations and for work-life balance.

José M. Castán Farrero is a university professor in the Economics and Business Organisation Department at the UB. Besides his interest in teaching and his intense dedication to it, he currently undertakes his research activities within the consolidated research group on technological innovation in the production system (GITASP). His main lines of research include the sources of competitive advantage in firms. From this perspective, he has studied logistics, quality management, R&D investment, foreign participation in share capital, and financial management as sources of competitive advantage. He has had various books and articles published, and has given numerous papers on these topics at conferences.