Using M-Learning on Nursing Courses to Improve Learning


CIN: Computers, Informatics, Nursing: May 2011 - Volume 29 - Issue 5 - p 311-317
doi: 10.1097/NCN.0b013e3181fcbddb



With the new "anytime, anywhere computing" paradigm (ubiquitous computing), a shift from "electronic" to "mobile" services has begun. Just as e-commerce has extended to m-commerce, e-learning now includes m-learning (mobile learning).1 In the field of teaching and learning, the expected benefits of this new mobility include, among others, more efficient instruction and improved learning outcomes. In this framework, it is crucial to create new tools that add value to the teaching-learning process, but it is also important to have tools that allow us to exert some control over the results of the learning process. Within this context, this article presents a new mobile application designed for self-assessment. It allows students to test their knowledge and expertise on a specific topic using questionnaires designed by their teachers. Young students use mobile phones as an integral part of their lives and regard them as a crucial communication tool. Providing them with learning tools that run on these mobile devices is therefore important, because it will increase their motivation to learn.

However, designing and implementing new tools is not enough; the tools must form an integral part of learning activities, and their usefulness must also be measured. For these reasons, this article also describes the actions undertaken to test the new application on a group of nursing students. In Background and Motivation, we present a brief review of the latest developments in m-learning along with the motivation for our research. In The System, we present the mobile application and the system that supports the learning action. In The Experiment, we describe the experiments carried out. Finally, the Results and Conclusions sections present the results, analysis, and conclusions.


Background and Motivation

A number of definitions, covering a wide range of aspects, have been used for the term "m-learning." Some identify m-learning as a mere evolution of e-learning, whereas others define it as an independent trend that has its origins in the ubiquitous nature of present-day communication systems; these identify m-learning as "location-independent and situation-independent."2 McLean3 considers m-learning a term coined to cover the wide range of issues created by the convergence of new mobile technologies, wireless infrastructure, and developments in e-learning. Further analysis of the available definitions allows us to summarize the two essential features of m-learning: (1) because it is mobile, m-learning allows the educational process to take place anywhere and at any time; and (2) it requires some kind of handheld device (small and easy to carry) along with a communication technology.

A recurring theme in different works on m-learning4,5 is that mobile/electronic education should not attempt to replace traditional education, but should instead support both students and teachers by providing them with services that facilitate teaching, learning, and/or any related administrative tasks. The basic approach is integrative, combining a variety of (mobile and nonmobile) devices and using either wired or wireless transmission technologies.1 This hypothesis is supported by Houser et al.6 After analyzing successful m-learning projects, they conclude that all the projects they studied used mobile devices as part of a mixed educational program (b-learning or "blended learning") that combined traditional attendance-based education with Web learning and mobile components.

Shepherd7 proposes three possible uses for m-learning. (1) The first is to use m-learning during the preparatory phase, before any learning actually takes place, for "diagnosis." This includes pretests, learning-style tests, attitudinal surveys, and the gathering of prerequisite data about the learner's experience, job, and qualifications. These data can then be used to avoid wasting time during teaching by adapting the learning experience to each learner's profile. (2) The second is to use m-learning to support students as they prepare for their examinations, review content, and reinforce the knowledge they have acquired so far. (3) Finally, the most interesting challenge for m-learning (according to Shepherd) is the contribution it can make to continuous, on-demand learning (usually applied to real-world problems).

Another issue that must be considered is the kind of content that can be delivered by means of m-learning. As Wuthrich et al8 point out, the special features of the mobile devices used in this type of initiative mean they can be used as a conduit for distributing self-evaluation tools and study guidelines and, in some cases, for enabling feedback between educators and learners. These authors emphasize the essential role that tests and questionnaires play in knowledge acquisition and consider mobile devices especially well suited to the questionnaire format, given the circumstances of mobility that students face nowadays.

The research presented here is based on these ideas. Our first aim was to build a mobile application that could be used as an aid to students' self-evaluation. Teachers design their learning action in a traditional way but use the new tool as support: they can provide the students with a set of questionnaires designed to reinforce learning. With this tool, we demonstrate how current technologies enable mobile learning initiatives to be conducted in accordance with the aforementioned trends. However, a demonstration alone would contribute little to the present state of the art. Therefore, the second objective of this research was to assess the real usefulness of the application, both in terms of how it performs with a group of learners and in terms of the students' attitude toward the new tool and the methods its inclusion in the course gives rise to. None of the previously cited works includes similar surveys that would allow any real conclusion to be drawn about the effects of mobile self-assessment on learning actions, and our aim is to fill that gap.

Many works at different educational levels incorporate m-learning into the teaching and learning process. Martín and Carro,9 from the Universidad Autónoma of Madrid, present a system to support the generation of adaptive mobile learning environments; Ktoridou et al10 evaluate the viability of integrating mobile technology into the teaching and learning processes in higher education; and Park11 discusses the characteristics and requirements of m-learning based on ubiquitous computing. In the field of nursing, there are also many works on improving learning, such as those of Tilley et al12 and Crane,13 although these contributions use a Web environment (e-learning); Maag's14 work from the University of San Francisco, in contrast, focuses on the use of m-learning in nursing education.


The System

A Web-based system was designed and built to support mobile self-assessment in traditional class-based learning. The architecture (Figure 1) comprises three different systems: (1) a Web server to store, deliver, and evaluate online tests; (2) the mobile application that students use to connect to the server, download questionnaires, and complete them; and (3) a Web-based front-end that offers different functionalities to each kind of user. Students can use the front-end to complete their tests, and teachers can use it to configure questionnaires and review students' results. An administrator role also exists; administrators are responsible for managing users (students, teachers, and other administrators).

The system architecture.

The system was developed using Java technology and XSLT transformation sheets, with Java Micro Edition (both by Oracle Corporation, Redwood Shores, CA) for the mobile application. The XSLT technology makes it easier to adapt output to Web and mobile system requirements. The mobile application was tested on a wide range of available devices. Mobile devices must be Java enabled to run the application, and they must also support a current Internet connection technology (eg, GPRS or UMTS).

Every student is provided with a log-in and password to access both the mobile and the Web application. Students first connect to the server, where a list of all available subjects and tests is displayed. They can then complete any of the available tests, get their results, and review their answers (Figure 2). The Web application offers the same features, the only difference being that all the questions are presented together in a single view (Figure 3).

Mobile application screenshots (in Spanish). Left, A question. Right, Posttest results.
Web application for the students (in Spanish).
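The download-complete-review cycle just described can be sketched as a small scoring routine. This is an illustrative sketch only (the actual system was written in Java ME); the question fields and the sample items are hypothetical.

```python
# Illustrative sketch of scoring one self-assessment attempt: a downloaded
# test is a list of multiple-choice questions, and the routine returns the
# normalized score plus a per-question review, as in Figure 2. All field
# names and sample questions are hypothetical, not taken from the system.

def score_test(questions, answers):
    """Return (score in the 0-1 range, per-question review) for one attempt."""
    review = []
    correct = 0
    for question, given in zip(questions, answers):
        is_right = given == question["correct"]
        correct += is_right
        review.append({"prompt": question["prompt"], "given": given,
                       "correct": question["correct"], "right": is_right})
    return correct / len(questions), review

test = [
    {"prompt": "When is the first dose of hepatitis B vaccine given?",
     "correct": "At birth"},
    {"prompt": "How long after administration is the Mantoux test read?",
     "correct": "48 to 72 hours"},
]
score, review = score_test(test, ["At birth", "1 week"])
print(score)  # 0.5
```

Because every attempt returns both a score and a review list, the same structure can back the posttest results screen on the phone and the teacher's record on the server.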

Teachers can upload and configure tests. One important feature of the system is that it supports the IMS Question and Test Interoperability (QTI) specification15: the system internally stores and manages all tests and questions in that format. Question and Test Interoperability is a widely adopted specification that ensures interoperability between systems; tests that conform to it can later be moved to any other compliant system. The QTI specification covers a wide range of question types, including multiple choice, gap fill, ordering, association, and open answer, among others. At present, however, the mobile application supports only multiple-choice questions, so teachers must design questionnaires using this format. The number of answers per question varies, depending on how many the teacher considers suitable. Teachers are also able to review each student's achievement, as all personal scores are stored (Figure 4).

Web application showing a student's achievement (in Spanish).
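To give a flavor of the QTI format the system stores, the following sketch parses a simplified QTI 2.1-style multiple-choice item with Python's standard library. The fragment omits the namespace declarations and many attributes a fully valid QTI file would carry, and the sample question is our own.

```python
import xml.etree.ElementTree as ET

# Simplified QTI 2.1-style multiple-choice item. A real QTI file declares
# the imsqti namespace and further attributes; this fragment keeps only the
# elements needed to illustrate the structure (responseDeclaration with the
# correct answer, and a choiceInteraction with the prompt and choices).
ITEM = """
<assessmentItem identifier="nlo2-q1" title="Mantoux test">
  <responseDeclaration identifier="RESPONSE" cardinality="single"
                       baseType="identifier">
    <correctResponse><value>B</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" maxChoices="1">
      <prompt>How long after administration is the Mantoux test read?</prompt>
      <simpleChoice identifier="A">24 hours</simpleChoice>
      <simpleChoice identifier="B">48 to 72 hours</simpleChoice>
      <simpleChoice identifier="C">1 week</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
"""

def parse_item(xml_text):
    """Extract prompt, choices, and correct answer from one QTI-style item."""
    root = ET.fromstring(xml_text)
    correct = root.find("./responseDeclaration/correctResponse/value").text
    interaction = root.find("./itemBody/choiceInteraction")
    return {
        "prompt": interaction.find("prompt").text,
        "choices": {c.get("identifier"): c.text
                    for c in interaction.findall("simpleChoice")},
        "correct": correct,
    }

item = parse_item(ITEM)
```

Because the correct answer lives in the item itself, any compliant system that imports the file can both display and grade it, which is the interoperability property the article relies on.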

The architecture and systems are easy to install, use, and maintain, which ensures that many institutions can afford the cost of using them. However, the features currently on offer are limited, making it difficult to use the system as the sole or central part of a learning action. Indeed, it is designed as a complementary system that can be incorporated into a new or existing learning action. The underlying architecture and technologies also ensure that it can quickly be extended at a low cost. To cite just one example: QTI questions and tests are stored and delivered using an XML dialect, and the use of XSLT technology makes it easy to transform the XML data into any output (user-readable) format used by the mobile and Web applications (note that, from a technological point of view, these two formats are rather different).
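The single-source, multiple-output idea can be sketched without a full XSLT engine: one stored question rendered to two targets, mirroring what the system's XSLT sheets do for the Web and mobile front-ends. The data structure and both renderers are hypothetical simplifications.

```python
# Sketch of rendering one stored question to two output formats, analogous
# to applying two XSLT sheets to the same QTI XML. The dict and renderers
# are illustrative, not the system's actual templates.

QUESTION = {"prompt": "How long after administration is the Mantoux test read?",
            "choices": {"A": "24 hours", "B": "48 to 72 hours", "C": "1 week"}}

def render_html(question):
    """Radio-button form fragment for the Web front-end."""
    rows = "".join(
        f'<li><label><input type="radio" name="answer" value="{key}"> '
        f"{text}</label></li>"
        for key, text in question["choices"].items())
    return f"<p>{question['prompt']}</p><ul>{rows}</ul>"

def render_mobile(question):
    """Compact lettered text listing for a small mobile screen."""
    lines = [question["prompt"]]
    lines += [f"{key}) {text}" for key, text in question["choices"].items()]
    return "\n".join(lines)
```

Keeping the question data in one canonical format and pushing all presentation differences into the renderers is exactly what makes adding a third output target (say, a tablet layout) cheap.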


The Experiment

Learning Actions and Experimental Groups

Our aim was to test the degree to which mobile self-assessment improves the achievement of nursing students. So, as our experimental group, we chose a group of 28 third-year students aged between 20 and 21 years. We were able to collect accurate data enabling us to reach conclusions on the improvements that mobile self-assessment can produce when it is targeted at a specific group.

The next step was to arrange the subjects being assessed into a set of learning objectives (LOs). For the nursing course, the LOs were drawn from the official syllabus approved for that degree by the National Council of Universities. The LOs were the following:

  • Objective 1 (NLO 1): to become acquainted with the vaccination schedule for the region.
  • Objective 2 (NLO 2): to understand and use the Mantoux test.
  • Objective 3 (NLO 3): to become acquainted with the complementary feeding of healthy 0- to 18-month-old infants.
  • Objective 4 (NLO 4): to understand and apply the treatment for diabetes.

Teachers designed a self-assessment test with 10 questions for each LO. Single-choice questions with four to five options were used. Questionnaires were later adapted to conform to the QTI specification and uploaded to the Web server. Teachers and students were finally provided with their log-in and password.

Conducting the Experiment

The mobile learning tool is designed for self-assessment, so the obvious way to distribute it is to make it available to every student by installing the application on his/her mobile phone. Although this is probably the best option, it has some disadvantages. First, technical problems may arise because of the many different kinds of devices that students own. Technical support was provided, but it sometimes proved impossible to run the application because of hardware, software, or communication requirements that terminals did not meet. Second, the mobile application needs an Internet connection so that the questionnaires can be downloaded and the responses sent back to the server. This obviously requires an appropriate device, and the communication also has a cost; although this cost is not high, it may be beyond the limited budget of a young adult. To solve these problems, teachers were temporarily provided with a set of five preconfigured mobile phones so that they could schedule sessions in which the students used these devices to perform their self-assessment. Two 50-minute sessions were scheduled for the group. During these sessions, assistance was available from the students' teacher and from technicians in attendance. Finally, if they preferred, students could also use the Web front-end to access the questionnaires from any computer with an Internet connection and a Web browser. A few of them used this method, but only after they had taken the mobile test and usually because they wanted to recheck their answers. The idea was to provide every student with a variety of ways to complete the tests.

As all the mobile sessions were intended for self-assessment, no limit was set on the number of attempts the students could make. This was reasonable because the mobile assessment results carry no weight in the students' final grade. Moreover, the Web system records all of a student's attempts and makes this information available to the teacher to use if he/she considers it worthwhile. For both these reasons, imposing any limit on the number of attempts makes no sense: through trial and error, the students can make repeated attempts to answer correctly, and this will not improve their final grade unless they really gain some understanding of the concepts being studied. Students were graded (for each module) using the method each teacher normally used, depending on his/her preference and experience, but also in accordance with the requirements imposed by his/her institution and any other public regulations. Examination methods included papers, examinations, and practical tests. Final grades were also provided by the teachers; to compute them, it was assumed that each module carried the same weight. All the experiments and grading were conducted during the 2008-2009 spring semester, and all the aforementioned LOs form part of the course syllabus taught during that semester. It is also important to note that the control group was selected from the same institution, taking care to choose one that had shown similar achievement (up until the time of the experiment) to the experimental group. This selection was an easy task because every teacher had performance data from the previous semester.


Results

Outcome data collected on the group are presented and discussed in this section. It must be borne in mind that every teacher provided a grade for every LO for each student, along with a final mark for the course. Students' opinions were also appraised in an attitudinal survey.

Achievement Improvement

The students' achievements were collated into the set of defined LOs for both the experimental and the control groups. They were also normalized in the 0-to-1 range. It should be noted that, in accordance with our national system and after this normalization, a final mark of 0.5 or above is a pass mark.

The results for the nursing course are shown in Figure 5 and Table 1. No significant differences could be determined between the final scores of the two groups in Figure 5. As the descriptive statistics in Table 1 show, the final mean scores of the 28 learners in the control and experimental groups are 0.6904 and 0.7615, respectively, which represents an improvement of 10.3%. Similar results were obtained for NLO 1, NLO 2, and NLO 3, with increments of 12.64%, 11.95%, and 11.78%, respectively. A more moderate effect can be observed in NLO 4, where the mean score increased by just 5.25%. Teachers of this course may feel, when designing future learning actions, that the mobile application contributed little or nothing to the students' achievement in NLO 4. A subsequent analysis carried out with the help of teachers on the course upheld the validity of this result: the topics covered in this LO are mainly practical and, as such, are difficult to test with a mobile-assessment application. As will be discussed later, a new and interesting line of research remains open here. Table 1 also gives the results of the independent-samples t test comparing the control and experimental groups. The difference in mean scores does not reach statistical significance for any LO or for the final mark (P > .05 in all cases). Moreover, because of the small sample size and the limited functionality of the application, we cannot generalize these results.

A box plot of the final scores for the nursing group.
Table 1
Table 1:
Grades for the Nursing Group
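The improvement percentages and the independent-samples t test reported above can be reproduced with the standard library alone. The sketch below uses the reported group means (0.6904 and 0.7615); the pooled-variance t statistic is included for illustration, since the raw per-student scores are not reproduced in the article.

```python
import math
from statistics import mean, variance

def improvement_pct(control_mean, experimental_mean):
    """Relative improvement of the experimental mean over the control mean."""
    return 100 * (experimental_mean - control_mean) / control_mean

def t_statistic(group_a, group_b):
    """Pooled-variance independent-samples t statistic for two score lists."""
    na, nb = len(group_a), len(group_b)
    pooled = ((na - 1) * variance(group_a)
              + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / math.sqrt(pooled * (1/na + 1/nb))

# Final mean scores reported for the control and experimental groups:
print(round(improvement_pct(0.6904, 0.7615), 1))  # 10.3
```

With the actual per-student score lists, comparing the resulting t statistic against the critical value at 28 + 28 - 2 degrees of freedom reproduces the P > .05 conclusion stated for Table 1.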

Attitudinal Survey

The students were also asked to answer a 10-item questionnaire designed to evaluate their opinion of the learning tool and their level of satisfaction. The instrument was based on a five-point Likert scale, with the items shown in Table 2, all of them scored on a positive scale. Similar instruments have been used by other researchers.16 The most important results are summarized in Table 2. The average for these questions is 4.08 on the five-point scale, indicating that the students' attitude to this experience was very positive. The lowest-rated statement was item 2, which is related to the students' learning. This is reasonable, as the application is designed for self-assessment and reinforcement. Another statement with a low rating is item 9, which refers to the students' motivation toward new learning. We feel it would be worth the effort for both teachers and researchers to design new experiments and learning actions that increase student motivation, as the lack of motivation was caused by a number of factors that are not easy to summarize here. All other items were rated above 3.5. The ratings for items 4, 7, and 10 are especially significant. Item 4 demonstrates how user-friendly the tool is: students became acquainted with it very quickly, and it is worth stating that it took longer to train the teachers than the students. Item 7 relates to the time available to complete the activities; given its positive rating, the learning activities and sessions appear to have been adequately scheduled, with enough time to complete them. The high rating given to item 10 reflects a very positive attitude toward the learning experience.

Table 2
Table 2:
Attitudinal Survey Resultsa

Answer variability is low: the overall SD is 0.93, which represents less than one-fourth of the mean, so the answers can be said to be homogeneous. To complete the analysis of the attitudinal survey, Cronbach α was computed to measure its internal consistency. The result was .86. This value is above the commonly accepted threshold of .7, which suggests that the test items measure the same construct.
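For reference, Cronbach α can be computed directly from the item-by-respondent rating matrix as k/(k - 1) * (1 - sum of item variances / variance of total scores). The sketch below uses population variances and invented sample responses, not the study's data.

```python
from statistics import pvariance

def cronbach_alpha(ratings):
    """Cronbach alpha; ratings holds one list of k item scores per respondent."""
    k = len(ratings[0])
    item_columns = list(zip(*ratings))        # transpose to per-item columns
    item_var = sum(pvariance(column) for column in item_columns)
    total_var = pvariance([sum(row) for row in ratings])
    return k / (k - 1) * (1 - item_var / total_var)

# Perfectly consistent respondents (each repeats one rating across all
# items) yield the maximum internal consistency, alpha = 1.
perfect = [[5, 5, 5], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(perfect), 6))  # 1.0
```

Applied to the real 10-item survey matrix, the same function would return the reported .86.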

Conclusions and Future Work

A new m-learning system, consisting of a mobile application for student self-assessment, a server side, and a Web front-end, is presented in this article. Its conformance with current specifications is a remarkable feature, as it ensures that questionnaires designed for it can later be transferred to any other compliant system. A nursing course was adapted to include support for this tool, and different sessions were scheduled to test its usability, usefulness, and performance. The students' achievement on a number of the LOs of this course was collected for statistical analysis. Results show an improvement in students' achievement, with a 10.3% improvement in the final mean score and improvement in 65% of cases, although without statistical significance. These results suggest that including the new tool in the learning actions produces a moderate improvement in the students' achievement. An attitudinal survey was also carried out, and its results suggest a fairly positive attitude among the students.

Apart from these results, additional conclusions may be drawn. First, the decreasing level of improvement that occurred on the nursing course must be considered. It could be argued that it may have been related to the students' age. It seems that teenagers feel more at home with new technologies, and this familiarity increases their motivation and as a result improves their performance. Older students are not as motivated by the mobile application as are their younger colleagues, and this may explain their lower (but still important) improvement. Conversations with teachers point in this direction, but additional research will be required to confirm it and find empirical evidence to support this point. Further research will be conducted in this area with a larger sample of learners and courses.

We have also observed that the improvement for NLO 4 (to understand and apply the treatment for diabetes) is remarkably low. This was due to the fact that applying treatments is principally practical in nature, making it extremely difficult to include in a self-assessment activity such as the one incorporated into the proposed mobile application. It is therefore necessary to look for alternative ways to design the LOs, and the way they are assessed, within this type of course. The easiest way would be to exclude this LO from the m-learning activity, but we think that research into new mobile applications could also be carried out. 3D applications use three-dimensional graphics to provide visual representations that are more appealing to the human eye because they represent reality more precisely than traditional two-dimensional applications. They have shown their learning potential,17 and 3D technology for mobile devices and its application in learning is also becoming a reality.18 Mobile 3D learning applications could be investigated to determine their applicability to teaching practical competencies.

It is also important to take into account the fact that the attitudinal survey suggests that students do not learn using this tool. Although this result is to be expected, as the tool is designed for self-assessment and is therefore used to reinforce acquired knowledge rather than to gain new knowledge, we feel this issue should be given greater consideration in the future. The low ratings obtained by the related items in the attitudinal survey suggest that students have low expectations about learning with their mobile phones. However, we believe it is important to design and analyze tools that support knowledge acquisition as well as knowledge reinforcement.

Personalization is a hot topic in its own right, which is also connected with knowledge acquisition tools and mechanisms. Adaptive tests and systems have been studied thoroughly,19 and it is also possible to find work on m-learning adaptive tests.20 The system we have presented can be extended to include these kinds of tests; this could play an important role in current m-learning applications (described in the Introduction). Furthermore, if at a later date learning content inclusion is considered, adaptive technologies also offer a wide variety of techniques for improving learning.


1. Lehner F, Nösekabel H. The role of mobile devices in e-learning-first experiences with a wireless e-learning environment. Proceedings of the IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE'02). Växjö, Sweden; 2002.
2. Nyíri K. Towards a philosophy of m-learning. Proceedings of the IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE'02). Växjö, Sweden; 2002.
3. McLean N. The M-Learning Paradigm: An Overview. Sydney, New South Wales, Australia: Royal Academy of Engineering and the Vodafone Group Foundation; 2003.
4. Mobilearn. The Mobilearn Project Vision. The MOBILearn Project. 2003. Accessed November 23, 2009.
5. Vavoula GN, Lefrere P, O'Malley C, Sharples M, Taylor J. Producing guidelines for learning, teaching and tutoring in a mobile environment. Proceedings of the 2nd IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE'04). Bristol, UK; 2004.
6. Houser C, Thornton P, Kluge D. Mobile learning: cell phones and PDAs for education. Proceedings of the International Conference on Computers in Education (ICCE'02). London, UK: IEEE Computer Society; 2002.
7. Shepherd C. M is for Maybe. Brighton, UK: Fastrak Consulting Ltd; 2001.
8. Wuthrich C, Halverson R, Griffin TW, Passos NL. Instructional testing through wireless handheld devices. Proceedings of the 33rd ASEE/IEEE Frontiers in Education Conference. Boulder, CO; 2003.
9. Martín E, Carro RM. Supporting the development of mobile adaptive learning environments: a case study. IEEE Trans Learn Technol. 2009;2(1):23-34.
10. Ktoridou D, Gregoriou G, Eteokleous N. Viability of mobile devices integration in higher education: faculty perceptions and perspective. Proceedings of the International Conference on Next Generation Mobile Applications, Services and Technologies (NGMAST'07). Cardiff, Wales, UK: IEEE Computer Society; 2007.
11. Park YC. Study of m-learning system for middle school. Proceedings of IEEE International Conference on Industrial Engineering and Engineering Management. Singapore: IEEE Computer Society; 2008.
12. Tilley DS, Boswell C, Cannon S. Developing and establishing on-line student learning communities. Comput Inform Nurs. 2006;24(3):144-149.
13. Crane KR. Systematic Assessment of learning outcomes: developing multiple-choice exams. Comput Inform Nurs. 2002;20(4):127-128.
14. Maag M. iPod, uPod? An emerging mobile learning tool in nursing education and students' satisfaction. Proceedings of the 23rd annual ascilite conference: Who's learning? Whose technology? Sydney, Australia: Sydney University Press; 2006.
15. IMS Question and Test Interoperability Information Model-v2.1. IMS Global Learning Consortium. 2009. Accessed November 23, 2009.
16. Garrido PP, Grediaga A, Ledesma B. Visual JVM: a visual tool for teaching java technology. IEEE Trans Educ. 2008;51:86-92.
17. Chittaro L, Ranon R. Web3D technologies in learning, education and training: motivations, issues, opportunities. Comput Educ. 2007;49:3-18.
18. Gutierrez JM, Otón S, Jiménez ML, Barchino R. M-learning enhancement using 3D worlds. Int J Eng Educ. 2008;24:56-61.
19. Barchino R. Assessment in learning technology standards. US-China Education Review. 2005;2(9):31-35.
20. Triantafillou E, Georgiadou E, Economides AA. The design and evaluation of a computerized adaptive test on mobile devices. Comput Educ. 2008;50:1319-1330.

Mobile assessment; Mobile computing; Nursing studies; Online education

© 2011 Lippincott Williams & Wilkins, Inc.