Supporting Learner Mobility in SCORM-Compliant Learning Environments with ISN Mobler Cards
Publication Type: Journal Article
Source: Connections: The Quarterly Journal, Volume 12, Issue 1, p. 31–43 (2012)
Introduction
Over the last decade, mobile information technologies have become a ubiquitous part of daily life. Research on mobile learning dates back to the late 1990s,[1] yet the field remains among the younger research areas in educational technology. Given the overwhelming success of smart mobile devices on the global scale, this technology appears to be well suited to extending the reach and continuity of educational programs.
Mobile technologies have become increasingly relevant for education and training in security and defense organizations, not only because of the market success of mobile phones and other portable devices, but also because many mobile technologies have become part of the standard infrastructure in these organizations. Mobile technologies are part of the information networks that characterize the professional environments of soldiers, police officers, firefighters, and other security workers. An example of such a networked infrastructure in the defense sector is the “Gladius” system, which integrates infantry and vehicle-based weapon systems in the German military.[2] Further development towards “network-enabled” combat systems is currently underway.[3] These examples illustrate that the scope of technological change represented by the mobile revolution goes far beyond the availability and use of mobile phones.
Education and training in security and defense organizations are challenged by mobile technologies and the new relevance of mobility from four perspectives:
- Technological
- Socio-technological
- Professional complexity
- Organizational.
This essay emphasizes the organizational perspective. Many organizations have already made substantial investments in learning management systems (LMSs) for advanced distributed learning (ADL) infrastructure and in developing appropriate educational material. Significant investments have also been made in the training of instructors and authors to make good use of the available ADL solutions; indeed, many organizations have a rich pool of educational resources available in the SCORM format.[4] The introduction of mobile technologies into education and training is particularly challenging because many organizations have not yet completed the initial adoption of conventional ADL solutions. This raises the question of whether security and defense organizations have to repeat this adoption process and create new educational resources and programs if they want to introduce scalable mobile learning solutions.
Mobile Learning and SCORM
The Sharable Content Object Reference Model (SCORM) is the most prominent interoperability standard for educational material used in Web-based training. Originally developed by the ADL Co-Labs in the United States, it has emerged as the industry standard for exchanging Web-based training material. SCORM introduced interoperability standards that make educational material independent of the underlying delivery platform, and it is one of the core elements for sustaining investments in the development of educational material. The “reference model” has been widely adopted for packaging and exchanging educational material between so-called “run-time” environments and “learning management systems.” SCORM specifies three aspects of ADL solutions:
- Content packaging, exchange, and delivery
- Content arrangement and sequencing
- Interactive content and data persistency.
The central elements of SCORM are “content objects.” The terminology of SCORM refers to content objects as “sharable content objects,” or “SCOs” for short. SCOs are educational resources that can be shared across ADL courses, in contrast to non-reusable content, such as submissions to discussion forums or student presentations, which cannot be shared across courses. SCOs can be text documents, videos, audio files, interactive multi-media, as well as tests and assessments.
The most prominent aspect of SCORM is content packaging. This aspect defines how related educational resources need to be packaged, so that learning management systems produced by different vendors can consistently export and import educational material. Content packaging in SCORM includes not only the storage format for educational materials for distribution, but also the internal logical structure of and between SCOs.
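To make the packaging aspect more concrete, the following sketch models the main elements of a SCORM manifest (the imsmanifest.xml file at the root of each content package) as TypeScript types. The type and field names are illustrative simplifications of the manifest elements, not a complete mapping of the specification.

```typescript
// Simplified model of a SCORM content package manifest (imsmanifest.xml).
// The names loosely follow the manifest elements; this is an illustrative
// sketch rather than a complete rendering of the specification.

interface ManifestItem {
  identifier: string;            // unique id of the item in the organization tree
  title: string;                 // label shown in the course navigation
  identifierref?: string;        // points to a resource; absent for grouping items
  children: ManifestItem[];      // nested items form the chapter/section structure
}

interface ManifestResource {
  identifier: string;            // referenced by ManifestItem.identifierref
  scormType: "sco" | "asset";    // SCOs talk to the run-time API, assets do not
  href: string;                  // launch file, relative to the package root
  files: string[];               // all files belonging to this resource
}

interface ContentPackage {
  organizations: ManifestItem[]; // the internal logical structure of the package
  resources: ManifestResource[]; // the physical educational material
}
```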
The second aspect of SCORM defines the arrangement and sequencing of SCOs in the run-time environment. There are two levels of arrangement and sequencing. The first level of sequencing is based on the internal logic of the content package. This is similar to the arrangement of print material into chapters and sections. The second level is the dynamic arrangement of SCOs, depending on how the learners use them. This is typically referred to as sequencing, and allows the definition of complex and adaptive interactions as well as simple forms of personalized presentation.
The third aspect of SCORM defines how interactive SCOs can store and retrieve data from the run-time environment, as well as how different interactive SCOs can share data through a standardized application programming interface (API). This aspect of SCORM provides a unified approach that empowers content authors to create complex and integrated learning experiences that consider how learners interact with the learning material. This part of SCORM extends the data exchange protocols of existing content formats, such as the one defined by the IMS Question and Testing Interoperability (IMS QTI) specification.[5] IMS QTI specifies a standard interface for delivering tests and assessments as well as collecting the results.
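To illustrate this third aspect, the sketch below shows how an interactive SCO could persist a result through the SCORM 2004 run-time API. The API discovery logic (locating the API object in the browser window hierarchy) and the error handling that a production SCO requires are omitted; the data model elements used here are part of the SCORM 2004 run-time specification.

```typescript
// Minimal sketch of an interactive SCO storing data via the SCORM 2004
// run-time API. The LMS exposes an API_1484_11 object to the SCO's window.

declare const API_1484_11: {
  Initialize(param: ""): "true" | "false";
  GetValue(element: string): string;
  SetValue(element: string, value: string): "true" | "false";
  Commit(param: ""): "true" | "false";
  Terminate(param: ""): "true" | "false";
};

function reportResult(scaledScore: number): void {
  API_1484_11.Initialize("");
  // Read data persisted by the LMS, e.g. whether this is a resumed attempt.
  const entry = API_1484_11.GetValue("cmi.entry");
  console.log(`entry mode: ${entry}`);
  // Persist the learner's result in the standardized data model.
  API_1484_11.SetValue("cmi.score.scaled", scaledScore.toFixed(2));
  API_1484_11.SetValue("cmi.completion_status", "completed");
  API_1484_11.Commit("");
  API_1484_11.Terminate("");
}
```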
One important characteristic of SCORM is that the specification is agnostic with respect to the capabilities of the delivery platform or the devices that learners use to access and interact with the learning material. This does not mean that every SCORM package can and will run on any compliant delivery platform, but that it is the responsibility of the content author to design, select, and arrange SCOs appropriately for any kind of SCORM-compliant run-time environment. This characteristic suggests the possibility of SCORM-compliant run-time environments for mobile learning experiences, and several solutions have been analyzed and discussed elsewhere.[6] Most of these solutions focus on SCORM’s content packaging and delivery aspect. These solutions discuss the automatic adaptation of SCOs to the capabilities of the device used by the learner. The underlying presumption is that learners in conventional and mobile learning environments will perform the same learning activities.
Nakabayashi, et al., present an approach for arranging and sequencing SCOs depending on the device capabilities.[7] The authors’ presumption is that learners perform different learning activities on mobile devices and in conventional ADL environments. Therefore, the approach separates SCOs for mobile, conventional, and mixed delivery modes.
Degani, et al., analyze the new requirements of mobile learning for content delivery and educational scenarios.[8] This analysis indicates differences in use cases for SCOs in mobile learning contexts and in conventional ADL run-time environments. The authors identify new use cases for SCOs that are currently unsupported by SCORM and suggest a mobile-SCORM (or m-SCORM) extension that primarily focuses on the sequencing and interaction aspects of SCORM.
The central limitation of both approaches is that they require extensions of the SCORM specification, which carries with it a corresponding reduction in interoperability. In order to benefit from these extensions, a run-time environment has to be substantially extended in its functional capabilities. For security and defense organizations, such extensions create a significant barrier to adoption, because these organizations rely on external vendors who would need to implement the features into their products. Furthermore, both approaches require the development of new educational material.
Challenges for Research
The research presented in this contribution addresses the need for scalable solutions that require no conceptual changes to existing educational material and can be achieved with minimal extensions of learning management systems. Interoperability standards were specifically considered in the present work in order to minimize the organizational risk posed by introducing new educational concepts.
The present research addressed the challenge of applying SCORM-compliant learning material for supporting mobile learning and extending the continuity of learning. The objective was to identify whether the reuse of existing learning material for mobile learning experiences can be achieved by adapting existing SCORM concepts. This challenge relates to the SCORM aspects of content delivery, interactive resources, and data persistency.
Educational Design Underpinnings
The research and application development has been guided by observations and conceptual differences between the uses of mobile technologies and conventional desktop computing. In order to understand the differences between mobile learning and conventional ADL solutions, it is necessary to define the unique characteristics of mobile learning. Mobile learning can be characterized as the processes (personal and public) of coming to know through exploration and conversation across multiple contexts, among people and interactive technologies.[9] This definition avoids mentioning portable devices, while it highlights the relevance of context as a key educational dimension that is specific to mobile learning.
From the perspective of educational design, contexts are dynamic elements of mobile learning processes. This is contrary to the approaches of conventional ADL systems that consider the context of learning as constant throughout a learning activity (similarly, SCORM does not explicitly consider the context of accessing and using SCOs). However, two contextual factors are central to the design of many ADL solutions: learning time and Internet connectivity.
SCORM-compliant solutions typically presume that learners have sufficient time for learning and are connected via stable broadband services to the Internet. However, mobile learners often experience a lack of needed data connectivity when they have the time for learning, or they find that other tasks are more important than learning when the data connectivity is available. Although this mutual dependency of time for learning and availability of data connectivity is not explicitly required by the SCORM specification, it is frequently reinforced by the design of compliant run-time systems.
Closely related to these two factors (time for learning and data connectivity) is a direct consequence of dynamic contexts: interruption. Whenever a context changes, the learning experience is interrupted. This marks a fundamental paradigm shift in educational technology. Conventional ADL solutions aim to minimize interruptions and distractions in order to support the learners’ focus on achieving the learning objectives; the interruptions that remain are generally treated as accidental incidents that can be ignored. For mobile learning, on the other hand, interruptions cannot simply be designed away. Instead, handling interruption is one of the core principles of mobile application design, because interruptions can and will happen at any time.[10]
Instead of addressing these interruptions for generic SCORM modules, this article focuses on representing appropriate educational approaches within the concepts of SCORM. The solution is based on the instructional design concept of “micro learning.”[11] The core idea of micro learning is to minimize the organizational overhead of learning and to provide short learning activities. Micro learning has three defining characteristics:
- No or short preparation for learning
- Short, self-contained learning activities
- Immediate assessment and feedback.
Within this concept, a learning activity is a task presented to the learner. Each task is self-contained, which means that it has no sub-tasks and creates no immediate dependencies on other tasks. “Short” refers to the time required to complete a task. Unlike conventional educational design, micro-learning activities are not directly related to the achievement of learning objectives; learners achieve learning objectives only through repetition and in combination with other micro-learning activities.
The concept of micro learning is well suited to mobile learning because it accommodates procedural interruptions. Learners can interrupt and resume learning at any time, and due to the short completion time and self-contained nature of each task, the impact of procedural interruptions remains limited.
Micro learning emphasizes the individual learning activity, but it does not conceptualize the overarching learning process. However, learners require feedback on the individual learning activity and on the overarching learning process for motivation and self-regulation.[12] Previous research has indicated that relatively simple statistical metrics are beneficial for learners to orient and organize their learning.[13] As micro learning already measures the learning performance for activity-level feedback, more complex perspectives on personal experiences can be aggregated from this data across activities. Two types of feedback metrics can be differentiated based on monitoring activities: performance metrics and effort metrics.[14]
Performance metrics refer to the qualitative level of success in completing the learning activities, such as the number of correct responses or the speed of completing test items. These metrics are qualitative because they provide information about the quality of the learning in relation to the learning objectives. Effort metrics refer to the quantitative aspects of learning: they indicate how much effort a person invests in learning. Effort metrics are quantitative because they only refer to the number of activities that were performed and provide little information about the achievement of learning goals. Both types of metric provide valuable information for evaluating one’s learning and sustaining motivation, and both have been identified as essential elements of self-regulated and self-managed learning processes.[15] Such “learning analytics” can be used to broaden the perspective on the learning process beyond the level of single learning activities.[16]
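The difference between the two metric families can be illustrated with a short sketch. The Attempt record is a hypothetical stand-in for the monitoring data that a run-time environment might collect; the two functions compute one performance metric and one effort metric from the same log.

```typescript
// Hypothetical monitoring record for one handled learning activity.
interface Attempt {
  itemId: string;     // which task was handled
  score: number;      // 0 = wrong, 0.5 = partially correct, 1 = correct
  durationMs: number; // time needed to respond
}

// Performance metric: the quality of responses relative to the objectives.
function averageScore(log: Attempt[]): number {
  if (log.length === 0) return 0;
  return log.reduce((sum, a) => sum + a.score, 0) / log.length;
}

// Effort metric: the quantity of activity, regardless of correctness.
function totalTimeInvestedMs(log: Attempt[]): number {
  return log.reduce((sum, a) => sum + a.durationMs, 0);
}
```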
Integration with SCORM Concepts
The core limitation of previous SCORM-compliant mobile learning solutions was the need to create new learning material. In order to overcome this limitation, it was necessary to identify SCO types that support the characteristics of micro learning and that are already in widespread use by security and defense organizations.
Test items are the only SCO type that supports all the characteristics of micro learning. Test items are questions used in tests, assessments, and exams. Typically, test items are stored in so-called “test item stores” or “question pools.” Tests and exams refer to these question pools for randomizing and personalizing the questions that learners have to solve. The handling and exchange of test items, test item stores, and tests between ADL systems is specified by IMS QTI, which is a valid SCO format within SCORM. The presented solution uses IMS QTI test items outside of an assessment context in order to support individual practice.
All IMS QTI test items have four elements:
- A challenge
- A response definition
- Assessment rules for the response
- Performance feedback.
The challenge is a question that a learner needs to answer; in micro-learning terms, it is the task description. The response definition specifies the interaction rules for providing the answer to the challenge. IMS QTI defines a wide range of interaction types for responding to a challenge.[17] For each interaction type, the specification defines the rules for assessing the learners’ responses, which allows the automatic identification of fully correct, partially correct, and incorrect responses. For the different levels of correctness, IMS QTI allows the provision of predefined multi-media feedback.
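The following sketch condenses these four elements into a single data structure. Real IMS QTI items are XML documents covering many interaction types; this illustration reduces them to choice-like interactions and is not a rendering of the QTI information model.

```typescript
// Condensed, illustrative model of the four test-item elements.
interface TestItem {
  challenge: string;                   // the question: the micro-learning task
  interactionType: "choice" | "order"; // small subset of the QTI interaction types
  options: string[];                   // the response definition for choice interactions
  correctResponse: number[];           // assessment rule: indices of the correct options
  feedback: {                          // predefined feedback per level of correctness
    correct?: string;
    partiallyCorrect?: string;
    incorrect?: string;
  };
}
```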
The Mobler Cards App
This article introduces the Mobler Cards app for the Ilias LMS.[18] The app is a prototype for demonstrating the feasibility of introducing novel mobile learning concepts by building on existing educational material. Mobler Cards is a variation of the flash card learning concept that uses test questions for repetitive practice on smartphones. The unique feature of Mobler Cards is that it synchronizes itself with an LMS while offering all functions regardless of the connectivity of the learners’ devices.
Figure 1: Mobler Cards Core Interaction Flow.
After learners install the app on their smartphone, they connect to the LMS. After authentication, Mobler Cards identifies appropriate learning resources from the courses in which a learner is enrolled. For each of the learners’ courses, the app has two modes: a practice mode and a statistics mode.
The practice mode offers the typical flash card learning experience of a question and an answer extended by the immediate performance assessment and feedback that is required by the micro-learning concept. In order to be able to provide direct feedback on the learning performance, the learners have to show that they are able to answer the question correctly. This is a conceptual change compared to the original flash card learning approach, in which the learners have to imagine the correct response to a challenge while they can easily access the correct answer. To assess the learning performance, Mobler Cards relies on the assessment rules that the LMS provides for the test item. Each response can have three levels of correctness: “excellent,” if the correct answer has been provided; “partially correct,” if some parts of the response were correct; and “wrong,” if the provided response did not match the correct answer at all. Based on these levels, the app calculates and stores a score for the test item. In addition to the calculated feedback, the learners can compare their response with the correct solutions and can access predefined multi-media feedback, if it is available. The Challenge-Response-Feedback loop of Mobler Cards is illustrated in Figure 1.
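A minimal sketch of this three-level assessment is shown below. The actual scoring rules are defined by each test item in the LMS; under that assumption, this version simply maps the overlap between the learner’s response and the correct solution onto the three levels.

```typescript
// Illustrative mapping of a response onto the three levels of correctness.
type Correctness = "excellent" | "partially correct" | "wrong";

function assess(response: number[], correct: number[]): Correctness {
  const hits = response.filter((r) => correct.includes(r)).length;
  const misses = response.length - hits;
  if (hits === correct.length && misses === 0) return "excellent";
  if (hits > 0) return "partially correct";
  return "wrong";
}
```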
The minimalistic concept of micro learning helps learners to directly analyze their learning performance through immediate task-related feedback. However, this approach does not allow learners to relate to the overarching learning objectives. Mobler Cards’ statistics mode serves this purpose and allows the learners to analyze their performances at the course level. Four analytical measures are provided to the learners: the number of questions handled during a twenty-four-hour period; the average score that has been achieved during the same period; the progress toward answering all questions correctly; and the average time for answering each question. The difference between the average score and the progress is that the average score includes partially correct answers as well as fully correct answers, while the progress measure includes only fully correct responses. In addition to these performance-based learning analytics, the app offers two learning badges that are based on the effort of learners using the app. The first learning badge indicates that the learner handled all available questions for a course. The second badge is awarded after the learner answers a large number of questions in one sitting. For both learning badges, the performance score is irrelevant. Figure 2 shows the Mobler Cards’ interfaces for performance metrics and learning badges.
Figure 2: Mobler Cards Interfaces for Performance Statistics and Learning Badges.
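The four course-level measures can be derived from the locally stored log of attempts, as the following sketch illustrates. The record shape and the formulas are illustrative assumptions rather than the app’s published implementation.

```typescript
// Hypothetical attempt record stored by the app for each handled question.
interface AttemptRecord {
  itemId: string;     // which question was handled
  score: number;      // 0 = wrong, 0.5 = partially correct, 1 = fully correct
  durationMs: number; // time needed to answer
  timestamp: number;  // when the attempt was made (ms since epoch)
}

const DAY_MS = 24 * 60 * 60 * 1000;

function courseStatistics(log: AttemptRecord[], totalQuestions: number, now: number) {
  // Three of the measures cover only the last twenty-four hours.
  const recent = log.filter((a) => now - a.timestamp <= DAY_MS);
  const handled = recent.length;
  const avgScore =
    handled > 0 ? recent.reduce((s, a) => s + a.score, 0) / handled : 0;
  const avgTimeMs =
    handled > 0 ? recent.reduce((s, a) => s + a.durationMs, 0) / handled : 0;
  // Progress counts only questions answered fully correctly at least once.
  const solved = new Set(log.filter((a) => a.score === 1).map((a) => a.itemId));
  const progress = solved.size / totalQuestions;
  return { handled, avgScore, progress, avgTimeMs };
}
```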
Mobler Cards is loosely connected to an LMS. The app works independently of the normal interaction patterns implemented by the LMS, but it authenticates learners against the LMS. For authenticated learners, the app synchronizes educational resources and activity monitoring data with the LMS, which allows course moderators to monitor the activities of mobile learners. The loose integration creates a degree of platform and system independence, so learners can configure the app to use their preferred LMS. In order to achieve this flexibility, Mobler Cards relies on three Web services and open interoperability standards. The first service is the authentication service. It is based on the OAuth protocol, which allows secure authentication and session validation without transmitting passwords or relying on constant session keys. The second service is the question pool service, which selects the test item pools for the courses of the authenticated learner. This service provides access only to information that is accessible to the learners, and it sends data using the IMS QTI Information Model. Finally, the experience tracking service collects data about the learning performance. This service accepts and stores monitoring data from the Mobler Cards app in compliance with the Experience API.[19] The system architecture of Mobler Cards is shown in Figure 3.
Figure 3: Mobler Cards Service Architecture.
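As an illustration of the experience tracking service, the sketch below submits one answer as an Experience API statement. The endpoint URL and the learner identifier are placeholders, and the statement layout follows the publicly released Experience API documents; details may differ from the draft version cited above.

```typescript
// Hedged sketch of sending one monitoring record as an Experience API statement.
async function trackAnswer(itemId: string, scaledScore: number): Promise<void> {
  const statement = {
    actor: { objectType: "Agent", mbox: "mailto:learner@example.org" }, // placeholder
    verb: {
      id: "http://adlnet.gov/expapi/verbs/answered",
      display: { "en-US": "answered" },
    },
    object: {
      objectType: "Activity",
      id: `http://lms.example.org/questions/${itemId}`, // placeholder activity id
    },
    result: { score: { scaled: scaledScore } },
  };
  // An Authorization header (e.g. OAuth-based) would be required in practice.
  await fetch("https://lms.example.org/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",
    },
    body: JSON.stringify(statement),
  });
}
```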
Application Design Requirements
Mobler Cards is designed to complement existing Web-based ADL courses with exercises for repetitive practice. Three core requirements were considered for the app. These requirements are key for scaling up mobile learning in security and defense organizations.
First, the app needs to be integrated with the underlying LMS. Besides avoiding the hosting and maintenance of additional systems and infrastructure, this automatically integrates mobile learning into existing education and training programs. Furthermore, the reuse of existing infrastructure allows the utilization of existing SCORM-compliant learning material whenever possible, without additional overhead. This requirement also ensures that no mobile learning activities are disconnected from other course activities.
Second, the app has to minimize the overhead for content authors. This lowers a significant barrier for scaling mobile learning in security and defense organizations by enabling content authors to use their knowledge of Web-based courses and Web-based assessment. This can be achieved by reusing the authoring capabilities of the LMS for content creation. A side effect is that appropriate components of existing SCORM packages can be repurposed for mobile learning. Being able to use existing learning material for initial courses can significantly reduce the barriers to providing mobile learning offerings on a broad scale. Instead of requiring the creation of entirely new educational resources, this approach relies on adapting existing educational material.
Third, the app has to provide full flexibility for mobile learners in order to support the continuity of learning. From the learners’ perspective, an appropriate solution enables learning in suitable moments as they occur. These learning opportunities can vary in their duration and context. These opportunities include moments such as waiting for a bus or commuting on the train. This requirement also considers the issue of Internet connectivity as a factor in the learning experience. This means that learners should be able to access the learning material during extensive offline phases as well as when they are fully connected.
Mobler Cards optimizes the time frame available for learning by hiding most administrative tasks, including authentication, data synchronization, and course navigation, from the learners. Furthermore, Mobler Cards allows the learners to access supportive features such as learning statistics at any time. This requires that all functions be implemented in the app itself rather than provided by the LMS.
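This offline capability can be pictured as a simple store-and-forward queue: results are kept on the device and flushed to the LMS whenever connectivity returns. The storage key and the delivery callback below are hypothetical placeholders, not the app’s actual internals.

```typescript
const QUEUE_KEY = "mobler.pendingResults"; // hypothetical local storage key

// Record a result locally; this works regardless of connectivity.
function enqueueResult(result: object): void {
  const queue: object[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
  queue.push(result);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

// Flush the queue to the LMS; stop at the first failure and retry later.
async function flushQueue(deliver: (r: object) => Promise<void>): Promise<void> {
  const queue: object[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
  while (queue.length > 0) {
    try {
      await deliver(queue[0]); // oldest entry first
    } catch {
      break; // still offline: keep the remaining entries for later
    }
    queue.shift(); // drop the entry only after successful delivery
    localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
  }
}

// Flush whenever the device regains connectivity.
window.addEventListener("online", () => {
  void flushQueue(async (entry) => {
    console.log("delivering", entry); // placeholder for the actual upload
  });
});
```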
Proof of Concept
Mobler Cards has been used with the PFPC LMS at the ISN Zürich. The proof of concept addressed the validity of the first two requirements under real-world conditions and analyzed the feasibility of reusing existing educational material as well as the implications of adapting resources for mobile learning. The present study is based on two courses: “Introduction to NATO” and “Building Defense Organizations.” Both courses are available as SCORM packages that contain primarily text resources and test items.
The proof of concept was intended to provide insights for extending SCORM content with mobile learning features. It was separated into two parts: the first part analyzed whether existing test item stores can be used by the app; the second part analyzed the procedures for transforming SCORM content so that it can be used with Mobler Cards.
The “Building Defense Organizations” course is used to teach basic knowledge about organizational structures and management strategies. The core course structure was built with the Ilias SCORM editor and extended with IMS QTI-compliant tests that use a test item store. The test item store for this course consists only of test items of the types “multiple choice,” “single choice,” and “sorting.” This allowed the test item store to be used directly in Mobler Cards without modifications.
This part of the proof of concept reused existing learning material for mobile learning. Although all test items were successfully displayed within Mobler Cards, the analysis identified differences in the minimal design requirements between test items designed only for assessment and those that can be used for repetitive practice. The following content-related problems were identified:
- The main information of the test item is contained within the challenge or question
- Answering the questions correctly is relatively easy even with no or limited knowledge of the subject
- The feedback is not related to the subject of the question but to the performance of the learner
- Several test items treated multiple aspects within the same test item
- Long answer options consisting of similar lists of terms.
The first three problems relate to the difference between the intended use and the new application scenario. In the original scenario, learners have only a single chance to respond to a test item, and the test was intended to identify whether a learner had mastered the main text resources of the course. Therefore, the test items were not very difficult, and where additional feedback was present, it simply referred to the correctness of the response. The new application scenario repeatedly confronts the learner with the same question and allows learners to repeat test items as often as they wish. This provides the opportunity to present more challenging test items and feedback that relates to the subject matter of the course.
The last two problems are related to the display size and the mode of interaction. In conventional Web-based training scenarios on desktop computers it is relatively easy to read and distinguish between complex questions and answer options. Complex question-answer settings typically benefit from the simultaneous availability of complete information. In mobile learning scenarios, the screen real estate is far more limited, and learners typically cannot access information in the question and from the answer options at the same time. This problem increases with the complexity of the test item: the longer a question gets, the harder it is for learners to memorize it correctly while responding.
The “Introduction to NATO” course provides a general overview of the history and structure of NATO. The course was created with the SCORM editor of the Ilias LMS. The test items for this course were either single- or multiple-choice items, but they were embedded as interactive text. Therefore, this course had no test item store that could be used by Mobler Cards.
This part of the proof of concept addressed strategies for transforming existing and creating additional educational resources for mobile learning. This process included the following steps:
- Transforming existing exercises into test items in a course-wide test item store. This step separates text material from test items. Many SCORM-compliant authoring solutions treat test items simply as interactive text content and not as a different resource type. Therefore, it is necessary to separate the different types of SCOs.
- Extending the test item store with additional test items within the same logic as the existing exercises. This step extends the pool of test items by creating variations of the original test items based on the subject matter information in the available text resources. The objective of this step is to stimulate the learners’ attention rather than encouraging the memorization of answer patterns.
- Identifying gaps in the exercises with respect to the covered subject matter and creating new test items. This step searches the existing test items and text resources for gaps; new test items were created whenever a gap was identified. The objective of this step is to cover all learning objectives with test items.
- Creating appropriate feedback and enriching test items. The final step focused on providing meaningful information to the learners that supports the answering of a test item. For incorrect or partially correct answers, the relevant passage of the text resource was provided to the learner. For correct answers, additional background information related to the test item was included.
Conclusions and Implications for Practice
The objective of the presented study was to analyze the constraints of reusing SCORM-compliant educational resources for mobile learning. This approach is considered appropriate for lowering the barriers to mobile learning in security and defense organizations because it allows them to integrate mobile learning directly into their existing ADL and blended learning strategies. Prior research suggested a direct translation of the SCORM concepts of Web-based training into the mobile realm. This project instead analyzed the implications of transforming conventional Web-based training material to support mobile learners. Rather than treating mobile learning as a technical problem of content delivery, this contribution analyzed the instructional design underpinnings of mobile learning and grounded the technology development on these insights. The result is the Mobler Cards app, which applies the concept of micro learning to the test item stores of SCORM modules to support individual practice.
Although the initial proof of concept showed the applicability of the solution in a real-world environment, it also indicated conceptual differences between testing for assessment and testing for practice. This article analyzed the required steps for adapting and extending the available SCOs for mobile learning without creating new requirements for the structure of educational resources. This can be considered to lower the barrier for adopting mobile learning in security and defense organizations because the demand for the creation of new content is relatively low.
About the author: Dr. Christian Glahn – see page 1.
[1] Reinhard Oppermann and Marcus Specht, “Adaptive Mobile Museum Guide for Information and Learning on Demand,” in Workshop on Interactive Applications of Mobile Computing (IMC 98, 1–5 November 1998, Rostock, Germany). Elliot Soloway and Cathleen A. Norris, “Using Technology to Address Old Problems in New Ways,” Communications of the ACM 41:8 (1998): 11–18.
[2] “Infanterist der Zukunft,” Deutsches Heeresamt, Infanterieschule (2012); available at http://bit.ly/14DDVsq.
[3] “Rheinmetall and SAAB: Creating Network-Enabled Warfighters,” Rheinmetall Defense, Press release, 30 May 2012; available at http://www.rheinmetall-defence.com/de/media/editor_media/rm_defence/publicrelations/pressemitteilungen/2013_1/2013-04-09_LAAD_2013_ISSP.pdf.
[4] Advanced Distributed Learning (ADL) Initiative, Sharable Content Object Reference Model (SCORM) 2004, 4th Edition Run-Time Environment (RTE), Version 1.1 (Alexandria, VA: ADL Initiative, 2004).
[5] Wilbert Kraan, Steve Lay, and Pierre Gorissen, IMS Question & Test Interoperability Assessment Test, Section and Item Information Model, Final 2.1 (2012); available at www.imsglobal.org/question/#version2.1.
[6] Fernando Mikic, Luis Anido, Enrique Valero, and Juan Picos, “Accessibility and Mobile Learning Standardization, Introducing Some Ideas About the Device Profile (DP),” in Second International Conference on Systems (ICONS’07), 32; Maia Zaharieva and Wolfgang Klas, “MobiLearn: An Open Approach for Structuring Content for Mobile Learning Environments,” in Web Information Systems, ed. Christoph Bussler, Suk-ki Hong, and Woochun Jun (Berlin, Heidelberg: Springer Verlag, 2004), 114–24; R. Yu-Liang Ting, “Mobile Learning: Current Trend and Future Challenges,” in Fifth IEEE International Conference on Advanced Learning Technologies (ICALT’05) (2005): 603–7.
[7] Kiyoshi Nakabayashi, Takahide Hoshide, Masanobu Hosokawa, Taichi Kawakami, and Kazuo Sato, “Design and Implementation of a Mobile Learning Environment as an Extension of SCORM 2004 Specifications,” Seventh IEEE International Conference on Advanced Learning Technologies (ICALT 2007) (July 2007): 369–373.
[8] Asi Degani, Geoff Martin, Geoff Stead, and Frances Wade, Mobile Learning Shareable Content Object Reference Model (m-SCORM) Limitations and Challenges, [N09-35] (Cambridge, U.K.: 2010); available at www.m-learning.org/images/stories/MobScorm.pdf.
[9] Mike Sharples, Josie Taylor, and Giasemi Vavoula, “A Theory of Learning for the Mobile Age,” in The Sage Handbook of E-learning Research, edited by Richard Andrews and Caroline Haythornthwaite (London: Sage Publications, 2007), 221–47; available at www.lsri.nottingham.ac.uk/msh/Papers/Theory of Mobile Learning.pdf.
[10] Alex Sbardella, “Ten Tips for Mobile UX,” Red Ant Blog; available at https://www.redant.com/articles/ten-tips-for-mobile-ux.
[11] Gerhard Gassler, Theo Hug, and Christian Glahn, “Integrated Micro Learning – An Outline of the Basic Method and First Results,” in International Conference on Interactive Computer Aided Learning (ICL), ed. Michael E. Auer and Ursula Auer (Villach, Austria, 29 September – 1 October 2004) (CD-ROM).
[12] Deborah L. Butler and Philip H. Winne, “Feedback and Self-Regulated Learning: A Theoretical Synthesis,” Review of Educational Research 65:3 (1995): 254–81.
[13] Judy Kay, “Learner Know Thyself: Student Models to Give Learner Control and Responsibility,” in Control and Responsibility: International Conference on Computers in Education (AACE, 1997), 17–24.
[14] Christian Glahn, Marcus Specht, and Rob Koper, “Smart Indicators on Learning Interactions,” in Creating New Learning Experiences on a Global Scale: LNCS 4753. Second European Conference on Technology Enhanced Learning (EC-TEL 2007), ed. Erik Duval, Ralf Klamma, and Martin Wolpers (Berlin, Heidelberg: Springer Verlag, 2007), 56–70.
[15] Butler and Winne, “Feedback and Self-Regulated Learning: A Theoretical Synthesis.”
[16] Dominique Verpoorten, Christian Glahn, Milos Kravcik, et al., “Personalisation of Learning in Virtual Learning Environments,” in Learning in the Synergy of Multiple Disciplines, ed. Ulrike Cress, Vania Dimitrova, and Marcus Specht (Berlin, Heidelberg: Springer Verlag, 2009), 52–66.
[17] Kraan, Lay, and Gorissen, IMS Question & Test Interoperability Assessment Test.
[18] “Ilias Open Source e-Learning,” www.ilias.de.
[19] Advanced Distributed Learning (ADL) Initiative, Experience API. Draft Specification (19 October 2012); available at http://cdn3.tincanapi.com/wp-content/assets/spec/Tin-Can-API-Releasev095.pdf.