The decade-old SCORM and AICC specs are set to change completely: ADL is working on Next Generation SCORM, and the Experience API (popularly known as the Tin Can API) is its first step in that direction. AICC is working on CMI 5 with objectives similar to ADL's. I have been following the evolution of the SCORM and AICC specifications from their initial versions, but I have never been as excited about what is happening as I am now!
The Tin Can API specification is quickly taking shape, and once it is formalized it will have a profound impact on the way we think of eLearning. The Tin Can API will not give us a ready solution or a detailed specification of a next-generation eLearning platform; rather, it will give us a new way of thinking on which innovative solutions can be built to make learning more effective. It will remove most of the restrictions imposed by current specifications and give us a foundation to build upon.
AICC is also planning to use the Tin Can API as the foundation for achieving the goals of CMI 5 (the next version of the AICC spec). This would be a welcome change, as the industry would then need to deal with only one interoperability model. It is particularly important for content creators and platform developers, who would no longer need to worry about compliance with two different standards.
My intention is not to introduce the Tin Can API or cover its basic concepts; please read an overview here or dive deep here. I would rather present the key benefits and opportunities it will offer. In my opinion, the following are the important benefits (in no particular order) of this paradigm shift:
Decoupling of Content from LMS
- Tracking can also be decoupled through integration with an external LRS (Learning Record Store). LMS vendors can also build their own LRS for tighter integration while still collecting data from other sources and LRSs.
- This paradigm shift will enable distributed systems and help us leverage the true benefits of SOA and the cloud.
Freedom for Content Creators
- As the Tin Can API will recognize and track a wide range of learning activities, including non-web-based content, it will present huge opportunities for content creators. It will enable them to use content of various types and forms, from various sources, in online or offline mode, and it will support tracking of content-defined data. The Tin Can API recognizes that learning is a continuous process that cannot be confined to structured courses delivered through web-based systems.
- Content built with any technology capable of consuming web services will work. Device-specific features and device-optimized user experiences (particularly on mobile devices) can be used effectively. Real-life experiences can also be tracked with simple calls to the LRS (using plain text statements or even binary data such as audio/video clips).
- Hybrid learning paths can be easily built by mixing content of various types – traditional learning, mobile learning, simulations, virtual sessions, serious games, real life experiences, collaborative learning, social-learning interactions, offline learning, etc.
- Compared to SCORM and AICC, Tin Can will give you better control of your content: you no longer need everything in a web-compatible form, there is no need to distribute content packages as zip files, and there are no cross-domain limitations.
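To make the "simple calls to the LRS" idea concrete, here is a minimal Python sketch of a plain actor-verb-object statement for an offline, real-life activity that SCORM could not track. The LRS endpoint, learner, and activity identifiers are hypothetical; the verb URI follows ADL's published verb vocabulary.

```python
import json

# Hypothetical LRS statements endpoint (illustration only).
LRS_ENDPOINT = "https://lrs.example.com/statements"

def build_statement(actor_email, actor_name, verb_id, verb_display,
                    activity_id, activity_name):
    """Assemble an actor-verb-object statement as plain JSON-serializable data."""
    return {
        "actor": {"mbox": f"mailto:{actor_email}", "name": actor_name},
        "verb": {"id": verb_id, "display": {"en-US": verb_display}},
        "object": {
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

# Example: recording that a learner attended a field workshop,
# an offline activity with no web-based content at all.
stmt = build_statement(
    "jane@example.com", "Jane Doe",
    "http://adlnet.gov/expapi/verbs/attended", "attended",
    "http://example.com/activities/field-workshop", "Field Workshop",
)
payload = json.dumps(stmt)

# To record the experience, the app (web, mobile, or otherwise) would
# POST `payload` to the LRS with the X-Experience-API-Version header set.
```

Note that nothing here depends on a browser, a zip package, or an LMS launch context; any client that can issue an HTTP request can report the experience.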
Better Learning Experiences through Improved Analytics
- Today we live in a data-rich world. Many businesses use the data collected from users' activity streams to enhance their products and services, or to enhance users' experience of those products and services. The same can certainly be done for learners, and Tin Can will help enable it. It will provide a means of tracking data that is as fine-grained or as coarse as you want. The current SCORM and AICC specs limit this: the data model is pre-defined and support for user-defined variables is limited.
- The Tin Can API will allow saving almost everything to the LRS (including multiple attempts, run-time data, etc.). Wisely formed queries on actor, verb, and/or object can surface interesting information and provide a whole new level of reporting detail. Even a user's raw activity feed will reveal useful information if the statements are formed appropriately.
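The kind of actor/verb/object query described above can be sketched as a filter over a raw activity feed. The sample statements below are invented for illustration; a real LRS would expose equivalent filters as query parameters on its statements resource.

```python
# An invented raw activity feed of actor-verb-object statements.
feed = [
    {"actor": {"mbox": "mailto:jane@example.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
     "object": {"id": "http://example.com/activities/module-1"}},
    {"actor": {"mbox": "mailto:jane@example.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/failed"},
     "object": {"id": "http://example.com/activities/quiz-1"}},
    {"actor": {"mbox": "mailto:raj@example.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
     "object": {"id": "http://example.com/activities/module-1"}},
]

def query(statements, actor_mbox=None, verb_id=None):
    """Return statements matching the given actor and/or verb filters."""
    return [
        s for s in statements
        if (actor_mbox is None or s["actor"]["mbox"] == actor_mbox)
        and (verb_id is None or s["verb"]["id"] == verb_id)
    ]

# Everything Jane did, regardless of activity type or delivery mode:
janes_feed = query(feed, actor_mbox="mailto:jane@example.com")

# Everyone who completed something, across all content sources:
completions = query(feed, verb_id="http://adlnet.gov/expapi/verbs/completed")
```

Because every statement shares the same actor-verb-object shape, the same two-line filter works across traditional courses, mobile content, games, and real-life experiences alike.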
I believe this paradigm shift will open many opportunities in the future, going well beyond the Tin Can API itself. I understand that there are some gaps and work-in-progress items in the current version (0.95) of the Tin Can spec, and the ADL folks are working to resolve most of them. I prefer to stay optimistic and wait until ADL publishes the final version of the Experience API, likely early next year. One thing is clear, however: the eLearning industry is going through an important transformation, powered by newer and lighter technologies!