Why SCORM 2004 failed & what that means for Tin Can

“SCORM 2004 is dying (if not already dead!).” Now that might seem like a strong statement but it’s the sad truth. For the careful observer there are many signs to support this view, and here are a few of them:

Sign #1: 75% of packages are still on SCORM 1.2, 10 years after the initial release of SCORM 2004 [1] [2]


Sign #2: There is no certification process for tools and packages for the latest SCORM 2004 4th Edition, even though several years have passed since its release. Currently, someone can be a 4th Edition adopter but *not* certified. [3]

Sign #3: ADL itself heavily supports Tin Can as the successor to SCORM. [4]

In essence, SCORM 2004 always lived in the shadow of SCORM 1.2. Now, with the introduction of Tin Can API it seems certain that its adoption rate will decline even further.

Reasons SCORM 2004 Failed

There are a multitude of reasons why SCORM 2004 failed. Here are the most prominent (and yes, we refer to SCORM 2004 in the past tense quite deliberately):


High complexity

The major contribution of SCORM 2004 was the “simple sequencing” model. In fact, it was anything but simple. It was a lot of work for LMS vendors to implement and, more importantly, it was too complex for many courseware developers to use. Even the simplest sequencing required a room full of flow-chart diagrams and dozens of field settings – and even then you needed to be an expert to understand what it was actually doing.

The sad fact is that SCORM 2004 offered some nice extensions over SCORM 1.2 that generally made sense, but they were hidden under the sequencing-model nightmare.

For example, a major problem with SCORM 1.2 was that when you took a SCORM quiz there was no way for the LMS to know what the actual questions were. You could access the type of question, the correct response, the student response and the score – but not the question itself. This is one of the areas where SCORM 2004 was profoundly better than SCORM 1.2: it included a full-text question description and descriptive identifiers for answers. This meant you could do some effective reporting on questions and the distribution of answers. It was a dramatic improvement, but only a few took notice.
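To make the difference concrete, here is a sketch (in Python, with hypothetical question and answer values) of the interaction data each version's CMI data model exposes for a single quiz question. The key names mirror the CMI data model; note that only the SCORM 2004 shape carries the question text, in its `description` field.

```python
# SCORM 1.2: the LMS sees the question's type and responses, but never its text.
scorm_12_interaction = {
    "cmi.interactions.0.id": "q1",
    "cmi.interactions.0.type": "choice",
    "cmi.interactions.0.correct_responses.0.pattern": "b",
    "cmi.interactions.0.student_response": "c",
    "cmi.interactions.0.result": "wrong",
    # No field exists for the question text itself.
}

# SCORM 2004: the same interaction, now with a human-readable description.
scorm_2004_interaction = {
    "cmi.interactions.0.id": "q1",
    "cmi.interactions.0.type": "choice",
    "cmi.interactions.0.description": "Which planet is closest to the sun?",  # hypothetical question
    "cmi.interactions.0.correct_responses.0.pattern": "b",
    "cmi.interactions.0.learner_response": "c",
    "cmi.interactions.0.result": "incorrect",
}
```

With the 2004 shape, a report can show the actual question next to the answer distribution instead of an opaque identifier.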

Low adoption

This was a side-effect of the high complexity. Pedagogically, SCORM 2004 offered important new opportunities, but at a disproportionate cost. In other words, the added benefits of the standard were outweighed by its complexity. The end result was low adoption by vendors and instructional designers.

Even when vendors offered support for SCORM 2004, that support was handicapped to a great extent. For example, many of the rapid elearning tools available for creating courses do not make it easy to build anything beyond basic SCORM 2004 output. Almost none of them offer an interface for creating a dynamically sequenced multi-SCO package.

Technology shift

10 years is a long time. Since SCORM 2004 was introduced, new technologies have come and gone, smartphones have become mainstream, gamification has been introduced, and Cloud and lean solutions are hot topics. We are living in a much different, and more connected, world, yet SCORM is still an isolated, browser-based, LMS-centered standard. SCORM 2004 had to change in order to adapt to such a dramatically different environment; rather than do that, ADL decided to save itself the trouble and start from scratch with what we now know as Tin Can, or the Experience API (xAPI).

On not being pragmatic

When SCORM was first introduced it answered a real-world problem: the standardization of learning packaging and delivery. And it succeeded, because until that point in time there was no adequate way to do that job. SCORM 2004, on the other hand, tried to address less obvious problems. It had a higher vision and tried to allow, or even enforce, sound pedagogical concepts, but it proved less pragmatic.

There is one important real-world issue that SCORM 2004 deliberately avoided dealing with and that is making SCORM a concrete standard. SCORM is a reference model and not a true standard – you can’t plug this into a wall and have everyone work the same way. There is still too much variation in how compliant LMSs implement UIs associated with the SCORM engine. Will content be loaded in a new window? A frameset? How large a window? How will the table of contents be presented? What navigation request does closing the browser imply? Content authors should be able to rely on a consistent set of UI expectations.

Lessons to be learned and the Tin Can future

Tin Can is trying to succeed where SCORM 2004 failed. Nothing is perfect though as ADL admits [5]. Ongoing compromises are not a bad thing per se, but can certainly be tricky.[6]

Simplicity matters

It seems that the need for simplicity is something Tin Can embraces. Simplicity drives adoption, and without adoption no standard can succeed. In essence, Tin Can is much simpler than SCORM 2004, and even simpler than SCORM 1.2. (Still, in the latest 0.95 and the upcoming version 1 of the standard, some elements of complexity emerge, such as support for multiple languages; this is good in principle but comes at the cost of higher complexity for technology providers.)
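The multiple-language support mentioned above takes the form of "language maps": dictionaries keyed by language tag. A minimal sketch, assuming a verb from ADL's shared vocabulary; the fallback logic in `display_for` is our own illustration, not something the spec mandates:

```python
# A verb whose display text is a language map, keyed by language tag.
verb = {
    "id": "http://adlnet.gov/expapi/verbs/completed",
    "display": {
        "en-US": "completed",
        "fr-FR": "a terminé",
        "el-GR": "ολοκλήρωσε",
    },
}

def display_for(verb, lang, fallback="en-US"):
    """Pick the display string for a requested language tag,
    falling back to a default tag when no match exists."""
    return verb["display"].get(lang, verb["display"][fallback])
```

It is this kind of per-field map that every LRS and reporting tool now has to handle, which is where the extra burden on technology providers comes from.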

Technology-shifts can render you irrelevant

Another important characteristic of Tin Can is that it is actually technologically ‘agnostic’. It can be used inside the LMS, outside the LMS, embedded in a mobile phone or in a videogame. That provides some assurance against technology-shifts and opens new possibilities for capturing interesting learning interactions from informal activities.

Ongoing project support is important

An interesting decision regarding Tin Can is that ADL hired a company, Rustici Software, to drive the Tin Can project. What this means for the future of the standard is not yet clear; for now, though, the marketing and support effort is much improved.

Freedom and Standardization are opposite forces

Unfortunately, Tin Can does not help on the path towards standardization. It actually offers even more freedom to content creators, for example by letting them define their own verbs for use in statements. Interoperability of content between LMSs is somewhat improved thanks to the simpler messaging system and the absence of JavaScript; standardization of presentation, however, will not benefit from Tin Can as it is shaped today.
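To illustrate the trade-off: both verbs below are equally valid in a statement, but only the first comes from ADL's shared vocabulary, so a generic reporting tool has a fighting chance of understanding it. The second is the kind of verb a content creator is free to coin; the example.com IRI is hypothetical.

```python
# A verb from ADL's shared vocabulary, widely understood by LRSs and tools:
standard_verb = {
    "id": "http://adlnet.gov/expapi/verbs/completed",
    "display": {"en-US": "completed"},
}

# A custom verb coined by a content creator; perfectly legal,
# but meaningless to any tool that hasn't been told about it:
custom_verb = {
    "id": "http://example.com/xapi/verbs/mastered",
    "display": {"en-US": "mastered"},
}
```

Freedom for authors, in other words, comes at the price of shared meaning across systems.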

Tin Can chose freedom over standardization. It remains to be seen if this was a good move.

Reporting is critical for eLearning

The need for reporting is one of the main driving forces behind eLearning. Without reporting you cannot calculate the ROI (return on investment) of your learning activities. Reporting was not a favorite topic of SCORM, but it is at the core of Tin Can. In principle, Tin Can is built around descriptors of actions ('training evidence') that can be translated into better reports. Still, the reporting itself depends on each vendor's interpretation. Also, for reporting to be useful it may help to merge statements into higher-level descriptors. For example, Tin Can can report on what you experienced or completed, but those are low-level statements that cannot easily be rendered into something like "George is good at mathematics".
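As a sketch of what such merging might look like: a toy aggregation that collapses low-level score statements into a single higher-level judgement. The statements, names, scores, and the 0.75 threshold are all invented for illustration; nothing here is prescribed by the specification.

```python
from statistics import mean

# Hypothetical low-level statements, as if fetched from an LRS.
statements = [
    {"actor": "George", "verb": "answered", "object": "math-q1",
     "result": {"score": {"scaled": 0.9}}},
    {"actor": "George", "verb": "answered", "object": "math-q2",
     "result": {"score": {"scaled": 0.8}}},
]

def competence(statements, actor, threshold=0.75):
    """Collapse an actor's raw scaled scores into one higher-level
    yes/no judgement ("is good at this topic")."""
    scores = [s["result"]["score"]["scaled"]
              for s in statements if s["actor"] == actor]
    return mean(scores) >= threshold
```

Every vendor currently rolls its own version of this step, which is exactly why reports differ so much from LRS to LRS.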

Summing up

To say that SCORM 2004 failed because it was too complex is an over-simplification. There were a number of forces that led to this outcome. Tin Can tries to succeed where SCORM 2004 failed by addressing several but not all of the ongoing issues. It also comes with a fresh view on the technology landscape.

It seems that the compromises were calculated ones, made to keep the standard simple, but we anticipate that in the near future Tin Can will introduce several new elements in favor of standardization. Hopefully its simplicity won't suffer too much in the process. It is still early, but a good way to introduce standardization might be through a new standard that builds on top of Tin Can (and thus does not make it more complex) and addresses visual and reporting concerns. Let's call it "Tin-Can X".


  1. http://scorm.com/blog/2011/08/scorm-stats-then-and-now/
  2. http://scorm.com/scorm-stats/
  3. http://www.adlnet.gov/scorm/scorm-certification
  4. http://www.adlnet.gov/the-definite-indefinite-future-of-scorm
  5. http://scorm.com/project-tin-can-phase-3-known-weaknesses/
  6. http://dspace.dial.pipex.com/town/street/pl38/comp.htm


What is Tin Can?

The Tin Can API is a brand new learning technology specification that opens up an entire world of experiences (online and offline). This API captures the activities that happen as part of learning experiences. A wide range of systems can now securely communicate with a simple vocabulary that captures this stream of activities. Previous specifications were difficult and had limitations whereas the Tin Can API is simple and flexible, and lifts many of the older restrictions. Mobile learning, simulations, virtual worlds, serious games, real-world activities, experiential learning, social learning, offline learning, and collaborative learning are just some of the things that can now be recognized and communicated well with the Tin Can API. What’s more, the Tin Can API is community-driven, and free to implement.  (TinCanAPI.com)

ADL (the keepers of SCORM) is the steward of this new specification, also known as "the next generation of SCORM".

Up until now, SCORM has been the most widely used elearning standard, but it falls short when it comes to capturing the entire picture of elearning. If an LMS is SCORM conformant it can play any SCORM content, and vice versa (any SCORM content can be played in any SCORM-conformant LMS). However, learning happens everywhere, not just in traditional SCORM courses inside traditional LMSs. The Tin Can API gives you the ability to see the whole picture and lets you record any learning experience, wherever and however it happens.

“SCORM has served us well, but it really doesn’t capture the entire picture of e-learning.” ~ TinCanAPI.com

What are the differences between SCORM & the Tin Can API?

Both SCORM and the Tin Can API let you track completion, track time, track pass/fail, and report a single score. The Tin Can API, however, allows so much more: multiple scores; detailed test results; solid security; no LMS required; no internet browser required; complete control over your content; no cross-domain limitation; mobile apps for learning; platform transitions (e.g. computer to mobile); and tracking of serious games, simulations, informal learning, real-world performance, offline learning, interactive learning, adaptive learning, blended learning, long-term learning, and team-based learning.
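For a taste of the richer results, here is roughly what a Tin Can `result` object can carry for a single attempt. The score fields, `success`, `completion`, and the ISO 8601 `duration` come from the specification; the extension IRI and all the values are hypothetical.

```python
# A single attempt's result, going well beyond SCORM 1.2's lone score.
result = {
    "score": {
        "scaled": 0.85,  # normalized score in [-1, 1]
        "raw": 85,
        "min": 0,
        "max": 100,
    },
    "success": True,       # pass/fail
    "completion": True,    # done or not
    "duration": "PT25M",   # ISO 8601 duration: 25 minutes
    # Extra measurements go in extensions, keyed by IRI (hypothetical here):
    "extensions": {
        "http://example.com/xapi/accuracy": 0.91,
    },
}
```

Because `extensions` can hold arbitrary IRI-keyed data, a single statement can report as many scores and details as the activity produces.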

Who’s using the Tin Can API?

Providers have come to realize that learning experiences happen everywhere (not just in the LMS), and Tin Can adopters want to give their users the ability to easily track these experiences. Many elearning providers have already adopted the new specification (including TalentLMS, and very shortly eFront Learning too); a full list of adopters is maintained at TinCanAPI.com.

For more on the Tin Can API, please read the next post, "TinCan Demystified".

TinCan Demystified

If you are at all interested in eLearning, then unless you have been living on a deserted island for the last year you have probably already heard about the TinCan project. TinCan is heavily promoted as the successor to SCORM and was designed to fix many things that were lacking in the previous standard. In this post we discuss what TinCan really is and how it compares to SCORM.

The TinCan API resulted from several deliberations on SCORM 2.0 over the last five years. The standard is developed by Rustici Software, but ADL remains the steward of the specification, just as with SCORM. The Tin Can API is community-driven, and free to implement.

At its core, TinCan is a messaging system. You collect messages, in the form of JSON statements, about what your learners are achieving while learning, playing, or interacting with other people. Those statements are stored in what TinCan calls an LRS (Learning Record Store). The LRS can be either standalone or part of an LMS. The standard doesn't touch on how you go about translating those messages into something useful. In its simplest form, you simply present the statements in the form "noun, verb, object", or "I did this". It is entirely up to the LRS developers to make use of this data in any other way they see fit.
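A minimal sketch of such a statement, and of the trivial "noun, verb, object" rendering, might look like this; the actor and the activity IRI are hypothetical, while the verb comes from ADL's shared vocabulary:

```python
# A minimal "I did this" statement in the TinCan/xAPI shape.
statement = {
    "actor": {"mbox": "mailto:george@example.com", "name": "George"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/courses/algebra-101",
               "definition": {"name": {"en-US": "Algebra 101"}}},
}

def to_sentence(stmt):
    """Render a statement in the human-readable 'noun, verb, object' form."""
    return " ".join([stmt["actor"]["name"],
                     stmt["verb"]["display"]["en-US"],
                     stmt["object"]["definition"]["name"]["en-US"]])
```

That is the whole trick: everything else (storage, querying, reporting) is built around streams of these small JSON documents.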

Compared to TinCan, SCORM was a very complex standard. It took our team around 8 months to build a SCORM 1.2 engine, and more than 16 months for SCORM 2004 4th Edition. By contrast, we spent only 1 month completing a basic TinCan implementation for use with eFront and TalentLMS. Perceived simplicity is a core ingredient of the new offering and a major adoption point for LMS and authoring-tool developers.

A nice side-effect of the messaging system is that any enabled device or program can send Tin Can API statements (mobile phones, simulations, games, real-world activities, etc.). By contrast, SCORM was browser- and LMS-based only. As the TinCan project puts it: “People learn from interactions with other people, content, and beyond. These actions can happen anywhere and signal an event where learning could occur. All of these can be recorded with the Tin Can API.” This openness is very important and, in our view, the biggest benefit that TinCan brings to the world.
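Because a statement is just JSON over HTTP, "sending" one boils down to a single authenticated POST to the LRS's statements resource. Here is a sketch of assembling such a request without actually sending it; the endpoint and credentials are hypothetical, and Basic auth plus the version header are typical of 1.0-era LRSs, so any HTTP-capable client, from a phone app to a game, can do this:

```python
import base64
import json

def build_lrs_request(endpoint, username, password, statement):
    """Assemble the pieces of an HTTP POST that delivers one
    statement to an LRS's /statements resource."""
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {
        "url": endpoint.rstrip("/") + "/statements",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": "Basic " + auth,
            "X-Experience-API-Version": "1.0.0",
        },
        "body": json.dumps(statement),
    }
```

No JavaScript API, no browser frame-hunting for an LMS object: the entire contract is a URL, three headers, and a JSON body.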

TinCan also claims improvements on another commonly requested but rarely delivered piece of functionality: the ability to complete learning objects offline and synchronize when you get back online. Even when not working completely offline, people ask for better handling of browser timeouts and connection drops. SCORM depends on the browser session, so such issues are common and catastrophic.

In reality, the new API offers little direct help on this front. However, since the communication happens through simple messaging, client programs can easily store the messages while offline and deliver them to the LRS whenever the user comes back online. Basic as it seems, this is an efficient solution. Never underestimate the power of simplicity!
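The store-and-forward pattern just described can be sketched in a few lines. Here `send` stands in for whatever call actually posts a statement to the LRS; it is a hypothetical placeholder, not part of any spec:

```python
class StatementQueue:
    """Buffer statements while offline; flush them to the LRS
    when connectivity returns."""

    def __init__(self, send):
        self.send = send      # callable that posts one statement
        self.pending = []

    def record(self, statement, online):
        """Deliver immediately when online, otherwise buffer locally."""
        if online:
            self.send(statement)
        else:
            self.pending.append(statement)

    def flush(self):
        """Deliver buffered statements in the order they were recorded."""
        while self.pending:
            self.send(self.pending.pop(0))
```

Since nothing in a statement depends on a live browser session, buffering and replaying later is safe in a way that SCORM's CMI calls never were.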

TinCan is very cryptic about a few prominent elements of SCORM, like packaging. The reason is that you might not need packaging at all: your learning object might be a mobile application or a game that does not run inside an LMS, and packaging has no value in such a loosely coupled environment. If you choose to import a TinCan package into your LMS, though, then yes, you will need to deal with content packaging, launch and import issues. [i]

TinCan also has little to do with the complexities of things like sequencing. Do you remember what SCORM 2004 sequencing was? Let me refresh your memory…

“In SCORM 2004, the sequencing is completely dynamic; the sequencing implementation identifies the next activity based on both Tracking Model and Sequencing Definition Model of activities. In fact, the values of Tracking Model are dynamic but the values of Sequencing Definition Model are static. Actually, in SCORM 2004, the sequencing implementation collects the result of learner interactions with SCO (through CMI data model) and maps them to the Tracking Model and then evaluates the sequencing rules (defined for activities) based on the Tracking Model.” [ii]

This sort of complexity led to very low SCORM 2004 adoption. In our experience, over 90% of SCORM content is still 1.2. Perceived simplicity is the reason: people just want to grab the raw score. The other things SCORM 2004 offers are often surplus to requirements. People often ask for SCORM 2004 support but rarely use it.

Our biggest complaint with SCORM was that it is a reference model and not truly a standard; you don’t plug this into a wall and everyone works the same way. There is still too much variation in how compliant SCORM LMSs implement UI associated with the SCORM RTE. Will content be loaded in a new window? A frameset? How large a window? How will the table of contents be presented? What navigation request does closing the browser imply? Content authors should be able to rely on a consistent set of UI expectations.

Unfortunately, TinCan does not help towards this standardization. On the contrary, it leaves even more freedom to content creators, for example by letting them define their own verbs for use in statements. Interoperability of content between LMSs can be somewhat improved thanks to the simpler messaging system and the absence of JavaScript; however, standardization of presentation or reporting will not benefit from TinCan directly.

To summarize, TinCan brings many good things, like simplicity and freedom from the browser and the LMS. On the other hand, it falls short on standardization of UI and reporting. In essence, TinCan tries to bridge elements of formal learning (mainly reporting) with informal activities (e.g., browsing or playing a game). We foresee additional tools or sub-standards on top of TinCan to address real-world issues, especially reporting and standardization of the verbs used in statements.

[i] http://scorm.com/wp-content/assets/tincandocs/Incorporating-a-Tin-Can-LRS-into-an-LMS.pdf

[ii] http://stackoverflow.com/questions/12080589/with-a-scorm-2004-lms-and-or-scorm-2004-scos-can-a-teacher-change-the-sequenci