DevLearn Post-Conference Resources

We had a great deal of fun talking games and gamification at DevLearn 2013. Whether you attended Sharon Boller and Karl Kapp’s learning game design workshop, visited the Knowledge Guru® booth in the Expo, or attended our learning stage session on ways you can build your own games, we are grateful for the opportunity to connect and share with you.

In the learning stage session I gave with Leanne Batchelder, Powerful Learning Games You Can Build Yourself, I called out some specific resources we would send to you after the conference. Here they are:

Scroll of Knowledge: Game-Based Learning: Why Does it Work?

This downloadable PDF outlines the four requirements needed for learning, then maps them to game elements and mechanics that match. The scroll is a great introduction to the world of games. If you’d like to see the sources and research studies associated with the scroll, contact us.

View Game Based Learning Scroll

Event Slides: Powerful Learning Games You Can Build Yourself

Our learning stage session summarized the recent research and case studies surrounding the use of games for learning, then showed you how you can use Knowledge Guru to build your own learning games. See the slides below.

Using Game Elements to Improve Learning Outcomes

Sharon Boller, Knowledge Guru® creator and BLP president, has written a new white paper on Game Mechanics and Game Elements. I gave some of my own general ideas on game mechanics in a previous post, and this one’s all about game elements. Sharon identifies 12 common ones in the white paper:

  • Conflict
  • Cooperation
  • Competition
  • Strategy
  • Chance
  • Aesthetics (ooo, pretty!)
  • Theme
  • Story
  • Resources
  • Time
  • Rewards
  • Levels
  • Scoring

Are these the only game elements? Of course not. They ARE the most common ones, and chances are you could analyze any game you’ve played and find at least a few of these. The white paper goes into detail for each one of these elements, covering a few key points:

  • What type of learners/players will respond well to each element.
  • What learning objectives work best with each element.
  • What questions you should ask as a learning designer when attempting to use each element.
  • Specific examples of game elements found in commercial games, as well as learning games we have created.

No matter what game elements you use, always think about how they will work together with your game mechanics to maximize the learning experience for your players. You’ll need to play test a few times to get everything just right, but that is just a part of game design.

Learn More in Our White Paper

Using Game Mechanics and Game Elements in Learning Games is 25 pages of specific advice and examples for creating learning games. It’s designed as a practical guide rather than a collection of theories. You can use it to improve your learning game design efforts right away. Download it now.



How to Link Game Mechanics to the Learning Experience

If you want to design a learning game, you need to know about game mechanics and game elements. This post is about mechanics.

Game mechanics are the rules players follow… and the rules the game itself follows. In the board game Ticket to Ride, you can choose between four possible actions but may perform only one of them each turn. In Settlers of Catan, you have to move the Robber whenever you roll a seven. These are all game mechanics.

It takes tons of hard work and play testing to make game mechanics balanced and fun in commercial games… and learning games are even tougher. That’s because the game mechanics need to carefully link to the learning experience, and help learners achieve the learning objectives in your design. You have to think like a game designer and an instructional designer at the same time.

“The best mechanics link to the learning experience, or at a minimum, don’t distract from it,” says Sharon Boller in her new white paper on Learning Game Mechanics and Game Elements. The secret is to have players complete the same mental tasks they will do in the real world. This does NOT mean your game has to be hyper realistic!


Imagine you’re designing a game for a sit-down pizza restaurant to teach the wait staff customer service basics. Your learners are mostly college-aged. Before you start designing the game, think about the context in which learners will need to apply skills on the job: waiting tables is time-based, involves multitasking, and requires teamwork. Your game mechanics should imitate this by including rules such as timed turns, chaining multiple activities together quickly, and measuring success or failure based on how the group performs.

While the game mechanics need to link carefully to the way learners need to think on the job, this does NOT mean your game’s setting must be hyper realistic! What if the game involves serving up finger foods in an underwater resort? What if it were mob-themed? This is where creative use of game elements comes in… and I’ll talk about that more in another post.

Bottom line: make your game mechanics match up with how people will need to think on the job, but get creative with everything else to fit your audience.

Learn More in Our White Paper

I’m really just scratching the surface here. Sharon Boller, BLP president and creator of Knowledge Guru®, has written a white paper on Using Game Mechanics and Game Elements in Learning Games. The white paper is full of case studies taken from real learning games we’ve designed for corporate clients. It’s a great starting point if you (or the people you manage) are ready to implement a game-based learning solution. Download it now.



Using Game Mechanics and Game Elements in Learning Games (White Paper)


Sharon Boller, creator of Knowledge Guru and co-author of Play to Learn: Everything You Need to Know About Designing Effective Learning Games, has authored a white paper on using game mechanics and game elements in learning games. Through a variety of case studies and real-life examples, the white paper demonstrates how learning game designers can design game mechanics and game elements that support real learning outcomes.

When a game’s mechanics—or set of rules—are too complicated, it detracts from the learning. But if the mechanics are too simple or predictable, players will not find the game fun. And even when the game mechanics are figured out, you’ll still need to choose appropriate game elements (we list 12 common ones) that link to your desired learning outcomes.

Sharon explains all of this and more in the white paper. She also shows that the secret to a good learning game is to play test and iterate, always seeking the right balance with game mechanics and game elements.

Game Mechanics

  • Learn to choose the right game rules for players to follow. (And the right set of rules for the game itself to follow.)
  • See examples of learning games used in corporate and non-profit settings and examine their game mechanics.

Game Elements

  • See an overview of the 12 most common game elements, including competition, cooperation, strategy, chance and more. Most importantly, you’ll learn how each one of these game elements can link to learning… and which learners will respond best to each element.
  • See examples of each game element being used in a real-world learning game… mostly in corporate settings.

Written for Designers, Developers and Managers

If you play a role in designing, developing, or managing the creation of learning solutions and are ready to include games in your learning mix, this white paper is for you. Look for a recurring series of “Questions to Ask as a Learning Designer” to challenge yourself, or your design team, to think differently about learning game design.

Download the White Paper

You can download the white paper here.

About the Author:

Sharon Boller is the president of Bottom-Line Performance, Inc., an Indianapolis-based learning design company. She is also the lead designer of Knowledge Guru, a platform of games for training reinforcement.

Sharon founded BLP in 1995 when online learning was a blip on the screen of the T&D industry. Now, there are hundreds of digital learning solutions and games under the company’s belt. Sharon’s primary area of interest is games and the gamification of learning. In addition to her work with Knowledge Guru, she has been the lead designer of numerous other digital and tabletop learning games.

Sharon considers herself very much a learner rather than a teacher, and her presentations are geared toward this. She likes to show her own lessons learned and point to other “gurus” in game design.

Sharon speaks at numerous conferences on the topic of learning game design—including ATD, DevLearn, and Training. She also co-facilitates a learning game design workshop with Dr. Karl Kapp, author of The Gamification of Learning and Instruction. She also co-authored Play to Learn with Dr. Kapp.

Want to learn more about game design? Pick up a copy of Play to Learn: Everything You Need to Know About Designing Effective Learning Games.

Learning Research by Annie Murphy Paul: Distributed Practice, Repetition and More

Interested in spaced learning and distributed practice? Then download our free Primer on Spaced Repetition and Feedback Loops. This guide will teach you everything you need to know about these concepts so you can incorporate them in your own training.


For the past year, Knowledge Guru® creator Sharon Boller has been a recipient of Annie Murphy Paul’s Brilliant Report, a weekly newsletter on the science of learning. As a result, many an article has been forwarded around the company, and it’s always an interesting read.

Annie Murphy Paul’s bio tells us she is a “book author, magazine journalist, consultant and speaker who helps people understand how we learn and how we can do it better.” Most importantly for learning professionals, she has sifted through loads of research on the science of learning, synthesized it, and presented it to us in easy-to-read chunks (usually) once a week.

I’m highlighting two specific studies found in Murphy Paul’s Brilliant Report, as they are particularly relevant to the learning principles we use in our Knowledge Guru game engine.

Distributed Practice

It turns out that the learning tactics most commonly used by both students and professionals are also the least effective. A 2013 report by the Association for Psychological Science examines ten learning tactics, rating their utility based on evidence gathered by five leading psychologists. The team was led by Kent State University professor John Dunlosky. Re-reading material, highlighting, and underlining key points were all deemed “ineffective” learning tactics by the researchers, showing little value beyond simply reading the text.

Murphy Paul notes that “the learning strategies with the most evidence to support them aren’t well known outside the psych lab.” A prime example is distributed practice, or intentionally breaking learning into chunks and spacing study sessions out over time. The report shows that multiple repetitions, spaced out over time, build long-term memory better than other study methods. Murphy Paul also notes that “The longer you want to remember the information, whether it’s two weeks or two years, the longer the intervals should be.”

Takeaway: Instead of delivering training all at once, space it into smaller sessions. No cramming!
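
As a rough illustration of spacing, here is how an expanding review schedule might be generated in code. The starting gap and growth factor below are my own illustrative assumptions; the research says intervals should grow with the desired retention period, not what specific values to use.

```python
from datetime import date, timedelta

def review_schedule(start, sessions=5, first_gap_days=2, growth=2.0):
    """Illustrative expanding-interval schedule: each review session is
    spaced further from the last, matching the "longer retention, longer
    intervals" principle. Gap sizes here are invented defaults."""
    gap = first_gap_days
    schedule = []
    day = start
    for _ in range(sessions):
        day = day + timedelta(days=round(gap))
        schedule.append(day)
        gap *= growth          # each interval is longer than the last
    return schedule

# Example: five review sessions after an initial study day
print(review_schedule(date(2013, 11, 1)))
```

Notice how the later sessions spread out over months rather than days; that widening, not the exact dates, is the point of distributed practice.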

Practice Testing

The Association for Psychological Science report also noted Practice Testing as a highly effective learning tactic. Giving learners more evaluations, often not graded, will aid in further learning. Because learners must bring information to mind multiple times, they are more likely to remember it. Murphy Paul notes that flash cards are a familiar method to use for practice testing.

Takeaway: Have learners retrieve information multiple times in a “practice” setting.


Pre-Testing

It might seem counterintuitive, but giving learners a pretest before they have studied the material will actually enhance long-term memory. In Issue 14 of her Brilliant Report, Murphy Paul explains how completing a test on information you do not yet know, then receiving feedback afterwards, is an effective learning strategy. She cites three studies, one of them authored by Williams College psychology professor Nate Kornell and published in the Journal of Experimental Psychology. She explains:

Kornell and his coauthors theorize that searching our minds for answers (even if we come up empty) creates “fertile ground” in the brain for encoding the answer when it is eventually provided.

Takeaway: Allow learners to try, and possibly fail, before learning your content.

Learning Principles at Work in Knowledge Guru

Annie Murphy Paul’s research dovetails nicely with the work done by John Medina for his book, Brain Rules. We based the design of the Knowledge Guru game on learning principles discussed in Medina’s book, and they are further validated by the research Annie Murphy Paul has compiled. Some examples:

  • Distributed practice sessions, accomplished via the separate Guru Grab Bag mode. Players return to the game to play the new game mode, with repeat content, after they complete the regular game.
  • “Practice Tests”, accomplished via multiple repetitions. Each learning objective in Knowledge Guru has one or more question sets, which include three iterations of the same question. Learners must successfully answer all three questions to master the topic. The multiple repetitions enhance long-term memory.
  • “Pre-Tests”, accomplished via asking questions, then providing immediate feedback. When Knowledge Guru is used as the primary learning method, learners answer questions they may not yet know the answer to. When they get the question incorrect, they receive immediate feedback and then try to answer the question again.
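
The repetition rule above can be made concrete in code. Knowledge Guru’s actual engine is proprietary, so the function and field names below are hypothetical; this is just a sketch of the “answer all three iterations correctly to master the topic” idea:

```python
def topic_mastered(question_sets):
    """A topic counts as mastered only when the first three iterations
    of every question set have been answered correctly (three per set,
    as in the Knowledge Guru description above). `question_sets` maps a
    set name to a list of booleans, one per iteration attempt."""
    return all(
        len(results) >= 3 and all(results[:3])
        for results in question_sets.values()
    )

# Hypothetical learner record: one question set fully correct, one not
progress = {
    "objective-1-set-a": [True, True, True],
    "objective-1-set-b": [True, False, True],   # missed one iteration
}
print(topic_mastered(progress))  # -> False: topic not yet mastered
```

The learner who misses an iteration would retry it after feedback, which is exactly the retrieval-plus-feedback loop the research above supports.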

Read Annie Murphy Paul’s Upcoming Book

If you’re interested in learning more about, well, how we learn, then have a look at Annie Murphy Paul’s blog and keep an eye out for her upcoming book, Brilliant: The Science of How We Get Smarter. You can also subscribe to The Brilliant Report, her email series on learning science.

Final Step in Learning Game Design: Playtest, playtest, playtest

Previous posts in this learning game design series have focused on sexy stuff such as game goals and core dynamics, game mechanics, game elements, or scoring and rewards. This final post is about the critical importance of playtesting your game as you move through development.


Designing a game is different from designing an eLearning solution. There is a totally new term that comes into the process: playtesting. Playtesting is NOT usability testing, focus group testing, quality assurance testing, or internal design review. Playtesting is what you do to evaluate whether your game is really playable and functions the way you intended – as a game and as a learning solution.

Playtesting helps you answer these questions: Is it fun? Is it balanced (i.e. not too hard and not too easy)? Is it complete? Did people learn what you intended for them to learn? Playtesting is not something you do once or twice. You do it several times, each time further refining the game play experience and the learning experience. For mega-games like Halo or The Sims, designers may have done up to 3,000 hours of playtesting to verify that their game worked. For a learning game you craft yourself – or with a small team – you should assume at least 30-40 hours of playtesting. Gulp – that’s right. 30-40 hours of testing time.

Phases of Playtesting


Your first playtest is with the very first version of the learning game you create, which should be a paper prototype. The image above is a paper prototype participants created in the Play to Learn workshop that Karl Kapp and I do together. It’s rudimentary, but it gets the job done. The game designers and learning designers very quickly discover what works and where the holes are – and there will be holes. They’ll also come up with new ideas or ways to tackle problems. Here’s an overview of the three major phases of playtesting you should plan to go through. You may do multiple rounds of playtesting within each phase.

Phase 1 – Self-test. You and your design team play the initial prototype and evaluate it. It’s okay to do a lot of discussing while you’re playing – and to modify rules and ideas on the fly as you go. You should keep game materials very basic for this test. Paper is best. If the paper test goes well, you can shift to online formats. If it doesn’t, redo the game on paper and playtest it again as a paper version before going online.

Phase 2 – Playtest with friends and colleagues. Once you go through initial playtests with your design team and refine your game a few times, you’re ready to pull in some outside perspectives. Ideally you will include someone from your target audience. Your team’s job is to sit back and observe (quietly!) while others come in and play the game. At this point, you want there to be some legitimate game assets – artwork, programmed interactions, real content, scoring, and rules to follow. You’ll debrief the experience and then decide what changes to make.

Phase 3 – Playtest with (gulp) strangers. Ideally all these strangers represent your target audience. These folks will be 100% objective, which friends and colleagues are not. I’ve learned the hard way (by going too far with internal playtesting and getting “great” results) that you need to loop in people who do not care about your feelings or the amount of time you’ve spent on the game. However, they SHOULD be people who really need to learn the stuff in your game. Otherwise, they may rate it lower simply because it’s not of interest to them.

6 Tips for a Good Learning Game Playtest

  1. Don’t share the background of the game before people play. That’s part of the playtest. Can your players “get it” without you explaining what the game is about?
  2. Do tell them what to expect: 15-20 minutes of game play followed by Q&A.
  3. Emphasize the need for playtesters to “think out loud” as they play. You want to hear their internal thoughts spoken aloud. Things such as “This is really confusing,” “I don’t understand the rules,” and “I wonder what would happen if I made this choice?” are all good things to say aloud.
  4. Keep your own mouth closed as much as you can. Do help players if they get truly stuck, but try to limit your interactions with players during the game.
  5. Stop play after about 20 minutes and conclude with debrief questions. Take copious notes.
  6. Keep a playtesting journal or log that documents the results of each playtest you do and chronicles the decisions you make about game changes.
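
Tip 6 suggests a playtesting journal. As one possible shape for such a log (the file name and field names below are my own invention, not a prescribed format), here is a minimal sketch that appends one record per playtest:

```python
import json
from datetime import date

def log_playtest(path, phase, testers, findings, decisions):
    """Append one playtest record to a JSON-lines journal file.
    Field names are illustrative; adapt them to your own process."""
    entry = {
        "date": date.today().isoformat(),
        "phase": phase,        # e.g. "self-test", "colleagues", "strangers"
        "testers": testers,    # how many people played
        "findings": findings,  # what confused or engaged players
        "decisions": decisions # game changes you committed to
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical entry after a Phase 2 session
log_playtest("playtest_log.jsonl", "colleagues", 4,
             ["Rules for trading resources were unclear"],
             ["Rewrite rule card; add an example turn"])
```

A plain notebook works just as well; the point is that every playtest leaves a dated record of what you saw and what you decided to change.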

Post-play debrief questions

We used to have a pretty big list of questions we asked. We’ve distilled the list down to five:

  1. What did you learn? Compare responses to what people were supposed to learn. Rationale: if people didn’t learn, the game doesn’t work – no matter how fun people think it was. It’s critical you get people to tell you what they learned in their own words so you can compare it against the learning goal and learning objectives of the game.
  2. On a scale of 1 – 5, with 1 being low and 5 being high, what was your engagement level in the game play experience? Rationale: if people weren’t engaged, then they aren’t having fun – and games should be fun. They should intrigue or interest the player. Otherwise, the player will mentally check out pretty quickly and not learn much of anything.
  3. Did your engagement level change at any point during play (going up, down)? Rationale: There could be a confusing game element, a rule that needs to be changed or enhanced, or some other game element that requires adjustment to maximize the experience. Conversely, the game could start slow and really build for players. You want to assess all of this and determine if and how the game goal, the core dynamic, the game mechanics, or the game elements need to be tweaked. Careful listening can help you decide if you need to adjust a game element, a rule, or even the game’s goal or core dynamic.
  4. If it did change, why did it change?
  5. What, if anything, did you find confusing or hard to understand as you played? Get people to explain their responses. Don’t accept a “Yes. The rules were confusing.” Rationale: even in a good game, there can be some confusion about how to play. Your job is to figure out whether the player’s confusion warrants action on your part. Did their confusion affect their learning or the engagement factor? If not, you may decide to do nothing. Was confusion limited to a single player, or did many players report the same confusion?

Learning Game Design Series, Part 8: Dump ADDIE; Iterate Instead

Learning game design is a VERY iterative process. It’s not an approved design document, two drafts plus final—or design, alpha, beta, and gold master.

This post describes (and shows) the iterative design process required to create an effective learning game. I define “effective” as a game that 1) achieves the learning goal set for the game and 2) players describe as engaging or fun to play.

Version 1

We have a project going on right now that includes several different “mini-games.” Below is an early prototype for one of them. (It’s the first version beyond the initial paper prototype.) In its first programmed version, it was called “Story Shuffle.” The learning goal was to be able to identify the data you need to collect as part of an incident investigation.



Note that even this early version includes content. Unlike other kinds of learning designs, a game has to include content from the beginning. When we “played” this initial programmed version (v1), we quickly decided it wasn’t fun – and wasn’t “game-y” enough to suit us.

Version 1.1

We brainstormed—and came up with a new way to approach the game—but we thought we could streamline our process by NOT putting in the content. Instead, we thought we’d follow Michael Allen’s SAM approach and just build the interaction out so we could see if we liked it. Here’s our V1.1.


We had five people playtest it, and all of them concluded: “We can’t tell whether it will be fun or not because there is no content.” We also had no idea what it would look like aesthetically. If you remember from my game elements post, aesthetics are part of the fun factor. We needed something to show us what this game would “feel like.” Lesson learned: with learning games, real (or at least realistic) content has to be part of every iteration because you can’t assess the game without it. We also agreed that, for us, art assets need to be included relatively early so playtesters get a sense of the look/feel of the game. Aesthetics are too important a game element not to start fleshing out early.

Version 1.2

Here’s V1.2 of the game, now renamed “Late for Lunch.” The game goal is to get to lunch before you pass out from hunger. The learning goal is the same: to be able to identify the appropriate data to gather for an incident investigation.


In this version—which is still not done—we included aesthetics. Our testers concluded this version is pretty good, though we still have more revisions to make. In contrast to the previous version, we now have CONTENT, which lets us evaluate the playability of the game. Never again will we try to shortcut and omit content from an iteration of a game.

As you iterate in learning game design, consider this as a possible sequence for a digital game. If it’s not a digital game, then you obviously won’t create digital outputs. You’ll simply keep refining the components of the table-top game:

  1. Conduct a game design brainstorming meeting. Define your instructional goal and objectives and your audience characteristics; select a theme, a game goal, and a core dynamic. Identify possible content you can include. Build a paper prototype, including some content that would actually appear in the game.
  2. Playtest this paper prototype in the game design meeting. Document player reactions; identify the first list of revisions.
  3. If revisions are extensive, build a second paper prototype. Playtest again. Identify revisions.
  4. Build an initial digital version of the game, perhaps in PowerPoint. Create enough art assets to give testers a sense of the game’s theme and look/feel. Include enough content that playtesters can evaluate the game for its fun factor and its learning value.
  5. Playtest this digital version with 3-4 playtesters. Document feedback. Determine next steps. If needed, revise this digital version again. Playtest again.
  6. Build the initial programmed version (V1) of the game with enough game levels or game loops within it for players to fully experience the game mechanics and assess the core dynamic. Include sufficient content to support the levels or loops you create. (If your game is going to have multiple rounds or levels of play, this means you don’t develop ALL of them – you develop enough to let players assess the playability and learning efficacy.)
  7. Playtest. Document feedback. Determine next steps.
  8. Revise to V1.1. Playtest again.
  9. Etc. When we go “live,” we are at V2.

How far do you iterate?

How far do you go with the iterations? You iterate until you get satisfactory answers to these questions from players who truly represent your target audience:

  • What did you learn? (Responses should mirror what you wanted them to learn.)
  • Did you remain engaged in the game throughout play? (You want a “yes,” here.)
  • Did anything confuse you about game play? If so, what was it? (You want to unearth any major confusion. You may not act on everything someone finds confusing. It depends on how many people cite it as a problem, and if the confusion hinders learning or engagement.)

August Learning Game Design Workshop Recap

Play to Learn: Designing Effective Learning Games

Another successful Learning Game Design workshop is in the books. Sharon Boller and Karl Kapp gave their first joint session in May of 2013 at ASTD International in Dallas… and the gaming goodness continued in Indianapolis on August 28th. We had a full room, with many participants coming from the Indianapolis area. Some out-of-town guests joined the party as well.

Karl Kapp at Learning Game Design Workshop

I assisted Sharon and Karl at this workshop at ASTD ICE, but this was the first time I actually got to experience it as a participant. It was an absolute luxury to spend an entire day playing games, learning about various game mechanics and game elements, and prototyping a game with a small team.

ExactTarget was our gracious host for the workshop. They have a great office space, and even provided participants with a midday ice cream break!

Flow of the day

Once initial introductions were out of the way, we spent some time going over Core Goals, Dynamics and Elements found in learning games. It was a nice way to gain some quick exposure to the terminology… without the workshop turning into a lecture.

Our terminology review only lasted about 20 minutes, and then we were turned loose to, you guessed it, play games! This was a workshop highlight, and helped “break the ice” among participants. Sharon and Karl carefully selected games, one competitive and one collaborative, that showcased the various game elements and dynamics we were discussing.

Gameplay at learning game design workshop

Our play session led nicely into a short discussion on best practices to follow when designing games. I found it easier to conceptualize these after just playing two games, noting mechanics I liked and disliked.

After lunch, we were split into teams for a fun review exercise… using Knowledge Guru®! Sharon took all of the content on game mechanics, elements and dynamics (much of it can be found in the Learning Game Design blog series) and used it to create a guru game called Game Design Guru. Teams competed against each other for 10 or 15 minutes, trying to get the high score. The spaced learning and repetition built into the game engine, and into the workshop itself, really helped reinforce the new terminology I had learned.

Playing Game Design Guru

After our game, we broke into teams and spent the rest of the afternoon prototyping a game of our own. I was on a team with a fellow BLPer, Corey Callahan, and one of our clients. The client is currently developing ideas for a game she can use at her company, so we were able to use what we learned throughout the day to make a prototype she can (hopefully) actually use.

Play to Learn workshop - play testing

Once teams created their prototypes, we rotated to another table and playtested. The feedback the game designers received was just as valuable as the content Sharon and Karl taught earlier on; it’s only through trying to make a great game, getting feedback, and improving that we actually get better as game designers.

Karl summed it all up best towards the end of the day, when he said learning game design really comes down to “knowing what rules to break.” After a day of learning best practices and trying out our new skills, I left the workshop ready to produce my next learning game prototype… and make it better.

Games to Play and Evaluate

Sharon has compiled a list of games you can evaluate, including some game mechanics and elements to look out for as you do.

See the list here

Game Design Guru

You can play Game Design Guru to learn basic learning game design terminology.

Play here


Here is an example of a paper prototype created by participants at the Play to Learn workshop.

Play to Learn workshop prototype

Attend a Workshop

Want to learn more about this workshop? Interested in attending in the future? Two more sessions are coming up this year: one in Chicago and another in Las Vegas. Sharon and Karl can also deliver a private workshop just for your organization.

Click here to learn more and see upcoming dates.

The Chicago workshop is a shorter, 2.5-hour version of this workshop.

Learning Game Design: Think About the Learning and Then the Game


I am a firm believer that most games teach. However, not all games are explicitly designed to be learning games. If your intention is to create a learning game that achieves specific learning outcomes for the players, then you have to think about the learning before you begin crafting the game design.

It’s critical to have a strong understanding of game goals, core dynamics, game mechanics, and game elements—but that understanding doesn’t guarantee you a learning game if you don’t also have solid instructional design skills. Why? Because an effective learning game requires a solid instructional goal and learning objectives, as well as a clear understanding of the backgrounds and preferences of the target audience for the game.

The phrase “learning game” says it all—you are creating games that help people learn. What distinguishes a serious game from a commercial game is its intention to help people learn something specific. Players will either know something or be able to do something as a result of playing the game. In many instances, there may be attitudinal adjustments you’re seeking as well.

Questions You Need to Answer Before Designing Any Game

As part of our Play to Learn book, Karl Kapp and I put together a checklist of questions learning game designers need to answer before starting on game design. Here are the questions, which are pretty straightforward needs analysis kind of stuff:

What is the business need that is driving the use of a learning game?

  • A need to increase sales or to support the launch of a new product?
  • Customer complaints or ineffective customer service?
  • A need to comply with government regulations?
  • Quality issues?
  • Safety issues?
  • A need to build knowledge or skill on a business-critical process?
  • Something else? What is it?

After playing this game, what will learners be able to do in their jobs? (This should be your instructional goal.)

  • As part of achieving the instructional goal, what do learners need to know, do, and believe? (These statements convert into your game’s learning objectives.)

Want the full checklist? Pick up Play to Learn: Everything You Need to Know About Designing Effective Learning Games.

Learning Game Design: Rewards and Scoring


In preceding posts on learning game design, I’ve focused on game goals, game mechanics, and a variety of game elements. Continuing with game elements, this post focuses on rewards and scoring. Let’s look at rewards first.


Rewards can be anything players earn via game play. Some games have them. Many games don’t. The new wave in learning games—and in gamification of learning—is to give players achievements for accomplishing certain tasks or hitting certain milestones. There is a general trend toward giving a LOT of rewards—and this isn’t necessarily a good thing in learning games. Here are some general rules for rewards:

  • Reward people for completing boring tasks but not interesting ones. If the task in a game is interesting, the task itself (or the accomplishment of it) is the reward.
  • If you choose to give rewards, give them for performance rather than completion. Giving someone a badge for completing a section, for example, isn’t a good idea. (This is sort of like the school rewards given for perfect attendance: you’re rewarded for showing up, regardless of how well you performed while there.) Instead, give a reward only when players complete the section to a certain standard of proficiency.
  • Let your rewards, such as points, serve as feedback to the player. Only allow the player to earn points (or resources) when they perform to a certain standard, and have them lose points when they fail to meet that standard.
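To make the performance-based rule concrete, here is a small sketch. It is purely illustrative; the function name and point values are my own assumptions, not part of any actual game engine:

```python
# Illustrative sketch: points as performance feedback, not completion credit.
# The function name and point values are assumptions for this example.

def score_attempt(met_standard: bool, points_possible: int = 100,
                  penalty: int = 50) -> int:
    """Award points only for meeting the standard; deduct for missing it."""
    if met_standard:
        return points_possible   # reward performance, not mere completion
    return -penalty              # failure carries a visible cost

# The running total itself becomes feedback: it rises with good performance
# and falls with poor performance, rather than creeping up with attendance.
total = 0
for outcome in [True, True, False, True]:
    total += score_attempt(outcome)
```

Because wrong attempts subtract points, a player who merely clicks through every question can still finish with a low score, which is what keeps the score meaningful as feedback.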

An Example:

We have an achievement case in Knowledge Guru. Players can earn badges for reaching certain milestones in the game. At first glance, you might think, “Well, that’s completion.” But to actually earn a milestone, a player needs to avoid the costly point deductions that come from wrong answers. Simply answering a question isn’t sufficient: answering incorrectly and then correcting yourself (especially on the final path) can keep you from reaching the point level a milestone requires.
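A minimal sketch shows the distinction between completion and performance. The threshold value and names here are invented for illustration; this is not Knowledge Guru’s actual code:

```python
# Hypothetical sketch: a milestone requires both completing a path AND
# clearing a point threshold, so wrong-answer deductions can block the badge.

MILESTONE_THRESHOLD = 500  # invented value for illustration

def earns_milestone(path_complete: bool, score: int) -> bool:
    """Completion alone is not enough; the score must clear the threshold."""
    return path_complete and score >= MILESTONE_THRESHOLD
```

A player who finishes the path with a score of 450 after wrong-answer deductions completes everything yet still misses the badge, which is what turns the badge from a completion reward into a performance reward.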

After testing the game many times, we concluded that these achievements matter to a significant subset of players and not at all to others. Keeping them in doesn’t hurt the game, and it appeals to a certain demographic: some players responded positively to their existence (“Oh, I hit another milestone”), while players who don’t value badges simply ignored them. So we kept the achievements in the game.


Scoring is crucial to how players perceive a game. A good scoring algorithm helps hook players into the game experience. A poor one demotivates, sometimes dramatically.

Scoring typically correlates with how well or poorly someone is doing in the game and ties to their progress. It can be points (as in Knowledge Guru), dollars earned, resources accumulated, and so on. In a learning game, scoring should offer players clear feedback on how well they are learning. Players often view the scores they receive as a form of reward, and good scoring can motivate continued game play.

Play-testing a game is an invaluable way to find out if your scoring is increasing player motivation, decreasing it, or having no influence at all. The example below illustrates this.

An Example:

We developed a game for sales reps called Formulation Type Matters. The story: a predecessor decimated sales in your newly assigned territory, and your job is to regain customer satisfaction and sales by successfully resolving a variety of issues related to formulation type. When players entered their territory, they discovered five customers, each with a different issue. Players had to locate the appropriate information to resolve each issue.

Behaviors we were trying to cultivate in players included:

  • Asking relevant questions to clarify the problem and to understand the customer’s needs and past behavior.
  • Seeking out relevant expertise within the player’s real organization, which had technical team members in place to support sales reps in the field.
  • Reviewing past issues to understand what might be driving current customer behavior.
  • Responding appropriately to the customer the first time.

Our feedback and scoring mechanisms included:

  • Territory sales, which could go up or down based on the player’s decisions in the game. As players worked through a customer’s scenario, everything was expressed as “potential sales.” When the player actually responded to the customer, the sales shifted from potential to actual.
  • Customer satisfaction, which could rise or fall based on the player’s actions.
  • Customer complaints, which could rise or fall based on the player’s actions.

Good Intentions, Bad Results (This Is Why We Playtest)

In an early iteration of the game, we rewarded players with an increase in potential sales for every question they asked the customer. The client wanted to emphasize the importance of asking questions so we made sure every question players could ask was a good question that provided valuable information. Our intent was to reinforce question-asking behavior. Good idea, right?

Wrong. When we playtested, players focused intently on each question they could ask the first customer they selected in their sales territory. Players took notes on the customer’s responses. However, once all questions had been asked of this customer (and players got an increase in potential sales for each one), they noted, “So I get points for every question? There’s no strategy in selecting which questions to ask?” With subsequent customers, players paid much less attention to the questions they asked, a response we did not want.

Based on the playtest, we redesigned the scoring and varied the customer questions. For one customer, we made ALL the questions good ones. For the others, we included good, neutral, and poor questions. This forced the player to truly think about the value of asking each question.

Additional scoring decisions that increased player motivation:

  • Rewarded players with potential sales for accessing resources, since this was a key behavior we wanted them to carry into the real world.
  • Gave players a significant negative consequence for selecting an incorrect response to the customer issue. We always provided four response options, every one of them realistic. If players responded incorrectly, they lost half of their sales potential. They had to try again before they could return to their main territory map, but they could not regain all of the lost sales potential. This scoring choice reinforced the real-world need to give customers accurate information or risk losing customer satisfaction and future sales.
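The incorrect-response penalty can be sketched in a few lines. The names, the exact halving, and the regain fraction are my illustrative assumptions about the rule as described, not the game’s real implementation:

```python
# Illustrative sketch of the incorrect-response penalty: a wrong answer
# halves potential sales, and a retry can win back only part of the loss.
# Names and the regain fraction are assumptions for this example.

def wrong_response(potential: int) -> tuple[int, int]:
    """Halve potential sales; return (remaining, amount lost)."""
    lost = potential // 2
    return potential - lost, lost

def successful_retry(remaining: int, lost: int,
                     regain_fraction: float = 0.5) -> int:
    """A retry regains only a fraction of what was lost, so the
    wrong answer always carries a lasting cost."""
    return remaining + int(lost * regain_fraction)

# Starting from 1,000 in potential sales, a wrong answer leaves 500;
# even a successful retry brings the player back to only 750.
remaining, lost = wrong_response(1000)
final = successful_retry(remaining, lost)
```

The key design point is the cap on regained value: because the final total can never climb back to the original potential, the scoring itself communicates that inaccurate information has a lasting cost.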



Think very carefully about both rewards and scoring. These two elements require as much thought as your game mechanics and the more “fun” elements such as theme, story, and aesthetics. Rewards and scoring should also correlate with what you want people to learn and the feedback you want to give them regarding their performance.