The subject of difficulty settings in video games is a divisive one in the gaming community, often characterized by the stance that games shouldn’t cater to low-skill players. Many gamers believe that part of the experience of a difficult title is simply to keep playing until they get better. Although research supports the idea that challenge is integral to enjoying video games, the key factor appears to be an optimal challenge: a game that is too easy is just as likely to lose players as one that is too hard. Self-efficacy (the belief that one can overcome a challenge, based in part on past experiences) and self-determination theory (the idea that we engage in a task because we feel we can use our skills effectively to overcome the challenges we face) both suggest that players will not keep playing a game that is too hard. If players feel their skill is too outmatched by the challenge, they will simply quit. The implications for overall game health are apparent: a game should cater to differing skill levels to maximize its reach and the longevity of its players’ engagement.
The age-old argument over the need for games to default to an extreme level of difficulty has been rekindled by the release of Cuphead by Studio MDHR. Cuphead is a notoriously punishing game that takes no pity on players with slower reaction times or little experience with “run and gun” gameplay. Cuphead does offer an Easy setting, but to play the last portion of the game, players must beat the earlier bosses on the Normal difficulty. This has led many gamers to debate whether games should offer easier difficulty settings, or whether easy settings take away from the overall gaming experience. The common argument is usually a variation on the following: game developers have created a game with a certain difficulty, and this difficulty is part of the experience of the game. If the game is too hard, players are free to play it over and over until they obtain mastery (or, as gamers are fond of saying, “git gud”). The first part of this argument is difficult to counter; if an artist’s intention is for the player to suffer, that is their intention, and social psychology cannot address it. What the science can address is whether players will actually continue to play a punishing game until they achieve mastery. In other words, will players ever get good?
The idea that gamers will play a difficult game until they are good at it rests on the notion that players are motivated by the challenge. If people play games for the challenge, wouldn’t an easy mode take away from that motivation? Challenge certainly influences whether a player likes a game. A study by Abuhamdeh and Csikszentmihalyi (2012) of chess games played over the internet found that players preferred games against opponents rated higher than themselves (using conventional chess rating systems) to games against lower-rated opponents. Challenge mattered even more to enjoyment than winning did: a player who beat a too-easy opponent had less fun than one who lost to a much tougher opponent. We can see this study as support for the challenge argument. If games were too easy, people wouldn’t enjoy them as much, and would stop playing.
However, the argument for challenge only goes so far. The same study found that there was an optimal level of challenge, beyond which enjoyment began to fall off.
You can think of this as an inverted-U-shaped relationship: as challenge increases, so does enjoyment, but at a certain point the challenge becomes too great and enjoyment begins falling again. A game against too tough an opponent is no better than a game against too weak an opponent; in both scenarios the challenge isn’t right, and the player leaves.
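The inverted-U can be sketched as a toy model. This is purely illustrative (the study did not fit a curve like this): enjoyment peaks when challenge matches skill, and falls off symmetrically on either side.

```python
def enjoyment(challenge: float, skill: float, width: float = 1.0) -> float:
    """Toy inverted-U: enjoyment peaks when challenge matches skill.

    Illustrative only; `width` controls how forgiving the mismatch is.
    Enjoyment is clamped at zero (the player has quit).
    """
    return max(0.0, 1.0 - ((challenge - skill) / width) ** 2)

# A player of skill 5 enjoys a challenge of 5 the most; too easy (3)
# and too hard (7) are equally unenjoyable.
for c in (3, 4, 5, 6, 7):
    print(c, round(enjoyment(c, skill=5, width=2), 2))
```

The symmetry is the point: in this sketch, an opponent two notches too weak is exactly as unenjoyable as one two notches too strong.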
A player leaving a game because the challenge isn’t tough enough can be explained by a need for challenge, but what about the game being too challenging? What is it about a certain level of challenge that makes a player unwilling to keep trying? One explanation could be the theory of self-efficacy (Bandura, 1977). Self-efficacy, as Bandura described it, is the idea that our motivation to engage in an action stems in part from our own perceived ability to succeed. This perceived ability comes from four sources of information:

1. Performance accomplishments: our own past successes and failures at similar tasks.
2. Vicarious experience: watching others succeed or fail at the task.
3. Verbal persuasion: encouragement or feedback from others.
4. Physiological and emotional states: the stress, frustration, or excitement we feel while attempting the task.
Each of these four pieces of information feeds our feelings of self-efficacy and influences whether we believe we can complete a task successfully. The first, our internal evaluation of skill based on past experiences, is the one we rely on most, because our own experiences are the most credible evidence of how we will perform in related tasks. These internal evaluations are continually updated as we attempt a task. Even if we begin with a strong belief in our own self-efficacy, continued failure will lead us to re-evaluate and reconsider that belief in that domain.
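One loose way to picture the continual re-evaluation described above is as a running belief that gets nudged toward each new outcome. This is only an analogy, not part of Bandura's theory; the update rule and rate are invented for illustration.

```python
def update_efficacy(belief: float, succeeded: bool, rate: float = 0.2) -> float:
    """Nudge a 0-1 self-efficacy belief toward the latest outcome.

    Illustrative analogy only: each attempt moves the belief a fraction
    (`rate`) of the way toward 1.0 on success or 0.0 on failure.
    """
    outcome = 1.0 if succeeded else 0.0
    return belief + rate * (outcome - belief)

belief = 0.8  # the player starts out confident
for _ in range(10):  # ten failed boss attempts in a row
    belief = update_efficacy(belief, succeeded=False)
print(round(belief, 2))  # the belief has eroded to a small fraction of where it started
```

The sketch captures the asymmetry the theory implies for a punishing game: a confident player who fails over and over ends up with a very different self-evaluation than the one they started with.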
Taking Cuphead as an example, the theory states that a player’s motivation to play the game is related to their own perceptions of success. These perceptions may be informed by their previous performance with “run and gun” style games, watching Let’s Play or Twitch streams of other gamers playing the game, feedback from friends recommending the game, and finally the feeling they themselves get when playing the game. If this information is positive enough that the player believes they will succeed, they will have the motivation to try the game and persevere over the game’s many obstacles to success.
However, if while playing the game the player fails continuously because the challenge level is higher than their skill, and they are getting negative physiological feedback in the form of stress and frustration, they are likely to churn out of the game and not return to it. Self-efficacy is of course not the only thing that governs our behavior, nor is it the only theory that could explain why games that are too challenging would deter players.
A study by Ryan, Rigby, and Przybylski (2006) used a scale to measure players’ feelings of competence and autonomy while they played a series of games, including Super Mario 64 and Zelda: Ocarina of Time, along with how much they enjoyed each game and whether they would want to play it in the future. Across all games played, the researchers found that the higher a player’s feelings of competence and in-game autonomy, the higher their reported enjoyment, and the more interest they expressed in continuing. Just as self-efficacy theory predicts, if a player cannot exert their skill in a meaningful way to overcome the challenge of the game, they will not want to continue playing.
Self-determination theory (SDT), proposed by Ryan and Deci (2000), holds that we do things because we want to, because they are fun, and that this intrinsic motivation is key to engaging in play activities. Under this theory, a player plays a certain game because they have internal motivations, or a personal desire, toward playing it. These internal motivations are influenced by autonomy and competence: our feeling of control over the activities we engage in, and our need to take on a challenge and feel we are succeeding at it. In gaming terms, it is the ability to have skill in a game and to exert that skill in a meaningful way to affect the outcome of the game.
These theories and findings contradict the common argument that if a game is too challenging, a player will simply keep playing until they get better. The evidence points to the opposite conclusion: the player will not continue to play, and will instead simply churn out. A frustrating grind of too-difficult levels and bosses could prove too high a barrier for those who do not think they are up to the challenge, or for those who initially think they are until they receive evidence to the contrary. This goes beyond the idea that gamers want a challenge; they do, but they want an optimal challenge: one they can see a way to defeat. The challenge level therefore needs to balance atop that inverted-U shape; if a game is too challenging, the research shows, players may not continue playing it long enough to get better. They will not, in other words, get good.
Games like Cuphead, Dark Souls, and Ninja Gaiden are made tough on purpose. The challenge is part of each game’s atmosphere. Dark Souls is hard because its tone requires it: the difficulty conveys the oppression and malice the designers want players to feel. Cuphead wants players to feel continuously overwhelmed. Designing games to be hard on purpose is fine; according to the chess study discussed earlier, players even seem to prefer it to games that are too easy. The issue arises when a game is so hard that players cannot feel mastery after a reasonable amount of time and effort. When designing a game, the research says it is best to aim for an optimal challenge, and since everyone has their own experience with games and their own skill, the best way to achieve this is with difficulty settings. Players will then play at the difficulty that gives them an optimal challenge and a feeling of mastery and competence. Including an easy setting therefore does not weaken or cheapen the experience, as some claim; it simply makes the game accessible to those who want to play at their own optimal challenge level.
The implications of getting the difficulty level correct can be very serious for a game’s overall health. A game that provides no challenge will not keep players, nor will a game with too high a bar for continuation. Difficulty is frequently framed as a “not for everyone” element to a game: “Cuphead is difficult, but it’s not for everyone.” I would propose that this is the wrong way of thinking about difficulty. The genre of a game, or the gameplay elements, are things that may not be for everyone; the difficulty should be something that can accommodate a variety of gamers, as a one-size-fits-all tactic will only end up alienating gamers if the challenge is not optimized to their skill level.
There are of course other factors to consider, such as the social aspects of the game. However, it is reasonable to assume that a player will stay with a game that has an optimal level of challenge versus one that does not, regardless of the amount of social contact within that game.
How can developers determine the correct level of challenge? One method NBI has applied to other games is to examine early player experiences and identify distinct skill-based segments. By examining churn and retention across those segments, along with the associated telemetry, clear actions and recommendations surface that can help make the early player experience more enriching and challenge-appropriate for the entire target player base.
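A minimal sketch of that kind of analysis might look like the following. The segment boundaries, the death-count proxy for skill, and the telemetry values are all invented for illustration; a real analysis would use the game's own early-experience signals.

```python
from statistics import mean

# Hypothetical early-session telemetry:
# (deaths in the first hour, retained at day 7)
players = [
    (2, True), (3, True), (5, True), (8, True), (12, False),
    (15, False), (20, False), (4, True), (18, False), (9, True),
]

def segment(deaths: int) -> str:
    """Invented skill segments based on early death counts."""
    if deaths <= 5:
        return "high-skill"
    if deaths <= 10:
        return "mid-skill"
    return "low-skill"

# Group day-7 retention flags by skill segment.
retention: dict[str, list[bool]] = {}
for deaths, retained in players:
    retention.setdefault(segment(deaths), []).append(retained)

for seg, flags in sorted(retention.items()):
    print(f"{seg}: {mean(flags):.0%} retained")
```

In this toy dataset, retention collapses for the low-skill segment while the other segments retain well, which is exactly the pattern that would point toward an easier early-game path or difficulty setting for that segment.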
To learn more, contact us!
Abuhamdeh, S., & Csikszentmihalyi, M. (2012). The importance of challenge for the enjoyment of intrinsically motivated, goal-directed activities. Personality and Social Psychology Bulletin, 38, 317-330.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55, 68-78.
Ryan, R. M., Rigby, C. S., & Przybylski, A. (2006). The motivational pull of video games: A self-determination theory approach. Motivation and Emotion, 30, 347-363.