Available online at www.sciencedirect.com

ScienceDirect

Procedia Manufacturing 21 (2018) 429–437

www.elsevier.com/locate/procedia

15th Global Conference on Sustainable Manufacturing

Influence of Gaming Elements on Summative Assessment in Engineering Education

Mustafa Severengiz a,*, Ina Roeder b, Kristina Schindler b, Günther Seliger a

a Technische Universität Berlin, Department of Assembly Technology and Factory Management, Pascalstr. 8-9, 10587 Berlin, Germany
b Technische Universität Berlin, Department of Industrial Information Technology, Pascalstr. 8-9, 10587 Berlin, Germany

* Corresponding author. Tel.: +49 (0)30/314-25549; fax: +49 (0)30/314-22759. E-mail address: Severengiz@mf.tu-berlin.de

Abstract

Regarding the massive sustainability challenge mankind is currently facing, there is an indisputable need to implement
sustainability as the key reference point into higher engineering education in order to prepare the stakeholders of tomorrow. This
requires networked thinking on the part of the learner and increases the learning goals' complexity dramatically. The actually
achieved learning outcomes are often evaluated by assessing factual knowledge in higher education. However, it has been shown
many times that students choose the examination format for orientation when studying. Thus, the authors propose a gamified
summative assessment approach that requires networked thinking to direct students' learning efforts towards broad competency
building. In a study with 25 students of a master engineering course, the effects of a gamified examination design are
investigated.
© 2018 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of the scientific committee of the 15th Global Conference on Sustainable Manufacturing (GCSM).
doi: 10.1016/j.promfg.2018.02.141
Keywords: summative assessment, gamification, higher education, engineering education
1. Challenging Engineering Education for Sustainable Manufacturing

The Paris agreement in 2015 was yet another step towards a carbon-free environment “holding the increase in the
global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the
temperature increase to 1.5 °C above pre-industrial levels” [1]. With a 21 % share of all carbon emissions, industry
is globally one of the worst contributors to climate change [2]. In order to master the considerable technological,
economic and institutional challenges ahead [2], both a top-down [3] and a bottom-up [4] approach must be
considered, e.g. creating governmental agreements for regulations and sensitizing customers for a decarbonized
industry. However, all such approaches must fail if the engineers, as agents of this change towards sustainable
manufacturing, lack the competencies to meet those new requirements.
In complex, interrelated and competing global value creation networks, system competency is of utmost
importance, especially for engineers [5, 6], and it is the key competence for sustainable development. Quickly
developing and often unsustainable technological innovations even increase the urgency of such competency
building [7]. The ever-growing amount of teaching content often leads to a superficial learning process. The result is
a failure to understand, apply and connect the previously learnt knowledge and skills [8].
In education, the assessment type plays a major role in students' follow-up of each lecture or exercise and in exam
preparation [9]. As the students' degree is the sum of their single grades, they do not focus on deep understanding, but
on successful exams. The lecture format in combination with the common multiple choice (MC) testing format
further encourages this kind of learning behavior. The abstract theoretical perspective of lectures focuses on basic
facts and their simple interdependencies rather than on their impact in the complex interwoven global network. The
time-economic examination design of MC tests [10] likewise focuses on simple fact checking.
In higher education, the highest learning levels in the sense of analyzing, evaluating and creating [11] need to be
addressed, while Bachelor courses often aim for lower and Master courses for higher levels [10]. Moreover,
additional competencies, first of all networked thinking, need an appropriate environment to unfold. However, in
assessment designs, which are the unofficial guide for students' learning behavior, this is seldom considered.
Great potential to develop and increase networked thinking is attributed to simulation games. A simulation game
“combines the features of a game (competing, co-operation, rules, and players) with those of a simulation
(incorporation of features of the real world)” [12]. Thus, it bridges the gap between theoretical knowledge and daily
practice [13], while decisions taken, positive or negative, have no effects in reality [5]. Being built on real-world problems,
simulation games fulfill the major characteristics of problem-based learning [14]. Despite their ability to demand
networked thinking as well as deep factual knowledge, and despite the assessment's influence on learning behavior,
simulation games have so far not been appropriately adapted for assessing knowledge. Gamified summative
assessment is a research field that has not been widely considered [15].
Designing tools for gamified summative assessment that challenge system competency is decidedly not trivial. A
number of possible side effects of the assessment design on the examinee's testing behavior could hinder assessing
the actual learning outcome, and therefore need to be carefully controlled. However, the impact of a gamified
testing design on the testing situation and the testing behavior has not been scientifically illuminated so far.
Therefore, after bringing together testing, learning and teaching theories from various fields as a basis for theory
building for gamified summative assessment, a study is presented that examines those very side effects.

2. State of Knowledge

BACHMANN (2014) has introduced a “triple jump” between learning and teaching activities, assessment methods
and intended learning outcomes. He highlights the relevance of coherent teaching activities and assessment methods,
which need to be designed to address or assess the learning outcomes together in a meaningful way. Consequently,
the major challenge for educators lies in matching learning goal, teaching format and assessment type [16, 17]. In
this paper, the authors concentrate on assessment methods only, well aware of the fact that, for maximum teaching
productivity with regard to set learning goals, an assessment method cannot stand for itself but requires an
adaptation of the teaching activities.
Focusing on assessment for a start, this chapter compares common summative assessment types, briefly
discussing their benefits and shortcomings. In order to understand the necessity of additional research on summative
assessment, Bloom's Taxonomy is presented and discussed. Furthermore, the leverage of simulation games for
displaying complex systems is demonstrated and first attempts at their application in summative assessment are outlined.

2.1. Summative assessment types compared

The literature often distinguishes between summative and formative assessment [18]. The summative assessment
“applies to those terminating examinations which grade students either at the end of a course or at the end of their
period of study”, while a formative assessment “is made as the student progresses through courses and the years of
study”. Some formative assessments can also contribute to the summative assessment.
The most common summative assessment types are written exams, oral exams, homework, project work and portfolio
tasks. Written exams have a fixed duration, are accomplished in individual work, and can contain open, semi-open
and closed (MC) questions. In most cases, they are performed as closed-book assessments. Advantages of written
exams generally are that the answers are easier to compare than those of e.g. project work, and that especially in MC
tests basic knowledge can be checked with minimal effort (see Table 1). Disadvantages are that written exams, due
to their theoretical nature, usually address only the lower levels of learning [19, 20]. In addition, testing competencies
such as networked thinking and social interaction is difficult, or even impossible. Nevertheless, MILLER ET
AL. (1998) state that this does not invalidate the usage of MC for testing higher levels of learning, but merely
confirms that it is more difficult to develop suitable questions [21].

Table 1: Comparison of evaluation form and evaluation criteria [22, 23, 24, 25, 20]

Another assessment type is the oral exam. Individual and group exams can be distinguished. Commonly, there are
an evaluator and an assessor involved. A major advantage for the evaluator and the examinee is the flexibility during the
exam, which means that misunderstandings can be resolved immediately [20]. A disadvantage is the difficulty of
comparing the results, because the oral examination can be a rather dynamic process (see Table 1), which can lead to
errors of observation and errors of assessment [20].
Other assessment types are homework and project work. Homework is often performed
individually, while project work is mostly conducted in groups. In terms of transparency, comparability and
objectivity, both are rather difficult to evaluate. In addition, they require a rather high amount of effort from the
evaluator. These assessment forms have their benefits in testing knowledge, enabling transfer tasks, enforcing
networked thinking and requiring social interaction. Portfolio examinations are a combination of at least two
examinations. They can consist of different assessment forms and are therefore not generically classifiable.

2.2. Games for learning and assessing

CAILLOIS (1961) defines a game as “an activity that is voluntary and enjoyable, separate from the real world,
uncertain, unproductive (in that the activity does not produce any goods of external value), and governed by rules”
[26]. Referring to the saying “A picture says more than a thousand words”, DUKE (1974) adds that – properly
designed – a game is worth a thousand pictures [27]. A good game has benefits in terms of reducing stress [10],
supporting skill development [28] and increasing motivation [29], while bringing gamers into the state of flow [30].
Consequently, the benefits of games have been adapted for educational purposes [30], and their potential for positively
affecting learning outcomes has been stated in several publications [31, 32, 33, 34]. Beyond that broad
differentiation, however, there is little consistency in the definition of the individual terms [29].
Simulation games, as well as serious games and games for game-based learning, differ from entertainment-oriented
games in their primary purpose, which is to teach rather than to entertain [35, 36]. Simulation
games can be used as an approach to meet the complexity of networked thinking [5, 13]. Simulation games can have
a high degree of relevance for students, as they are mainly based on real-world problems [37], although for
didactical reasons the complexity of reality's representation can be reduced [38]. In addition, as described above,
future engineers require well-developed competencies in networked thinking in order to meet the challenge of
sustainability [6].
While the relevance and impact of gaming in teaching have been recognized [39], there is a gap in research
regarding assessment. There are so far only few studies in the field of gamified summative assessment, and even
those focus mainly on the lower levels of learning [40, 41]. Gamification is “defined as the use of game design
elements in a non-game context” [42]. Approaches to gamified assessments are usually designed as a quiz [40, 41].
Nevertheless, quizzes are MC tests and therefore lack the ability to target higher learning levels [21].

2.3. Criteria for gamified summative assessment

Summative assessments are supposed to match a summary of the teaching content with the predefined learning
goals. Since, with regard to sustainable development, learning goals need to include networked thinking, the teaching
content's summary has to include the inner and outer connectedness of the basic facts taught. This level of
complexity can only partly be achieved with classic written exams, least of all with MC tests. To enable networked
thinking, freedom of choice must be given, the choices' consequences must be considered in the process, and the
possibility to react to those consequences must be provided. A simulation-oriented gamified summative assessment seems most
appropriate for that. However, being an assessment, those free choices need to be transparent and comparable. A
great challenge therefore lies in defining the boundaries of the simulated environment in a way that allows for
choices as free as possible while limiting them to a manageable set of criteria. A possible approach could be to assess the
change of choices when confronted with the choices' consequences rather than the choices themselves. Generally,
the simulation game can be either computer-based or in the shape of a board game. Both approaches have their
advantages. For a realistic simulation, communication between the students could be enabled to a degree, while
cheating must be eliminated; e.g. some tasks could be group assessments requiring cooperation, while others have to
be solved alone. To reduce the overload of students during the assessment, the students should have seen the
assessment format and the rules of the simulation game beforehand.

3. Examining the effects of a gamified assessment design on assessment experience and outcome

Introducing an unusual testing format in the educational system is likely to provoke a number of unintended
effects, which need to be controlled to properly evaluate the effectiveness of the approach and to allocate the effects
to their specific causes. Therefore, a quasi-experiment has been carried out to test the effects of an unusual visual
and framing design on examinees’ testing experience and on their scoring as a first step. For this purpose, a classic
MC test was gamified to single out the effects specific to the design. Had the structure and the content of the
assessment already been changed as well, as in a complete simulation game-oriented design, the effects of design
and content could not have been distinguished methodologically. The study was contextualized as a practice exam at
the end of a two-week daily master's course on Sustainable Factory Planning at the German Vietnamese University
in Ho Chi Minh City, Vietnam.

3.1. Research design

To test the impact of design and framing, the assessment atmosphere as experienced by the examinees and the
achieved assessment results of a classic MC test were to be compared to those of a gamified MC test. Major questions
were the comparability of (1) result, (2) duration, (3) perceived difficulty, and (4) perceived testing atmosphere. To
control predispositions, questions of perceived difficulty and testing atmosphere were further linked to (5) the
examinees' preference for playing games in general. Finally, the authors wanted to know (6) which design examinees
preferred, again linked to their preference for playing games in general. Thus, the following seven hypotheses
were formulated.
H1: A gamified format of an MC test leads to a different test score of the examinee than the classic MC test
format, although the questions remain the same.
H2: Taking a gamified MC test takes the examinees longer than taking the classic MC test.
H3: Examinees experience the questions as being simpler when examined in a gamified MC test than in a
classic MC test.
H4: Examinees experience the testing atmosphere as more relaxing when taking a gamified MC test than when
taking a classically designed MC test.
H5: Examinees who enjoy playing games are more likely to experience the testing atmosphere of a gamified
MC test as relaxed than examinees who do not enjoy playing games.
H6: Examinees who already know the questions experience the testing atmosphere of the gamified MC test as
less stressful than examinees who see the questions for the first time when taking the gamified MC test.
H7: Examinees who enjoy playing games are more likely to prefer a gamified MC test to a classic MC test.
The study used an anonymous within-subject design, thus the 25 participants had a double function as
experimental and control group. The statistical population was split randomly into two groups (g): g1, n = 13;
g2, n = 12. g1 was given the classic MC test first, g2 the gamified MC test. After finishing the first test, the examinees
were given a questionnaire asking about their stress level (5-point Likert scale, reduced to 3 points for evaluation)
and their perception of the questions' difficulty (5-point Likert scale, reduced to 3 points for evaluation). After a
short break, they took the other version of the test. In the end, both groups had taken the test in both designs, but in a
different order. After the gamified MC test, they were further asked if they liked to play games in general (3-point
scale). After the second test, they were asked to choose which test design they preferred and to explain their
preference. The students also measured the time it took them to take each test.
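For illustration only, the random split and the counterbalanced order assignment can be sketched in a few lines of Python; the following is a hypothetical reconstruction of the bookkeeping, not the authors' actual procedure.

    # Hypothetical sketch of the counterbalanced within-subject design:
    # every participant takes both test designs, only the order differs.
    import random

    participants = list(range(1, 26))              # n = 25, anonymous IDs
    random.shuffle(participants)                   # random split into two groups
    g1, g2 = participants[:13], participants[13:]  # g1: n = 13, g2: n = 12

    # g1 takes the classic MC test first, g2 the gamified MC test first.
    order = {pid: ("classic", "gamified") for pid in g1}
    order.update({pid: ("gamified", "classic") for pid in g2})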
Figure 1: Game board of the gamified MC test (left), introductory card and MC question card (right)

As treatment, a simple MC test was developed, covering five questions with three answering options each. Of
those options, one was an elaborated correct answer for 2 points, one a simple correct answer for 1 point and one a
wrong answer for 0 points. The test was then redesigned as a game board. The board represented a factory building
with 15 working places that were named after German and Vietnamese companies (see Figure 1, left). A gummy
bear represented the examinee. In addition, a set of cards was designed containing an introduction to the setting and
the MC questions with their respective answering options (see Figure 1, right). The introduction card welcomed the
examinee as a new member of a fictional company’s managerial board whose judgement would have a great impact
on critical entrepreneurial decisions in the upcoming year. The gummy bear was to be sent on its journey through
the factory layout on conveyor belts, taking it to five working stations where the examinee had to answer one MC
question each. Depending on the chosen answer, the examinee had to pick a specific conveyor belt to carry the
gummy bear to the next station. The belts and the stations were connected to specific scores unknown to the
examinees. After five question cards, a last card congratulated the examinee on a successful business year and the
reward of a promotion. Thus, the MC test design had been gamified by drafting a game board and introducing task
cards instead of a form, representing the examinee through a playing figure (gummy bear), and adding a narrative
framework.
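As a purely illustrative sketch, the described game mechanics can be encoded as a mapping from answers to conveyor belts and hidden scores; the belt names and the answer-to-belt assignment below are invented, since the actual scores were withheld from the examinees.

    # Hypothetical encoding of one question's game mechanics: each MC answer
    # routes the gummy bear over a specific conveyor belt, and the belts carry
    # the hidden scores (2/1/0 points as described above).
    QUESTION_1 = {
        "A": {"belt": "red",    "points": 2},  # elaborated correct answer
        "B": {"belt": "blue",   "points": 1},  # simple correct answer
        "C": {"belt": "yellow", "points": 0},  # wrong answer
    }

    def route(question, answer):
        """Return the conveyor belt to take and the hidden score earned."""
        choice = question[answer]
        return choice["belt"], choice["points"]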

3.2. Findings

The findings are listed in the order of the hypotheses. Hypotheses H4, H5 and H6 are grouped under the same
heading since they are all connected to the topic of the perceived test atmosphere.

3.2.1. Test score comparability


To test H1, a paired t-test was conducted, comparing all participants' scoring results from the classic MC test with
their scoring results from the gamified MC test. Taking into account the entire sample, the deviation of the
arithmetic mean value is 0.6 points in favor of the classic MC test, out of a maximum score of 15 points. The deviation is
statistically not significant (t(24) = 1.16, p = 0.258), so with regard to test results the impact of a gamified test
design seems to be negligible.
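A minimal sketch of how such a paired (dependent-samples) t-test can be computed, e.g. with SciPy; the two score vectors below are hypothetical placeholders, not the study data.

    # Paired t-test as used for H1 (hypothetical scores, max. 15 points).
    from scipy import stats

    score_classic  = [12, 9, 14, 7, 11, 10, 13, 8, 12, 11, 9, 10, 13,
                      12, 8, 11, 10, 14, 9, 12, 7, 13, 10, 11, 12]
    score_gamified = [11, 10, 13, 6, 10, 11, 12, 8, 11, 10, 9, 9, 12,
                      13, 7, 10, 10, 13, 10, 11, 6, 12, 9, 11, 12]

    # The same 25 examinees took both designs, hence a dependent-samples test.
    t_stat, p_value = stats.ttest_rel(score_classic, score_gamified)
    print(f"t(24) = {t_stat:.2f}, p = {p_value:.3f}")  # df = n - 1 = 24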
Splitting the sample into g1 and g2 offers a more detailed analysis of the effects. The deviation of the mean value
for g2, who took the gamified test before the classic test, was 1.2 points, a 100 % increase compared to the 0.6 points
deviation of the whole sample. Individual deviations range from -3 to +4 points. At the same time, the deviation for
g1, who took the classic test first and then the gamified version, turns out to be 0, with individual deviations ranging
from -5 to +4 points.
Clearly, there are two conclusions to be drawn apart from the overall comparability of the results of both test
designs. First, there are great variations in the two samples, indicating highly individualistic reactions to the test
design by the examinees. Second, examinees who have already thought about the test questions in a familiar format
(g1) find it easier to handle the unknown design and to replicate their results. A likely explanation for the latter is the
overtaxing of the examinee when being confronted with test questions while simultaneously handling an unfamiliar
test design. Having already dealt with the questions before gives room to concentrate on the unknown design. This
clearly indicates that a gamified test design needs to be introduced before applying it in a real test situation.

3.2.2. Test duration


Ignoring three invalid data fields (samples for H2: g1, n = 10; g2, n = 12), the average test duration for the whole
sample was 5 min 14 s for the classic MC test and almost double that for the gamified MC test, with 9 min 55 s. The
difference is highly significant, as shown by a paired t-test (t(21) = -3.94, p < 0.01). The gamified MC
test demands considerably more time than the classic design. Considering the time needed to capture the game
dynamics, such as the framing narrative and the “rules”, this is easy to explain. Also, game mechanics such as moving
the playing figure over the game board, turning the playing cards and identifying the correct paths cost time. The
impact of understanding the game dynamics is expected to be much lower if the examinees are introduced to the
gamified design beforehand, e.g. during the course. The time consumed by the game mechanics, however, can only be
reduced by a different gamification design. However, since gamification implies adding a further component to the
test situation, it can never be eliminated completely.
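As a quick plausibility check, the reported significance level can be reproduced from the stated statistic alone (illustrative only):

    # Two-sided p-value for |t| = 3.94 with 21 degrees of freedom.
    from scipy.stats import t

    p_two_sided = 2 * t.sf(3.94, df=21)
    print(p_two_sided)  # ~0.0007, i.e. p < 0.01 as reported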

3.2.3. Perceived test difficulty


To test H3, again a paired t-test was conducted, which shows no significant effect of the test design on
the perceived difficulty of the involved tasks (t(24) = 0.7, p > 0.1). When comparing the deviation of the perceived
difficulty of the questions between the classic test and the gamified test for the whole sample, the modal indication is 0
with 80 % frequency. This shows clearly that the perception of the exam questions' level of difficulty is not
influenced by the test's overall design. Therefore, the variable can be neglected in the later testing of a prototype for
gamified summative assessment.

3.2.4. Perceived testing atmosphere


To test the impact of the test design on the perceived stress level during the examination, McNemar's chi-squared
test was conducted, showing no significance (χ2(1) = 0, p > 0.1). It has to be emphasized that the data set was
statistically complicated, since most participants perceived both testing atmospheres as relatively relaxed, so that a
statistical within-comparison with regard to the hypothesis was meaningless. To test the perceived stress level
during the testing situation, it is necessary to have an authentic situation in which the examinees are under real
examination pressure. This could not be simulated in the experimental setting. Thus, H4 and its connected
hypothesis H5 remain to be evaluated. In addition, H6 was affected by the overall feeling of relaxation of almost all
examinees in both testing situations.
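A minimal sketch of McNemar's test on such paired dichotomous ratings, e.g. with statsmodels; the 2×2 table below is hypothetical and chosen so that the discordant cells are balanced, mirroring the χ2(1) = 0 reported above.

    # McNemar's test on paired relaxed/stressed ratings (hypothetical table).
    from statsmodels.stats.contingency_tables import mcnemar

    #            gamified: relaxed, gamified: stressed
    table = [[20, 2],   # classic: relaxed
             [2,  1]]   # classic: stressed

    # Only the discordant cells (2 vs. 2) carry information; equal counts
    # yield a statistic of 0, as in the result reported above.
    result = mcnemar(table, exact=False, correction=False)
    print(f"chi2(1) = {result.statistic:.2f}, p = {result.pvalue:.3f}")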
However, when the students were asked in a later question to choose their preferred testing design and explain
their preference in their own words, the relaxing effect of the gamified design was the major criterion for choosing
the gamified version over the classic design, being named seven times independently. Since games are commonly
associated with relaxing and entertaining contexts, ordinarily among friends or family, this could be a very positive
effect of gamifying testing.

3.2.5. Preference of gamified testing and preference of games in general


When asked about their preference, 13 out of 25 students said they preferred the gamified MC test over the
classic one, although it took them longer to take it and most students scored slightly worse in it. 12 students
preferred the classic test design. Moreover, there is a significant impact of examinees' tendency to enjoy playing
games in general on their preference for a gamified testing approach, as McNemar's chi-squared test showed (χ2(1) =
5.82, p = 0.016).
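The reported statistic can be cross-checked against the χ2 distribution with one degree of freedom (illustrative only):

    # p-value of chi2 = 5.82 with 1 degree of freedom.
    from scipy.stats import chi2

    print(chi2.sf(5.82, df=1))  # ~0.016, significant at the 5 % level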
When explaining the reasons for the preference, it becomes clear that classic MC tests are often preferred because
they are associated with basic factual knowledge that can be easily memorized off the lecture slides. 5 out of the 12
students who preferred the classic test explained their choice in terms similar to this one: “I like the [classic] exam because I
have learned. [Classic] exams are suitable for people who learn the teaching material.” This shows the
interconnectedness of expected testing method and learning method as mentioned in Section 1. It also shows the
limited definition of learning and knowledge that is preserved by an educational system that still focuses widely on
lectures and classic MC tests, while it needs to impart competencies. The major argument for preferring the
gamified MC test is the low stress level as described in Section 3.2.4. Four students also named the possibility of a
more practical orientation of a gamified approach that tests a wider understanding of the subject than just basic facts.
Four students also stated that they would prefer that kind of gamification tool as a teaching and learning tool in
class, since the given context would help them understand the broader implications. Two students said that they
found the gamified version confusing, but that they would choose it if they had a chance to practice using the tool or
if it was designed in a simpler way.

4. Summary and outlook

The key to achieving a sustainable industry lies in engineering education for sustainable manufacturing. This
leads to the challenge of integrating an understanding of the importance of sustainability and introducing
networked thinking into today's curricula. For developing networked thinking, simulation games can play a major
role. However, students are test-oriented learners, and up to now networked thinking has seldom been required in common
assessment types.

Out of the many unknown effects of such a new assessment format, the authors chose the influence of design and
framing for closer examination. The study has shown that gamifying a classic MC test has no significant impact on
the test results and does not influence the perceived level of difficulty of the test questions. Those impacts therefore
do not need to be excessively controlled when testing the impact of a prototype for gamified summative assessment.
While the data set was not useful for statistically evaluating the impact of gamification on the testing atmosphere, in
open comments several students stated that the gamified test design took pressure off them or helped them relax.
Thus, an effect has to be expected and should be tested again in a real examination situation. With regard to
preference, the students' opinion was divided. Almost 50 % preferred the classic MC test, the major argument for it
being the assumed possibility of easy test-oriented learning, and therefore better grades. From the students' point of
view, this motivational drive might be understandable, but it also shows the necessity of developing testing designs
targeting networked thinking in order to educate systemic thinkers.
When planning gamified testing, time needs to be taken into account. Gamified testing requires significantly
more time than classic testing due to the added gamifying elements. However, test duration could be reduced by
introducing the format beforehand in class, which is also preferable from a didactic point of view in order to avoid
overtaxing the examinee, as shown by the analysis of H1. These findings, in combination with several students' comments on their
wish for a gamified learning tool similar to the gamified test and for better familiarity with the gamified testing tool,
strengthen the point of the triple jump of matching teaching, learning and assessment methods. This should be
considered when developing gamified summative assessment.
Next, a methodology for gamified summative assessment will be developed and the effects of changed structure
and content that transforms the MC test into a simulation game-oriented assessment tool will be examined. Lastly, a
recommendation will be formulated on how to adapt teaching and learning in ways that are coherent with gamified
summative assessment.

References

[1] Paris Agreement – United Nations Framework Convention on Climate Change, United Nations, 2015.
[2] IPCC, Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the
Intergovernmental Panel on Climate Change [Edenhofer, O., R. Pichs-Madruga, Y. Sokona, E. Farahani, S. Kadner, K. Seyboth, A. Adler, I.
Baum, S. Brunner, P. Eickemeier, B. Kriemann, J. Savolainen, S. Schlömer, C. von Stechow, T. Zwickel and J.C. Minx (Eds.)], Cambridge
University Press, Cambridge, United Kingdom and New York, NY, USA, 2014.
[3] Kyoto Protocol to the United Nations Framework Convention on Climate Change, 1998.
[4] Rayner S., How to eat an elephant: a bottom-up approach to climate policy, Climate Policy 10, doi: 10.3763/cpol.2010.0138, ISSN: 1469-3062,
2010, pp. 615-621.
[5] Dörner, D., Die Logik des Misslingens – Strategisches Denken in komplexen Situationen, Rowohlt Verlag, Reinbek bei Hamburg, 2010.
[6] M. Meyer, A Vision of Sustainable Education with Competence Management, in: Seliger, G., Nasr, N.; Bras, B.; Alting, L. (Eds.),
Proceedings of the Global Conference on Sustainable Product Development and Life Cycle Engineering, Berlin, 2004, pp. 319-325.
[7] H. Bachmann, Hochschullehre neu definiert – shift from teaching to learning in: Heinz Bachmann (Ed.), Kompetenzorientierte
Hochschullehre – Die Notwendigkeit von Kohärenz zwischen Lernzielen, Prüfungsform und Lehr-Lern-Methoden, hep verlag, Bern, 2014,
pp. 14-33.
[8] H. Bachmann, Aktivierende Hochschullehre – kompetenzorientierte Hochschullehre variantenreich gestalten, in: Heinz Bachmann (Ed.),
Hochschullehre variantenreich gestalten – kompetenzorientierte Hochschullehre – Ansätze, Methode und Beispiele, 2013, pp.11-18.
[9] U. Gonschorrek, Prüferhandbuch – Grundsätze, Regeln und Hintergrundinformationen. Prüfungspsychologie, Prüfungsdidaktik,
Prüfungsmethodik, LTU, Bremen, 1988.
[10] H. Bachmann, Formulieren von Lernergebnissen – learning outcomes, in: H. Bachmann (Ed.), Kompetenzorientierte Hochschule – Die
Notwendigkeit von Kohärenz zwischen Lernzielen, Prüfungsformen und Lehr-Lern-Methoden, 2014, 35-49.
[11] L. O. Wilson, Anderson and Krathwohl – Bloom’s Taxonomy Revised, Online: http://thesecondprinciple.com/teaching-essentials/beyond-
bloom-cognitive-taxonomy-revised, 02.22.2017.
[12] K. Jones, Simulations – A Handbook for Teachers and Trainers, Kogan Page Limited, London, 1995.
[13] Blötz, U, Das Planspiel als didaktisches Instrument, in: Blötz, U. (Ed.), Planspiel in der beruflichen Bildung, Bundesinstitut für
Berufsbildung, Bonn, 2008, pp. 13-27.
[14] Klabbers, J., On the improvement of competence, in: Klabbers, J. (Ed.), Proceedings of the SAGA 19th Conference (1989), New York, Sage,
1989.
[15] S.A. Kocadere, S. Çağlar, The design and implementation of a gamified assessment, Journal of e-Learning and Knowledge Society, Vol.11
No.3, 2015, pp. 85-99.
[16] Kennedy, D., Hyland, A. & Ryan, N., Writing and Using Learning Outcomes: A Practical Guide, Online: http://www.bologna.msmt.cz/files/leaming-outcomes.pdf, 12.4.2010, 2006.

[17] Biggs, J. B., Teaching for Quality Learning at University. Buchingham: The Open University Press, 2003.
[18] J. Heywood, Assessment in Higher Education. Student Learning, Teaching, Programmes and Institutions, Athenaeum Press, Gateshead,
Tyne and Wear, 2000.
[19] Scouller, K.M. and Prosser, M., Students' experiences in studying for multiple-choice question examinations, Studies in Higher Education, Vol.
19, 1994, pp. 267-279.
[20] Zimmermann, T., Durchführung von lernzielorientierten Leistungsnachweisen, in: Heinz Bachmann (Ed.), Kompetenzorientierte
Hochschullehre – Die Notwendigkeit von Kohärenz zwischen Lernzielen, Prüfungsform und Lehr-Lern-Methoden, hep verlag, Bern, 2014,
pp. 50-85.
[21] A. H. Miller, B. W. Imrie, K. Cox, Student Assessment in Higher Education – A Handbook for Assessing Performance, Kogan Page, London, 1998.
[22] E. Kröber, Entscheidungshilfen zur Wahl der Prüfungsform. Eine Handreichung zur Prüfungsgestaltung, Zentrum für Lehre und
Weiterbildung, Stuttgart, 2014, pp. 10-17.
[23] Ehlers J. P., Guetl, C., Höntzsch S., Usener C. A., Gruttmann S., Prüfen mit Computer und Internet. Didaktik, Methodik und Organisation
von E-Assessment, in: Ebner M.; Schön S. (Eds.), L3T. Lehrbuch für Lernen und Lehren mit Technologien, 2. Edition, Frankfurt am Main,
peDOCS, 2013.
[24] Hilkenmeier F., Schaper N., Bender E., Umsetzungshilfen für kompetenzorientiertes Prüfen, HRK-Zusatzgutachten,
Hochschulrektorenkonferenz, 2013.
[25] Brauns J.; Göymen-Steck T.; Horn P., Lernziele, Veranstaltungs- und Prüfungsformen in erziehungswissenschaftlichen
Bachelorstudiengängen. Eine vergleichende Analyse von Studienprogrammen an acht Universitäten, Göttinger Beiträge zur
erziehungswissenschaftlichen Forschung No. 36, 2015, pp.19-20.
[26] R. Caillois, Man, play, and games, Schocken Books, New York, 1961.
[27] R. D. Duke, Gaming – The Future’s Language, SAGE Publications, New York, 1974.
[28] Dondlinger, M. J., Educational video game design: a review of the literature, Journal of applied educational technology, Vol. 4, No. 1, 2007,
pp. 21-31.
[29] Priscilla Haring, Dimitrina Chakinska, Ute Ritterfeld, Understanding Serious Gaming: A Psychological Perspective, in: Patrick Felicia (Ed.),
Handbook of Research on Improving Learning and Motivation through Educational Games: Multidisciplinary Approaches, Information
Science Reference, Vol. 1, 2011, pp. 413-430.
[30] Crisp, G., Assessment in next generation learning spaces. In The future of learning and teaching in next generation learning spaces Vol. 12,
2014.
[31] Barzilai, S., & Blau, I., Scaffolding game-based learning: Impact on learning achievements, perceived learning, and game experiences.
Computers & Education, Vol. 70, 2014, pp. 65–79.
[32] Liu, C.-C., Cheng, Y.-B., & Huang, The effect of simulation games on the learning of computational problem solving. Computers and
Education, Vol. 57, No. 3, 2011, pp. 1907-1918.
[33] Chang, K.-E., Wu, L.-J., Weng, S.-E., & Sung, Y.-T., Embedding game-based problem-solving phase into problem-posing system for
mathematics learning. Computers and Education, Vol. 58, No. 2, 2012, pp. 775-786.
[34] Sabourin, J. L., & Lester, J. C., Affect and engagement in game-based learning environments. Affective Computing, Vol.5, No. 1, 2014, pp.
45-56.
[35] Davidson, D., Beyond fun: Serious games and media, Pittsburgh, PA, ETC Press, 2008.
[36] Hamari, J., & Koivisto, J., Why do people use gamification services? International Journal of Information Management, Vol. 35, No. 4,
2015, pp. 419-431.
[37] M. Prenzel, B. Drechsel, Ein Jahr kaufmännische Erstausbildung: Veränderungen in Lernmotivation und Interesse, Unterrichtswissenschaft,
Vol. 24, No. 3, 1996, pp. 217-234.
[38] Kriz, W., Erwerb von Systemkompetenz mit Planspielmethoden, in: Bachmann, H. (Ed.), Hochschullehre variantenreich gestalten –
Kompetenzorientierte Hochschullehre – Ansätze, Methoden und Beispiele, Forum Hochschuldidaktik und Erwachsenenbildung Vol. 4, hep
verlag, Bern, 2013, pp. 108 – 134.
[39] Gee, J. P., What video games have to teach us about learning and literacy: Revised and updated edition, New York, NY, Palgrave
Macmillan, 2007.
[40] Cheong, C., Cheong, F., & Filippou, J., Quick Quiz: A Gamified Approach for Enhancing Learning. Paper presented at the PACIS, 2013.
[41] A. I. Wang, The wear out effect of a game-based student response system, Computers & Education, Vol. 82, 2015, pp. 217-227.
[42] S. Deterding, K. O'Hara, M. Sicart, D. Dixon, L. Nacke, Gamification: Using Game Design Elements in Non-Gaming Contexts, ACM CHI
Conference on Human Factors in Computing Systems, May 7th to 11th, 2011.
