Peer reviewed article
A research article about the use of computational thinking and its links to the general capabilities and to assessment.
Recent media coverage in Australia has highlighted the need to prepare students for a future in which digital communications and technology will be an increasingly important source of distinction, both in allowing access to employment opportunities and in enabling participation as informed, capable and engaged citizens (Dodd, 2014; Foo, 2014; McDougall, 2015; Shiffman, 2015). Following the revised Australian Curriculum, the NSW Education Standards Authority (NESA), formerly the Board of Studies, Teaching and Educational Standards NSW, has published resources to support teaching across this broad and varied discourse. These resources make explicit connections to coding, as well as to other elements, such as critical and creative thinking, that can be incorporated across the curriculum.
What is computational thinking?
Jeannette Wing, a leading proponent of computational thinking who popularised the term, considers computational thinking to be characterised by:
* recursive thinking
* pattern recognition
both of which are underpinned by its defining feature: abstraction, or generalisation beyond specific instances (Wing, 2006, 2008, 2011).
Following a literature synthesis on computational thinking, Selby and Woollard (2014) proposed that computational thinking is a thought process that reflects:
* The ability to think in abstractions
* The ability to think in terms of decomposition, or breaking problems down by functionality
* The ability to think algorithmically
* The ability to think in terms of evaluations, or the ability to analyse the trade-offs of using different solutions
* The ability to think in terms of generalisations, which follows from decomposition in being able to reuse functional components to solve different problems.
While these abilities may be present individually or as separate components of other forms of thinking, it is their combination that constitutes computational thinking.
Seven computational thinking concepts are identified in the context of the design of interactive media, but they are applicable to other design and problem-solving contexts (Brennan & Resnick, 2012).
- Sequences. Sequences can be understood as a set of programming instructions specifying the intended behaviour or action. The concepts described below rely on an understanding of sequences.
- Loops. Loops are a mechanism for running the same sequence multiple times.
- Parallelism. While a single sequence is typically expressed as a single serial set of instructions, parallelism is used to describe sequences that are being run at the same time.
- Events. An event can be conceptualised as a trigger: something that must occur to cause something else to happen.
- Conditionals. The ability to describe a set of conditions that must be met for a certain outcome to occur can be conceived as conditionals. Conditionals are commonly introduced to students in the context of ‘if/then’ statements.
- Data. A broad set of related understandings fall under the category of data. Students should understand the structure of data, as variables and lists. They also require an understanding of what can be done with data, such as storing, retrieving, manipulating and updating it.
- Operators. This concept is best understood as an enabler of data manipulation which may be numerical, logical or string (such as text) in nature.
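To make these concepts concrete, the following minimal sketch (ours, not Brennan and Resnick's, and in Python rather than Scratch blocks) shows several of them at work in a few lines of code.

```python
# Illustrative sketch of several computational thinking concepts.
# The scenario (tallying scores) is a hypothetical example.

scores = [3, 7, 2]          # data: a list variable holding values

total = 0
for s in scores:            # loop: the same sequence runs for each item
    total = total + s       # operators: numerical manipulation of data

if total > 10:              # conditional: an 'if/then' statement
    message = "high score"
else:
    message = "keep trying"

print(message)              # sequences: instructions executed in order
```

Even this small program draws on sequences, loops, conditionals, data and operators together, which is one reason short programming tasks can surface several concepts at once.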
Brennan and Resnick (2012) suggest that while computational thinking concepts focus on the what, computational thinking practices focus on the process of thinking and learning, or the how. These practices were inferred through observations of young programmers using Scratch (Resnick et al., 2009):
* being incremental and iterative. This practice involves iterative cycles of imagining and building, where students develop some of the program, try it out and then develop further based on their experiences, feedback from the system and new ideas.
* testing and debugging. Students need strategies for anticipating and dealing with problems. These may include trial and error, transfer from other activities or support from peers or experts.
* reusing and remixing. The practice of building on the work of others is an accepted practice for programmers; this has the benefit of being able to create artefacts that are more complex than what may be created alone.
* abstracting and modularising. The ability to build something by putting together collections of smaller pieces (of code).
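Two of these practices can be sketched briefly in code. The example below (ours, not drawn from the Scratch observations) shows abstracting and modularising, building a program from smaller reusable pieces, alongside testing, checking each piece against known answers so that a bug is caught in the piece that contains it.

```python
# Hypothetical classroom-style example of two practices:
# abstracting and modularising, plus testing and debugging.

def double(n):
    """The smallest reusable piece: double one number."""
    return 2 * n

def double_all(numbers):
    # Reuses the smaller piece instead of repeating its logic.
    return [double(n) for n in numbers]

# Testing: check each module against cases where the answer is known,
# so a mistake is localised to the piece that contains it.
assert double(3) == 6
assert double_all([1, 2, 3]) == [2, 4, 6]
```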
Brennan and Resnick identified computational thinking perspectives that reflect the shifts that they observed in students’ understandings of themselves, their relationships to others and the technological world through their work in Scratch (Brennan & Resnick, 2012). These perspectives can be seen as components of computational thinking.
* Expressing. Thinking beyond computation as a consumption activity and instead seeing it as a medium for creation and exploration.
* Connecting. Being able to form intentional partnerships and collaborations with others who are designing or benefiting from one's work; creating with and for others. The diversity of audiences and purposes through which connection can take place spans the ability to entertain, engage, equip and educate.
* Questioning. Brennan and Resnick describe the ability to critically analyse assumptions that are taken for granted through design. They highlight the transformative possibilities of this perspective in that students come to realise that they have the potential to modify the status quo through design.
Underlying computational thinking are four general capabilities: critical thinking, systematic thinking, holistic thinking and creative thinking.
Critical thinking can be defined as thinking that
1. facilitates judgement
2. relies on criteria [a ‘standard’ relative to which judgements are made]
3. is self-correcting
4. is sensitive to context. (Lipman, 2003, pp.211-212)
The ability to think in terms of evaluations (against criteria) is a form of critical thinking.
Systematic thinking is characterised as involving a system or method of thinking that is complete – not leaving out any important consideration. It involves addressing a problem or a puzzle in such a way that all relevant issues are considered and no crucial issues are missed by haste and inadvertence. Algorithmic thinking involves the ability to understand, execute, evaluate and create step-by-step instructions for solving a task (Brown, 2013). Thus algorithmic thinking appears to be a form of systematic thinking, since it involves taking account of all relevant considerations.
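As a minimal illustration (ours, not Brown's), algorithmic thinking can be expressed as explicit step-by-step instructions for a task, here, finding the largest number in a list, in a way that is systematic because every element is considered and none is missed.

```python
# Step-by-step algorithm for finding the largest number in a
# non-empty list. The task and naming are illustrative.

def largest(numbers):
    # Step 1: take the first element as the current best candidate.
    best = numbers[0]
    # Step 2: compare every remaining element against the current best,
    # so no relevant element is left out of consideration.
    for n in numbers[1:]:
        if n > best:
            best = n
    # Step 3: once all elements are considered, best is the answer.
    return best
```

The systematic character lies in step 2: the loop guarantees that every element is examined, which is precisely the "not leaving out any important consideration" quality described above.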
Holistic thinking is the ability to understand the relationships between whole and part. At lower levels it involves an understanding about the contribution that a part makes to the whole. Decomposition is a (lower order) form of holistic thinking. At higher levels it involves being able to think about how the whole influences and shapes its components; for example: how a genome influences its component genes or how an ecosystem influences the characteristics of the component organisms – through providing ecological niches (Stevens, 2012). Abstraction can be seen as a form of holistic thinking, involving generalisation beyond specific instances.
Creative thinking can be understood in two ways. In one sense to create is to make something – an artefact. In another sense, to be creative is to extend ideas in new ways. In this sense, computational thinking requires higher levels of critical, systematic and holistic thinking that involve extending ideas.
These capabilities are essential to enable learners to address the economic, technological, social, ecological and health challenges to be faced in the future (Stevens, 2012). Critical and creative thinking are embedded in the learning across the curriculum components of NSW syllabuses.
Focal categories of computational thinking tasks
How can the quality of a student’s performance of computational thinking be assessed?
Computational thinking literature is notable for its embedded approach to assessment, in which assessment is embedded in practical tasks. This can be attributed to the practical affordances of the media with which computational thinking is naturally associated, and it underscores the emphasis on computational thinking as a practice. A discussion of assessment is therefore incomplete without a discussion of instructional tasks. This reflects an integrated approach to assessment and the affordances of the contexts in which computational thinking can be developed.
We propose that practical tasks requiring students to create a product that solves a problem enable teachers to observe the different dimensions of computational thinking (concepts, practices and perspectives). Teachers might design assessment/learning tasks so that success in the task requires particular concepts, practices, perspectives and capabilities. If students are assessed as succeeding in these tasks, this provides evidence that they have mastered the prerequisite concepts, practices, dispositions and general capabilities. If students are not successful in the task, then nothing can be inferred about which prerequisites they have acquired: they may have acquired some and not others. To determine this, teachers may test the concepts, practices, perspectives and general capabilities directly and individually. An alternative approach is to design a new learning-as-assessment task, perhaps consisting of a series of sub-tasks sequenced from the simple to the more complex, as informed by the levels of the SOLO or Bloom's taxonomy (Anderson & Krathwohl, 2001; Biggs & Collis, 1982). These taxonomies can be used to assess where a student is and where the student might go next.
It is preferable for students to learn computational thinking holistically, by performing tasks that involve computational thinking and its associated concepts, practices, perspectives and capabilities. Such learning is likely to hold more significance for students (helping to make the learning meaningful and important to them), and it is easier for students to understand how, for example, the concepts relate to one another.
The Structure of the Observed Learning Outcome (SOLO) taxonomy (Biggs, 1995; Biggs & Collis, 1982, 1991) provides a systematic way of describing how a learner's performance grows in complexity when mastering varied tasks. The SOLO taxonomy postulates five levels of increasing complexity in the growth or development of concepts or skills:
* Prestructural: the task is engaged, but the learner is distracted or misled by an irrelevant aspect belonging to a previous stage or mode.
* Unistructural: the learner focuses on the relevant domain and picks up one aspect to work with.
* Multistructural: the learner picks up more and more relevant and correct features, but does not integrate them.
* Relational: the learner now integrates the parts with each other, so that the whole has a coherent structure and meaning.
* Extended abstract: the learner now generalises the structure to take in new and more abstract features, representing a new and higher mode of operation. (Biggs & Collis, 1991, p. 65)
Implicit in the SOLO model is a set of criteria for evaluating the quality of a response to (or outcome of) a task. The quality (or richness or complexity) of a response to a complex task varies with the relevance of the considerations brought to bear on the task, the range or plurality of those considerations, and the extent to which these considerations are integrated into a whole, and extended into broader contexts to create something new.
An alternative taxonomy or framework to SOLO was developed by Benjamin Bloom and colleagues in 1956 (Bloom & Krathwohl, 1956). Bloom's original taxonomy was organised around six broad levels: Knowledge; Comprehension; Application; Analysis; Synthesis and Evaluation. Bloom's revised taxonomy is also organised around six levels: Remember; Understand; Apply; Analyse; Evaluate and Create (Anderson & Krathwohl, 2001).
The SOLO framework can be used to assess the quality of a performance in a task involving computational thinking. It can be used to assess the quality of an individual performance, the performance of a group working collaboratively on a task, and the contribution of an individual to a group performance. SOLO can be used to design learning and assessment tasks and sequencing of learning tasks from simpler to more complex. Most crucially, SOLO can be used to document a learning journey – identifying where a learner has been, where they are now, and where they might go next, as the examples below illustrate.
Examples of learning/assessment task design
We now consider a number of task designs to cultivate and assess computational thinking.
Bers (2010) notes that robotics provides opportunities for young children to learn about mechanics, sensors, motors, programming and the digital domain. Her TangibleK program invites young children to build their own robotic projects, such as cars that follow a light or puppets that can play music (Bers, 2010, pp. 1-2). TangibleK involves children making robotic artefacts and programming their behaviours.
Children are required to keep design journals while creating robots. This helps make visible to the children, their teachers and parents their own thinking and their learning over time (Bers, 2010, p. 6). TangibleK consists of seven sessions.
1. What Is a Robot? After an introduction to robotics by looking at different robots and talking about the functions they serve, children build their own robotic vehicles and explore the parts and instructions they can use to program them.
2. Sturdy building: Children build a non-robotic vehicle to take small toy people from home to school. The vehicle needs to be sturdy and able to perform its intended functions. Children use their design journals to learn the engineering design process.
3. The Hokey-Pokey: Choose the appropriate commands and put them in order to program a robot to dance the Hokey-Pokey.
4. Again and Again until I Say When: Students use a pair of loop blocks (‘repeat’/’end repeat’) to make the robot go forward again and again, infinitely, and then just the right number of times to arrive at a fixed location.
5. Through the Tunnel: Children use light sensors and commands to program a robot to turn its lights on when its surroundings are dark and vice versa.
6. The Robot Decides: Students program their robots to travel to one of two destinations based on light or touch sensor information.
7. The Final Project: Students design final projects such as a robotic city, a zoo with moving animals, a dinosaur park, a circus, or a garden with robotic flowers responsive to different sensors. These projects all incorporate inexpensive recyclable materials and are shared at an open house for the wider community.
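As a rough software analogue of Session 5 (our sketch in Python, not the physical TangibleK block language; the sensor threshold is an assumed value), the 'Through the Tunnel' behaviour amounts to a conditional tied to sensor data: lights on when the surroundings are dark, off otherwise.

```python
# Hypothetical analogue of Session 5, 'Through the Tunnel'.
# The threshold and readings are made-up illustrative values.

DARK_THRESHOLD = 30  # assumed light-sensor reading below which it is 'dark'

def lights_on(light_level):
    """Conditional on sensor input: should the robot's lights be on?"""
    if light_level < DARK_THRESHOLD:
        return True
    else:
        return False

# Simulate driving through a tunnel: readings fall, then recover.
readings = [80, 60, 20, 10, 55]
states = [lights_on(r) for r in readings]
```

The same if/then structure, with the branch driven by touch rather than light, underlies the 'The Robot Decides' behaviour of Session 6.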
These sessions can be sequenced in increasing complexity in terms of the SOLO taxonomy (though Bers did not do so). They are, metaphorically, low floor – high ceiling: starting with simpler tasks and moving to more complex ones (and Session 7 has wide walls – allowing a wide variety of activities) (Papert, 1980). Session 2 requires performance at a unistructural level – making a non-robotic vehicle. Sessions 3 and 4 require performance at least at a multistructural level – programming a robot to dance and to move forward repeatedly. Sessions 5 and 6 require performance at a relational level – programming a robot to respond to its environment. Session 7 provides the opportunity for performance at an extended abstract level – extending what students have learned in Sessions 1-6.
Each session focuses on a key computational thinking concept: Session 3, sequences; Session 4, loops; Session 5, conditionals; Session 6, events.
Sessions 2-7 involve students being incremental and iterative; Sessions 3-7 involve testing and potentially debugging, as well as abstracting and modularising. Session 7 would involve reusing and remixing.
Sessions 3-7 in particular would require systematic thinking; Sessions 5-7, critical thinking (understanding conditionals).
Each session involves some level of holistic thinking and creativity (in the sense of making something). Each session involves the three perspectives of
* expressing – creating content
* connecting – communicating, community building and caring are at the heart of TangibleK and
* questioning – students make choices and ask 'what if?'
These tasks can serve as assessment tasks as well as learning tasks, since success in them is sufficient evidence that a student has acquired the concepts, practices, perspectives and capabilities to the level necessary for that success.
Multi-agent based modelling
Sengupta, Kinnebrew, Basu, Biswas, and Clark (2013) describe a unit of work that can be used to cultivate and assess computational thinking. The task involves modelling an ecosystem using NetLogo.
The task involves designing a simulation of a closed fish tank system consisting of fish, duckweed, Nitrosomonas bacteria and Nitrobacter bacteria. It involves designing a model of the entities and processes involved in the phenomenon using an agent-based, visual programming platform. A possible sequence of tasks could be:
1. Students begin with programming the behaviour of single agents in the ecosystem [Unistructural]
2. Students gradually develop more complex programs for modelling the behaviour of multiple species within the ecosystem [Multistructural]
3. Students gradually develop more complex programs for modelling the interactions between multiple species within the ecosystem [Relational] (Sengupta et al., 2013, p. 363)
4. Students compare their model to an expert model of the phenomenon and adjust their model accordingly using data [Extended Abstract]
5. Students program new agents – for example, water snails – and re-test models [Extended Abstract]
6. Students may apply the developed model and the learned science concepts in a new context [Extended Abstract].
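The kind of single-agent rule students might write first (step 1, the unistructural level) can be sketched as follows. This is our Python sketch, not NetLogo code; the class name, energy values and duckweed rule are all illustrative assumptions rather than details of the Sengupta et al. unit.

```python
# Minimal single-agent sketch of step 1: a fish that spends energy
# each tick and eats duckweed when it finds some. All numbers and
# names are hypothetical, for illustration only.

class Fish:
    def __init__(self):
        self.energy = 10

    def step(self, duckweed_here):
        self.energy -= 1        # moving costs energy every tick
        if duckweed_here:       # conditional: a needs-based interaction
            self.energy += 3    # eating replenishes energy
        return self.energy > 0  # the fish survives while energy remains

fish = Fish()
alive = True
for tick in range(5):           # loop: the repeated simulation cycle
    alive = fish.step(duckweed_here=(tick % 2 == 0))
```

Steps 2 and 3 would then multiply such agents and add interactions between species, which is where the multistructural and relational levels of performance become visible.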
In terms of learning programming, according to Sengupta et al. (2013) these modelling activities introduce students to fundamental programming constructs:
* conditionals (needs-based interactions between agents)
* loops (for repetitions of an action)
* code re-use and encapsulation (Sengupta et al., 2013, p. 369) [reusing and remixing, and abstracting and modularising, in Brennan and Resnick's terminology].
In addition to these, the modelling activities would introduce students to concepts discussed by Brennan and Resnick, namely: events; parallelism; operators; data.
The activity would clearly provide opportunities for students to enact the four practices identified by Brennan and Resnick, namely,
* being incremental and iterative
* testing and debugging, (particularly in step 4)
* reusing and remixing, (particularly in steps 4-6) and
* abstracting and modularising (particularly in steps 4-6).
The task requires each of the general capabilities constitutive of computational thinking:
* systematic thinking
* holistic thinking – particularly in steps 3,4,5 and 6
* critical thinking
* creative thinking (in both senses we identified).
Seeding success in these tasks demands pedagogy that satisfies the three dimensions of the NSW Quality Teaching Model:
* intellectual quality – pedagogy focused on producing deep understanding of important, substantive concepts, skills and ideas
* quality learning environment – pedagogy that creates classrooms where students and teachers work productively in an environment focused on learning
* significance – pedagogy that helps make learning meaningful and important to students. (NSW Department of Education and Training, 2003).
The guided design process in these tasks is analogous to a guided inquiry process (Kuhlthau, 2010). Both are planned, targeted, supervised interventions grounded in a constructivist approach. Both can be designed and assessed using the SOLO framework.
In this paper we have examined the question of what computational thinking is and how it might be assessed. We suggested that the SOLO taxonomy could be used as a framework for evaluating the quality of performances in tasks involving computational thinking, and at the same time, the requisite computational concepts, practices, perspectives and capabilities. The SOLO framework can be used to indicate how students might build on their performance (for example, how they might relate and extend their ideas). Teachers can use the framework to ensure that assessment and learning tasks have a low floor but a high ceiling.
The authors wish to acknowledge the contribution of Dr Matt Bower, School of Education, Macquarie University to this article. Discussions between Dr Bower and the authors strongly informed this article.
Anderson, L. W. & Krathwohl, D. R. 2001, A taxonomy for learning, teaching, and assessing: a revision of Bloom's taxonomy of educational objectives, Allyn & Bacon, Boston.
Bers, M. U. 2010, ‘The TangibleK Robotics Program: Applied computational thinking for young children’, Early Childhood Research and Practice, vol. 12, no. 2, Article 2.
Biggs, J. B. 1995, ‘Assessing for learning: some dimensions underlying new approaches to educational assessment’, The Alberta Journal of Educational Research, vol. 41, no. 1, pp.1-17.
Biggs, J. B., & Collis, K. F. 1982, Evaluating the quality of learning: the SOLO taxonomy, Academic Press, New York.
Biggs, J. B., & Collis, K. F. 1991, ‘Multimodal learning and the quality of intelligent behaviour’, In H. Rowe (ed.), Intelligence: reconceptualization and measurement, Lawrence Erlbaum, Hillsdale, NJ.
Bloom, B. S., & Krathwohl, D. R. 1956, Taxonomy of educational objectives: the classification of educational goals, Longmans, New York.
Brennan, K., & Resnick, M. 2012, New frameworks for studying and assessing the development of computational thinking. Paper presented at the Annual Meeting of the American Educational Research Association, Vancouver, BC.
Brown, W. 2013, Introduction to algorithmic thinking, accessed 31 January 2017.
Dodd, T. 2014, ‘Teach our young coding and we all reap the benefits’, Australian Financial Review, 30 June, p. 34.
Foo, F. 2014, ‘Christopher Pyne urged to act on coding in classrooms’ , The Australian, 9 December, accessed 31 January 2017.
Kuhlthau, C. C. 2010, ‘Guided inquiry: school libraries in the 21st century’, School Libraries Worldwide, vol. 16, no. 1, pp.17-28.
Lipman, M, 2003. Thinking in education (2nd edn), Cambridge University Press, New York.
McDougall, B. 2015, ‘Cracking code of our digital future: kids embrace 21st century skills’, Daily Telegraph, 19 October, p. 1.
NSW Department of Education and Training, 2003, Quality teaching in NSW public schools: discussion paper, Sydney NSW.
Papert, S. 1980, Mindstorms: children, computers, and powerful ideas, Basic Books, New York.
Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum, E., Silver, J., Silverman, B. & Kafai, Y. 2009, ‘Scratch: programming for all’, Communications of the ACM, vol. 52, no. 11, pp.60-67.
Selby, C. C., & Woollard, J. 2014, Computational thinking: the developing definition. Paper presented at the SIGCSE, Atlanta, GA.
Sengupta, P., Kinnebrew, J. S., Basu, S., Biswas, G., & Clark, D. 2013, ‘Integrating computational thinking with K-12 science education using agent-based computation: a theoretical framework’, Education and Information Technologies, vol. 18, no. 2, pp.351-380.
Shiffman, A. 2015, ‘Time to teach kids a new language: code’, Australian Financial Review, 10 March, p. 25.
Stevens, R. 2012, ‘Identifying 21st century capabilities’, International Journal of Learning and Change, vol. 6, no. 3, pp.123-137.
Wing, J. M. 2006, ‘Computational thinking’, Communications of the ACM, vol. 49, no. 3, pp.33-35.
Wing, J. M. 2008, ‘Computational thinking and thinking about computing’, Philos Trans A Math Phys Eng Sci, vol. 366, no. 1881, pp.3717-3725.
Wing, J. M. 2011, 'Research notebook: computational thinking – what and why?', The Link, Spring, accessed 31 January 2017.
How to cite this article: Ructtinger, L & Stevens, R. 2017, ‘Computational thinking task design and assessment’, Scan 36(1), pp. 34-41