Introduction

I started using PeerWise in 2015 as part of the assessment tasks for both an undergraduate and a postgraduate subject. New to this teaching approach, I adopted an existing set of marking criteria. Since then, however, I have found it necessary to experiment with new ways of ensuring these criteria are fair and reliable while also encouraging peer learning.

PeerWise is a peer learning portal that requires students to write their own multiple-choice questions, answer questions written by their peers, and rate the questions their peers have written. It has scoring systems built into the portal that calculate students’ scores automatically based on a number of variables. I adopted its reputation score and answer score as part of my assessment criteria. These scoring systems encourage students to get into PeerWise as early as possible, to engage actively in rating and commenting on their peers’ questions, and to answer questions accurately.

The use of PeerWise helps students to achieve their topic learning outcomes, and forms part of the requirement to develop ‘high quality’ questions, an important assessment criterion I recently incorporated.

‘Peerwise was a fantastic resource – some discussion over answers may have been beneficial after it had been marked (i.e. students disagreeing over the correct answer).’

‘I loved the Peerwise system – I thought this was a fabulous way to learn. It enabled you to do so much revision and learn so much. I loved the challenge board and gaining points.’

What’s important about this learning and teaching story?

Incorporating technologies into our teaching can look simple at first glance; however, we need to be aware of the various ways students can ‘game’ the system and put in place a range of strategies to ensure it both encourages effective learning and acts as a fair and reliable tool for assessment purposes. This can require time both to uncover the issues and to find effective solutions.

What were you trying to achieve?

In my early teaching with PeerWise, I found the reputation score built into the tool useful, as it motivates students to access the tool early in the session and to put in consistent work throughout.

However, it is also open to misuse. For example, I discovered that a student had posted a video for his peers on how to achieve a high reputation score without putting in much ‘real effort’ (he suggested leaving lots of short, unproductive comments such as ‘thank you’, ‘well done’ and ‘good job’, regardless of the quality of the question).

While some students responded to low-quality questions with their own high-quality feedback, a high reputation score could also be achieved by writing as many low-quality questions as possible (e.g. questions that were off topic, full of grammatical errors, and so on).

My first attempt to combat misuse of the reputation score was to require students to add ‘constructive’ comments (for which I provided a definition). Yet this turned out to be too vague for students, and too subjective when it came to marking.

Does this mean we should not use the reputation score?

That wasn’t my response. I still wanted to keep this feature of the technology, as it did increase motivation; yet I needed to keep exploring ways to ensure students used it more effectively, and to make the assessment fairer and more equitable.

What did it look like?

I decided to lower the weighting of the reputation score and revise the marking criteria to more clearly reflect the way students needed to contribute. While I haven’t included the full marking rubric here (it is available on request), below are the criteria I currently use:

  1. Achievement of reputation score [20%] – I set a minimum reputation score for each grade.
  2. Achievement of answer score (answering questions correctly) [40%] – Again, I set a minimum answer score for each grade. This relates to the number of questions for which the student has given the correct response.
  3. Authoring of questions covering the topic learning outcomes [40%] – I set a minimum number of ‘high quality’ questions that students must author for each grade, and the number of topics these questions need to cover.

The answer score is more reliable than the reputation score in the sense that it offers fewer opportunities for misuse, which is why I give it a higher weighting.
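
To illustrate how these weightings might combine into a final mark, here is a minimal sketch in Python. It is an illustration only: the grade-band thresholds, the conversion of raw PeerWise scores into component marks, and the function names (component_mark, final_mark) are my assumptions for demonstration, not the actual rubric.

    # Illustrative sketch only: the grade-band thresholds and the conversion of
    # raw PeerWise scores into component marks are assumptions for demonstration,
    # not the actual marking rubric.

    # Weightings taken from the criteria above.
    WEIGHTS = {"reputation": 0.20, "answer": 0.40, "authoring": 0.40}

    # Hypothetical minimum scores per grade band: (minimum score, mark out of 100).
    REPUTATION_BANDS = [(900, 100), (700, 75), (500, 50)]
    ANSWER_BANDS = [(300, 100), (200, 75), (100, 50)]
    AUTHORING_BANDS = [(6, 100), (4, 75), (2, 50)]  # number of 'high quality' questions

    def component_mark(score, bands):
        """Convert a raw score to a mark out of 100 using minimum-score bands,
        listed from the highest grade band down."""
        for minimum, mark in bands:
            if score >= minimum:
                return mark
        return 0

    def final_mark(reputation, answer, questions_authored):
        """Combine the three component marks using the 20/40/40 weightings."""
        marks = {
            "reputation": component_mark(reputation, REPUTATION_BANDS),
            "answer": component_mark(answer, ANSWER_BANDS),
            "authoring": component_mark(questions_authored, AUTHORING_BANDS),
        }
        return sum(WEIGHTS[key] * marks[key] for key in WEIGHTS)

    # Example: a reputation score of 720, an answer score of 250 and five
    # high-quality questions gives 0.2*75 + 0.4*75 + 0.4*75 = 75.0
    print(final_mark(720, 250, 5))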

Authoring questions is the key for students. I started setting boundaries regarding the kinds of questions I was looking for. I provide a clear definition of ‘high quality’ questions, and emphasise the importance of linking these to the learning outcomes for each topic. Through various trials, I have found this is critical in defining quality and ensuring that students reflect on the topic learning outcomes.

How can I make this happen?

Experimentation is important whenever you use a new technology; simply incorporating it will not necessarily lead to success, and removing the tool or particular features isn’t always the best option when things go wrong. With some trial and error, we can influence student behaviour so that students engage with the technology and with each other in the way intended, enhancing learning outcomes.