#education #evaluation

Evaluation Mechanisms

Remember the time when mobile phones were still the size of a brick? In roughly 30 years, mobile phones have developed into the incredibly fast and powerful smartphones of today. To make sure we can keep innovating the technology of today, we need flexible education that can not only keep up with these advancements, but that stays one step ahead. In our common effort to deliver this degree of flexibility, we are constantly advancing our university programmes. But at what cost? Are the innovations we introduce in education actually effective?

Answering these questions is rather difficult. The quality of education cannot simply be measured with an oscilloscope or multimeter. Of course, one could argue that simply comparing a new form of teaching (think of Problem Based Learning) to the previous one is a way to establish at least relative quality. But which criteria would you compare? And how would they stack up to form a final verdict? Establishing these criteria has become a distinct science within academic education.

Evaluation Mechanisms

To make it possible to compare evaluation results not only between the various editions of, for instance, a module, but also between modules or even with those of other programmes, the University of Twente has developed a standardized questionnaire. It contains several indicators that each assign a score on a five-point scale to a certain property, such as the quality of the teachers or the division of the study load. Standardized questionnaires are powerful tools for comparing educational units on a fixed set of criteria. Unfortunately, they fail to capture problems that fall outside the scope of the questions as formulated, and they rely heavily on a significant response rate to be of any use.
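To give a feel for how such scores might be processed, here is a minimal sketch in Python that aggregates fictitious questionnaire responses into per-indicator mean scores and flags a low response rate. The indicator names, the data and the 30% threshold are illustrative assumptions, not the UT's actual questionnaire or policy.

    from statistics import mean

    # Fictitious responses: each student scores every indicator on a 1-5 scale.
    # Indicator names and numbers are made up for this sketch.
    responses = [
        {"teacher_quality": 4, "study_load_division": 2},
        {"teacher_quality": 5, "study_load_division": 3},
        {"teacher_quality": 4, "study_load_division": 2},
    ]
    enrolled_students = 10

    # A module's profile is the mean score per indicator, which is what makes
    # editions, modules and even programmes directly comparable.
    profile = {
        indicator: mean(r[indicator] for r in responses)
        for indicator in responses[0]
    }
    print(profile)

    # Without a significant response rate, the means say very little.
    response_rate = len(responses) / enrolled_students
    if response_rate < 0.3:  # threshold chosen arbitrarily for illustration
        print(f"Warning: response rate of {response_rate:.0%} is too low to draw conclusions")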

To cover these shortcomings, one needs a more dynamic form of evaluation that provides a much more complete picture. The programme staff meets this need for thorough evaluations through the OpleidingsKwaliteitCommissie (OKC), or Programme Quality Committee in English. The OKC evaluates education by interviewing students and publishes the results in reports that are presented to the programme committee. Scintilla's educational committee applies this evaluation technique as well. Restrictions on available manpower limit the use of these interviews to exceptional cases, such as 'problematic' situations or recent educational innovations. The combination of standardized questionnaires and in-depth interviews forms the backbone of how the university safeguards the quality of its education. With this knowledge in mind, you might wonder how all of this benefits students; let us have a look at a recent case.

Digital Testing Case

In the last few years, the vast increase in student intake at the UT has outpaced the growth of the mathematics department. With roughly 1000 students each quarter, the number of students following the mathematics line is becoming too large to deliver test results in time. This is the main reason the UT initiated a project group working on the implementation of digital testing. When implemented properly, this new method of testing has several benefits:

  • Students receive instantaneous feedback
  • The degree of flexibility of the tests increases
  • Teachers' workload is significantly reduced, freeing up time they can invest in other tasks

Many new concepts have to be implemented to enable digital testing; an entirely new infrastructure is needed for formulating questions and collecting student responses. The project group has been working on developing these concepts, but this uncharted terrain requires thorough evaluation to determine their effectiveness.
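As an illustration of what such an infrastructure has to represent, the sketch below models a question and an automatically graded student response in Python. All names and the tolerance-based grading are hypothetical, not the project group's actual design.

    from dataclasses import dataclass

    @dataclass
    class Question:
        prompt: str
        answer: float
        tolerance: float = 1e-6  # numeric slack for rounded answers

    @dataclass
    class Submission:
        student_id: str
        question: Question
        given_answer: float

        def is_correct(self) -> bool:
            # Instantaneous feedback: grading reduces to a direct comparison.
            return abs(self.given_answer - self.question.answer) <= self.question.tolerance

    q = Question(prompt="What is the RMS value of a 10 V peak sine wave (in V)?",
                 answer=7.071, tolerance=0.01)
    s = Submission(student_id="s1234567", question=q, given_answer=7.07)
    print(s.is_correct())  # True: within tolerance, so feedback can be instant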

Pilots with both of these concepts were already conducted back in 2015. The first digital tests were taken on students' own laptops in a lock-down browser. It turned out that this lock-down browser was very easy to breach and that students were able to use other tools during the test. After this pilot, the project group interviewed students to ask them for alternatives. This resulted in a close and rather unique collaboration between the university and computer science students, leading to a combination of Chromebooks, a dedicated VLAN, IP restrictions and whitelisting for taking the digital tests.
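The whitelisting idea itself is simple: from the exam network, only explicitly listed destinations are reachable. The Python sketch below shows the core check using documentation-only example addresses; in the actual setup this is enforced in the network (VLAN plus firewall rules), not in application code.

    from ipaddress import ip_address, ip_network

    # Hypothetical whitelist: only the test server and the grading backend are
    # reachable from the exam VLAN. Addresses are reserved example ranges.
    WHITELIST = [
        ip_network("192.0.2.10/32"),    # digital test server
        ip_network("198.51.100.0/24"),  # grading backend
    ]

    def is_allowed(destination: str) -> bool:
        """Return True if traffic from the exam VLAN may reach destination."""
        dest = ip_address(destination)
        return any(dest in net for net in WHITELIST)

    print(is_allowed("192.0.2.10"))  # True: the test server itself
    print(is_allowed("8.8.8.8"))     # False: the open internet stays blocked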

Evaluation has also proven very useful in the formulation of questions. The very first digital test was graded on final answers only. In the online questionnaires, students indicated that they doubted the fairness of such a system. The project group subsequently adjusted the composition of the mathematics tests to 67% digital and 33% written.
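Assuming the two parts are graded separately and then combined with those weights, the final grade could be computed as below; the one-decimal rounding is an assumption for this sketch, not the official grading policy.

    def combined_grade(digital: float, written: float) -> float:
        # Weight the digital part at 67% and the written part at 33%
        # (grades on the Dutch 1-10 scale).
        return round(0.67 * digital + 0.33 * written, 1)

    # The written part keeps intermediate steps, and thus partial credit, visible.
    print(combined_grade(digital=8.0, written=6.0))  # 7.3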

The feedback provided by students in the online questionnaire also made other drawbacks and pitfalls easy to identify. The project group's initial estimate that students, in the current digital era, would quickly be accustomed to digital testing turned out to be mostly incorrect. Not only did students indicate that they needed a lot of time to get used to the new method of testing, they also indicated that they used much less scrap paper than during written tests, resulting in more calculation errors.

The project group for digital testing is constantly analyzing all the evaluation results and using them for improvements. Their focus is now on developing a hybrid test that can measure the knowledge and skills of the first-year mathematics line, while still feeling familiar to students.

Closing Remarks

I personally think the digital testing case shows that evaluations are a very powerful tool for improving education. It is very important that all students continue to provide feedback on their education; only then can we identify flaws and work on fixing them. Feel free to send an email to education@scintilla.utwente.nl with any suggestions or questions you might still have.