Thursday, May 10, 2007

CIPP Evaluation Model

Well, here I was thinking that all I ever did was the formal/summative evaluation in my teaching, focusing on the assessment of student learning, when it hit me (right between the eyes, it seemed!) that I actually do complete much of the evaluation outlined in the readings.

The CIPP model, as I have said recently, now not only seems to be the most relevant, but actually is the most accurate description of what I do! Up until now, I had been thinking it was the model that most represented what I should be doing.

This unit on evaluation seems to have taken the most brain energy to get through, despite having the fewest readings. Perhaps the smaller number has encouraged me to think more deeply rather than just take in as much as I can.

In this process, the evaluation of my learning in this course is improving! Very exciting!

Signing off, ready to hand in assignment Discussion 3!

Tuesday, May 8, 2007

Evaluation of Flexible Programs - Module 4

Having now read through the entire Module 4 and some of the checklists, the whole process of evaluation is now clear to me, along with the full extent of the different procedures it can inform.

Whilst reading through Eseryel (as shown in the previous post), Stufflebeam's CIPP (Context, Input, Process, Product) model stood out in my mind as the most appropriate to my context. It seemed the best fit for my situation, catering for both online/flexible delivery and the face-to-face classroom.

But as I read through the notes of Module 4, the depth of the evaluation becomes more apparent. The four areas of evaluation are:
  1. Context
  2. Input
  3. Process, and
  4. Product

However, as I now know, these are not just different types of evaluation that can take place; they indicate four completely different areas, and they line up well (it seems to me) with the ADDIE method of instructional design, as each area focuses on and informs a different stage of the design process.
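
To make that alignment concrete for myself, here is a rough sketch in Python (purely my own illustration, not something from the readings; the phase pairings are my guesses about how the two frameworks might line up):

  # Purely illustrative: my own guess at how each CIPP evaluation
  # area might inform the phases of the ADDIE design cycle.
  cipp_to_addie = {
      "Context": ["Analysis"],             # needs/objectives feed the analysis
      "Input": ["Design", "Development"],  # strategy choices feed design and development
      "Process": ["Implementation"],       # monitoring how the delivery actually runs
      "Product": ["Evaluation"],           # outcomes judged for worth and merit
  }

  for area, phases in cipp_to_addie.items():
      print(f"{area} evaluation informs: {', '.join(phases)}")

The pairings are loose, of course; in practice each CIPP area probably feeds back into more than one ADDIE phase.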

It certainly seems exhaustive, so I will keep reading and thinking about its implementation in my context.

Monday, May 7, 2007

Approaches to Evaluation of Training

In this article (found at http://www.ifets.info/journals/5_2/eseryel.html), Eseryel outlines how evaluation is a complex task, and one often not undertaken due to its complexity or a lack of experience/skills.
The four purposes of evaluation:
  1. Evaluation of student learning
  2. Evaluation of instructional materials
  3. Transfer of training
  4. Return on investment
were interesting, as it seems that only the first two are relevant to the situation I have chosen.
When looking at the systems approaches, the CIPP model (1987) is the most useful one in my situation:
  Context: obtaining information about the situation to decide on educational needs and to establish program objectives
  Input: identifying educational strategies most likely to achieve the desired result
  Process: assessing the implementation of the educational program
  Product: gathering information regarding the results of the educational intervention to interpret its worth and merit
However, as he suggests, these systems approaches do not address the collaborative process of evaluation.
Eseryel also suggests that systematic and planned evaluation is generally not found in practice, nor is there a distinction between formative and summative evaluation. The most common type of evaluation is of student performance, in the form of assessment (which is a heavy focus in our school and educational setting); not enough attention goes to reviewing the design of the instruction based on the results of that assessment. Much of the focus is on the learner, not the course design.
In my experience, and in discussing this issue with a number of staff, it seems that some staff are aware of inadequacies in the instructional design, but are equally aware of the weaknesses of the students, and only use assessment to formally "prove" that their judgements are correct.

A plan has also recently arisen to modify the assessment to suit the desired final outcome. As Eseryel suggests of the bias of internal evaluators (in this situation, the assessors), that bias may have a very positive effect on course uptake, even at the school level.

One staff member has said to another: "Mark this task easily so that we don't turn off the students from choosing this subject!"

Similarly, evaluation tools are limited, and the reaction sheets are adding to the failure of evaluation in training scenarios.

Those are my thoughts on the parts of this article worth mentioning.