In the last blog on Level 1 Evaluations, we looked at the different scales that can be used in creating a survey. Here, let us consider some of the characteristics the items on those scales need to have so that respondents can answer the survey effectively.
“Items” means the “questions” or “statements” in the survey. In any survey, the items have to be worded and designed very carefully. The following characteristics need to be considered to create an effective survey instrument:
ORDER: The items have to be placed in a logical order. If you are collecting data on different aspects of the training, group the items by those aspects. For example, the items related to the design, the facilitator's skills, administrative support, etc. should be neatly separated into these categories.
===================
The items in a survey have to be worded effectively. The following is a list of common errors made while designing a survey.
DIRECTIONAL QUESTIONS: Items such as “The facilitator was good” and “The content was relevant” are directional. These are also called “loaded” questions. They are worded positively, and this may influence the respondents. Items should be worded in a neutral manner. For example, the item can say “Style of the facilitator”, and “Good” can be one of the points on a rating scale that ranges from, say, “Very Poor” to “Very Good”.
AMBIGUOUS QUESTIONS: The items on the rating scale should not be ambiguous. Here is an example of an ambiguous item: "Was the facilitator good or bad?" This question cannot be answered on a rating scale or with a Yes/No response; it would need a qualitative answer.
DOUBLE NEGATIVES: Do not word your questions using double negatives. For example, “The session did not start late” is a badly worded survey item. The direct question would be whether the session started on time.
USING ABBREVIATIONS: Do not use abbreviations in your survey. For example: “How do you rate the training delivered by AGC?” Is AGC a company, a department, or something else? This is especially true if feedback is being collected from samples drawn from different populations, where the reference and context may not be easily discernible. If an abbreviation is to be used, spell out the full name at first reference with the abbreviation in parentheses; in the following sections, the abbreviation can be used.
USE OF JARGON: The questions should be worded simply. Here is an example of a feedback item with jargon: “How did you find the andragogy of the facilitator?” While “andragogy” is the correct word in an adult-training context, the participants may not understand what it means; if a word like “methods” or “pedagogy” is used instead, they may understand it better. If such a word has to be used, its meaning can be given in parentheses for better understanding.
USE OF SLANG: Avoid slang when designing surveys. For example, words like “guys”, “fellow”, etc. should be avoided.
GENDER BIAS: Items should be free of gender bias. For example: “Trainer was well versed in his area of work.” Apart from being directional, this statement also has a gender bias. A corrected sentence would be “The trainer was well versed in his/her area of expertise”; better still, a neutral item would simply ask respondents to rate the “Expertise of the trainer”.
===================
Item design is a very important step in collecting data. The relevance and validity of the data gathered depends on the design of the questionnaire.
PILOT THE INSTRUMENT: It is a good idea to pilot the questionnaire to see the type of data it yields. Also get inputs from the pilot group about any problems with the items:
- Did the pilot respondents think the items were relevant?
- Were the items clear?
- Did the rating scale match the item? For example, if the item asks about “Frequency” but the rating scale measures “Quality”, there would be a mismatch.
- How much time did the survey take? Was that time justified?
- Did it have enough scope for a qualitative aspect as well (if required)?
- What problems did the respondents face while answering the survey?
- What is a suitable way to administer such a survey? Handed out on paper after the session, a web survey, etc.?
The pilot will also help you find out the challenges that may lie in scoring the surveys (a minimal scoring sketch follows this list). For example:
- How much time does it take to score the survey?
- Can it be machine scored?
- What is the software requirement?
- What kinds of scores does the survey yield?
- What kind of analysis is possible with the data?
- What meaningful conclusions can you draw from the data?
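To make the scoring questions above concrete, here is a minimal Python sketch of machine scoring. It is only an illustration under assumptions that are not in this post: the item names, the category grouping, and the five-point quality scale labels are hypothetical, and any real survey tool will have its own export format and scoring features.

    # Minimal sketch of machine-scoring a Level 1 survey.
    # Assumptions (hypothetical, for illustration): each response is a
    # dict of {item: scale label}, items are grouped into the categories
    # discussed above, and a five-point quality scale is used.
    from statistics import mean

    # Map the verbal rating scale onto numbers for scoring.
    SCALE = {"Very Poor": 1, "Poor": 2, "Average": 3, "Good": 4, "Very Good": 5}

    # Hypothetical grouping of item IDs into categories.
    CATEGORIES = {
        "Design": ["relevance_of_content", "structure_of_material"],
        "Facilitator": ["expertise_of_trainer", "style_of_facilitator"],
        "Administration": ["venue_and_logistics", "schedule_adherence"],
    }

    def score_survey(responses):
        """Return the mean numeric rating per category across respondents."""
        scores = {}
        for category, items in CATEGORIES.items():
            ratings = [
                SCALE[r[item]]
                for r in responses
                for item in items
                if r.get(item) in SCALE  # skip blanks or invalid labels
            ]
            scores[category] = round(mean(ratings), 2) if ratings else None
        return scores

    # Example: two pilot respondents.
    pilot = [
        {"relevance_of_content": "Good", "structure_of_material": "Good",
         "expertise_of_trainer": "Very Good", "style_of_facilitator": "Good",
         "venue_and_logistics": "Average", "schedule_adherence": "Very Good"},
        {"relevance_of_content": "Very Good", "structure_of_material": "Average",
         "expertise_of_trainer": "Good", "style_of_facilitator": "Average",
         "venue_and_logistics": "Average", "schedule_adherence": "Good"},
    ]

    print(score_survey(pilot))
    # -> {'Design': 4.0, 'Facilitator': 4.0, 'Administration': 3.75}

Even a small sketch like this answers several of the pilot questions at once: the survey can be machine scored, it yields a mean rating per category, and the scores point you to the weakest area of the training.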
===================
Successful Level 1 evaluations will help you redesign your training interventions in terms of the training content, methods, participant levels, facilitators, and training administration and coordination support services. Get started with a free trial of online surveys and start evaluating your trainings at Level 1.