
Five Common Assessment Compliance Pitfalls

18 - February - 2020

John Dwyer is a highly experienced consultant in the Vocational Education and Training industry and a much sought-after presenter, with a love for (and niche expertise in) all things Competency Based Assessment. In this article he shares his thoughts on five common assessment compliance pitfalls.

Note that the points made in this article represent my personal viewpoint. They are not necessarily points of view held by Velg Training.

1. Failure to understand how the principles of evidence and/or the rules of evidence apply

An understanding of the principles of evidence and the rules of evidence is critical in avoiding compliance pitfalls. The principles of assessment apply to the assessment processes and products (assessment tools/instruments) that we use to assess a whole unit of competency while the rules of evidence apply to judgements we make about individual pieces of evidence. Many assessors have difficulty in making this distinction.

Unfortunately, the official definitions sometimes contribute to this confusion.

Principles of assessment

Table 1.8-1 in the Standards for RTOs, 2015 refers to “validity” as “any assessment decision of the RTO is justified, based on the evidence of performance of the individual learner”. While this is logical, assessment decisions are actually justified against the rules of evidence.

Table 1.8-1 goes on to indicate that:

Validity requires:

  • assessment against the unit/s of competency and the associated assessment requirements covers the broad range of skills and knowledge that are essential to competent performance;
  • assessment of knowledge and skills is integrated with their practical application;
  • assessment to be based on evidence that demonstrates that a learner could demonstrate these skills and knowledge in other similar situations; and 
  • judgement of competence is based on evidence of learner performance that is aligned to the unit/s of competency and associated assessment requirements.

Dot point one above refers to “content” validity, which requires assessors to assess all components of the unit of competency – nothing more and nothing less. This protects learners against assessors who want to include their own “pet” requirements even though they are not part of the unit of competency. It is allowable to train beyond the requirements of the unit, but assessors who then include any of this additional information/activity in an assessment item have created a non-compliant assessment instrument. This is why we map the assessment instruments we have developed for a unit of competency: to check, first, that we have not missed any unit requirements and, second, that we have not included any items that cannot be mapped to the actual requirements of the unit.

The second dot point in the list above is actually an assessment procedure rather than a principle, but it can be linked with “face” validity, which is not addressed in this list. “Face” validity requires that the assessment process and tools used are acceptable to industry and meet industry requirements. This is why we are required to consult with industry representatives to validate our assessment procedures and tools. I think that if I suggested to a supervisor of apprentices that I intended to assess a group of student bricklayers by asking them to write a three-page essay on how to lay bricks, I would get a fairly direct (negative) response.

The third dot point above is linked to a definition of competency-based assessment that requires “competency” to be demonstrated in a range of contexts and over a period of time. This begs the question: how many contexts, and over what period of time? The format of units of competency was changed to address this issue. Units are now written as the unit of competency and its associated assessment requirements. These assessment requirements define the required contexts and they also indicate the number of times that performance must be demonstrated. Assessors developing assessment processes and assessment instruments/tools must take these requirements into account. This has further implications. It means that assessment should be conducted progressively during the training period, not held back and conducted as a “final” assessment.

Some of the confusion that exists here comes about because of a misunderstanding of the difference between “formative” and “summative” assessment. Many assessors define “summative” assessment as “occurring at the end” and use this to justify holding summative assessment over to a “final” exam/assessment. While summative assessment does “occur at the end”, the “end” is the end of any particular learning process, no matter when this occurs within the learning/training period. Any assessment which an assessor undertakes that is designed to contribute towards the overall assessment is “summative” assessment.

On the other hand, “formative” assessment is much less formal. It can occur at any time when the assessor (or the learner) wants to “check how things are going”. A simple question to a group, “Do you understand this point?”, is actually a formative assessment. Depending on the response obtained, the trainer/assessor may decide that the point needs re-training, possibly in a different way. Formative assessment is not recorded, except perhaps as guidance noted for the trainer/assessor or course developer/reviewer.

The fourth dot point above is actually related to “judgement of competency” and is therefore closely linked to the rules of evidence. (See comments below.)

It is interesting to note that the dot points above do not refer to “construct” validity. “Construct” validity refers to constructs that trainers/assessors regularly call on in carrying out their activities. The Australian Qualifications Framework (AQF) is one such construct. (Others are the Core Skills Framework and the Core Skills for Work Framework.)

One of the most common non-compliances in competency-based assessment is that assessment tools/instruments are not pitched at the relevant Australian Qualifications Framework (AQF) level. Assessment tasks are often pitched at a level above or below the level described in the AQF. This issue was compounded when the decision was taken to remove the AQF level indicator from the unit code. It is acknowledged that AQF levels apply at the qualification level rather than at the unit level, but assessors now face dilemmas when units written originally for a qualification at a higher or lower AQF level are imported into a qualification at a different level.

Rules of evidence

Table 1.8-2 in the Standards for RTOs, 2015, in describing “validity” as one of the rules of evidence, states, “the assessor is assured that the learner has the skills, knowledge and attributes as described in the module or unit of competency and associated assessment requirements”. While this is technically correct, it is actually a re-statement of the fourth dot point in the principles of assessment list. A more commonsense definition could be that a piece of evidence is “valid” if it is actually evidence of what it claims to be evidence of. Remember that evidence is always evidence of something. A curriculum vitae (CV) is a piece of paper, but it may be evidence of the holder’s work experience as this relates to the unit of competency for which it is being presented as evidence. If a unit of competency requires evidence of work experience in a particular industry area and such evidence is not included in the CV, then the CV is an invalid piece of evidence and cannot be used to support a judgement of competence against the unit (even if it is authentic and current).

The rules of evidence are applied to individual pieces of evidence – each piece of evidence must be valid, authentic and current and there should be sufficient pieces of evidence that satisfy each of these criteria to allow the assessor to make a judgement overall.

Standard 1, Clause 1.8 of the Standards for RTOs, 2015 states:

1.8. The RTO implements an assessment system that ensures that assessment (including recognition of prior learning):

      a. complies with the assessment requirements of the relevant training package or VET accredited course; and
      b. is conducted in accordance with the Principles of Assessment contained in Table 1.8-1 and the Rules of Evidence contained in Table 1.8-2.

In the light of the comments made above, the significance of 1.8(b) may now be more apparent.

I would now like to look at a number of more specific issues that lead to common assessment compliance pitfalls. Most of these I have become aware of as part of my consultancy activities with a range of RTOs.

2. Failure to understand (or misunderstanding) the intent of a performance criterion (PC) or other unit component

Sometimes the person preparing the assessment tasks fails to understand, or misunderstands the intent of a particular component. An example might make this clear.

The unit TLIF2010 Apply fatigue management strategies involves the skills and knowledge required to apply fatigue management strategies within the transport and logistics industry. Work is undertaken in compliance with relevant legislation, regulations, codes and guidelines. It includes identifying and acting on signs of fatigue and implementing appropriate strategies to minimise fatigue during work activities [my emphasis and underlining].

PC 1.2 states, “Personal warning signs of fatigue are recognised and necessary steps are taken in accordance with workplace procedures to ensure that effective work capability and alertness are maintained.”

An actual online knowledge quiz that I reviewed (as a consultant) mapped questions 3, 4, 10, 11, 14 and 15 against this PC. Questions 10, 14 and 15 are reproduced below.

TEST 1 items (Extract) Job – Bus driver

[Image: Questions 10, 14 and 15 from the online knowledge quiz – not reproduced here]

While each of these questions has something to do with fatigue, none of them deals with “personal warning signs of fatigue” and none of them refers to “necessary steps in accordance with workplace procedures”. In addition, these questions are not linked to minimising fatigue “during work activities”.

The inclusion of these items in the test impacts on the validity of the evidence gathered for this PC. These items are not addressing what is actually required. They are not evidence of what they claim to be evidence of.

As an aside, the provided responses to Questions 10 and 14 illustrate a common problem that occurs with multiple choice questions. If multiple choice questions are to really test the learner, the incorrect answers (the distracters) must really distract. It is very unlikely that any learner would choose any of responses (c), (d), or (e) for Question 10 or any of the responses (a), (c), or (d) for Question 14.

3. Gathering “observation” evidence – beyond putting a tick in a box

Evidence of actual performance is critical in competency-based assessment. Often such evidence is gathered via “observation”, where the assessor is present and observes the performance. The assessment instruments used to gather such evidence need careful attention if the evidence they gather is to be judged as “reliable”. Once again, an example might clarify the points being made.

[Image: extract from an observation checklist – “Perform a pain assessment”, followed by a series of dot points]

There are major problems with the “reliability” of this item which I have copied from an actual observation checklist (assessment instrument).  Reliability occurs if the same assessor makes consistent judgements when using the assessment instrument on a number of occasions with a variety of learners and/or when a number of assessors make consistent judgements when using the assessment instrument in a variety of contexts and with a variety of learners.

One way of building in “reliability” is to provide clear instructions to the learner and the assessor, but in this case no directions were included anywhere in the assessment instrument. For example, how are the S/U boxes to be filled in? In this example, does the learner have to demonstrate all of the dot points to get a tick or can a tick be provided if some of the dot points are demonstrated (how many)?
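As a purely hypothetical illustration (no such direction appeared in the instrument reviewed), a direction of this kind might read: “Mark S (Satisfactory) only when the learner has demonstrated every dot point listed for the task; otherwise mark U (Unsatisfactory) and note in the comments section which dot points were not demonstrated.” Even one sentence like this removes much of the guesswork for both the assessor and the learner.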

Another way of building in “reliability” is to ensure that the assessors are given guidance about what is actually required. (What is “satisfactory” performance?) In this example the learner is being asked to perform a pain assessment. The dot points provided alert the assessor to what is to be looked for with regard to a “satisfactory” pain assessment. (I have seen many observation checklists which contain only the first line, “perform a pain assessment”, without any explanation of what constitutes “satisfactory” performance.)

However, these dot points do not provide enough information to ensure “reliability”. What does the assessor need to look for with regard to “identify indication” or “a clear explanation of the procedure”? When I raise this point with many assessors I am often told, “But I am an experienced practitioner and assessor and I know what is required and this is what I look for.” The problem with this claim is that each of these assessors is working from his or her personal experience and personal biases, and so they may be looking for different things. In this case a tick in a box indicates only that I saw something. (In a worst case scenario, a tick in a box may indicate only that I put a tick in a box, perhaps without even looking at the learner’s performance.)

If observation checklists are to yield reliable results, they must include clear directions for both the learner and the assessor, and the items to be observed must be written in such a way that they spell out “satisfactory” performance – e.g. “a clear explanation of the procedure” above may need to be accompanied by a series of dot points that spell out the critical procedural steps to be explained.
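For example (a hypothetical illustration only – these points are not taken from the actual checklist), “a clear explanation of the procedure” might be expanded into observable points such as:

  • explains to the patient why the pain assessment is being performed;
  • asks the patient to describe the location and nature of the pain;
  • asks the patient to rate the severity of the pain using the agreed pain scale; and
  • explains what will be done with the information collected.

With points like these spelled out, two assessors observing the same performance are far more likely to reach the same judgement.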

4. The importance of marking guides

Another way of building “reliability” into an assessment tool is to provide a “marking guide” or “suggested responses” to ensure that all assessors will make consistent judgements about the response provided by the learner. I actually prefer the term “benchmark criteria” because this goes beyond a “suggested” response by defining what constitutes a “satisfactory” response. It is important to remember that competency-based assessment requires a “satisfactory” response for every assessment item.

Here is a good practice example from an actual assessment instrument.

2. Identify three oxygen delivery devices and the appropriate flow rate for each.

  • Nasal prong (2-4 litres/min.)
  • Hudson (simple) mask (6-8 litres/min.)
  • Venturi mask (4-12 litres/min.)
  • Non-rebreather (12-15 litres/min.; >10 litres/min.)

Here, what constitutes a “satisfactory” response is incorporated in the question – “Identify three...” – with four options provided. Sometimes the indicator is included in the guidance to the assessor, e.g. “The learner must provide at least three of the following suggested responses.”

Here is a different example.

[Image: extract from an assessment instrument in which no benchmark criteria are provided to the assessor]

This time no guidance is provided to the assessor. This is not a set of assessment criteria/benchmarks.  It is not likely to lead to “reliable” judgements.

Here is a different kind of marking guide.

Q5) Define the term homeostasis:

Provided “Benchmark answer”

Homeostasis

The body is in homeostasis when its internal environment contains the optimum concentration of gases, nutrients, ions and water and has an optimal temperature

Our internal environment conditions stay much the same, even though our external environment is constantly changing. This stability is achieved through homeostasis, which means “staying the same”

Attempts to provide “model” answers are seldom useful. It is highly unlikely that any learner will duplicate exactly the words provided (except perhaps if the question requires the learner to recall a pre-learned response). In fact, even though this is a very complex response, it doesn’t provide the assessor with much useful information.

It contains three sentences, each with a number of points:

  • Internal environment: optimum concentration of gases, nutrients, ions and water; and an optimal temperature
  • Internal stays the same; external constantly changing
  • Stability achieved through homeostasis

It is a detailed response but what constitutes a “satisfactory” response? Does all of this information need to be provided? If not, which must be provided by the learner?

The assessor would have found this more useful if the sustained prose in the model answer had been broken down into a series of dot points, with the assessor advised that the learner must include at least “X” of these points to demonstrate a “satisfactory” response. If it were considered important, the assessor could have been told that the learner must include point “a” plus at least “x” others.
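As an illustration only (a hypothetical rework of the benchmark answer above, not taken from the actual instrument), the marking guide might instead read:

The learner’s response must include point (a) plus at least one of the other points:

  • (a) homeostasis means the internal environment “staying the same” even though the external environment is constantly changing;
  • the internal environment contains the optimum concentration of gases, nutrients, ions and water;
  • the internal environment is maintained at an optimal temperature.

Presented this way, every assessor knows exactly which elements a “satisfactory” response must contain.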

Marking guides such as this put the principle of reliability at risk. They also risk the “validity” of the response because “critical” information may be overlooked by the learner or the assessor. In addition the principle of “fairness” is at risk if one assessor is expecting a “broader” response than is expected by other assessors.

5. The importance of mapping matrices

While assessment tools do not have to be mapped against the unit of competency, the RTO is required to be able to demonstrate that the assessment process addresses all of the requirements of the unit.  The easiest way to do this is to map the assessment items against the unit requirements. This is perhaps best done by the tool developer, mapping assessment items as they are written.

Best practice mapping maps specific assessment items against the unit components.

[Image: example of a detailed mapping matrix, mapping individual assessment items against unit components]

This allows the RTO to respond to a question such as, “Where did you address dot point 4 in the required Knowledge Evidence?”
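As a hypothetical sketch only (the components and item numbers are invented for illustration), a few rows of such a matrix might read:

  • Knowledge Evidence dot point 4 – Written Task 1, Question 7; Observation Checklist, Item 3
  • PC 1.2 – Observation Checklist, Item 5; Third-party Report, Question 2
  • PC 1.3 – Written Task 1, Question 9

Because each unit component points to the specific question or observation item that gathers evidence for it, the question above can be answered immediately.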

Compare this form of mapping with another quite common approach that is likely to lead to assessment pitfalls.

[Image: example of a broad-brush mapping matrix, mapping PCs 1.1 – 1.4 against whole assessment tasks]

I call this form of mapping “Have a guess” or “Take your pick”. Mapping is really only worth doing if it is of use to the RTO, and all this map tells the RTO is that it has addressed PCs 1.1 – 1.4 somewhere in Task 1 Written, Task 3 Observation or Task 4 Vocational Placement.

Mapping is a measure of the validity of the assessment instruments – have they covered all of the requirements of the unit, nothing more and nothing less? This second form of mapping leaves the validity of this set of assessment instruments in question.

With the right approach in these five key areas at your RTO, you can avoid some of the most common assessment pitfalls!
