Research into Structuring of the Assessment of Occupational/Practical Competence

One of our principal aims at Qualifications Wales is to ensure that qualifications are effective in meeting the reasonable needs of learners.

To achieve this, we are required to consider whether assessment arrangements for the qualifications we regulate are reliable, valid, and effective.

In balancing these aspects, one question we need to answer is whether learners should be assessed against all defined learning outcomes/assessment criteria, or only a sample of them. This question is particularly important for vocational qualifications (VQs) that assess competence with the aim of establishing readiness to work.

When reviewing and reforming VQs, there are recurring debates about how we define a competency, how we assess it and, in particular, what is important to assess. Qualifications Wales wanted to investigate examples outside the VQs that we regulate to inform our approach to answering these questions.

Assessing full competence is time-consuming. However, qualification stakeholders may expect that all learning outcomes are assessed, on the assumption that only this can ensure reliability in determining readiness to work. This expectation could be especially strong for occupations that carry a higher degree of risk to the learner or the public.

This report, prepared by AlphaPlus, explores how assessments of occupational competence can be structured so that the best balance is struck between validity and reliability on the one hand, and effectiveness on the other.

The report includes examples of different assessment designs from non-regulated qualifications of various levels and types. The case studies were selected to examine situations where unreliable assessment of competence may bring risks to learners or the public:

  • undergraduate doctors
  • overseas medical professionals, covering doctors, nurses, and midwives
  • two different types of engineering qualification, and
  • an accreditation scheme within a caring profession.

The case studies consider how assessment designers sample the assessment content, how they manage risks to learners and the public, and how they involve stakeholders in the process. 

The findings show that insisting on assessing every learning outcome every time may create problems. For example, over-assessment and an excess of ‘false negative’ outcomes could reduce the overall reliability of the assessment. At the same time, stakeholder support for assessment design is important, and a range of design approaches is possible when considering the public interest in proportionate management of risk and in gaining stakeholder acceptance. Aspects of a competency where incompetence poses a risk to public safety could be treated as topics that must be assessed, whereas less risky aspects might be treated as topics that should or could be assessed.

What comes after the qualification is likely to be as important as what is in the qualification. For example, approaches may differ if work is supervised post-qualification, compared with situations where the newly qualified individual works in isolation or with limited supervision or support.

The report relates the case studies to the history of vocational assessment and the theory of conjunctive, compensatory and disjunctive assessment approaches. It includes some reflection on the challenges of relating that theory to practice. 

Qualifications Wales hopes this report will be useful to anyone interested in the assessment of occupational or practical competence.