A Strategic Guide to Designing Assessment Centres for Competency
In the rush to assess talent, many organisations fall into a common trap: they confuse the tools with the process. They purchase a generic psychometric test or download a standard case study and believe they have created an assessment centre. But a true Assessment Centre is not a product you buy off the shelf; it is a process you design. When designing assessment centres, the goal is not just to filter candidates. The goal is to simulate the reality of your organisation so accurately that you can predict, with scientific validity, how a person is likely to perform in a specific role.
Whether you are hiring for a high-stakes executive position or screening juniors for a future leaders programme, the "one-size-fits-all" approach rarely works. It fails to capture the unique cultural nuances of your company and often misses the specific behavioural drivers required for success in your market. At Riverwaves, we believe in an architectural approach. Just as you wouldn't build a house without a blueprint, you shouldn't build a selection or development process without a design strategy. In this guide, we break down the three critical steps to building a bespoke competency-based assessment centre that actually works, ensuring you select talent that drives performance.
Step 1: Defining the Competency Framework
The foundation of any assessment is the "What." What exactly are you looking for? If your definition of success is vague, your results will be vague. Many organisations rely on generic lists of virtues: "Leadership," "Communication," "Integrity." While these sound good on a poster, they are too broad for measurement. To ensure accuracy, you need rigorous Competency Matrix Design.
Moving from Titles to Behaviours
You must translate how the organisation defines each competency into observable behaviours, because different behaviours require different measurements. When we work with clients to build a framework, we focus on detailed clustering:
- Defining the Cluster: Grouping related skills (e.g., "People Management") to create a logical structure.
- The Positive Indicators: What does "good" look like? (e.g., "Proactively seeks input from quiet team members"). This gives assessors a clear target to look for.
- The Negative Indicators: What does "poor" look like? (e.g., "Dominates the conversation and interrupts others"). This helps assessors spot derailers and undesired behaviours.
By defining these indicators upfront, you create a shared language of performance and behaviour. This ensures that every assessor is looking for the exact same thing, removing subjectivity from the equation.
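To make the idea concrete, the framework above can be sketched as a simple data record. Everything here (the class, the example competency, and its indicators) is illustrative only, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Competency:
    """One measurable competency with its behavioural indicators."""
    name: str
    cluster: str
    positive_indicators: list[str] = field(default_factory=list)
    negative_indicators: list[str] = field(default_factory=list)

# Hypothetical framework entry using the indicators from the text above
inclusive_leadership = Competency(
    name="Inclusive Leadership",
    cluster="People Management",
    positive_indicators=["Proactively seeks input from quiet team members"],
    negative_indicators=["Dominates the conversation and interrupts others"],
)

# Every assessor scores against the same explicit indicators, not a vague title
print(inclusive_leadership.cluster)  # People Management
```

Writing the framework down in this structured form is what creates the "shared language": each assessor works from the same named indicators rather than a private interpretation of the competency title.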
Step 2: Selecting the Right Activities
Once you know what to measure, you must decide how to measure it. This is where HR Activity Design becomes an art form. A common mistake is using exercises simply because they are familiar to the team or "fun." However, every activity must have a direct line of sight to a specific competency in your matrix: if an activity doesn't measure a specific competency behaviour, it is unnecessary. To build a robust Assessment/Development Centre, we recommend a "Multi-Trait, Multi-Method" approach, which means testing every competency at least twice using different types of activities:
1. The Analysis Presentation
- Best for: Strategic Thinking, Analysing Information, Presenting & Delivering Messages, Planning & Organising
- The Setup: Candidates read a business case, analyse the data, and present a recommendation or decision based on the information provided.
- Why it works: It directly assesses a candidate's ability to analyse information, reach sound conclusions, and deliver them clearly.
2. The Group Discussion
- Best for: Analysing Information, Working with Others, Taking Decisions, Presenting & Delivering Messages.
- The Setup: Participants are presented with a business case describing a work-related problem and must work through the information together to reach a solution within the assigned time frame.
- Why it works: It measures a candidate's ability to work in a team, contribute, delegate, and solve problems.
3. The Role-Play
- Best for: Empathy, Conflict Resolution, Coaching.
- The Setup: The candidate and a trained role-player are each assigned fictional roles. The role-player follows a separate brief and often introduces new information during the conversation that does not appear in the candidate's brief.
- Why it works: Role-plays measure a candidate's ability to handle difficult situations, letting the employer see how the candidate is likely to perform in realistic, work-related scenarios.
4. The In-Tray / E-Tray
- Best for: Customer Focus, Planning & Organising, Strategic Planning, Taking Decisions.
- The Setup: In-tray exercises present a set of fictional documents simulating the paperwork that arrives in a typical manager's inbox.
- Why it works: The in-tray exercise measures the participant's ability to set priorities and make decisions within a tight time frame; some cases can also measure customer focus.
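The "Multi-Trait, Multi-Method" rule above lends itself to a simple coverage check: map each activity to the competencies it evidences, then flag any competency measured fewer than twice. The activity and competency names below are illustrative examples drawn from this guide, not a fixed catalogue:

```python
from collections import defaultdict

# Hypothetical mapping of activities to the competencies they evidence
activity_map = {
    "Analysis Presentation": {"Strategic Thinking", "Analysing Information",
                              "Presenting & Delivering Messages", "Planning & Organising"},
    "Group Discussion": {"Analysing Information", "Working with Others",
                         "Taking Decisions", "Presenting & Delivering Messages"},
    "Role-Play": {"Empathy", "Conflict Resolution", "Coaching"},
    "In-Tray": {"Customer Focus", "Planning & Organising",
                "Strategic Planning", "Taking Decisions"},
}

def coverage(activity_map):
    """Count how many different methods measure each competency."""
    counts = defaultdict(int)
    for competencies in activity_map.values():
        for comp in competencies:
            counts[comp] += 1
    return dict(counts)

# Competencies violating the "measured at least twice" rule
under_covered = sorted(c for c, n in coverage(activity_map).items() if n < 2)
print(under_covered)
```

Running a check like this during design makes gaps visible early: in this illustrative matrix, for example, "Empathy" and "Coaching" appear in only one exercise, so a second method would be needed before the centre meets the multi-method standard.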
Step 3: Resource Planning & Assessor Calibration
Design is nothing without execution. The final phase involves securing the necessary infrastructure to bring your blueprint to life. This means meticulous planning of resources: securing the right venue (or digital platform), preparing standardised scoring sheets, and ensuring all materials are ready for deployment. However, even with the perfect stage set, the quality of the data depends entirely on the capability of the people observing it.
The Challenge of Unconscious Bias
You can have the most sophisticated Competency Matrix Design and the most realistic activities, but if your human assessors are not trained to assess effectively, the data is compromised. The human brain is wired to make snap judgments. Without rigorous training, observers often fall victim to the "Halo Effect" (favouring a candidate based on a single positive trait) or "Confirmation Bias" (deciding in the first minute and looking for evidence to support that decision). To design a truly objective centre, you must invest in calibrating the people running it.
The ORCE Methodology
We train assessors to strictly follow the ORCE protocol:
- Observe: Watch the candidate without judgment.
- Record: Write down exactly what was said or done (verbatim quotes), not your opinion of it.
- Classify: Afterwards, look at your notes and decide which competency and indicator the behaviour belongs to.
- Evaluate: Only then do you assign a score.
Score Integration Session
The design process doesn't end when the candidate leaves the room. It ends with the integration session. This is where all assessors gather to compare evidence. If Assessor A rates a candidate as a "4" for Communication and Assessor B rates them as a "2," there is a discrepancy. In a well-designed process, these assessors must debate the evidence, not their feelings, to reach a consensus. This calibration ensures that the final hiring decision is robust, fair, and legally defensible.
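The discrepancy check that triggers this debate can be sketched in a few lines. The assessor names, competencies, ratings, and the one-point threshold below are all hypothetical, chosen to mirror the example in the text:

```python
# Hypothetical score sheets: assessor -> competency -> rating (1-5 scale assumed)
scores = {
    "Assessor A": {"Communication": 4, "Taking Decisions": 3},
    "Assessor B": {"Communication": 2, "Taking Decisions": 3},
}

def discrepancies(scores, threshold=1):
    """Return competencies where assessor ratings differ by more than threshold."""
    flagged = []
    competencies = next(iter(scores.values())).keys()
    for comp in competencies:
        ratings = [sheet[comp] for sheet in scores.values()]
        if max(ratings) - min(ratings) > threshold:
            flagged.append(comp)
    return flagged

print(discrepancies(scores))  # ['Communication'] -> must be debated against evidence
```

Flagged competencies are exactly the ones the integration session exists to resolve: assessors return to their verbatim ORCE records for that competency and argue from recorded behaviour, not impressions, until the scores converge.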
Designing a bespoke assessment centre is an investment. It requires time to understand the role, craft the simulations, and train the team. However, the return on investment is substantial. By moving away from generic, off-the-shelf tests and building a process that mirrors your unique reality, you stop guessing and start predicting. You ensure that the people you hire today are the ones who will drive your success tomorrow.
Do you have the skills to run an assessment centre?
Ensure your team is qualified to deliver objective, BPS-standard assessments.



