Custom Psychometric Tool Development and Validation

From concept to validated instrument – we design scientifically rigorous assessments tailored to your specific organizational needs and target populations.

Psychometric Test Development

Approach

Our Custom Psychometric Assessment Development Process

Adhering to best practices in psychometric test development proposed by Van Zyl & Ten Klooster (2022), we employ a rigorous, multi-phased methodology to design robust assessment measures.


The Custom Psychometric Test Development Process

Our Approach

This multifaceted approach ensures a balanced and comprehensive development process for each of our bespoke diagnostic models. Each phase is designed to build on its predecessors, ensuring a thorough understanding and accurate measurement of organizational health within your unique business environment. This methodology guides the development of a robust measurement model and validated diagnostic tools that provide genuine, meaningful insights.

The design process comprises eight critical steps:

  1. Conceptualization & Framework Design
    Begin by defining the constructs to be measured through conceptual analysis, stakeholder engagement, and review of existing frameworks. This forms the foundation for building a behaviourally anchored measurement model.

  2. Competency & Behaviour Mapping
    Translate abstract values or constructs into measurable states, traits, behaviours, competencies or experiences using participatory action research, focus groups, and thematic analysis. Define observable indicators for each competency.

  3. Test Specification & Item Development
    Design the assessment structure: select item formats (e.g., Likert, scenario-based), define response scales, and draft items. Conduct expert reviews and cognitive interviews to refine content.

  4. Pilot Testing
    Administer the draft tool to a representative sample to evaluate factor structure, item performance, reliability, and usability. Use both classical and modern psychometric techniques (e.g., EFA, ESEM, Rasch analysis).

  5. Statistical Validation
    Perform full psychometric validation, including:

    1. Construct Validity (CFA, ESEM)

    2. Criterion Validity (predictive, concurrent)

    3. Measurement Invariance (across groups)

    4. Reliability (internal consistency, inter-rater, parallel forms)
  6. Multi-Method, Multi-Source Integration
    Combine self, peer, and group data with qualitative feedback and objective business metrics to create a holistic understanding of the constructs.

  7. Norming, Scoring & Reporting
    Develop scoring algorithms, benchmarks, and reporting formats. Establish norms based on pilot and validation data. Create clear, stakeholder-ready feedback mechanisms.

  8. Deployment & Continuous Improvement
    Implement via secure web-based platforms with multilingual support. Monitor performance, update norms, and refine items based on ongoing data collection.

Step 1

Operationalising The Diagnostic Framework

Developing a Conceptual Measurement Model

The purpose is to develop a bespoke measurement model centered on the unique states, traits, and behaviours you would like to assess. Here, we identify the inputs that explain and measure the factor and link it to core business outcomes and stakeholder perceptions.

Developing a Predictive Process Model

The purpose is to develop a bespoke predictive process model that identifies the drivers (i.e., antecedents) of the factor you would like to measure and shows how it affects important business and people outcomes. Understanding these drivers makes it possible to pinpoint the specific areas that can be targeted to improve the factor and is also crucial for the validation phase.

Step 2

Design Test Specification & Assessment Methodology

Design the assessment structure and methodology: select item formats (e.g., Likert, scenario-based), define response scales, and draft items. Conduct expert reviews and cognitive interviews to refine content.


The Persona 360 Methodology

Process for Designing Test Structure

In this step, the structure and content of the assessment tool are designed, including item formats, rating scales, and data collection methods. The methodology is tailored to accurately capture both qualitative and quantitative feedback from a diverse range of stakeholders. It involves the following steps:

    1. Define the Assessment Objectives
      Clarify what the assessment aims to measure (e.g., behaviours, traits, perceptions), who the target population is, and how the results will be used for decision-making or development.

    2. Select Item Types and Data Inputs
      Decide on the most suitable question formats (e.g., Likert scales, semantic differentials, open-ended questions, scenario-based items), and identify both subjective (self/peer ratings) and objective (performance metrics) data sources.

    3. Choose the Response Scales
      Determine the appropriate response format (e.g., 5-point, 7-point) and anchor descriptions that best capture the nuance of the construct being assessed.

    4. Determine the Number of Items
      Use statistical tools like power analysis and Monte Carlo simulations to estimate how many items are required to ensure reliable and valid measurement (see the sketch after this list).

    5. Define the Scoring Approach
      Decide whether to use weighted or unweighted scoring, and whether composite or subscale scores are needed for interpretation and decision-making.

    6. Select Psychometric Models
      Identify the most appropriate psychometric evaluation models, such as Classical Test Theory, Item Response Theory, or Rasch modelling, depending on the data structure and test goals.

    7. Integrate Qualitative Analysis Tools
      For open-ended data, apply natural language processing (NLP) tools—like transformer-based models—to analyse sentiment, themes, and respondent narratives at scale.

    8. Design the Administration Method
      Choose the mode of delivery (e.g., online, mobile-friendly), ensure accessibility across user groups, and design the system for real-time data collection and processing.

Step 3

Developing the Diagnostic Measure and Item Pool

The focus in this step shifts to developing a draft diagnostic tool based on the measurement model and test structure established in the earlier phases.

Step 4

Pilot Testing and Item Reduction

Pilot the draft diagnostic tool with a small, representative sample to evaluate its factor structure, item performance, reliability, and usability, and to refine and reduce the item pool.


Pilot Testing

The next step is to pilot the draft assessment tool with a small, representative sample of internal stakeholders to explore its factorial structure, evaluate its effectiveness, and refine the item pool. This phase also assesses the credibility and transferability of qualitative data collected through open-ended questions. Based on pilot feedback and empirical findings, the instrument will be optimized for clarity, reliability, and validity.

The process typically includes:

  1. Administering the draft assessment electronically to a random sample of approximately 300 participants to evaluate factor structure, item relevance, and usability.

  2. Collecting 360° feedback by having each participant rated by multiple peers and their direct supervisor, including measures of key drivers and outcomes.

  3. Conducting exploratory factor analysis (e.g., ML-EFA) and Rasch modelling to examine item performance, remove poorly functioning items, and streamline the item set (see the sketch after this list).

  4. Assessing measurement quality through indicators such as factor loadings and internal consistency estimates (e.g., McDonald’s Omega, Cronbach’s Alpha).

  5. Applying natural language processing to analyse qualitative responses, enhancing understanding through thematic and sentiment analysis.

  6. Evaluating metadata (e.g., response patterns, item difficulty) and collecting user feedback to assess usability and respondent burden.

  7. Finalizing the item pool and developing a scoring algorithm that supports reporting at the individual, team, organizational, and group levels.

  8. Integrating data from self-ratings, peer feedback, open-ended responses, and objective indicators into a comprehensive scoring model for holistic measurement.
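
The sketch below illustrates the kind of ML-EFA and internal-consistency checks described in steps 3 and 4, using the open-source factor_analyzer and pingouin packages. The file name, item-column prefix, three-factor structure, and .50 loading cut-off are illustrative assumptions rather than fixed features of our pipeline.

```python
# Illustrative pilot analysis: ML-EFA with oblique rotation, flagging weak items,
# then internal consistency for the retained set. Data file and column names are
# hypothetical placeholders.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer

pilot = pd.read_csv("pilot_responses.csv")                 # hypothetical pilot data
items = [c for c in pilot.columns if c.startswith("item_")]

efa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="ml")
efa.fit(pilot[items])
loadings = pd.DataFrame(efa.loadings_, index=items)

weak = loadings[loadings.abs().max(axis=1) < 0.50].index.tolist()
print("Candidates for removal (max loading < .50):", weak)

retained = [i for i in items if i not in weak]
alpha, ci = pg.cronbach_alpha(data=pilot[retained])
print(f"Cronbach's alpha = {alpha:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```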

Step 5

Psychometric Evaluation and Validation

Conducting extensive psychometric evaluation to establish the validity, reliability, and overall psychometric quality of the assessment tool.


Psychometric Evaluation

This step involves rigorous statistical analysis to evaluate the reliability, validity, and overall psychometric quality of the assessment tool. The goal is to confirm that the instrument accurately measures the intended constructs and can be applied consistently across diverse populations and contexts. Specifically, this phase focuses on assessing psychometric properties, reliability, measurement invariance, and both concurrent and predictive validity.

The validation process typically includes:

  • (a) Administering the final instrument electronically to a large, representative internal sample (e.g., N=600), using a cross-sectional, online survey-based research design.

  • (b) Identifying key antecedents and outcomes from the theoretical model and constructing an additional survey to assess these variables for criterion validation.

Quantitative data analysis involves:

  • (i) Confirmatory factor analysis (CFA) using supervised machine learning techniques and structural equation modeling (SEM), comparing traditional CFA with exploratory SEM (ESEM). Measurement quality is assessed using indices such as CFI, TLI, RMSEA, SRMR, factor loadings, and reliability metrics like McDonald’s Omega. Poorly performing items are removed (see the sketch after these points).

  • (ii) Testing for measurement invariance across demographic and organizational subgroups (e.g., age, gender, department) by comparing increasingly restrictive models (configural, metric, scalar, strict, latent means). Model changes are assessed using ΔRMSEA, ΔSRMR, and ΔCFI.

  • (iii) Estimating separate structural models to evaluate relationships between the assessment results and business-relevant outcomes, with path coefficients and model fit indices reported.
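
As a minimal illustration of points (i) and (ii), the sketch below fits a CFA with the open-source semopy package, inspects CFI, TLI, and RMSEA, and runs a crude per-group refit as a precursor to formal invariance testing with constrained loadings and intercepts. The two-factor model, item names, and the grouping variable are hypothetical placeholders, not the actual measurement model.

```python
# Illustrative CFA and rough per-group check using semopy; the model syntax,
# variable names, and "department" grouping column are hypothetical.
import pandas as pd
import semopy

data = pd.read_csv("validation_sample.csv")      # hypothetical N ~ 600 dataset

model_desc = """
Engagement =~ item_1 + item_2 + item_3 + item_4
Wellbeing  =~ item_5 + item_6 + item_7 + item_8
"""

cfa = semopy.Model(model_desc)
cfa.fit(data)
fit = semopy.calc_stats(cfa)
print(fit[["CFI", "TLI", "RMSEA"]])              # compare against >= .90 / <= .08 rules of thumb

# Crude configural check: refit the same model within each subgroup. Formal
# metric/scalar invariance testing would constrain loadings and intercepts
# across groups and evaluate changes in CFI, RMSEA, and SRMR.
for name, group in data.groupby("department"):
    m = semopy.Model(model_desc)
    m.fit(group)
    print(name, semopy.calc_stats(m)[["CFI", "RMSEA"]].round(3).to_dict("records"))
```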

Qualitative data analysis includes:

  • (i) Using topic modeling (e.g., LDA) and NLP models (e.g., BERT) to extract and contextualize themes from open-ended responses.

  • (ii) Training supervised learning classifiers to categorize qualitative responses into relevant domains.

  • (iii) Conducting sentiment analysis to gauge emotional tone and stakeholder sentiment using models trained on labelled text data.
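
As a brief illustration of point (iii), the snippet below scores open-ended comments with a pretrained sentiment model via the Hugging Face transformers pipeline. The default model and example comments are purely illustrative; in practice, classifiers would be trained or fine-tuned on labelled, domain-specific text as described above.

```python
# Illustrative sentiment scoring of open-ended survey comments; the default
# pipeline model and the example comments are placeholders.
from transformers import pipeline

comments = [
    "Leadership communicates openly and I feel supported by my team.",
    "Workload expectations are unclear and feedback is rare.",
]

sentiment = pipeline("sentiment-analysis")       # loads a default pretrained model
for text, result in zip(comments, sentiment(comments)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
```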

Finally, the tool is compiled and finalized, and a recommended methodology for future assessments is outlined, ensuring the solution is both scientifically robust and practically actionable.

Step 6

Norming, Scoring, and Reporting System Development

The final stage of the assessment development process involves creating a comprehensive norming, scoring, and reporting system.


Norming, Scoring, and Reporting System Development

The final stage of the assessment development process involves creating a comprehensive norming, scoring, and reporting system. This step consolidates data from both the assessment instrument and relevant objective indicators to produce a standardized framework for score calculation, interpretation, and communication. The aim is to deliver meaningful insights at both the individual and organizational levels, while enabling ongoing benchmarking and progress tracking.

The process typically includes:

  1. Developing a scoring algorithm that integrates data from prior phases, applying appropriate weights to competencies, behaviors, and objective indicators based on their relevance to key performance outcomes (a simplified scoring sketch follows this list).

  2. Establishing clear benchmarks and thresholds to categorize results (e.g., high, moderate, or low alignment) and guide interpretation.

  3. Designing a reporting system that (1) describes the current state of measured constructs, (2) diagnoses the contributing factors to the current scores, (3) forecasts potential changes based on predictive modeling, and (4) recommends targeted interventions to improve future outcomes. Reports are tailored to various stakeholders, from individual feedback to aggregated organizational summaries.

  4. Including visual reporting elements such as dashboards, heatmaps, and trend charts to enhance accessibility and engagement with the results.

  5. Creating user documentation and training resources to ensure consistent, reliable use of the tool across contexts, and equipping key personnel to administer the assessment and interpret findings effectively.
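
The sketch below is a simplified version of the scoring and banding logic in items 1 and 2: a weighted composite over subscale scores mapped onto reporting bands. The competency names, weights, and cut-offs are illustrative assumptions; in practice they would be derived from the validation models and norm data described above.

```python
# Simplified weighted-composite scoring and banding; weights, labels, and
# cut-offs are illustrative, not derived from real norm data.
from typing import Dict

WEIGHTS = {"collaboration": 0.40, "adaptability": 0.35, "accountability": 0.25}
BANDS = [(75.0, "High alignment"), (50.0, "Moderate alignment"), (0.0, "Low alignment")]

def composite_score(subscales: Dict[str, float]) -> float:
    """Weighted composite on a 0-100 scale."""
    return sum(WEIGHTS[name] * subscales[name] for name in WEIGHTS)

def band(score: float) -> str:
    """Map a composite score to its reporting band."""
    return next(label for cutoff, label in BANDS if score >= cutoff)

scores = {"collaboration": 82.0, "adaptability": 64.0, "accountability": 71.0}
total = composite_score(scores)
print(f"Composite = {total:.1f} -> {band(total)}")   # -> Moderate alignment
```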


Psychometric Evaluation and Validation

Our validation protocols not only meet but exceed professional standards (APA, SIOP, ITC) to ensure your assessment delivers reliable, meaningful results.

Construct Validity

Confirmatory factor analysis, convergent/discriminant validity testing, and multi-trait multi-method approaches to verify we're measuring intended constructs.

Criterion Validity

Rigorous testing against relevant outcome measures to demonstrate predictive power for your specific use cases.

Typical Validation Metrics

Internal Consistency (α ≥ 0.90)
McDonald’s Omega (ω ≥ 0.90)
Test-Retest Reliability (r ≥ 0.85)
Convergent Validity (β ≥ 0.70)
Predictive Validity (β ≥ 0.70)
CFI & TLI (≥ 0.90)
RMSEA & SRMR (< 0.08)
Item Discrimination Index (D ≥ 0.40)
Factor Loadings (λ ≥ 0.50)
Measurement Invariance (ΔCFI ≤ 0.01; ΔTLI ≤ 0.01)
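
For the measurement invariance criterion listed above, the small helper below shows how the ΔCFI/ΔTLI ≤ .01 rule of thumb is applied when comparing a constrained model against a less constrained one; the fit values in the example are invented for illustration.

```python
# Helper applying the conventional delta-CFI / delta-TLI <= .01 criterion when
# comparing nested invariance models; the example fit values are made up.
def invariance_holds(cfi_free: float, cfi_constrained: float,
                     tli_free: float, tli_constrained: float,
                     threshold: float = 0.01) -> bool:
    """True if the more constrained model fits no more than `threshold` worse."""
    return ((cfi_free - cfi_constrained) <= threshold
            and (tli_free - tli_constrained) <= threshold)

# Example: configural vs. metric model (illustrative fit values)
print(invariance_holds(cfi_free=0.952, cfi_constrained=0.947,
                       tli_free=0.945, tli_constrained=0.941))   # True
```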

Cross-Cultural Validation

Measurement invariance testing across demographic groups to ensure fairness and comparable score interpretation.

Continuous Innovation

Benefit from our ongoing research and regular updates to keep your assessments at the cutting edge.


Assessment Types

  • 1

    Personality Assessments

    Trait-based, state-based, and dynamic personality measures for selection, development, and clinical applications.

  • 2

    Cognitive Ability Tests

    Verbal, numerical, spatial, and abstract reasoning assessments with culture-fair design principles.

  • 3

    Situational Judgment Tests

    Multimedia SJTs measuring job-specific competencies with realistic work scenarios.

  • 4

    360° Feedback Instruments

    Multi-rater assessments with sophisticated norming and rater agreement analytics.

Advanced Methodologies

  • 1

    Machine Learning Scoring

    Predictive algorithms that go beyond traditional scoring approaches.

  • 2

    Natural Language Processing

    Automated text analysis for open-ended responses and qualitative data coding.

  • 3

    Item Response Theory

    Precision measurement through adaptive testing and computerized adaptive questionnaires.

  • 4

    Structural Equation Modelling

    Advanced structural equation modelling for CFA, EFA, ESEM, and measurement invariance.

Ready to Develop Your Custom Assessment?

Our team of PhD-level psychometricians and data scientists will guide you through every step of creating a scientifically validated instrument.