Clinical judgment isn’t optional in disability assessment.

Structured tools can support consistency, but when they begin to outweigh judgment, context, and lived reality, disability assessment risks becoming more process-driven than person-centred.

There’s a real risk in disability services that we need to talk about more honestly: the more we standardise assessment, the easier it becomes to mistake process for understanding.

And when that happens, people with disability are the ones who carry the consequences.

That’s my concern with the growing reliance on structured assessment tools like the I-CAN V6. Not because I oppose structure. Not because I oppose reform. And certainly not because I think disability assessment should be vague, inconsistent, or unaccountable.

My concern is something else entirely: what happens when a tool starts carrying more weight than the professional judgment required to use it properly. Because once that happens, we risk creating a system that looks rigorous on the surface while becoming less capable of understanding the people it is meant to serve.

That should concern all of us.

Why I’m raising this

I raise this as a clinician, practitioner, and service leader who’s spent over a decade working alongside people with disability and their families. I’ve seen the advocacy fatigue many families live with. I’ve seen the exhaustion that comes from having to repeatedly explain needs that are obvious to the people carrying them every day. And I’ve seen how easily complexity gets flattened when systems favour tidy process over real understanding.

To be clear, I can see the value in the intention behind the I-CAN V6. It appears to reflect an effort to move away from purely deficit-based thinking and towards a more contemporary, more respectful way of understanding support needs. That matters. People with disability shouldn’t be reduced to what they can’t do. They should be understood in the context of their strengths, their environment, the support they require, and the life they’re trying to live.

But good intentions aren’t enough.

Because the I-CAN V6 isn’t self-interpreting. It depends heavily on the capability of the person completing it: what they ask, what they notice, where they probe further, how they interpret responses, and how they translate those responses into scoring.

From my understanding and experience of the process, it uses a combination of frequency of support and level of support to arrive at an overall support intensity. That may sound straightforward. In practice, it isn’t.

It is interpretive. It is nuanced. And if it is applied without enough skill, experience, and judgment, it risks creating something that can be more dangerous than inconsistency: false confidence.
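To make that concern concrete, here is a deliberately simplified, purely hypothetical sketch. The bands, labels, and thresholds below are invented for illustration only; they are not the I-CAN V6’s actual scoring rules, which I am not reproducing here. The sketch simply shows how two interpretive judgments, once collapsed into a single intensity band, can hide how close a person sat to a boundary.

```python
# Hypothetical illustration only. These categories, numbers, and thresholds
# are invented for the sake of argument; they are NOT the I-CAN V6 scoring rules.

# Assessor ratings for a single support area (higher = more support needed).
FREQUENCY = {"rarely": 1, "weekly": 2, "daily": 3, "constant": 4}
LEVEL = {"prompting": 1, "partial assistance": 2, "full assistance": 3}

def support_intensity(frequency: str, level: str) -> str:
    """Collapse two interpretive judgments into one overall intensity band."""
    score = FREQUENCY[frequency] * LEVEL[level]
    if score >= 9:
        return "extensive"
    if score >= 5:
        return "moderate"
    return "low"

# Two assessors observe the same person on the same day. One records
# "partial assistance", the other "full assistance" -- a judgment call,
# not a measurement.
print(support_intensity("daily", "partial assistance"))  # -> moderate
print(support_intensity("daily", "full assistance"))     # -> extensive
```

The particular numbers are beside the point. What matters is that a single judgment call at the boundary, “partial” versus “full” assistance, moves the result from moderate to extensive, and the final label carries none of that uncertainty with it.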

What concerned me most

That became very clear to me during the assessor training.

I went through eight versions of the assessment before finally passing. On my final attempt, I even requested a video call because I wanted direct clarity about where I was going wrong. I wanted to properly understand the expectations. But even after that, there were still moments where feedback appeared to shift, or where earlier scoring and wording were effectively revisited and changed again.

That’s what stayed with me. Not whether I passed, but what the process revealed.

If an experienced clinician can go through eight versions, seek direct clarification, and still come away feeling that parts of the process are highly interpretive and at times inconsistent, then we need to ask a serious question: what does this look like when implemented more broadly, and what does that mean for the people whose lives may be shaped by the outcome?

Because these aren’t harmless technicalities. These assessments shape how people are understood. And when people are understood poorly, the consequences are real.

Three concerns we can’t ignore

1. A structured tool is only as strong as the judgment behind it

The I-CAN V6 depends heavily on the skill and reasoning of the assessor. What they ask, what they notice, what they probe, and how they interpret support intensity all shape the outcome.

That means assessor capability is not a side issue. It is central to the integrity of the process.

2. Consistency on paper does not always mean accuracy in practice

As part of the training process, I used a real case I knew well and had recently assessed through a broader Home and Living Assessment. The family agreed because we wanted to compare what a comprehensive functional assessment captured against what the I-CAN V6 process would produce.

My original version felt accurate. More importantly, the family felt it reflected their lived reality. They described it as one of the best pieces of work I had done for them: succinct, clear, and representative of what their son’s life actually looked like.

But as the assessment moved through the training process, the language changed, the scoring changed, and the overall picture, in my view, became less representative of the reality we were trying to capture.

When I later showed that version to the family, their response was immediate and confronting: if I had found the process this difficult to complete accurately, how could they trust someone else to get it right? And how could their son’s needs, in areas requiring such high levels of ongoing support, end up being reflected as moderate rather than extensive?

That question gets to the heart of the issue.

3. The risk is not just poor process, it is distortion

Once complex support needs are compressed into categories that don’t reflect lived reality, we are no longer just talking about assessment methodology. We are talking about distortion. We are talking about a system becoming more comfortable with what it can score than with what it actually needs to understand.

And that should give us pause.

It’s also important to acknowledge that the NDIA has indicated the I-CAN assessment will be introduced through a phased rollout from mid-2026, with further testing and refinement already underway. That should be seen as a positive step. Any effort to improve the framework and make it more fit for purpose should be welcomed.

But the fact that the assessment is being amended and trialled does not, in itself, resolve the concerns raised here. If the revised approach still relies heavily on structured scoring, assessor interpretation, and standardised processes to capture complex lives, then many of the core risks are likely to remain. Inconsistency, oversimplification, and the loss of real-world context do not disappear simply because a tool is refined.

The real question is whether the amended assessment will genuinely protect the role of clinical judgment and reflect the lived reality of the people whose futures may be shaped by it. Unless those issues are meaningfully addressed, there is still a real danger that the process will look more certain than it actually is, while failing to capture the full complexity of people’s support needs.

Why this is so important

People with disability do not live in neat domains. Their lives are shaped by communication, fatigue, regulation, behaviour, safety, environment, relationships, co-occurring issues, and the constant interplay between vulnerability and support. Anyone who has worked closely with families over time knows that what looks manageable on paper can be unsustainable in practice.

That’s why clinical judgment isn’t an optional extra. It isn’t a nice-to-have that sits beside a tool. It is the thing that protects the integrity of the process.

I learned that years ago in the early days of the NDIS, when I was involved in a planning meeting for a client with very high support needs. A planner initially struggled to grasp the complexity being described. It was only after they saw the situation more directly that the reality landed.

That experience has stayed with me because it exposed something important: some lives can’t be properly understood through surface-level process alone.

And yet that is exactly the risk we run when we become too confident in standardised systems.

Over the past decade, we’ve written detailed assessments, reports, and supporting documents not because anyone enjoys bureaucracy, but because for many families that is the only way to make the full picture visible. Families are already carrying enough. They shouldn’t also have to battle a system that mistakes simplified scoring for genuine understanding.

What we need to protect

This isn’t about resisting change. It’s about protecting integrity.

Structured tools can support consistency. They can support communication. They can support better decision-making when used well.

But they cannot replace:

  1. Experienced professional judgment
  2. Context
  3. The responsibility to see the whole person


If we forget that, we risk building a disability assessment system that looks more efficient, more standardised, and more defensible on paper, while becoming less human and less accurate where it matters most.

Final reflection

People with disability are not scores. They are not categories. They are not administrative profiles to be processed.

They are human beings with strengths, relationships, risks, hopes, vulnerabilities, and needs that deserve to be understood properly.

Any assessment process that helps shape their future should be judged by one standard above all others: does it help us see the person more clearly, or does it simply make them easier for the system to sort?

If it is the second, then we should be very careful what we call progress.

Andrew Charalambous is an Occupational Therapist, Behaviour Support Practitioner, Osteopath, and Founding Director of Back to Basics Health Group. He is passionate about clinical integrity, individualised support, and making sure disability services reflect the real complexity of people’s lives.