Addressing the Future of I/DD AI Tools: What Providers are Saying
Across the health care industry, artificial intelligence (AI) is making significant strides in streamlining administrative tasks and supporting clinical decision-making and care. But for providers serving people with intellectual and developmental disabilities (I/DD), including those with autism, there’s still confusion about whether AI can be applied feasibly and safely in the I/DD space.
Core Solutions recently hosted an AI executive roundtable, “Shaping the Future of I/DD and Autism Services,” to bring clarity and shared understanding to AI’s capabilities and limitations and to discuss current and future AI use cases in I/DD care. Here are key takeaways from this highly engaging and informative meeting.
The State of I/DD Health Care: Needs and Barriers to Digital Transformation
Leaders across I/DD programs aspire to use AI in ways that will deliver efficiencies while transforming their operations and care delivery. They have many ideas for leveraging AI to address unique provider needs and better serve their I/DD population, but internal and external pressures on both providers and payers make adopting these solutions difficult.
Challenges Facing Providers Who Serve the I/DD Population
In recent years, I/DD providers have felt the strain of urgent problems from many sides. A staffing crisis has left resources thin, and care delivery has been slowed further by a lack of provider training and by language barriers for staff who speak English as a second language.
The potential for Medicaid cuts, including a possible reduction of the 90% federal match rate for the Affordable Care Act (ACA) expansion, has left payers, providers, and states worried about the quality of care that can be provided, about loss of access to care for large swaths of the I/DD population, and about funding and reimbursement for I/DD health care services. There are also significant concerns around the implementation of work requirements for people with disabilities, which would further threaten care access and client volumes; the potential for disruption to I/DD waivers from the restructuring of the Centers for Medicare & Medicaid Services (CMS); and legislation that could lead to reductions in federal Medicaid funding.
In these uncertain times, I/DD AI adoption may seem like a low priority — but the efficiency it creates for back-end operations and the care delivery areas it can support, like AI in clinical decision-making and routing for care, are hot topics among leaders in the space.
What Leaders Would Most Like to See From I/DD AI Technology
Given the demands on their time, I/DD leaders are focusing on AI use cases with the highest potential impact. Roundtable attendees discussed several use cases they’d like to see in their practices.
Lower Burnout and Churn Rates for Direct Support Professionals (DSPs)
In recent years, DSP turnover rates have hovered around 50%. A 2023 survey suggested that leaders focus on recognition, education and skills development, and better supervisor relationships to improve retention. To gain back time for learning and development and the direct client care that they most enjoy, I/DD providers are eyeing the use of AI in clinical decision-making and other areas, including scheduling, note-taking, and reporting. Some AI technologies have already been implemented but aren’t well-integrated into existing systems, said one roundtable participant.
More Positive Outcomes
Several attendees stressed the importance of delivering better care and outcomes with the help of I/DD AI, including finding ways to share and learn from the immense amount of data and insights that machine learning-backed solutions can gather. One possibility discussed was comparing similar client profiles and identifying which treatment approach had better results. This focus on quality over quantity of services aligns well with a move into value-based care and payment, noted an attendee.
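To make the idea concrete, here is a minimal sketch in Python of how outcomes might be compared across clusters of similar client profiles. The column names (profile_cluster, treatment_approach, outcome_score) are hypothetical placeholders, not fields from any specific I/DD platform.

```python
# A minimal sketch of the cohort-comparison idea discussed above.
# All column names and values are hypothetical placeholders.
import pandas as pd

# Each row: one client, the cluster of similar profiles they belong to,
# the treatment approach used, and a numeric outcome measure.
records = pd.DataFrame({
    "profile_cluster":    ["A", "A", "A", "B", "B", "B"],
    "treatment_approach": ["plan_1", "plan_2", "plan_1",
                           "plan_1", "plan_2", "plan_2"],
    "outcome_score":      [62, 78, 65, 70, 74, 81],
})

# Within each cluster of similar clients, compare average outcomes
# by treatment approach to see which performed better.
summary = (
    records
    .groupby(["profile_cluster", "treatment_approach"])["outcome_score"]
    .agg(["mean", "count"])
    .reset_index()
)
print(summary)
```

In practice, such a comparison would require matched cohorts, larger samples, and clinician review before informing any treatment decision; the sketch only shows the shape of the analysis.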
Better Documentation to Support Consistent Revenue Streams
When provider notes are inaccurate or don’t comply with regulations and standards, opportunities for billing revenue are missed. This challenge is particularly complex for I/DD providers, who often meet and speak with multiple people simultaneously rather than holding 1:1 client sessions. Ambient listening and documentation AI can simplify these complex encounters: hands-free recording frees providers to engage more actively in the moment, and the AI can then summarize notes, highlight likely symptoms, and recommend treatments based on the discussion.
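As a rough illustration of that flow, the Python sketch below wires together the three stages just described. The functions transcribe_audio, summarize_transcript, and extract_symptoms are hypothetical stand-ins for whatever speech-to-text and language-model services a given vendor provides; they are not real library calls.

```python
# A simplified sketch of an ambient-documentation pipeline.
# The three service functions are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class SessionNote:
    transcript: str
    summary: str
    flagged_symptoms: list[str] = field(default_factory=list)

def transcribe_audio(audio_path: str) -> str:
    """Placeholder: send the hands-free recording to a speech-to-text service."""
    raise NotImplementedError

def summarize_transcript(transcript: str) -> str:
    """Placeholder: ask a language model for a structured note summary."""
    raise NotImplementedError

def extract_symptoms(transcript: str) -> list[str]:
    """Placeholder: ask a model to flag likely symptoms for human review."""
    raise NotImplementedError

def document_session(audio_path: str) -> SessionNote:
    # The provider records hands-free during the encounter;
    # everything below runs afterward, and the output should be
    # reviewed by a clinician before it enters the record.
    transcript = transcribe_audio(audio_path)
    return SessionNote(
        transcript=transcript,
        summary=summarize_transcript(transcript),
        flagged_symptoms=extract_symptoms(transcript),
    )
```

The key design point is that the pipeline only drafts documentation; a human still reviews every output before it supports billing or treatment decisions, consistent with the guardrails discussed later in this article.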
Enhanced Person-Centered Support
With no clear end to the DSP staffing crisis in sight, an effective use case for I/DD AI could be more personalized, AI-generated care planning that provides people being supported with education and resources for living an independent life, noted one participant. Tools for AI in clinical decision-making are already supporting treatment plan generation in multiple areas of health care, and machine learning could help create learning paths that empower people with I/DD and better engage them in their journeys.
Lessons for I/DD Health Care Leaders From Recent AI Explorations
The possible benefits of I/DD AI are impressive, and tools already used in other health care fields have shown that goals like those above can be attained quickly.
Still, as a relatively unknown commodity in I/DD care, AI has raised many questions among leaders hesitant to commit to the purchase and implementation of AI solutions. Roundtable participants reviewed the biggest obstacles standing in the way of the fair and safe use of AI and steps to take to safeguard their facilities and client information from potentially harmful data collection and sharing.
Hurdles to Trust and Belief in I/DD AI Technology
Large language models (LLMs) are the engine driving AI data outputs, but despite their processing power, they have multiple weaknesses, shared Core Solutions President Ravi Ganesan. These include:
- Data dependency. The quality of the data an LLM is trained on may be questionable, and the data may not be relevant to certain audiences, depending on where it originated.
- Limited understanding and context. AI doesn’t understand the outputs or information it provides and has context and memory limitations in how it parses data.
- Inaccurate or unreliable outputs. Poor sources may produce factually incorrect or biased information.
- Ethical and legal challenges. AI may illegally reproduce intellectual property, generate manipulative content like deepfakes, and surface harmful information.
There are further ethical considerations that providers must take into account to protect their facilities and I/DD population, Ganesan added, like the level of autonomy I/DD AI is given, patient awareness of when AI is used, and accountability for AI-driven decisions.
While these risks might seem daunting, they can be mitigated, Ganesan shared, through human intervention and transparency.
Supporting the Ethical and Safe Use of AI for I/DD Service Delivery
Unlike in other industries, AI in clinical settings can’t be granted full autonomy, said Ganesan. Although administrative duties may be largely automated, the stakes are too high to leave humans out of decision-making that impacts care, or out of reporting that can influence industry-specific approvals, like the ability to bill for services covered by I/DD waivers.
It’s also essential to have a real person review how the AI they’re using was built, since clinical AI platforms often start with a foundation model from a provider like OpenAI or Meta and are then trained with additional data. Reputable vendors should be able to produce a model card, Ganesan said, which shares this information and helps buyers make a more informed decision. Also, as more payers use AI to approve or deny claims, providers should ask how the AI is making approval determinations to ascertain whether the data it’s fed is biased toward denials.
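As a hedged illustration of what such a review might cover, the sketch below lists the kinds of questions a model card can answer for a buyer. The field names are generic examples, not a standard format, and actual model cards vary by vendor.

```python
# Illustrative only: the kinds of questions a buyer might expect a
# vendor's model card to answer. Field names are generic examples,
# not a standard schema.
model_card_checklist = {
    "base_model": "Which foundation model and version did the vendor start from?",
    "fine_tuning_data": "What additional data was used, and where did it come from?",
    "intended_use": "Which clinical or administrative tasks does the model support?",
    "known_limitations": "For which populations or scenarios does accuracy drop?",
    "bias_evaluation": "How were outputs tested for bias (e.g., toward denials)?",
    "human_oversight": "Which decisions require human review before action?",
}

for field_name, question in model_card_checklist.items():
    print(f"{field_name}: {question}")
```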
When using AI in direct care, providers should engage in good clinical practice and be open and transparent, getting informed consent and explaining what information is being collected and how it will be used.
With these guardrails in place, I/DD providers are more likely to have the confidence and certainty needed to pursue and adopt AI tools that deliver on all of their goals.
How Core’s Development Efforts Are Laying the Foundation for I/DD AI
Over the past few years, Core Solutions has developed multiple behavioral health AI solutions, including “Core Clinician Assist” tools that surface hard-to-identify symptoms, provide health-related social needs (HRSN) information at the point of care, identify anomalies in operations and care delivery, and enhance revenue cycle management. Core’s Cx360 GO mobile application combines several functionalities, including transcribing provider-client sessions anywhere, summarizing notes, and providing insights on symptoms and treatment options.
These are just some of the tools that are strengthening operational efficiency by reducing the time behavioral health staff spend on monotonous administrative tasks and helping to improve care and outcomes by leveraging AI in clinical decision-making. Their applications across I/DD and autism spectrum disorder (ASD) care can give leaders a head start in addressing their own urgent needs.
Existing behavioral health AI models are already functional within Core Solutions’ platform. If you would like to see them in action, contact us today.
Terence Blackwell Jr. is an Executive Management Consultant at Core Solutions.