
The Near-Term Potential of AI Remains Shrouded in Regulatory Uncertainty

Published Date: October 14, 2025


Artificial Intelligence (AI) offers enormous potential to improve oncology care and research by automating myriad time-consuming processes, yet questions over its regulation and application could create stumbling blocks, according to a panel of experts at the 2025 NCODA Oncology Institute.

The session, “From Practices to Manufacturers: The Growing Influence of AI Across Oncology,” featured:

♦ Lekan Ajayi, PharmD, MBA, Chief Operating Officer, Highlands Oncology, based in northwest Arkansas, one of the largest physician-owned cancer programs in the country.

♦ Sanjay Juneja, MD, hematologist and oncologist, Citrus Oncology, a multispecialty platform that manages side effects and preexisting conditions in individuals with cancer, and Vice President of AI Clinical Operations, Tempus AI, a technology company that promotes the adoption of AI to advance precision medicine.

♦ Stephen Speicher, MD, MS, Head of Clinical Oncology & Safety, Flatiron Health, a company that builds software and delivers services for the oncology research community.

The panel discussed a variety of topics pertaining to AI:

WHAT GETS YOU MOST EXCITED ABOUT AI?

LA: “When you hear this term Large Language Model (LLM) and its sister component Natural Language Processing (NLP) … they basically mean something can understand the context, the words within something. So, if you asked ChatGPT, it could give you a Kayak receipt because of pattern recognition. It can understand what you are saying contextually and give you an answer.

“You can imagine how exciting that is for records. For example, you could get the context of a stage and know whether you needed BRCA germline or somatic if it’s pancreatic. NLPs and LLMs allow us to finally query the aggregate tragedy that is cancer diagnoses and outcomes, and actually have it harmonize to be able to process and say what happened, what didn’t and why?”
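The kind of free-text extraction described here can be sketched in miniature. The snippet below is a hypothetical, rule-based stand-in for what a clinical NLP system might do (a real pipeline would use a trained language model with negation and context handling, not a regex); the `extract_stage` function and the sample note are illustrative only.

```python
import re


def extract_stage(note: str) -> "str | None":
    """Pull a Roman-numeral stage mention (e.g. 'stage IIIA') out of free text.

    Toy pattern matcher for illustration; production clinical NLP relies on
    trained models rather than hand-written rules.
    """
    match = re.search(r"\bstage\s+(IV|III|II|I)([A-C]?)\b", note, re.IGNORECASE)
    if match:
        return (match.group(1) + match.group(2)).upper()
    return None


note = "67yo F with stage IIIa pancreatic adenocarcinoma; germline BRCA testing pending."
print(extract_stage(note))  # -> IIIA
```

Even this crude version shows why structured extraction matters: once the stage is a field rather than a phrase buried in a note, records can be queried and aggregated the way the panelists describe.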

SS: “I’m hyper-focused on the point of care. I think back to when I was practicing full time and the things that kept me awake at night. There is always this lingering thought for oncologists, specifically, ‘Am I doing the right thing for my patient? Is there a better treatment option? Are there other diagnostic modalities I should be exploring?’ I was constantly in this doom spiral as I was practicing. The idea that we’re bringing these tools that will finally allow us to bring the most up-to-date research and cutting-edge technology to the point of care really gets me excited.

“At a broad level, as I think about AI and healthcare, I feel like I’ve grown up in the time of digital innovation. I remember in medical school, we were still on paper charts and just starting to transition to electronic medical records. So, I’ve seen a full transition and how slow we’ve been bringing digital tools into healthcare delivery and now, for the first time, we’re not that far behind.”

LA: “I am really excited about how the environment is going to interact with AI. One of my managers came to me a few days ago. We have Microsoft Copilot and all the licenses, and he’s ready to start. He came to me and asked, ‘Do I have the go-ahead to press Go?’ I said, ‘Hold on a minute.’ Because what he didn’t know was that three days earlier someone had come to me and said, ‘I have Microsoft Copilot on my license and was able to search what every other person was doing within the organization.’ So, it’s really amazing how people are going to interact with these tools and the environment.

“I am excited also to see how that’s going to impact the culture of the organization. When AI came out, we said, ‘Is that going to take our jobs?’ How do we have that conversation in a way that’s meaningful? How do we have conversations around ROI (return on investment)? How do we have conversations about things like security and system implementation? And education and the guardrails around it? That’s what I find exciting as a COO, because it makes my life easier when those things are in place.”

SJ: “AI is so pervasive in virtually every aspect of healthcare. In drug development, AlphaFold really cuts down the time that scientists need in the lab to run permutations of certain proteins. It can just spit out millions. It’s logarithmically faster.

“And then you have AI that can do clinical trial matching and accelerate trials being filled. Why? Because instead of having disparate systems where you have a person in a brick-and-mortar going through all the charts and asking the doctor, you instead can understand context, you can understand those receptors, you can understand what line they’re on.

“From the patient side, FDA pending, you can figure out if someone’s diabetic just by them talking into their phone. There’s this subaudible, appreciable change in what are called ‘shrills’ and ‘thrills,’ the frequency of their voice. And that has a very high specificity for whether their sugars are over 200 or not, just by talking to a phone.”

HOW COULD THE CURRENT REGULATORY ENVIRONMENT AFFECT AI DEVELOPMENT AND UTILIZATION IN HEALTHCARE?

SS: “That’s the billion-dollar question. How is this regulatory environment going to change and evolve over the next three to six months? We’re looking at the end of the year and trying to take a look at some past state regulations. How is that going to impact developers? How is that going to impact the utilization of these tools? There’s a growing desire amongst a variety of stakeholders for some sort of framework or federal regulation around this, but it doesn’t feel like that’s going to come.

“So, what does that mean for the overall landscape? Last year alone, there were a thousand bills presented across several different states over the regulation of AI. You see a plethora of different types of regulation: some that are very industry-specific and hyper-focused on healthcare, and others that are pan-industry, spread across a variety of sectors, that could implicate different healthcare applications and make compliance incredibly challenging.

“There’s obviously a push and pull of who is interested in regulation and who isn’t. What we know from the data is that physicians and clinicians who are using these tools are very interested in seeing this being regulated in some capacity.

“The American Medical Association put out a really interesting survey last year that asked their clinician population what things worried them most. And they are worried about the lack of regulations in this space and what that may mean for these tools. They’re really excited to use them. At the same time, they need to know that they are safe and performing at a level we would expect.

“So, you have the clinicians … wanting these regulations, but then there’s a disparate regulatory environment that can cause all kinds of issues. If we start to see a world in which there are certain states regulating AI at one level and certain states that don’t have regulation, it’s also going to create an interesting environment. You can anticipate a small start-up trying to get tools into the hands of clinicians … going to a state with a potentially friendlier regulatory environment.

“What does that mean for the type of care that is being given? Is that a good thing? Or bad thing? What does that mean for the clinicians who are actively using these tools versus those who don’t get exposure to them? When working across all the Electronic Health Record (EHR) vendors, there’s a general consensus that a framework of disparate approaches is very difficult to comply with and really not going to be ideal. So, we’re hoping for some kind of consensus across either the federal landscape or elsewhere to make sure that we have some sort of standards … so we can actually get these tools in the hands of providers and ensure they are safe.”

LA: “The regulatory issue could raise a whole new kind of inequity. We usually think about socioeconomic status, proximity to an academic medical center and other such factors. But now you’re talking about what state you live in dictating potentially more insight … a physician in one state has that insight on top of guidelines and real-world data, and the other doesn’t. And that’s crazy.

“On top of that, there’s also a consensus on what is good enough. Because if something is performing in the real world of human fallibility at 40%, is 70% good enough (using AI)? Or, is it the same concept as a self-driving car? ‘Well, it’s the wrong 30%, so, we’ll stick with the 40% humans do today because 70% or 80% isn’t good enough.’”

SS: “When we think about that kind of inequity, where does the regulatory burden land? It might land on the developer or the deployer, or it might land on the part of the practice actually implementing these tools. So, all of a sudden, I’m a small practice that desperately needs these AI tools because of efficiency gains. I need them to keep my clinicians up to date. I have two doctors and a small staff, but all of a sudden, I have this regulatory burden that I need to comply with. How are they expected to do that? Once again, that digital divide begins to creep up even more and you only have your academic medical centers that are able to use these tools.”

SJ: “We have seen this in the insurance landscape where there’s legislation against some insurance companies, how they can’t use AI, but that varies by state. There’s definitely going to be disparities across the board. And we all know the administrative burden that puts on physicians to always come back and argue reviews and claims and things like that. I think the impact of it is real and the inequity can be something profound if we don’t have standards that address everything across the board.”

WHAT REGULATORY CONCERNS HAVE YOU COME ACROSS?

SS: “I think it depends on who you ask. We did a survey last year of all the EHR vendors on behalf of the end users, the physicians using some of these tools. One question we asked: ‘What would you like to see in that regulatory space?’ One of the primary things we continue to hear is that they want basic visibility to know what is going on. At the bare minimum, we need to make sure we’re informing physicians or whoever is using that data or that tool that this is generated by AI. There has to be transparency there. I think there has to be some sort of attribution or reference.

“The idea of the human in the loop continues to make its way back into regulatory conversations. How do we make sure as they are being deployed and used, and not just told by the developer, that there is a human in the loop?

“State by state, there are some really crazy things they are coming up with and a lot of it comes from what they think is most relevant to their state and to the industries within their state.”

WHAT METRICS BEYOND ROI ARE YOU USING TO EVALUATE THE LIFT REQUIRED TO INTEGRATE OR LINK TO THESE TOOLS?

SJ: “There’s a lot of things, like turnaround times. That’s really important in the service you are delivering, how quickly people are getting things done. It’s a huge patient satisfier. We’ve seen a lot of that in our infusion space where we’ve been able to optimize space and drive down infusion wait times. The turnaround time for prior authorizations could be a huge win, too, as we begin to look at that data.

“Even in things as simple as call centers where you can triage calls, there are some solutions that are being explored. We receive close to 1,000 calls per day. You can imagine the amount of manpower that it takes to process those calls and triage them. So, things like SMART Triage can help ease the burden on your staff.”

SS: “This is one of the first times where I’ve seen something like physician satisfaction play into the ROI conversation enough to actually invest in something. We have a huge problem with physician burnout. Physicians are desperate for things that are going to help with basic efficiencies and make their lives easier. So, that has been enough that, for the first time, I’ve seen at least some investment in some of the ambient AI dictation work and other tools, knowing that it’s going to help over time.”

DO YOU DISCLOSE TO THE PATIENT IF YOUR PRACTICE IS USING SOMETHING LIKE DEEPSCRIBE DURING THE VISIT?

SJ: “Yes, we’re disclosing to patients and getting their consent before we do that. We had some discussion with the company whether this could be a tool that the patients and caregivers could take home. Because when you are a cancer patient, you don’t remember much of what you are saying (in the doctor’s office). And the answer was ‘No’ for now, just because there are still so many elements and variables that they can’t control yet. (If a patient refuses consent) … then you don’t use it.”

SS: “In the future, it might be impossible to decouple AI from the process so consent becomes irrelevant. I had an interesting discussion with a lawyer about when AI becomes best practice and using only a clinician becomes grounds for malpractice. What is the threshold? When do we get there? We have to start asking those questions. Because the pace of change, the pace at which these tools are evolving, is so rapid that we have to think about these things before it is too far gone.”

SJ: “There are probably security reasons why some people may not want you to record them. It’s valid. Those are things you have to respect. I think it’s going to have to be a cultural and potentially also a generational shift as well.”

SS: “That privacy piece is such an important one. How are you communicating that to your vendors? Are you asking what are they doing with these recordings? Because at the end of the day, you have to equip your doctors to answer those questions when consenting a patient.”

SJ: “The number one thing we ask our vendors is, ‘What are you doing with this data? How are you going to use it? Are you going to use it for marketing purposes? And who owns the data?’ I think these questions are going to become more and more valid. Also bringing vendors in and onboarding them from a security standpoint to make sure there are minimum security requirements, I think that’s also key as well.”
