UCF Helps Develop AI Tool That May Assist Underserved Hospitals

The open-source AI model analyzes medical images, generates detailed reports, answers clinical questions and integrates multimodal data to streamline diagnostics and improve accuracy.

As the fields of healthcare and technology increasingly evolve and intersect, researchers are collaborating on the best ways to use emerging technologies such as artificial intelligence (AI) to care for patients.

This includes using AI to assist in collecting and deciphering diagnostic data among medical professionals, particularly in underserved communities.

One such model, “BiomedGPT,” has shown great potential to democratize healthcare and reduce disparities among patients. The model is detailed in a study published in Nature Medicine.

The AI tool was developed by a collaborative team of researchers led by Lehigh University that included Chen Chen, an associate professor at UCF’s Center for Research in Computer Vision (CRCV).

Chen says there are many existing examples of AI used in healthcare, but most are highly specialized and perform only a limited set of tasks.

However, BiomedGPT can perform multiple tasks, including image classification, report generation and visual question answering, and is designed to be computationally efficient and open-sourced to foster collaboration, according to the study.

BiomedGPT could find a niche in bolstering underserved hospitals that may not have a robust number of personnel, Chen says; for those facilities, relying on shared knowledge from medical networks via BiomedGPT could be of great help.

“In these hospitals, they may not have a lot of physicians or clinicians who can address a case immediately, or they don’t have enough resources to diagnose,” Chen says. “This powerful AI tool is able to provide that knowledge to help reduce disparities in healthcare.”

The model is open source, which means practitioners can use the framework, plug in their own data and collaborate on reviews among themselves in a community network.

BiomedGPT also aims to be a generalist model, meaning it is comprehensive and thorough enough to be applied to a wider breadth of medical data and analysis, Chen says.

“BiomedGPT is a unified AI model that is able to process a variety of data and perform multiple tasks,” he says. “So, this is useful, because it can potentially streamline the healthcare workflow, improve diagnostic accuracy and reduce the need for multiple specialized systems. This model can even generate reasonable results on tasks or data it hasn’t been trained on before.”

Leveraging UCF’s Expertise in Computer Vision

Chen used his computer vision and machine learning expertise to help develop the AI model’s ability to understand medical images.

“My role was to figure out how we can extract useful information from visual data, especially for medical imaging, and how we can integrate this information with other types of data modalities like text,” Chen says. “Imaging modalities are a big part of this because in healthcare, we have a lot of imaging data such as X-rays, CT scans and MRIs.”

A clinician can upload an image and enter queries into BiomedGPT such as “What disease does this image depict?” or “Please determine the patient’s eligibility by comparing the given patient note and clinical trial details” and receive feedback based on the data integrated into the AI model’s framework.
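As a rough illustration of that image-plus-question workflow, the sketch below uses the generic visual question answering pipeline from the Hugging Face transformers library with its default general-purpose model as a stand-in. BiomedGPT’s actual checkpoints and interface may differ, and the image path here is only a placeholder.

```python
# Minimal sketch of the "upload an image, ask a clinical question" workflow.
# Uses a generic visual-question-answering model as a stand-in; BiomedGPT's
# own weights and interface may differ. "chest_xray.png" is a placeholder.
from transformers import pipeline
from PIL import Image

# Load the medical image a clinician wants to ask about.
image = Image.open("chest_xray.png")

# Build a visual-question-answering pipeline (downloads a default model).
vqa = pipeline("visual-question-answering")

# Pose a clinical-style question about the image.
answers = vqa(image=image, question="What disease does this image depict?")

# The pipeline returns candidate answers with confidence scores; a human
# expert still reviews them and makes the final call.
for candidate in answers:
    print(candidate["answer"], round(candidate["score"], 3))
```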

According to the study, BiomedGPT exhibits robust prediction ability with a low error rate of 3.8% in question answering, satisfactory performance with an error rate of 8.3% in writing complex radiology reports, and competitive summarization ability with a preference score nearly equivalent to that of human experts.

Chen emphasizes, though, that clinicians and experts are ultimately responsible for reviewing the accuracy of AI predictions and supplementing the data.

“We are not trying to replace the clinician, but rather to enhance or make their workflow more efficient,” Chen says. “A physician can look at an AI report and, perhaps for some of the less complex cases, quickly check to see if it is correct. The human will still be involved, and with their expertise, they can make the correct prediction or diagnosis.”

He says the model is designed to be computationally friendly and fully open source.

“This is trying to foster collaborations with research institutes and hospitals to use this and also improve the model over time,” he says.

Next Steps

The study and analysis of BiomedGPT are promising, but there is still much to refine, Chen says.

New datasets and imaging modalities could be integrated, and further evaluation remains to be done on the platform’s safety, equity and bias.

“One thing we are looking to incorporate is more diverse data and modalities,” he says. “For example, we can include more video data and physiological signals like EKGs and heart rate monitoring. Another direction we want to address is some of the most important issues in healthcare AI in general, like privacy, bias and fairness. Bias is an important consideration in developing this kind of model, to make sure that it is able to generalize well for a wider population.”

The University of Georgia, Harvard University, Massachusetts General Hospital, University of Pennsylvania, Children’s Hospital of Philadelphia, University of California, Santa Cruz, The Mayo Clinic, Samsung Research America, Stanford University and UTHealth (University of Texas) also contributed to this research.

The BiomedGPT open-source model is available strictly for academic research purposes.

Researcher’s Credentials:

Chen is an associate professor at UCF’s CRCV and previously served as a postdoctoral scholar for the center from 2016 to 2018. His main research interests are computer vision, image and video processing, and machine learning. In 2016 Chen earned his doctoral degree in electrical engineering from the University of Texas at Dallas. He is a senior member of the Institute of Electrical and Electronics Engineers and a member of the Association for Computing Machinery.