
Designing an Intelligent Chest X-ray Abnormalities Detection (ICXAD) System using AI

Getting Started with ICXAD using AI

What is needed for automated abnormalities detection? (What is this for?)

Have you ever had a fracture? It’s painful, right? When you break an arm, radiologists help save the day—and the bone. These doctors diagnose and treat medical conditions using imaging techniques like CT and PET scans, MRIs, and, of course, X-rays. Yet, working with such a wide variety of medical tools, radiologists face many daily challenges, and one of the most difficult is the chest radiograph. Interpreting chest X-rays can lead to misdiagnosis, even for the best practicing doctor. Computer-aided detection and diagnosis systems (CADe/CADx) have helped reduce the pressure on doctors at metropolitan hospitals and improve diagnostic quality in rural areas.

How does AI help doctors?

State-of-the-art AI algorithms can interpret chest X-ray images and classify them into a list of findings, each with a probability, so that doctors can consider the most likely diseases first. In addition, the algorithms can localize anomalies on the image, which gives doctors more meaningful diagnostic assistance. As we collect and label more data, we can use it to retrain the models, improving their effectiveness further.

Why should you take our word for it?

Established in 2018, ShortHills Tech is a data engineering company that aims to promote AI and ML. The company focuses on key fields of data science and artificial intelligence: computational biomedicine, natural language processing, computer vision, and medical image processing. The medical imaging team at ShortHills developed this model using the torchxrayvision API, an open-source library for working with chest X-rays. The team is working to build large-scale medical imaging solutions based on the latest advances in artificial intelligence to facilitate effective clinical workflows. If you have an idea that can revolutionize the field, do get in touch with our team to see how we can help execute it.

Why should you read this article?

In this demo, you’ll learn to deploy a machine learning model—along with a PHP (Laravel) backend and a VueJS-based frontend—to automatically classify 14 types of thoracic abnormalities from chest radiographs. The model was trained on a publicly available database of chest X-rays with free-text radiology reports (MIMIC) by MIT. We use pre-trained models (currently all DenseNet121, each with 14 outputs) to make the predictions.

How to set up and run the code?

You can use git clone to download the code to your local machine and follow the steps below (they are also described in the README file on GitHub).

Step 1: Ngrok

Ngrok allows you to expose a web server running on your local machine to the internet: you just tell ngrok which port your web server is listening on. Set up an ngrok account if you don’t already have one. After logging in, copy your authorization token from the dashboard.

Step 2: Models and technologies used.

This is a pocket application focused on aiding medical professionals in diagnosing and treating chest anomalies based on chest X-rays. Users upload a chest X-ray image, and a deep learning model (DenseNet) outputs the probability of 14 different anomalies being present in that image.
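To make the output concrete, here is a minimal sketch of the kind of response the model produces—a mapping from finding labels to probabilities. The label names and values below are hypothetical, purely for illustration:

```python
# Hypothetical model response: each of the 14 finding labels -> probability
predictions = {
    "Cardiomegaly": 0.78,
    "Edema": 0.44,
    "Atelectasis": 0.12,
    # ... remaining findings omitted for brevity
}

# Doctors would typically look at the highest-probability finding first
top = max(predictions, key=predictions.get)
print(top)  # Cardiomegaly
```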

Step 3: Setting up the ML engine

Here is the file path for Xray Colab ML Model. /Xray Colab API/XrayColab.ipynb

You can open the Xray Colab file in Google Colab, which provides free GPU and CPU computing (sessions disconnect after roughly 90 minutes of inactivity).

Step 4: Installation

pip install flask-ngrok
pip install pyngrok
pip install -U flask-cors
pip install transformers

Step 5: Instruction

First, set up this repository on Colab (if you face an error, you can manually install the library in question). If you are running it locally, install all the dependencies on your local machine and authenticate using your ngrok token.

Copy and paste your authorization token into line 2 (ngrok authtoken "<_YOUR_NGROK_TOKEN_>"). To execute, run all the Jupyter or Colab cells.

Step 6: Laravel and Vue Installation

Clone the application on your local system. After cloning, run cd X-ray-Colab-ML-Model to go to the project directory.

Install the default dependencies by running the following command.

composer update
npm install

Step 7: Setting Up Database

First, configure the database: add your MySQL credentials to the .env file. (config/database.php reads its defaults from .env, so updating .env is normally all you need; you only have to touch config/database.php if its default connection is not MySQL.)

Run php artisan key:generate. Then start the MySQL server and create a database named laravel:

mysql -u root -p
CREATE DATABASE laravel;

Next, run php artisan migrate to set up your database migrations.

Step 8: To start your Local server

npm run dev
php artisan serve

Step 9: To Add your Ngrok link

Open your local server and go to this path:

Understanding the backend

First, we set up a Laravel project. Inside the project, we created a model for our ‘link add’ table and a ‘link add’ controller to handle that table, where we store all the links.


In the backend we have route files in which we define our routes, and all the links are handled through them. Some GET routes show data to our users, and some POST routes send data to the backend and save it in our database.

**Route::get('/',                 function () {return view('welcome');});** 

This route renders our main welcome page blade file (the basic Laravel view template).

**Route::get('/linkadd',          function () {return view('welcome');});**

It also renders the main welcome page component: the Vue application renders everything from a single <div> id, and inside our welcome blade component we mount the Vue application.

**Route::get('/hello/{id}',       [App\\Http\\Controllers\\LinkaddController::class, 'view']);**

This route fetches the URL from our backend table.

**Route::post('/hello',           [App\\Http\\Controllers\\LinkaddController::class, 'store']);**

We use this POST route to save our ngrok link in the backend.


We created a table schema for ‘link add’, where we save the links (ngrok links) for different projects.

public function up()
{
    Schema::create('linkadds', function (Blueprint $table) {
        $table->id();
        $table->string('name');
        $table->string('url');
    });
}

The table has an id column (the default primary key), a string ‘name’ column for storing the name, and a string ‘url’ column for storing the ngrok URL.


In controllers we have functions connected to our routes; these functions are called when the corresponding routes are hit.

class LinkaddController extends Controller
{
    public function view($id)
    {
        return Linkadd::where('name', $id)->get();
    }

    public function store(Request $request)
    {
        $data = $request->all();
        $check = Linkadd::where('name', $data['name'])->get();
        if (count($check) == 0) {
            $new = new Linkadd;
            $new->name = $data['name'];
            $new->url = $data['url'];
            $new->save();
            return $new;
        }
        Linkadd::where('name', $data['name'])->update([
            'name' => $data['name'],
            'url' => $data['url']
        ]);
        return $data;
    }
}

We have a ‘link add’ controller, in which we have two methods (functions). First, we have a view, which returns all the data related to a specific ‘name’ (link). Second, we have the store method, which stores or updates the name and the ngrok link related to it.
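The store method’s check-then-insert-or-update (“upsert”) logic can be sketched in plain Python—a toy in-memory stand-in for the database table, not the Laravel code itself:

```python
# Toy in-memory stand-in for the linkadds table: name -> url
linkadds = {}

def store(name, url):
    # Insert a new row if the name is unseen, otherwise update its url,
    # mirroring the controller's count($check) == 0 branch
    created = name not in linkadds
    linkadds[name] = url
    return "created" if created else "updated"

print(store("xray", ""))   # created
print(store("xray", ""))   # updated
```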

Understanding the frontend

Then, in VueJS, we created a UI for our X-ray application where users can upload their chest X-ray PNG images.


In the template, we write all the HTML code. Vue templates support v-for loops, which let us run for loops over data inside our template code in any project.

<div v-for="(item, keyValue) in data" :key="keyValue" class="row">
  <div class="col-6 border text-center">
    {{ keyValue }}
  </div>
  <div class="col-6 border text-center">
    {{ item }}
  </div>
</div>

In other words, we loop over our data object (the output of the ML code), taking each key and value and rendering them in the frontend.


VueJS provides lifecycle hooks such as created and mounted, plus a methods section where we write our JavaScript or VueJS functions.

saveItem(e) {
  // Reset state; the first selected file comes from the input change event
  ( = []), (this.image = ""), (this.loader = true);
  const files =[0];
  var reader = new FileReader();
  console.log("data 1 url", this.url);
  reader.addEventListener("load", () => {
      .get(this.url + "/dicom?dicom=" + reader.result.split(",")[1])
      .then((response) => {
        let value =;
        let val = value.replaceAll("'", '"');
        let value1 = JSON.parse(val); = value1;
        this.image = reader.result.split(",")[1];
        console.log("image", this.image);
        this.loader = false;
      });
  });
  reader.readAsDataURL(files);
},

In this function, we take the image into the files constant, create a new FileReader in the reader variable to convert the file into a data URL, send that data URL to the ngrok link through Axios, save the response into the data variable, and render it in the frontend.


Finally, add the CSS styling for that specific page. In VueJS we mark the style block as scoped, which means the CSS written there applies only to that component. In the template you can define classes, and style those classes to make your frontend look more appealing.

Understanding the machine learning model

First, we install all the dependencies. We use ngrok to create a public URL through which the API can be called; Streamlit, torchxrayvision, pydicom, and flask-cors also need to be installed.

Import the libraries required for the model, such as torchvision, torchxrayvision, PIL (for loading the image), and base64 (for converting our PNG or JPG image to and from base64).
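As a quick, standalone illustration of the base64 step (independent of the model code—the byte string here is a made-up stand-in for real image data):

```python
import base64

# Encode raw image bytes to a base64 string (what the frontend sends),
# then decode it back to the original bytes (what the backend saves to disk)
raw = b"\x89PNG-fake-image-bytes"  # stand-in for real PNG data
payload = base64.b64encode(raw).decode("utf-8")
restored = base64.decodebytes(payload.encode("utf-8"))
assert restored == raw
```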

app = Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'

uploaded_file = st.file_uploader("Choose an X-Ray image to detect anomalies of the chest (the file must be a dicom extension or jpg)")

def read_image(imgpath):
    # Return a numpy array for jpg/png files, or a pydicom dataset for dcm files
    if (str(imgpath).find("jpg") != -1) or (str(imgpath).find("png") != -1):
        sample =
        return np.array(sample)
    if str(imgpath).find("dcm") != -1:
        img = dicom.dcmread(imgpath)
        return img

@app.route("/")
def index():
    return "Welcome to the X-Ray Disease Guessing App"

@app.route("/dicom", methods=['POST', 'GET'])
def get_img():
    # The image arrives as a base64 string in the 'dicom' query parameter;
    # '+' characters are decoded to spaces by the query parser, so restore them
    bytes_in = request.args.get('dicom', '').replace(" ", "+")
    decoded_image_data = base64.decodebytes(bytes(bytes_in, 'utf-8'))
    with open('decoded_dicom.png', 'wb') as file_to_save:

    imgdef = read_image("decoded_dicom.png")

    # Compute the probability of each anomaly
    model = generatemodel(xrv.models.DenseNet, "densenet121-res224-mimic_ch")  # MIMIC model
    pr = outputprob2(imgdef, model)
    # Sort results in descending probability order
    pr = dict(sorted(pr.items(), key=operator.itemgetter(1), reverse=True))
    return {"probability_data": str(pr)}

In the ‘get_img’ function we receive the image from our frontend as a base64 string. We then decode it from base64 and save the result as ‘decoded_dicom.png’.
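Why the replace(" ", "+") call? Base64 strings can contain ‘+’, which URL query parsing turns into spaces, so the endpoint restores them before decoding. A quick standard-library sketch of the quirk:

```python
from urllib.parse import parse_qs

# Base64 strings may contain '+', which query-string parsing decodes as a space
qs = "dicom=abc+def=="
value = parse_qs(qs)["dicom"][0]
print(value)                      # 'abc def==' -- the '+' became a space
print(value.replace(" ", "+"))    # 'abc+def==' -- restored, as the endpoint does
```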

Next, we call the read_image function, which returns the image based on whether it is in jpg/png or dcm format. After this, we call generatemodel with two parameters, the model class and the weights. We have defined this function as below:

def generatemodel(xrvmodel,wts):
    return xrvmodel(weights=wts)

Next, we are finding the probabilities for the diseases using outputprob2, which is defined as follows:

def outputprob2(img, pr_model, visimage=True):
    ### Read the image and downscale it by a factor of two
    img = resize(img, (img.shape[0] // 2, img.shape[1] // 2))
    ### Preprocess the image for the model
    img_t = transform(img)
    ### Test the image
    return testimage(pr_model, img_t)

Then we build a sorted dictionary (sorting the probabilities in descending order) and convert it to a string, so the output is returned as a string.
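The sorting step can be sketched on its own with hypothetical probabilities (the label names and values here are made up for illustration):

```python
import operator

# Hypothetical, unsorted model output: finding -> probability
pr = {"Atelectasis": 0.12, "Cardiomegaly": 0.78, "Edema": 0.44}

# Sort by probability, descending, exactly as in the endpoint
pr = dict(sorted(pr.items(), key=operator.itemgetter(1), reverse=True))
print(str(pr))  # {'Cardiomegaly': 0.78, 'Edema': 0.44, 'Atelectasis': 0.12}
```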


If you followed along, you've built what could be a valuable second opinion for radiologists. An automated system that can accurately identify and localize findings on chest radiographs will relieve the stress of busy doctors while also providing patients with a more accurate diagnosis. We have open-sourced the entire source code on GitHub. If you have any questions, please reach out to

By: Apurv Sibal
