Pulse-Fracture-Scan

A deep learning system for detecting fractures in hand X-ray images. The project pairs a React-based frontend with a DenseNet121 model trained on Google Colab.

Demo Interface

Features

  • Upload and preview X-ray images
  • Real-time fracture detection using DenseNet121
  • Beautiful, responsive UI with Tailwind CSS
  • Visual feedback for detection results
  • Confidence scores for predictions

Project Structure

pulse-fracture-scan/
├── src/                    # Frontend React application
├── public/                 # Static assets
└── colab/                  # Google Colab training code

Frontend Setup

  1. Install dependencies:
npm install
  2. Start the development server:
npm run dev

Model Training (Google Colab)

Prerequisites

  • Google account with access to Google Colab
  • Google Drive storage for dataset
  • Python 3.7+

Required Libraries

!pip install tensorflow keras numpy matplotlib flask flask_ngrok pillow

Dataset Structure

Create the following structure in your Google Drive:

fracture_dataset/
├── fracture/
│   └── (X-ray images with fractures)
└── no_fracture/
    └── (X-ray images without fractures)
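
Once the images are in place, a quick Colab cell can confirm that both class folders are visible and non-empty (a minimal sketch; it assumes Drive is mounted at /content/drive and the dataset sits at MyDrive/fracture_dataset, the same path used later in this README):

import os
from google.colab import drive

drive.mount('/content/drive')

DATA_DIR = '/content/drive/MyDrive/fracture_dataset'

# flow_from_directory expects exactly one subfolder per class
for class_name in ('fracture', 'no_fracture'):
    class_dir = os.path.join(DATA_DIR, class_name)
    n_images = len([f for f in os.listdir(class_dir)
                    if f.lower().endswith(('.png', '.jpg', '.jpeg'))])
    print(f'{class_name}: {n_images} images')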

Google Colab Implementation

import tensorflow as tf
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
from PIL import Image
import io
from flask import Flask, request, jsonify
from flask_ngrok import run_with_ngrok

# Create the model
def create_model():
    base_model = DenseNet121(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    
    # Freeze the base model layers
    for layer in base_model.layers:
        layer.trainable = False
    
    x = base_model.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(1024, activation='relu')(x)
    predictions = Dense(1, activation='sigmoid')(x)
    
    model = Model(inputs=base_model.input, outputs=predictions)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    
    return model

# Data preparation
def prepare_data(data_dir):
    datagen = ImageDataGenerator(
        rescale=1./255,
        validation_split=0.2,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True
    )
    
    # List the classes explicitly so that 'fracture' maps to label 1;
    # by default flow_from_directory sorts class folders alphabetically,
    # which would make the sigmoid output the probability of 'no_fracture'
    train_generator = datagen.flow_from_directory(
        data_dir,
        target_size=(224, 224),
        batch_size=32,
        class_mode='binary',
        classes=['no_fracture', 'fracture'],
        subset='training'
    )
    
    validation_generator = datagen.flow_from_directory(
        data_dir,
        target_size=(224, 224),
        batch_size=32,
        class_mode='binary',
        classes=['no_fracture', 'fracture'],
        subset='validation'
    )
    
    return train_generator, validation_generator

# Train the model
def train_model(model, train_generator, validation_generator, epochs=10):
    history = model.fit(
        train_generator,
        epochs=epochs,
        validation_data=validation_generator
    )
    return history

# Create Flask app for deployment
app = Flask(__name__)
run_with_ngrok(app)

model = None

def load_model():
    global model
    model = create_model()
    # Load trained weights
    model.load_weights('fracture_detection_model.h5')

@app.route('/predict', methods=['POST'])
def predict():
    if 'file' not in request.files:
        return jsonify({'error': 'No file provided'}), 400
    
    file = request.files['file']
    image = Image.open(file.stream).convert('RGB')
    image = image.resize((224, 224))
    image = np.array(image) / 255.0
    image = np.expand_dims(image, axis=0)
    
    prediction = model.predict(image)
    fracture_prob = float(prediction[0][0])  # sigmoid output = probability of fracture
    is_fracture = fracture_prob > 0.5
    result = 'Fracture Detected' if is_fracture else 'No Fracture Detected'
    
    return jsonify({
        'prediction': result,
        # report the confidence of the predicted class rather than the raw fracture probability
        'confidence': fracture_prob if is_fracture else 1.0 - fracture_prob
    })

# Main execution
if __name__ == '__main__':
    # Mount Google Drive
    from google.colab import drive
    drive.mount('/content/drive')
    
    # Set your data directory
    DATA_DIR = '/content/drive/MyDrive/fracture_dataset'
    
    # Create and train model
    model = create_model()
    train_generator, validation_generator = prepare_data(DATA_DIR)
    history = train_model(model, train_generator, validation_generator)
    
    # Save the model
    model.save('fracture_detection_model.h5')
    
    # Start the Flask server
    load_model()
    app.run()

Training Steps

  1. Create a new Google Colab notebook
  2. Copy the code above into the notebook
  3. Mount your Google Drive
  4. Update the DATA_DIR path to match your dataset location
  5. Run all cells sequentially
  6. When the Flask server starts, you'll receive a public URL for predictions
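
Colab runtimes are ephemeral, so fracture_detection_model.h5 disappears when the session ends. One way to keep the trained weights is to copy the file into Drive after training (a short sketch; the destination path is an assumption and can be any Drive folder you own):

import shutil

# Copy the saved weights into Google Drive so they survive the Colab session
# (assumes Drive is already mounted at /content/drive, as in the training code)
shutil.copy('fracture_detection_model.h5',
            '/content/drive/MyDrive/fracture_detection_model.h5')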

Model Architecture

  • Base Model: DenseNet121 (pre-trained on ImageNet)
  • Additional Layers:
    • Global Average Pooling
    • Dense Layer (1024 units, ReLU activation)
    • Output Layer (1 unit, Sigmoid activation)
  • Optimization: Adam optimizer
  • Loss Function: Binary Cross-entropy
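
To confirm that only the new head is trainable while the DenseNet121 backbone stays frozen, you can instantiate the model and inspect it (a quick check that reuses create_model() and the tf import from the Colab code above):

model = create_model()
model.summary()  # lists the DenseNet121 backbone plus the pooling and dense head

# Only the head layers should contribute trainable parameters
trainable = sum(tf.keras.backend.count_params(w) for w in model.trainable_weights)
print(f'Trainable parameters: {trainable:,}')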

Data Augmentation

The training pipeline includes the following augmentations (previewed in the sketch after this list):

  • Random rotation (up to 20 degrees)
  • Width shift (up to 20%)
  • Height shift (up to 20%)
  • Horizontal flipping
  • Validation split: 20%
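
To preview what the augmented inputs look like before training, you can pull a single batch from the training generator and plot it (a sketch using matplotlib from the required libraries; it assumes prepare_data() and DATA_DIR from the Colab code above have already been run):

import matplotlib.pyplot as plt

train_generator, _ = prepare_data(DATA_DIR)
images, labels = next(train_generator)  # one augmented batch, shape (32, 224, 224, 3)
class_names = {v: k for k, v in train_generator.class_indices.items()}

plt.figure(figsize=(9, 9))
for i in range(min(9, len(images))):
    plt.subplot(3, 3, i + 1)
    plt.imshow(images[i])
    plt.title(class_names[int(labels[i])])
    plt.axis('off')
plt.show()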

API Endpoints

POST /predict

Accepts X-ray images and returns fracture detection results.

Request:

  • Method: POST
  • Content-Type: multipart/form-data
  • Body: file (image)

Response:

{
  "prediction": "Fracture Detected",
  "confidence": 0.95
}
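
The endpoint can be exercised from any HTTP client. A minimal Python example with requests (the URL and file name below are placeholders; use the public ngrok URL printed when the Flask server starts):

import requests

url = 'https://your-ngrok-subdomain.ngrok.io/predict'  # placeholder: replace with the printed ngrok URL

with open('hand_xray.jpg', 'rb') as f:                 # placeholder image path
    response = requests.post(url, files={'file': f})

print(response.json())
# e.g. {'prediction': 'Fracture Detected', 'confidence': 0.95}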

Dataset Resources

You can use the following datasets for training:

  1. MURA Dataset (Stanford ML Group)
  2. RSNA Bone Age Dataset
  3. Create your own labeled dataset of X-ray images

Future Improvements

  1. Model Enhancements:
    • Fine-tuning of deeper layers (sketched below)
    • Ensemble models for better accuracy
    • Cross-validation during training
  2. Visualization:
    • Grad-CAM for fracture location highlighting
    • Confidence score visualization
    • Training metrics dashboard
  3. Deployment:
    • Docker containerization
    • Cloud deployment options
    • Batch processing capabilities
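
As an illustration of the fine-tuning item above, one common recipe is to unfreeze the deepest DenseNet block and recompile with a much lower learning rate before a short second round of training (a sketch only, not part of the current pipeline; it assumes the model and generators from the training code are still in memory, and the layer-name prefix, learning rate, and epoch count are assumptions):

# Unfreeze the last DenseNet121 block (layer names starting with 'conv5_' is an assumption)
for layer in model.layers:
    if layer.name.startswith('conv5_'):
        layer.trainable = True

# Recompile with a small learning rate so the pre-trained weights are only nudged
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])

history_ft = model.fit(train_generator, epochs=5, validation_data=validation_generator)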

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Commit your changes
  4. Push to the branch
  5. Create a new Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • DenseNet121 architecture by Huang et al.
  • React and Tailwind CSS communities
  • Medical imaging research community

Contact

For questions and support, please open an issue in the GitHub repository.
