Email: hossainimran.maia@gmail.com
Address: Carrer de Manuel de Fall 12, 17190 Girona, Spain.
[Download CV]
About Me
I am an Electrical and Computer Engineer. I am currently pursuing the prestigious
Erasmus Mundus Joint Master Degree in Medical Imaging and Applications,
fully funded by the European Union and jointly coordinated by the
University of Girona (Spain),
the University of Cassino and Southern Lazio (Italy), and
the University of Burgundy (France).
I was also a recipient of the
Stipendium Hungaricum Scholarship for
the Master of Science in Electrical Engineering at the
Budapest University of Technology and Economics (Hungary).
I received the Bachelor of Science in Electrical and Electronic Engineering with the highest academic distinction 'Summa Cum Laude' from the United International University (Bangladesh) in 2022. My undergraduate research project "Electronic Toll Collection System Using Optical Wireless Communication"
was supervised by Professor Dr. Raqibul Mostafa.
Specializing in AI-driven Medical Image Analysis and Computer-Aided Diagnosis (CAD), I am committed to advancing healthcare.
My research projects include Skin Lesion Detection, Functional Neuroimaging, Brain Tissue Registration and Segmentation,
Alzheimer's Disease Classification, Breast Mass Segmentation and Detection, and Computational Pathology.
I aim to contribute innovative solutions for accurate diagnosis and improved patient outcomes.
Currently, I am a graduate research intern at the National Center for Scientific Research (CNRS)
in France, working on the project titled "Deep Learning-Based Detection of Homologous Recombination Deficiency (HRD) in Breast and Ovarian Cancer Whole Slide Histopathology Images" with
Professor Dr. Manon Ansart.
Before that, I worked as a visiting researcher at the Diagnostic Image Analysis Group (DIAG) at Radboud University Medical Center in
The Netherlands on the project "Real-time MR Image Reconstruction Using Deep Learning" under the supervision of Stan Noordman.
I am driven by a commitment to advancing healthcare through innovative solutions and look forward to contributing significantly to
the intersection of engineering and medicine in my future endeavours.
Research Interests: Medical Image Analysis & Computation, Computational Pathology, Computer-Aided Diagnosis, Machine & Deep Learning, AI in Healthcare.
University of Zurich and Swiss Federal Institute of Technology (ETH) Zurich, Switzerland
University of Girona, Spain [Master of Science in Medical Imaging and Applications]
University of Burgundy, France [Master of Science in Computer Vision]
University of Cassino and Southern Lazio, Italy [Master of Science in Computer Engineering]
United International University, Bangladesh [Bachelor of Science in Electrical and Electronic Engineering]
National Center for Scientific Research (CNRS), France
Project: Deep Learning-Based Detection of Homologous Recombination Deficiency (HRD) in Breast and Ovarian Cancer Whole Slide Histopathology Images.
Diagnostic Image Analysis Group (DIAG), Radboud University Medical Center, The Netherlands
Project: Real-time MR Image Reconstruction for Interventional Radiology.
The segmentation of brain tissues in Magnetic Resonance Imaging (MRI) is crucial for neurodegenerative analysis, facilitating the study of diseases such as Alzheimer's, Parkinson's, and Multiple Sclerosis (MS). This project explores several deep learning models, including U-Net, nnU-Net, and LinkNet, for segmenting MRI brain tissues using the IBSR18 dataset. ImageNet-pretrained backbones such as ResNet-34 and ResNet-50 were used as encoders for U-Net and LinkNet to enhance the models' performance and feature extraction capabilities. In addition, a statistical method, a probabilistic atlas, was incorporated for comparison. Performance assessment reveals that 3D nnU-Net excels with an average mean Dice score of 0.937 ± 0.012, while 2D nnU-Net performs best in Hausdorff Distance (5.005 ± 0.343) and Absolute Volumetric Difference (3.695 ± 2.931). This comprehensive analysis highlights the unique strengths of each model in different aspects of brain tissue segmentation.
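As a rough illustration of the backbone-based models, the sketch below builds a U-Net with an ImageNet-pretrained ResNet-34 encoder using the segmentation_models_pytorch package; this is a minimal sketch rather than the project's training code, and data loading, the nnU-Net pipeline, and evaluation are omitted.

```python
# Minimal sketch: a U-Net with an ImageNet-pretrained ResNet-34 encoder,
# for multi-class brain tissue segmentation (background, CSF, GM, WM).
# Assumes the segmentation_models_pytorch package; data loading is omitted.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",        # pretrained backbone used for feature extraction
    encoder_weights="imagenet",     # transfer learning from ImageNet
    in_channels=1,                  # single-channel MRI slices
    classes=4,                      # background, CSF, GM, WM
)

loss_fn = smp.losses.DiceLoss(mode="multiclass")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """One optimization step on a batch of 2D slices (B, 1, H, W) and label maps (B, H, W)."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```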
Image registration, essential both for direct diagnosis from aligned images and for optimizing algorithmic performance in medical imaging, played a central role in this project. We used inspiratory and expiratory breath-hold CT image pairs from the COPDGene study repository, with experimentation focusing on key aspects of the registration framework, including the choice of similarity metric, geometric transformation, interpolation, and resolutions. To reduce variations across images and enhance comparability, we preprocessed the intensities. Using the Elastix and Transformix software, we performed intensity-based rigid and non-rigid registrations on unsegmented and segmented lung structures, defining a region of interest and excluding non-essential areas. Our method achieved an average mean Target Registration Error (TRE) of 2.08 ± 2.26 mm, highlighting the effectiveness of the approach and the fine-tuned parameters. Additionally, using VoxelMorph, we achieved an average mean TRE of 2.18 ± 2.74 mm on the same cases. These findings underscore the success of our methodology in achieving accurate image registration and its potential impact on supporting medical imaging diagnosis.
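A minimal sketch of an Elastix-style registration in Python (via the itk-elastix bindings) is shown below; the file names are placeholders, and the project's tuned parameter maps, lung masks, and intensity preprocessing are not reproduced.

```python
# Sketch of an intensity-based registration (rigid followed by B-spline) using itk-elastix.
# File names are placeholders for the inspiratory/expiratory breath-hold CT pair.
import itk

fixed = itk.imread("copd1_iBHCT.nii.gz", itk.F)    # inspiratory breath-hold CT
moving = itk.imread("copd1_eBHCT.nii.gz", itk.F)   # expiratory breath-hold CT

params = itk.ParameterObject.New()
params.AddParameterMap(params.GetDefaultParameterMap("rigid"))
params.AddParameterMap(params.GetDefaultParameterMap("bspline"))

# Run the multi-stage registration; Transformix can later apply the resulting
# transform parameters to landmark points for TRE evaluation.
registered, transform_params = itk.elastix_registration_method(
    fixed, moving, parameter_object=params, log_to_console=False
)
itk.imwrite(registered, "registered_eBHCT.nii.gz")
```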
Skin lesions, with the potential to manifest as malignant tumors, pose a significant health risk. This project addresses the need for early diagnosis by developing a skin lesion classification model using transfer learning. The pipeline encompasses image preprocessing, data augmentation, model training and validation, ensemble modeling, and prediction. Leveraging diverse models such as ResNet, ResNeXt, RegNet, DenseNet, EfficientNet, Swin Transformer, and Vision Transformer underscores the potential of deep learning in enhancing accuracy and speed for dermatologists and healthcare professionals in skin lesion diagnoses. The project utilizes a mixed dataset from HAM10000, BCN_20000, and the MSK Dataset, achieving an impressive accuracy of 0.91 in binary classification and a kappa score of 0.94 in multiclass classification on test data.
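The sketch below illustrates the transfer-learning idea with a single ImageNet-pretrained ResNet from torchvision; the actual pipeline trained and ensembled several architectures, and the class count used here is only illustrative.

```python
# Minimal transfer-learning sketch for skin lesion classification: an ImageNet-pretrained
# ResNet whose final layer is replaced for the lesion classes. Preprocessing, augmentation,
# and ensembling of the other architectures are not shown.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3  # e.g. melanoma, seborrheic keratosis, nevus (illustrative)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classification head

# Fine-tune the whole network with a small learning rate.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()
```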
Skin lesions pose a significant risk due to their potential to manifest as malignant tumors. Early diagnosis plays a pivotal role in effective treatment and improved patient outcomes. Leveraging computer-aided diagnosis systems can revolutionize this process by enhancing accuracy and speed. This project aimed to develop a robust skin lesion classification model using machine learning. The pipeline encompassed image preprocessing, augmentation, feature extraction, feature engineering, and a machine learning classifier. Employing a mixed dataset comprising HAM10000 (ViDIR Group, Medical University of Vienna), BCN_20000 (Hospital Clínic de Barcelona), and the MSK Dataset (ISBI 2017), the binary classification achieved an impressive accuracy of 0.85, while the multiclass classification yielded a kappa score of 0.74 on the test data. These promising results underscore the potential of machine learning in aiding dermatologists and healthcare professionals in accurate and timely skin lesion diagnoses.
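As an illustration of this classical pipeline, the sketch below computes a few hand-crafted colour and GLCM texture features with scikit-image and feeds them to a standard classifier; the project's actual feature set and engineering steps are not reproduced.

```python
# Illustrative hand-crafted feature extraction: simple colour statistics plus GLCM texture
# descriptors, fed to a standard classifier.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def lesion_features(rgb_image):
    """Return a small feature vector (colour means/stds + GLCM contrast/homogeneity)."""
    feats = [rgb_image.mean(axis=(0, 1)), rgb_image.std(axis=(0, 1))]
    gray = (rgb2gray(rgb_image) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2], levels=256, normed=True)
    feats += [graycoprops(glcm, "contrast").ravel(), graycoprops(glcm, "homogeneity").ravel()]
    return np.concatenate(feats)

# X: list of RGB lesion images, y: benign/malignant labels
# clf = RandomForestClassifier(n_estimators=300).fit([lesion_features(im) for im in X], y)
```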
In the field of computer vision and medical applications, image segmentation is crucial for developing Computer-Aided Diagnosis (CAD) systems, aiding in early disease detection and clinical treatment planning. Brain tissue segmentation, a challenging task, can be effectively performed using the unsupervised cluster-based Expectation-Maximization (EM) algorithm combined with a Gaussian Mixture Model (GMM) on MRI data. This approach, encompassing White Matter (WM), Gray Matter (GM), and Cerebrospinal Fluid (CSF) segmentation, employs a Gaussian probability distribution for soft assignment, providing probabilities of membership in the different clusters within the feature space. Essential pre-processing techniques such as bias field correction and skull stripping enhance segmentation outcomes. The obtained mean Dice scores of (0.86, 0.81, 0.86), with standard deviations of (0.03, 0.03, 0.03), across the three tissue classes underscore the algorithm's accuracy and consistency, positioning it as a valuable tool in CAD systems for robust disease detection and early clinical intervention.
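Conceptually, the EM/GMM step can be sketched as below, using scikit-learn's EM-based GaussianMixture as a stand-in for the from-scratch implementation; bias-field correction and skull stripping are assumed to have been applied already.

```python
# Conceptual sketch of EM/GMM tissue segmentation: model brain voxel intensities as a
# three-component Gaussian mixture (CSF, GM, WM) and soft-assign each voxel to a cluster.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_tissues(volume, brain_mask):
    """volume: 3D MRI intensities; brain_mask: boolean array of brain voxels."""
    intensities = volume[brain_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
    gmm.fit(intensities)                  # EM: alternate E-step (responsibilities) and M-step
    labels = gmm.predict(intensities)     # hard labels from the posterior probabilities
    segmentation = np.zeros(volume.shape, dtype=np.uint8)
    segmentation[brain_mask] = labels + 1 # 1..3 = tissue classes, 0 = background
    return segmentation
```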
Atlas-based segmentation, a fundamental technique in medical imaging, enhances image segmentation precision by leveraging knowledge from annotated reference images, or atlases. This project extends beyond segmentation to investigate the impact of image registration techniques, comparing affine and B-spline (non-rigid) methods. Results indicate that B-spline registration outperforms affine registration, particularly in terms of mutual information. Integrating this refined alignment into segmentation algorithms, such as the Expectation-Maximization method, yields overall enhanced performance. The study underscores the significance of atlas-based segmentation coupled with advanced image registration techniques, highlighting the superior efficacy of B-spline (non-rigid) registration for improved brain tissue segmentation.
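A sketch of the label-propagation step, again using the itk-elastix bindings, is given below; the file names are placeholders and the parameter maps are only library defaults, not the project's tuned settings.

```python
# Sketch of atlas label propagation: register the atlas template to the target image,
# then apply the estimated transform to the atlas label map with Transformix
# (zeroth-order interpolation so labels stay integers). File names are placeholders.
import itk

target = itk.imread("target_t1.nii.gz", itk.F)
atlas_template = itk.imread("atlas_t1.nii.gz", itk.F)
atlas_labels = itk.imread("atlas_labels.nii.gz", itk.F)

params = itk.ParameterObject.New()
params.AddParameterMap(params.GetDefaultParameterMap("affine"))
params.AddParameterMap(params.GetDefaultParameterMap("bspline"))

_, transform = itk.elastix_registration_method(target, atlas_template, parameter_object=params)

# Propagate the labels with the estimated transform.
transform.SetParameter("FinalBSplineInterpolationOrder", "0")  # nearest-neighbour for labels
warped_labels = itk.transformix_filter(atlas_labels, transform)
```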
Breast cancer stands as the most prevalent form of cancer affecting women worldwide and remains the leading cause of cancer-related mortality. Early detection plays a pivotal role in reducing mortality rates and expanding treatment options. This project focuses on the development of an advanced system for the early detection and segmentation of breast masses in mammograms, leveraging a computer-aided diagnosis (CAD) approach. Mammography, recognized as a highly efficient primary screening tool, is crucial for detecting features such as masses and microcalcifications indicative of breast cancer. However, identifying irregularly shaped masses with low contrast poses a significant challenge. To address this, our proposed approach employs a five-step process for mass detection and segmentation: pre-processing, multi-scale morphological sifting for region candidate generation, mean-shift filtering, k-means clustering, and post-processing. To enhance accuracy, we integrated the ucasML tool for obtaining classification scores, incorporating a feature extraction process and detailed evaluation methods. The classifier, trained on the extracted features, demonstrated impressive results with an accuracy of 96.96%, specificity of 95.55%, and sensitivity of 95.53%.
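The fragment below sketches one candidate-generation stage in simplified form, clustering mammogram intensities with k-means and keeping bright, sufficiently large regions; the multi-scale morphological sifting, mean-shift filtering, and ucasML classification stages are not reproduced.

```python
# Simplified sketch of mass-candidate generation: cluster mammogram intensities with
# k-means and keep bright, sufficiently large regions as candidates.
import numpy as np
from sklearn.cluster import KMeans
from skimage import morphology

def mass_candidates(mammogram, n_clusters=4, min_area=500):
    """mammogram: 2D float array. Returns a boolean mask of candidate regions."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(mammogram.reshape(-1, 1)).reshape(mammogram.shape)
    brightest = np.argmax(km.cluster_centers_.ravel())       # masses tend to be bright
    mask = morphology.binary_opening(labels == brightest, morphology.disk(3))
    return morphology.remove_small_objects(mask, min_size=min_area)
```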
Skin cancer is a prevalent and concerning health issue, necessitating advanced diagnostic tools for early detection. Computer-aided diagnosis (CAD) systems, powered by machine learning and deep learning techniques, present a promising avenue to enhance the accuracy of skin lesion detection in medical images. This project integrates machine learning and deep learning techniques for skin lesion detection, with a primary focus on advancing CAD systems for precise skin cancer diagnosis and on assisting medical professionals in that task. Employing a two-step hierarchical classification pipeline discerning "benign vs. others" and "melanoma vs. seborrheic keratosis," our hybrid model, merging Xception and Random Forest, achieves an impressive Balanced Multiclass Accuracy (BMA) score of 79%, surpassing established architectures such as VGG16, InceptionResNetV2, and DenseNet201. Beyond showcasing the potential of machine learning to elevate skin cancer diagnostic accuracy, this research underscores the broader applicability of CAD systems in addressing crucial challenges in medical image analysis.
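The hybrid model can be sketched as below: an ImageNet-pretrained Xception network acting as a fixed feature extractor with a Random Forest on top, with one such classifier per step of the hierarchy. This is an illustrative sketch, not the project's code.

```python
# Sketch of the Xception + Random Forest hybrid: deep features from a frozen Xception
# backbone, classified by a Random Forest for one level of the hierarchy.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3)
)

def deep_features(images):
    """images: float array (N, 299, 299, 3) in [0, 255]; returns (N, 2048) feature vectors."""
    x = tf.keras.applications.xception.preprocess_input(images.copy())
    return backbone.predict(x, verbose=0)

# X_train: images, y_train: labels for one step (e.g. benign vs. others)
# rf = RandomForestClassifier(n_estimators=500).fit(deep_features(X_train), y_train)
```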
This study addresses Alzheimer's Disease (AD) classification using MRI and gene expression data through three binary classification problems: AD vs. Control (CTL), AD vs. Mild Cognitive Impairment (MCI), and MCI vs. CTL. Employing feature engineering and machine learning, we developed a classification algorithm for each problem. The Boruta feature selection algorithm combined with Quadratic Discriminant Analysis (QDA) consistently outperformed other approaches, yielding the highest AUC (0.963) and MCC (0.927) for AD vs. CTL, AUC (0.831) and MCC (0.663) for AD vs. MCI, and AUC (0.876) and MCC (0.755) for MCI vs. CTL. These findings contribute to enhancing accuracy in Alzheimer's Disease classification and underscore the importance of tailored feature selection and classifier combinations.
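A minimal sketch of the winning combination, assuming the boruta package (BorutaPy) and scikit-learn, is shown below; X and y stand for the combined feature matrix and binary labels of one of the three problems.

```python
# Sketch: Boruta feature selection (random-forest based) followed by Quadratic
# Discriminant Analysis, for one binary problem (e.g. AD vs. CTL).
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def boruta_qda(X, y):
    rf = RandomForestClassifier(n_estimators=300, max_depth=5, n_jobs=-1)
    selector = BorutaPy(rf, n_estimators="auto", random_state=42)
    selector.fit(X, y)                   # keeps features that beat their shadow copies
    qda = QuadraticDiscriminantAnalysis()
    qda.fit(X[:, selector.support_], y)  # train QDA on the selected features only
    return selector, qda
```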
This project presents the implementation and evaluation of the Regularized General Eigenvalue Classifier (ReGEC) using the R programming language. Employing the generalized eigenvalue problem with regularization, ReGEC demonstrates versatility in computer vision, speech recognition, and bioinformatics. Utilizing four datasets—Cleveland Heart, Pima Indians, Breast Cancer, and German—the study applies Linear Discriminant Analysis (LDA) for feature extraction. Results show a classification accuracy of 86.67% and 77.91% for linear and Gaussian kernels, respectively. Performance metrics encompassing classification accuracy and execution time highlight ReGEC's efficacy in classification tasks, emphasizing its successful implementation in R.
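Although the project's implementation is in R, the core generalized eigenvalue step can be sketched in Python as below; the regularization shown (a simple Tikhonov term) is a simplification of the scheme used in ReGEC.

```python
# Conceptual sketch of ReGEC: each class gets a proximal hyperplane w·x = gamma from a
# regularized generalized eigenvalue problem; points are assigned to the nearer plane.
import numpy as np
from scipy.linalg import eigh

def proximal_plane(A, B, delta=1e-3):
    """Hyperplane closest to the rows of A and farthest from the rows of B."""
    Ga = np.hstack([A, -np.ones((len(A), 1))])   # augment with the offset term
    Hb = np.hstack([B, -np.ones((len(B), 1))])
    G, H = Ga.T @ Ga, Hb.T @ Hb
    n = G.shape[0]
    # Smallest eigenvalue of G z = lambda H z  <->  minimizing ||Aw - gamma||^2 / ||Bw - gamma||^2
    vals, vecs = eigh(G + delta * np.eye(n), H + delta * np.eye(n))
    z = vecs[:, 0]
    return z[:-1], z[-1]                          # (w, gamma)

def classify(x, plane_a, plane_b):
    """Assign x to the class whose proximal plane is nearer (normalized distance)."""
    d = [abs(x @ w - g) / np.linalg.norm(w) for (w, g) in (plane_a, plane_b)]
    return 0 if d[0] < d[1] else 1
```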
EasyHealth is a Healthcare Information Management System (HIMS) developed using Java, JavaScript, HTML, CSS, and MySQL. The project aimed to create an efficient, robust, and user-friendly platform for managing healthcare information. The system incorporates features such as online appointment booking, report generation based on examinations, and specialized disease information for both doctors and patients. Additionally, EasyHealth includes medical imaging analysis, providing a comprehensive solution for streamlined healthcare information management.
This project details the design and implementation of an Inverse Kinematic Controller for the JACO2 robot, focusing on achieving precise end-effector positioning in three distinct scenarios. The primary objective is to control the robot to follow a vertical line trajectory along the z-axis within a 2-second duration. Additionally, the controller guides the robot through a position-only control scenario, traversing a vertical square path along the yz-axis. Each side of the square is executed with a 2-second trajectory, including 500 ms pauses at corners and a 2-second hold at the initial end-effector position upon completion. The third scenario involves both position and orientation control, where the end-effector follows the square trajectory while maintaining a constant orientation. The MATLAB-based implementation employs a 1 ms sampling time, quaternions for orientation feedback, and visualizes the trajectory, joint positions, and velocities. Successful execution of the specified scenarios demonstrates the efficacy of the proposed Inverse Kinematic Controller.
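The position-only control law can be sketched generically as a closed-loop inverse-kinematics step; the snippet below is a NumPy illustration rather than the MATLAB implementation, and the JACO2 forward kinematics and Jacobian are assumed to be provided elsewhere.

```python
# Generic closed-loop inverse-kinematics (position-only) control step:
# q(k+1) = q(k) + dt * J^+ (xdot_des + K e), sampled at 1 ms.
import numpy as np

DT = 0.001            # 1 ms sampling time
K = 10.0 * np.eye(3)  # proportional gain on the position error

def clik_step(q, x_des, xdot_des, forward_kinematics, jacobian):
    """One control step; forward_kinematics and jacobian are supplied for the arm."""
    x = forward_kinematics(q)                  # current end-effector position (3,)
    e = x_des - x                              # position error
    J = jacobian(q)[:3, :]                     # positional part of the geometric Jacobian
    qdot = np.linalg.pinv(J) @ (xdot_des + K @ e)
    return q + DT * qdot, qdot
```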
Magnetic Resonance Imaging (MRI) images are widely used to identify brain tumors such as meningioma and glioma. Gliomas are the most life-threatening brain tumors because of their rapid growth and effect on brain function. However, early detection and appropriate diagnosis of a brain tumor can reduce the chance of death [1]. Manual segmentation requires an expert and considerable time to identify and classify brain tumors, and it can sometimes produce inaccurate results. Brain tumor segmentation from MRI images using computer-aided diagnosis is a fast, automatic, and advanced technique, but it is also a challenging task due to the non-uniform shape and size of tumors and their diffuse boundaries within the surrounding tissue. This article presents a comparative study of traditional segmentation techniques and deep learning-based segmentation approaches, examining their strengths and limitations.
Parkinson's disease (PD) is a brain disorder that causes unintended or uncontrollable movements, such as shaking, stiffness, and difficulty with balance and coordination. Electroencephalogram (EEG) signals may faithfully represent the changes that occur in the brain during PD. EEG signals therefore need to be decomposed into multiple sub-bands (SBs) to obtain detailed and representative information from them. Hence, an automated tunable Q wavelet transform (A-TQWT) is proposed for automatic decomposition. A-TQWT extracts representative SBs for analysis and provides better reconstruction for the synthesis of EEG signals by automatically selecting the tuning parameters. Five features are extracted from the SBs and classified using different machine learning techniques. The proposed method yielded accuracies of 96.13% and 97.65%, with areas under the curve of 97% and 98.56%, for the classification of HC vs. PD (OFF medication) and HC vs. PD (ON medication), respectively, using a least-squares support vector machine.
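Only the classification stage is sketched below: simple statistical features per sub-band and an SVM. The A-TQWT decomposition is assumed to be computed elsewhere, scikit-learn's SVC stands in for the least-squares SVM used in the paper, and the five features shown are illustrative rather than the paper's exact feature set.

```python
# Illustrative classification stage: five statistical features per EEG sub-band,
# classified with a standard SVM (stand-in for the least-squares SVM).
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC

def subband_features(subbands):
    """subbands: list of 1D arrays (one per TQWT sub-band) for a single EEG epoch."""
    feats = []
    for sb in subbands:
        feats += [np.mean(np.abs(sb)), np.std(sb), skew(sb), kurtosis(sb), np.sum(sb ** 2)]
    return np.array(feats)

# X = np.array([subband_features(sbs) for sbs in all_epochs]); y = labels (HC vs. PD)
# clf = SVC(kernel="rbf", C=1.0).fit(X, y)
```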
Brain tumors are the leading cause of cancer death in children. They are caused by the abnormal and uncontrolled growth of cells inside the brain or spinal canal. Classification of brain tumors using machine learning technology is very relevant for radiologists to confirm their analysis more effectively and quickly. The segmentation algorithm used to detect tumors from MRI brain scans must detect shapeless tumor growth accurately. Sobel edge detection is one of the most widely used edge detection techniques, in which only information along the horizontal and vertical directions is considered. In this research, a Sobel algorithm with an 8-directional template is implemented to improve the detection of edges in brain tumor MRI images. The proposed algorithm is compared with other traditional edge detection algorithms, and its performance is analyzed in terms of MSE, RMSE, Entropy, SNR, and PSNR. The analysis shows that the 8-directional Sobel is comparatively the most suitable technique for analyzing brain tumor MRI images. An active contour segmentation algorithm is applied to the edge-detected images to verify the classification accuracy of the segmented tumor.
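The 8-directional template idea can be sketched as below, generating eight compass variants of the Sobel mask and taking the per-pixel maximum response; the exact templates and post-processing used in the paper are not reproduced.

```python
# Sketch of 8-directional Sobel edge detection: rotate the border coefficients of the
# horizontal Sobel mask in 45-degree steps and take the maximum response at each pixel.
import numpy as np
from scipy.ndimage import convolve

def compass_kernels():
    """The 8 directional Sobel templates, from the base mask [[-1,0,1],[-2,0,2],[-1,0,1]]."""
    perimeter = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = np.array([-1, 0, 1, 2, 1, 0, -1, -2])   # clockwise border values of the base mask
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(perimeter, np.roll(base, shift)):
            k[r, c] = v
        kernels.append(k)
    return kernels

def sobel_8dir(image):
    """image: 2D float array; returns the per-pixel maximum response over the 8 templates."""
    responses = [np.abs(convolve(image, k, mode="nearest")) for k in compass_kernels()]
    return np.max(responses, axis=0)
```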