🌠 DeepLense: ML4SCI GSoC 2025 Evaluation Tasks
🔭 Overview

The DeepLense project analyzes gravitational lensing data, which is crucial for understanding dark matter distribution in the universe. Gravitational lensing occurs when massive objects bend light from distant sources, creating distinctive visual patterns that can be analyzed to infer cosmic structures.

This repository demonstrates implementations of three key machine learning tasks related to gravitational lensing analysis:
  1. Classification of lensing images based on substructure types
  2. Detection of gravitational lensing events in highly imbalanced datasets
  3. Generation of synthetic gravitational lensing images using advanced generative models
🚀 Tasks Implemented
Task 1: Multi-Class Classification
Objective: Classify strong gravitational lensing images into three categories (no substructure, subhalo substructure, vortex substructure).

Approach:
  • Implemented DenseNet161 and DenseNet201 architectures
  • Created an ensemble model combining multiple architectures (a minimal sketch follows this list)
  • Conducted comprehensive evaluation with confusion matrices and ROC curves
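
As a rough illustration of the ensemble idea, the sketch below averages the softmax outputs of two fine-tuned torchvision DenseNets. The head replacement, weighting, and variable names are illustrative assumptions, not the exact configuration used in the notebooks.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # no substructure, subhalo substructure, vortex substructure

def build_densenet(arch: str) -> nn.Module:
    """Load a torchvision DenseNet and swap in a 3-class classifier head."""
    backbone = getattr(models, arch)(weights="IMAGENET1K_V1")
    backbone.classifier = nn.Linear(backbone.classifier.in_features, NUM_CLASSES)
    return backbone

class SoftVotingEnsemble(nn.Module):
    """Average the softmax probabilities of several fine-tuned members."""
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x):
        probs = [torch.softmax(m(x), dim=1) for m in self.members]
        return torch.stack(probs, dim=0).mean(dim=0)

ensemble = SoftVotingEnsemble([build_densenet("densenet161"),
                               build_densenet("densenet201")])
```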
Key Results:
  • Ensemble model achieved 91-92% overall accuracy
  • AUC scores of 0.94-0.95
  • F1 scores of 0.90-0.91 across all classes
Task 2: Lens Finding
Objective: Detect gravitational lensing events in highly imbalanced datasets (up to a 1:100 ratio of lens to non-lens examples).

Approach:
  • Evaluated multiple architectures (ResNet18, EfficientNet, MobileViT)
  • Implemented techniques for handling extreme class imbalance (two are sketched after this list):
    • Aggressive data augmentation of minority class
    • Class weighting in loss function
    • Threshold optimization for F1-score maximization
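
A minimal sketch, under assumed variable names (`val_probs`, `val_labels`), of two of the ideas above: up-weighting the rare lens class in a binary cross-entropy loss and sweeping the decision threshold on validation predictions to maximize F1.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

# Up-weight the positive (lens) class so the loss does not collapse to
# "predict non-lens everywhere" under a ~1:100 class imbalance.
pos_weight = torch.tensor([100.0])          # roughly the non-lens : lens ratio
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

def best_f1_threshold(val_probs: np.ndarray, val_labels: np.ndarray):
    """Sweep candidate thresholds on validation probabilities and return
    the one that maximizes F1 for the rare lens class."""
    thresholds = np.linspace(0.05, 0.95, 19)
    scores = [f1_score(val_labels, (val_probs >= t).astype(int)) for t in thresholds]
    best = int(np.argmax(scores))
    return float(thresholds[best]), float(scores[best])
```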
Key Results:
  • ResNet18 emerged as the best architecture, with:
    • AUC score of 0.98
    • F1 score of 0.21
    • Recall of 0.95 for the rare lens class


Task 4: Diffusion Models
Objective: Generate synthetic gravitational lensing images using advanced generative models.

Approach:
  • Implemented DDIM (Denoising Diffusion Implicit Models) with a U-Net backbone (a single sampling step is sketched after this list)
  • Created a GAN with self-attention mechanisms
  • Developed memory-optimized implementations to work within GPU memory constraints
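
For reference, this is what a single deterministic (eta = 0) DDIM reverse step looks like in PyTorch. Here `eps_model` (the trained noise predictor) and `alphas_cumprod` (the cumulative noise schedule) are assumed names, not the repository's actual API.

```python
import torch

@torch.no_grad()
def ddim_step(eps_model, x_t, t, t_prev, alphas_cumprod):
    """One deterministic DDIM update x_t -> x_{t_prev} (eta = 0)."""
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t_prev] if t_prev >= 0 else torch.tensor(1.0)

    eps = eps_model(x_t, t)                                  # predicted noise
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()    # predicted clean image
    # Move the predicted clean image back to the (less noisy) t_prev level.
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps
```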
Key Results:
  • DDIM achieved a better (lower) FID score: 197.79 vs. 330.26 for the GAN
  • The GAN with self-attention showed a slightly higher Inception Score (1.13 vs. 1.09)
  • Generated images successfully captured key gravitational lensing features
Results
Summary of the best model and key metrics for each task:
  • Multi-Class Classification (Ensemble): Accuracy 91-92%, AUC 0.94-0.95
  • Lens Finding (ResNet18): AUC 0.98, Recall 0.95, F1 0.21
  • Diffusion Models (DDIM): FID 197.79, Inception Score 1.09

