I build systems that help robots understand where they are and what they see — from camera-LiDAR calibration pipelines to real-time visual SLAM and sensor fusion. Seeking new-grad roles in robotics and autonomous systems.
I'm a Master's student in Robotics at Northeastern University (graduating May 2026), concentrating in ECE. My work sits at the intersection of sensor fusion, SLAM, and deep learning — the systems that let robots understand their environment reliably, even in GPS-denied or visually challenging conditions.
Before Northeastern, I completed a B.Tech in Electronics and Communications at VIT India, and spent time as an IoT intern at SmartInternz building real-time sensing and cloud logging pipelines.
I care about making calibration and perception robust — not just accurate in ideal conditions, but degradation-aware and defensible in the field.
End-to-end implementations spanning calibration, localization, 3D reconstruction, and deep learning perception.
Targetless temporal calibration pipeline that estimates the real-world camera–LiDAR time offset on unsynchronized ROS data via cross-modal edge-alignment optimization. Edge-based scoring uses Canny detection and distance transforms; the estimated 70 ms offset was validated with Powell optimization and cross-checked against a dense grid search. IMU preintegration merges three consecutive scans for motion-compensated point-cloud densification.
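To give a flavor of the edge-alignment scoring, here is a minimal sketch, not the project code: it assumes a pinhole camera model, pre-extracted LiDAR edge points, and a hypothetical interpolate_cloud() that resamples the point stream at a shifted timestamp.

```python
import cv2
import numpy as np

def edge_distance_map(image_gray):
    """Canny edges -> distance transform (distance to the nearest edge pixel)."""
    edges = cv2.Canny(image_gray, 50, 150)          # expects a uint8 grayscale image
    return cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

def alignment_cost(dist_map, lidar_edge_pts, K, T_cam_lidar):
    """Mean distance-to-edge of projected LiDAR edge points (lower = better aligned)."""
    pts_h = np.hstack([lidar_edge_pts, np.ones((len(lidar_edge_pts), 1))])   # Nx4
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]                                     # 3xN
    pts_cam = pts_cam[:, pts_cam[2] > 0.1]           # keep points in front of the camera
    proj = K @ pts_cam
    uv = proj[:2] / proj[2]
    h, w = dist_map.shape
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    if not valid.any():
        return np.inf
    u = uv[0, valid].astype(int)
    v = uv[1, valid].astype(int)
    return float(dist_map[v, u].mean())

# Grid search over candidate time offsets; a Powell refinement would start from the minimum.
# offsets = np.arange(-0.2, 0.2, 0.005)
# costs = [alignment_cost(dist_map, interpolate_cloud(t), K, T_cam_lidar) for t in offsets]
# best_offset = offsets[int(np.argmin(costs))]
```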
Real-time state estimation pipeline fusing a 200 Hz IMU with GNSS. Implements the prediction and measurement-update steps, with IMU-only dead reckoning in GPS-denied zones and adaptive covariance weighting for degraded signals.
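A minimal sketch of the predict/update structure, assuming a simple constant-velocity state and GNSS position measurements; the full filter (IMU mechanization, bias states, adaptive covariance logic) is more involved.

```python
import numpy as np

class SimpleKF:
    def __init__(self, dt=1.0 / 200.0):                       # 200 Hz IMU rate
        self.x = np.zeros(6)                                   # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                        # p_{k+1} = p_k + v_k * dt
        self.Q = 1e-3 * np.eye(6)                              # process noise (tuned in practice)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])      # GNSS measures position only
        self.R = 2.0 ** 2 * np.eye(3)                          # ~2 m GNSS std (placeholder)

    def predict(self, accel, dt=1.0 / 200.0):
        """Propagate with IMU acceleration; runs even when GNSS drops out."""
        self.x[3:] += accel * dt                               # integrate acceleration into velocity
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, gnss_pos, r_scale=1.0):
        """GNSS position update; r_scale inflates R for degraded signals."""
        R = r_scale * self.R
        y = gnss_pos - self.H @ self.x
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```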
Visual SLAM pipeline on a ZED Mini stereo camera using RTAB-Map in ROS2. Deployed in underground tunnels; validated loop-closure quality, visual-odometry drift, and pose-graph consistency across long trajectories.
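One way the drift number can be summarized, shown as a small sketch: it assumes time-aligned estimated and reference trajectories as Nx3 position arrays, and the actual evaluation in the project may differ.

```python
import numpy as np

def translational_drift(est_xyz, ref_xyz):
    """End-point error as a percentage of the path length travelled."""
    path_len = np.sum(np.linalg.norm(np.diff(ref_xyz, axis=0), axis=1))
    end_err = np.linalg.norm(est_xyz[-1] - ref_xyz[-1])
    return 100.0 * end_err / max(path_len, 1e-9)
```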
Full SfM pipeline on 24 monocular images using SIFT features and RANSAC. Non-linear bundle adjustment over 1,477 landmarks improved accuracy by 26%. Camera localization in GPS-denied settings via essential matrix decomposition and PnP pose estimation.
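A condensed sketch of the two-view geometry and PnP steps, assuming calibrated images and pre-matched SIFT keypoints; bundle adjustment and track management are omitted.

```python
import cv2
import numpy as np

def two_view_pose(pts1, pts2, K):
    """Essential matrix with RANSAC, then decomposition to a relative pose."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

def localize_new_view(pts3d, pts2d, K):
    """Register a later image against existing landmarks via PnP + RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```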
From-scratch implementation of 3D Gaussian Splatting (3DGS) on a Buddha statue dataset, starting from a COLMAP-based SfM reconstruction. Gaussian parameters are optimized via differentiable rasterization with EWA splatting; densification uses a clone/split/prune strategy. Achieved 31.42 dB PSNR and 0.9531 SSIM at 7,500 iterations, showing that early stopping outperforms training to 30k iterations by over 16 dB on small datasets, where the Gaussians overfit. Containerized with Docker, with CI/CD via GitHub Actions.
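A schematic sketch of the clone/split/prune decision used during densification, assuming accumulated view-space position gradients and per-Gaussian scales are already tracked; thresholds and names are illustrative only.

```python
import torch

def densify_and_prune(xyz, scales, opacities, grad_accum,
                      grad_thresh=2e-4, scale_thresh=0.01, min_opacity=0.005):
    grad_mag = grad_accum.norm(dim=-1)
    needs_densify = grad_mag > grad_thresh                        # under-reconstructed regions

    # Small Gaussians with large gradients are cloned in place.
    clone_mask = needs_densify & (scales.max(dim=-1).values <= scale_thresh)
    # Large Gaussians with large gradients are split into smaller ones.
    split_mask = needs_densify & (scales.max(dim=-1).values > scale_thresh)
    # Nearly transparent Gaussians are pruned.
    prune_mask = opacities.squeeze(-1) < min_opacity

    new_xyz = torch.cat([xyz, xyz[clone_mask], xyz[split_mask]], dim=0)
    # (Scales of split Gaussians would also be shrunk before reuse.)
    return new_xyz, clone_mask, split_mask, prune_mask
```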
Seamless 2D mosaic of 28 underwater images using GTSAM factor-graph optimization. Loop-closure constraints on SIFT-matched overlapping pairs suppress accumulated map drift; robust homography estimation handles low-light, feature-sparse conditions.
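A reduced sketch of the graph structure using GTSAM's Pose2 types: each image gets a 2D pose, sequential registrations become odometry-style factors, and matched overlapping pairs add loop-closure factors. The real mosaic optimizes homography-derived constraints; the values below are placeholders.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.05]))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.02]))
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3, 1e-3, 1e-3]))

# Anchor the first image.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Sequential constraints from consecutive image registrations (28 images: keys 0..27).
for i in range(27):
    graph.add(gtsam.BetweenFactorPose2(i, i + 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
    initial.insert(i, gtsam.Pose2(float(i), 0.0, 0.0))
initial.insert(27, gtsam.Pose2(27.0, 0.0, 0.0))

# Loop closure between non-consecutive overlapping images.
graph.add(gtsam.BetweenFactorPose2(5, 20, gtsam.Pose2(15.0, 0.0, 0.0), loop_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
```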
Object-aware chromaticity estimation to improve color accuracy under varying illumination. Fine-tuned YOLOv8 on 5,600 COCO-derived images; produces corrected outputs across 80 object categories at 130 ms CPU inference.
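A minimal sketch of the fine-tuning step with the ultralytics API; the dataset YAML path and hyperparameters are placeholders, and the chromaticity-correction stage sits on top of the detections.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # start from a pretrained checkpoint
model.train(data="coco_chroma.yaml",           # hypothetical config for the 5,600-image set
            epochs=50, imgsz=640, batch=16)

# At inference, detections gate which pixels feed each per-object
# chromaticity estimate before the color correction is applied.
results = model("sample.jpg")
for box in results[0].boxes:
    cls_id = int(box.cls)                      # one of the 80 COCO categories
    x1, y1, x2, y2 = box.xyxy[0].tolist()      # region used for illuminant estimation
```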
Hybrid U-Net architecture for joint denoising and exposure correction on the LOLv1 benchmark. A combined perceptual and L1 loss removes color-cast artifacts while preserving structural detail.
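A compact sketch of such a combined loss, assuming a frozen VGG16 feature extractor for the perceptual term and inputs normalized to ImageNet statistics; the layer choice and weighting are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualL1Loss(nn.Module):
    def __init__(self, perceptual_weight=0.1):
        super().__init__()
        vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)            # keep the feature extractor frozen
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.w = perceptual_weight

    def forward(self, pred, target):
        pixel_term = self.l1(pred, target)                       # preserves structure
        feat_term = self.l1(self.vgg(pred), self.vgg(target))    # penalizes color cast / texture loss
        return pixel_term + self.w * feat_term
```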
YOLOv5 deployed on an NVIDIA Jetson Nano for real-time crack, bubble, and scratch detection. Trained on a custom 1,300-image dataset; Intel RealSense depth sensing provides 3D defect localization to distinguish surface from structural defects.
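A small sketch of lifting a 2D detection to 3D with an aligned depth frame, assuming the RealSense depth image is registered to the color image and the intrinsics are known; variable names are illustrative.

```python
import numpy as np

def detection_to_3d(bbox_xyxy, depth_m, fx, fy, cx, cy):
    """Back-project the bbox center using the depth (in meters) at that pixel."""
    u = int((bbox_xyxy[0] + bbox_xyxy[2]) / 2)
    v = int((bbox_xyxy[1] + bbox_xyxy[3]) / 2)
    z = float(depth_m[v, u])
    if z <= 0:                        # invalid or missing depth reading
        return None
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])        # defect position in the camera frame
```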
Built through hands-on projects, not just coursework.
I'm actively looking for new-grad roles in robotics — perception, SLAM, localization, calibration, and state estimation. If you're working on something interesting, I'd like to hear about it.