
IrisAtlasAI: AI-Powered Structural Iridology

March 1, 2025
IrisAtlasAI is a research-driven system for converting visible iris structure into measurable, reviewable, and explainable computational evidence. The project is built around a structural-first approach: instead of treating iris observation as only a visual impression, it organizes the iris into boundaries, regions, sectors, and feature-level evidence that can be inspected consistently. The goal is not to replace practitioner judgment or make automated medical claims. IrisAtlasAI is designed as an assistive research framework that improves documentation, reduces observer variability, and supports transparent review of structural iris findings.
The iris contains several structural regions that can be observed and mapped, including the pupil boundary, outer iris boundary, collarette, contraction furrows, radial texture patterns, lacuna-like openings, crypt-like structures, peripheral rims, and visible structural rings. IrisAtlasAI focuses on these visible structures as measurable image evidence. Each finding is treated as a spatial object with location, scale, visibility, and context, rather than as a loose descriptive label. This keeps the workflow grounded in what can actually be seen and reviewed.
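Treating each finding as a spatial object with location, scale, visibility, and context suggests a small structured record rather than a free-text label. The sketch below is a hypothetical Python representation; the field names and the normalized polar convention are illustrative assumptions, not a fixed IrisAtlasAI schema.

```python
from dataclasses import dataclass, field

@dataclass
class IrisFinding:
    """One visible structural finding treated as a spatial object.

    Field names and conventions are illustrative, not the project's schema.
    """
    label: str            # e.g. "lacuna-like opening", "contraction furrow"
    radial_pos: float     # normalized: 0.0 at pupil boundary, 1.0 at outer iris
    angle_deg: float      # angular position in degrees, 0-360
    scale: float          # approximate extent relative to iris radius
    visibility: float     # 0.0-1.0 visibility/contrast score
    notes: list[str] = field(default_factory=list)  # free-text context

# A crypt-like structure a third of the way out from the pupil, lower-left:
finding = IrisFinding(label="crypt-like structure",
                      radial_pos=0.35, angle_deg=210.0,
                      scale=0.08, visibility=0.7)
```

Keeping location and visibility as explicit numeric fields is what makes findings comparable across images, rather than loose descriptive labels.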
The core concept is a structured pipeline that moves from raw iris image to an explainable iris atlas. The image is first checked for quality, then the pupil and iris regions are localized, structural masks are generated, geometry is extracted, and findings are mapped into radial bands and clock-like sectors. This atlas-style representation allows the same iris to be described in a consistent coordinate system. A feature is not only detected; it can also be described by where it lies, which region it belongs to, how close it is to the pupil or outer iris boundary, and how confident the supporting image evidence is.
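The last pipeline step, placing a point into a radial band and a clock-like sector, can be sketched as a small geometric function. This is a minimal sketch assuming concentric pupil and iris circles and an assumed convention that sector 0 starts at 12 o'clock and runs clockwise; the function name and band/sector counts are illustrative.

```python
import math

def to_atlas_cell(x, y, pupil_center, pupil_r, iris_r,
                  n_bands=3, n_sectors=12):
    """Map an image point to a (radial band, clock sector) atlas cell.

    Assumes concentric circles. Band 0 is nearest the pupil; sector 0
    starts at 12 o'clock and runs clockwise (assumed convention, with
    image y growing downward).
    """
    dx, dy = x - pupil_center[0], y - pupil_center[1]
    r = math.hypot(dx, dy)
    # Normalize radius: 0 at the pupil boundary, 1 at the outer iris boundary.
    t = (r - pupil_r) / (iris_r - pupil_r)
    if not 0.0 <= t <= 1.0:
        return None  # point lies outside the annular iris region
    band = min(int(t * n_bands), n_bands - 1)
    # Angle measured clockwise from 12 o'clock (negate dy: image y points down).
    ang = math.degrees(math.atan2(dx, -dy)) % 360.0
    sector = int(ang / (360.0 / n_sectors))
    return band, sector
```

With this coordinate system, two images of the same iris describe a feature with the same (band, sector) cell, which is what makes atlas-style comparison consistent.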
IrisAtlasAI is designed around human-in-the-loop annotation. Model outputs are treated as assistive proposals, not final truth. A reviewer can accept, correct, redraw, or reject predicted structures, and those decisions become the basis for cleaner datasets and better model evaluation. This approach matters because subtle iris features are not equally visible in every image. Human review helps separate true structural evidence from blur, occlusion, weak contrast, illumination artifacts, and ambiguous texture. It also keeps the system accountable: every output should be traceable back to image evidence and review logic.
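The accept/correct/redraw/reject workflow implies an auditable decision log that filters model proposals into a cleaner dataset. The following is a hypothetical sketch of that logic; the record fields and the `apply_review` helper are assumptions for illustration, not the project's actual annotation format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewDecision:
    """Illustrative audit record for one model proposal (assumed schema)."""
    proposal_id: str
    action: str                # "accept" | "correct" | "redraw" | "reject"
    reviewer: str
    corrected_geometry: Optional[dict] = None  # supplied for correct/redraw

def apply_review(proposals, decisions):
    """Keep only proposals a reviewer accepted or corrected.

    Corrected geometry replaces the model output, so the resulting dataset
    reflects human review; unreviewed or rejected proposals are dropped.
    """
    by_id = {d.proposal_id: d for d in decisions}
    kept = []
    for p in proposals:
        d = by_id.get(p["id"])
        if d is None or d.action == "reject":
            continue
        if d.corrected_geometry is not None:
            p = {**p, "geometry": d.corrected_geometry, "source": "human"}
        else:
            p = {**p, "source": "human-accepted"}
        kept.append(p)
    return kept
```

Because every kept item carries its review provenance, each output remains traceable back to image evidence and review logic, as the text requires.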
A key part of the framework is geometry. The system can represent the pupil center, iris center, radii, annular iris region, usable iris area, and relative feature positions. These measurements create the spatial backbone for structural analysis. Sector mapping then makes the output easier to compare. Instead of saying that a feature appears somewhere in the iris, the system can place it within a defined radial band and angular sector. This supports structured reporting, longitudinal comparison, and future analysis across image sets.
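The geometric backbone described above can be sketched under a simple concentric-circle model. The function below is an assumed illustration: `occluded_fraction` stands in for a quality input (eyelid or reflection coverage) that a real pipeline would estimate from masks, and the returned keys are illustrative names.

```python
import math

def iris_geometry(pupil_center, pupil_r, iris_center, iris_r,
                  occluded_fraction=0.0):
    """Basic spatial backbone under a circular simplification.

    occluded_fraction is the portion of the annulus hidden by eyelids or
    reflections (an assumed quality input, not derived here).
    """
    annular_area = math.pi * (iris_r ** 2 - pupil_r ** 2)
    usable_area = annular_area * (1.0 - occluded_fraction)
    # Pupil decentration: pupil and iris centers rarely coincide exactly.
    center_offset = math.dist(pupil_center, iris_center)
    return {
        "annular_area": annular_area,
        "usable_area": usable_area,
        "center_offset": center_offset,
    }
```

Measurements like these give every detected feature a fixed spatial frame of reference, which is what enables the sector mapping and longitudinal comparison described above.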
IrisAtlasAI prioritizes near-infrared structural imaging because NIR images emphasize iris texture, folds, boundaries, and annular geometry while reducing many color and lighting variations seen in ordinary photography. This also creates a clear limitation: NIR is not suitable for pigment-color interpretation. Color-dependent observations require RGB imaging, color normalization, and separate validation. Keeping NIR structural analysis separate from RGB color analysis makes the research boundary cleaner and avoids mixing different evidence types too early.
The intended output is not just a mask or a prediction score. IrisAtlasAI is designed to produce explainable evidence objects: structural labels, geometry context, sector location, quality notes, confidence signals, and visual overlays that can be reviewed by a human. This evidence-first design makes the system more transparent than a simple black-box detector. It allows each result to be questioned: What was visible? Where was it located? Was the image quality sufficient? Was the structure reviewed or only proposed by a model?
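An evidence object of this kind can be sketched as a plain serializable record that bundles the label, location, confidence, quality notes, and review status together. The field names below are illustrative assumptions, not a fixed IrisAtlasAI output format.

```python
import json

def evidence_report(label, band, sector, confidence, quality_notes, reviewed):
    """Assemble one explainable evidence object as a reviewable dict.

    Field names are illustrative, not the project's actual schema.
    """
    return {
        "label": label,
        "location": {"radial_band": band, "sector": sector},
        "confidence": round(confidence, 2),
        "quality_notes": quality_notes,
        "status": "reviewed" if reviewed else "model-proposed",
    }

report = evidence_report("contraction furrow", band=2, sector=7,
                         confidence=0.81,
                         quality_notes=["mild motion blur"],
                         reviewed=False)
print(json.dumps(report, indent=2))
```

Because each record carries its own quality notes and review status, a reader can answer the questions the text poses: what was visible, where it was located, and whether it was reviewed or only proposed by a model.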
RGB iris imaging can add information that NIR does not preserve, such as pigment distribution, chromatic rings, and color gradients. A future color-aware branch would need its own annotation rules, lighting controls, calibration strategy, and evaluation metrics. The framework can also support multimodal research, such as combining static iris structure with digital pulse waveform features. Any such extension remains research-oriented, practitioner-assistive, and non-diagnostic, with clear separation between measurable signals and clinical interpretation.
The architecture is suitable for privacy-aware and reproducible workflows. Potential deployment models include local workstation analysis, secure API-based inference, annotation-assist tooling, visual overlay dashboards, and containerized research environments. The same foundation can support manual review, dataset preparation, model comparison, longitudinal tracking, and structured reporting without requiring the system to make autonomous diagnostic conclusions.
IrisAtlasAI is about turning iris observation into structured, measurable, and explainable evidence. It bridges traditional visual assessment with modern machine learning by emphasizing quality control, geometry, sector mapping, human review, and transparent outputs. Artificial intelligence does not replace the practitioner. It can make structural observation easier to document, compare, audit, and study. Built with clear non-diagnostic boundaries, IrisAtlasAI becomes an assistive framework for responsible structural iris analytics.