Pre-trained and fine-tuned models for computer vision, natural language processing, and medical AI.
Vision Transformer for neuropathology, pre-trained using DinoMX (DINO + iBOT). Linear-probe accuracy: 80.17%, KNN accuracy: 83.76%.
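The linear-probe and KNN numbers quoted for these self-supervised models follow the standard frozen-feature evaluation protocol: embeddings are extracted once from the frozen backbone, then a linear classifier and a nearest-neighbour classifier are fit on top. A minimal sketch with scikit-learn, using random placeholder features in place of real ViT embeddings (the feature dimension, k, and data here are illustrative assumptions, not values from these models):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Placeholder "frozen features": in practice these would be CLS-token
# embeddings from the pre-trained ViT, extracted once with no gradient.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 384)), rng.integers(0, 4, size=200)
X_test, y_test = rng.normal(size=(50, 384)), rng.integers(0, 4, size=50)

# Linear probe: a logistic-regression head on the frozen embeddings.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
linear_acc = probe.score(X_test, y_test)

# KNN evaluation: cosine-similarity nearest neighbours on the same
# embeddings (k=20 is a common DINO-style default, assumed here).
knn = KNeighborsClassifier(n_neighbors=20, metric="cosine").fit(X_train, y_train)
knn_acc = knn.score(X_test, y_test)
```

Because the backbone stays frozen, both scores measure representation quality rather than fine-tuning capacity, which is why the catalog reports them side by side.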
DINOv2 ViT-Large fine-tuned on CT-RATE for chest CT feature extraction with anatomically-aware cropping.
Lightweight medical LLM. 13.76% improvement over base TinyLlama across 3 medical benchmarks.
Magnification-aware self-supervised neuropathology model. Linear F1: 0.9307, KNN F1: 0.9286.
Medical LLM based on Mistral, fine-tuned on medical datasets.
Medical LLM based on LLaMA 2 7B, fine-tuned on medical datasets.
Medical LLM based on LLaMA 2 70B, fine-tuned on medical datasets.
Neuropathology ViT with hierarchical tile connectivity. Linear probe: 84.51%, KNN: 87.20%.
Medical LLM scoring 68.2% across USMLE/AIIMS/NEET exam benchmarks, a 4.42% improvement over base Mixtral.
LeJEPA with anatomical guidance: 80% of local crops are centered on structures drawn from the 118 TotalSegmentator organ classes.
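The anatomical-guidance rule above can be sketched as a crop-center sampler: with 80% probability the center lands inside a labeled structure in the segmentation mask, otherwise it is drawn uniformly over the volume. This is a hypothetical illustration of that sampling rule, not the model's actual implementation; the function name and toy mask are assumptions, and in practice the mask would come from TotalSegmentator:

```python
import numpy as np

def sample_crop_center(seg_mask, p_anatomical=0.8, rng=None):
    """Pick a local-crop center from a 3-D segmentation mask.

    With probability p_anatomical, sample a voxel whose label is nonzero
    (i.e. inside some segmented organ); otherwise sample uniformly over
    the whole volume. Sketch only; illustrative, not the released code.
    """
    if rng is None:
        rng = np.random.default_rng()
    coords = np.argwhere(seg_mask > 0)  # voxels inside any labeled organ
    if len(coords) > 0 and rng.random() < p_anatomical:
        return tuple(coords[rng.integers(len(coords))])
    return tuple(rng.integers(0, s) for s in seg_mask.shape)

# Toy volume with one labeled region standing in for an organ class.
mask = np.zeros((32, 32, 32), dtype=np.int64)
mask[8:16, 8:16, 8:16] = 5  # pretend label 5 is one of the organ classes
center = sample_crop_center(mask, rng=np.random.default_rng(0))
```

Biasing crop centers toward segmented anatomy keeps most local views on informative tissue rather than empty background, which is the stated motivation for the 80% guidance ratio.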