AI Models

Pre-trained and fine-tuned models for computer vision, natural language processing, and medical AI.

14 items

NP-TEST-0

Model

Vision Transformer for neuropathology, pretrained using DinoMX (DINO + iBOT). Linear probe accuracy: 80.17%, KNN accuracy: 83.76%.

image-classification apache-2.0 released ViT-Giant (~1B params)
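The linear probe and KNN numbers quoted for the vision models above are two standard frozen-feature evaluations. A minimal sketch of both, using synthetic embeddings in place of real ViT features (all data here is illustrative, not the released evaluation code):

```python
# Frozen-feature evaluation sketch: linear probe vs. KNN accuracy.
# Synthetic stand-ins for ViT tile embeddings; shapes and seeds are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Pretend embeddings: 600 tiles, 384-dim features, 3 classes with
# class-dependent means so the probes have signal to find.
labels = rng.integers(0, 3, size=600)
feats = rng.normal(size=(600, 384)) + labels[:, None] * 0.5

X_tr, X_te, y_tr, y_te = train_test_split(
    feats, labels, test_size=0.25, random_state=0, stratify=labels)

# Linear probe: a logistic-regression head trained on frozen features.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probe_acc = probe.score(X_te, y_te)

# KNN probe: nearest-neighbor vote directly in the frozen embedding space.
knn = KNeighborsClassifier(n_neighbors=20).fit(X_tr, y_tr)
knn_acc = knn.score(X_te, y_te)

print(f"linear probe acc: {probe_acc:.3f}, knn acc: {knn_acc:.3f}")
```

Both metrics leave the backbone untouched; they differ only in whether a learned linear head or a non-parametric neighbor vote reads out the features.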

DINOv2 ViT-Large fine-tuned on CT-RATE for chest CT feature extraction with anatomically aware cropping.

feature-extraction cc-by-nc-sa-4.0 released ViT-Large with Registers (1024-dim, patch 14)

Lightweight medical LLM. 13.76% improvement over base TinyLlama across 3 medical benchmarks.

text-generation apache-2.0 released TinyLlama (1.1B params)

MAD-NP

Model

Magnification-aware self-supervised neuropathology model. Linear F1: 0.9307, KNN F1: 0.9286.

image-feature-extraction apache-2.0 released ViT-Giant with Registers (~1B params)

Medical LLM based on Mistral, fine-tuned on medical datasets.

text-generation apache-2.0 released Mistral 3x7B

Medical LLM based on LLaMA 2 7B, fine-tuned on medical datasets.

text-generation apache-2.0 released LLaMA-2 7B

Medical LLM based on LLaMA 2 70B, fine-tuned on medical datasets.

text-generation apache-2.0 released LLaMA-2 3x70B

NP-GIANT

Model

Neuropathology ViT with hierarchical tile connectivity. Linear probe: 84.51%, KNN: 87.20%.

image-feature-extraction apache-2.0 released ViT-Giant with Registers (~1B params)

MELT-Mixtral-8x7B

Model

Medical LLM achieving 68.2% across USMLE/AIIMS/NEET. 4.42% improvement over base Mixtral.

text-generation apache-2.0 released Mixtral 8x7B Mixture-of-Experts (47B params)
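The Mixtral backbone listed above is a sparse Mixture-of-Experts: each token is routed to a few expert feed-forward blocks and their outputs are mixed by gate weights. A toy NumPy sketch of top-2 routing (the gate, experts, and sizes here are illustrative, not the actual model):

```python
# Toy top-2 mixture-of-experts routing: each token activates only 2 of
# n_experts "expert" functions, weighted by a softmax over gate logits.
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Route each token (row of x) to its top-k experts and mix outputs."""
    logits = x @ gate_w                              # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # top-k expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = top[t]
        weights = np.exp(logits[t, sel])
        weights /= weights.sum()                     # softmax over selected experts
        for w, e in zip(weights, sel):
            out[t] += w * experts[e](x[t])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 16, 8, 4
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" is just a fixed random linear map in this sketch.
mats = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
experts = [(lambda m: (lambda v: v @ m))(m) for m in mats]

x = rng.normal(size=(tokens, d))
y = moe_forward(x, gate_w, experts)
print(y.shape)
```

Sparse routing is why an 8x7B model totals roughly 47B parameters while only a fraction of them run per token.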

LeJEPA with anatomical guidance: 80% of local crops centered on structures from 118 TotalSegmentator organ classes.

feature-extraction cc-by-nc-sa-4.0 released ViT-Large (1024-dim, patch 14)
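The "80% of local crops centered on structures" idea above can be sketched as biased crop-center sampling over a segmentation mask. This is a hypothetical 2D illustration (function name, shapes, and the toy mask are assumptions, not the released training pipeline):

```python
# Anatomically guided crop sampling sketch: draw crop centers from
# organ-labeled voxels with probability `guided_frac`, else uniformly.
import numpy as np

def sample_crop_centers(mask, n_crops, guided_frac=0.8, rng=None):
    """Return (n_crops, ndim) crop-center coordinates over `mask`."""
    rng = np.random.default_rng() if rng is None else rng
    organ_idx = np.argwhere(mask > 0)                    # voxels in any organ class
    all_idx = np.argwhere(np.ones(mask.shape, dtype=bool))
    centers = []
    for _ in range(n_crops):
        if len(organ_idx) and rng.random() < guided_frac:
            centers.append(organ_idx[rng.integers(len(organ_idx))])
        else:
            centers.append(all_idx[rng.integers(len(all_idx))])
    return np.stack(centers)

# Toy 2D "scan": one square organ label in a 64x64 field.
mask = np.zeros((64, 64), dtype=np.int32)
mask[24:40, 24:40] = 1
centers = sample_crop_centers(mask, 1000, rng=np.random.default_rng(0))
inside = (mask[centers[:, 0], centers[:, 1]] > 0).mean()
print(f"fraction of crops centered on organ: {inside:.2f}")
```

With an 80/20 split the measured fraction lands a bit above 0.8, since some of the uniform 20% also fall inside the organ.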