ConTEB evaluation datasets

Evaluation datasets of the ConTEB benchmark. Use the "test" split where available, otherwise "validation", otherwise "train".

- illuin-conteb/covid-qa (4.46k rows, updated Jun 2)
- illuin-conteb/geography (11.4k rows, updated May 30)
- illuin-conteb/esg-reports (3.74k rows, updated May 30)
- illuin-conteb/insurance (180 rows, updated May 30)
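The split-selection rule above ("test" if available, else "validation", else "train") can be sketched in Python. This assumes the Hugging Face `datasets` library; the helper name `pick_eval_split` is hypothetical, not part of the benchmark's tooling:

```python
def pick_eval_split(available_splits):
    """Return the preferred evaluation split per the ConTEB rule:
    "test" if available, otherwise "validation", otherwise "train"."""
    for name in ("test", "validation", "train"):
        if name in available_splits:
            return name
    raise ValueError(f"no usable split among {available_splits}")


if __name__ == "__main__":
    # Usage sketch (requires network access to the Hugging Face Hub).
    from datasets import load_dataset, get_dataset_split_names

    repo = "illuin-conteb/covid-qa"
    split = pick_eval_split(get_dataset_split_names(repo))
    ds = load_dataset(repo, split=split)
    print(split, len(ds))
```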
ConTEB training datasets

Training data for the InSeNT method.

- illuin-conteb/narrative-qa (47.3k rows, updated Jun 2)
- illuin-conteb/squad-conteb-train (91.8k rows, updated Jun 2)
- illuin-conteb/mldr-conteb-train (566k rows, updated Jun 2)
ConTEB models

Our models trained with the InSeNT approach. These are the checkpoints used to run the evaluations reported in our paper.

- illuin-conteb/modern-colbert-insent (feature extraction, 0.1B parameters, updated Jun 2)
- illuin-conteb/modernbert-large-insent (sentence similarity, 0.4B parameters, updated Jun 2)