Question about discontinuous / incomplete-looking segmentation masks in ReXGroundingCT
Hello,
First of all, thank you for sharing such a valuable dataset.
I am writing to ask about some potential anomalies I observed in the segmentation masks. It appears that some annotations are incomplete or discontinuous across the slice direction (z-axis).
For instance, in the train_6_a_2 case, a lesion that is clearly marked on one slice suddenly disappears completely on the immediately following slice. This transition looks quite unnatural. When examining the data in the coronal view, the lesion mask appears to be abruptly cut off vertically.
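For reference, this is roughly how I checked it (just a sketch; I am assuming the masks are plain NIfTI volumes readable with nibabel, that the z-axis is the last array axis, and the file name below is only illustrative):

```python
# Rough check of mask continuity along z.
# Assumptions (not from the dataset docs): masks are NIfTI files, z is the last axis,
# and "train_6_a_2.nii.gz" is only an illustrative file name.
import nibabel as nib
import numpy as np

mask = nib.load("train_6_a_2.nii.gz").get_fdata() > 0
annotated = np.where(mask.any(axis=(0, 1)))[0]   # axial slice indices with any foreground
gaps = np.where(np.diff(annotated) > 1)[0]       # positions where the next annotated slice is not adjacent

print("annotated slices:", annotated.tolist())
print("gaps after slices:", annotated[gaps].tolist() if gaps.size else "none")
```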
Could you please clarify if this is an intended characteristic of the annotation policy, or if it might be an error?
Thank you.
Hello,
Thank you for your comment! The annotation is solely done on the axial view, so the other views will have weird behavior.
Regarding lesions disappearing, this is most likely a misuse of the interpolation feature on the annotation platform, so it is probably just an error. Some cases will have annotations only on the first and last slices where the lesion appears, and others only on a middle slice.
You could apply some post-processing to either remove or expand annotations that span fewer than X slices, though of course that will not be a perfect solution.
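As a rough sketch of that idea (it treats each 3D connected component as a "finding", which may or may not match how findings are stored in your setup, and assumes z is the last axis):

```python
# Sketch: drop components of a binary mask that span fewer than min_slices axial slices.
# Connected components stand in for "findings" here; adjust to your own file layout.
import numpy as np
from scipy import ndimage

def drop_thin_components(mask: np.ndarray, min_slices: int = 3) -> np.ndarray:
    labeled, n = ndimage.label(mask > 0)
    kept = np.zeros(mask.shape, dtype=bool)
    for comp in range(1, n + 1):
        z_slices = np.unique(np.where(labeled == comp)[2])  # z assumed to be the last axis
        if z_slices.size >= min_slices:
            kept |= labeled == comp
    return kept
```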
We are hoping to fix such issues in the second version of our dataset.
Thank you for the prompt and clarifying response. It was very helpful in understanding the underlying cause of the issues.
Regarding the suggestion for post-processing: While I understand the approach, I believe that correcting or expanding annotations for a large-scale dataset of 3,000 cases—and verifying their anatomical correctness—is likely infeasible for individual users to handle. Consequently, training a robust model with the current version seems quite challenging due to the noise in the ground truth.
Could you share any specific plans or a timeline for the release of the second version? I am curious if these segmentation discontinuity issues will be fully resolved in the upcoming release.
I also have a critical question regarding the private test set (Ground Truth). Does the test set also suffer from similar "sparse" annotations (e.g., only first, middle, or last slices annotated) due to the interpolation errors? If the test GT is incomplete, the evaluation metrics—especially the Dice Similarity Coefficient, which heavily relies on volumetric overlap—would not accurately reflect the true performance of the models.
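To make the concern concrete, here is the Dice score I have in mind (a plain NumPy version, nothing specific to your official evaluation script): any voxels a model correctly segments beyond a sparse ground truth are counted as false positives and pull the score down.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2 * |pred AND gt| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0
```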
I am asking these detailed questions because I have a strong interest in this research topic and wish to conduct rigorous experiments with valid data.
Thank you again for your time and engagement.
Thank you for raising these concerns.
The test set and the validation set (which is public) are randomly split from the same distribution, so you can check the quality of the validation set to see for yourself. The test and validation sets went through more stringent quality assurance, as described in the paper, and were annotated exclusively by radiologists, whereas some examples in the train set were annotated by professional medical annotators and medical students.
Still, this issue should not exist in the majority of cases in the training set, especially as you go up in index; put differently, it does not exist in most findings/cases.
Moreover, as described in the paper, the training set is annotated for only up to 3 instances per finding, which is another thing that makes direct supervised fine-tuning on this dataset more challenging, as noted in the limitations. The validation and test sets are fully annotated.
Annotation for the 2nd version of the dataset is set to end on February 1st, with the dataset itself released perhaps closer to mid-2026. In that version this issue should not exist for any training example, and all instances will be annotated.
What I was describing is filtering for findings that have only a single annotated slice and then either (1) removing them or (2) keeping them but expanding them 2–3 slices in both directions, in which case they would still likely capture the finding up to a small margin of error. You could experiment with this; it shouldn't be difficult to see whether it results in better scores, or to manually verify a couple of examples.
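A rough sketch of the expansion part, using a z-only binary dilation (the 2-slice radius and the assumption that z is the last axis are just examples):

```python
# Sketch: expand annotations by n_slices in both z directions via a z-only dilation.
# Assumes z is the last axis; apply it only to findings you have flagged as too thin.
import numpy as np
from scipy import ndimage

def expand_along_z(mask: np.ndarray, n_slices: int = 2) -> np.ndarray:
    struct = np.zeros((1, 1, 2 * n_slices + 1), dtype=bool)
    struct[0, 0, :] = True  # dilation only along z, no in-plane growth
    return ndimage.binary_dilation(mask > 0, structure=struct)
```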
Thank you for your detailed response and for sharing the timeline regarding the second version of the dataset. It is good to know that the validation and test splits have undergone stringent quality assurance.
Regarding the training set, I agree that the majority of cases seem fine, with only occasional incomplete annotations. However, I would like to clarify the issue I observed in train_6. In this case, there are actually several annotated slices (not just one), yet a significant portion of the lesion remains unannotated and discontinuous. Therefore, the suggested heuristic of filtering or expanding single-slice annotations would not resolve this specific type of incompleteness. It seems inevitable that we will have to proceed with training while accepting a certain level of label noise and segmentation errors in the current version.
I also noted from the paper that the training set is limited to a maximum of 3 instances per finding. I agree that this is a constraint that requires specific engineering approaches to handle effectively.
I have one final question: Once I submit the test set results, approximately how long does it take for the results to appear on the benchmark leaderboard? I need to estimate the turnaround time to align with my research paper submission deadline.
Thank you again for your support.
Hello,
You can expect results within 48 hours during regular working days.
Thanks,
Mohammed



