Participation guidelines

The CuRIOUS 2022 Challenge will offer two different image segmentation tasks:

Task 1: Segmentation of brain tumor in intra-operative ultrasound (pre-resection)

Task 2: Segmentation of resection cavity in intra-operative ultrasound (during- and post-resection)


A participating team can join either or both sub-tasks for the challenge.

The CuRIOUS challenge requires all participating teams to submit a manuscript in LNCS format (the same format as the main MICCAI conference; 4 pages minimum, 8 pages maximum) that describes their algorithms and reports their results on the pre-released training dataset. All automatic and semi-automatic segmentation approaches are welcome. The methodology manuscripts will be published in the MICCAI post-conference proceedings.

Only participants who submit a manuscript will receive the test dataset for the final contest, and only those who submit test data results and present their methods at the CuRIOUS event will be considered for the prizes and included in the follow-up publications. At least one member of each participating team must therefore register for the CuRIOUS 2022 MICCAI event.

 

Manuscript submission instructions

Please submit your manuscript via email to curious.challenge@gmail.com. In the body of the email, be sure to include the following:

  • Manuscript title
  • Corresponding author, affiliation, and email address
  • Manuscript abstract
  • Your team name

Please include your manuscript as a PDF document in the email, and make sure that the manuscript:

  • contains a concise but sufficient description of your method
  • fully reports the results on the training dataset (each metric for each case)
  • includes the computational time and implementation details (software tools & hardware specs)
  • is in MICCAI format with 4-8 pages
  • is free of grammatical errors

The review process is not double-blind.

Evaluation metrics

The automatic segmentation results will be assessed using the 95% Hausdorff distance (HD95), Dice coefficient, recall, and precision.
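
As a rough illustration, these metrics can be computed from binary masks as in the sketch below (assuming numpy and scipy; the challenge's official evaluation code may differ, e.g. in how it handles voxel spacing or empty masks):

    # Minimal sketch of the four metrics for binary 3-D masks, assuming
    # numpy arrays pred and gt of the same shape with values {0, 1}.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def dice_precision_recall(pred, gt):
        tp = np.logical_and(pred == 1, gt == 1).sum()
        fp = np.logical_and(pred == 1, gt == 0).sum()
        fn = np.logical_and(pred == 0, gt == 1).sum()
        return 2 * tp / (2 * tp + fp + fn), tp / (tp + fp), tp / (tp + fn)

    def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
        # Distance of each foreground voxel of one mask to the nearest
        # foreground voxel of the other; HD95 is the 95th percentile of
        # the pooled (symmetric) distances.
        d_to_gt = distance_transform_edt(gt == 0, sampling=spacing)
        d_to_pred = distance_transform_edt(pred == 0, sampling=spacing)
        return np.percentile(np.concatenate([d_to_gt[pred == 1],
                                             d_to_pred[gt == 1]]), 95)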

Ranking

Submissions will be ranked based on these four metrics as follows.

For each team:

  • Average HD95 over all cases
  • Average Dice over all cases
  • Average Recall over all cases
  • Average Precision over all cases

Across all teams with respect to each metric:

  • Rank average HD95
  • Rank average Dice
  • Rank average Recall
  • Rank average Precision

Final ranking:

The final rank will be the average of the four ranks. Ties are possible.
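
For concreteness, here is a minimal sketch of this rank-then-average scheme (the team names and metric values are invented for illustration; note that this particular example happens to end in a tie):

    # Illustrative rank-then-average scheme; the numbers are invented.
    # Lower is better for HD95, higher is better for the other metrics.
    teams = {
        "TeamA": {"hd95": 4.2, "dice": 0.85, "recall": 0.83, "precision": 0.88},
        "TeamB": {"hd95": 3.1, "dice": 0.80, "recall": 0.86, "precision": 0.81},
    }

    def rank(metric, lower_is_better):
        order = sorted(teams, key=lambda t: teams[t][metric],
                       reverse=not lower_is_better)
        return {t: i + 1 for i, t in enumerate(order)}

    ranks = [rank("hd95", True), rank("dice", False),
             rank("recall", False), rank("precision", False)]

    # Final rank = average of the four per-metric ranks; ties are possible.
    final = {t: sum(r[t] for r in ranks) / 4 for t in teams}
    print(final)  # {'TeamA': 1.5, 'TeamB': 1.5} -- a tie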

If a case is missing from a team's results, its Dice, recall, and precision will be set to zero, and its HD95 will be set as if a prediction with all voxels labeled as background had been provided.


Test data results submission

Participants must submit their segmentation results for the test data on the Grand Challenge site for evaluation.

The prediction files must be NIfTI volumes (.nii.gz), with 0 corresponding to background and 1 to foreground, and named TestCase<num>-US-<time>.nii.gz (e.g. TestCase3-US-before.nii.gz for task 1, TestCase6-US-during.nii.gz for task 2). The prediction files must have the same shape and be in the same space as the test image files.
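
As a sketch, a prediction can be written into the space of its test image as below (SimpleITK and the predictions output folder are assumptions; any NIfTI I/O library works equally well):

    # Write one prediction in the space of the corresponding test image.
    import numpy as np
    import SimpleITK as sitk

    test_img = sitk.ReadImage("TestCase3-US-before.nii.gz")
    arr = sitk.GetArrayFromImage(test_img)

    # Placeholder: replace with your model's binary output (same shape).
    mask = np.zeros(arr.shape, dtype=np.uint8)

    pred = sitk.GetImageFromArray(mask)
    pred.CopyInformation(test_img)  # same origin, spacing, direction
    sitk.WriteImage(pred, "predictions/TestCase3-US-before.nii.gz")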

All prediction files must be zipped into a flat zip archive (no directories) and uploaded to the submission pages:

Task 1: brain tumor in pre-resection US

Task 2: resection cavity in post-resection US

The zip file for task 1 should contain the US-before predictions; the one for task 2, the US-during and US-after predictions. Teams participating in both tasks may also upload a single zip containing the predictions for both tasks to the two submission pages. A sketch of building such a flat archive follows.
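
For example, a flat archive can be built with Python's zipfile module (the predictions folder name is an assumption):

    # Zip prediction files without any directory structure inside the archive.
    import zipfile
    from pathlib import Path

    with zipfile.ZipFile("task1_submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(Path("predictions").glob("TestCase*-US-*.nii.gz")):
            zf.write(f, arcname=f.name)  # arcname drops the folder -> flat zip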