# News & FAQ

### NEWS

1) The CHAOS dataset now has a DOI. You can cite it in your scientific work:

https://doi.org/10.5281/zenodo.3367758

2) CHAOS is featured in the December issue of Computer Vision News Magazine!

### FAQ

1) Is it possible to join the challenge and submit results to CHAOS?

Yes, online submission is open. You can find further information on the main page.

2) Why was my "join request" rejected or not accepted?

The challenge contains human data, which requires certain ethical permissions. Only educational and/or non-commercial usage is allowed. Therefore, we try to avoid anonymous users in order to keep a true record of interested participants. That is why using the official e-mail account of your university/institution/company is strongly recommended. If the organizers cannot verify your e-mail address on a related web site, access to the dataset is not granted.

3) Will the dataset be available only on the challenge day at ISBI?

No. We prefer an approach similar to previous grand challenges such as SLIVER07. The dataset and the challenge are still available. New participants may submit their results to us for evaluation, and previous participants have the option to submit updated results. The current scores are published on our leaderboard.

4) Can I obtain the ground truths of the test data?

No. The ground truth of the test data will never be published. Only training sets include ground truths.

5) Is it possible to use CHAOS data in another academic study?

Yes, the CHAOS data is under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. You may use it as long as you give appropriate credit, provide the DOI link of the data, and indicate if changes were made.

6) Is it possible to obtain the evaluation code?

Yes. You can download it from https://github.com/emrekavur/CHAOS-evaluation for the MATLAB, Python and Julia languages.

7) When can I learn the result of my submission?

The evaluation is handled by grand-challenge.org's servers. The results of the automatic evaluation will be added to the leaderboard shortly.

8) I want to submit my results for evaluation, but I do not want them published on the leaderboard. Can you send my scores to me privately via e-mail?

No, it is not possible. Each evaluated submission is automatically published on the leaderboard regardless of its score.

9) I do not like my score (or one of my scores) and I want it to be removed from the leaderboard. Is it possible?

No. According to the rules, every evaluated submission is automatically published, so removal is not possible.

10) My DICE scores are very high, but my final score is lower. How is this possible? Is the evaluation fair and correct? Why do you use so many metrics? Isn't DICE enough?

Medical image segmentation is used in clinical operations, so the tolerance for error is very low in clinical settings. There is no single metric that evaluates 3D segmented data completely and fairly in terms of clinically acceptable results. That is why a combination of different metrics is used to calculate the final score. This approach has been used, and is still being used, in many respectable segmentation challenges.
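To illustrate why a high Dice score can coexist with a low final score, the sketch below averages an overlap score with a surface-distance score. The `to_score()` mapping and its thresholds are hypothetical choices for this example, not the official CHAOS evaluation parameters:

```python
# Hypothetical sketch: combining several metrics into one score.
# The to_score() mapping and its thresholds are illustrative only,
# NOT the official CHAOS evaluation parameters.

def dice(a, b):
    """Dice coefficient between two sets of voxel coordinates."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def to_score(value, worst):
    """Map a metric (0 = perfect) to a 0-100 score; values at or
    beyond `worst` score 0."""
    return max(0.0, 100.0 * (1.0 - value / worst))

# Ground truth: a 50x50 slab of voxels on one slice.
gt = {(x, y, 0) for x in range(50) for y in range(50)}
# Prediction: misses one column and adds a single far-away voxel.
pred = (gt - {(0, y, 0) for y in range(50)}) | {(200, 200, 0)}

d = dice(gt, pred)   # still ~0.99: the overlap barely changed
max_ssd = 213.5      # assumed: distance from (200, 200, 0) to the
                     # nearest true voxel, far beyond the threshold

final = (to_score(1.0 - d, worst=0.2) + to_score(max_ssd, worst=60.0)) / 2.0
# The overlap score stays around 95, the distance score collapses
# to 0, and the averaged final score drops below 50.
```

A single stray voxel far from the organ barely moves Dice but ruins the maximum symmetric surface distance, which is exactly the kind of clinically relevant error a multi-metric score is designed to catch.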

11) Why is it necessary to prepare PDF file for each submission?

The main goals of challenges are to provide a unique dataset and up-to-date information about the problem. In CHAOS, we are trying to build a significant database about abdominal organ segmentation to help scientists all over the world for years to come. This database includes not only scores but also the methods behind those scores. That is how the challenge can contribute to the literature, and scientists can use this information in their own scientific work.

12) Is it possible to re-use previously uploaded supplementary file?

It is possible if the algorithm of the new submission is very similar to the previous one. However, we strongly encourage participants to prepare a new file that indicates the changes, even if they are minor. We would like to provide as much information as possible to all visitors.

13) What if someone uses multiple usernames/nicknames?

All of their results will be deleted from the results page immediately when they are detected.

14) How can I see the metric outputs and scores of each task/case?

The results table shows the average score of all tasks by default. If you would like to see the scores of individual tasks, you may click the "Additional Metrics" or "Show all metrics" button.

If you would like to examine the scores set by set, just click on any score on the results page. Detailed scores will be shown on a new page.

15) What are the differences between the five tasks? Could you explain them again?

1. Liver Segmentation (CT & MRI): This task is also called "cross modality" and is based on using a single system that can segment the liver from both CT and MRI. For instance, the training and test sets of a machine learning approach would contain images from both modalities without explicitly feeding the model with the corresponding modality information. A unique study about this is referenced below, and this task is one of the most interesting tasks of the challenge. Keep in mind that any kind of ensemble or fusion of individual systems (i.e. two models, one working on CT and the other on MRI, with one selected by some decision criterion) would not be valid for this category; such systems can be evaluated as individual systems in Tasks 2 and 3.

Valindria, V. V., Pawlowski, N., Rajchl, M., Lavdas, I., Aboagye, E. O., Rockall, A. G., ... & Glocker, B. (2018, March). Multi-modal learning from unpaired images: Application to multi-organ segmentation in CT and MRI. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 547-556). IEEE.

2. Liver Segmentation (CT only): This is mostly a regular liver segmentation task (such as SLIVER07). It is easier than SLIVER07 because it only contains healthy livers aligned in the same direction and patient position. However, the challenging part is the enhanced vascular structures (portal phase) due to contrast injection. One of the biggest challenges in this case is the "maximum symmetric surface distance" (Max SSD), which measures errors relevant to surgical precision (the datasets are from transplantation donors, who will undergo a very complicated surgery). For instance, an inter-observer score was 95 with Max SSD while it was 97.4 without it. Here, the "score" refers to the average of Volume Overlap Error (or Dice), Mean SSD, RMS SSD, Max SSD and Relative Volume Error. (P.S. The final evaluation strategy will be announced in the following week.)

3. Liver Segmentation (MRI only): Similar to Task 2, this is also a regular liver segmentation task, but it includes two different pulse sequences: T1-DUAL and T2-SPIR. Moreover, T1-DUAL has two forms (in phase and out phase). The developed system should work on both sequences without explicit knowledge of the pulse sequence. In this task, ensembles or fusions of individual systems (i.e. two models, one working on T1-DUAL and the other on T2-SPIR) are allowed. However, there might be a penalty in the scoring.

4. Segmentation of abdominal organs (CT & MRI): In this task, the interesting part is that the CT datasets have only the liver annotated, while the MRI datasets have four annotated abdominal organs. Thus, in addition to the "cross modality" requirement described in Task 1, the output of the system (i.e. a single organ vs. four organs) should change based on the modality.

5. Segmentation of abdominal organs (MRI only): The same task as Task 3, but extended to four abdominal organs.
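To make the "single system, pooled data" restriction of Tasks 1 and 4 concrete, here is a minimal sketch of the difference between a valid pooled-data setup and a modality-routing ensemble that would not qualify. The file paths and model names are hypothetical, not the actual CHAOS directory layout:

```python
import random

# Hypothetical case lists; in practice these would come from the
# CHAOS training archives.
ct_cases = [f"CT/{i}/image.dcm" for i in range(1, 21)]
mri_cases = [f"MR/{i}/T1DUAL.dcm" for i in range(1, 21)]

# Valid for Tasks 1 & 4: ONE model trained on ONE pooled set.
# The modality label is deliberately not attached to the samples,
# so the model cannot branch on it.
training_pool = ct_cases + mri_cases
random.shuffle(training_pool)

# NOT valid for Tasks 1 & 4: routing each input to a per-modality
# model. This counts as two separate systems, which should instead
# be evaluated individually in Tasks 2 and 3.
def route_by_modality(path):
    return "ct_model" if path.startswith("CT/") else "mri_model"
```

The routing function above is exactly the kind of "decision criterion" the task description rules out: as soon as the pipeline knows the modality and picks a model accordingly, it is an ensemble of two systems rather than one cross-modality system.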

Last update: 08/08/2019