In case (b), the fixed bias values c1 = 10, c2 = 6 and c3 = -10 likewise yield θ² = 112, for which the population absolute-agreement ICC is ρ3A ≈ 0.422. The population consistency ICC is in turn ρ3C = 0.8, and again the ICC(A,1) and ICC(C,1) sampling distributions are approximately centered on these values.

A clinical researcher has developed a new ultrasound-based method to quantify scoliotic deformity. Before adopting the new method in routine clinical practice, he conducted a reliability study to assess its test-retest reliability. He recruited 35 scoliosis patients with a variety of deformities at a pediatric hospital and measured their scoliotic deformity with his new method, repeating the measurement 3 times for each patient. He analyzed his data with a single-measurement, absolute-agreement, two-way mixed-effects model and reported his results in a peer-reviewed journal as ICC = 0.78 with a 95% confidence interval of 0.72-0.84. Based on these ICC results, he concluded that the test-retest reliability of his new method is "moderate" to "good".

I used the Reliability procedure (Analyze -> Scale -> Reliability Analysis) and requested intraclass correlations (ICC) with a two-way mixed model. For comparison, I ran this model once with the absolute-agreement definition and once with the consistency definition. I was surprised to see that the ICC was higher for absolute agreement than for consistency. Since I understood absolute agreement to be the stricter definition, this result seems counterintuitive.
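One way to see how this ordering can arise is to compute both single-measure ICCs directly from the two-way ANOVA mean squares. The sketch below uses the McGraw & Wong formulas; the data matrices are invented for illustration and are not from any study mentioned here.

```python
import numpy as np

def single_measure_iccs(X):
    """ICC(A,1) and ICC(C,1) from an n-subjects x k-raters matrix X,
    computed from two-way ANOVA mean squares (McGraw & Wong notation)."""
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)          # subject means
    col_means = X.mean(axis=0)          # rater/occasion means
    MSR = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    MSC = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    resid = X - row_means[:, None] - col_means[None, :] + grand
    MSE = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    icc_c1 = (MSR - MSE) / (MSR + (k - 1) * MSE)
    icc_a1 = (MSR - MSE) / (MSR + (k - 1) * MSE + (k / n) * (MSC - MSE))
    return icc_a1, icc_c1

# A constant rater offset inflates MSC, pushing absolute agreement below
# consistency; but when the rater means nearly coincide (MSC < MSE), the
# (k/n)(MSC - MSE) term turns negative and ICC(A,1) exceeds ICC(C,1).
a1, c1 = single_measure_iccs(np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]]))
a2, c2 = single_measure_iccs(np.array([[1., 2.], [4., 3.], [5., 6.], [8., 7.]]))
print(a1 < c1, a2 > c2)  # True True
```

The only structural difference between the two estimators is the (k/n)(MSC - MSE) term in the denominator of ICC(A,1); its sign, not the "strictness" of the definition, determines which estimate comes out higher in a given sample.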
Please point me to something in the definitions of these criteria that explains this result.

Figure 5 clearly shows that, in the win condition, there was an excellent match between the ICC values and the behavioural data. In the loss condition, the data also show a consistent trend from the single-measurement ICCs to the average-measurement ICCs, which could be interpreted to mean that the test-retest reliability of the hemodynamic measurements during the risk-taking task improves in a pattern that matches the reliability of the behavioural score. In addition, the low-to-fair ICC values in the loss condition may mean that part of the unreliability stems from the more variable behaviour subjects exhibit when confronted with an unwanted loss during risky trials. Overall, these results indicate that HbO amplitude is a suitable biomarker for risk-taking studies. Further research is needed to identify other potential sources of instability that contribute to the test-retest variability of fNIRS-based measures in risk-taking tasks.

Our most important practical conclusion is as follows: it is neither necessary nor advisable to commit to a particular statistical model (one-way, two-way random, or two-way mixed) at the outset of analyzing a subjects-by-measurements matrix of individual experimental data obtained, for example, in a reliability study...
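The single- to average-measurement improvement noted above is a general property: averaging k repeated measurements raises the ICC according to a Spearman-Brown-type relation. A minimal sketch, using illustrative numbers rather than the study's estimates:

```python
def average_measure_icc(icc_single, k):
    """Average-measure ICC implied by a single-measure ICC when k
    repeated measurements are averaged (Spearman-Brown form)."""
    return k * icc_single / (1 + (k - 1) * icc_single)

# Illustrative only: a "fair" single-measure ICC of 0.4 over 3 repetitions
# rises to roughly 0.667 for the mean of the 3 measurements.
print(round(average_measure_icc(0.4, 3), 3))  # 0.667
```

This is why even modest single-measurement ICCs, as in the loss condition, can translate into noticeably higher average-measurement ICCs.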