Kappa statistics are most often used to quantify the reproducibility of a discrete variable. See the data in Table 25-9. Suppose medical records indicate that 39 of 217 people received a specific drug. When a questionnaire was administered, 14 of the 39 people who had been prescribed the drug reported that it had been prescribed to them, while 171 of the 178 people who had not been prescribed the drug said they had not. The agreement between the two methods of gathering information is therefore (14 + 171)/217 = 85%, which looks quite good. However, relatively few people were actually prescribed the drug, so even if every participant had said they were not prescribed it, regardless of whether they had been, agreement would still be high (178/217 = 82%). A statistic is therefore needed that accounts for the agreement expected by chance alone.

The kappa coefficient [14], which corrects for chance agreement, has also been proposed as a performance index for a variety of classification problems. Kappa is used to compare BCI systems with different numbers of classes, because classification accuracy (CA) is hard to use for such comparisons: a CA of 50% on a two-class problem corresponds to a CA of 25% on a four-class problem, so comparisons across problems with different numbers of classes are not fair when CA is the quantifier. In addition, CA becomes ambiguous as a performance measure when some classes occur much more frequently than others. The kappa coefficient, by contrast, represents the proportion of agreement that remains after the proportion of agreement that could occur by chance has been removed.
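To make the Table 25-9 example concrete, Cohen's kappa can be computed directly from the 2x2 agreement table. This is a minimal sketch, not code from any cited source; the function name and table layout are my own:

```python
def cohen_kappa(table):
    """Cohen's kappa for a square agreement table.

    Rows index the first method (medical records), columns the
    second (questionnaire); table[i][j] counts subjects rated i
    by the first method and j by the second.
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    # Observed agreement: fraction of subjects on the diagonal.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Chance agreement: sum of products of the marginal proportions.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Table 25-9: 14 "yes/yes", 25 "yes/no", 7 "no/yes", 171 "no/no".
print(round(cohen_kappa([[14, 25], [7, 171]]), 2))  # -> 0.39
```

Despite the 85% raw agreement, kappa is only about 0.39, because most of that agreement is expected by chance given how rarely the drug was prescribed.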
Therefore, using kappa suppresses the portion of the apparent CA that is obtained by chance. For an N-class problem in which the N classes are equiprobable (each occurring with probability 1/N), the relationship between the kappa coefficient k and CA can be written as k = (CA - 1/N)/(1 - 1/N).

The objective of this paper is to propose a kappa statistic for free-response dichotomous assessments that requires neither the definition of regions of interest nor any other simplification of the observed data. This kappa statistic also accounts for the patient-specific clustering [4-6] of several observations made on the same patient.

The results above show the effectiveness of the proposed measures in the cases examined. They are similar to those produced by , but while the latter lacks a clear operational interpretation, the former has a formally defined yet intuitive meaning: it measures the amount of normalized information exchanged between the two raters through the agreement channel. Diagnostic agreement is a measure designed both to assess the reliability of a diagnostic test and to assess the consistency between different interpretations of the same diagnostic results. The same approach has been used successfully in other areas, such as machine learning, e.g., to identify noise in data sets and to compare multiple predictors in ensemble methods (see, e.g., [40, 45]). Many different techniques have been devised to measure diagnostic agreement.
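Under the equiprobable-class assumption discussed above, converting CA to kappa is a one-line rescaling. A hypothetical helper (the name `kappa_from_ca` is mine) makes the two-class versus four-class comparison explicit:

```python
def kappa_from_ca(ca, n_classes):
    # With N equiprobable classes, chance-level accuracy is 1/N;
    # kappa rescales CA so that chance maps to 0 and perfect to 1.
    chance = 1.0 / n_classes
    return (ca - chance) / (1.0 - chance)

print(kappa_from_ca(0.50, 2))            # -> 0.0   (chance level for 2 classes)
print(round(kappa_from_ca(0.50, 4), 3))  # -> 0.333 (well above chance for 4 classes)
print(kappa_from_ca(0.25, 4))            # -> 0.0   (chance level for 4 classes)
```

The same 50% CA thus yields kappa 0 on a two-class problem but kappa 1/3 on a four-class problem, which is exactly why kappa permits fairer comparisons across class counts.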
For example, raw agreement, Cohen's kappa, intraclass correlation, the McNemar test, and the log odds ratio have been proposed for dichotomous ratings, i.e., where the scale has only two allowed values; conversely, weighted kappa with linear or quadratic weights, intraclass correlation [2, 44], and association models have been proposed for ratings on categorical scales, i.e., where there are more than two allowed values.
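For ordinal scales with more than two categories, the weighted kappa mentioned above penalizes disagreements in proportion to their distance on the scale. A minimal sketch (my own helper, covering the linear and quadratic weighting schemes):

```python
def weighted_kappa(table, weights="quadratic"):
    """Weighted kappa for a square table over ordered categories."""
    k = len(table)
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):  # disagreement weight: 0 on the diagonal, up to 1
        d = abs(i - j) / (k - 1)
        return d * d if weights == "quadratic" else d

    # Weighted observed and chance-expected disagreement.
    obs = sum(w(i, j) * table[i][j]
              for i in range(k) for j in range(k)) / n
    exp = sum(w(i, j) * row_tot[i] * col_tot[j]
              for i in range(k) for j in range(k)) / n ** 2
    return 1.0 - obs / exp

# Two raters grading 35 cases on a 3-point ordinal scale.
ratings = [[10, 2, 0],
           [3, 8, 2],
           [0, 1, 9]]
print(round(weighted_kappa(ratings), 3))  # -> 0.825
```

With quadratic weights, adjacent-category disagreements cost only a quarter as much as disagreements across the full scale, which is why this variant is preferred when categories are ordered.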