Specially named quantiles include the percentiles (q = 100), deciles (q = 10), quintiles (q = 5), quartiles (q = 4), and the median (q = 2). There are 99 percentiles, each corresponding to a quantile represented by an integer number of percent (such as 99%). Deciles are the 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, and 90th percentiles. Quintiles are the 20th, 40th, 60th, and 80th percentiles. Quartiles are the 25th, 50th, and 75th percentiles. The median is the 50th percentile. Some software programs regard the minimum and maximum as the 0th and 100th percentile, respectively; however, that terminology extends beyond the traditional statistical definitions.

For an infinite population, the pth q-quantile is the data value at which the cumulative distribution function equals p/q. For a finite sample of N data points, calculate Np/q: if this is not an integer, round up to the next integer to get the appropriate sample number (assuming the samples are ordered by increasing value); if it is an integer, take the average of the value for that sample number and the next.
For example, given the 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}, the first quartile is determined by 10 × 1/4 = 2.5, which rounds up to 3, and the third sample is 7. The second quartile (the same as the median) is determined by 10 × 2/4 = 5, which is an integer, so take the average of the fifth and sixth values: (8 + 10)/2 = 9. The third quartile is determined by 10 × 3/4 = 7.5, which rounds up to 8, and the eighth sample is 15. The motivation for this method is that the first quartile should divide the data between the bottom quarter and the top three-quarters. Ideally, this would mean 2.5 of the samples lie below the first quartile and 7.5 lie above, which in turn means that the third sample is "split in two", belonging to both the first and second quarters of the data, so the quartile boundary falls right at that sample. (Note that a quartile is a boundary between two quarters, which are the sets of data themselves: the first quarter comprises the data below the first quartile, the second quarter the data between the first and second quartiles, and so on. In statistics the first quarter is the lowest quarter, whereas in everyday life, such as ranking students by grade, the first quarter is often regarded as the highest. Standardized test results are commonly misreported as a student scoring "in the 80th percentile", as if the 80th percentile were an interval to score "in", which it is not; one can score "at" some percentile, or between two percentiles, but not "in" one.)
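The rule above translates directly into a short program. The following sketch, written in Python, assumes the data fit in memory and that 0 < p < q; the function name sample_quantile is illustrative and not taken from any particular library.

```python
def sample_quantile(data, p, q):
    """p-th q-quantile of data by the rule above (assumes 0 < p < q)."""
    ordered = sorted(data)
    n = len(ordered)
    if (n * p) % q:                      # N*p/q is not an integer:
        k = (n * p) // q + 1             # round up to the next integer
        return ordered[k - 1]            # and take that sample (1-indexed)
    k = (n * p) // q                     # N*p/q is an integer:
    return (ordered[k - 1] + ordered[k]) / 2   # average it with the next sample

values = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]
print(sample_quantile(values, 1, 4))   # first quartile  -> 7
print(sample_quantile(values, 2, 4))   # second quartile -> 9.0
print(sample_quantile(values, 3, 4))   # third quartile  -> 15
```

Run on the example data, it reproduces the quartiles worked out above (7, 9, and 15).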
Different software packages use slightly different algorithms, so the results they produce may differ slightly for a given data set. Besides the algorithm given above, which is the proper one based on probability, there are at least four other algorithms in common use (for various reasons, including ease of computation).
If a distribution is symmetric, the median equals the mean, but this is not true in general.
Quantiles are useful measures because they are less sensitive than the mean and other moment-based statistics to long-tailed distributions and outliers. For instance, for a random variable with an exponential distribution, any particular sample has roughly a 63% chance of being less than the mean. This is because the exponential distribution has a long tail of positive values but places no probability on negative values.
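A short sketch making the 63% figure concrete, assuming an exponential distribution with rate lam (so its mean is 1/lam); the variable names are illustrative. Analytically, P(X < 1/lam) = 1 − e^(−1) ≈ 0.632, independent of the rate.

```python
import math
import random

lam = 2.0
print(1 - math.exp(-1))  # analytic probability of falling below the mean, ~0.632

# Empirical confirmation by simulation; random.expovariate is the
# standard-library exponential sampler.
random.seed(0)
samples = [random.expovariate(lam) for _ in range(100_000)]
mean = sum(samples) / len(samples)
below = sum(s < mean for s in samples) / len(samples)
print(below)  # roughly 0.63
```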
Empirically, if the data you are analyzing are not actually distributed according to the distribution you assume, or if there are other potential sources of outliers far removed from the mean, then quantiles may be more useful descriptive statistics than the mean and other moment-related statistics.
Closely related is the subject of robust regression, in which the sum of the absolute values of the observed errors is used in place of the sum of squared errors. The connection is that the mean is the single estimate of a distribution that minimizes expected squared error, while the median minimizes expected absolute error. Robust regression shares the median's relative insensitivity to large deviations in outlying observations.
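A minimal numerical illustration of that connection, using a made-up data set containing one outlier: the mean gives a smaller total squared error than the median, while the median gives a smaller total absolute error than the mean.

```python
import statistics

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20, 100]  # illustrative data; 100 is an outlier
mean, median = statistics.mean(data), statistics.median(data)

def sum_sq(c):
    return sum((x - c) ** 2 for x in data)

def sum_abs(c):
    return sum(abs(x - c) for x in data)

print(sum_sq(mean) < sum_sq(median))    # True: the mean minimizes squared error
print(sum_abs(median) < sum_abs(mean))  # True: the median minimizes absolute error
```

Note how the outlier pulls the mean (about 18.7) well above the median (10), while the median barely moves; that is the insensitivity robust regression exploits.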