Statistical regularity has motivated the development of the relative frequency concept of probability. Most of the procedures commonly used to make statistical estimates or tests were developed by statisticians who used this concept exclusively. They are usually called frequentists, and their position is called frequentism. This school is often associated with the names of Jerzy Neyman and Egon Pearson, who described the logic of statistical hypothesis testing.
Since the 18th century, there has been a debate among statisticians between the frequentists and the Bayesians. The former insisted that statistical procedures only make sense when one uses the relative frequency concept. The Bayesians supported the use of degrees of belief as a basis for statistical practice.
The frequentist position is the one you probably heard at school: perform an experiment lots of times, and measure the proportion of trials in which you get a positive result - this proportion, if you perform the experiment enough times, is the probability.
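This convergence of the observed proportion toward the underlying probability can be sketched in a few lines of Python. The simulated "experiment" and the true probability `p` here are illustrative assumptions, not part of the original definition:

```python
import random

def relative_frequency(trials, p=0.5, seed=42):
    # Simulate `trials` repetitions of an experiment that succeeds
    # with true probability p, and return the observed proportion
    # of successes - the frequentist estimate of that probability.
    rng = random.Random(seed)
    successes = sum(1 for _ in range(trials) if rng.random() < p)
    return successes / trials

# The more repetitions, the closer the observed proportion
# tends to lie to the underlying probability p.
for n in (10, 1000, 100000):
    print(n, relative_frequency(n))
```

With only 10 trials the estimate can be far from 0.5; by 100,000 trials it is typically within a fraction of a percent, which is the sense in which "enough times" matters.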
The problem comes in those cases where we haven't performed an experiment yet, or where there's no possible way an experiment could be performed - in these cases, frequentism can't help us. There is also the category (or reference class) problem, normally raised by asking "what is the probability that the Sun will rise tomorrow?" The answer depends on which category of 'experiments' we count: is it "the Sun rising on date X", "the Sun rising", or "observable suns rising"? Each category yields a different relative frequency, and it's not immediately clear which of these is the correct set of 'experiments' to use.