A FIN-learning machine M receives successive values of the
function f it is learning; at some point M outputs a CONJECTURE,
which should be a correct index of f. When n machines
simultaneously learn the same function f and at least k of them
output correct indices of f, we have SYMMETRIC TEAM LEARNING,
denoted [k,n]FIN.
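The [k,n]FIN success criterion can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in (the conjectures, the target function's index set, and the checking predicate); real FIN-learners are arbitrary algorithmic devices, and correctness of an index is of course undecidable in general:

```python
def team_succeeds(conjectures, is_correct_index, k):
    """[k,n]FIN success criterion: a team of n machines succeeds on f
    iff at least k of their final conjectures are correct indices of f."""
    correct = sum(1 for e in conjectures if is_correct_index(e))
    return correct >= k

# Hypothetical example: suppose the target f has correct indices {3, 7}
# and a team of n = 4 machines outputs these conjectures.
conjectures = [3, 7, 5, 3]                                 # three of four correct
print(team_succeeds(conjectures, lambda e: e in {3, 7}, k=3))  # True
print(team_succeeds(conjectures, lambda e: e in {3, 7}, k=4))  # False
```

The team's success ratio here is k/n = 3/4, the quantity compared against critical ratios below.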
[Daley, 1992] shows that sometimes a team or a probabilistic
learner can simulate another one, if their probabilities (or team
success ratios k/n) are close enough. According to [Daley, 1992],
the critical ratio closest to 1/2 from the left is 24/49; the paper
[Daley, 1996] provides other unusual constants. These results are
complicated and provide a full picture of comparisons only for
FIN-learners with success ratio above 12/25.
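The constants above can be placed on the number line with exact rational arithmetic. The following sketch only verifies their ordering (12/25 < 24/49 < 1/2) and is not part of any proof:

```python
from fractions import Fraction

half = Fraction(1, 2)
daley_92 = Fraction(24, 49)   # critical ratio closest to 1/2 from the left
threshold = Fraction(12, 25)  # comparisons fully understood above this ratio

# 24/49 lies strictly between 12/25 and 1/2, at distance 1/98 from 1/2.
print(threshold < daley_92 < half)   # True
print(half - daley_92)               # 1/98
```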
We generalize [k,n]FIN teams to ASYMMETRIC TEAMS [Smith, Apsitis
COLT'1997]. We introduce a two-player game on two 0-1 matrices
defining two asymmetric teams; the outcome of the game reflects the
comparative power of these teams. Using this game we show that,
for any a, b, c, d, the problem of deciding whether [a,b]FIN is a
subset of [c,d]FIN is algorithmically solvable. We also show that
the set of all critical ratios is well-ordered. Simulating
asymmetric teams with probabilistic machines provides an insight
about the origin of the unusual constants like 24/49.