Computer – Bridge Problem
The well-known strategy card game bridge is as hard for computers to play as it is for humans. The decision-theoretic, imperfect-information and uncertainty aspects of the game make it a perfect test bed for many AI fields, including machine learning.
Reinforcement learning (or, lately, deep reinforcement learning) might be the best approach, as is the case for many other AI games; nevertheless, supervised learning is possible if you can obtain historical game logs that include win information.
The problem can be broken down into:
Hand Strength estimation:
This is to estimate the winning potential of the player’s own hand, as well as the opponents’ hands, based on the visible cards. The most successful approaches use Monte Carlo sampling: complete the hidden hands by sampling the inaccessible cards, count the number of wins, and thereby estimate the winning probability.
Exact computation of the winning probability is slower than sampling. Parametric estimation using historical data might also offer machine learning applications here.
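The sampling idea above can be sketched in a few lines. This is a toy illustration, not a full bridge evaluator: instead of playing out the deal, it uses standard high-card points as a crude proxy for winning potential, and the function and variable names (`estimate_strength`, `seen`, etc.) are my own, not from any established library.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "SHDC"
DECK = [r + s for r in RANKS for s in SUITS]
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}  # standard high-card points

def estimate_strength(my_hand, seen, trials=10000):
    """Monte Carlo estimate: the fraction of random completions of the
    unseen cards in which our hand holds more high-card points than any
    other hand (a toy proxy for winning potential)."""
    unseen = [c for c in DECK if c not in set(my_hand) | set(seen)]
    my_hcp = sum(HCP.get(c[0], 0) for c in my_hand)
    wins = 0
    for _ in range(trials):
        sample = random.sample(unseen, len(unseen))  # shuffle unseen cards
        # deal the remaining cards to the other three players, 13 each
        # (assumes no cards have been played yet)
        other_hcps = [sum(HCP.get(c[0], 0) for c in sample[i * 13:(i + 1) * 13])
                      for i in range(3)]
        if my_hcp > max(other_hcps):
            wins += 1
    return wins / trials
```

A real system would replace the high-card-point comparison with an actual play-out (e.g. a double-dummy solver) on each sampled deal; the sampling loop itself stays the same.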
Opponent modeling:
This involves estimating the probability of each available action (which card to play) for each opponent. Here the players’ historical data can be used for estimation. One successful approach uses a neural network for opponent modeling (1), considering factors such as player count, position, and game type. Of course, other approaches are possible.
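As a baseline for the idea of estimating action probabilities from historical data, here is a simple frequency-count model with add-one smoothing. This is a minimal sketch of my own, not the neural-network approach of (1); the class and method names are illustrative.

```python
from collections import defaultdict

class OpponentModel:
    """Frequency-based opponent model: estimates P(action | player, context)
    from historical logs, with add-one smoothing so unseen actions keep a
    small nonzero probability. A neural network would replace this table
    with a learned function of the context features."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, player, context, action):
        # context: any hashable summary (e.g. position, cards led so far)
        self.counts[(player, context)][action] += 1

    def predict(self, player, context):
        c = self.counts[(player, context)]
        total = sum(c.values()) + len(self.actions)  # add-one smoothing
        return {a: (c[a] + 1) / total for a in self.actions}
```

The `context` key is deliberately open-ended: the factors mentioned above (player count, position, game type) would be folded into it.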
Decision making and Risk management:
This involves coming up with utility functions and with listing and rating strategies. This is one potential area for ML: strategies can be scored based on historical or current data.
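Once a utility function is chosen, the decision step reduces to expected-utility maximization. A minimal sketch, assuming the outcome probabilities come from hand-strength estimation and opponent modeling; all names here are illustrative, not a fixed API.

```python
def choose_action(actions, outcome_probs, utility):
    """Pick the action with the highest expected utility.

    actions:       candidate actions (e.g. cards that could be played)
    outcome_probs: action -> {outcome: probability}, as estimated by the
                   hand-strength and opponent models
    utility:       outcome -> numeric score (the utility function)
    """
    def expected(action):
        return sum(p * utility(outcome)
                   for outcome, p in outcome_probs[action].items())
    return max(actions, key=expected)
```

Risk management enters through the shape of `utility`: a concave utility penalizes high-variance lines of play, while a linear one simply maximizes the average score.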
Approaches:
Various approaches have been tried. Some of them are:
1) Probabilistic approaches (Bayesian networks (2), etc.)
2) Rule-based (event–action pairs)
3) Function-based (neural networks, etc.)
4) Genetic algorithms (3)