Course: Logic in Games

People: Johan van Benthem.

  1. Introduction: Exploring the Realm of Logic in Games.
  2. Part I: Game Logics and Process Structure
    1. Intro
    2. Games as Processes: Definability and Invariance
    3. Preference, Game Solution, and Best Action
    4. Games with Imperfect Information
    5. Making Strategies Explicit.
    6. Infinite Games and Temporal Evolution
    7. From Games to Models for Games
    8. Conclusion
  3. Part II: Logical Dynamics and Theory of Play
    1. Intro
    2. Logical Dynamics in a Nutshell
    3. Deliberation as Iterated Update
    4. Dynamic Epistemic Mechanisms of Play
    5. Toward a Theory of Play
    6. Conclusion
  4. Part III: Players’ Powers and Strategic Games
    1. Intro
    2. Forcing Powers
    3. Matrix Games and Their Logics
    4. Rational Dynamics for Strategic Games
    5. Conclusion
  5. Part IV: Logic Games
    1. Intro
    2. Formula Evaluation
    3. Model Comparison
    4. Model Construction
    5. Argumentation and Dialogue
    6. General Lines through Logic Games
    7. Conclusion
  6. Part V: Operations on Games
    1. Intro
    2. Dynamic (Non-Linear) Logic for Sequential Game Operations
    3. Linear Logic of Parallel Game Operations
    4. Conclusion
  7. Part VI: Comparisons and Merges
    1. Intro
    2. Logic Games with Imperfect Information
    3. Knowledge Games
    4. Sabotage Games and Computation
    5. Logic Games can Represent Game Logics
    6. Merging Logic Games with Game Logics
    7. Conclusion
  8. Conclusion

 

Course #2: Reasoning about Knowledge

Based on the book of the same name by Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi.

Duration: Two Days.

Day 1: Chapters 1 to 6 and Day 2: Chapters 7 to 11

  1. Introduction and Overview.
  2. A Model for Knowledge.
  3. Completeness and Complexity.
  4. Knowledge in Multi-Agent Systems.
  5. Protocols and Programs.
  6. Common Knowledge and Agreement.
  7. Knowledge-Based Programming.
  8. Evolving Knowledge.
  9. Logical Omniscience.
  10. Knowledge and Computation.
  11. Common Knowledge Revisited.

This course investigates reasoning about knowledge.

Finding Optimal Paths

Given a search space, apply a search method that finds a solution quickly and guarantees that the solution is optimal, i.e., both the lookup (search) cost and the execution (solution) cost should be minimal.

Brute force procedure

BritishMuseum (){

  1. Explore the entire search space;
  2. return the optimal solution;

}
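The brute-force procedure can be sketched in Python. I assume here, purely as an illustrative convention, that the search space is a dict mapping each node to a dict of neighbours and edge costs:

```python
def british_museum(graph, start, goal):
    """Brute force: enumerate every simple path from start to goal,
    return the cheapest one. Exponential time; for illustration only."""
    best_path, best_cost = None, float("inf")

    def explore(node, path, cost):
        nonlocal best_path, best_cost
        if node == goal:
            if cost < best_cost:                 # found a cheaper solution
                best_path, best_cost = list(path), cost
            return
        for nxt, k in graph.get(node, {}).items():
            if nxt not in path:                  # keep paths simple (no cycles)
                path.append(nxt)
                explore(nxt, path, cost + k)
                path.pop()

    explore(start, [start], 0)
    return best_path, best_cost

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 5}, "B": {"G": 1}}
print(british_museum(graph, "S", "G"))  # (['S', 'A', 'B', 'G'], 4)
```

Exploring the entire space guarantees optimality but is hopeless on any realistically sized graph, which is what motivates the bounded variants below.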

Branch&Bound(){

  1. open <- {(start, NIL, Cost(start))}
  2. closed <- {}
  3. while TRUE
  4.   do if Empty(open)
  5.                   then return FAILURE;
  6.   pick  the cheapest node n from  open
  7.   if GoalTest(Head(n))
  8.            then return ReconstructPath(n)
  9.    else  children <- MoveGen(Head(n))
  10.              for each m ∈ children{
  11.                       do Cost(m) <- Cost(n) + K(m, n)
  12.                        Add  (m, Head(n), Cost(m) ) to open }

}
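The Branch & Bound loop above admits a minimal Python sketch, under the same assumed adjacency-dict representation; open is kept as a priority queue of (cost, path) pairs, so "pick the cheapest node" is a heap pop:

```python
import heapq

def branch_and_bound(graph, start, goal_test):
    """Branch & Bound over paths: always extend the cheapest partial
    path on open; the first goal picked is optimal (costs >= 0)."""
    open_list = [(0, [start])]                    # (Cost, path) pairs
    while open_list:
        cost, path = heapq.heappop(open_list)     # cheapest node n on open
        head = path[-1]
        if goal_test(head):                       # GoalTest(Head(n))
            return path, cost                     # path is kept, no reconstruction
        for m, k in graph.get(head, {}).items():  # children <- MoveGen(Head(n))
            if m not in path:                     # skip cycles
                heapq.heappush(open_list, (cost + k, path + [m]))
    return None                                   # FAILURE: open is empty

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 5}, "B": {"G": 1}}
print(branch_and_bound(graph, "S", lambda n: n == "G"))  # (['S', 'A', 'B', 'G'], 4)
```

Storing the whole path in each open entry stands in for the (node, parent, cost) triples of the pseudocode, so ReconstructPath is unnecessary in this sketch.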

 

 

GeneralisedBranch&Bound(){

  1. start with all possible solutions
  2. repeat
  3.      refine the solution s with the least estimated cost further
  4. until the cheapest solution s is fully refined
  5. return s

}

 

Dijkstra’s Alg(){

  1. Color all nodes white
  2. cost(start) <- 0
  3. parent(start) <- NIL
  4. for all other nodes n
  5.        do cost(n) <- ∞
  6. repeat
  7.     select lowest cost white node n
  8.      color n black
  9.      for all white neighbors m of n
  10.             do if (cost(n) + k(n, m)) < cost(m)
  11.                   then cost(m) <- cost(n) + k(n, m)
  12.                             parent(m) <- n
  13. until all nodes are colored black

}
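The white/black colouring above translates almost line for line into Python; again I assume the adjacency-dict convention, and every node must appear as a key (possibly with no neighbours):

```python
def dijkstra(graph, start):
    """Dijkstra's algorithm as in the notes: white = unvisited,
    black = settled with final cost."""
    cost = {n: float("inf") for n in graph}   # all other nodes start at infinity
    parent = {start: None}
    cost[start] = 0
    white = set(graph)                        # color all nodes white
    while white:                              # until all nodes are black
        n = min(white, key=lambda x: cost[x]) # lowest-cost white node
        white.remove(n)                       # color n black
        for m, k in graph[n].items():         # relax white neighbors of n
            if m in white and cost[n] + k < cost[m]:
                cost[m] = cost[n] + k
                parent[m] = n
    return cost, parent

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 5},
         "B": {"G": 1}, "G": {}}
cost, parent = dijkstra(graph, "S")
print(cost["G"], parent["G"])  # 4 B
```

The linear scan for the cheapest white node keeps the sketch close to the pseudocode; a production version would use a priority queue instead.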

A*(){

// heuristic fun f(n) = g(n) + h(n)

//g(n) =known cost of path found from S to n

//h(n) = Estimated cost of path from n to G

  1. open <- List(start)
  2. f(start) <- h(start)
  3. parent(start) <- NIL
  4. closed <- {}
  5. while open is not EMPTY
  6.    do
  7.        Remove node n from open such that f(n) has the lowest value
  8.        Add n to closed
  9.        if GoalTest(n) == TRUE
  10.             then return ReconstructPath(n)
  11.        neighbors <- MoveGen(n)
  12.        for each m ∈ neighbors
  13.               do switch
  14.                    case m ∉ open and m ∉ closed : /* new node */
  15.                            Add m to open
  16.                            parent(m) <- n
  17.                            g(m) <- g(n) + k(n, m)
  18.                            f(m) <- g(m) + h(m)
  19.                     case m ∈ open :
  20.                            if (g(n) + k(n, m) ) < g(m)
  21.                                   then  parent(m) <- n
  22.                                              g(m) <- g(n) + k(n, m )
  23.                                              f(m) <- g(m) + h(m)
  24.                     case m ∈ closed :
  25.                            if (g(n) + k(n, m) ) < g(m)
  26.                                   then  parent(m) <- n
  27.                                              g(m) <- g(n) + k(n, m )
  28.                                              f(m) <- g(m) + h(m)
  29.                                             propagateImprovement(m)
  30. return FAILURE

}
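A compact Python sketch of the algorithm above, under one simplifying assumption: the heuristic is consistent, so a node on closed never needs its g-value improved and the m ∈ closed case (with PropagateImprovement) never fires. With an inconsistent heuristic that case would have to be restored. The adjacency-dict graph and the example heuristic are my own illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* with f(n) = g(n) + h(n), assuming h is consistent
    (so closed nodes are never reopened)."""
    g = {start: 0}
    parent = {start: None}
    open_heap = [(h(start), start)]              # f(start) = 0 + h(start)
    closed = set()
    while open_heap:
        f, n = heapq.heappop(open_heap)          # lowest f-value on open
        if n in closed:
            continue                             # stale duplicate entry
        if n == goal:                            # GoalTest(n)
            path = []                            # ReconstructPath via parents
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1], g[goal]
        closed.add(n)
        for m, k in graph.get(n, {}).items():    # neighbors <- MoveGen(n)
            new_g = g[n] + k
            if new_g < g.get(m, float("inf")):   # new node, or cheaper route
                g[m] = new_g
                parent[m] = n
                heapq.heappush(open_heap, (new_g + h(m), m))
    return None                                  # FAILURE

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 5}, "B": {"G": 1}}
h = lambda n: {"S": 3, "A": 2, "B": 1, "G": 0}[n]
print(a_star(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 4)
```

Pushing a fresh heap entry instead of updating in place replaces the explicit m ∈ open case of the pseudocode: the stale entry is simply skipped when popped.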

PropagateImprovement(m){

  1.  neighbours <- MoveGen(m)
  2. for each  s ∈ neighbours
  3.         do newGvalue <- g(m)  + k(m, s)
  4.               if newGvalue < g(s)
  5.                   then parent(s) <- m
  6.                             g(s) <- newGvalue
  7.                             if s ∈ closed
  8.                                   then PropagateImprovement(s)

}

 

Admissibility of A*

Assumptions:

  1. The branching factor is finite
  2. Every edge cost is at least some ε > 0
  3. h(n) never overestimates: h(n) ≤ h*(n), the true cost of the cheapest path from n to G

Game Play

Board Games:

  • two persons,
  • Zero sum,
  • complete information,
  • alternating moves, and
  • deterministic games.

Convention of game trees

 

  • MiniMax()
  • AlphaBeta()

  • SSS*()
  • B*()

Course #1: AFCAI Search Algorithms

State Space Search: Constructive search problems

  • Uninformed/Blind Search
    • SimpleSearch1()
    • SimpleSearch2()
    • SimpleSearch3()
    • DepthFirstSearch()
    • BreadthFirstSearch()
    • DepthBoundedDFS()
    • DepthFirstIterativeDeepening()
  • Heuristic Search: Knowledge Based Search
    • BestFirstSearch()
    • HillClimbing()

Solution Space Search:

  • perturbation search problems
    • BeamSearch()
    • TabuSearch()
  • Randomized Search
    • IteratedHillClimbing()
    • RandomWalk()
    • SimulatedAnnealing()
    • GeneticAlgorithm()

Case Study : TSP Problem

  • Constructive method:
    • TSPNearestNeighbour()
    • GA’s
      • Path representation
      • Ordinal representation
      • Adjacency representation
    • Neural Networks
    • TSPACO()

Optimal Solution Search

  • BritishMuseumProcedure()
  • Branch&Bound()
  • Refinement search
    • generalB&B()
    • Dijkstra’s Algorithm()
    • A* ()
    • IDA*()
    • RecursiveBFS()
    • Pruning CLOSED
      • DivideAndConquerFrontierSearch()
      • SparseMemoryGraphSearch()
    • Pruning OPEN
      • BreadthFirstHeuristicSearch()
      • DivideandConquerBeamSearch()
      • BeamStackSearch()
      • DivideandConquerBeamStackSearch()

Rule Based Systems

  • AO*
  • RETE net

Planning

  • ForwardStateSpacePlanning()
  • BackwardStateSpacePlanning()
  • GoalStackPlanning()
  • PlanSpacePlanning()
  • Means Ends Analysis
  • NOAH
  • Hierarchical Planning

Constraint Satisfaction

  • Constraint Propagation
    • 1-consistency
    • 2-consistency
      • AC1()
      • AC3()
      • AC4()
    • Huffman-Clowes scene labelling
    • 3-consistency
      • PC1()
      • PC2()
    • i-consistency
    • Directional consistency
      • DirectionalArcConsistency()
      • DirectionalPathConsistency()
      • AdaptiveConsistency()
      • Minimum width orderings
    • Backtracking
      • BackTracking()
      • BackTrackingWithLookAhead()
      • BackJumping() {Gaschnig's, graph-based, conflict-directed}

Game Play

  • MiniMax()
  • AlphaBeta()
  • SSS*()
  • B*()

 

RL vs Chunking (EBL) in Soar

Reinforcement learning (RL) in Soar allows agents to alter behavior over time by
dynamically changing numerical indifferent preferences in procedural memory in response
to a reward signal.

This learning mechanism contrasts starkly with chunking.  Whereas
chunking is a one-shot form of learning that increases agent execution performance by
summarizing sub-goal results, RL is an incremental form of learning that probabilistically
alters agent behavior.

Euchre vs Bridge

Many people say that Bridge and Euchre are very similar to each other, and if you like to play one then you will more than likely enjoy playing the other as well.

Bridge:

  • four people play as two sets of partners, each pair playing against the other team;
  • a full standard deck of 52 cards is used;
  • all the cards are dealt one by one around the circle until each player has 13 cards;
  • a team then bids how many tricks it will win at least, and which suit is trump (or no trump);
  • play then proceeds to the left, and the team that wins the most tricks, or at least the number it declared, wins the round.

Euchre:

  • four players in two competing partnerships;
  • only 24 cards are in play: the 9s, 10s, Jacks, Queens, Kings and Aces;
  • the first team to score 10 points wins the whole game;
  • the team that wins the most tricks wins the round;
  • five cards are dealt to each player and the rest are put face down in the middle; these cards are used for different purposes depending on the rules you are using. Similarly to Bridge, a trump suit is declared and then the game follows;
  • many say that Euchre is an abstracted version of Bridge, but both games are very enjoyable to play!

Not many games are as methodical as Bridge, and this makes many Bridge players fanatical in their dedication to the game. The game developed from Whist, in particular from a variant called Russian Whist. Computer Bridge players are not yet challenging to experts, but they offer real-time explanations of what bids mean as well as extensive help files, so they are ideal for beginners and experts alike.

Bridge Best Links

worldbridge.org – World Bridge Federation
acbl.org – the American Contract Bridge League
Cats at Cards – how to play Contract Bridge
Bridge Winners – great information on all things Bridge
Bridge+ this is a great French Bridge Site
Fifth Chair Bridge Foundation for those who need another chair!
Karen’s Bridge Library – well worth a look
The House Of Cards – a great resource for all card games
Wikipedia – the classic resource

Euchre Best Links

Cats at Cards – a very good manual on Euchre rules
The House Of Cards – great Euchre resource
Pagat – a terrific site for lots of card game rules
Euchre – There Is No Other Card Game site
Championship Euchre – for PalmOS PDA’s

Knowledge is in need of scripting

These days, in pursuit of certainty, I have met many academic gurus online and offline, directly and indirectly, through textbooks, MOOCs, or live classrooms here at IITM.

So this knowledge I have been gaining must be penned down …else, due to my bounded resources and capabilities, I am highly prone to forgetfulness.

A few courses I have taken at IITM

AI: Artificial Intelligence By Prof. Deepak Khemani

NLP: Natural Language Processing By Prof. Sutanu Chakraborti

FunDaS: Algorithmic foundations of Data Science By Prof. John Augustine

CA: Computer Architecture By Prof. Chandra Shekar

MCCS: Mathematical Concepts for Computer Science By Prof. Raghavendra Rao and Prof. Jayalal Sarma

ADSA: Advanced Data Structures and Algorithms by Prof. John Augustine and Prof. Meghana Nasre

APL: Advanced Programming Lab by Prof. John Augustine and Prof. Meghana Nasre

KRR: Knowledge Representation and Reasoning. by Prof. Deepak Khemani

PnCS: Planning and Constraint Satisfaction by Prof. Deepak Khemani and Prof. Narayana Swami.

TAO: Theory and Applications of Ontologies by Prof. Sreenivasa Kumar.

MBR: Memory Based Reasoning in AI by Prof. Sutanu Chakraborti.

Few Courses online

Society of Mind by Prof. Marvin Minsky.

Artificial Intelligence by Patrick Winston.

Intro to Logic by M. Genesereth

General Game Playing by M. Genesereth

KBAI: Knowledge-Based AI by Prof. Ashok Goel

Game Theory by

NLP by Jurafsky

Algorithmic Game Theory