Course #3: Schema and Child Concept Acquisition

Intuition: concept nodes are initially sparse; as knowledge is added, the nodes grow closer together and act as semi-symbolic concepts.

Course: Schemas in Problem solving

Instructor: Sandra P Marshall

Lec1: Fundamentals

  • Schema roots
  • The nature of schema
  • The schemas of arithmetic story problems

Lec2: Schemas and Instruction

  • Theoretical issues for instruction
  • The story problem solver
  • The problem-solving environment

Lec3: Learning from Instruction

  • Learning and schema theory
  • Learning from schema-based instruction
  • The acquisition of planning knowledge

Lec4: Schemas and Assessment

  • Schema-based assessment
  • Assessment in SPE and PSE

Lec5: Schemas and Models

  • Rule-based production systems.
  • Neural networks
  • Hybrid models
  • The performance model
  • The Learning model
  • The full schema model

Coda

Very interesting and very insightful, and a good start for understanding how humans acquire knowledge. Moreover, this theory also gives insight into how logic and probability bring two schools of thought (classical and modern AI) together.


Games with Imperfect Information

Part 1, Chapter 3:


3.1 Varieties of knowledge in games

  • Perfect information
  • Imperfect information

3.2 Imperfect information games at a glance

3.3 Modal-epistemic logic

  • Process graphs with uncertainty
  • Modal epistemic language
  • Iterations and group knowledge
  • Uniform strategies and non-determinacy

3.4 Correspondence for logical axioms

  • Correspondence analysis of special axioms
  • General logical methods

3.5 C

Probabilistic Logic and the Monty Hall Game

Consider a single-player game with a nugget hidden under one of three upside-down cups, and suppose (without loss of generality) that the player chooses cup 1. Now the game maker lifts cup 2, shows the player that it is empty, and gives him the option to either change his earlier choice or stick to it.
[Figure: Monty.png]
We want to know if the player can gain from switching his choice to cup 3. So we have to compare the probability of choosing the nugget by
staying with cup 1, and the probability of choosing the nugget by switching to cup 3.
Let E1 be the event “nugget is under cup 1,” let E2 be the event “nugget is under cup 3,” and let E3 be the event “game maker lifts cup 2 and it is empty.”
Since, from the player’s initial reasoning, the nugget is equally likely to be under any of the three cups, Pr(E1) = Pr(E2) = 1/3.
We want to know Pr(E1|E3) and Pr(E2|E3).
According to Bayes’ rule:
Pr(E1|E3) = Pr(E3|E1) · Pr(E1) / Pr(E3) = 1/3 × Pr(E3|E1) / Pr(E3)
Pr(E2|E3) = Pr(E3|E2) · Pr(E2) / Pr(E3) = 1/3 × Pr(E3|E2) / Pr(E3)
First, suppose the game maker randomly picks one of the remaining cups. The probability that he picks cup 2 is then 1/2. The probability that there is no nugget under any given cup is 1 − 1/3 = 2/3, so the probability that the game maker picks cup 2 and it is empty is Pr(E3) = 1/2 × 2/3 = 1/3,
because the two events are independent by our assumption that the game maker picks randomly.
We now need to determine Pr(E3|E1) and Pr(E3|E2). If the nugget is under cup 1, then cup 2 is certainly empty, and hence the probability that the game maker picks cup 2 and cup 2 is empty equals the probability that the game maker picks cup 2: Pr(E3|E1) = 1/2.
Similarly, if the nugget is under cup 3, then cup 2 is also certainly empty, yielding Pr(E3|E2) = 1/2.
Putting all this together yields:
Pr(E1|E3) = (1/3 × 1/2) / (1/3) = 1/2
Pr(E2|E3) = (1/3 × 1/2) / (1/3) = 1/2
Hence, the player cannot gain from switching. This is not surprising: if the game maker makes a random choice, his action reveals nothing.
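The arithmetic for the random-host case can be checked mechanically. A minimal sketch in Python (the variable names are my own; the events E1, E2, E3 are as defined above), using exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

# Random-host case: the game maker lifts one of the two remaining cups
# uniformly at random, regardless of where the nugget is.
p_e1 = Fraction(1, 3)            # Pr(E1): nugget under cup 1
p_e2 = Fraction(1, 3)            # Pr(E2): nugget under cup 3
p_e3_given_e1 = Fraction(1, 2)   # cup 2 surely empty; picked half the time
p_e3_given_e2 = Fraction(1, 2)   # same reasoning with the nugget under cup 3
p_e3 = Fraction(1, 2) * Fraction(2, 3)  # pick cup 2 AND it is empty (independent)

p_e1_given_e3 = p_e3_given_e1 * p_e1 / p_e3  # Bayes' rule
p_e2_given_e3 = p_e3_given_e2 * p_e2 / p_e3

print(p_e1_given_e3, p_e2_given_e3)  # both 1/2: switching does not help
```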

We realize that there is another case we should deal with, which is an example of strategic thinking.

Suppose now that the game maker would never pick a cup with a nugget under it (much more likely in a real Monty Hall show). As before, Pr(E3|E1) = 1/2 because if the nugget is under cup 1, then cups 2 and 3 are both certainly empty, and so the game maker will pick randomly between them. However, Pr(E3|E2) = 1 because if the nugget is under cup 3, then cup 2 is certainly empty, but since the game maker never reveals the nugget (and cannot pick cup 1 because the player already picked it), he will certainly pick cup 2. Finally, because the game maker will never pick a cup with a nugget under it, Pr(E3) = 1/2.
You can use the total probability theorem to obtain this number.
Let “2” denote the event “the nugget is under cup 2.” Then:
Pr(E3) = Pr(E3|E1)Pr(E1) + Pr(E3|2) Pr(2) + Pr(E3|E2) Pr(E2) = 1/2×1/3 + 0×1/3 + 1 × 1/3 = 1/6 + 1/3 = 1/2.
We used the fact that the probability of the event “Game maker picks cup 2 and it is empty” is zero if there is a nugget under that cup: Pr(E3|2) = 0.
This now yields:
Pr(E1|E3) = (1/2×1/3)/( 1/2) = 1/3
Pr(E2|E3) = (1×1/3 )/(1/2) = 2/3.
Since Pr(E2|E3) > Pr(E1|E3), the player should definitely switch.
The reason is that the game maker’s informed choice is a public announcement that reveals additional information, which the player should incorporate into his beliefs.
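A quick Monte Carlo check of the informed-host case, as a sketch under the text’s assumptions (the player takes cup 1, and the host lifts an empty cup that is neither cup 1 nor the nugget’s cup; the simulation and its helper `play` are my own):

```python
import random

def play(switch, rng):
    """One round: player takes cup 1; informed host lifts an empty cup."""
    nugget = rng.randrange(1, 4)  # nugget equally likely under cups 1-3
    # Host lifts an empty cup other than cup 1; if cups 2 and 3 are both
    # empty, the shuffled order makes his pick random between them.
    host = next(c for c in rng.sample([2, 3], 2) if c != nugget)
    final = ({1, 2, 3} - {1, host}).pop() if switch else 1
    return final == nugget

rng = random.Random(0)  # fixed seed for reproducibility
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
switch = sum(play(True, rng) for _ in range(n)) / n
print(f"stay ~ {stay:.3f}, switch ~ {switch:.3f}")  # close to 1/3 and 2/3
```

The frequencies land near the derived posteriors Pr(E1|E3) = 1/3 and Pr(E2|E3) = 2/3, confirming that switching roughly doubles the player’s chance.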

Course: Epistemic Game Theory

People: Andres Perea

  1. Introduction
  2. Part I: Standard beliefs in static games
    1. Belief in the opponent’s rationality
    2. Common belief in rationality
    3. Simple belief hierarchies
  3. Part II: Lexicographic beliefs in static games
    1. Primary belief in the opponent’s rationality
    2. Respecting the opponent’s preferences
    3. Assuming the opponent’s rationality
  4. Part III: Conditional beliefs in dynamic games
    1. Belief in the opponents’ future rationality
    2. Strong belief in the opponent’s rationality


Practical and theoretical problems in every chapter.


MAS and Distributed AI

Multiagent Systems and Distributed Artificial Intelligence
People : Nikos Vlassis


Keywords:

Multiagent Systems, Distributed Artificial Intelligence, Game Theory, Decision Making or Reasoning under Uncertainty, Coordination, Knowledge and Information, Mechanism Design, Reinforcement Learning.
Intro:
MAS is an expanding field that blends classical areas like game theory and
decentralized control with modern fields like computer science and machine learning.
This 7-lecture course provides a concise introduction to the subject, covering the theoretical foundations as well as more recent developments.
Note: An intelligent agent is a decision maker, reasoner, or problem solver.
Lecture 1 is a short introduction to the field of multiagent systems. Lecture 2 covers the basic theory of single-agent decision making under uncertainty.
Lecture 3 is a brief introduction to game theory, explaining classical concepts like Nash equilibrium.
Lecture 4 deals with the fundamental problem of coordinating a team of collaborative agents.
Lecture 5 studies the problem of multiagent reasoning and decision making under partial observability.
Lecture 6 focuses on the design of protocols that are stable against manipulations by self-interested agents.
Lecture 7 provides a short introduction to the rapidly expanding field of multiagent reinforcement learning.