{ "cells": [ { "cell_type": "markdown", "id": "ef7d397e", "metadata": {}, "source": [ "# 5) Quantal response equilibria\n", "\n", "Gambit implements the idea of [McKPal95](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts) and [McKPal98](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts) to compute Nash equilibria via path-following a branch of the logit quantal response equilibrium (LQRE) correspondence using the function `logit_solve`.\n", "As an example, we will consider an asymmetric matching pennies game from [Och95](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts) as analyzed in [McKPal95](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts)." ] }, { "cell_type": "code", "execution_count": 1, "id": "ebc4c60e", "metadata": {}, "outputs": [], "source": [ "import pygambit as gbt" ] }, { "cell_type": "code", "execution_count": 2, "id": "202786ef", "metadata": {}, "outputs": [ { "data": { "text/latex": [ "$\\left[[0.5000000234106035, 0.49999997658939654],[0.19998563837426647, 0.8000143616257336]\\right]$" ], "text/plain": [ "[[0.5000000234106035, 0.49999997658939654], [0.19998563837426647, 0.8000143616257336]]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g = gbt.Game.from_arrays(\n", " [[1.1141, 0], [0, 0.2785]],\n", " [[0, 1.1141], [1.1141, 0]],\n", " title=\"Ochs (1995) asymmetric matching pennies as transformed in McKelvey-Palfrey (1995)\"\n", ")\n", "gbt.nash.logit_solve(g).equilibria[0]" ] }, { "cell_type": "markdown", "id": "1ce76964", "metadata": {}, "source": [ "`logit_solve` returns only the limiting (approximate) Nash equilibrium found.\n", "Profiles along the QRE correspondence are frequently of interest in their own right.\n", "Gambit offers several functions for more detailed examination of branches of the QRE 
correspondence.\n", "\n", "The function `logit_solve_branch` uses the same procedure as `logit_solve`, but returns a list of LQRE profiles computed along the branch instead of just the limiting approximate Nash equilibrium." ] }, { "cell_type": "code", "execution_count": 3, "id": "840d9203", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "193" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "qres = gbt.qre.logit_solve_branch(g)\n", "len(qres)" ] }, { "cell_type": "code", "execution_count": 4, "id": "be419db2", "metadata": {}, "outputs": [ { "data": { "text/latex": [ "$\\left[[0.5, 0.5],[0.5, 0.5]\\right]$" ], "text/plain": [ "[[0.5, 0.5], [0.5, 0.5]]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "qres[0].profile" ] }, { "cell_type": "code", "execution_count": 5, "id": "582838de", "metadata": {}, "outputs": [ { "data": { "text/latex": [ "$\\left[[0.5182276540742868, 0.4817723459257562],[0.49821668880066783, 0.5017833111993909]\\right]$" ], "text/plain": [ "[[0.5182276540742868, 0.4817723459257562], [0.49821668880066783, 0.5017833111993909]]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "qres[5].profile" ] }, { "cell_type": "markdown", "id": "61e86949", "metadata": {}, "source": [ "`logit_solve_branch` uses an adaptive step size heuristic to find points on the branch.\n", "The parameters `first_step` and `max_accel` are used to adjust the initial step size and the maximum rate at which the step size changes adaptively.\n", "The step size used is computed as the distance traveled along the path, and, importantly, not the distance as measured by changes in the precision parameter lambda.\n", "As a result the lambda values for which profiles are computed cannot be controlled in advance.\n", "\n", "In some situations, the LQRE profiles at specified values of lambda are of interest.\n", "For this, Gambit provides 
`logit_solve_lambda`.\n", "This function provides accurate values of strategy profiles at one or more specified values of lambda." ] }, { "cell_type": "code", "execution_count": 6, "id": "ce354b49", "metadata": {}, "outputs": [ { "data": { "text/latex": [ "$\\left[[0.5867840364385154, 0.4132159635614846],[0.4518070316997103, 0.5481929683002897]\\right]$" ], "text/plain": [ "[[0.5867840364385154, 0.4132159635614846], [0.4518070316997103, 0.5481929683002897]]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "qres = gbt.qre.logit_solve_lambda(g, lam=[1, 2, 3])\n", "qres[0].profile" ] }, { "cell_type": "code", "execution_count": 7, "id": "280fa428", "metadata": {}, "outputs": [ { "data": { "text/latex": [ "$\\left[[0.6175219458400859, 0.3824780541599141],[0.3719816648492249, 0.6280183351507751]\\right]$" ], "text/plain": [ "[[0.6175219458400859, 0.3824780541599141], [0.3719816648492249, 0.6280183351507751]]" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "qres[1].profile" ] }, { "cell_type": "code", "execution_count": 8, "id": "3dee57df", "metadata": {}, "outputs": [ { "data": { "text/latex": [ "$\\left[[0.6168968501329284, 0.3831031498670716],[0.31401636202001226, 0.6859836379799877]\\right]$" ], "text/plain": [ "[[0.6168968501329284, 0.3831031498670716], [0.31401636202001226, 0.6859836379799877]]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "qres[2].profile" ] }, { "cell_type": "markdown", "id": "5601be33", "metadata": {}, "source": [ "LQRE are frequently taken to data by using maximum likelihood estimation to find the LQRE profile that best fits an observed profile of play.\n", "This is provided by the function `logit_estimate`.\n", "We replicate the analysis of a block of the data from [Och95](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts) for which 
[McKPal95](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts) estimated an LQRE." ] }, { "cell_type": "code", "execution_count": 9, "id": "b34a9278", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "pygambit.qre.LogitQREMixedStrategyFitResult" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = g.mixed_strategy_profile([[128*0.527, 128*(1-0.527)], [128*0.366, 128*(1-0.366)]])\n", "fit = gbt.qre.logit_estimate(data)\n", "type(fit)" ] }, { "cell_type": "markdown", "id": "12534924", "metadata": {}, "source": [ "The returned `LogitQREMixedStrategyFitResult` object contains the results of the estimation.\n", "The results replicate those reported in [McKPal95](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts), including the estimated value of lambda, the QRE profile probabilities, and the log-likelihood.\n", "\n", "Because `data` contains the empirical counts of play, not just frequencies, the resulting log-likelihood is correct for use in likelihood-ratio tests.\n", "[[1](#f1)]" ] }, { "cell_type": "code", "execution_count": 10, "id": "e10e9abd", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1.8456097536855862\n", "[[0.615651314427859, 0.3843486855721409], [0.38329094004562914, 0.6167090599543709]]\n", "-174.76453191087447\n" ] } ], "source": [ "print(fit.lam)\n", "print(fit.profile)\n", "print(fit.log_like)" ] }, { "cell_type": "markdown", "id": "0316795f", "metadata": {}, "source": [ "All of the functions above also support working with the agent LQRE of [McKPal98](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts).\n", "Agent QRE are computed by default whenever the game has an extensive (tree) representation.\n", "\n", "For `logit_solve`, `logit_solve_branch`, and `logit_solve_lambda`, this can be overridden by 
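an optional argument.\n" ] }, { "cell_type": "markdown", "id": "agent-qre-note", "metadata": {}, "source": [ "The following sketch illustrates the default on a game with a tree representation.\n", "The game is an invented two-stage example, and the construction uses pygambit's tree-building calls (`Game.new_tree`, `append_move`, `add_outcome`, `set_outcome`); treat it as a sketch rather than a definitive recipe.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "agent-qre-code", "metadata": {}, "outputs": [], "source": [ "import pygambit as gbt\n", "\n", "# An invented two-stage game: Alice moves first, then Bob observes and moves.\n", "t = gbt.Game.new_tree(players=[\"Alice\", \"Bob\"], title=\"Illustrative two-stage game\")\n", "t.append_move(t.root, \"Alice\", [\"L\", \"R\"])\n", "for node in t.root.children:\n", "    t.append_move(node, \"Bob\", [\"l\", \"r\"])\n", "leaves = [c for node in t.root.children for c in node.children]\n", "for leaf, pay in zip(leaves, [[2, 1], [0, 0], [0, 0], [1, 2]]):\n", "    t.set_outcome(leaf, t.add_outcome(pay))\n", "\n", "# Default for a game tree: agent LQRE, in behavior strategies.\n", "agent = gbt.nash.logit_solve(t).equilibria[0]\n", "# LQRE on the reduced strategic form instead:\n", "strategic = gbt.nash.logit_solve(t, use_strategic=True).equilibria[0]\n", "agent, strategic" ] }, { "cell_type": "markdown", "id": "use-strategic-note", "metadata": {}, "source": [ "As the sketch suggests, the override is requested by 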
passing `use_strategic=True`;\n", "this will compute LQRE using the reduced strategy set of the game instead.\n", "\n", "Likewise, `logit_estimate` will perform estimation using agent LQRE if the data are passed as a `MixedBehaviorProfile`, and will return a `LogitQREMixedBehaviorFitResult` object." ] }, { "cell_type": "markdown", "id": "486f68a7", "metadata": {}, "source": [ "**Footnotes:**\n", "\n", "1. <a id=\"f1\"></a>The log-likelihoods quoted in [McKPal95](https://gambitproject.readthedocs.io/en/latest/biblio.html#general-game-theory-articles-and-texts) are exactly a factor of 10 larger than those obtained by replicating the calculation." ] } ], "metadata": { "kernelspec": { "display_name": "gambitvenv313", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.13.5" } }, "nbformat": 4, "nbformat_minor": 5 }