Hello, this is
Aurghya Maiti (Rana).
Solving problems by day, chasing passions by night!
#Bio
I am a Ph.D. student in the Computer Science department at Columbia University, advised by Prof. Elias Bareinboim. My current research focuses on decision-making in multi-agent systems with causal knowledge.
Previously, I received my B.Tech in Computer Science from the Indian Institute of Technology Kharagpur, where I worked with Prof. Niloy Ganguly and Prof. Sourangshu Bhattacharya.
I also worked as a Research Associate at Adobe Research, collaborating with Gaurav Sinha and Atanu Sinha on causal bandits and marketing.
Research: I work at the intersection of causal inference and game theory.
#Selected Publications (see the full list or Google Scholar)
1. Counterfactual Rationality: A Causal Approach to Game Theory
Aurghya Maiti, Prateek Jain, Elias Bareinboim
Under Review
| Paper
The tension between rational and irrational behaviors in human decision-making has been acknowledged across a wide range of disciplines, from philosophy to psychology, neuroscience to behavioral economics. Models of multi-agent interactions, such as von Neumann and Morgenstern’s expected utility theory and Nash’s game theory, provide rigorous mathematical frameworks for how agents should behave when rationality is sought. However, the rationality assumption has been extensively challenged, as human decision-making is often irrational, influenced by biases, emotions, and uncertainty, which may even have a positive effect in certain cases. Behavioral economics, for example, attempts to explain such deviations from rationality through frameworks such as Kahneman’s dual-process theory and Thaler’s nudging concept. In this paper, we analyze this tension through a causal lens and develop a framework that accounts for both rational and irrational decision-making, which we term Causal Game Theory. We then introduce a novel notion called counterfactual rationality, which allows agents to make choices leveraging their irrational tendencies. We extend the notion of Nash Equilibrium to counterfactual actions and the Pearl Causal Hierarchy (PCH), and show that strategies following counterfactual rationality dominate strategies based on standard game theory. We further develop an algorithm to learn such strategies when not all information about other agents is available.
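For context on the standard game-theoretic notion the paper generalizes, here is a minimal, illustrative sketch of checking pure-strategy Nash equilibria in a two-player normal-form game. The payoff matrices (a Prisoner's Dilemma) and the function name are hypothetical examples, not the paper's framework:

```python
import itertools

def pure_nash_equilibria(payoffs_a, payoffs_b):
    """Return all pure-strategy Nash equilibria of a two-player game.

    payoffs_a[i][j] / payoffs_b[i][j]: row / column player's payoff
    when the row player picks action i and the column player picks j.
    """
    n_rows, n_cols = len(payoffs_a), len(payoffs_a[0])
    equilibria = []
    for i, j in itertools.product(range(n_rows), range(n_cols)):
        # Row player cannot gain by deviating from i, given column plays j
        row_best = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in range(n_rows))
        # Column player cannot gain by deviating from j, given row plays i
        col_best = all(payoffs_b[i][j] >= payoffs_b[i][l] for l in range(n_cols))
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

# Hypothetical Prisoner's Dilemma: action 0 = cooperate, 1 = defect
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs
print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection
```

Mutual defection is the unique Nash equilibrium here even though mutual cooperation pays both players more; the paper's counterfactual extension targets exactly this kind of gap between equilibrium play and better outcomes.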
2. Counterfactual Identification Under Monotonicity Constraints
Aurghya Maiti, Drago Plecko, Elias Bareinboim
The Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25)
Reasoning with counterfactuals is one of the hallmarks of human cognition, involved in various tasks such as explanation, credit assignment, blame, and responsibility. Counterfactual quantities that are not identifiable in the general non-parametric case may be identified under shape constraints on the functional mechanisms, such as monotonicity. One prominent example of such an approach is the celebrated result by Angrist and Imbens on identifying the Local Average Treatment Effect (LATE) in the instrumental variable setting. In this paper, we study the identification problem of more general settings under monotonicity constraints. We begin by proving the monotonicity reduction lemma, which simplifies counterfactual queries using monotonicity assumptions and facilitates the reduction of a larger class of these queries to interventional quantities. We then extend the existing identification results on Probabilities of Causation (PoCs) and LATE to a broader set of queries and graphs. Finally, we develop an algorithm, M-ID, for identifying arbitrary counterfactual queries from combinations of observational and experimental data, which takes as input a causal diagram with monotonicity constraints. We show that M-ID subsumes the previously known identification results in the literature. We demonstrate the applicability of our results using synthetic and real data.
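For context, the Angrist–Imbens result referenced above identifies the Local Average Treatment Effect from a binary instrument Z via the Wald ratio, which is valid under the monotonicity (no-defiers) assumption X(z=1) ≥ X(z=0):

LATE = E[Y(1) − Y(0) | compliers] = (E[Y | Z=1] − E[Y | Z=0]) / (E[X | Z=1] − E[X | Z=0])

The paper's monotonicity reduction lemma extends this style of argument beyond the instrumental variable setting to a broader class of counterfactual queries and graphs.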
3. A causal bandit approach to learning good atomic interventions in presence of unobserved confounders
Aurghya Maiti, Vineet Nair, Gaurav Sinha
Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022)
| Paper
We study the problem of determining the best atomic intervention in a Causal Bayesian Network (CBN) specified only by its causal graph. We model this as a stochastic multi-armed bandit (MAB) problem with side-information, where interventions on the CBN correspond to arms of the bandit instance. First, we propose a simple regret minimization algorithm that takes as input a causal graph with observable and unobservable nodes and in T exploration rounds achieves O(√(m(C)/T)) expected simple regret. Here m(C) is a parameter dependent on the input CBN C and could be much smaller than the number of arms. We also show that this is almost optimal for CBNs whose causal graphs have an n-ary tree structure. Next, we propose a cumulative regret minimization algorithm that takes as input a causal graph with observable nodes and performs better than the optimal MAB algorithms that do not use causal side-information. We experimentally compare both our algorithms with the best known algorithms in the literature.
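To make the simple-regret setting concrete, here is a naive non-causal baseline: explore all arms uniformly for T rounds, then recommend the arm with the highest empirical mean. The paper's contribution is, roughly, that causal side-information lets exploration focus on an effective number of arms m(C) instead of all of them; the sketch below is only the generic baseline, with made-up Bernoulli arm means:

```python
import random

def uniform_explore(arm_means, T, rng):
    """Naive simple-regret baseline: pull arms round-robin for T rounds,
    then recommend the arm with the highest empirical mean.

    arm_means: true Bernoulli success probabilities (unknown to the learner;
    used here only to simulate rewards)."""
    n = len(arm_means)
    pulls = [0] * n
    rewards = [0.0] * n
    for t in range(T):
        a = t % n  # round-robin over arms
        pulls[a] += 1
        rewards[a] += 1.0 if rng.random() < arm_means[a] else 0.0
    estimates = [rewards[a] / max(pulls[a], 1) for a in range(n)]
    return max(range(n), key=lambda a: estimates[a])

means = [0.2, 0.5, 0.9, 0.4]  # hypothetical arms; arm 2 is best
best = uniform_explore(means, T=4000, rng=random.Random(0))
# simple regret of the recommendation is max(means) - means[best];
# with enough rounds it concentrates near 0
```

Here each of the N arms receives only ~T/N pulls, which is what drives the √(N/T)-type regret of uniform exploration; replacing N with the potentially much smaller m(C) is the gain from the causal structure.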
#Interests
I am interested in a lot of things.