I hold a PhD in Economics from the Toulouse School of Economics. I am also a research fellow at the Center for Research in Economics and Statistics (CREST). From 2017 to 2019, I coordinated the statistics and mathematics teaching at ENSAE. You can find my CV here.
My research focuses on Econometrics, Statistics, and Machine Learning.
I also have interests in Labor Economics and Political Science.
You can find here the handout for the course Machine Learning for Econometrics. A new version is coming soon!
Keywords: Partially Linear Model; Data combination; Partial Identification; Intergenerational Mobility.
We consider the identification of and inference on a partially linear model when the outcome of interest and some of the covariates are observed in two different datasets that cannot be linked. This type of data combination problem arises very frequently in empirical microeconomics. Using recent tools from optimal transport theory, we derive a constructive characterization of the sharp identified set. We then build on this result to develop a novel inference method that exploits the specific geometric properties of the identified set. Our method exhibits good performance in finite samples while remaining very tractable. Finally, we apply our methodology to study intergenerational income mobility over the period 1850-1930 in the United States. Our method allows us to relax the exclusion restrictions used in earlier work while delivering informative confidence regions.
Designing Labor Market Recommender Systems: the Importance of Job Seeker Preferences and Competition, with Victor Alfonso Naya (LISN), Guillaume Bied (CREST-LISN), Philippe Caillou (LISN), Bruno Crépon (CREST), Elia Perennes (CREST) and Michele Sebag (LISN). New version coming soon!
Keywords: Recommender systems, Matching, Congestion, Optimal Transport.
We examine the properties of a recommender algorithm currently under construction at the Public Employment Service (PES) in France, before its implementation in the field. The algorithm assigns each offer-job seeker pair a predicted "matching probability" using a very large set of covariates. We first compare this new AI algorithm with a matching tool mimicking the one currently used at the PES, which is based on a score measuring the "proximity" between the job seeker's profile or preferences and the characteristics of the offer. We detail and discuss the trade-off between matching probability and preference score when switching from one system to the other. We also examine the issue of congestion. We show, on the one hand, that the AI algorithm tends to increase congestion and, on the other hand, that this strongly reduces its performance. Finally, we show that using optimal transport to derive recommendations from the matching probability matrix mitigates this problem significantly. The main lesson at this stage is that an algorithm ignoring preferences and competition in the labor market would have very limited performance, but that tweaking the algorithm to account for these dimensions substantially improves its properties, at least "in the lab".
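The congestion problem and the optimal-transport rebalancing can be sketched with a toy entropic optimal transport (Sinkhorn) computation. This is only an illustration of the general idea, not the PES algorithm: the matrix, the uniform capacity constraints, and the regularization parameter below are all made-up assumptions.

```python
import numpy as np

def sinkhorn_plan(P, eps=0.1, n_iter=1000):
    """Entropic optimal transport: turn a matching-probability matrix P
    (job seekers x offers) into a transport plan whose marginals match
    capacity constraints r (seekers) and c (offers)."""
    n, m = P.shape
    r = np.full(n, 1.0 / n)          # each seeker gets the same budget
    c = np.full(m, 1.0 / m)          # each offer has the same capacity
    K = np.exp(P / eps)              # reward kernel (higher P = preferred)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):          # alternating marginal rescalings
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
P = rng.uniform(size=(6, 6))
P[:, 0] += 1.0                       # one offer dominates all others
naive = P.argmax(axis=1)             # argmax: everyone sent to offer 0
plan = sinkhorn_plan(P)
balanced = plan.argmax(axis=1)       # recommendations after rebalancing
```

Here `naive` concentrates all six job seekers on the single dominant offer, while the transport plan's capacity constraints force recommendations to spread across offers. The actual recommender operates at a much larger scale and with richer constraints.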
Keywords: Random Coefficients, Quasi-analyticity, Deconvolution, Identification.
Summary: This paper studies point identification of the distribution of the coefficients in some random coefficients models with exogenous regressors when the support of the regressors is a proper subset, possibly discrete but countable. We exhibit trade-offs between restrictions on the distribution of the random coefficients and the support of the regressors. We consider linear models including those with nonlinear transforms of a baseline regressor, with an infinite number of regressors and deconvolution, the binary choice model, and panel data models such as single-index panel data models and an extension of the Kotlarski lemma.
Rationalizing Rational Expectations? Characterization and Tests, with Xavier D’Haultfoeuille (CREST) and Arnaud Maurel (Duke University), Quantitative Economics, 12 (3): 817-842 (2021).
Keywords: Rational expectations, Test, Subjective expectations, Data combination.
Summary: In this paper, we build a new test of rational expectations based on the marginal distributions of realizations and subjective beliefs. This test is widely applicable, including in the common situation where realizations and beliefs are observed in two different datasets that cannot be matched. We show that whether one can rationalize rational expectations is equivalent to the distribution of realizations being a mean-preserving spread of the distribution of beliefs. The null hypothesis can then be rewritten as a system of many moment inequality and equality constraints, for which tests have been recently developed in the literature. The test is robust to measurement errors under some restrictions and can be extended to account for aggregate shocks. Finally, we apply our methodology to test for rational expectations about future earnings. While individuals tend to be right on average about their future earnings, our test strongly rejects rational expectations.
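The mean-preserving-spread characterization can be checked on samples via the convex order: realizations Y can rationalize rational expectations for beliefs B only if E[Y] = E[B] and E[max(Y - t, 0)] >= E[max(B - t, 0)] for every threshold t. Below is a minimal numerical sketch of this condition, not the paper's moment-inequality test (which accounts for sampling uncertainty); the data are made up.

```python
import numpy as np

def is_mps(realizations, beliefs, tol=1e-8):
    """Empirical check that `realizations` is a mean-preserving spread of
    `beliefs`: equal means, and dominance of the stop-loss transform
    E[max(X - t, 0)] at every kink of the piecewise-linear transforms."""
    y = np.asarray(realizations, dtype=float)
    b = np.asarray(beliefs, dtype=float)
    if abs(y.mean() - b.mean()) > tol:
        return False
    for t in np.union1d(y, b):       # checking at support points suffices
        if np.maximum(y - t, 0).mean() < np.maximum(b - t, 0).mean() - tol:
            return False
    return True

beliefs = np.zeros(4)                        # everyone predicts 0
spread = np.array([-2.0, -1.0, 1.0, 2.0])    # same mean, more dispersed
biased = np.ones(4)                          # realizations shifted upward

ok = is_mps(spread, beliefs)    # consistent with rational expectations
bad = is_mps(biased, beliefs)   # inconsistent: the means differ
```

With real data these inequalities hold only up to sampling error, which is exactly what the moment inequality and equality machinery described above handles.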
R Package: RationalExp. This package implements a test of the rational expectations hypothesis based on the marginal distributions of realizations and subjective beliefs. The package also computes the estimator of the minimal deviations from rational expectations that can be rationalized by the data. R and the package RationalExp are open-source software projects and can be freely downloaded from CRAN: http://cran.r-project.org
Adaptive estimation in the linear random coefficients model when regressors have limited variation, with Eric Gautier (TSE), Bernoulli, 28 (1): 504-524, February 2022.
Keywords: Adaptation, Ill-posed Inverse Problem, Minimax, Random Coefficients.
Summary: We consider a linear model where the coefficients - intercept and slopes - are random with a distribution in a nonparametric class and independent of the regressors. The main drawback of this model is that identification usually requires the regressors to have a support which is the whole space. This is rarely satisfied in practice. Rather, in this paper, the regressors can have a support which is a proper subset. This is possible by assuming that the slopes do not have heavy tails. Lower bounds on the supremum risk for the estimation of the joint density of the random coefficients are derived for this model and a related white noise model. We present an estimator, its rates of convergence, and a data-driven rule which delivers adaptive estimators.
R Package: RandomCoefficients. This package implements the estimator proposed in Gaillac and Gautier (2019), which relies on prolate spheroidal wave functions; these are computed efficiently in RandomCoefficients following Osipov, Rokhlin, and Xiao (2013). The package also provides a parallel implementation of the estimator.
Estimates for the SVD of the truncated Fourier transform on L2(exp(b|.|)) and stable analytic continuation, with Eric Gautier (TSE), Journal of Fourier Analysis and Applications (2021) 27:72.
Keywords: Analytic continuation, Nonbandlimited functions, Heavy tails, Uniform estimates, Extrapolation, Singular value decomposition, Truncated Fourier transform, Singular Sturm Liouville Equations, Superresolution.
Summary: The Fourier transform truncated on [-c,c] is usually analyzed when acting on L2(-1/b,1/b) and its right-singular vectors are the prolate spheroidal wave functions. This paper considers the operator acting on the larger space L2(exp(b|.|)) on which it remains injective. We give nonasymptotic upper and lower bounds on the singular values with similar qualitative behavior in m (the index), b, and c. The lower bounds are used to obtain rates of convergence for stable analytic continuation of possibly nonbandlimited functions whose Fourier transform belongs to L2(exp(b|.|)). We also derive bounds on the sup-norm of the singular functions. Finally, we propose a numerical method to compute the SVD and apply it to stable analytic continuation when the function is observed with error on an interval.
OTHER SOFTWARE PROGRAMS
Stata program mfelogit and the accompanying vignette, with Xavier D’Haultfoeuille (CREST), Laurent Davezies (CREST), and Louise Laage (Georgetown University), associated with their paper Identification and Estimation of Average Marginal Effects in Fixed Effect Logit Models.
Install it by typing: ssc install mfelogit
Keywords: Fixed effects logit models, Panel Data, Partial Identification.
mfelogit implements the estimators of the sharp bounds on the AME and the related confidence intervals on the AME and ATE from Davezies et al. (DDL hereafter). It also implements the second method proposed in DDL, which is faster to compute but may result in larger confidence intervals. When the covariate is binary, the command computes the ATE; otherwise it computes the AME.
R Package MarginalFElogit and the accompanying vignette, with Xavier D’Haultfoeuille (CREST), Laurent Davezies (CREST), and Louise Laage (Georgetown University), associated with their paper Identification and Estimation of Average Marginal Effects in Fixed Effect Logit Models.
Keywords: Fixed effects logit models, Panel Data, Partial Identification.
This package implements the estimators of the sharp bounds on the AME and the related confidence intervals on the AME and ATE from Davezies et al. (DDL hereafter). It also implements the second method proposed in DDL, which is faster to compute but may result in larger confidence intervals. When the covariate is binary, the package computes the ATE; otherwise it computes the AME.
HANDOUTS & BOOKS
You can now download a copy of the handout: link
Keywords: High-Dimension, Variable Selection, Post-Selection Inference, Methodology, Endogeneity, Synthetic Control Method, Heterogeneous Treatment Effects, Policy Evaluation, Text Data.
This course covers recent applications of high-dimensional statistics and machine learning to econometrics, including variable selection, inference with high-dimensional nuisance parameters in different settings, heterogeneity, networks and text data. The focus will be on policy evaluation problems. Recent advances in causal inference such as the synthetic controls method will be reviewed.
The goal of the course is to give insight into these new methods, their benefits, and their limitations. It will mostly benefit students who are highly curious about recent advances in econometrics, whether they want to study the theory or use these methods in applied work. Students are expected to be familiar with Econometrics 2 (2A) and Statistical Learning (3A).
In 2020, the outline was:
High-Dimension, Variable Selection and Post-Selection Inference
Methodology: Using Machine Learning Tools in Econometrics
High-Dimension and Endogeneity
The Synthetic Control Method
Machine Learning Methods for Heterogeneous Treatment Effects
Prediction Policy Problems
Fairness and optimal treatment allocation
Machine Learning for Econometrics (2018, 2019, 2020), ENSAE Paris and Institut Polytechnique de Paris (previously "High-Dimensional Econometrics"), joint with Jérémy L'hour (INSEE, CREST) and Bruno Crépon (CREST).
Mathematics for Economists (Analysis and Optimisation) (2018), Master in Economics, Paris-Saclay University, PhD track
Mathematics for Economists (2017, 2018), Sciences-Po Paris, PhD track
Mathematics for Economists (2018), ENSAE Paris, Specialised Master
Statistics 1 (2017-2018), ENSAE Paris, Nicolas Chopin
Numerical Analysis (2016-2018), ENSAE Paris, Cristina Butucea
Econometrics 2, (2017-2018), ENSAE Paris, Xavier D’Haultfoeuille
Simulations and Monte-Carlo (2018), ENSAE Paris, Nicolas Chopin
Time Series Analysis (2015-2017), ENSAE Paris, Christian Francq