Christophe Gaillac

I am a Postdoctoral Prize Research Fellow in Economics at Nuffield College, University of Oxford, and an affiliated researcher at CREST.

My research focuses on Econometrics, Statistics, and Machine Learning.

I also have strong interests in Labor Economics.

I am on the 2023-2024 job market.  

I hold a PhD in Economics from the Toulouse School of Economics.

During my PhD, I was coordinator of the statistics and mathematics teaching at ENSAE Paris from 2017 to 2019.

I studied at École Polytechnique (X10; mostly mathematics and physics), at La Sorbonne (political science), and at ENSAE Paris (economics and statistics).

You can find my CV here.  



Jérémy L'hour and I are writing a textbook on machine learning methods for econometrics.

The French version, "Machine Learning pour l'économétrie", published by Economica, is now available! You can find it here or at the usual online bookstores.

The enriched and updated English version will follow very soon.

In the meantime, you can download a previous version of the handout here for the course Machine Learning for Econometrics (ENSAE Paris and Institut Polytechnique de Paris), joint with Jérémy L'hour (CFM, CREST) and Bruno Crépon (CREST).

Keywords: High-Dimension, Variable Selection, Post-Selection Inference, Methodology, Endogeneity, Synthetic Control Method, Heterogeneous Treatment Effects, Policy Evaluation, Text Data

Machine Learning for Econometrics is a book for economists who want to understand modern machine learning techniques, from their predictive power to their revolutionary processing of unstructured data, to infer causal relationships from data.

It covers automatic variable selection in various high-dimensional contexts, estimation of heterogeneous treatment effects, natural language processing (NLP) techniques, synthetic control, and macroeconomic forecasting.

The fundamentals of machine learning methods are presented in such a way as to provide both an in-depth theoretical treatment of their use in econometrics and numerous economic applications. Each chapter includes a series of empirical examples, programs, and exercises to facilitate the reader's adoption and implementation of the techniques.

This book is aimed at Master's and Grandes Ecoles students, researchers, and practitioners who want to understand and perfect their knowledge of machine learning and apply it in a context traditionally reserved for econometrics.


Keywords: Partially Linear Model; Data combination; Partial Identification; Intergenerational Mobility.

R package available on CRAN: RegCombin, with a vignette containing several simulated and real examples.

We study partially linear models when the outcome of interest and some of the covariates are observed in two different datasets that cannot be linked. This type of data combination problem arises very frequently in empirical microeconomics. Using recent tools from optimal transport theory, we derive a constructive characterization of the sharp identified set. We then build on this result and develop a novel inference method that exploits the specific geometric properties of the identified set. Our method exhibits good performance in finite samples while remaining very tractable. We apply our approach to study intergenerational income mobility over the period 1850-1930 in the United States. Our method allows us to relax the exclusion restrictions used in earlier work while delivering informative confidence regions.

Keywords: Adaptation, Ill-posed Inverse Problem, Minimax, Random Coefficients.

R package (archived) available on CRAN: RandomCoefficients, with a vignette.

Summary: We consider a linear model where the coefficients (intercept and slopes) are random, with a distribution in a nonparametric class, and independent of the regressors. The main drawback of this model is that identification usually requires the regressors to have support equal to the whole space, which is rarely satisfied in practice. In this paper, instead, the regressors can have support that is a proper subset. This is possible by assuming that the slopes do not have heavy tails. Lower bounds on the supremum risk for the estimation of the joint density of the random coefficients are derived for this model and a related white noise model. We present an estimator, its rates of convergence, and a data-driven rule which delivers adaptive estimators.

R Package: RandomCoefficients. This package implements the estimator proposed in Gaillac and Gautier (2019), which relies on prolate spheroidal wave functions, computed efficiently in RandomCoefficients following Osipov, Rokhlin, and Xiao (2013). The package also provides a parallel implementation of the estimator.

Keywords: Rational expectations, Test, Subjective expectations, Data combination.

R package available on CRAN: RationalExp, with a vignette.

Summary:  In this paper, we build a new test of rational expectations based on the marginal distributions of realizations and subjective beliefs. This test is widely applicable, including in the common situation where realizations and beliefs are observed in two different datasets that cannot be matched. We show that whether one can rationalize rational expectations is equivalent to the distribution of realizations being a mean-preserving spread of the distribution of beliefs. The null hypothesis can then be rewritten as a system of many moment inequality and equality constraints, for which tests have been recently developed in the literature. The test is robust to measurement errors under some restrictions and can be extended to account for aggregate shocks. Finally, we apply our methodology to test for rational expectations about future earnings. While individuals tend to be right on average about their future earnings, our test strongly rejects rational expectations.

R Package: RationalExp. This package implements a test of the rational expectations hypothesis based on the marginal distributions of realizations and subjective beliefs. The package also computes the estimator of the minimal deviations from rational expectations that can be rationalized by the data. R and the package RationalExp are open-source software projects and can be freely downloaded from CRAN.
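As with the other packages listed on this page, RationalExp can be installed and explored directly from R. A minimal sketch using only base-R commands (the package's own functions are documented in its help pages and vignette):

```r
# Install RationalExp from CRAN and load it
install.packages("RationalExp")
library(RationalExp)

# Browse the package's documentation and vignette,
# which describe the test of rational expectations
help(package = "RationalExp")
vignette(package = "RationalExp")
```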

Keywords: Analytic continuation, Nonbandlimited functions, Heavy tails, Uniform estimates, Extrapolation, Singular value decomposition, Truncated Fourier transform, Singular Sturm Liouville Equations, Superresolution. 

Summary: The Fourier transform truncated on [-c,c] is usually analyzed when acting on L2(-1/b,1/b) and its right-singular vectors are the prolate spheroidal wave functions. This paper considers the operator acting on the larger space L2(exp(b|.|)) on which it remains injective. We give nonasymptotic upper and lower bounds on the singular values with similar qualitative behavior in m (the index), b, and c. The lower bounds are used to obtain rates of convergence for stable analytic continuation of possibly nonbandlimited functions whose Fourier transform belongs to L2(exp(b|.|)). We also derive bounds on the sup-norm of the singular functions. Finally, we propose a numerical method to compute the SVD and apply it to stable analytic continuation when the function is observed with error on an interval.  


Presentation at the French Ministry of Labor (DARES) available here.

Keywords: Fairness, Job recommender systems, Adversarial de-biasing, Gender gaps, Human resources

Algorithmic recommendations of job ads have the potential to reduce frictional unemployment, but raise concerns about fairness due to biases in past data. Our research investigates the issue of algorithmic fairness with a specific focus on gender in a hybrid job recommendation system developed in partnership with the French Public Employment Service (PES), which is trained on past hires. First, by viewing job ads as a set of characteristics (such as wage and contract type), we document how the algorithm treats job seekers differently based on gender, both unconditionally and conditionally on their search parameters and qualifications. Second, we discuss the notion(s) of algorithmic fairness applicable in this context and the trade-offs involved. We show that the considered system reflects some existing differences in hiring or applications but does not exacerbate them. Finally, we consider adversarial de-biasing techniques as a practical tool to demonstrate the trade-offs between recall and reduced differentiated treatment.

Keywords: Job recommender systems, E-recruitment, Sparse data, Matching, Fairness, RCT

This paper presents a job recommendation algorithm designed and validated in the context of the French Public Employment Service. The challenges, owing to the confidential data policy, are related to the extreme sparsity of the interaction matrix and the mandatory scalability of the algorithm, which must deliver recommendations to millions of job seekers in quasi real-time, considering hundreds of thousands of job ads. The experimental validation of the approach shows performance similar to or better than the state of the art in terms of recall, with a gain in inference time of two orders of magnitude. The study includes a fairness analysis of the recommendation algorithm: the gender-related gap is shown to be statistically similar in the true data and in the counterfactual data built from the recommendations.

Keywords: Job recommender systems, Congestion, Matching, Optimal Transport


I introduce a new method to predict individual-level heterogeneity in the causal effect of a variable, conditional on both that variable and the observed outcome.

I show how to identify these "posterior effects" and then derive tractable estimators in various empirical contexts.

In an example application, these posterior effects reveal substantial variation in the effects of teachers' knowledge of the program on their students' performance, and could substantially improve the cost-effectiveness of training programs.

Keywords: Empirical Bayes, teacher’s value-added, random coefficients, optimal transport, generalized Tweedie’s formula, voting analysis,  inverse problem. 

R Package RegPE soon available.

Accurately measuring heterogeneous effects is key to the design of efficient public policies. This paper considers the prediction of unobserved individual-level causal effects in linear random coefficients models, conditional on all the available data. In the application I consider, these "posterior effects" are the average effects of teachers' knowledge on their students' performance, conditional on both variables. I derive two strategies for recovering these posterior effects nonparametrically, assuming independence between the effects and the covariates. The first strategy recovers the distribution of the random coefficients by a minimum distance approach, and then obtains the posterior effects from this distribution. The corresponding estimator can be computed using an optimal transport algorithm. The second approach, which is only valid for continuous regressors, expresses the posterior effects directly as a function of the data. The corresponding estimator is rate optimal. I discuss several extensions, in particular the relaxation of the independence condition. Finally, the application reveals large heterogeneity in the effect of teacher knowledge, suggesting that we could substantially improve the cost-effectiveness of their training.

Keywords: Job Recommender Systems, Two-sided Markets, Value Misalignment.

This paper questions the design of job recommender systems (RS) and their potential to enhance job search. We argue that RS should align with a rational version of job seekers' objectives. Policy makers thus need to combine hiring probabilities and job seekers' utilities, both of which are challenging to estimate. Otherwise, as our empirical findings underscore, even state-of-the-art machine learning RS may not enhance job seekers' outcomes. We address three key dimensions of RS: the differences between algorithms, the optimal objective they should pursue, and the needs of job seekers. Our results highlight the value of RS in revealing unexplored opportunities.

Keywords: Job Recommender Systems, Two-sided Market, Congestion, Optimal Transport.

This paper questions the design of job recommender systems (RS). A direct application of sophisticated Machine Learning (ML) algorithms to build recommendations, such as identifying the offers most likely to lead to a job from the prediction of successful matches, does not necessarily improve the situation of job seekers. This is because the objectives of these recommendations do not align with those of the job seekers, and because the recommendations are usually generated independently of each other, without taking competition into account. Using a theoretical model of a two-sided market with an application stage, we show that the ML tools from which the recommendations are directly derived can be more usefully mobilized to identify quantities that job seekers might have difficulty accessing. Our empirical analysis confirms these insights using the RS designed within the framework of a long-term project we are conducting with the French Public Employment Service (Pôle Emploi), which leverages rich and detailed data on applicants, firms, and past job searches. It illustrates that RS based solely on the chances of being hired, or solely on the utility of the jobs, are dominated by RS that mix the two dimensions to come closer to the expected utility. We also discuss how RS can avoid increasing congestion by using a collective objective rather than an individual one to generate the recommendations, relying on optimal transport to keep the problem tractable.

Keywords: Random Coefficients, Quasi-analyticity, Deconvolution, Identification.

Summary: This paper studies point identification of the distribution of the coefficients in some random coefficients models with exogenous regressors when their support is a proper subset, possibly discrete but countable. We exhibit trade-offs between restrictions on the distribution of the random coefficients and the support of the regressors. We consider linear models including those with nonlinear transforms of a baseline regressor, with an infinite number of regressors and deconvolution, the binary choice model, and panel data models such as single-index panel data models and an extension of the Kotlarski lemma. 


Install it by typing: ssc install mfelogit

Keywords: Fixed effects logit models, Panel Data, Partial Identification. 

mfelogit implements the estimators of the sharp bounds on the AME and the related confidence intervals on the AME and ATE from Davezies et al. (DDL hereafter). It also implements the second method proposed in DDL, which is faster to compute but may result in larger confidence intervals. When the covariate is binary, the command computes the ATE; otherwise it computes the AME.

Keywords: Fixed effects logit models, Panel Data, Partial Identification. 

This package implements the estimators of the sharp bounds on the AME and the related confidence intervals on the AME and ATE from Davezies et al. (DDL hereafter). It also implements the second method proposed in DDL, which is faster to compute but may result in larger confidence intervals. When the covariate is binary, the package computes the ATE; otherwise it computes the AME.



High-dimensional econometrics at the FiME Lab Summer School on Big Data & Finance, 12-16 June 2023.

Advanced econometrics: forecasting (2023), Saïd Business School, Oxford, on High-Dimensional Macroeconomic Forecasting.

Machine Learning for Econometrics (2018, 2019, 2020), ENSAE Paris and Institut Polytechnique de Paris (previously "High-Dimensional Econometrics"), joint with Jérémy L'hour (INSEE, CREST) and Bruno Crépon (CREST).

Mathematics for Economists (Analysis and Optimisation) (2018), Master in Economics, Paris-Saclay University, PhD track

Mathematics for Economists (2017, 2018), Sciences Po Paris, PhD track

Mathematics for Economists (2018), ENSAE Paris, Specialised Master 

Algebra and Python (2018), HEC Paris and ENSAE Paris, Undergraduate.

TA sessions:

Advanced Econometrics (2024), University of Oxford, Bent Nielsen and Martin Weidner 

Advanced Econometrics (2021-2023), University of Oxford, Anders Kock and Martin Weidner

Statistics 1 (2017-2018), ENSAE Paris, Nicolas Chopin

Numerical Analysis (2016-2018), ENSAE Paris, Cristina Butucea

Econometrics 2 (2017-2018), ENSAE Paris, Xavier D'Haultfoeuille

Simulations and Monte-Carlo (2018), ENSAE Paris, Nicolas Chopin 

Time Series Analysis (2015-2017), ENSAE Paris, Christian Franck