Survey Research in Economics – Insights from my LEAP Internship

By Edoardo Ardito, LEAP Intern

Within the various fields of Economics, scholars are often confronted with the challenge of translating the complex and sometimes blurry reality of human behaviour into elegant models and assumptions, or into datasets and observable variables. This is rarely an easy job: think, for example, of ability or motivation. Models in labor economics largely study how the interplay of these variables affects productivity and other labor market outcomes, or what role they play in determining the returns to human capital investments such as university education. But how these variables should be uniquely defined and, most importantly, how they can be observed and measured is far from obvious.

The aim of this blog post is to offer some food for thought on surveys as a key data collection and measurement method in economic research, specifically by pointing out some of their strengths, drawbacks, and possible innovative alternatives. Most of what I write about here I had the opportunity to observe closely during my internship at LEAP as a Research Assistant. Indeed, my experience at LEAP gave me the chance to interact with experts in the field of survey research, and not only to learn from them about clever survey designs from a theoretical standpoint, but also to get my hands on real survey data and understand how it is managed.

Overall, surveys are an extremely powerful instrument for turning elusive human attributes into manageable variables and indices. For example, survey research employs an array of techniques to capture empirical data on individuals’ subjective expectations about the future. These include point expectations (such as an individual’s expected retirement age), but also full distributions of expected outcomes. The latter measure involves eliciting respondents’ beliefs about the likelihood that a given outcome will exceed a number of predefined thresholds in the future. For example, respondents may be asked the probability that, at the age of 40 and with a university degree, their salary will surpass a specific threshold X. By combining a respondent’s answers to such a question for various values of X, a researcher can reconstruct the respondent’s subjective distribution of their expected salary.
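
To make this elicitation concrete, here is a minimal sketch in Python (with purely hypothetical thresholds and answers, not data from any actual survey) of how a respondent’s reported exceedance probabilities can be turned into an approximate subjective distribution and an implied expected salary.

```python
import numpy as np

# Hypothetical answers to: "What is the probability that, at age 40 and with
# a university degree, your yearly salary will exceed X?"
thresholds = np.array([20_000, 30_000, 40_000, 60_000, 80_000])  # values of X
p_exceed = np.array([0.95, 0.80, 0.55, 0.25, 0.05])              # reported P(salary > X)

# The implied subjective CDF at each threshold is 1 - P(salary > X).
cdf = 1.0 - p_exceed

# Close the distribution with assumed bottom and top bounds for the support,
# then recover the probability mass the respondent places on each salary bin.
bin_edges = np.concatenate(([10_000], thresholds, [100_000]))
bin_probs = np.diff(np.concatenate(([0.0], cdf, [1.0])))

# A crude point summary: expected salary using bin midpoints.
midpoints = (bin_edges[:-1] + bin_edges[1:]) / 2
expected_salary = np.sum(midpoints * bin_probs)

print("bin probabilities:", bin_probs.round(2))
print(f"implied expected salary: {expected_salary:,.0f}")
```

In practice, researchers often go one step further and fit a parametric distribution (a log-normal, say) to the elicited points rather than working with coarse bins, but the underlying logic is the same.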

Surveys also serve as an effective tool to capture individuals’ habits and beliefs. A well-known way of measuring beliefs is through Likert-scale questions, which ask respondents to express a degree of agreement (say, from 0 to 10) with a given statement. By strategically combining multiple Likert-scale questions, researchers can construct multifaceted indices that capture the nuances of a respondent’s beliefs. For example, large-scale surveys like the European Social Survey collect data on individuals’ beliefs and perceptions that can be aggregated into indices of perceived fairness of public services, understanding and evaluation of democracy, or attitudes towards climate change. In addition, survey questions can probe individuals’ beliefs and attitudes by scrutinising their everyday habits: information about how often an individual drives the car to cover relatively short distances, eats meat, or recycles waste is (at least in part) informative about their attitudes towards climate change.
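
As a toy illustration of how several Likert-scale items can be combined into one index, the sketch below uses made-up items and answers (not actual European Social Survey data): it reverse-codes a negatively worded item, standardises each item, and averages them into a single attitude index.

```python
import pandas as pd

# Hypothetical 0-10 agreement scores on three climate-related statements;
# the second statement is negatively worded and must be reverse-coded.
responses = pd.DataFrame({
    "worried_about_climate":          [8, 3, 9, 6, 2],
    "climate_concern_is_exaggerated": [1, 7, 0, 4, 8],
    "willing_to_pay_green_tax":       [7, 2, 8, 5, 1],
})

SCALE_MAX = 10
responses["climate_concern_is_exaggerated"] = (
    SCALE_MAX - responses["climate_concern_is_exaggerated"]
)

# Standardise each item (z-scores) so that items with different spreads carry
# equal weight, then average across items to get one index per respondent.
z_scores = (responses - responses.mean()) / responses.std()
responses["climate_attitude_index"] = z_scores.mean(axis=1)

print(responses)
```

In a real application one would also check the internal consistency of the items (for example with Cronbach’s alpha) before collapsing them into a single index.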

These examples are only breadcrumbs compared to the mass of information that surveys can unlock and to the vast repertoire of survey designs and elicitation techniques that are available. Still, they illustrate two major strengths that I have learned to associate with surveys as measurement tools:

  1. Flexibility: surveys can be tailor-made for virtually any research question. This adaptability gives researchers a canvas on which to exercise their creativity, devising questions that gauge even the most intricate or concealed aspects of respondents’ characters.
  2. Competitive advantage: few other data collection techniques can turn human characteristics such as beliefs, expectations, habits, and preferences into quantitative variables as easily and cheaply as surveys can. In my view, this capability makes surveys indispensable in social science research.

Clearly, surveys do not come without imperfections: how many questions the survey has, the order in which questions are presented, the conditions under which respondents answer them, how easy it is to understand what the questions are asking, and whether respondents feel their privacy is being violated are only a few of the many sources of error that can arise from relying on surveys as measurement instruments. It is well known how crucial it is to carefully design and test a survey before taking it to the field, and how costly it can be to ensure that data collection is bias-proof, especially in contexts where thoroughly trained interviewers and supervisors must meet and interview respondents in person.

Still, even a perfectly designed and well-performing survey may produce data that is not free of all bias. Together with the values of the measured variables, survey data also carries a hidden story about patterns that respondents unconsciously follow, triggered by the very act of answering survey questions. For example, respondents may guess the most socially acceptable answer to a given question, tend to avoid the extreme values of a Likert scale, or report an idealised version of their habits rather than the reality.
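
To get a feel for how much such patterns can matter, here is a small simulation with entirely made-up parameters: hypothetical respondents shrink their true 0-10 answers towards the scale midpoint and shade them towards the socially desirable end, and the measured mean and spread both drift away from the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" attitudes of 1,000 hypothetical respondents on a 0-10 scale.
true_scores = np.clip(rng.normal(loc=4.0, scale=2.5, size=1_000), 0, 10)

SHRINK = 0.30        # central-tendency bias: answers pulled towards the midpoint (5)
DESIRABILITY = 0.8   # social-desirability bias: answers shaded upwards

reported = true_scores + SHRINK * (5 - true_scores) + DESIRABILITY
reported = np.clip(np.round(reported), 0, 10)  # answers come back as integers on the scale

print(f"true mean {true_scores.mean():.2f} vs reported mean {reported.mean():.2f}")
print(f"true std  {true_scores.std():.2f} vs reported std  {reported.std():.2f}")
```

Even this crude mechanism compresses the variance and shifts the mean, which is exactly the kind of hidden story that travels along with the measured values.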

Interestingly, the ever-evolving technological advancements of our time have the potential to open up innovative strategies for gathering data on large populations and to set the basis for new techniques to measure variables that we typically observe through surveys. Smartphone applications have a disarming capability for data collection nowadays: they can easily track users’ commutes, consumption habits, physical activity, or sleeping schedules as they happen in real time. A more adventurous example is that of so-called neuroeconomic experiments, which allow researchers to observe individuals’ risk and uncertainty preferences, reward processing, or temporal discounting from recordings of their brain activity rather than through the intermediary of a survey (see, for example, Berns et al., 2008). In the development economics literature, some alternative data collection methods have already been used. Haushofer and Shapiro (2016), for example, paired survey data with saliva samples to obtain measures of respondents’ stress and psychological distress. Indeed, stress increases the production of the hormone cortisol, whose levels can be detected in saliva samples.

Needless to say, alternatives to survey data come with flaws of their own: ethical and privacy concerns, high costs and resource or technical requirements, or the fact that they may not be optimised for economic research, just to name a few. Still, it would be interesting to discover whether measuring people’s habits, preferences, or beliefs through surveys or through complex brain-reading contraptions results in similar observations, and in turn to understand to what extent the findings of economic research would change across different methods of data collection.

To conclude, I include a list of related readings, some of which I studied during my internship at LEAP, that I found interesting and useful for learning more about survey research:

  1. Delavande (2023): Explores how probabilistic expectations have been measured in low- and middle-income countries (LMICs) and highlights variations in these measurements.
  2. Giustinelli (2022): A literature review on subjective expectations in education, with a specific emphasis on methods and analysis of youth’s expectations of the returns to schooling through survey elicitation.
  3. (At the risk of going off topic) Nudge, by Thaler and Sunstein (2009): An interesting book that effectively describes how people’s decision-making and perceptions change with the specific context they are in.

References

Berns, G. S., Capra, C. M., Moore, S., & Noussair, C. (2008). Three studies on the neuroeconomics of decision-making when payoffs are real and negative. In Advances in Health Economics and Health Services Research (pp. 1–29). https://doi.org/10.1016/s0731-2199(08)20001-4

Delavande, A. (2023). Expectations in development economics. In Handbook of Economic Expectations (pp. 261–291). Elsevier. https://doi.org/10.1016/b978-0-12-822927-9.00016-1

Giustinelli, P. (2022). Expectations in education: Framework, elicitation, and evidence. Social Science Research Network. https://doi.org/10.2139/ssrn.4318127

Haushofer, J., & Shapiro, J. P. (2016). The short-term impact of unconditional cash transfers to the poor: Experimental evidence from Kenya. Quarterly Journal of Economics, 131(4), 1973–2042. https://doi.org/10.1093/qje/qjw025

Thaler, R. H., & Sunstein, C. R. (2009). Nudge: Improving Decisions About Health, Wealth, and Happiness. Penguin.
