The abiding concern of my research is an updated version of a classic sociological question: is the purpose of quantitative knowledge to understand the world, or to change it? My current work is divided into three primary areas:
Governance of Artificial Intelligence/Machine Learning Models in Healthcare
Much of my current work concerns the social classification of AI/ML models and its ethical implications for US healthcare. The vast majority of AI/ML applications are not classified as medical devices and are therefore not subject to FDA regulation. Based on interviews with experts in clinical informatics, I am studying how experts use what I call reflexive analogical reasoning to imagine the future of AI/ML governance by drawing on a dizzying array of regulatory models: nutrition labels, “citizen assemblies,” aviation safety, space travel, chemical weapons safety, and more. The first manuscript from this research is under review at a medical sociology journal.
At NYU, I am also engaged in more practical, team-based work in this area. My collaborator Kellie Owens and I have designed a checklist, based on interviews and participant observation with experts in the medical school, that is guiding the development and application of AI/ML models within the NYU healthcare system. This work informs two recent publications in the American Journal of Bioethics, in which we grapple with thorny issues such as whose expertise should be consulted in model implementation and whether the IRB is an appropriate venue for ethical oversight of AI. Because activities that involve AI/ML models are typically designated “quality improvement” (QI) rather than clinical research (which carries federally mandated ethical obligations), there remains significant uncertainty over what local governance of these tools ought to look like. In practice, the boundary between QI and clinical research is not well defined, so in the next phase of this research I plan to use content analysis, survey methods, and social network analysis to identify how QI emerged as a distinct category and how this fuzzy approach to classification affects the interpretation of scientific evidence in clinical medicine. This has implications for novel uses of AI/ML models that are difficult to categorize, such as “silent trials,” in which tools are evaluated on prospective patients by researchers while clinicians are blinded to model predictions.
The Sociology of Economic and Statistical Expertise
Quality improvement techniques in healthcare were originally developed by management scientists experimenting with statistical methods in the mid-twentieth century, which relates to my second major area of interest: the sociology of statistical and economic expertise. This work reframes the enormously consequential economics of U.S. social policy as a predominantly reactive enterprise, in which existing social programs and data sources constrain economists’ capacity to effect policy change. I find that when it comes to topics like healthcare or education, economics is not an unchanging monolith in policy settings, and that the further one gets from the field’s disciplinary core, the less essential economic theory is to the work of economists relative to a common methodological language (which is not always legible to policy audiences). In the wake of the COVID-19 pandemic, this analytical approach has found economists joining the fray of experts investigating issues such as “health equity,” a state of affairs that contrasts sharply with popular critiques of the field. Research related to this project has been published in Theory and Society, the Journal of Cultural Economy, Science, Technology, & Human Values, the Journal of Education Policy, and Economy and Society. Most recently, I wrote a piece for TIME Magazine's Made By History series based on this research.
Genetics and Uncertainty in Social Policy
A final project, Polygenic Prediction, turns a sociological lens on the production of new quantitative indicators in the field of behavior genetics that are beset by a host of uncertainties. While so-called polygenic scores have received critical attention primarily for their potentially eugenic implications in policy settings, this project investigates how uncertainty is inherent to research in this domain, beginning with the collection of biobank data that disproportionately feature people of European ancestry, resulting in reference “populations” that are not representative. My research on polygenic scores, a collaboration with UCLA's Aaron Panofsky and Nanibaa' Garrison, aims to empower clinicians, policymakers, and the public so that when polygenic prediction is applied, it is done as ethically and equitably as possible.