Working the Refs
Ro Khanna’s Academic Roundtable Stacked with Big Tech-Connected Scholars
Silicon Valley Rep. Ro Khanna convened a meeting of academics ‘ignored’ in the debate over AI. More than two-thirds of them had ties to Big Tech that Khanna did not disclose.

When Rep. Ro Khanna announced plans to hold a Capitol Hill roundtable on artificial intelligence technology in February, he billed it as an opportunity to hear from academic experts, “a critical voice” that he argued was being “ignored” in the AI policy debate in Washington.

Khanna, who represents part of Silicon Valley, told the Washington Post he would bring together the “leading minds in AI, technology, economics, ethics to bring some objectivity” to the subject, rather than “technology leaders at corporations telling us how to regulate technology.”

But a Tech Transparency Project (TTP) review of the 22 academics who participated in the roundtable shows that 15, or roughly two-thirds, have ties to Big Tech companies that dominate the AI industry. The review found that the 15 have either worked at or consulted for Big Tech companies, received tech research funding, or are linked to organizations funded by Big Tech companies or tech executives.

To be sure, many of these academics are respected experts in the field of AI and may have important contributions to make on how to deploy and regulate the technology. Some have been critical of Big Tech. But at a time when tech companies and executives are extending their reach into academia, the lack of disclosure around Khanna’s roundtable raises questions about the role of tech industry influence on the discussion.

Khanna’s office did not respond to a request for comment.

Tech ties

Since ChatGPT kicked off the generative AI boom, Khanna, a Democrat and self-described progressive capitalist, has often talked about giving constituencies other than Big Tech a seat at the table on how the technology is developed.

In a May 2023 op-ed, Khanna called on political and business leaders to “democratize access” to AI jobs. He later warned that “large parts of the working class and middle class could fall further behind” as Silicon Valley cashes in on the AI frenzy, and advocated for the inclusion of workers on the boards of AI companies.

But Khanna has also staked out positions on AI that align with Big Tech’s interests. For example, he has proposed legislation that would prod the U.S. government to speed its adoption of commercial AI tools—a potential boon to tech companies eager to sell their AI products to federal agencies.

One bill he introduced last year directs the White House Office of Management and Budget to guide federal agencies on incorporating new technologies like AI into their website search functions. Khanna used ChatGPT to write the bill. He also co-sponsored a measure to set up a working group of countries in the Five Eyes intelligence alliance to “leverage commercially available artificial intelligence technologies” to advance their joint activities.

Khanna’s academic roundtable on AI, held behind closed doors in February 2024, had strong Big Tech connections. The event, meant to explore AI’s impact on workers, education, elections, and other issues, was dominated by participants with ties to the tech industry.

TTP obtained the list of roundtable participants and details those with Big Tech connections below. Most of them did not respond to a request for comment.

·      Fei-Fei Li, a Google veteran and co-director of the Stanford Institute for Human-Centered AI (HAI), which is backed by Google and Microsoft. A leaked email from Li played a prominent role in the 2018 controversy over Google’s AI contract with the Pentagon, which sparked widespread Google employee protests. In an email exchange published by the New York Times, Li—then serving as chief scientist for AI at Google Cloud—warned colleagues to avoid “at ALL COSTS” any mention of AI when talking about the program, known as Project Maven, because it would be “red meat to the media.”

·      Stanford professors Erik Brynjolfsson, a HAI faculty member and senior fellow whose Digital Economy Lab at Stanford has gotten financial support from Google and its DeepMind division as well as Schmidt Futures, the investment fund of former Google CEO Eric Schmidt; Andrew Ng, a HAI faculty member and the co-founder of Google Brain, the deep learning AI research team at Google that later merged with DeepMind; and Rob Reich, who has served as a faculty member, senior fellow, and associate director at HAI.

Asked for comment, Reich pointed to HAI's fundraising policy, which states among other things that the institute does not accept donations or conditions on donations that "might compromise the independence, accuracy, or autonomy of our work, or restrain the views expressed by researchers at HAI."

·      Noah Feldman, a Harvard Law School professor and co-founder of Ethical Compass Advisors, which counts Facebook and its parent company Meta among its consulting clients. Feldman was influential in the creation of Meta’s Oversight Board, a quasi-independent body that makes rulings on content issues and has been criticized as a public relations and “self-regulation” ploy by Meta to fend off government regulation.

·      Kristian Lum, who was a research associate professor at the University of Chicago Data Science Institute at the time of Khanna’s roundtable. She is a founding executive committee member of the ACM conference on Fairness, Accountability, and Transparency (FAccT), which is regularly sponsored by Big Tech companies. The conference’s 2022 sponsors included Google’s DeepMind, Microsoft, and Amazon. In February 2024, days after the Khanna event, Lum tweeted that she had joined Google DeepMind as a staff research scientist.

·      Lawrence Lessig, a Harvard Law School professor. Lessig is a founder and board member emeritus of Creative Commons, the copyright licensing system that launched in 2001 and receives funding from Google, Microsoft, Amazon Web Services, and Facebook CEO Mark Zuckerberg’s philanthropy, among others. Asked for comment, Lessig said he was “chair of the board until 2007, and have raised no money for the organization since then.”

In 2007, the Wall Street Journal reported that Google pledged $2 million to Stanford Law School's Center for Internet and Society, which was founded by Lessig, though a later report indicated that Lessig founded the center without raising money from outside the university. In an emailed comment, Lessig said he “had no obligation to raise money for the center ever, and did not,” and said his “recognition of the issue of how contributions might undermine the integrity of any center” led him to Harvard in 2009 to work on a lab focused on institutional corruption. Lessig said he is working on an organization that will allow academics to certify their independence, “which roughly tracks their not taking money that would undermine views about the integrity of their research.” He also pointed to his disclosure statement.

According to the website Open Secrets, which tracks money in politics, Google employees were the largest source of donations for Lessig’s longshot bid for the White House in the 2016 election cycle.

·      Markus Anderljung, head of policy for the Centre for the Governance of AI (GovAI), which spun out of Oxford University's Future of Humanity Institute. He’s also an adjunct fellow with the Center for a New American Security, which has received funding from Google, Facebook, and Amazon. GovAI’s research led to the creation of the Cooperative AI Foundation (CAIF), a U.K. charity whose founding directors include Microsoft Chief Scientific Officer Eric Horvitz and Google DeepMind’s Allan Dafoe.

In an emailed comment, Anderljung said he does not receive compensation from the Center for a New American Security and pointed to the group's funding policy, which states that it only accepts contributions on the condition that it "retains intellectual independence and full control over any content funded in whole or in part by the contribution."

He also said GovAI is funded by philanthropic organizations and individual donors, and does not accept financial support from for-profit companies. He said the group's funders "have never exerted any influence on GovAI’s research agenda or topics, and do not instruct researchers in any way," adding that GovAI is "rigorous in ensuring that it does not accept donations that it believes might compromise the neutrality or accuracy of its work."

·      Deborah Raji, an AI research scientist who has worked with the Google Ethical AI team and served as a research fellow at the Partnership on AI (PAI), a nonprofit coalition founded by Facebook, Google, Microsoft, and Amazon.

·      Andrew Selbst, an assistant professor of law at the UCLA School of Law. He previously served as a postdoctoral scholar at the Data & Society Research Institute, which was founded in 2013 with gifts from Microsoft and Microsoft Research.

·      Rediet Abebe, a University of California, Berkeley assistant professor of computer science who co-founded Black in AI in 2017 to increase the presence of Black people in the field of AI. The group’s corporate sponsors included Microsoft, Google, Google’s DeepMind division, and Facebook.

·      Ziad Obermeyer, an associate professor of health policy and management at the UC Berkeley School of Public Health. He co-founded Nightingale, a medical AI nonprofit that received a $2 million grant from Eric Schmidt’s investment fund Schmidt Futures. Obermeyer also received a $1 million gift from Mark Zuckerberg’s philanthropy.

·      Ifeoma Ajunwa, an Emory University law professor. She received a $690,000 grant from Microsoft in 2021 for an AI decision-making research program when she was at the University of North Carolina School of Law.

In an emailed comment, Ajunwa said she received an unrestricted gift from Microsoft that “does not come with any obligations to Microsoft whatsoever.” She wrote, “I also firmly believe that academics should make use of industry resources for research that will benefit the public good,” adding that she disagrees with the notion that “academics should have no contact at all with the tech industry and help inform their approach by revealing the ethical and legal implications of the products they are developing.”

·      Ayanna Howard, the dean of the Ohio State University College of Engineering, who served as a visiting researcher at Microsoft Research in 2016.

·      Sarah Myers West, co-executive director of New York University’s AI Now Institute, which launched with funding from Microsoft Research, Google, and DeepMind. (The institute later stopped taking Big Tech funding and has become increasingly critical of Big Tech efforts to shape AI policy.)

May 21, 2024