
DeepSeek and the Future of AI: Congressional Testimony from Julia Stoyanovich
12 April 2025
On 9 April, Associate Professor Julia Stoyanovich, Director of the Center for Responsible AI at NYU Tandon School of Engineering and Partner Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, testified at the Research & Technology Subcommittee Hearing – DeepSeek: A Deep Dive.
Her testimony focused on the national security and competitive advantage implications of DeepSeek for the US.
“It was an honor and a privilege to testify at the U.S. House of Representatives today, at a Research & Technology Subcommittee Hearing of the Committee on Science, Space, and Technology,” said Professor Stoyanovich.
In her remarks, Professor Stoyanovich offered three key recommendations regarding the technology implications of DeepSeek:
Recommendation 1: Foster an Open Research Environment
To close the strategic gap, the federal government must support an open, ambitious research ecosystem. This includes robust funding for fundamental AI science, public datasets, model development, and compute access. The National AI Research Resource (NAIRR) is essential here, providing academic institutions, startups, and public agencies with the tools to compete globally. Federal support for the National Science Foundation and other agencies is vital to sustaining open research and building a skilled AI workforce.
Recommendation 2: Incentivize Transparency Across the AI Lifecycle
Transparency drives progress, safety, and accountability. The government should require public disclosure of model architecture, training regimes, and evaluation protocols in federally funded AI work—and incentivize similar practices in commercial models. Public benchmarks, shared leaderboards, and reproducibility audits can raise the floor for all developers.
Recommendation 3: Establish a Strong Data Protection Regime
The U.S. must lead not only in AI performance, but in responsible, privacy-respecting AI infrastructure. This includes clear guardrails on how AI models collect and use data, especially when deployed in sensitive sectors. It also means restricting exposure of U.S. data to jurisdictions that lack safeguards. International frameworks such as the EU's GDPR offer useful reference points, but the U.S. approach must reflect its own values and strategic interests.
About the Hearing
The hearing examined DeepSeek’s AI models, which have drawn international attention for achieving comparable performance to U.S. models while using less advanced chips and appearing more cost-effective. The session also explored the role of U.S. technologies in DeepSeek’s development and how federal support can drive innovation in the private sector.
Other expert witnesses included Adam Thierer (R Street Institute), Gregory Allen (Center for Strategic and International Studies), and Tim Fist (Institute for Progress).
Another related hearing will be held Wednesday by the House Energy and Commerce Committee, focusing on the federal role in accelerating advancements in computing.
View the Research and Technology Subcommittee Hearing – DeepSeek: A Deep Dive on YouTube.