INHR
INHR is a U.S.-registered 501(c)(3) non-profit organization with a presence in New York, the U.K., Italy, and Geneva. It focuses on leveling the international playing field by convening international AI safety dialogues. INHR collaborates with experts from China, the U.S., and other regions on risk identification and mitigation, particularly in high-risk and existential-risk contexts such as weapons of mass destruction, military decision support, and human rights.
About the Role
INHR is seeking early-career professionals and students (in bachelor's or master's programs, graduating in 2025-26) to conduct research on advanced AI and AI safety. The fellowship provides exposure to leading academics and practitioners in AI governance and the opportunity to contribute to a research project on risks and mitigation strategies for cutting-edge AI technologies.
Responsibilities
- Conduct research on AI safety and governance issues.
- Collaborate with international experts from diverse regions, including China, the U.S., Europe, India, and Korea.
- Contribute to dialogues and research projects addressing AI-related risks.
- Work on policy recommendations for AI governance.
Qualifications
- Interest in AI governance and policy.
- Excellent written English skills.
- Additional skills (preferred but not required):
  - Fluency in Chinese or Korean.
  - Programming skills.
  - Work experience involving Large Language Models (LLMs) or Big Data Technologies (BDTs).
Time Commitment
- Minimum commitment: 10 weeks.
- Expected workload: ~10 hours per week (with flexibility).
Application Process
Interested candidates should submit the following documents:
- CV (in reverse chronological order, with dates by month and year).
- Writing sample (maximum 10 pages).
- Motivation letter (expressing interest and availability).
Note: Strong performance in this role may lead to consideration for future paid positions as a researcher or program officer.