The Safe AI Forum

The Safe AI Forum (SAIF) is a nonprofit organisation focused on advancing global action and collaboration to minimise extreme AI risks and ensure AI benefits are shared by all. Founded in late 2023 by Fynn Heide and Conor McGurk, SAIF leads the International Dialogues on AI Safety (IDAIS), facilitating global discussions among scientists and AI governance experts. Our team also conducts research on AI governance, provides advisory services to aligned nonprofits, and runs workshops on critical AI safety topics.

About the Role

SAIF is seeking applications for a 6- or 12-month fellowship to develop and execute projects aligned with our mission. The fellowship is fully remote, and fellows will be integrated into our team with access to SAIF’s extensive network of experts and collaborators.

Fellows will design and carry out high-impact projects in AI safety and governance. Exceptional candidates may propose their own projects. SAIF leadership will provide support in scoping, resourcing, and overseeing these projects. Fellows will primarily focus on their research initiatives but may also contribute to other team efforts. There may be an opportunity for permanent employment after the fellowship.

Qualifications & Responsibilities

Research Fellow

Senior Research Fellow

Special Projects Fellow

Project Areas

Fellows may work on a variety of projects, including:

Ideal Candidate Profile

You may be a good fit if you:

Logistics

SAIF is committed to fostering an inclusive and diverse workplace. We welcome applicants from all backgrounds and encourage individuals of all identities and experiences to apply. If you require accommodations during the application process, please contact us.

If you have any questions, reach out to Conor McGurk at conor@far.ai or SAIF at info@saif.org.