AI Safety Grant Program: Funding for High-Impact Projects
About the Grant Program
This grant program supports projects that aim to reduce existential risks from AI across four underexplored areas:
- Automating Research and Forecasting – Scaling AI-enabled research and forecasting methods to support safe AGI development.
- Neurotech to Integrate with or Compete Against AGI – Advancing Brain-Computer Interfaces (BCI) and Whole Brain Emulations (WBE) for human-AI collaboration.
- Security Technologies for Securing AI Systems – Implementing security techniques, automated red-teaming, and cryptographic solutions.
- Safe Multi-Agent Scenarios – Exploring game theory and coordination mechanisms to ensure safe AI interactions.
This initiative provides $4.5–5.5M USD annually to support projects that strengthen human capabilities and cooperation architectures in AI safety.
Funding Areas
1. Automating Research and Forecasting
- AI-driven scientific research to enhance AGI safety.
- Forecasting methods for AGI risk assessment.
- Other innovative approaches in this domain.
Reference materials: Superhuman Automated Forecasting, Decision Forecasting AI & Futarchy, Superhuman Scientific Literature Research.
2. Neurotech for Human-AI Integration
- Brain-Computer Interfaces (BCI) for human cognition enhancement.
- Whole Brain Emulations (WBE) as human-aligned alternatives to AGI.
- Deep learning-based lo-fi emulations.
Reference materials: Foresight Institute Neurotech Reports, Whole Brain Emulations, A Hybrid Approach to AI Safety.
3. Security Technologies for AI Systems
- Secure computing techniques (POLA, SeL4, hardware security).
- Automated vulnerability discovery and red-teaming.
- Cryptographic methods for secure AI coordination.
Reference materials: AI Infosec, Cryptographic Technologies, Security Without Dystopia.
4. Safe Multi-Agent Scenarios
- Game theory models for AI-human cooperation.
- Strategies to prevent collusion and deception.
- Principal-agent models and Active Inference approaches.
Reference materials: Foundations of Cooperative AI, Multi-Agent Safety Hackathon, Paretotopian Goal Alignment.
Application and Evaluation Process
How We Evaluate Projects
- Selection: Reviewed by Foresight staff and external advisors.
- Criteria: Potential for high-impact outcomes within short AGI timelines.
- Openness: Preference for publicly shared results unless confidentiality is required.
Applications are accepted year-round and reviewed quarterly, with deadlines on the last day of March, June, September, and December.
Application Process
- Fill out the grant application form.
- Review by at least three technical advisors.
- If shortlisted, attend a brief interview.
- Decisions are made within 8 weeks of the deadline.
Additional Details
For detailed information about the grant program, including eligibility, focus areas, and application guidelines, please refer to the document below: