Gray Swan AI
Gray Swan AI is on a mission to maximize the benefits of AI by making it safe for everyone. As a growing startup, we develop cutting-edge security solutions for AI systems, helping organizations ensure the safety and reliability of their AI-driven products.
About the Role
Location: In Office (Pittsburgh, PA) | Remote (International)
Paid internship: $30-$50/hour (based on experience and skills)
Supports candidates on OPT and CPT.
We are looking for a Machine Learning Engineer Intern to collaborate with our research and engineering teams. This role offers hands-on experience in developing, testing, and deploying AI security solutions. You will contribute to machine learning model evaluations, build tools for AI risk assessment, and help integrate ML-driven insights into our products.
Responsibilities
- Support research on AI robustness, adversarial attacks, and defense strategies.
- Assist in developing and testing the security of machine learning models.
- Build and optimize tools for AI risk assessment and adversarial testing.
- Collaborate with engineers and researchers to integrate ML solutions into production environments.
Qualifications
Required:
- Currently pursuing, or have completed, a Master’s or Ph.D. in Computer Science, Machine Learning, AI, or a related field.
- Relevant work or research experience, including:
  - Experience building machine learning models with PyTorch through coursework, personal projects, or internships.
  - Familiarity with Python (C++ is a plus) and the basic programming skills needed to implement ML models.
  - Understanding of neural network architectures such as sequence models and transformers, and of deep learning approaches more broadly.
  - Ability to preprocess, transform, and analyze datasets, including exposure to large or multimodal datasets.
  - Strong problem-solving skills and a basic understanding of ML theory, optimization techniques, and algorithmic principles.
Bonus Points:
- Exposure to cloud platforms like AWS, GCP, or Azure for running ML experiments or deploying models.
- Experience building small-scale ML pipelines or integrating ML models into applications.
- Interest in AI security, adversarial ML, or model robustness evaluation techniques.
- Experience conducting empirical analysis of ML experiments, even at a research prototype level.