Research Scientist – Responsible AI
This role is for a Responsible AI & AI Safety Specialist to help develop and implement strategies for building safe, ethical, and beneficial AI systems. You will play a crucial role in identifying potential risks, drafting policies, conducting rigorous evaluations, and shaping the development of AI technologies to align with responsible AI principles and safety best practices. An ideal candidate possesses a strong understanding of AI safety concerns, experience in conducting risk assessments and evaluations, and the ability to translate research findings into actionable recommendations for engineering, product, and policy teams.
Essential functions
Risk Assessment & Mitigation: Work closely with engineers, PMs, designers, and data scientists to identify potential risks and unintended consequences associated with AI systems (e.g., bias, fairness, privacy, security, robustness, interpretability, controllability). Develop and implement appropriate risk mitigation strategies.
AI Safety Evaluations: Design and conduct rigorous evaluations of AI systems using a variety of methods, including quantitative analysis, simulations, red teaming, and user studies, to assess their safety, reliability, and alignment with intended goals.
Empirical Research: Investigate AI system behavior and impact using empirical methods such as simulations, adversarial testing, robustness analyses, and log analysis; and apply state-of-the-art statistical methods to analyze and interpret results.
Policy & Guidelines: Contribute to the development of internal guidelines, policies, and best practices for responsible AI development and deployment, ensuring compliance with relevant regulations and ethical standards.
Reporting & Communication: Report on your research findings and recommendations to stakeholders across the organization, including engineering, product, legal, and executive teams. Communicate complex technical concepts clearly and concisely.
Collaboration & Mentorship: Work autonomously and collaboratively with cross-functional teams. Mentor other researchers and engineers to promote a culture of responsible AI and AI safety.
Qualifications
4+ years of experience in an applied research or engineering setting related to AI safety, responsible AI, or a closely related field.
Advanced degree (MS/PhD) in Information Science, Computer Science, Science and Technology Studies, Artificial Intelligence, or a related socio-technical field.
Strong understanding of AI safety concerns, including alignment, adversarial and robustness testing, and content moderation.
Comfortable analyzing and producing high-level reports of quantitative and qualitative data related to AI system performance, safety metrics, and risk assessments, including data from simulations and real-world deployments.
Able to produce meaningful visualizations and reports of the data, and effectively present results to different audiences and across multiple media formats.
Able to work independently to drive outcomes among cross-functional teams, with minimal direction.
Organized, highly attentive to detail, and skilled at time management.
Excellent written and oral communication skills.
Experience working in industry.
Would be a plus
Experience with designing and conducting experiments to evaluate AI system behavior under various conditions, including adversarial attacks and edge cases.
Experience with AI safety tools and frameworks.
Data science experience, especially with large, distributed data frameworks (e.g., Spark or Hadoop).
Prior experience working on AI safety, responsible AI, or related topics.
General familiarity with relevant regulations and ethical guidelines related to AI (e.g., EU AI Act).
General familiarity with red teaming and adversarial testing of AI systems.
Experience with technical AI safety mitigations.
We offer
- Opportunity to work on cutting-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical, vision, and dental insurance, etc.
- Corporate social events
- Professional development opportunities
- Well-equipped office
About us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
Don’t see the right opportunity?
Contact us anyway and let’s talk! To apply, send your resume and cover letter to jobs@griddynamics.com
Grid Dynamics is an equal opportunity employer. We are committed to creating an inclusive environment for all employees during their employment and for all candidates during the application process.
All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on, age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category. All employment is decided on the basis of qualifications, merit, and business need.