Research Scientist, Humanity, Ethics and Alignment

Posted:
10/25/2024, 2:40:26 AM

Location(s):
England, United Kingdom ⋅ London, England, United Kingdom

Experience Level(s):
Mid Level ⋅ Senior

Field(s):
AI & Machine Learning

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

The Humanity, Ethics and Alignment Research Team (HEART) is part of Google DeepMind's responsibility research (R2), which focuses on interdisciplinary research and technologies that advance safe and beneficial AI development.

HEART focuses on research questions at the ethical frontier of AI technology. It aims to anticipate and address the broader implications of advanced AI systems, align these technologies with human values, and guide their design and deployment. HEART currently partners with foundational research, safety and policy teams to lead work on AI agents, advanced assistants and value alignment, contributing to safe and beneficial AI.

We are a cross-disciplinary team with expertise spanning philosophy, political science, science and technology studies, computer science, and other fields. We are seeking to recruit a research scientist with expertise in AI ethics to help drive this work forward.

About us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

Research Scientists join Google DeepMind to work collaboratively within and across a range of research fields. They develop solutions to fundamental questions in machine learning, computational neuroscience, AI, and AI policy and governance. We are looking to hire at early and mid-career levels, though will also consider more senior candidates. Those coming from academia may join us from PhD, post-doc, or professor positions.

Key responsibilities

  • Propose, contribute to, and lead research projects related to the ethics and impact of advanced AI technologies.
  • Produce state-of-the-art research into questions surrounding the ethical and social implications of advanced AI systems, via high-quality research publications.
  • Leverage domain-specific expertise to advance overall AI foresight and preparedness via institutional processes, both within GDM and more widely.
  • Support the responsible development, evaluation and deployment of advanced AI technology by contributing to GDM planning, development and evaluation practices.
  • Proactively build relationships across the organisation to inform research and identify opportunities for your work to support other teams.
  • Build and contribute to internal and external collaborations, through involvement in working groups, presentations, and contributions to policy work.
  • Collaborate with our Foundational Research, Safety, Policy and Responsibility teams to ensure advances in artificial intelligence are developed ethically and provide broad benefits to humanity.

About you

To set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:

  • A PhD in philosophy, social science, computer science or a related field, with a demonstrable research interest in the social, political and ethical consequences of AI. 
  • High-impact research publications on topics concerning AI, ethics and society.
  • Professional or research experience in the field of AI ethics, AI policy or AI development. 
  • Knowledge of the technical AI landscape and sociotechnical AI research landscape.
  • The ability to work quickly and adaptively in a cross-disciplinary environment, communicating clearly and demonstrating an ability to drive projects towards completion.
  • Ability to synthesise complex material into accessible documents, tailored to different audiences.
  • Professional communication, writing, and presentation skills. 

In addition, the following would be an advantage: 

  • Experience incorporating the perspectives and interests of a diverse range of communities, groups and partners.
  • Familiarity with qualitative and quantitative research methods.

Application deadline: 5pm GMT Friday 8th October 2024 

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.

DeepMind

Website: https://deepmind.com/

Headquarter Location: London, England, United Kingdom

Employee Count: 501-1000

Year Founded: 2010

IPO Status: Private

Last Funding Type: Series A

Industries: Artificial Intelligence (AI) ⋅ Business Development ⋅ Machine Learning