Posted:
7/29/2024, 9:22:43 PM
Location(s):
England, United Kingdom ⋅ London, England, United Kingdom
Experience Level(s):
Mid Level ⋅ Senior
Field(s):
AI & Machine Learning
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
We are looking for an AGI Safety Manager to join our Responsible Development & Innovation (ReDI) team at Google DeepMind.
In this role, you will be responsible for partnering with research, product and policy teams focused on AGI. You will help anticipate the risks and challenges from AGI and assess AGI-related efforts and technologies. Among your core responsibilities, you will manage the operation of our AGI Safety Council. You will contribute to our efforts to ensure that our AGI work is conducted in line with our responsibility and safety best practices, helping Google DeepMind to progress towards its mission to build AI responsibly to benefit humanity.
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and responsibility are the highest priority.
We constantly iterate on our workplace experience with the goal of ensuring it encourages a balanced life. From excellent office facilities through to extensive manager support, we strive to support our people and their needs as effectively as possible.
As an AGI Safety Manager within the ReDI team, you’ll use your expertise to deliver impactful work through direct collaboration on groundbreaking research projects and to help develop the broader governance ecosystem at Google DeepMind.
In your role, you will support the operation of the AGI Safety Council by producing analyses and reports that inform decision making. The AGI Safety Council is concerned with extreme risk from AGI, whether from misalignment, misuse, or structural risks.
Your role will be broad and cross-cutting, requiring a variety of skills. As the first operational hire supporting the AGI Safety Council, you will help define the role and mode of operation of the committee. Synthesising and producing research ideas, prioritising effectively, and building trusted relationships are critical skills for this role.
The responsibilities include:
In order to set you up for success as an AGI Safety Manager at Google DeepMind, we look for the following skills and experience:
In addition, the following would be an advantage:
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
Website: https://deepmind.com/
Headquarters Location: London, England, United Kingdom
Employee Count: 501-1000
Year Founded: 2010
IPO Status: Private
Last Funding Type: Series A
Industries: Artificial Intelligence (AI) ⋅ Business Development ⋅ Machine Learning