Safe Intelligence is on a mission to make AI safe and reliable for anyone to use. To help us succeed, our team is looking for Machine Learning Specialists, and we’re hoping it’s you! In this role, you’ll play a leading part in both helping customers and driving our product forward, applying advanced validation techniques to machine-learning models. The role has a specific focus on tabular data models used in finance and insurance, but there will be plenty of opportunity to branch out to other model types and industries.
The role has a significant customer- and user-facing element: working on real-world problems to help ML teams (R&D and product teams within other organizations) improve the quality of their models. In addition, you will work closely with the Safe Intelligence R&D team to improve the company’s tools based on the challenges you see in your domains of expertise. This can range from providing input into the product to working on the product itself.
Previous knowledge of Machine Learning Verification isn’t required, but solid knowledge of existing testing practices, metrics, and training and validation methods is extremely valuable.
We’re looking forward to having you on board!
Responsibilities
As a Safe Intelligence Machine Learning Specialist, you will:
- Work closely with customers and end-users to understand their ML models and help them assess performance. Generally, these will be R&D and product teams at customer organisations.
- Implement prototypes, use cases, and solutions that apply the algorithms developed at Safe Intelligence to address user-specific problems, particularly for tabular data in Finance and Insurance.
- Conduct experiments to evaluate various approaches and weigh their respective trade-offs.
- Coordinate with the research and platform teams to guide future development based on use-case-specific challenges.
- Contribute to the development of an efficient and scalable package for performing verification and robust learning.
Qualifications
The technical requirements for the role are:
- Experience in training, evaluating, and deploying Machine Learning models in the Finance or Insurance industries.
- Experience talking to stakeholders in these industries to understand their requirements and guiding them through what is and is not possible or desirable in a model.
- Familiarity with Python and the packages widely used in data science and machine learning, such as NumPy, pandas, scikit-learn, TensorFlow, and PyTorch.
- Familiarity with the machine-learning model types commonly used for tabular data.
- Fluency in validation and evaluation frameworks and metrics for machine learning, such as accuracy, recall, and F1 score.
- An in-depth understanding of Neural Networks and Decision Trees, enabling you to train such models to high performance and to modify or tune their architectures to meet given constraints.
Additional beneficial experience includes:
- Familiarity with best practices in Machine Learning workflows and MLOps tools.
- Technical experience in developing non-ML solutions in the Finance or Insurance industries.
At a personal level, we’re also looking for someone who is:
- Passionate about helping engineering teams achieve their AI and ML goals.
- Excited about interacting with others and digging in to help solve their problems collaboratively.
- Technically minded and constantly learning.
- Able to communicate clearly and efficiently with a variety of audiences, including developers, customers, researchers, partners, and executives.
- Fearless in getting "hands-on" with technology and execution.
- Grounded in a strong understanding of modern software engineering processes.
- Comfortable with ambiguity, with a drive for clarity.
- Collaborative with and respectful of others on the team.
- Honest, straightforward, and caring about others’ well-being.
Why Safe Intelligence is for you:
We strongly believe AI can bring great benefits to individuals and society, but these will only be achieved if the systems we build are safe to use. To meet this need, we are developing advanced deep validation techniques and tools that allow AI/ML engineers worldwide to validate the robustness of their models and repair the fragilities they discover.
By joining us, you’ll help advance these techniques, bring advanced technologies to AI/ML engineers worldwide, and contribute to our shared mission to realise successful and reliable AI.
Grow with us!
If you think you can bring something special to this role, please apply even if you do not meet all listed criteria. Safe Intelligence is exploring uncharted waters, and finding the right crewmates is important to us. We support ongoing learning for the whole team, ranging from individual mentorship to internal seminars and support for sector and technology-specific upskilling.
Compensation & Benefits
Safe Intelligence provides competitive compensation based on role and candidate experience. Our salary guidance range for this role is:
- 55,000–90,000 GBP per annum, plus stock options
In addition, company benefits for all roles include:
- Stock option benefits
- Mentoring, learning, and development allowance
- Regular team social and work events
- Flexible and generous holidays. We work hard and encourage everyone to take time off to recharge and enjoy other aspects of our lives.
Equality and Inclusion
We are proud to be an equal opportunity employer and work hard to create an environment where people of diverse backgrounds and life experiences can thrive. The team is highly collaborative and meritocratic. Great ideas come from everywhere, and we strive to make it easy for people to express themselves and be heard.
Location & Office Culture
Safe Intelligence is based in London, UK, and we’re focused on building the initial team here. We highly value the ability to work flexibly and remotely at times, but we also have a strong belief that regular in-office interactions make for a much more fulfilling and productive work experience.
Our company culture combines optimism for the future (hard problems can be solved with the right effort), speed of iteration (the best ideas come from many ideas tested), and rigour in what matters (correctness and precision are critical for safety).
Come and join us to add your skills and passion to the future of Safe Artificial Intelligence!
How to apply
Find us on LinkedIn and submit your application for this role. If you have any questions, please feel free to email .
Not ticking every box on our list? If you don’t meet all the criteria but feel you have something special to bring to the table, we encourage you to apply anyway.