Blue chip company / Contract / Remote / 6+ months (long-term project) / Start: ASAP / Inside IR35

The Person

This role is for a machine learning engineer whose key focus is scalable software engineering for model deployment and monitoring. You will be well versed in designing and building big data pipelines with machine learning workloads that are repeatable and scalable for extremely large datasets.
- Deploying the latest NLP techniques, such as Transformer models, in production, with awareness of the challenges involved
- Creating performance metrics and tracking processes to measure the effectiveness of Data Science solutions
- Conceptualising necessary data governance models to support the technical solution and assure the veracity of the data
- Working collaboratively with other members of the Data Science, Data Engineering and Information Architecture teams to innovate and create compelling data-centric stories and experiences
- Proficient with programming languages used on big data platforms, such as Python, R and Scala
- Knowledge of at least one of the mainstream deep learning frameworks, such as PyTorch or TensorFlow
- Understanding of software development best practices
- GCP services such as Dataflow, Composer, BigQuery and Vertex AI, or the equivalent services on other cloud platforms
- MLOps tooling - MLflow, Kubeflow, BentoML or similar
- Productionising machine learning pipelines with Apache Beam and Apache Airflow
- Track record of staying conversant with new analytics technologies, architectures and languages - where necessary - for storing, processing and manipulating data at scale
- Demonstrated Data Science consultancy skills, e.g. running hypothesis workshops, mentoring more junior team members, preparing reports and presenting data science results
- Skilled at communicating with a variety of stakeholders across the organisation
- Planning and organisation skills to work in a high-performance team, handle demanding clients and multitask effectively in an agile way
- Team management experience preferred
5+ years of experience in AI, data science, data engineering and/or other technology-related capabilities in one or more industries. Experience in the Financial Services sector, in particular ESG analytics and risk management, is preferred.
BSc (ideally MSc or PhD) in Computer Science, Statistics, Engineering or similar technical field
A combination of one or more of the following:
- Proficient with programming languages like Python, R, Scala
- Proficient with Git, Linux, Docker
- Software Engineering best practices and Object-Oriented Programming
- Skills in big data technologies like Hadoop, HDFS, Spark, Apache Beam, Apache Airflow
- SQL and NoSQL databases
£630 (negotiable, Inside IR35). Long-term project with a view to extend.