Data Engineer, Group Consumer Banking and Big Data Analytics Technology, Technology and Operations
Group Technology and Operations (T&O) enables and empowers the bank with an efficient, nimble and resilient infrastructure through a strategic focus on productivity, quality & control, technology, people capability and innovation. In Group T&O, we manage the majority of the Bank's operational processes and aspire to delight our business partners through our multiple banking delivery channels.
Responsibilities
- The position exists to continuously enhance the efficiency of existing data pipeline jobs and frameworks, and to establish best practices for Big Data projects planned on the BI & Analytics platform.
- Review and tune code for the Spark framework to increase throughput.
- Establish best practices and guidelines for tech teams to follow when writing code in Spark.
- Create a framework to continuously monitor performance and identify resource-intensive jobs.
- Write code, services and components in Java, Apache Spark and Hadoop.
- Responsible for systems analysis, design, coding, unit testing, CI/CD and other SDLC activities.
Requirements
- Proven experience working with the Apache Spark streaming and batch frameworks.
- Proven experience in performance tuning of Java- and Spark-based applications is a must.
- Knowledge of working with different file formats such as Parquet, JSON and Avro.
- Data warehouse experience working with RDBMS and Hadoop; well versed in the concepts of change data capture (CDC) and SCD implementation.
- Hands-on experience writing code to efficiently handle files on Hadoop.
- Ability to work on projects following a microservices-based architecture.
- Knowledge of S3 will be a plus.
- Ability to work proactively, independently and with global teams.
- Strong communication skills: able to communicate effectively with stakeholders and present the outcomes of analysis.
- Working experience in projects following Agile methodology is preferred.
- 3 to 9 years of working experience, preferably in a banking environment.
- Expert knowledge of technologies such as Hadoop, Hive, Presto, Spark and Java, RDBMSs such as Teradata, and file storage such as S3 is necessary.
- The candidate must speak and write well.
- Experienced with Hadoop-ecosystem frameworks such as Hive and Spark, and with Java.
- Strong knowledge of relational database systems and data warehousing techniques.
- Strong listening and communication skills, with the ability to clearly and concisely explain complex technical issues.
We offer a competitive salary and benefits package and the professional advantages of a dynamic environment that supports your development and recognises your achievements.