• Competitive
  • Raleigh, NC, USA
  • Permanent, Full-time
  • Credit Suisse
  • 16 Jan 19

System Analyst/Big Data Engineer #122021

We Offer
The Treasury Global MI team works on the critical Liquidity Risk and regulatory reporting aspects of the LRP program. The team consists of developers, IT BAs and testers based in NY, Raleigh, Wroclaw and Pune. We use Agile methodologies, with Scrum as the framework for project deliveries.
  • An interesting and challenging System Analyst/Big Data Engineer role in a dynamic, international team of Agile developers, developing key regulatory and risk applications used for Liquidity Management and Reporting by Credit Suisse Treasury
  • You take an active part in our transformation into a more data- and analytics-driven bank
  • You liaise closely with Architects, Business Analysts and Change Partners to understand functional requirements and develop technical solutions
  • You design and build data pipelines to ingest, integrate, standardize, clean and publish data
  • You are responsible for defining and crafting integration patterns with downstream applications
  • You provide input into overall IT architecture of the solution, including technical environment configuration, data modelling, data integration, data presentation, information security and other IT architecture related topics
  • You support Data Scientists in their day-to-day work on a variety of topics, including tools and data analysis methods

Credit Suisse maintains a Working Flexibility Policy, subject to the terms as set forth in the Credit Suisse United States Employment Handbook.

You Offer
  • You have 3+ years of overall IT experience as a Big Data Engineer and System Analyst
  • You have 2+ years of System Analyst experience: understanding IT systems, converting business requirements into functional requirements, interacting with business stakeholders, and supporting testing and production release activities
  • You have proven work experience of at least 3 years as a Big Data Engineer using Big Data technologies and Cloudera Hadoop components such as HDFS, Sentry and HBase
  • Hands-on experience with Hadoop-based analytical solutions, ideally on the Cloudera distribution, as well as practical skills in creating data processing pipelines using Spark, Scala, Python, Sqoop or similar tools
  • Good knowledge of Big Data querying tools such as Pig, Hive and Impala
  • You are able to work in a dynamic environment, are comfortable with changing priorities and tight deadlines, and manage your own workload with minimal supervision
  • You have a deep understanding of DevOps and Agile software development methodologies
  • Degree from an accredited university, preferably in Computer Science or a related discipline, or comparable industry experience

For more information, visit Technology Careers