
ETL Support Consultant

Experience: 6-9 years
Posted: 06 Oct 2021
Location: United Arab Emirates

Very urgent requirement for an ETL Support Consultant (data sets experience). This is a long-term role with PARAM in the banking domain.



Summary of role:

 

Connect and model complex distributed data sets to build repositories such as data warehouses and data lakes, using appropriate technologies.

Manage data-related work ranging across small to large data sets; structured, unstructured, and streaming data; extraction, transformation, curation, and modelling; building data pipelines; identifying the right tools; and writing SQL, Java, and Scala code.


Responsibilities:

 

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (see the sketch after this list).
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Keep data secure.
  • Create data tools for analytics and data scientist team members.
  • Work with data and analytics experts to strive for greater functionality in data systems.
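
For illustration, here is a minimal sketch of the kind of batch ETL pipeline the responsibilities above describe, written in Scala with Apache Spark. The connection string, table, columns, and output path are hypothetical placeholders, not details taken from this role.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions._

// Minimal batch ETL sketch with Apache Spark (Scala).
// All source/target names below are hypothetical placeholders.
object TransactionsEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("transactions-etl")
      .getOrCreate()

    // Extract: pull raw rows from a relational source over JDBC.
    val raw = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//db-host:1521/CORE") // hypothetical
      .option("dbtable", "CORE.TRANSACTIONS")                 // hypothetical
      .load()

    // Transform: basic curation - drop incomplete rows, derive columns.
    val curated = raw
      .filter(col("amount").isNotNull && col("booking_date").isNotNull)
      .withColumn("booking_date", to_date(col("booking_date")))
      .withColumn("amount_aed", col("amount") * col("fx_rate")) // hypothetical columns

    // Load: write partitioned Parquet to a data-lake location (e.g. S3).
    curated.write
      .mode(SaveMode.Overwrite)
      .partitionBy("booking_date")
      .parquet("s3a://bank-datalake/curated/transactions/")   // hypothetical

    spark.stop()
  }
}
```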


Typical skills and background:

 

SKILLS:

 

  • Experience developing ETL workflows using Informatica BDM and PowerCenter
  • Knowledge and experience of OFSAA Data Model, Data Integration and Processes
  • Solid development skills in Java, Scala and SQL
  • Good knowledge of working with Hadoop-based services such as Hive, Impala, Kudu, HBase, Kafka, Flume, Sqoop, and Oozie (see the sketch after this list)
  • Clear hands-on mastery of big data systems: the Hadoop ecosystem, cloud technologies (AWS, Azure, Google), in-memory database systems (HANA, Hazelcast, etc.), and other database systems, both traditional RDBMS (Teradata, SQL Server, Oracle) and NoSQL databases (Cassandra, MongoDB, DynamoDB)
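
As a small illustration of the Hadoop-stack skills above, the sketch below uses Spark Structured Streaming in Scala to consume a Kafka topic and land the records as Parquet under a warehouse path that a Hive external table could point at. The broker address, topic name, schema, and paths are all assumptions for the example, and the spark-sql-kafka connector must be on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Hypothetical sketch: Kafka -> Spark Structured Streaming -> Parquet.
// Requires the spark-sql-kafka-0-10 connector on the classpath.
object PaymentsStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("payments-stream")
      .getOrCreate()

    // Assumed message schema; a real feed would define its own contract.
    val schema = new StructType()
      .add("payment_id", StringType)
      .add("amount", DoubleType)
      .add("currency", StringType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092") // hypothetical
      .option("subscribe", "payments")                    // hypothetical topic
      .load()
      .select(from_json(col("value").cast("string"), schema).as("p"))
      .select("p.*")

    // Land records as Parquet; a Hive external table can sit on this path.
    events.writeStream
      .format("parquet")
      .option("path", "/warehouse/payments/")             // hypothetical
      .option("checkpointLocation", "/checkpoints/payments/")
      .start()
      .awaitTermination()
  }
}
```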

 

EXPERIENCE AND QUALIFICATION:

 

  • Bachelor's degree in Computer Science or equivalent; Master's preferred
  • Minimum of 6 years' hands-on experience with a strong data background
  • Extensive experience working with Big Data tools and building data solutions for advanced analytics
  • Practical knowledge across data extraction and transformation tools: traditional ETL tools (Informatica, Alteryx) as well as more recent big data tools

Required Skills

Skill                                                            Years   Months
OFSAA Data Model, Data Integration and Processes                   7       0
Java                                                               7       5
Scala                                                              7       4
SQL                                                                7       5
Transformation tools (ETL tools: Informatica, Alteryx)             7       0
Informatica                                                        8       0
Hadoop (Hive, Impala, Kudu, HBase, Kafka, Flume, Sqoop, Oozie)     7       0
Big Data tools                                                     7       0
Hadoop ecosystem                                                   8       0
Cloud technologies (AWS, Azure, Google)                            7       0
In-memory data systems (HANA, Hazelcast)                           6       0
RDBMS (Teradata, SQL Server, Oracle)                               7       0