About Your Job:
In this role, you will draw on your deep expertise in building cloud data lake architectures to craft, implement, and operate a stable, scalable, and cost-conscious cloud data lake platform that offers business users a highly scalable data and analytics platform. You are responsible for setting up standards and processes for data cataloguing, data semantics, data authorization, data security and governance, and for supporting data scientists and business analysts in accessing Data as a Service from the platform. You should have excellent business and interpersonal skills and be able to work with business owners to understand data requirements.
Architect, design, construct, install, test, and maintain highly scalable and optimized data pipelines with state-of-the-art monitoring, building on the established baseline pipeline architecture.
Architect and build the optimal data extraction and transformation mechanisms for various kinds of data, and select the optimal technology for storing data in the data lake based on size, complexity, and the needs of business teams.
Architect the solution to ensure an optimal maintenance process for data in the lake, covering real-time needs, data analysis needs, and data archiving/long-term data retention.
Bring together large, complex, and sparse data sets to meet functional and non-functional business requirements, using a variety of languages, tools, and frameworks to combine data.
Design and implement data tools for analytics and data science team members to help them build, optimize, and tune use cases.
Architect, design and build data warehouse solutions.
Establish data access guidelines covering data cataloguing, data semantics, data security, and data governance where needed.
Provide technical leadership to the data engineering team and review and contribute to the artefacts delivered, including technical architecture, functional and non-functional requirements, interface specifications, and high-level design documents.
Be proficient in architectural patterns and ensure adherence to QR-IT Software Governance Integration standards.
Lead root cause analysis of reported critical incidents and recommend and ensure implementation of effective preventive actions to avoid recurrence.
Review functional and non-functional requirements for the assigned deliveries and recommend or develop technology frameworks.
Drive high performance and accountability for superior results and employee engagement. Provide staff with timely, candid and constructive performance feedback; develop employees to their fullest potential and provide challenging opportunities that enhance employee career growth; recognize and reward employees for accomplishments.
Promote, train on, support, and advocate data architecture best practices such as master data management, data modeling, and data modernization.
Establish the department's or team's objectives and priorities to align with and support business objectives.
Regularly evaluate the department's or team's objectives, plans, procedures, and practices, and make appropriate changes if needed.
Recruit, train, and develop team members to create a high-quality data engineering capability.
The applicant should have a Bachelor's degree or equivalent (a degree in engineering, computer applications, commerce, or business administration). You must have a minimum of 8 years of data engineering experience, excellent verbal and written communication skills, good analytical and interpersonal skills, and be a proven team player.
Overall 8+ years of data engineering experience, with a minimum of 5+ years of hands-on experience and proficiency in big data technologies such as Hadoop/Hive, Hyperscale PostgreSQL, Java/Scala, Spark, Kafka, SQL and NoSQL, Python, and Azure cloud-based data engineering solutions (e.g., Azure Data Factory, Azure Data Lake Store, Azure Databricks, Azure HDInsight).
Experience leading teams that own very large data warehouses or data lakes.
Hands-on experience with data ingestion tools (e.g., Striim, StreamSets, NiFi).
Authoritative in ETL optimization, with hands-on experience with the ETL tool Informatica.
Hands-on experience in architecting data and analytics solutions.
Hands-on experience in data modelling, data visualization, and pipeline design and development.
Hands-on experience with data warehouse platforms (e.g., Snowflake, Azure Data Lake Analytics).
Strong solution design skills to build scalable, resilient, and sustainable solutions that address business requirements.
Designing, coding, and tuning big data processes using Apache Spark or similar technologies.
Experience with building data pipelines and applications to stream and process datasets at low latencies.
Demonstrate efficiency in handling data – tracking data lineage, ensuring data quality, and improving discoverability of data.
Hands on experience in setting up data governance frameworks.
Very good command of the English language.
Experience with Elasticsearch.
Experience with the cloud-based data warehousing system Snowflake.
Experience with the data virtualization and semantic layer tool Dremio.
Experience with visualization tools such as Tableau and Power BI.
Knowledge of the airline domain.
Knowledge of agile/lean development methodologies.