Description:
This is a contract Data Engineer Consultant role at (#ADS), located in Lahore with flexibility for remote work. We are seeking an experienced Senior Data Engineer with expertise in Azure Data Factory, Azure Synapse, Databricks, Apache Spark, Data Build Tool (DBT), Power BI, and other data engineering tools and frameworks. The ideal candidate will have deep technical knowledge, hands-on experience building and optimizing large-scale data pipelines, and a solid understanding of cloud infrastructure. You will work closely with cross-functional teams to design, implement, and maintain high-volume data solutions.
Key Responsibilities:
- Work hands-on with the infrastructure underpinning large-scale data projects.
- Mentor junior data engineers and provide technical leadership across the team.
- Design, build, and maintain large-scale data pipelines using Azure Data Factory, Databricks, DBT, and other tools in our data engineering stack.
- Collaborate with data scientists, analysts, and software engineers to build data and domain models that drive business insights and analytics; a strong command of Power BI is expected.
- Optimize data pipelines and Databricks workflows for efficiency, scalability, and reliability.
- Implement and manage ETL/ELT processes, ensuring data quality and integrity across all systems.
- Develop and maintain data architecture documentation and best practices.
- Lead initiatives around automation, CI/CD for data pipelines, and infrastructure as code (IaC) to ensure the robustness of data environments.
- Troubleshoot performance issues in data pipelines and distributed systems.
- Apply Test-Driven Development (TDD), writing unit tests, system integration tests, and data quality tests.
Core Skills & Experience:
- 5+ years of experience in data engineering or a related role.
- Proficient in building scalable data pipelines using Azure Data Services.
- Strong knowledge of and experience with DBT for transforming data within a modern data stack.
- Solid understanding of ETL/ELT processes and data warehousing concepts.
- Solid experience with large-scale OLAP/OLTP databases.
- Hands-on experience with version control systems like Git and CI/CD tools.
- Strong programming skills in Python.
- Excellent problem-solving abilities and experience in troubleshooting complex cloud architectures.
- Strong understanding of cloud architecture (preferably Microsoft Azure).
- Experience with data governance, security, and data observability frameworks (e.g., Elementary or Anomalo).