Data Engineer
Compunnel Inc. - Richmond, VA
Job Description
As a Data Engineer, you will be responsible for developing complex data sources and pipelines into our data platform (i.e., Snowflake), along with other data applications (e.g., Azure, Terraform), automation, and innovation.

Primary Responsibilities:
• Create and maintain data pipelines using Azure and Snowflake as primary tools
• Create SQL stored procedures and functions to perform complex transformations
• Understand data requirements and design optimal pipelines to fulfill the use cases
• Create logical and physical data models to ensure data integrity is maintained
• Manage code and build automated CI/CD pipelines using GitHub and GitHub Actions
• Tune and optimize data processes
• Design and build best-in-class processes to clean and standardize data
• Deploy code to the production environment and troubleshoot production data issues
• Model large-volume datasets to maximize performance for our Business Intelligence and Data Science teams

Qualifications

Required Qualifications:
• Bachelor's degree in Computer Science or similar
• 1-4 years of industry experience as a hands-on data engineer
• Excellent communication skills, verbal and written
• Excellent knowledge of SQL
• Excellent knowledge of Azure services such as Blobs, Functions, Azure Data Factory, Service Principal, Containers, Key Vault, etc.
• Excellent knowledge of Snowflake: architecture, features, and best practices
• Excellent knowledge of data warehousing and BI solutions
• Excellent knowledge of change data capture (CDC), ETL, ELT, SCD, etc.
• Hands-on experience with the following technologies:
  o Developing data pipelines in Azure and Snowflake
  o Writing complex SQL queries
  o Building ETL/ELT data pipelines using SCD logic
  o Query analysis and optimization
• Analytical and problem-solving experience applied to Big Data datasets
• Data warehousing principles, architecture, and their implementation in large environments
• Experience working on projects with Agile/Scrum methodologies and high-performing teams
• Knowledge of data modeling techniques such as Star Schema, dimensional models, and Data Vault is an advantage
• Experience with code lifecycle management and repositories such as Git and GitHub
• Exposure to DevOps methodology
• Good understanding of access control and data masking
Created: 2024-10-13