Data Integration Engineer/ 100% Remote/ Only on W2 role
Stash Talent Services - Yonkers, NY
Job Description
Job Title: Data Integration Engineer
Duration: 12 months CTH
Location: Rochester, New York (100% Remote)

Top Skills Details
1) 5 years of experience in data, with a focus on data integration, data processing, data quality, data migration, and data governance.
2) Azure Cloud - Azure Data Lake, Databricks, Data Factory.
3) 3 years of experience with Oracle, SQL Server, and NoSQL databases (Oracle preferred).

Description
Client is looking to bring in an SME to help with their data orchestration, migration, and processing. This candidate will help source, extract, and manipulate large data sets to serve the analytics, artificial intelligence, and machine learning needs of the client's business.

Responsibilities
- Assemble large, complex data sets that meet business requirements.
- Deliver automation and efficient processes to ensure high-quality throughput and performance of the entire data and analytics platform.
- Create and maintain optimal data pipeline architecture.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Ensure data extraction, transformation, and loading meet data security and compliance requirements.
- Engage with data source platform leads to gain a tactical and strategic understanding of the data sources required for consumption.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work closely with architects, solution leads, data owners, data scientists, and key stakeholders to facilitate and coordinate the data platform backlog grooming process, triaging new feature requests in preparation for future project activities.
- Create data tools that assist data scientists in building and optimizing models for use in client products.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Ensure data quality complies with established SLAs.
- Provide guidance on data governance, security, and privacy.

Qualifications
- Bachelor's Degree in Computer Science, Data Analytics, Data Science, Engineering, or equivalent software/services experience - Required.
- Master's Degree in Computer Science, Data Analytics, Data Science, Engineering, or equivalent software/services experience - Preferred.
- 7 years of experience developing data platforms with large and complex datasets.
- 4 years of experience working with SQL, Databricks, Spark, and/or other big data technologies.
- 4 years of experience building and optimizing data pipelines, architectures, and data sets to answer specific business questions and identify opportunities for improvement.
- 2 years of experience with large-scale data processing and storage using Azure Data Factory, Integration Runtime, Data Lake, Databricks, Spark, Azure ML, Snowflake, SAS DataFlux.
- 4 years of proficiency with languages such as Python, SQL, PySpark, R.
- 2 years of experience with the privacy, compliance, and security aspects of data storage and processing.
- 2 years of experience delivering data solutions via Agile methodologies.
- 4 years of successful history manipulating, processing, and extracting value from large, disconnected datasets.
- 2 years of proficiency with software development and CI/CD methodologies and tools for automated infrastructure code; MLOps.
- 2 years of experience designing, implementing, and maintaining automation platforms and tools, including Ansible Tower, Azure ARM, Terraform Enterprise, Azure DevOps, and GitHub Actions.
- Excellent collaboration and team-building skills.
- Demonstrated problem-solving skills.
- Strong verbal communication and listening skills.
- High degree of initiative.
Created: 2025-01-16