Senior Data Engineer

Job title: Senior Data Engineer

Company: Thermo Fisher Scientific

Job description:

Job Description: Senior Data Engineer
Requisition ID:

When you're part of the team at Thermo Fisher Scientific, you'll do important work, like helping customers find cures for cancer, protecting the environment, or making sure our food is safe. Your work will have real-world impact, and you'll be supported in achieving your career goals.

Location/Division Specific Information

The Data Engineer III, based out of our Bangalore location, plays a key role in the Enterprise Data Platform (EDP) Operations organization, providing business continuity for critical business processes, IT systems and IT solutions through project implementations, enhancements, documentation and operational support.

How will you make an impact

As part of an organization that provides analytics-driven data solutions for all businesses across Thermo Fisher Scientific, you will be instrumental in helping our business partners and customers with their data and analytics needs.

What will you do

- Design and architect solutions for the Enterprise Data Platform.
- Apply strong knowledge of Data Operations, including but not limited to pipeline optimization, orchestration, and cloud services for data.
- Implement data observability (using tools like Monte Carlo) and monitoring and alerting (Splunk, Datadog).
- Communicate and translate the technical solution to the team and other stakeholders.
- Mentor and guide other data engineers on the team.
- Enhance and support solutions using PySpark/EMR, SQL and databases, AWS Athena, S3, Redshift, AWS API Gateway, Lambda, Glue, and other data engineering technologies.
- Measure environment and application performance with system and application log tools, and act on the findings to improve it.
- Define and follow agile development methodologies to deliver solutions and product features, applying DevOps, DataOps and DevSecOps best practices.
- Propose and continuously implement data load optimizations to improve the performance of data loads.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep the data separated and secure across multiple data centers and AWS regions.
- Be available for and participate in an on-call schedule to address critical operational incidents and business requests.

How will you get here

- Bachelor's degree in Computer Science with at least 10 years of experience in Data Engineering, Data Analytics and Business Intelligence solutions using AWS services and PySpark/EMR.
- Certifications such as AWS Certified Data Analytics, CCA Spark and Hadoop Developer, or CCP Data Engineer are highly desirable.

Experience & Skills:

- 10+ years of experience in Data Lake, Data Analytics and Business Intelligence solutions, with at least 4+ years as an AWS data engineer.
- Strong experience in full-lifecycle project implementation of Delta Lake and Data Lakes using Databricks, PySpark/EMR, Athena, S3, Redshift, AWS API Gateway, Lambda, Glue, and other managed services/tools.
- Strong hands-on experience orchestrating data pipelines with Airflow.
- Strong experience building ETL data pipelines using PySpark on the EMR framework.
- Hands-on experience with S3, AWS Glue jobs, S3 Copy, Lambda and API Gateway.
- Ability to troubleshoot SQL code; Redshift knowledge is an added advantage.
- Strong experience in DevOps and CI/CD using Git and Jenkins, and experience with cloud-native scripting such as CloudFormation and ARM templates.
- Hands-on experience with system and application log tools like Datadog, CloudWatch, Splunk, etc.
- Experience with Python and common Python libraries, including ML libraries, for data analysis, wrangling and insight generation.
- Experience using Jira for task prioritization, and Confluence and other tools for documentation.
- Strong analytical experience with databases: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
- Experience with source control systems such as Git and Bitbucket, and with Jenkins build and continuous-integration tools.
- Strong understanding of AWS Data Lake and Databricks.
- Exposure to Kafka, Redshift and SageMaker is an added advantage.
- Exposure to data visualization tools like Power BI, Tableau, etc.
- Functional knowledge in the areas of Sales & Distribution, Material Management, Finance and Production Planning is preferred.

Expected salary:

Location: Bangalore, Karnataka

Job date: Mon, 14 Nov 2022 23:51:56 GMT