Sr. Data Engineer

  WORK FROM HOME

Job title: Sr. Data Engineer

Company: Thermo Fisher Scientific

Job description:

How will you get here?

* Solid experience in data analytics, data quality, MDM, data migrations & data integrations
* Ability to work independently and as a member of a multi-functional team
* Passionate about applications, data analytics, and end-user efficiency
* Exceptional focus on our business counterparts & customers
* Desire to learn from other team members about technology in their areas of expertise
* Experience with the Agile project management framework (Jira toolset)
* Master's degree in computer science/engineering from an accredited university (desired)
* 4-year degree with a major in computer science/engineering (or equivalent) from an accredited university (preferred) will substitute for a minimum of 5 years' professional IT experience
* Experience with data lakes, analytics, and Oracle, SQL Server, or AWS Redshift-type databases
* Experience in ETL/ELT (data extraction, data transformation, and data load processes)
* Solid experience in data visualization
* 3-5 years of proven track record in enterprise-level ETL development & architecture in Informatica PowerCenter 9.x
* 2-4 years of proficiency with Informatica Data Quality (IDQ)
* 2-4 years on MDM-related projects
* 3-5 years of SQL/PLSQL & Microsoft SQL Server experience
* 3-5 years of data profiling / data analysis experience

Tool & Technology Skills

* 8+ years of experience in data lake, data analytics & business intelligence solutions
* Solid experience with ETL tools, preferably Informatica & AWS Glue
* 6-8 years of experience in enterprise-level ETL development & architecture in Informatica PowerCenter 9.x/10.x and Informatica Cloud environments
* Solid experience with data lakes using AWS Databricks, Apache Spark & Python
* 2+ years of working experience in a DevOps environment, data integration, and pipeline development
* 2+ years of experience with AWS Cloud data integration using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems
* 2-3 years of project management experience (Agile/Waterfall)
* Proven skill and ability in the development of data warehouse projects/applications (Oracle & SQL Server)
* Strong real-life experience in Python development, especially PySpark in an AWS Cloud environment
* Experience with Python and common Python libraries
* Strong analytical experience with databases: writing sophisticated queries, query optimization, debugging, user-defined functions, views, indexes, etc.
* Strong Informatica technical knowledge of the Informatica Designer components: Source Analyzer, Warehouse Designer, Mapping Designer & Mapplet Designer, Transformation Developer, Workflow Manager, Workflow Monitor, and IDQ 9.x and above
* Extensive experience using IDQ for customer cleansing and standardization in an Oracle/Siebel UCM environment
* Extensive experience using Informatica Data Profiling for customer data & product data profiling analysis
* Understands reusability, parameterization, workflow design, and ETL
* Counsels team members on the evaluation of data using the Informatica Data Quality (IDQ) toolkit; applies the data analysis, cleansing, matching, exception handling, reporting, and monitoring capabilities of IDQ
* Performs technical walk-throughs as needed to communicate design/coded solutions and to seek input from team members
* Develops ETL processes using Informatica, including ETL control tables, error logging, auditing, data quality, etc.
* Must have good communication skills
* Experience with source control systems such as Git and Bitbucket, and with Jenkins build and continuous integration tools
* Knowledge of extract development against ERPs (SAP, Siebel, JDE, BAAN) preferred
* Solid grasp of AWS data lakes and Databricks
* Experience with SAP ERP applications, data, and processes desired
* Exposure to AWS Data Lake, AWS Lambda, AWS S3, Kafka, Redshift, and SageMaker would be an added advantage
* Exposure to data visualization tools like Power BI, Tableau, etc.
* Functional knowledge in the areas of Sales & Distribution, Material Management, Finance, and Production Planning is desirable

Knowledge, Skills, Abilities

* Highly dedicated and execution-focused, with a willingness to do 'what it takes' to deliver results, as you will be expected to rapidly cover a considerable volume of data integration demands
* Critical thinking: think big picture; set priorities aligned with major goals; encourage innovation by backing good people who take thoughtful risks
* Critical thinking: question conventional wisdom by identifying the assumptions that drive action or inaction; strive to inject independent thinking that checks biases and promotes action and decision-making
* Communication: communicate effectively in ways that reach audiences with ease, clarity, and visibility: one-to-one, small group, full staff, email, social media, and, of course, listening; provide interpersonal support for relationship development to cultivate partnerships, establish relationships, and promote teamwork to nurture and strengthen a network for the exchange of ideas
* Problem-solving: analytical

Expected salary:

Location: Bangalore, Karnataka

Job date: Sat, 14 May 2022 22:29:58 GMT
