
Annual Salary: 20–28 Lakh/Year
Duration: Long-term
Location: Hyderabad, Telangana, India (Partially Remote)
Role: Data Engineer
Job Location: Hyderabad, Work from Office
Mode of employment: Permanent
Skills and Responsibilities:
- Strong programming skills in Python, Scala, or SQL, as well as PySpark.
- Architect and implement data pipelines using Apache Spark for efficient data processing and transformation.
- Develop and maintain data workflows using Apache Airflow to automate data pipeline orchestration and scheduling.
- Create and optimize SQL queries for data extraction, transformation, and loading (ETL) processes.
- Use Python for data manipulation, scripting, and automation.
- Manage and optimize data storage and processing on AWS EMR, Redshift, and Azure Synapse.
- Collaborate with teams to design and maintain Azure Data Factory (ADF) pipelines for data integration.
- Leverage DBT for data modelling and version control, ensuring data accuracy and consistency.
- Integrate data from diverse sources using Fivetran, ensuring seamless data ingestion.
- Continuously improve data quality, performance, and reliability across the data ecosystem.
- Monitor and troubleshoot data pipelines, ensuring data availability and integrity.
- Implement data security best practices and ensure compliance with data privacy regulations.

Job Type: Payroll
Must-have Skills:
- Python
- SQL
- Apache Spark
- Elasticsearch
- AWS EMR
- Apache Airflow
- Data Warehouse
