Cloud Data Engineering
Full Time
Bangalore
Posted 3 days ago
| Role | Cloud Data Engineering |
| Experience | 6-8 Years |
| Educational Qualification | Bachelor’s degree in Computer Science, Engineering, or a related field. |
| Location | Bangalore |
| Technical Competencies | Python, SQL, Cloud Data Engineering, Azure and AWS |
Job Overview:
We are seeking a highly skilled and experienced Cloud Data Engineer to join our Digital Technologies team. The ideal candidate will have 6-8 years of experience designing, building, and maintaining cloud-based data solutions on the Azure and AWS platforms. This role requires expertise in data integration, pipeline development, and data modelling.
Key Responsibilities:
- Design, build, and optimize scalable, reliable, and efficient ETL/ELT pipelines for large data set processing.
- Automate data workflows to ensure real-time and batch data availability.
- Develop and manage data solutions using Azure services such as Data Factory, Synapse Analytics, Azure Databricks, and Azure SQL.
- Work with AWS services like Glue, Redshift, S3, Lambda, and Athena to create robust data solutions.
- Integrate structured, semi-structured, and unstructured data from various sources.
- Create and maintain data models that support analytics and reporting needs.
Collaboration and Stakeholder Engagement:
- Work closely with cross-functional teams, including Data Scientists, Analysts, and Software Engineers.
- Translate business requirements into technical specifications for data solutions.
Security:
- Manage IAM roles, policies, and encryption for data at rest and in transit.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 6-8 years of hands-on experience in cloud data engineering, with a focus on Azure and AWS.
- Proficiency in Python, SQL, and other scripting languages for data manipulation and automation.
- Experience with data orchestration tools such as Airflow and the relevant cloud components.
- Strong understanding of distributed computing and big data frameworks like Apache Spark and Hadoop.
- Knowledge of cloud data warehouses such as Synapse and Snowflake.
- Experience with CI/CD pipelines for data workflows using tools like Jenkins, GitHub Actions, or Azure DevOps.
- Certifications in Azure/AWS and Databricks.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Strong problem-solving skills and the ability to work independently or as part of a team.
Performance Optimization:
- Monitor and optimize the performance of cloud data pipelines and infrastructure.
- Debug and resolve data-related issues in a timely manner.
Job Features
| Job Category | Digital Technologies |