
Data Engineer (Azure Ecosystem)

Ranking: 37

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust data pipelines and applications within the Azure ecosystem. You will work closely with cross-functional teams to enable seamless data flow, optimize processing efficiency, and implement scalable architectures tailored to business needs.

This role requires a strong foundation in Python, distributed computing, and Azure data services.

Key Responsibilities

  • Design, build, and maintain large-scale data processing pipelines using Spark and Azure technologies.

  • Develop data-driven applications with a focus on performance, scalability, and reliability.

  • Implement and optimize ETL/ELT workflows within Azure Synapse, Data Factory, and related services.

  • Work with stakeholders to understand data requirements and translate them into efficient engineering solutions.

  • Ensure data quality, governance, and compliance across all data processes.

  • Troubleshoot production pipelines, monitor performance, and apply optimizations when necessary.

  • Collaborate with analytics, cloud, and product teams to enable end-to-end data delivery.

Required Skills

  • Advanced Python, including PySpark and Pandas

  • Strong understanding of Spark and distributed data processing concepts

  • SQL expertise

  • Hands-on experience with:

    • Azure Synapse Analytics

    • Azure Data Factory

    • Azure Data Services (core components)

Nice to Have

  • Experience with Azure Functions, Azure Virtual Machines, and Azure DevOps

  • Familiarity with CI/CD pipelines and Infrastructure as Code (ARM templates)

  • Knowledge of data modeling, ETL frameworks, and data governance best practices
