
Technical Data Analyst (Spark/Scala – GCP)

Ranking: 38

Job Description

As a Technical Data Analyst, you will play a key role in designing, developing, and optimizing data processes within cloud-based ecosystems, primarily leveraging Spark, Scala, and GCP. You will collaborate closely with data engineering and platform teams to ensure efficient data modeling, processing, and pipeline performance within the SDDA/CCO environment.

This position requires solid technical skills, analytical thinking, and the ability to work in complex, distributed data architectures.

Responsibilities

  • Develop and optimize data processing pipelines using Spark and Scala.

  • Work within GCP-based data platforms, implementing scalable and efficient data solutions.

  • Contribute to analysis, design, and documentation within the SDDA/CCO development ecosystem.

  • Collaborate with cross-functional teams to understand data requirements and deliver robust analytical solutions.

  • Ensure data quality, performance, and reliability across all processing flows.

  • Participate in continuous improvement of data workflows and best practices.

Required Skills

  • 4+ years of experience in data engineering or technical data analysis.

  • Strong hands-on experience with Spark (batch and/or streaming) and Scala.

  • Familiarity with GCP environments, especially data-focused services.

  • Experience working in structured development frameworks such as SDDA or CCO.

  • Ability to operate in complex, large-scale data ecosystems.

Nice-to-Have

  • Prior experience in the CCO program (highly valued).

  • Knowledge of modern data governance and metadata management practices.

  • Exposure to CI/CD workflows and automated data pipelines.
