As a Technical Data Analyst, you will play a key role in designing, developing, and optimizing data processes within cloud-based ecosystems, primarily leveraging Spark, Scala, and GCP. You will collaborate closely with data engineering and platform teams to ensure efficient data modeling, processing, and pipeline performance within the SDDA/CCO environment.
This position requires solid technical skills, analytical thinking, and the ability to work in complex, distributed data architectures.
Develop and optimize data processing pipelines using Spark and Scala.
Work within GCP-based data platforms, implementing scalable and efficient data solutions.
Contribute to analysis, design, and documentation within the SDDA/CCO development ecosystem.
Collaborate with cross-functional teams to understand data requirements and deliver robust analytical solutions.
Ensure data quality, performance, and reliability across all processing flows.
Participate in continuous improvement of data workflows and best practices.
4+ years of experience in data engineering or technical data analysis.
Strong hands-on experience with Spark (batch and/or streaming) and Scala.
Familiarity with GCP environments, especially data-focused services.
Experience working in structured development frameworks such as SDDA or CCO.
Ability to operate in complex, large-scale data ecosystems.
Prior experience in the CCO program (highly valued).
Knowledge of modern data governance and metadata management practices.
Exposure to CI/CD workflows and automated data pipelines.