THE ROLE:


Are you passionate about building scalable data pipelines and optimizing data architectures? Do you thrive in a fast-paced environment where data-driven decision-making and real-time analytics are essential? Are you excited to collaborate with cross-functional teams to design and implement modern cloud-based data solutions? If so, you may be ready to take on the Senior Data Engineer role within our team.


As a Senior Data Engineer, you will play a key role in designing, building, and maintaining cloud-native data pipelines and architectures to support our composable digital platforms. You will collaborate with engineers, product teams, and analytics stakeholders to develop scalable, secure, and high-performance data solutions that power real-time analytics, reporting, and machine learning workloads.


This role requires deep expertise in data engineering, cloud technologies (Google Cloud Platform, with BigQuery and Looker preferred), SQL, Python, and pipeline orchestration and transformation tools (Dagster and dbt).
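
To give a concrete feel for how these pieces fit together, here is a minimal sketch of a Dagster asset that rebuilds a BigQuery summary table with the official Python client. It is illustrative only: the project, dataset, and table names (acme-analytics, raw.orders, marts.daily_orders_summary) are hypothetical, and in practice the SQL would usually live in a dbt model that Dagster orchestrates.

    from dagster import Definitions, asset
    from google.cloud import bigquery

    @asset
    def daily_orders_summary() -> None:
        """Rebuild a daily revenue summary from raw order events in BigQuery."""
        client = bigquery.Client(project="acme-analytics")  # hypothetical project
        sql = """
            CREATE OR REPLACE TABLE marts.daily_orders_summary AS
            SELECT
                DATE(created_at)  AS order_date,
                COUNT(*)          AS order_count,
                SUM(total_amount) AS revenue
            FROM raw.orders
            GROUP BY order_date
        """
        client.query(sql).result()  # block until the query job finishes

    defs = Definitions(assets=[daily_orders_summary])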

WHAT YOU'LL DO:

  • Design, build, and maintain scalable data pipelines and architectures to support analytical and operational workloads.
  • Develop and optimize ETL/ELT pipelines, ensuring efficient data extraction, transformation, and loading from various sources.
  • Work closely with backend and platform engineers to integrate data pipelines into cloud-native applications.
  • Manage and optimize cloud data warehouses, primarily BigQuery, ensuring performance, scalability, and cost efficiency (see the partitioned-load sketch after this list).
  • Implement data governance, security, and privacy best practices, ensuring compliance with company policies and regulations.
  • Collaborate with analytics teams to define data models and enable self-service reporting and BI capabilities.
  • Develop and maintain data documentation, including data dictionaries, lineage tracking, and metadata management.
  • Monitor, troubleshoot, and optimize data pipelines, ensuring high availability and reliability.
  • Stay up to date with emerging data engineering technologies and best practices, continuously improving our data infrastructure.
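
For the BigQuery management bullet above, one of the main cost-and-performance levers is loading into partitioned, clustered tables so queries scan less data. The sketch below shows this with the google-cloud-bigquery client; the bucket path, table ID, and column names are hypothetical placeholders.

    from google.cloud import bigquery

    client = bigquery.Client(project="acme-analytics")  # hypothetical project

    # Partition by event timestamp and cluster by customer to cut scan costs.
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,
        time_partitioning=bigquery.TimePartitioning(
            type_=bigquery.TimePartitioningType.DAY, field="event_ts"
        ),
        clustering_fields=["customer_id"],
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(
        "gs://acme-raw-events/2024-06-01/*.json",   # hypothetical bucket
        "acme-analytics.raw.events",                # hypothetical table
        job_config=job_config,
    )
    load_job.result()  # wait for the load job to finish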

WHAT WE'RE LOOKING FOR:

  • Strong proficiency in English (written and verbal communication) is required.
  • Experience working with remote teams across North America and Latin America, ensuring smooth collaboration across time zones.
  • 5+ years of experience in data engineering, with expertise in building scalable data pipelines and cloud-native data architectures.
  • Strong proficiency in SQL for data modeling, transformation, and performance optimization.
  • Expertise in Python for data processing, automation, and pipeline development (see the query sketch after this list).
  • Experience with cloud data platforms, particularly Google Cloud Platform (GCP).
  • Hands-on experience with Google BigQuery, Cloud Storage, and Pub/Sub.
  • Strong knowledge of ETL/ELT frameworks such as dbt, Dataflow, or Apache Beam.
  • Familiarity with workflow orchestration tools like Dagster, Apache Airflow, or Google Cloud Workflows.
  • Understanding of data privacy, security, and compliance best practices.
  • Strong problem-solving skills, with the ability to debug and optimize complex data workflows.
  • Excellent communication and collaboration skills.
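
As a small illustration of the SQL-plus-Python combination above, the sketch below issues a parameterized BigQuery query from Python. Query parameters avoid string interpolation, which keeps plans cache-friendly and guards against injection; the project and table names are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client(project="acme-analytics")  # hypothetical project
    sql = """
        SELECT customer_id, SUM(total_amount) AS lifetime_value
        FROM `acme-analytics.marts.orders`
        WHERE order_date >= @since
        GROUP BY customer_id
        ORDER BY lifetime_value DESC
        LIMIT 100
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("since", "DATE", "2024-01-01"),
        ]
    )
    for row in client.query(sql, job_config=job_config):
        print(row.customer_id, row.lifetime_value)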

NICE TO HAVES:

  • Experience with real-time data streaming solutions (e.g., Kafka, Pub/Sub, or Kinesis); see the consumer sketch after this list.
  • Familiarity with machine learning workflows and MLOps best practices.
  • Experience with BI and data visualization tools (e.g., Looker, Tableau, or Google Data Studio).
  • Knowledge of Terraform for Infrastructure as Code (IaC) in data environments.
  • Familiarity with data integrations involving Contentful, Algolia, Segment, and Talon.One.
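
For the streaming item above, here is a minimal Pub/Sub consumer sketch in Python. The project ID and subscription name are hypothetical, and a production pipeline would route messages into BigQuery or Dataflow rather than printing them.

    from concurrent import futures
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path("acme-analytics", "raw-events-sub")  # hypothetical

    def handle(message) -> None:
        # A real pipeline would validate the event and route it onward
        # (e.g., a BigQuery streaming insert) instead of printing it.
        print(message.data.decode("utf-8"))
        message.ack()

    streaming_pull = subscriber.subscribe(sub_path, callback=handle)
    try:
        streaming_pull.result(timeout=30)  # consume for 30 seconds
    except futures.TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()  # block until shutdown completes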