Software Engineer - Data Platform
Requirements
• 5+ years in software/data engineering or infrastructure roles
• Strong Python skills (backend APIs a plus)
• Proven ability to build scalable data pipelines from scratch
• Hands-on experience with Apache Iceberg/Delta Lake and Snowflake/Databricks
• Workflow orchestration expertise (Airflow, Luigi, etc.)
• Experience with big data frameworks (Spark, Hadoop)
• Familiarity with monitoring/analytics tools (Prometheus, Grafana, ELK, Datadog)
• Skilled in designing scalable, reliable, cost-efficient systems
• Experience with large-scale distributed data architectures
• Thrives in fast-paced startup environments
• Excellent problem-solving, communication, and customer-facing skills
• Hands-on experience with Terraform or other infrastructure-as-code tools
• Familiarity with security and privacy best practices in data processing pipelines
• Exposure to cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes)
Responsibilities
• Build backend APIs and scalable data pipelines using Python.
• Work with modern data lakehouse/warehouse technologies such as Iceberg, Delta Lake, Snowflake, and Databricks.
• Orchestrate workflows with Airflow or similar tools to run and optimize big data frameworks like Spark and Hadoop.
• Manage infrastructure as code with Terraform, with monitoring and logging in place for reliability.
• Collaborate across teams and with customers on complex data challenges, designing integration solutions.
• Drive best practices for scalability, reliability, and cost efficiency of systems.
• Work at the intersection of science and engineering to turn research into enterprise workloads handling exabyte-scale datasets.
Benefits
• Fundamental Research Meets Enterprise Impact. Work at the intersection of science and engineering, turning foundational research into deployed systems serving enterprise workloads at exabyte scale.
• AI by Design. Build the infrastructure that defines how efficiently the world can create and apply intelligence.
• Real Ownership. Design primitives that will underpin the next decade of AI infrastructure.
• High-Trust Environment. Deep technical work, minimal bureaucracy, shared mission.
• Enduring Horizon. Backed by NEA, Bain Capital, and various luminaries from tech and business. We are building a generational company for decades, not quarters or a product cycle.
• Competitive salary, meaningful equity, and substantial bonus for top performers
• Flexible time off plus comprehensive health coverage for you and your family
• Support for research, publication, and deep technical exploration

At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring. Join us to build the foundational data systems that power the future of enterprise AI!