Cupertino, CA, US

We are looking for a Senior/Staff Software/Platform Engineer who can design and code critical components of a platform that will interconnect Big Data pipelines for 15+ teams and hundreds of engineers. Our customer is one of the world's largest technology companies, based in Silicon Valley with operations all over the world. The ideal candidate is a driven, enthusiastic, and technology-proficient software engineer with strong experience designing and coding Kubernetes and event-messaging systems. The candidate will need to quickly design, build, and extend services from scratch.

Responsibilities:

  • Design/architect and code components of a data platform that supports big data pipelines for hundreds of engineers

  • Drive, and remain responsible for, end-to-end development of specific components

  • Contribute to project discussions, collaborate directly with the architect team, and present results to key stakeholders

  • Design, build, and continuously enhance the project codebase
  • Write detailed design documentation, present design decisions, and justify them

  • Work in a team of industry experts using cutting-edge Big Data technologies to develop solutions for deployment at massive scale

  • Set coding and deployment best practices

Requirements:

  • 6+ years of experience designing and coding platform solutions for Big Data pipelines

  • 3+ years of experience working with event-messaging systems; Kafka is a big plus

  • 2+ years of experience coding and deploying services running on Kubernetes
  • Strong Java knowledge; Scala is a plus

  • Understanding of microservices and how to architect/design scalable solutions on Kubernetes
  • Strong understanding of the challenges in building end-to-end big data pipelines for a large variety of use cases at scale

  • Strong communication skills

What would be a big plus:

  • Worked with Cassandra
  • Understanding of the challenges of working with many disparate big data technologies

  • Understanding of streaming pipelines and their challenges

  • Worked with big data pipelines at terabyte/petabyte scale

  • Worked with HDFS

  • Understanding of how to run Spark on Kubernetes

  • Experience working with Big Data scheduling technologies and their APIs, such as Airflow

  • Experience with JVM build systems (Gradle, Maven)

We offer:

  • Opportunity to work on bleeding-edge projects
  • Work with a highly motivated and dedicated team
  • Competitive salary
  • Flexible schedule
  • Medical insurance
  • Benefits program
  • Corporate social events

NB:

Placement and staffing agencies need not apply. We do not work on a C2C basis at this time.

At this moment, we are not able to process H1B transfers.

About us:

Grid Dynamics is an engineering services company known for transformative, mission-critical cloud solutions in the retail, finance, and technology sectors. We have architected some of the busiest e-commerce services on the Internet and have never had an outage during peak season. Founded in 2006 and headquartered in San Ramon, California, with offices throughout the US and Eastern Europe, we focus on big data analytics, scalable omnichannel services, DevOps, and cloud enablement.

Don’t see the right opportunity?

Contact us anyway and let's talk! To apply, send your resume and cover letter to jobs@griddynamics.com

Grid Dynamics is an equal opportunity employer. We are committed to creating an inclusive environment for all employees during their employment and for all candidates during the application process. All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on, age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category. All employment is decided on the basis of qualifications, merit, and business need.

Grid Dynamics Privacy Policy and E-verify