
Senior Data Platform Engineer (Python) (Contractor)

“The battery is the technology of our time.” – The Economist

Voltaiq is a Battery Intelligence software company. Our data platform brings unprecedented analytics, visualization, and predictive capabilities to any company with a battery-powered business model. World-leading brands — including global automakers in Detroit and Germany, household-name tech giants, and decacorn startups — depend on Voltaiq software to accelerate product development, optimize performance, ensure safety and reliability, and unlock financial value in their products. Our high-powered team is composed of battery industry veterans, PhD scientists, a highly skilled product and engineering team, and an advisory board of C-level industry execs, all of whom are passionate about enabling the global energy transition.

The Role:

At Voltaiq, we receive battery test data from research labs, manufacturers, and deployed energy systems around the world. As a Software Engineer on the data platform, you will help us architect and develop scalable systems for data processing, storage, and access. You will learn how data is used in battery research and manufacturing, and in the systems behind mobile devices, electric vehicles, and the power grid. If you love data and want to expand your ETL/ELT pipeline skill set and learn about the battery industry, all while having a positive impact on how the world consumes energy, then this is the job for you.

This is a 3-month contract position with the opportunity to extend or convert to a full-time (FTE) role.

Responsibilities:

  • Design and implement reliable data pipelines.
  • Design and implement efficient data storage.
  • Establish conventions and create new APIs for data analysis, data ingestion, and data integration, and evolve them as the product and underlying services change.
  • Define clear SLAs for each production data pipeline.
  • Develop best practices and frameworks for unit, functional, and integration tests around data pipelines, and drive the team toward increased overall test coverage.

Required Skills & Qualifications:

  • 8+ years of experience working in large-scale distributed systems
  • Excellent programming skills in Python
  • Excellent knowledge of relational databases (Postgres preferred), NoSQL, and ORM technologies (Django ORM preferred)
  • Experience with schema design and data modeling
  • Experience writing, analyzing, and debugging SQL queries
  • Hands-on development of highly available, scalable, distributed back-end services
  • Familiarity with different types of data file formats (Parquet, Avro, etc.)
  • Experience with performing data analysis, ingestion, and integration
  • Experience with Spark
  • Team player and positive collaborator
  • Excellent written and verbal communication skills
  • Strong analytical and problem-solving skills
  • Passion for data engineering and for enabling others by making their data easier to access

Preferred Skills & Qualifications:

  • Understanding of REST principles and experience working with and implementing backend APIs
  • Experience in data privacy and security related projects
  • Experience working with and operating workflow or orchestration frameworks (Celery preferred)
  • Experience with large-scale messaging systems like Kafka or RabbitMQ


Posted: Sept. 5, 2020
