Senior DevOps Engineer
Note: We're only looking for candidates in North/South America. We have capacity for one full-time DevOps engineer, plus three more on a short three-month contract.
Who we are:
- We're focused and work efficiently. We optimize for productivity, not hours.
- We're optimistic. We focus on how we can succeed, instead of worrying about why we might fail.
- We're strategic. We validate assumptions as early as possible. Our sales team is a key part of testing our product scope.
- We schedule meetings and limit interruptions so you can focus.
- Currently, US time zones are a requirement.
What you'll do:
- You will turn ideas into useful products. Our metric for success is: "Did I make life easier for a data scientist this week?"
- You will use Kubernetes, Python, Docker, and AWS/GCP/Azure.
- We rotate developers through sales and support so that we always know what our customers are going through.
- Telecommuting is OK.
- No agencies, please.
Who you are:
- Pedigree isn't important.
- You have some industry experience, ideally over 4 years. We're not ready to mentor junior hires.
- You're efficient: you can identify the essentials and ship them quickly.
- You love the PyData ecosystem, and would love to have a job where you help make it thrive.
- You're customer-focused: you care that people enjoy using our products.
- You have some experience working remotely. Remote isn't for everyone, and prior remote experience is the best signal we have that it will work for you.
Our interview process:
- 1 or 2 phone calls to evaluate fit (who you are, what you want to do, what you have done). There will be no brain teasers or coding challenges on these calls.
- A 2-hour take-home problem.
- 1 or 2 days of in-person work with us on some code. Depending on your constraints, this will be either on our code base or on a related open-source project. You will be paid for this work; if you cannot accept payment, we'll make a donation to a non-profit of your choice.
About the Company
Saturn is building Databricks for Dask. We're ex-Anaconda folks and members of the open source community. We're building a data science platform using the common OSS tools everyone loves (Jupyter, Dask, Airflow). Why? Because you shouldn't have to learn Scala to work with big data.