Mass Street Analytics, Inc. is looking for three to five professional freelance data engineers to work on projects on an ongoing basis. Engineers will work with the Chief Engineer to deliver data management solutions to clients.
Candidates will work with the full range of big data tools. We are not looking for people with deep experience in any one particular piece of software, though there are core technologies you should know, such as Hadoop, Hive, Spark, and Kafka. Beyond that, candidates need the skills to design and implement solutions across a wide range of use cases. As such, we're looking for problem solvers, innovators, and creative thinkers who stay up to date on the latest advancements in technology and like to work on the bleeding edge. In short, we're looking for ENGINEERS, not point-and-click tool users.
We're looking for people we can go to when we have projects. We're bringing on three to five engineers to make sure we have a deep bench of available talent. As such, we're looking for people who make their living freelance consulting; we will not consider candidates for whom this is a "side hustle". You will not be expected to work 40 hours a week like staff augmentation. You will be given the freedom to manage your other clients' projects.
You will need to be available during CONUS business hours.
All work will be managed through the Upwork platform. At this time, there are no plans to convert people to full-time employees.
Mass Street Analytics strives to maintain a 100% remote team. However, client needs sometimes necessitate having boots on the ground. People willing to be forward deployed and on site during the week will be given priority consideration. In those instances where you need to be on site, we will negotiate how to handle the cost of travel on a case-by-case basis.
The requirements below are not an all-inclusive list and will expand as we start screening candidates. There is intentionally no years-of-experience requirement, but we are looking for senior people, and your resume should reflect that.
Computer Science degree from an engineering school or commensurate real-world work experience
Experience working with open source "big data" software (Hadoop, Kafka, Druid, etc.)
Experience developing solutions with one of the big three Hadoop vendors
Experience with either Java or Scala
Experience with DevOps software and techniques (container orchestration: Kubernetes, etc.; configuration management: Puppet, etc.; CI/CD: Jenkins, Travis CI, etc.)
Experience with *nix shell scripting
Freelance software development is your primary source of income
Experience with Python in addition to Java or Scala
Experience with the Hortonworks Hadoop Distribution
Willingness to travel