Site Reliability Engineer - Hadoop

New York, NY | Full Time


The SRE team works on coding, automating, and increasing the availability, reliability, and performance of the company's internal and external services.

What you'll do:

- Architect and manage our data warehouse and pipeline
- Work with developers to build a data pipeline for heterogeneous production and consumption of data
- Build a data lake and related infrastructure for long-term data archival from different sources
- Build and utilize ETL tooling to acquire and provide data across different systems and groups
- Integrate our container efforts with our non-container infrastructure to deliver production data
- Architect and code the systems that empower data gathering on the Deep & Dark Web
- Ensure our systems are available, scalable, and monitored
- Focus on internal tooling, automation, data warehousing, and security
- Test and tune performance issues across components and services

REQUIREMENTS

Who you are:

- Skilled in either Python or Ruby (OOP experience a huge plus!)
- Willing to learn, teach, and code review
- Strong background in Linux
- Previous experience and responsibilities in critical and complex systems
- Experience with config management systems (Ansible, Chef, Puppet, Salt)
- Experience with AWS or GCE (API usage a plus!)

Tools we like (experience with the following or similar is a plus, not a requirement):

- ZooKeeper, etcd
- Hadoop (HDFS, YARN, Pig, Hive)
- Postgres, MySQL/MariaDB, Elasticsearch, HBase
- Kafka
- Metrics (OpenTSDB)
- Containerization (Docker, Kubernetes)
- Monitoring (Icinga2)
- Logging (Kibana, Logstash)