Senior Data Engineer

Employer is looking for a Senior Data Engineer in New York, NY.
This job opportunity, ID 701609, has been live since 03/08/2018.
The most exciting part is the enormous potential for personal and professional growth. We are always seeking new and better tools to meet our challenges, such as adopting proven open-source technologies to make our data infrastructure more nimble, scalable, and robust. Some of the cutting-edge technologies we have recently implemented are Kafka, Spark Streaming, Docker, and Mesos.

What you'll be doing:

Design, build and maintain reliable and scalable enterprise-level distributed transactional data processing systems for scaling the existing business and supporting new business initiatives
Optimize jobs to utilize Kafka, Hadoop, Vertica, Spark Streaming and Mesos resources in the most efficient way
Monitor and provide transparency into data quality across systems (accuracy, consistency, completeness, etc.; one such check is sketched after this list)
Increase accessibility and effectiveness of data (work with analysts, data scientists, and developers to build/deploy tools and datasets that fit their use cases)
Collaborate within a small team with diverse technology backgrounds
Provide mentorship and guidance to junior team members
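
A minimal Python sketch of the completeness check mentioned above: compare one day's row counts between the OLTP source (SQL Server) and the warehouse copy (Vertica) and flag any gap beyond a tolerance. The function names, table/column parameters, and the 0.1% default tolerance are illustrative assumptions, not details from the posting; any two DB-API connections work (e.g. pyodbc for SQL Server, vertica-python for Vertica).

def completeness_gap(source_conn, target_conn, table, date_col, day):
    """Return (source_count, target_count, relative_gap) for one day's data."""
    # The date is string-formatted for brevity in this sketch; real jobs
    # should use bound parameters matching each driver's paramstyle.
    query = f"SELECT COUNT(*) FROM {table} WHERE {date_col} = '{day}'"
    counts = []
    for conn in (source_conn, target_conn):
        cursor = conn.cursor()
        cursor.execute(query)
        counts.append(cursor.fetchone()[0])
    source_count, target_count = counts
    gap = abs(source_count - target_count) / max(source_count, 1)
    return source_count, target_count, gap

def check_completeness(source_conn, target_conn, table, date_col, day, tolerance=0.001):
    # Raise if the target is missing (or duplicating) more rows than tolerated.
    src, tgt, gap = completeness_gap(source_conn, target_conn, table, date_col, day)
    if gap > tolerance:
        raise RuntimeError(f"{table} {day}: source={src} target={tgt} gap={gap:.2%}")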

Team Responsibilities:

Installation, upkeep, maintenance, and monitoring of Kafka, Hadoop, Vertica, and RDBMSs
Ingest, validate, and process internal and third-party data
Create, maintain, and monitor data flows in Hive, SQL Server, and Vertica for consistency, accuracy, and lag time
Maintain and enhance the framework for jobs (primarily aggregate jobs in Hive; an example follows the technologies list below)
Create consumers for data in Kafka, such as Flafka for Hadoop, Flume for Vertica, and Spark Streaming for near-real-time aggregation (as sketched after this list)
Train developers/analysts on tools to pull data
Tool evaluation/selection/implementation
Backups/Retention/High Availability/Capacity Planning
Disaster recovery - all our core data services run in a second data center for complete business continuity
Review/approval - DDL for databases, Hive framework jobs, and Spark Streaming jobs, to make sure they meet our standards
24x7 on-call rotation for production support
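
A minimal sketch of the Spark Streaming consumer mentioned above, aggregating Kafka events in near real time with PySpark. The topic name, broker address, and the JSON "event_type" field are illustrative assumptions; the direct-stream API shown is from the Spark 1.x/2.x Python bindings that were current when this role was posted.

import json

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="NearRealTimeAggregation")
ssc = StreamingContext(sc, 60)  # one-minute micro-batches

# Receiver-less direct stream from Kafka; Spark tracks the offsets itself.
stream = KafkaUtils.createDirectStream(
    ssc,
    ["events"],                              # hypothetical topic
    {"metadata.broker.list": "kafka:9092"},  # hypothetical broker list
)

# Count events per type within each micro-batch and print the result.
counts = (
    stream.map(lambda kv: json.loads(kv[1]))            # the value holds the JSON payload
          .map(lambda event: (event["event_type"], 1))  # hypothetical field
          .reduceByKey(lambda a, b: a + b)
)
counts.pprint()

ssc.start()
ssc.awaitTermination()
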
Technologies We Use:

Chronos - job scheduling
Docker - packaged container images with all dependencies
Graphite/Beacon - monitoring of data flows
Hive - SQL data warehouse layer for data in HDFS
Impala - faster SQL layer on top of Hive
Kafka - distributed commit log storage
Marathon - cluster-wide init for Docker containers
Mesos - distributed cluster resource manager
Spark Streaming - near-real-time aggregation
SQL Server - reliable OLTP RDBMS
Sqoop - import/export of data to and from RDBMSs
Vertica - fast parallel data warehouse
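
The aggregate jobs mentioned under the team responsibilities are typically periodic rollups in Hive. A minimal Python sketch using PyHive; the host, table, and column names are illustrative assumptions, not details from the posting.

from pyhive import hive

# Connect to HiveServer2; the host name is a placeholder.
conn = hive.Connection(host="hive-server", port=10000)
cursor = conn.cursor()

# Recompute one day's aggregates into a date-partitioned summary table.
cursor.execute("""
    INSERT OVERWRITE TABLE daily_event_counts PARTITION (dt='2018-03-08')
    SELECT event_type, COUNT(*) AS events
    FROM raw_events
    WHERE dt = '2018-03-08'
    GROUP BY event_type
""")

cursor.close()
conn.close()
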
Required Skills:

BA/BS degree in Computer Science or a related field
5+ years of software engineering experience
Knowledge of and exposure to distributed production systems (e.g., Hadoop) is a huge plus
Proficiency in Linux
Fluency in Python; experience in Scala/Java is a huge plus
Strong understanding of RDBMSs and SQL
Passion for engineering and computer science around data
Willingness to participate in a 24x7 on-call rotation
