Senior Data Engineer

McLean · Full Time

Required Skills

Hadoop, Java, Python, Agile, Akka, AWS, BASIC, big data, C, Cassandra, cloud, HBase, Kafka, Linux, MongoDB, NoSQL, Perl, Postgres, Redshift, Scala, scaling, scripting, Shell, Spark, Unix
As a Data Engineer, you'll be part of an Agile platform team dedicated to breaking the norm and pushing the limits of continuous improvement and innovation. You will participate in detailed technical design, development, and implementation of applications using existing and emerging technology platforms. Working within an Agile environment, you will provide input into architectural design decisions, develop code to meet story acceptance criteria, and ensure that the applications we build are always available to our customers. You'll have the opportunity to mentor other engineers and develop your technical knowledge and skills to keep your mind and our business on the cutting edge of technology. We have seas of big data and rivers of fast data. To tame it all we're using tools like Spark, Scala, Kafka, Cassandra, Accumulo, HBase, Hadoop, HDFS, AVRO, MongoDB, and Mesos, and we're on the lookout for anything else that will help us.
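To illustrate the "rivers of fast data" side of that stack, here is a minimal, hypothetical sketch (not this team's actual code) of a Spark Structured Streaming job in Scala that reads a Kafka topic of transaction events and counts events per account in one-minute windows. The broker address, topic, and column names are assumptions made for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object FastDataSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("fast-data-sketch")
      .getOrCreate()
    import spark.implicits._

    // Read the raw event stream from Kafka (broker and topic are placeholders)
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "transactions")
      .load()
      .selectExpr(
        "CAST(key AS STRING) AS accountId",
        "CAST(value AS STRING) AS payload",
        "timestamp")

    // Count events per account in 1-minute windows, tolerating late data
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"), $"accountId")
      .count()

    // Print running aggregates to the console; a real job would sink to
    // Cassandra, HBase, or a warehouse instead
    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```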

What you will be doing:
- Expose criminal activities by driving the re-invention and automation of AWS-based data frameworks that operate on massive amounts of data
- Use data to identify risks in business strategies by designing and building cloud-based analytical data warehouses and fit-for-purpose data stores
- Enable sophisticated methods of identifying fraudulent behaviors by scaling complex, machine-learning-based analytical models
- Bring energy, collaboration, and agility to our teams by working directly with product managers and end-users to deliver data-driven products
- Build applications from the ground up using a modern cloud-based open source technology stack such as Scala, Java, Spark, Redshift, Postgres, and HBase (see the sketch after this list)
- Help grow and mentor team members on the fine art of data engineering and software abstractions
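As a concrete (and purely hypothetical) sketch of the kind of batch pipeline these bullets describe, the Scala Spark job below reads raw events from S3, rolls them up per customer per day, and loads the result into a Postgres (or Redshift) table over JDBC. The bucket, table, column, and credential names are placeholders, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object WarehouseLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("warehouse-load-sketch")
      .getOrCreate()

    // Read a day of raw events from S3 (path is a placeholder)
    val events = spark.read.parquet("s3a://example-bucket/events/dt=2019-01-01/")

    // Roll events up to one row per customer per day
    val daily = events
      .groupBy(col("customerId"), to_date(col("eventTime")).as("eventDate"))
      .agg(count("*").as("eventCount"), sum("amount").as("totalAmount"))

    // Load the aggregate into an analytical store via JDBC
    // (requires the Postgres JDBC driver on the classpath)
    daily.write
      .format("jdbc")
      .option("url", "jdbc:postgresql://warehouse-host:5432/analytics")
      .option("dbtable", "daily_customer_activity")
      .option("user", sys.env.getOrElse("DB_USER", "etl"))
      .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
      .mode("overwrite")
      .save()

    spark.stop()
  }
}
```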


Basic Qualifications:
- Bachelor's Degree or military experience
- At least 2 years of experience in C, Java, or Scala
- At least 2 years of experience with Unix/Linux systems with scripting experience in Shell, Perl, or Python
- At least 2 years of experience building data pipelines
- At least 1 year of experience with leading big and fast data technologies like Spark, Scala, Akka, Cassandra, Accumulo, HBase, Hadoop, HDFS, AVRO, MongoDB, Mesos, and AWS

Preferred Qualifications:
- Master's Degree
- Experience with recognized industry patterns, methodologies, and techniques
- Familiarity with Agile engineering practices
- 2+ years of experience with Spark, Scala, and/or Akka
- 2+ years of experience with Spark Streaming, Kafka, Storm, Flink, or other stream processing technologies
- 3+ years of experience with NoSQL implementations (Mongo, Cassandra, etc.)
- 3+ years of experience developing Java-based software solutions