Based in Downtown Los Angeles, CA, we're a fast-paced, growing, and cool entertainment company. We're looking for a talented Data Engineer to join our team. Not just any run-of-the-mill Data Engineer will do, though; we need someone who pays particular attention to detail and has extremely high standards for the work they do.
The Sr. Data Engineer must be comfortable working in a fast-paced environment, willing to develop and support an end-to-end Data solution, and skilled and experienced with Hadoop, PostgreSQL, MapReduce, Hive, and Pig.
•Assess requirements, define the strategy and implementation plan, and deliver Data projects.
•Design and develop big data processing components as part of an agile/scrum team.
•Create and support ETL (Extract, Transform, and Load) processes to bring data into the Data Warehouse.
•Conduct daily maintenance, monitoring, performance analysis, troubleshooting, and problem resolution for the ETL processes.
•Identify data discrepancies and data quality issues, and work to ensure data consistency and integrity.
•Provide technical expertise to implement Hadoop and Big Data systems and other related projects.
•Experience writing Unix shell and SQL scripts
•Experience with message queues such as RabbitMQ and/or real-time systems is helpful.
•Experience in, and passionate about, quality and engineering fundamentals (performance/scalability, reliability, diagnosis, deployment, manageability, security, compatibility)
•Lead and mentor a team of developers
•Architect and document scalable Big Data systems
•B.S. or M.S. in Computer Science or other related fields preferred.
•Sound knowledge of ETL Process and Data Warehouse concepts.
•5 years of hands-on experience with PostgreSQL, SQL Server, Teradata, Aster, Netezza, or equivalent technology.
•Significant experience with the overall Hadoop ecosystem (HDFS, MapReduce, Pig, Hive, etc.).
•Experience with ZooKeeper, Scribe or Flume, and Mahout; skills with visualization tools are highly preferred.
•Experience with Machine Learning/Artificial Engine/Recommendations Engine is
•Good knowledge of Linux architecture and tools.
•Strong Shell scripting skills.
•Experience in designing, developing and implementing processes involving large data sets.
•Strong performance tuning, research, and analytical skills.
•Ability to troubleshoot problems and quickly resolve issues.
•Experience with relational database concepts and database architecture and design.
•Strong verbal and written communication skills, with an ability to express complex business concepts in technical terms.