BI Engineer


Job Description

RoundForest is seeking a BI Engineer to lead the design and implementation of our Big Data warehousing platform. You will support several projects simultaneously and design, model, and develop end-to-end data pipelines and data integration processes that enable Big Data analytics. Successful applicants will be energetic team players with a proven analytical background at a web media or consumer-facing online company.

Responsibilities


  • Lead the design and development of scalable data pipelines and data integration processes from disparate source systems

  • Help plan and maintain the technical infrastructure, its configuration, performance, and storage requirements, with consideration of tiered data and data archiving

  • Work with DevOps and cross-functional teams to plan upgrades and any maintenance to our data platform

  • Work effectively with Data Science, BI, ETL, and other cross-functional teams to identify the best data solutions

  • Mentor junior developers, perform code reviews, and help establish best practices

Skills & Experience

  • 5+ years of Data Warehouse/Business Intelligence experience, including 2+ years handling Big Data platforms

  • Proven track record of designing and developing large, complex Data Warehouse solutions that pull data from various sources

  • Expert-level understanding of data modeling and Data Warehouse implementation, management, and performance tuning

  • Strong experience working with semi-structured and unstructured data at scale, and developing scripts that extract data from external data sources such as web APIs, real-time feeds, etc.

  • Hands-on experience designing and implementing processes to move data efficiently between different data sources and file systems

  • Hands-on experience with Hadoop technologies (HDFS, Spark, Hive, MapReduce, Flume, Oozie, Kylin, etc.) – a big advantage

  • Experience in Java, Scala, and scripting languages such as Python, Ruby, etc.

  • Strong SQL and analytical skills

  • Hands-on experience with big data operations on PostgreSQL or a similar relational database

  • Hands-on experience with Hadoop query languages such as Pig and Hive – an advantage

  • Experience with version control and issue tracking systems such as Git, SVN, and JIRA

  • Must have a good understanding of traditional ETL development tools such as Informatica, Pentaho, etc., as well as Big Data tools, and be able to determine when to leverage one over the other for effective enterprise-scale Big Data process implementation

  • Familiarity with data visualization tools such as Tableau

  • Must be a self-motivated, energetic, detail-oriented team player who is passionate about producing high-quality work that meets or exceeds SLAs, and who thrives in a collaborative environment

  • Ability to explain technical concepts to non-technical associates, and vice versa, is a must

  • Good understanding of business KPIs

  • Excellent written and verbal communication skills