Port existing data pipelines and make data available to an internal data fabric.
Build new data acquisition and transformation pipelines using big data and cloud technologies.
Work with the broader technology team, including the information architecture and data fabric teams, to align pipelines with the lodestone initiative.
What We’re Looking For
BS in Computer Science or Engineering with at least 4 years of professional software development experience
Experience with Big Data platforms such as Apache Hadoop and Apache Spark
Deep understanding of REST, good API design, and OOP principles
Experience with object-oriented/functional scripting languages: Python, C#, Scala, etc.
In-depth knowledge of Structured Query Language (SQL), including advanced SQL coding, relational database design, and data warehousing
Experience developing and maintaining production software using cloud-based tooling (AWS, Docker, Kubernetes, Okta)
Strong collaboration and teamwork skills, with excellent written and verbal communication
Motivated self-starter with the ability to work in a fast-paced software development environment
Agile experience highly desirable
Proficiency with standard development tooling, including IDEs, database servers, Git, continuous integration, unit-testing tools, and defect-management tools
Basic understanding of programming languages such as Java and Oracle PL/SQL
Experience with Snowflake or Databricks is a big plus