Description:
The Data Engineering team is responsible for implementing and maintaining the services and tools that support existing feed systems, allowing users to consume different datasets, and for making data available to a data fabric for wider consumption and processing within the company.
Responsibilities Include
- Port existing data pipelines and make data available to an internal data fabric
- Build new data acquisition and transformation pipelines using big data and cloud technologies
- Work with the broader technology team, including the information architecture and data fabric teams, to align pipelines with the Lodestone initiative
What We’re Looking For
- BS in Computer Science or Engineering with at least 4 years of professional software development experience
- Experience with Big Data platforms such as Apache Hadoop and Apache Spark
- Deep understanding of REST, good API design, and OOP principles
- Experience with object-oriented/functional scripting languages: Python, C#, Scala, etc.
- Good working knowledge of relational SQL and NoSQL databases
- Experience developing and maintaining software in production using cloud-based tooling (GCP, AWS, Docker & Kubernetes, Okta)
- Strong collaboration and teamwork abilities, with excellent written and verbal communication skills
- Motivated self-starter with the ability to work in a fast-paced software development environment
- Agile experience highly desirable
- Experience with Snowflake or Databricks is a big plus