Description:
Must-have:
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or a related technical discipline
- 8+ years of experience with Spark, Hadoop, Hive, Presto, and Kafka
- Current hands-on experience building big data pipelines
- Relational (SQL) and NoSQL databases, including Postgres and MongoDB
- Stream-processing systems such as Spark Streaming and Storm
- Data storage formats such as Parquet
- Object-oriented and functional programming languages, including Scala, C++, Java, and Python
- Familiarity with configuring CI/CD pipelines in GitHub, Bitbucket, or Jenkins for building application binaries
- Workflow management and pipeline tools such as Airflow, Luigi, and Kubeflow
Good-to-have:
- Experience with machine learning frameworks such as TensorFlow and Keras, and libraries such as scikit-learn
- Columnar stores such as Amazon Redshift
Organization: Dbank
Industry: Engineering Jobs
Occupational Category: Data Engineering
Job Location: Karachi, Pakistan
Shift Type: Morning
Job Type: Full Time
Gender: No Preference
Career Level: Experienced Professional
Experience: 7 Years
Posted at: 2023-01-02 4:54 pm
Expires on: 2024-12-23