Full-time
On-site
Job Description
**Role Summary**
We are seeking a Data Engineer to design, build, and operate the data pipelines and warehouses that power enterprise analytics and reporting. The role covers ingestion, transformation, modelling, and operationalisation across cloud and on-prem data sources.
**Key Responsibilities**
* Design and implement batch and streaming data pipelines (ETL/ELT).
* Build and maintain dimensional/data-warehouse models (star/snowflake schemas, slowly-changing dimensions).
* Develop in SQL, Python, and at least one orchestrator (Airflow, Azure Data Factory, AWS Glue).
* Operate data quality checks, lineage, and observability (Great Expectations, Monte Carlo, or similar).
* Optimise warehouse performance (Snowflake, Synapse, BigQuery, Redshift).
* Partner with BI/analytics teams on semantic models and self-service consumption.
* Document pipelines, schemas, and runbooks.
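To illustrate the dimensional-modelling work above, here is a minimal sketch of a Type-2 slowly-changing-dimension upsert in plain Python. All record shapes, field names, and sample values are hypothetical, not part of this role's actual stack:

```python
from datetime import date

def scd2_upsert(dimension, incoming, today):
    """Type-2 SCD: close out changed rows and append new current versions.

    Each dimension row is a dict with 'key', 'attrs', 'valid_from',
    'valid_to', and 'is_current'. Field names are illustrative only.
    """
    current = {row["key"]: row for row in dimension if row["is_current"]}
    for rec in incoming:
        existing = current.get(rec["key"])
        if existing is None:
            # Brand-new key: insert as the current version.
            dimension.append({**rec, "valid_from": today,
                              "valid_to": None, "is_current": True})
        elif existing["attrs"] != rec["attrs"]:
            # Attributes changed: close the old row, append a new version.
            existing["valid_to"] = today
            existing["is_current"] = False
            dimension.append({**rec, "valid_from": today,
                              "valid_to": None, "is_current": True})
    return dimension

dim = [{"key": 1, "attrs": {"city": "Cairo"},
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
incoming = [{"key": 1, "attrs": {"city": "Dubai"}},
            {"key": 2, "attrs": {"city": "Riyadh"}}]
dim = scd2_upsert(dim, incoming, date(2024, 6, 1))
```

In a real warehouse this merge would typically run as a `MERGE` statement or a dbt snapshot rather than in-process Python; the sketch only shows the versioning logic the bullet refers to.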
**Required Qualifications**
* Bachelor's degree in CS, Engineering, Statistics, or equivalent.
* 4+ years building production data pipelines.
* Strong SQL (window functions, CTEs, query tuning) and Python.
* Hands-on with at least one major DW/Lakehouse: Snowflake, BigQuery, Synapse, Redshift, Databricks.
* Experience with at least one orchestrator: Airflow, ADF, Glue, dbt + scheduler.
* Familiarity with cloud object storage (S3, ADLS, GCS) and file formats (Parquet, ORC, Avro).
* Professional English — mandatory.
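As a sketch of the "window functions and CTEs" skill named above, the query below computes a per-customer running total using a CTE and `SUM(...) OVER (...)`, run here against an in-memory SQLite database so it is self-contained. Table and column names are invented for illustration:

```python
import sqlite3

# Hypothetical orders table; names and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer_id INTEGER, order_date TEXT, amount REAL);
INSERT INTO orders VALUES
  (1, '2024-01-01', 100.0),
  (1, '2024-02-01', 150.0),
  (2, '2024-01-15', 200.0);
""")

# CTE plus a window function: running total of spend per customer.
query = """
WITH ranked AS (
  SELECT customer_id,
         order_date,
         amount,
         SUM(amount) OVER (
           PARTITION BY customer_id
           ORDER BY order_date
         ) AS running_total
  FROM orders
)
SELECT customer_id, order_date, running_total
FROM ranked
ORDER BY customer_id, order_date;
"""
rows = conn.execute(query).fetchall()
# rows → [(1, '2024-01-01', 100.0), (1, '2024-02-01', 250.0),
#         (2, '2024-01-15', 200.0)]
```

The same pattern carries over to Snowflake, BigQuery, Synapse, or Redshift; only dialect details differ. (SQLite window functions require SQLite 3.25+.)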
**Preferred / Nice To Have**
* Working knowledge of Arabic is a plus.
* Streaming experience: Kafka, Kinesis, Event Hubs, Spark Structured Streaming.
* dbt Analytics Engineer or cloud data engineer certifications.
* Exposure to data governance/cataloguing (Purview, Unity Catalog, Collibra).