46 Reporting Engineer jobs in Nigeria
Data Engineer
Posted today
Job Description
- To design, build, and maintain robust, scalable, and high-performance ETL/ELT data pipelines for reporting, business intelligence, and machine learning initiatives
- The role is critical for ensuring the quality, lineage, and governance of all critical data assets.
Responsibilities
- Build and optimize data pipelines using tools like Airflow/Prefect to ingest data from core banking, payment, and third-party sources.
- Design and implement dimensional and denormalized data models within the Data Warehouse (e.g., Postgres/Oracle/BigQuery).
- Utilize streaming technologies like Kafka and transformation tools like DBT to process data in real-time or near real-time.
- Implement data quality checks and maintain data lineage documentation for governance.
- Leverage Python and SQL extensively for scripting, data manipulation, and pipeline development.
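The pipeline work described above (ingest, transform, load) can be illustrated with a minimal, tool-agnostic Python sketch. All field and source names here are hypothetical, and a real implementation would wrap each step as an Airflow or Prefect task rather than calling them directly:

```python
# Minimal ELT-step sketch: the kind of callables an Airflow/Prefect task would wrap.
# Field and source names are invented examples, not a real schema.

def extract(raw_records):
    """Simulate ingesting rows from a core-banking or payment source."""
    return [r for r in raw_records if r.get("txn_id") is not None]

def transform(records):
    """Normalize amounts to minor currency units and tag the source system."""
    return [
        {
            "txn_id": r["txn_id"],
            "amount_minor": int(round(r["amount"] * 100)),
            "source": r.get("source", "unknown"),
        }
        for r in records
    ]

def load(rows, warehouse):
    """Append rows to an in-memory 'warehouse' table (stand-in for Postgres/BigQuery)."""
    warehouse.extend(rows)
    return len(rows)

raw = [
    {"txn_id": "t1", "amount": 150.25, "source": "core_banking"},
    {"txn_id": None, "amount": 10.00},  # dropped: missing primary key
    {"txn_id": "t2", "amount": 9.99, "source": "payments"},
]
warehouse_table = []
loaded = load(transform(extract(raw)), warehouse_table)
print(loaded)                              # 2
print(warehouse_table[0]["amount_minor"])  # 15025
```

In production, each function would become a task in a DAG so that retries, scheduling, and lineage are handled by the orchestrator rather than by hand.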
Key Performance Indicators
- Data Pipeline Reliability: Pipeline SLA adherence (uptime) and percentage of automated tests (e.g., dbt tests). Target: 99.9% uptime; 90% of critical data models tested.
- Data Freshness & Delivery: Mean latency for critical reports/data sets. Target: <1 hour for batch; <5 seconds for streaming data.
- Efficiency: Data Warehouse query run time (p95). Target: reduced by 20% QoQ.
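As a rough illustration of how such KPIs could be measured, here is a hedged Python sketch over hypothetical run and query-timing logs (the field names and data are invented for the example):

```python
# Sketch: computing two of the KPI measures from hypothetical logs.

def sla_adherence(runs):
    """Share of pipeline runs that succeeded (a proxy for SLA/uptime adherence)."""
    ok = sum(1 for r in runs if r["status"] == "success")
    return ok / len(runs)

def p95(values):
    """95th-percentile query runtime (nearest-rank method)."""
    ordered = sorted(values)
    rank = max(0, int(0.95 * len(ordered)) - 1)  # nearest rank, 0-indexed
    return ordered[rank]

runs = [{"status": "success"}] * 999 + [{"status": "failed"}]
print(sla_adherence(runs))  # 0.999, i.e. exactly at the 99.9% target

query_seconds = [0.2] * 90 + [1.5] * 10  # 90 fast queries, 10 slow ones
print(p95(query_seconds))   # 1.5
```

In practice these numbers would come from the orchestrator's run metadata and the warehouse's query history tables, not from hand-built lists.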
Requirements
- Education: Bachelor's degree or Higher National Diploma from an approved University or Polytechnic.
- Experience: 4-6 years of hands-on experience in full-stack development.
- Relevant certifications for DBA are an added advantage.
- Specific Experience: Prior experience in banking, fintech, or other financial services environments is highly desirable.
Knowledge:
- Warehousing principles, Data Governance frameworks, Dimensional Modelling, Kafka/Stream processing, Data Quality
Skills / Competencies:
- Python, Advanced SQL, ETL/ELT tools (Airflow/Prefect), Postgres/Oracle/BigQuery, DBT, APIs, Git, CI/CD
Salary
Open to discussion.
Method of Application
Interested and qualified candidates should please send their application to: using the Job Position as the subject of the email.
Data Engineer
Posted today
Job Viewed
Job Description
Data Engineer - Dangote Industries Limited
Software & Data
Lagos | Full Time
Shipping & Logistics | Confidential
- Minimum Qualification :
- Experience Level : Mid level
- Experience Length : 3 years
Dancom Technologies Limited
Job Summary
The ideal candidate will have a strong background in data engineering, with proficiency in ETL/ELT processes, big data technologies, and cloud platforms. You will play a critical role in ensuring data accessibility, quality, and integrity by maintaining a scalable infrastructure that supports data analytics and other data-related initiatives. This position offers an exciting opportunity to work collaboratively with cross-functional teams and leverage your expertise to shape our data infrastructure.
Responsibilities
- Acts as the architect and builder of the data infrastructure, creating the necessary foundation for proper analytics engagements.
- Design and implement efficient ETL (Extract, Transform, Load) pipelines to support data integration and analytics.
- Build and maintain data warehouses and data lakes to support business intelligence and related data applications.
- Develop data models and schemas for effective data analysis and reporting.
- Optimise data processing, query performance, and platform efficiency.
- Ensure data quality, integrity, and security through robust governance practices
- Lead data migration initiatives across multiple platforms.
- Modernise applications and databases to leverage Azure's advanced capabilities.
- Implement monitoring solutions for database usage, performance, and reliability.
- Develop automated data quality checks and testing procedures.
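The automated quality checks mentioned above might, in their simplest form, look like the Python sketch below. Column names and rules are hypothetical; tools like dbt tests or Great Expectations express the same idea declaratively:

```python
# Sketch: three common automated data quality checks over a batch of rows.
# Returns the indexes of failing rows for each check.

def check_not_null(rows, column):
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        v = r.get(column)
        if v in seen:
            dupes.append(i)
        seen.add(v)
    return dupes

def check_range(rows, column, lo, hi):
    return [i for i, r in enumerate(rows)
            if r.get(column) is not None and not (lo <= r[column] <= hi)]

rows = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 1, "amount": -5.0},   # duplicate id, negative amount
    {"order_id": 2, "amount": None},   # missing amount
]
failures = {
    "amount_not_null": check_not_null(rows, "amount"),
    "order_id_unique": check_unique(rows, "order_id"),
    "amount_in_range": check_range(rows, "amount", 0, 1_000_000),
}
print(failures)  # maps each check to the failing row indexes
```

A pipeline would typically run such checks after each load and fail (or quarantine rows) when any list is non-empty.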
Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in a similar role within the technology industry.
- Proven experience in data engineering, with a strong foundation in ETL/ELT processes and data warehousing.
- Proven ability to work with both OLTP (Microsoft SQL Server/MySQL) and OLAP systems (Snowflake).
- Hands-on experience in designing, optimising, and managing batch and streaming data pipelines.
- Expertise in data migration and modernisation, particularly on Azure cloud platforms.
- Microsoft Certification in Azure Data Engineering is highly desirable.
- Expertise in SQL, Python, and other programming languages relevant to data engineering.
- Experience with big data technologies such as Hadoop, Spark, and Hive.
- Knowledge and hands-on experience with cloud platforms like AWS, Azure, or GCP, including services such as Azure Data Factory, Azure Synapse Pipeline, Azure Event Hub, and Azure IoT Hub.
- Familiarity with data visualisation tools like Power BI.
- On-site availability required
Benefits
- Private Health Insurance.
- Paid Time Off.
- Opportunities for Professional Growth and Career Advancement.
- Training and Development Programs.
- Competitive Salary.
- Collaborative and Supportive Work Environment.
Data Engineer
Posted today
Job Description
LEAD Enterprise Support Company Limited is a foremost Human Resources Solutions organization with many years of cumulative experience and expertise. We are prolific in Outsourcing, Recruitment, Head hunting and HR Advisory. We are a multi-sectorial servicing company, with landmark service deliverables to our clients in varied industries.
We are recruiting to fill the position below:
Job Position: Data Engineer
Job Location: Maryland, Lagos
Employment Type: Full-time
Summary
- To design, build, and maintain robust, scalable, and high-performance ETL/ELT data pipelines for reporting, business intelligence, and machine learning initiatives.
- The role is critical for ensuring the quality, lineage, and governance of all critical data assets.
Responsibilities
- Build and optimize data pipelines using tools like Airflow/Prefect to ingest data from core banking, payment, and third-party sources.
- Design and implement dimensional and denormalized data models within the Data Warehouse (e.g., Postgres/Oracle/BigQuery).
- Utilize streaming technologies like Kafka and transformation tools like DBT to process data in real-time or near real-time.
- Implement data quality checks and maintain data lineage documentation for governance.
- Leverage Python and SQL extensively for scripting, data manipulation, and pipeline development.
Requirements
- Education: Bachelor's degree or Higher National Diploma from an approved University or Polytechnic.
- Experience: 4-6 years of hands-on experience in full-stack development.
- Relevant certifications for DBA are an added advantage.
- Specific Experience: Prior experience in banking, fintech, or other financial services environments is highly desirable.
Knowledge / Skills / Competencies:
- Knowledge: Warehousing principles, Data Governance frameworks, Dimensional Modelling, Kafka/Stream processing, Data Quality.
- Skills: Python, Advanced SQL, ETL/ELT tools (Airflow/Prefect), Postgres/Oracle/BigQuery, DBT, APIs, Git, CI/CD.
Data Engineer
Posted today
Job Description
We are Helium Health…
A full-service Healthtech company on a journey to digitize healthcare across Africa. We have been doing this since 2016, and our pioneering approach has helped to transform healthcare delivery systems across Sub-Saharan Africa and the GCC, creating an alternative vision for patients and providers, whilst facilitating hundreds of thousands of high-quality patient experiences every month.
We are very good at creating for the market we live in, because we fully understand it from our own experience. Our innovative and reliable products are engineered for the nuances of the African landscape, empowering frontline providers to deliver excellent healthcare to patients. Our products include the Provider Digitization Suite (HeliumOS and HeliumDoc), Financial Solutions (HeliumCredit and HeliumWallet), and our Public Health & Data Partnerships (Public Health, and Data and Insights).
We are looking to hire a Data Engineer, who will be responsible for designing, building, and maintaining the infrastructure required to effectively collect, store, and analyze large volumes of healthcare data. You will collaborate with data scientists, software engineers, and healthcare professionals to develop scalable data pipelines and ensure data integrity, security, and accessibility.
As a Data Engineer, you will be required to:
- Translate business and functional requirements into robust, scalable, operable solutions that work well within the overall data architecture
- Design, implement, and maintain scalable ELT/ETL (Extract, Transform, Load) processes for healthcare data integration from various sources (e.g., EHR systems, clinical databases, third-party applications).
- Develop and optimize database architectures, including relational databases, data warehouses, and big data technologies (e.g., SQL, NoSQL, Hadoop).
- Ensure data quality, security, and compliance with healthcare regulations such as HIPAA.
- Collaborate with cross-functional teams (data scientists, software engineers, product managers) to design data models that support clinical decision-making and analytics.
- Build and manage data pipelines to ensure efficient data flow and real-time analytics.
- Implement data governance and master data management (MDM) strategies.
- Monitor and troubleshoot data pipelines and performance issues to ensure system reliability and uptime.
- Document data engineering processes, procedures, and data lineage for auditing and compliance purposes.
- Stay updated with industry trends and emerging technologies to recommend improvements for our data infrastructure.
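As a toy illustration of the EHR-integration work described above, the sketch below flattens a heavily simplified, FHIR-inspired patient resource into a warehouse-style row. The structure is only loosely modeled on FHIR; a real pipeline would use a proper FHIR parser and handle far more fields, versions, and edge cases:

```python
# Sketch: flattening a simplified, FHIR-inspired Patient resource into one row.
# This is NOT real FHIR handling; the field paths are illustrative only.

def flatten_patient(resource):
    name = resource.get("name", [{}])[0]  # take the first recorded name
    return {
        "patient_id": resource.get("id"),
        "family_name": name.get("family"),
        "given_name": " ".join(name.get("given", [])),
        "birth_date": resource.get("birthDate"),
        "gender": resource.get("gender"),
    }

patient = {
    "resourceType": "Patient",
    "id": "pat-001",
    "name": [{"family": "Okafor", "given": ["Ada", "N."]}],
    "birthDate": "1990-04-12",
    "gender": "female",
}
row = flatten_patient(patient)
print(row["given_name"])  # Ada N.
```

The same flattening pattern applies to other clinical resources (encounters, observations) when loading them into relational analytics tables.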
Requirements
- Bachelor's degree in Computer Science, Engineering, Information Technology, or related field (Master's degree preferred).
- 3+ years of experience in data engineering, ideally in the healthcare or EHR domain.
- Strong experience with database technologies (e.g., MySQL, PostgreSQL, MongoDB, Redshift, Snowflake).
- Proficiency in ETL tools and frameworks (e.g., Fivetran, dbt, Mage, Prefect, Airflow).
- Expertise in programming languages such as Python, SQL, and Java/Scala.
- Familiarity with healthcare data standards and formats (e.g., HL7, FHIR, DICOM).
- Experience working with cloud platforms (e.g., AWS, Azure, GCP) and big data tools (e.g., Hadoop, Spark).
- Knowledge of healthcare regulations (e.g., HIPAA, GDPR) and data privacy best practices.
- Experience with data visualization and reporting tools (e.g. Power BI).
- Strong analytical skills and the ability to troubleshoot complex data problems.
Helium Health is an equal opportunity employer and does not discriminate on the basis of race, colour, creed, religion, sex, or physical disability.
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Location: Surulere, Lagos
Employment Type: Full-time
Position Summary:
The Data Engineer is responsible for designing, building, and maintaining data pipelines and architectures that enable the collection, processing, and analysis of large datasets. The role ensures that data is clean, reliable, and accessible for business intelligence, analytics, and decision-making.
Key Responsibilities:
Data Architecture & Pipeline Development:
- Design, develop, and maintain scalable ETL (Extract, Transform, Load) pipelines for structured and unstructured data.
- Build and optimize data architectures to support reporting, analytics, and machine learning use cases.
- Integrate data from various sources (databases, APIs, cloud systems, third-party tools).
Data Quality & Governance:
- Ensure data integrity, security, and compliance with company and industry standards.
- Monitor, troubleshoot, and resolve issues in data pipelines.
- Implement data validation and cleansing processes to maintain accuracy and consistency.
Collaboration & Support:
- Work closely with data scientists, analysts, and business teams to understand data needs.
- Provide support for analytics and reporting by ensuring data availability and reliability.
- Collaborate with DevOps/Cloud engineers to optimize data storage and processing performance.
Innovation & Optimization:
- Continuously improve data infrastructure for efficiency and scalability.
- Stay updated with emerging technologies in big data, cloud, and data engineering tools.
- Recommend and implement best practices for data modeling and database design.
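One common reading of the data-modeling responsibilities above is classic star-schema design. The Python sketch below splits denormalized sales records into a dimension lookup plus fact rows; the table and column names are invented for illustration:

```python
# Sketch: splitting denormalized records into a dimension table + fact table
# (the core move in star-schema / dimensional modeling).

def load_star(records, customer_dim, fact_rows):
    for rec in records:
        key = rec["customer_email"]
        if key not in customer_dim:                  # new customer -> new dim row
            customer_dim[key] = {
                "customer_key": len(customer_dim) + 1,  # surrogate key
                "name": rec["customer_name"],
                "email": key,
            }
        fact_rows.append({                           # fact keeps only the key + measures
            "customer_key": customer_dim[key]["customer_key"],
            "order_date": rec["order_date"],
            "amount": rec["amount"],
        })

records = [
    {"customer_email": "a@x.com", "customer_name": "Ada",
     "order_date": "2024-01-05", "amount": 50},
    {"customer_email": "a@x.com", "customer_name": "Ada",
     "order_date": "2024-01-09", "amount": 75},
]
customer_dim, fact_rows = {}, []
load_star(records, customer_dim, fact_rows)
print(len(customer_dim), len(fact_rows))  # 1 2
```

The payoff is that customer attributes live in one place while the fact table stays narrow, which is what makes reporting queries cheap at scale.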
Qualifications:
- Education: B.Sc./B.Eng./M.Sc. in Computer Science, Data Science, Engineering, Information Systems, or a related field.
- Experience:
  - Proven experience as a Data Engineer, ETL Developer, or similar role.
  - Hands-on experience with SQL, Python, Spark, or Scala.
  - Strong knowledge of data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).
  - Familiarity with cloud platforms (AWS, Azure, or GCP).
  - Experience with workflow/orchestration tools (e.g., Airflow, Luigi).
- Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
- Knowledge of data governance, security, and compliance practices.
Data Engineer
Posted today
Job Description
Data Engineer – Volunteer
About SynctIQ
At SynctIQ, we're building the lightest on-demand data infrastructure to meet the needs of businesses of all sizes, anywhere in the world.
Our Vision
To be the lightest on-demand data infrastructure, meeting the needs of businesses of all sizes, everywhere.
Our Mission
To empower data-driven teams with seamless, scalable, and efficient tools that remove the complexity of modern data engineering—enabling them to focus on insights, not infrastructure, through an API-first approach.
About the Role
We're seeking a passionate and skilled Data Engineer to join our team on a volunteer basis. This is an opportunity to work with a dynamic group of technologists on meaningful data infrastructure projects using modern tools and frameworks. If you're looking to gain hands-on experience or contribute your skills to a collaborative effort, this role is for you.
Key Responsibilities
- Design, build, and maintain scalable data pipelines and streaming systems.
- Work with large-scale data in AWS S3 using Data Lake architectures.
- Develop and manage data ingestion using Apache Kafka and processing using Apache Spark.
- Integrate and manage storage formats such as Apache Iceberg or Apache Hudi.
- Optimize and manage data stores such as PostgreSQL, Redis, and Apache Druid.
- Write clean, maintainable Java code and leverage Java-based frameworks for data processing and orchestration.
- Collaborate with other engineers, analysts, and stakeholders to ensure data quality and availability.
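The Kafka/Spark streaming duties above center on windowed aggregation. Although this posting is Java-centric, the idea is language-neutral; here is a small Python sketch of the tumbling-window aggregation a stream processor would perform, with invented event fields:

```python
# Sketch: tumbling-window count/sum over an event stream, the aggregation a
# stream processor (Kafka + Spark Structured Streaming) would compute.
from collections import defaultdict

def tumbling_windows(events, window_seconds):
    """Group (timestamp, amount) events into fixed windows and aggregate."""
    agg = defaultdict(lambda: {"count": 0, "total": 0})
    for ts, amount in events:
        window_start = (ts // window_seconds) * window_seconds
        agg[window_start]["count"] += 1
        agg[window_start]["total"] += amount
    return dict(agg)

events = [(3, 10), (7, 5), (12, 20), (14, 1)]   # (epoch_seconds, amount)
windows = tumbling_windows(events, window_seconds=10)
print(windows)  # {0: {'count': 2, 'total': 15}, 10: {'count': 2, 'total': 21}}
```

A real streaming job would additionally handle late-arriving events via watermarks and emit window results incrementally rather than in one batch.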
Required Skills
- Programming: Strong proficiency in Java; experience with Java-based data engineering frameworks.
- Data Streaming & Processing: Hands-on with Apache Kafka and Apache Spark.
- Data Storage & Querying:
- Familiarity with AWS S3, Data Lakes
- Experience with PostgreSQL, Redis, and Apache Druid
- Data Lake Formats: Knowledge of Apache Iceberg and/or Apache Hudi.
- Job Orchestration: Experience with Apache Livy for Spark job execution.
- Understanding of data modeling, data partitioning, performance tuning, and schema evolution.
Nice to Have:
- Familiarity with DevOps tools and practices (CI/CD, Docker, etc.)
- Spring Cloud framework
- Exposure to data observability or monitoring tools
What You'll Gain
- Real-world experience with cutting-edge big data technologies
- Opportunities to collaborate with skilled professionals
- Flexibility and autonomy in your work
- A strong project to add to your portfolio
- Potential recommendation or reference based on performance
Important Note:
This is an unpaid position intended for individuals looking to gain experience, contribute to impactful projects, or explore new technologies. We deeply value your time and effort and will ensure that your contributions are acknowledged and meaningful.
Data Engineer
Posted today
Job Description
Position Title: Data Engineer
Department: Development/Engineering
Reports To: Data Architect
Supervises: N/A
FLSA Status: Exempt
Shift: Core business hours/extended hours as required to support key deliverables
Work Location: Lagos, Nigeria (Remote)
Position Summary:
The Data Engineer will design, build, and maintain our data infrastructure, ensuring seamless data integration and optimizing data processing for high-performance analytics. The ideal candidate will have a strong background in data engineering, database management, and collaboration with cross-functional teams to drive data-driven decision-making across the organization.
Essential Duties and Responsibilities:
The essential duties of the position include the following, and other duties may be assigned:
- Ensure the seamless integration of data from various sources, enabling consistent and accurate data flow across platforms.
- Optimize data processing and query performance by fine-tuning data pipelines, configuring databases, and implementing effective data partitioning strategies.
- Establish and enforce data quality checks and validations to identify and resolve data inconsistencies, ensuring high-quality, reliable data for downstream applications and analytics.
- Work closely with cross-functional teams, including data scientists, analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions that support business objectives.
- Design and develop robust data infrastructure, including data warehouses, data lakes, and scalable data pipelines, to support the organization's growing data needs.
- Conduct regular design process reviews and ensure adherence to development standards within the data engineering team.
- Monitor and optimize query performance, identify performance bottlenecks, and implement solutions to enhance overall database efficiency and reliability.
- Create and maintain comprehensive documentation related to database design, configuration, and processes, ensuring clarity and continuity for ongoing development and maintenance efforts.
- Stay updated on industry best practices and emerging technologies in data engineering, and proactively implement improvements to enhance data architecture and processes.
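The data-partitioning strategies mentioned in the duties above can be sketched very simply: route each row to a partition by date, which is what lets the warehouse prune partitions a query's date filter does not need. Field names here are illustrative:

```python
# Sketch: date-based partition assignment, the idea behind warehouse
# partition pruning (scan only the partitions a date filter touches).
from collections import defaultdict

def partition_by_day(rows, ts_field="created_at"):
    parts = defaultdict(list)
    for row in rows:
        parts[row[ts_field][:10]].append(row)  # 'YYYY-MM-DD' prefix as key
    return parts

rows = [
    {"id": 1, "created_at": "2024-03-01T09:00:00"},
    {"id": 2, "created_at": "2024-03-01T17:30:00"},
    {"id": 3, "created_at": "2024-03-02T08:15:00"},
]
parts = partition_by_day(rows)
print(sorted(parts))             # ['2024-03-01', '2024-03-02']
print(len(parts["2024-03-01"]))  # 2
```

In a real warehouse the engine applies the same mapping at write time (e.g., a partition column), so a query filtered to one day reads only that day's files.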
Supervisory Responsibilities:
None
Qualifications:
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Education/Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. At least 5 years of experience as a Data Engineer or in a similar role. Experience with data governance, data security, and compliance considerations in data engineering. Proven experience in data engineering, including data integration, pipeline optimization, and database management. Strong knowledge of data modeling, ETL processes, and data warehousing concepts. Proficiency in SQL and experience with database systems such as MySQL, PostgreSQL, or NoSQL databases. Experience with big data technologies and cloud platforms like AWS, Azure, or Google Cloud is a plus. Excellent problem-solving skills and the ability to work effectively in a collaborative, fast-paced environment. Strong communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
Language Ability:
Strong verbal and written communication skills, enabling effective collaboration with technical and non-technical stakeholders. Ability to create clear, concise, and comprehensive technical documentation, including design specifications, user guides, and operational procedures. Proficiency in articulating complex data concepts and solutions to a variety of audiences, ensuring alignment across teams.
Math Ability:
Proficient in mathematical concepts essential for data engineering, including linear algebra, probability, and statistics. Capable of applying mathematical techniques to optimize data storage, query performance, and data processing tasks. Skilled in analyzing data trends, patterns, and anomalies using statistical methods to support data-driven decision-making.
Reasoning Ability:
Demonstrated ability to approach data challenges with logical, structured thinking, and to develop innovative solutions to complex problems. Strong analytical skills for troubleshooting and resolving data issues, optimizing processes, and improving system performance. Ability to evaluate the impact of design choices on system scalability, efficiency, and data integrity, ensuring robust and reliable data architecture.
Computer Skills:
Advanced proficiency in SQL, including complex queries, indexing, and query optimization within various database systems (e.g., MySQL, PostgreSQL, Oracle). Hands-on experience with data pipeline tools and big data technologies, enabling efficient data processing and integration. Expertise in cloud computing platforms (e.g., AWS, Azure, Google Cloud), with experience in deploying and managing data infrastructure in a cloud environment. Proficient in scripting languages (e.g., Python, Bash) for automating data workflows and performing data transformations. Familiarity with version control systems (e.g., Git) and collaboration tools (e.g., Jira, Confluence) to manage code and track project progress.
Certificates and Licenses:
Relevant certifications are highly desirable.
Travel:
This position does not require travel.
ISM and Privacy Statement:
- Data Security and Encryption: Ensure that sensitive data, especially personal and financial data, is encrypted and securely stored within systems.
- Access Control and Authorization: Implement role-based access controls and ensure that system permissions align with job functions.
- System Security: Perform regular security audits, vulnerability testing, and maintain a secure development environment to avoid unauthorized data access.
- Compliance with Privacy Laws: Ensure compliance with privacy regulations (e.g., GDPR) and data retention policies when handling personal data.
What We Offer:
- Competitive salary
- Comprehensive medical coverage
- Access to a gym and spa
- Pension plan
- The opportunity to collaborate with brilliant minds across a global team
Ready to take your career to the next level? Join us at ExamRoom.AI, where your contributions make a difference.
Data Engineer
Data2Bots
Software & Data
Abuja | Full Time
IT & Telecoms | Confidential
- Minimum Qualification:
- Experience Level: Mid level
- Experience Length: 3 years
Job Summary
We are looking for a highly skilled data engineer to design, develop, and deploy robust data solutions that enhance our clients' digital transformation initiatives at Data2Bots. You will be responsible for building scalable data pipelines, developing ETL processes, and integrating data capabilities into diverse client systems across multiple industries. This role is ideal for someone who enjoys problem-solving, has strong software engineering fundamentals, and is passionate about turning complex data into real-world business applications that drive measurable client outcomes.
Key Responsibilities
- Design, develop, and optimize scalable data pipelines and ETL processes to ingest, transform, and load large, complex datasets from diverse sources into data warehouses and data lakes.
- Build and maintain robust data infrastructure, ensuring data quality, integrity, and accessibility for analytics, machine learning, and business intelligence initiatives.
- Implement data governance policies, data security measures, and data privacy regulations (e.g., GDPR, CCPA) to ensure compliance and protect sensitive information.
- Develop and integrate data solutions into cloud-based and on-premise environments, ensuring seamless deployment across client infrastructures.
- Work with structured and unstructured data from various sources, applying advanced data modeling, schema design, and data optimization techniques.
- Collaborate with cross-functional teams, including data scientists, software engineers, DevOps specialists, and business analysts, to align data solutions with client business objectives and technical requirements.
- Monitor data pipeline performance, troubleshoot data-related issues, and continuously improve data processing efficiency, scalability, and cost-effectiveness.
- Research and stay up to date with the latest data engineering advancements, frameworks, and industry best practices to enhance Data2Bots' data capabilities and competitive advantage.
- Optimize data systems for efficiency, scalability, and real-time processing to meet demanding client performance requirements.
- Participate in client meetings, technical presentations, and solution demonstrations to communicate data engineering capabilities and project outcomes effectively.
- Mentor junior team members and contribute to Data2Bots' knowledge base through documentation, best practices, and technical guidelines.
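The pipeline responsibilities above follow the classic extract-transform-load shape. A minimal, illustrative Python sketch, with the source records, field names, and target table all invented for the example:

```python
import sqlite3

def extract():
    # Stand-in for pulling records from a source system or API;
    # the order data here is fabricated for illustration.
    return [
        {"order_id": 1, "amount": "150.00", "currency": "NGN"},
        {"order_id": 2, "amount": "89.50", "currency": "NGN"},
        {"order_id": 2, "amount": "89.50", "currency": "NGN"},  # duplicate row
    ]

def transform(rows):
    # Cast string amounts to floats and deduplicate on the business key.
    seen, cleaned = set(), []
    for row in rows:
        if row["order_id"] in seen:
            continue
        seen.add(row["order_id"])
        cleaned.append({"order_id": row["order_id"], "amount": float(row["amount"])})
    return cleaned

def load(rows, conn):
    # Target warehouse table; in practice this would be Redshift,
    # BigQuery, Snowflake, etc. rather than SQLite.
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (:order_id, :amount)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2
```

In a production pipeline each step would typically be an orchestrated task (e.g. an Airflow operator) with retries, logging, and data-quality checks between stages.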
Required Skills & Qualifications
- Bachelor's or master's degree in computer science, data engineering, data science, or a related technical field.
- Minimum 3-5 years of proven experience in data engineering, data warehousing, and building scalable data pipelines in business environments.
- Hands-on experience with big data technologies such as Apache Spark, Hadoop, Kafka, Flink, or other distributed computing frameworks for handling large-scale datasets.
- Strong understanding of data warehousing concepts, ETL/ELT processes, and data modeling techniques (e.g., Kimball, Inmon).
- Extensive experience working with cloud platforms (AWS, Azure, GCP) and their data services (e.g., S3, Redshift, Snowflake, BigQuery, Azure Data Lake, Databricks).
- Proficiency in SQL and experience with various relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
- Experience with data orchestration tools and practices, including Apache Airflow, Luigi, or other workflow management systems, along with CI/CD pipelines for data solutions.
- Strong analytical and problem-solving skills with demonstrated ability to translate business requirements into technical data solutions.
- Excellent communication and teamwork skills, with proven ability to explain complex data concepts to both technical and non-technical stakeholders in client-facing environments.
- Experience working in consulting or client services environments with ability to manage multiple projects simultaneously.
- Demonstrated track record of delivering robust data engineering projects from conception to production deployment.
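The Kimball-style dimensional modelling asked for above can be illustrated with a toy star schema: a fact table of measures joined to a descriptive dimension table. The table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Dimension table: descriptive attributes used for slicing and filtering.
conn.execute(
    "CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, segment TEXT)"
)
# Fact table: numeric measures plus surrogate keys into the dimensions.
conn.execute(
    "CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, customer_key INTEGER, amount REAL)"
)
conn.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                 [(1, "Ada", "retail"), (2, "Bayo", "corporate")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(10, 1, 200.0), (11, 1, 50.0), (12, 2, 1000.0)])

# Typical analytical query: aggregate the facts, slice by a dimension attribute.
rows = conn.execute("""
    SELECT d.segment, SUM(f.amount)
    FROM fact_sales f JOIN dim_customer d USING (customer_key)
    GROUP BY d.segment ORDER BY d.segment
""").fetchall()
print(rows)  # [('corporate', 1000.0), ('retail', 250.0)]
```

The same fact/dimension split underlies most BI-facing warehouse designs; denormalized "one big table" models trade this join away for simpler, sometimes faster, consumption.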
Preferred Qualifications
- Experience in deploying data solutions in real-world applications such as real-time analytics platforms, data lakes for machine learning, data migration projects, or data warehousing for business intelligence.
- Familiarity with data governance tools, master data management (MDM), and data quality frameworks.
- Knowledge of stream processing technologies and event-driven architectures.
- Experience with specific industry domains, including finance, healthcare, retail, manufacturing, telecommunications, or government sectors.
- Contributions to open-source data projects or technical community initiatives.
- Experience with API development, microservices architecture, and system integration for data applications.
- Familiarity with agile development methodologies and project management tools used in consulting environments.
- Experience mentoring technical teams or leading data initiative implementations.
- Certifications in cloud data engineering services (e.g., AWS Certified Data Analytics, Azure Data Engineer Associate, Google Cloud Professional Data Engineer) or relevant data specializations.