Responsibilities:
* Design, develop, and maintain data pipelines and ETL processes using AWS services (Glue, Athena, S3, RDS)
* Work with data virtualisation tools like Denodo and develop VQL queries
* Ingest and process data from various internal and external data sources
* Perform data extraction, cleaning, transformation, and loading operations
* Implement automated data collection processes, including API integrations where necessary
* Design and implement data models (conceptual, logical, and physical) using tools like ER Studio
* Develop and maintain data warehouses, data lakes, and operational data stores
* Develop and maintain data blueprints
* Create data marts and analytical views to support business intelligence needs using Denodo and RDS
* Implement master data management practices and data governance standards

Requirements:
* At least 3 years of experience in data engineering or a similar role
* Strong proficiency in Python, VQL, and SQL
* Experience with AWS services (Glue, Athena, S3, RDS, SageMaker)
* Knowledge of data virtualisation concepts and tools (preferably Denodo)
* Experience with BI tools (preferably Tableau or Power BI)
* Understanding of data modelling and database design principles
* Familiarity with data governance and master data management concepts
* Experience with version control systems (GitLab) and CI/CD pipelines
* Experience working in Agile environments with iterative development practices
* Strong problem-solving skills and attention to detail
* Excellent communication skills and the ability to work in a team environment
* Knowledge of AI technologies (AWS Bedrock, Azure AI, LLMs) would be advantageous