267 Hadoop jobs in Kuala Lumpur

Deployment Engineer (Hadoop)

Kuala Lumpur, Kuala Lumpur | Tookitaki

Posted 10 days ago

Job Description

Position Overview

Job Title: Deployment Engineer (Hadoop)

Department: Services Delivery

Reporting To: Regional Head of Service Delivery

The Deployment Engineer is a critical technical role within Tookitaki’s Services Delivery team. This position is responsible for deploying Tookitaki’s FinCense platform across both on-premise and cloud-hosted (CaaS) environments. The role requires a strong understanding of infrastructure, system integrations, and big data technologies to ensure successful deployments for banking and fintech clients.

Position Purpose

The Deployment Engineer ensures smooth and efficient deployment of Tookitaki’s FinCense platform by working closely with internal teams and client stakeholders. This includes end-to-end deployment, configuration, and initial troubleshooting, ensuring the platform is fully functional and integrated into the client’s infrastructure. This role is critical for achieving a seamless transition during the implementation phase, setting the foundation for client success.

Key Responsibilities
  • 1. Deployment of FinCense Platform
    • On-Premise Deployments: Install and configure the FinCense platform on the client’s infrastructure; ensure full integration with client systems and databases; conduct rigorous testing to validate deployment success.
  • 2. Cloud (CaaS) Deployments
    • Install and configure client tenants on Tookitaki’s cloud-hosted infrastructure (AWS/GCP); ensure scalability and seamless operation within the hosted cloud environment.
  • 3. System Configuration and Integration
    • Configure system settings, APIs, and data pipelines to meet client-specific requirements; collaborate with Data Engineers to integrate client data into the platform; optimize system performance for stability and scalability.
  • 4. Client Collaboration and Support
    • Work with the Client Enablement team to gather deployment requirements; provide technical guidance to client teams during the deployment phase; act as the primary technical point of contact for deployment-related queries.
  • 5. Post-Deployment Validation
    • Conduct end-to-end system tests to validate platform performance and functionality; resolve deployment issues and ensure the platform meets agreed-upon SLAs; document deployment processes and configurations for future reference.
  • 6. Cross-Functional Collaboration
    • Collaborate with Product Management and Engineering teams to address technical challenges during deployment; provide feedback to improve product features and deployment efficiency; support the handover to Client Enablement and Support teams.
Qualifications and Skills
  • Education: Required: Bachelor’s degree in Computer Science, IT, or a related technical field; Preferred: Master’s degree in IT, Cloud Computing, or Big Data Analytics.
  • Experience: Minimum 6 years of experience in system deployment, cloud computing, or IT infrastructure; proven expertise deploying SaaS platforms or big data systems for financial or regulated industries.
  • Technical Expertise (all required)
    • Big Data Technologies: Hadoop, Spark, Hive, Kubernetes, and Docker.
    • Cloud Infrastructure: Hands-on experience with AWS (preferred) or GCP, including EC2, S3, and VPC configuration (see the sketch after this list).
    • System Integration: Proficient in integrating systems via APIs, connectors, and middleware solutions.
    • Scripting and Automation: Proficiency in Python, Bash, or PowerShell.
  • Soft Skills: Excellent problem-solving and troubleshooting skills; strong communication skills for interacting with technical and non-technical stakeholders; ability to manage multiple deployments while meeting deadlines.
  • Preferred: Certifications in AWS, Kubernetes, or Big Data technologies; experience with AML and fraud detection systems is a strong plus.
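
The cloud-infrastructure item above can be made concrete with a short readiness check. The sketch below is only an illustration, assuming hypothetical resource names: it uses boto3 to confirm that an S3 bucket and an EC2 instance intended for a tenant are reachable before a deployment starts. The region, bucket, and instance ID are placeholders, not values from this posting.

    # Hypothetical pre-deployment check: confirm the AWS resources a tenant
    # would use are reachable. Region, bucket, and instance ID are placeholders.
    import boto3
    from botocore.exceptions import ClientError

    REGION = "ap-southeast-1"            # assumed region
    BUCKET = "fincense-tenant-staging"   # hypothetical bucket name
    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical EC2 instance

    def s3_reachable(bucket: str) -> bool:
        try:
            boto3.client("s3", region_name=REGION).head_bucket(Bucket=bucket)
            return True
        except ClientError as err:
            print(f"S3 check failed for {bucket}: {err}")
            return False

    def ec2_running(instance_id: str) -> bool:
        ec2 = boto3.client("ec2", region_name=REGION)
        resp = ec2.describe_instance_status(InstanceIds=[instance_id],
                                            IncludeAllInstances=True)
        statuses = resp.get("InstanceStatuses", [])
        return bool(statuses) and statuses[0]["InstanceState"]["Name"] == "running"

    if __name__ == "__main__":
        ready = s3_reachable(BUCKET) and ec2_running(INSTANCE_ID)
        print("environment ready" if ready else "environment NOT ready")
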
Key Competencies
  • Client-Centric Approach; Technical Acumen in big data and cloud technologies; Collaboration across cross-functional teams; Ownership of deployment activities; Adaptability in dynamic environments.
Benefits
  • Competitive salary; Professional development opportunities; Comprehensive benefits including health insurance and flexible working options; Growth opportunities within Tookitaki’s Services Delivery
About Tookitaki

Tookitaki is transforming financial services by building a robust trust layer focused on fraud prevention and AML compliance. Our solutions leverage collaborative intelligence and federated AI for real-time detection and regulatory compliance.

Big Data Hadoop Developer

Kuala Lumpur, Kuala Lumpur | Unison Consulting Pte Ltd

Posted 2 days ago

Job Description

Job Summary:
We are looking for a Big Data Hadoop Developer to design, develop, and maintain large-scale data processing solutions. The ideal candidate should have strong hands-on experience with the Hadoop ecosystem and integration with relational databases such as MariaDB or Oracle DB for analytics and reporting.

Key Responsibilities:

  • Design, develop, and optimize Hadoop-based big data solutions for batch and real-time data processing.
  • Work with data ingestion frameworks to integrate data from MariaDB/Oracle DB into Hadoop (Sqoop, Apache NiFi, Kafka); a minimal PySpark sketch follows this list.
  • Implement Hive, Spark, and MapReduce jobs for data transformation and analytics.
  • Optimize Hive queries, Spark jobs, and HDFS usage for performance and cost efficiency.
  • Create and maintain ETL pipelines for structured and unstructured data.
  • Troubleshoot and resolve issues in Hadoop jobs and database connectivity.
  • Collaborate with BI, analytics, and data science teams for data provisioning.
  • Ensure data security, governance, and compliance in all solutions.
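
As an illustration of the RDBMS-to-Hadoop ingestion responsibility above, the following is a minimal PySpark sketch, assuming a MariaDB source and a date-partitioned Hive target; the JDBC URL, credentials, and table names are hypothetical placeholders, not part of this posting.

    # Minimal ingestion sketch: MariaDB table -> date-partitioned Hive table.
    # Connection details and table names below are placeholders.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("mariadb-to-hive-ingest")
             .enableHiveSupport()
             .getOrCreate())

    source_df = (spark.read.format("jdbc")
                 .option("url", "jdbc:mariadb://db-host:3306/sales")  # hypothetical host/schema
                 .option("dbtable", "transactions")
                 .option("user", "etl_user")
                 .option("password", "***")
                 .option("driver", "org.mariadb.jdbc.Driver")
                 .load())

    # Light transformation, then append into a Hive table partitioned by date.
    (source_df
        .withColumnRenamed("txn_dt", "txn_date")
        .write.mode("append")
        .partitionBy("txn_date")
        .format("parquet")
        .saveAsTable("analytics.transactions"))

    spark.stop()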

Technical Skills:

  • Big Data Ecosystem: Hadoop (HDFS, YARN), Hive, Spark, Sqoop, MapReduce, Oozie, Flume.
  • Databases: MariaDB and/or Oracle DB (SQL, PL/SQL).
  • Programming: Java, Scala, or Python for Spark/MapReduce development.
  • Data Ingestion: Sqoop, Kafka, NiFi (for integrating RDBMS with Hadoop).
  • Query Optimization: Hive tuning, partitioning, bucketing, indexing (see the layout sketch after this list).
  • Tools: Ambari, Cloudera Manager, Git, Jenkins.
  • OS & Scripting: Linux/Unix shell scripting.

Soft Skills:

  • Strong analytical skills and problem-solving abilities.
  • Good communication skills for working with cross-functional teams.
  • Ability to manage priorities in a fast-paced environment.

Nice to Have:

  • Experience with cloud-based big data platforms (AWS EMR, Azure HDInsight, GCP Dataproc).
  • Knowledge of NoSQL databases (HBase, Cassandra).
  • Exposure to machine learning integration with Hadoop/Spark.

ETL Developer (Informatica/Hadoop)

Kuala Lumpur, Kuala Lumpur | Maybank

Posted 10 days ago

Job Description

Overview

Maybank | Federal Territory of Kuala Lumpur, Malaysia

ETL Developer (Informatica/Hadoop)

Responsibilities
  • Able to troubleshoot errors on Informatica, Oracle Data Integrator (ODI), Teradata, and Hadoop platforms, and support applications built on top of them.
  • Strong problem-solving knowledge of databases and platforms such as Hadoop and Teradata, and strong SQL skills.
  • Monitor jobs/workflows and work with technical teams to derive a permanent fix.
  • Hands-on experience building and troubleshooting Informatica mappings.
  • Working experience handling and managing ETL processes and data warehousing platforms.
  • Hands-on experience writing, debugging, and testing shell scripts.
  • Hands-on experience with Teradata utilities such as BTEQ, FastLoad, MultiLoad, and TPT.
  • Experience writing, debugging, and testing Hive scripts (a minimal job-wrapper sketch follows this list).
  • Hands-on experience with any scheduling tool.
  • Ability to logically prioritize tasks and schedule work accordingly.
  • Deep understanding of data warehouse concepts.
  • Hands-on experience with incident management, problem management, and change management processes.
  • Strong team player who is also able to work as an individual contributor.
  • Ability to interact with both technical and non-technical users and address their queries.
  • Strong analytical and problem-solving skills.
  • Good working experience in a support role.
  • Flexibility with working hours.
  • Hands-on experience with Informatica and Oracle Data Integrator.
  • Big Data/Hadoop.
  • Java/Python.
  • UNIX shell scripting.
  • Experience with any scheduling tool.
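
As a small illustration of the Hive-script and job-monitoring items above, the Python sketch below submits a HiveQL file through beeline, retries once after a failure, and exits non-zero so whichever scheduling tool is in use can raise an incident; the JDBC URL and script path are hypothetical placeholders.

    # Hypothetical job wrapper: run a Hive script via beeline, retry once on
    # failure, and exit non-zero so the scheduling tool can flag the run.
    import subprocess
    import sys
    import time

    HIVE_URL = "jdbc:hive2://hive-host:10000/default"  # placeholder connection
    SCRIPT = "/opt/etl/daily_load.hql"                 # placeholder script path

    def run_hive_script(url: str, path: str) -> int:
        result = subprocess.run(["beeline", "-u", url, "-f", path],
                                capture_output=True, text=True)
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        rc = run_hive_script(HIVE_URL, SCRIPT)
        if rc != 0:
            time.sleep(60)                      # simple backoff before one retry
            rc = run_hive_script(HIVE_URL, SCRIPT)
        sys.exit(rc)  # non-zero exit lets the scheduler raise an incident
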
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Banking

Deployment Engineer- MY (Big Data / Hadoop Admin)

Kuala Lumpur, Kuala Lumpur | Tookitaki Holding PTE LTD

Posted 7 days ago

Job Description

Position Overview
Job Title: Deployment Engineer
Department: Services Delivery
Reporting To: Regional Head of Service Delivery


The Deployment Engineer is a critical technical role within Tookitaki’s Services Delivery team. This position is responsible for deploying Tookitaki’s FinCense platform across both on-premise and cloud-hosted (CaaS) environments. The role requires a strong understanding of infrastructure, system integrations, and big data technologies to ensure successful deployments for banking and fintech clients.

Position Purpose

The Deployment Engineer ensures smooth and efficient deployment of Tookitaki’s FinCense platform by working closely with internal teams and client stakeholders. This includes end-to-end deployment, configuration, and initial troubleshooting, ensuring the platform is fully functional and integrated into the client’s infrastructure.

This role is critical for achieving a seamless transition during the implementation phase, setting the foundation for client success.

Key Responsibilities

1. Deployment of FinCense Platform
  • On-Premise Deployment:

    • Install and configure the entire FinCense platform on the client’s infrastructure.

    • Ensure full integration with the client’s existing systems and databases.

    • Conduct rigorous testing to validate deployment success.

  • CaaS Deployment:

    • Install and configure client tenants on Tookitaki’s cloud-hosted infrastructure (AWS/GCP).

    • Ensure scalability and seamless operation within Tookitaki’s hosted cloud environment.

2. System Configuration and Integration
  • Configure system settings, APIs, and data pipelines to meet client-specific requirements.

  • Collaborate with Data Engineers to integrate client data into the platform.

  • Optimize system performance to ensure stability and scalability.

3. Client Collaboration and Support
  • Work closely with the Client Enablement team to gather deployment requirements and ensure alignment with client needs.

  • Provide technical guidance to client teams during the deployment phase.

  • Act as the primary technical point of contact for all deployment-related queries.

4. Post-Deployment Validation
  • Conduct end-to-end system tests to validate the platform’s performance and functionality (a minimal smoke-test sketch follows this subsection).

  • Resolve any deployment issues and ensure the platform meets agreed-upon SLAs.

  • Document deployment processes and configurations for future reference.
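
A minimal sketch of such an end-to-end check is shown below; it assumes a health endpoint exposed by the deployed platform and shell access to the Hadoop cluster. The URL and HDFS path are placeholders, not documented FinCense interfaces.

    # Hypothetical post-deployment smoke test: the health URL and HDFS path
    # are placeholders, not documented FinCense endpoints.
    import subprocess
    import requests

    HEALTH_URL = "https://fincense.example-client.internal/health"  # assumed endpoint
    HDFS_LANDING_DIR = "/data/fincense/landing"                     # assumed path

    def api_healthy(url: str) -> bool:
        try:
            return requests.get(url, timeout=10).status_code == 200
        except requests.RequestException as err:
            print(f"health check failed: {err}")
            return False

    def hdfs_reachable(path: str) -> bool:
        # 'hdfs dfs -test -d' exits 0 when the directory exists and is reachable.
        return subprocess.run(["hdfs", "dfs", "-test", "-d", path]).returncode == 0

    if __name__ == "__main__":
        checks = {"platform API": api_healthy(HEALTH_URL),
                  "HDFS landing dir": hdfs_reachable(HDFS_LANDING_DIR)}
        for name, ok in checks.items():
            print(f"{name}: {'OK' if ok else 'FAILED'}")
        if not all(checks.values()):
            raise SystemExit(1)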

5. Collaboration with Cross-Functional Teams
  • Work closely with Product Management and Engineering teams to address technical challenges during deployment.

  • Provide feedback on deployment experiences to improve product features and deployment efficiency.

  • Support the Client Enablement and Support teams during the handover process.

Qualifications and Skills

Education
  • Required: Bachelor’s degree in Computer Science, IT, or a related technical field.

  • Preferred: Master’s degree in IT, Cloud Computing, or Big Data Analytics.

Experience
  • Minimum: 6 years of experience in system deployment, cloud computing, or IT infrastructure roles.

  • Proven expertise in deploying SaaS platforms or big data systems for financial or regulated industries.


Technical Expertise
  • Big Data Technologies: Strong knowledge of Hadoop, Spark, Hive, Kubernetes, and Docker.

  • Cloud Infrastructure: Hands-on experience with AWS (preferred) or GCP, including EC2, S3, and VPC configurations.

  • System Integration: Proficient in integrating systems via APIs, connectors, and middleware solutions.

  • Scripting and Automation: Proficiency in scripting languages such as Python, Bash, or PowerShell.

Soft Skills
  • Excellent problem-solving and troubleshooting skills.

  • Strong communication skills to interact with technical and non-technical stakeholders.

  • Ability to manage multiple deployments simultaneously while meeting deadlines.

Preferred
  • Certifications in AWS, Kubernetes, or Big Data technologies.

  • Experience with AML and fraud detection systems is a strong plus.

Key Competencies

  • Client-Centric Approach: Focused on delivering high-quality deployments tailored to client needs.

  • Technical Acumen: Expertise in big data and cloud technologies to ensure flawless deployments.

  • Collaboration: Works effectively with cross-functional teams to ensure deployment success.

  • Ownership: Takes full responsibility for deployment activities and outcomes.

  • Adaptability: Thrives in dynamic environments with changing requirements.

Success Metrics

  • Deployment Accuracy: 100% of deployments completed successfully without post-deployment issues.

  • Timeliness: Deployments delivered within agreed timelines for both on-premise and CaaS clients.

  • System Performance: Achieve target SLAs for platform performance and stability post-deployment.

  • Client Satisfaction: Positive feedback from clients on deployment experience and system functionality.

  • Knowledge Sharing: Maintain and share deployment documentation to improve team efficiency.

AVP, Data Integration (pySpark, Nifi, Hadoop)

Kuala Lumpur, Kuala Lumpur | Maybank

Posted 4 days ago

Job Description

AVP, Data Integration (pySpark, Nifi, Hadoop)

Maybank | WP. Kuala Lumpur, Federal Territory of Kuala Lumpur, Malaysia

• Implement ETL systems that are operationally stable, efficient, and automated. This includes technical solutions that are scalable, aligned with the enterprise architecture, and adaptable to business changes.
• Collaborate with internal and external teams to define requirements for data integrations, specifically for Data Warehouse/Data Marts implementations.

Responsibilities of the Role

• Review business and technical requirements to ensure the data integration platform meets specifications.
• Apply industry best practices for ETL design and development.
• Produce technical design documents, system testing plans, and implementation documentation.
• Conduct system testing: execute job flows, investigate and resolve system defects, and document results.
• Work with DBAs, application specialists, and technical support teams to optimize ETL system performance and meet SLAs.
• Assist in developing, documenting, and applying best practices and procedures.
• Strong SQL writing skills are required.
• Familiarity with ETL tools such as pySpark, NiFi, Informatica, and Hadoop is preferred.
• Understanding of data integration best practices, including master data management, entity resolution, data quality, and metadata management.
• Experience with data warehouse architecture, source system data analysis, and data profiling (a minimal profiling sketch follows this list).
• Ability to work effectively in a fast-paced, adaptive environment.
• Financial domain experience is a plus.
• Ability to work independently and communicate effectively across various levels, including product owners, executive sponsors, and team members.
• Experience working in an Agile environment is advantageous.
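
As a concrete example of the profiling and data-quality work referenced above, the PySpark sketch below computes the row count and per-column null rates of a staging table before a Data Mart load, failing the step when a threshold is exceeded; the table name and the 5% threshold are illustrative assumptions, not requirements from this posting.

    # Illustrative source-profiling step before a Data Mart load: row count and
    # per-column null rate. Table name and threshold are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("staging-profile")
             .enableHiveSupport()
             .getOrCreate())

    df = spark.table("staging.customer_txn")   # hypothetical staging table
    total = df.count()
    if total == 0:
        raise SystemExit("data quality gate failed: staging table is empty")

    null_rates = df.select([
        (F.sum(F.col(c).isNull().cast("int")) / F.lit(total)).alias(c)
        for c in df.columns
    ]).first().asDict()

    print(f"rows: {total}")
    for column, rate in null_rates.items():
        print(f"{column}: {rate:.2%} null")

    # Fail the pipeline step if any column exceeds the agreed null threshold.
    if any(rate > 0.05 for rate in null_rates.values()):
        raise SystemExit("data quality gate failed: null rate above 5%")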

Qualifications

• Bachelor’s Degree in Computer Science, Information Technology, or equivalent.
• Over 5 years of total work experience, with experience programming ETL processes using Informatica, NiFi, pySpark, and Hadoop.
• At least 4 years of experience in data analysis, profiling, and designing ETL systems/programs.
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Banking