2,166 Hadoop jobs in Malaysia

Hadoop Developer

Kuala Lumpur, Kuala Lumpur | MYR120,000 - MYR240,000 per year | Luxoft

Posted today

Job Description

Project Description:

Our client, a leading bank in Asia with a global network of more than 500 branches and offices in 19 countries and territories across Asia Pacific, Europe, and North America, is looking for consultants to be part of the project.

The Technology and Operations function comprises five teams of specialists with distinct capabilities: business partnership, technology, operations, risk governance, and planning support and services. They work closely together to harness the power of technology to support the bank's physical and digital banking services and operations. This includes developing, centralising, and standardising technology systems as well as banking operations in Malaysia and overseas branches.

The client has more than 80 years of history in the banking industry and is expanding its footprint in Malaysia. You will be working in a newly set-up technology centre located in Kuala Lumpur as part of Technology and Operations to deliver innovative financial technology solutions that enable business growth and technology transformation.

Responsibilities:

• Design, develop, and enhance AML applications using Tookitaki and the Hadoop ecosystem (see the Hive sketch after this list).
• Work on big data solutions, including HDFS, Hive, and related components for AML monitoring and reporting.
• Develop and maintain Unix shell scripts (ksh, Perl, etc.) for automation and data processing.
• Support application deployment and middleware (JBoss) configurations.
• Write and optimize SQL queries (MariaDB/Oracle) to handle large data sets (not a DBA role).
• Collaborate with business and compliance teams to translate AML requirements into technical solutions.
• Troubleshoot and resolve L3 issues related to AML platforms, ensuring system stability and performance.
• Participate in code reviews, unit testing, and system integration testing.
• Provide production support escalation (L3) and root cause analysis (RCA) for recurring technical issues.
• Create and maintain technical documentation and a knowledge base for AML systems.
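
As a concrete illustration of the Hive-based AML monitoring described above, here is a minimal sketch of an aggregation driven from Python via PyHive. The host, schema, table, and threshold are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: flag accounts whose same-day transaction volume exceeds
# a threshold, the kind of Hive aggregation an AML monitoring job runs.
# Host, schema, table, and threshold are hypothetical placeholders.
from pyhive import hive

conn = hive.Connection(
    host="hive-gateway.example.com",  # hypothetical Hive server
    port=10000,
    username="aml_svc",
    database="aml",
)
cur = conn.cursor()
cur.execute("""
    SELECT account_id,
           SUM(amount) AS total_amount,
           COUNT(*)    AS txn_count
    FROM   transactions
    WHERE  txn_date = CURRENT_DATE
    GROUP  BY account_id
    HAVING SUM(amount) > 250000  -- illustrative threshold
""")
for account_id, total_amount, txn_count in cur.fetchall():
    print(f"ALERT {account_id}: {total_amount:.2f} across {txn_count} txns")
conn.close()
```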

Mandatory Skills Description:

• Hands-on experience with Hadoop (HDFS, Hive).
• Strong Unix/Linux scripting skills.
• JBoss deployment knowledge (see the deployment sketch after this list).
• SQL expertise on MariaDB/Oracle.
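
For the JBoss item, deployments are usually scripted rather than done by hand. A minimal sketch follows, assuming a standard WildFly/JBoss EAP install with the management interface on its default port; the install path, controller address, and WAR path are hypothetical.

```python
# Minimal sketch: scripted deployment through the jboss-cli tool.
# Install path, controller address, and WAR path are hypothetical.
import subprocess

def deploy_war(war_path: str, controller: str = "localhost:9990") -> None:
    subprocess.run(
        [
            "/opt/jboss/bin/jboss-cli.sh",  # hypothetical install path
            "--connect",
            f"--controller={controller}",
            f"--command=deploy {war_path} --force",  # --force redeploys an existing artifact
        ],
        check=True,  # raise CalledProcessError if the CLI exits non-zero
    )

deploy_war("/tmp/aml-app.war")
```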

Nice-to-Have Skills Description:

Java, shell scripting (ksh, Perl, etc.), Python, PL/SQL


Big Data Hadoop Developer

Kuala Lumpur, Kuala Lumpur | Unison Consulting Pte Ltd

Posted 14 days ago

Job Description

Job Summary:
We are looking for a Big Data Hadoop Developer to design, develop, and maintain large-scale data processing solutions. The ideal candidate should have strong hands-on experience with the Hadoop ecosystem and integration with relational databases such as MariaDB or Oracle DB for analytics and reporting.

Key Responsibilities:

  • Design, develop, and optimize Hadoop-based big data solutions for batch and real-time data processing.
  • Work with data ingestion frameworks to integrate data from MariaDB/Oracle DB into Hadoop (Sqoop, Apache NiFi, Kafka); see the ingestion sketch after this list.
  • Implement Hive, Spark, and MapReduce jobs for data transformation and analytics.
  • Optimize Hive queries, Spark jobs, and HDFS usage for performance and cost efficiency.
  • Create and maintain ETL pipelines for structured and unstructured data.
  • Troubleshoot and resolve issues in Hadoop jobs and database connectivity.
  • Collaborate with BI, analytics, and data science teams for data provisioning.
  • Ensure data security, governance, and compliance in all solutions.
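
The posting names Sqoop, NiFi, and Kafka for RDBMS-to-Hadoop ingestion; the sketch below shows the same movement using Spark's built-in JDBC reader as an equivalent route, since Spark also appears in this stack. The URL, credentials, and table names are hypothetical, and the MariaDB JDBC driver must be on the Spark classpath.

```python
# Minimal sketch: pull a MariaDB table into Hadoop and expose it as a
# partitioned Hive table. URL, credentials, and names are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("mariadb_to_hive_ingest")
         .enableHiveSupport()
         .getOrCreate())

src = (spark.read.format("jdbc")
       .option("url", "jdbc:mariadb://db.example.com:3306/sales")
       .option("dbtable", "orders")
       .option("user", "etl_user")
       .option("password", "REDACTED")
       .option("fetchsize", "10000")  # stream rows instead of buffering them all
       .load())

(src.write.mode("overwrite")
    .partitionBy("order_date")       # one HDFS directory per day
    .saveAsTable("staging.orders"))
```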

Technical Skills:

  • Big Data Ecosystem: Hadoop (HDFS, YARN), Hive, Spark, Sqoop, MapReduce, Oozie, Flume.
  • Databases: MariaDB and/or Oracle DB (SQL, PL/SQL).
  • Programming: Java, Scala, or Python for Spark/MapReduce development.
  • Data Ingestion: Sqoop, Kafka, NiFi (for integrating RDBMS with Hadoop).
  • Query Optimization: Hive tuning, partitioning, bucketing, indexing (see the DDL sketch after this list).
  • Tools: Ambari, Cloudera Manager, Git, Jenkins.
  • OS & Scripting: Linux/Unix shell scripting.
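
To make the partitioning and bucketing item concrete, here is a minimal HiveQL sketch run through PyHive. All schema, table, and column names are hypothetical, and 32 buckets is an arbitrary illustrative choice.

```python
# Minimal sketch: a date-partitioned, bucketed Hive table plus a
# dynamic-partition load. All object names are hypothetical.
from pyhive import hive

conn = hive.Connection(host="hive-gateway.example.com", port=10000)
cur = conn.cursor()

# Partitioning prunes whole directories when queries filter on txn_date;
# bucketing on account_id clusters rows for faster joins on that key.
cur.execute("""
    CREATE TABLE IF NOT EXISTS analytics.txn_facts (
        account_id BIGINT,
        amount     DECIMAL(18,2)
    )
    PARTITIONED BY (txn_date DATE)
    CLUSTERED BY (account_id) INTO 32 BUCKETS
    STORED AS ORC
""")

# Allow Hive to route each row to its txn_date partition at insert time.
cur.execute("SET hive.exec.dynamic.partition.mode=nonstrict")
cur.execute("""
    INSERT OVERWRITE TABLE analytics.txn_facts PARTITION (txn_date)
    SELECT account_id, amount, txn_date FROM staging.raw_txns
""")
```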

Soft Skills:

  • Strong analytical skills and problem-solving abilities.
  • Good communication skills for working with cross-functional teams.
  • Ability to manage priorities in a fast-paced environment.

Nice to Have:

  • Experience with cloud-based big data platforms (AWS EMR, Azure HDInsight, GCP Dataproc).
  • Knowledge of NoSQL databases (HBase, Cassandra).
  • Exposure to machine learning integration with Hadoop/Spark.


ETL Developer (Informatica/Hadoop)

Kuala Lumpur, Kuala Lumpur | Maybank

Posted 8 days ago

Job Description

Overview

Maybank, Federal Territory of Kuala Lumpur, Malaysia

Responsibilities
  • Troubleshoot errors on the Informatica, Oracle Data Integrator (ODI), Teradata, and Hadoop platforms, and support the applications built on top of them.
  • Strong problem-solving knowledge of databases and platforms such as Hadoop, Teradata, and SQL.
  • Monitor jobs/workflows and work with technical teams to derive a permanent fix.
  • Hands-on experience in building and troubleshooting Informatica mappings.
  • Working experience in handling and managing ETL processes and a data warehousing platform.
  • Hands-on experience in writing, debugging, and testing shell scripts.
  • Hands-on experience with Teradata utilities such as BTEQ, FastLoad, MultiLoad, and TPT (see the BTEQ sketch after this list).
  • Experience in writing, debugging, and testing Hive scripts.
  • Hands-on experience with any scheduling tool.
  • Ability to logically prioritize tasks and schedule work accordingly.
  • Deep understanding of data warehouse concepts.
  • Hands-on experience with incident management, problem management, and change management processes.
  • Strong team player who is also able to work as an individual contributor.
  • Ability to interact with both technical and non-technical users and address their queries.
  • Strong analytical and problem-solving skills.
  • Good working experience in a support role.
  • Flexibility with working hours.
  • Hands-on experience with Informatica and Oracle Data Integrator.
  • Big Data/Hadoop.
  • Java/Python.
  • UNIX shell scripting.
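
Teradata checks like the ones above are commonly scripted through BTEQ. A minimal sketch driving a BTEQ session from Python follows; the host, logon, and audit table are hypothetical placeholders, and the BTEQ client must be installed on the machine running it.

```python
# Minimal sketch: run a quick Teradata data check through BTEQ, which
# reads its commands from stdin. Logon details and the audit table
# are hypothetical placeholders.
import subprocess

script = """
.LOGON tdprod/etl_user,REDACTED;
SELECT COUNT(*) FROM dw.daily_load_audit WHERE load_date = CURRENT_DATE;
.LOGOFF;
.QUIT;
"""

result = subprocess.run(
    ["bteq"],
    input=script,
    text=True,
    capture_output=True,
)
print(result.stdout)
if result.returncode != 0:  # BTEQ exits non-zero on logon or SQL errors
    raise RuntimeError(result.stderr or result.stdout)
```
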
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Banking


Developer - Hadoop, Teradata, Python

Kuala Lumpur, Kuala Lumpur | Luxoft

Posted 10 days ago

Job Description

Project description

Our client, a leading bank in Asia with a global network of more than 500 branches and offices in 19 countries and territories across Asia Pacific, Europe, and North America, is looking for consultants to be part of the project.

The Technology and Operations function comprises five teams of specialists with distinct capabilities: business partnership, technology, operations, risk governance, and planning support and services. They work closely together to harness the power of technology to support the bank's physical and digital banking services and operations. This includes developing, centralising, and standardising technology systems as well as banking operations in Malaysia and overseas branches.

The client has more than 80 years of history in the banking industry and is expanding its footprint in Malaysia. You will be working in a newly set-up technology centre located in Kuala Lumpur as part of Technology and Operations to deliver innovative financial technology solutions that enable business growth and technology transformation.

Responsibilities
  • Design, develop, and maintain data pipelines and ETL workflows using Informatica Data Integration Suite, Python, and R.
  • Build and optimize large-scale data processing systems on Cloudera Hadoop (6.x) and Teradata Inteliflex platforms.
  • Implement data ingestion, transformation, and storage solutions integrating diverse data sources, including Oracle, SQL Server, PostgreSQL, and AS400.
  • Develop and deploy dashboards and analytics solutions using QlikSense, Microsoft Power BI, and other visualization tools.
  • Collaborate with business teams to deliver analytics and decision-support solutions across domains like Credit Risk Analytics, Credit Scoring, Treasury & Wealth Management, and Trade Finance.
  • Leverage data science tools (Python, R Studio, Kafka, Spark) to support predictive modeling, scoring, and advanced analytics use cases; see the streaming sketch after this list.
  • Participate in code reviews, performance tuning, and data quality validation using tools like QuerySurge, SonarQube, and JIRA.
  • Automate workflows, deployments, and job scheduling using Jenkins, Control-M, and Bitbucket.
  • Ensure scalability, security, and governance of data solutions in production environments across Linux, AIX, Windows, and AS400 platforms.
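
Since Kafka and Spark both appear in this stack, here is a minimal Structured Streaming sketch that lands Kafka events on HDFS for downstream analytics. The broker, topic, and paths are hypothetical, and the spark-sql-kafka connector package must be available to the job.

```python
# Minimal sketch: stream Kafka events to parquet on HDFS in one-minute
# micro-batches. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_txn_stream").getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1.example.com:9092")
       .option("subscribe", "txn-events")
       .load())

# Kafka delivers the payload as bytes; cast to string before parsing.
events = raw.select(F.col("value").cast("string").alias("payload"))

query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/streams/txn_events")
         .option("checkpointLocation", "hdfs:///chk/txn_events")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```
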
Must have
  • 3 to 5 years' experience in Big Data & Data Engineering: Cloudera Hadoop (6.x), Spark, Hive, HUE, Impala, Kafka
  • ETL & Data Integration: Informatica (BDM, IDQ, IDL), QuerySurge
  • Databases: Teradata Inteliflex, Oracle, SQL Server, PostgreSQL
  • Data Visualization: QlikSense Discovery, Microsoft Power BI
  • Programming & Analytics: Python, R, R Studio
  • Version Control & Automation: Jenkins, Bitbucket, Control-M
  • OS: AS400, AIX, Linux, Windows
  • Domain Knowledge: minimum of one of the following:
  • Credit Risk Analytics
  • Credit Scoring & Decision Support
  • Treasury & Wealth Management (Murex)
  • Trade Finance & Accounts Receivable (FITAS, ARF)
  • Retail Banking & Cards (Silver Lake)
  • Data Modeling (FSLDM / Data Marts)
Nice to have

AS400, Experian PowerCurve, SAS


Developer - Hadoop, Teradata, Python

Kuala Lumpur, Kuala Lumpur | MYR60,000 - MYR120,000 per year | LUXOFT MALAYSIA SDN. BHD.

Posted today

Job Description

Project description

Our client, a leading bank in Asia with a global network of more than 500 branches and offices in 19 countries and territories across Asia Pacific, Europe, and North America, is looking for consultants to be part of the project.

The Technology and Operations function comprises five teams of specialists with distinct capabilities: business partnership, technology, operations, risk governance, and planning support and services. They work closely together to harness the power of technology to support the bank's physical and digital banking services and operations. This includes developing, centralising, and standardising technology systems as well as banking operations in Malaysia and overseas branches.

The client has more than 80 years of history in the banking industry and is expanding its footprint in Malaysia. You will be working in a newly set-up technology centre located in Kuala Lumpur as part of Technology and Operations to deliver innovative financial technology solutions that enable business growth and technology transformation.

Skills

Must have

  • 3 to 5 years' experience in Big Data & Data Engineering: Cloudera Hadoop (6.x), Spark, Hive, HUE, Impala, Kafka
  • ETL & Data Integration: Informatica (BDM, IDQ, IDL), QuerySurge
  • Databases: Teradata Inteliflex, Oracle, SQL Server, PostgreSQL
  • Data Visualization: QlikSense Discovery, Microsoft Power BI
  • Programming & Analytics: Python, R, R Studio
  • Version Control & Automation: Jenkins, Bitbucket, Control-M
  • OS: AS400, AIX, Linux, Windows
  • Domain Knowledge: minimum of one of the following:
  • Credit Risk Analytics
  • Credit Scoring & Decision Support
  • Treasury & Wealth Management (Murex)
  • Trade Finance & Accounts Receivable (FITAS, ARF)
  • Retail Banking & Cards (Silver Lake)
  • Data Modeling (FSLDM / Data Marts)

Nice to have

  • AS400, Experian PowerCurve, SAS




AVP, Data Integration (pySpark, Nifi, Hadoop)

Kuala Lumpur, Kuala Lumpur | Maybank

Posted 2 days ago

Job Description

AVP, Data Integration (pySpark, Nifi, Hadoop)

Maybank, Federal Territory of Kuala Lumpur, Malaysia

  • Implement ETL systems that are operationally stable, efficient, and automated. This includes technical solutions that are scalable, aligned with the enterprise architecture, and adaptable to business changes.
  • Collaborate with internal and external teams to define requirements for data integrations, specifically for Data Warehouse/Data Marts implementations.

Responsibilities of the Role

  • Review business and technical requirements to ensure the data integration platform meets specifications.
  • Apply industry best practices for ETL design and development.
  • Produce technical design documents, system testing plans, and implementation documentation.
  • Conduct system testing: execute job flows, investigate and resolve system defects, and document results.
  • Work with DBAs, application specialists, and technical support teams to optimize ETL system performance and meet SLAs.
  • Assist in developing, documenting, and applying best practices and procedures.
  • Strong SQL writing skills are required.
  • Familiarity with ETL tools such as pySpark, NiFi, Informatica, and Hadoop is preferred.
  • Understanding of data integration best practices, including master data management, entity resolution, data quality, and metadata management.
  • Experience with data warehouse architecture, source system data analysis, and data profiling (see the profiling sketch after this list).
  • Ability to work effectively in a fast-paced, adaptive environment.
  • Financial domain experience is a plus.
  • Ability to work independently and communicate effectively across various levels, including product owners, executive sponsors, and team members.
  • Experience working in an Agile environment is advantageous.
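
For the data profiling item, a first pass usually measures per-column null rates and cardinality before any ETL mapping is designed. A minimal pySpark sketch follows; the source table name is a hypothetical placeholder.

```python
# Minimal sketch: per-column null percentage and distinct count in one
# aggregation pass. The source table name is hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
df = spark.table("staging.customer_feed")

total = df.count()

profile = df.agg(
    *[F.sum(F.col(c).isNull().cast("int")).alias(f"{c}__nulls") for c in df.columns],
    *[F.countDistinct(c).alias(f"{c}__distinct") for c in df.columns],
).collect()[0]

for c in df.columns:
    null_pct = 100.0 * profile[f"{c}__nulls"] / max(total, 1)
    print(f"{c}: {null_pct:.1f}% null, {profile[f'{c}__distinct']} distinct")
```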

Qualifications

  • Bachelor’s Degree in Computer Science, Information Technology, or equivalent.
  • Over 5 years of total work experience, with experience programming ETL processes using Informatica, NiFi, pySpark, and Hadoop.
  • At least 4 years of experience in data analysis, profiling, and designing ETL systems/programs.
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Banking

 
