2,166 Hadoop jobs in Malaysia
Hadoop Developer
Posted today
Job Description
Project Description:
Our Client, a leading bank in Asia with a global network of more than 500 branches and offices in 19 countries and territories in Asia Pacific, Europe, and North America, is looking for Consultants to be part of the project.
The Technology and Operations function comprises five teams of specialists with distinct capabilities: business partnership, technology, operations, risk governance, and planning support and services. They work closely together to harness the power of technology to support the bank's physical and digital banking services and operations. This includes developing, centralising, and standardising technology systems as well as banking operations in Malaysia and overseas branches.
The client has more than 80 years of history in the banking industry and is expanding its footprint in Malaysia. You will be working in a newly set-up technology centre located in Kuala Lumpur as part of Technology and Operations to deliver innovative financial technology solutions that enable business growth and technology transformation.
Responsibilities:
• Design, develop, and enhance AML applications using Tookitaki and the Hadoop ecosystem.
• Work on big data solutions, including HDFS, Hive, and related components for AML monitoring and reporting.
• Develop and maintain Unix Shell Scripts (ksh, Perl, etc.) for automation and data processing (an illustrative automation sketch in Python follows this list).
• Support application deployment and middleware (JBoss) configurations.
• Write and optimize SQL queries (MariaDB / Oracle) to handle large data sets (not DBA role).
• Collaborate with business and compliance teams to translate AML requirements into technical solutions.
• Troubleshoot and resolve L3 issues related to AML platforms, ensuring system stability and performance.
• Participate in code reviews, unit testing, and system integration testing.
• Provide production support escalation (L3) and RCA for recurring technical issues.
• Create and maintain technical documentation and knowledge base for AML systems.
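For illustration only: a minimal Python sketch of the kind of Hive extraction and automation work described in the responsibilities above. The HiveServer2 URL, the aml.alerts table, and the use of beeline are assumptions made for the sketch, not details taken from the role.

# Illustrative sketch: run a parameterized Hive extract through beeline and fail loudly on errors,
# so a scheduler (cron, Control-M, etc.) can alert on a non-zero exit code.
# The JDBC URL and the aml.alerts table are hypothetical placeholders.
import subprocess
import sys
from datetime import date

JDBC_URL = "jdbc:hive2://hive-server:10000/default"  # assumed HiveServer2 endpoint

QUERY_TEMPLATE = (
    "SELECT alert_id, customer_id, alert_score "
    "FROM aml.alerts WHERE alert_dt = '{run_date}'"
)

def run_extract(run_date: str) -> str:
    sql = QUERY_TEMPLATE.format(run_date=run_date)
    result = subprocess.run(
        ["beeline", "-u", JDBC_URL, "--outputformat=csv2", "-e", sql],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        sys.exit("Hive extract failed: " + result.stderr.strip())
    return result.stdout

if __name__ == "__main__":
    print(run_extract(date.today().isoformat()))

In practice the same pattern is often written directly in ksh around beeline or hive -f; the Python version is shown only to keep all examples on this page in one language.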
Mandatory Skills Description:
• Hands-on experience with Hadoop (HDFS, Hive).
• Strong Unix/Linux scripting skills.
• JBoss deployment knowledge.
• SQL expertise on MariaDB/Oracle.
Nice-to-Have Skills Description:
Java, Shell Scripting (ksh, Perl, etc.), Python, PL/SQL
Big Data Hadoop Developer
Posted 14 days ago
Job Description
Job Summary:
We are looking for a Big Data Hadoop Developer to design, develop, and maintain large-scale data processing solutions. The ideal candidate should have strong hands-on experience with the Hadoop ecosystem and integration with relational databases such as MariaDB or Oracle DB for analytics and reporting.
Key Responsibilities:
- Design, develop, and optimize Hadoop-based big data solutions for batch and real-time data processing.
- Work with data ingestion frameworks (Sqoop, Apache NiFi, Kafka) to integrate data from MariaDB/Oracle DB into Hadoop (see the ingestion sketch after this list).
- Implement Hive, Spark, and MapReduce jobs for data transformation and analytics.
- Optimize Hive queries, Spark jobs, and HDFS usage for performance and cost efficiency.
- Create and maintain ETL pipelines for structured and unstructured data.
- Troubleshoot and resolve issues in Hadoop jobs and database connectivity.
- Collaborate with BI, analytics, and data science teams for data provisioning.
- Ensure data security, governance, and compliance in all solutions.
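As a hedged illustration of the RDBMS-to-Hadoop ingestion named in the responsibilities above, here is a minimal PySpark sketch that reads a MariaDB table over JDBC and lands it as a partitioned Hive table. It is expressed in PySpark rather than Sqoop or NiFi purely for brevity; the host, credentials, and table names are assumptions.

# Illustrative PySpark sketch: pull one table from MariaDB over JDBC and write it to Hive,
# partitioned by load date. The MariaDB JDBC driver is assumed to be on the Spark classpath.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("mariadb_to_hive_ingest")
    .enableHiveSupport()
    .getOrCreate()
)

transactions = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mariadb://mariadb-host:3306/sales")  # assumed source database
    .option("driver", "org.mariadb.jdbc.Driver")
    .option("dbtable", "transactions")                        # assumed source table
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

(
    transactions
    .withColumn("load_dt", F.current_date())
    .write.mode("overwrite")
    .partitionBy("load_dt")
    .saveAsTable("staging.transactions")                      # hypothetical Hive target
)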
Technical Skills:
- Big Data Ecosystem: Hadoop (HDFS, YARN), Hive, Spark, Sqoop, MapReduce, Oozie, Flume.
- Databases: MariaDB and/or Oracle DB (SQL, PL/SQL).
- Programming: Java, Scala, or Python for Spark/MapReduce development.
- Data Ingestion: Sqoop, Kafka, NiFi (for integrating RDBMS with Hadoop).
- Query Optimization: Hive tuning, partitioning, bucketing, indexing (see the DDL sketch after this list).
- Tools: Ambari, Cloudera Manager, Git, Jenkins.
- OS & Scripting: Linux/Unix shell scripting.
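To make the query optimization line above concrete, the sketch below shows what Hive partitioning and bucketing look like in DDL, submitted here through Spark SQL so all examples on this page stay in Python; the same statement could equally be run in beeline or Hue. The database, table, and column names are hypothetical.

# Illustrative only: a partitioned, bucketed Hive table definition.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.web_events (
        event_id   STRING,
        user_id    STRING,
        event_type STRING
    )
    PARTITIONED BY (event_dt STRING)        -- partition pruning keeps scans to the dates queried
    CLUSTERED BY (user_id) INTO 32 BUCKETS  -- bucketing spreads rows evenly for joins and sampling
    STORED AS ORC
""")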
Soft Skills:
- Strong analytical skills and problem-solving abilities.
- Good communication skills for working with cross-functional teams.
- Ability to manage priorities in a fast-paced environment.
Nice to Have:
- Experience with cloud-based big data platforms (AWS EMR, Azure HDInsight, GCP Dataproc).
- Knowledge of NoSQL databases (HBase, Cassandra).
- Exposure to machine learning integration with Hadoop/Spark.
ETL Developer (Informatica/Hadoop)
Posted 8 days ago
Job Description
Overview
Maybank, Federal Territory of Kuala Lumpur, Malaysia
Responsibilities
- Should be able to troubleshoot errors on Informatica, Oracle Data Integrator (ODI), Teradata, and Hadoop platforms, and support applications built on top of them.
- Strong problem-solving knowledge of databases and platforms such as Hadoop, Teradata, and SQL.
- Monitor jobs/workflows and work with technical teams to derive permanent fixes (a monitoring sketch in Python follows this list).
- Hands-on experience in building and troubleshooting Informatica mappings.
- Working experience in handling and managing ETL processes and data warehousing platforms.
- Hands-on experience in writing, debugging, and testing shell scripts.
- Hands-on experience with Teradata utilities such as BTEQ, FastLoad, MultiLoad, and TPT.
- Experience in writing, debugging, and testing Hive scripts.
- Hands-on experience with a job scheduling tool.
- Ability to logically prioritize tasks and schedule work accordingly.
- Deep understanding of data warehouse concepts.
- Hands-on experience with incident management, problem management, and change management processes.
- Strong team player who is also able to work as an individual contributor.
- Ability to interact with both technical and non-technical users and address their queries.
- Strong analytical and problem-solving skills.
- Good working experience in a support role.
- Flexibility with working hours.
- Hands-on experience with Informatica and Oracle Data Integrator.
- Big Data/Hadoop.
- Java/Python.
- UNIX shell scripting.
- Experience with any scheduling tool.
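For illustration only: a small Python sketch of the job/workflow monitoring mentioned in the list above. It scans log files for failure markers and prints a summary that a scheduler or alerting channel could pick up; the log directory and the failure keywords are assumptions, and no vendor (Informatica/Teradata) API is used.

# Illustrative sketch: scan ETL/scheduler logs for failure markers and report them.
# The directory path and marker strings are hypothetical placeholders.
from pathlib import Path

LOG_DIR = Path("/var/log/etl")                                   # assumed log location
FAILURE_MARKERS = ("ERROR", "FATAL", "Workflow run terminated")  # assumed patterns

def scan_logs() -> list[str]:
    findings = []
    for log_file in sorted(LOG_DIR.glob("*.log")):
        lines = log_file.read_text(errors="replace").splitlines()
        for line_no, line in enumerate(lines, start=1):
            if any(marker in line for marker in FAILURE_MARKERS):
                findings.append(f"{log_file.name}:{line_no}: {line.strip()}")
    return findings

if __name__ == "__main__":
    issues = scan_logs()
    if issues:
        print("Failures detected:")
        print("\n".join(issues))
    else:
        print("No failures found in", LOG_DIR)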
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Banking
Developer - Hadoop, Teradata, Python
Posted 10 days ago
Job Description
Project description
Our Client, a leading bank in Asia with a global network of more than 500 branches and offices in 19 countries and territories in Asia Pacific, Europe, and North America, is looking for Consultants to be part of the project.
The Technology and Operations function comprises five teams of specialists with distinct capabilities: business partnership, technology, operations, risk governance, and planning support and services. They work closely together to harness the power of technology to support the bank's physical and digital banking services and operations. This includes developing, centralising, and standardising technology systems as well as banking operations in Malaysia and overseas branches.
The client has more than 80 years of history in the banking industry and is expanding its footprint in Malaysia. You will be working in a newly set-up technology centre located in Kuala Lumpur as part of Technology and Operations to deliver innovative financial technology solutions that enable business growth and technology transformation.
Responsibilities
- Design, develop, and maintain data pipelines and ETL workflows using Informatica Data Integration Suite, Python, and R.
- Build and optimize large-scale data processing systems on Cloudera Hadoop (6.x) and Teradata Inteliflex platforms.
- Implement data ingestion, transformation, and storage solutions integrating diverse data sources, including Oracle, SQL Server, PostgreSQL, and AS400.
- Develop and deploy dashboards and analytics solutions using QlikSense, Microsoft Power BI, and other visualization tools.
- Collaborate with business teams to deliver analytics and decision-support solutions across domains such as Credit Risk Analytics, Credit Scoring, Treasury & Wealth Management, and Trade Finance.
- Leverage data science tools (Python, R Studio, Kafka, Spark) to support predictive modeling, scoring, and advanced analytics use cases (see the streaming sketch after this list).
- Participate in code reviews, performance tuning, and data quality validation using tools like QuerySurge, SonarQube, and JIRA.
- Automate workflows, deployments, and job scheduling using Jenkins, Control-M, and Bitbucket.
- Ensure scalability, security, and governance of data solutions in production environments across Linux, AIX, Windows, and AS400 platforms.
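As a hedged illustration of the Kafka and Spark work listed above, the sketch below reads a Kafka topic with Spark Structured Streaming, parses JSON payloads, and lands them as Parquet with checkpointing. The broker address, topic, schema, and paths are assumptions, and the spark-sql-kafka connector package is assumed to be available on the cluster.

# Illustrative sketch: Kafka topic -> parsed JSON -> Parquet on HDFS, with checkpointing.
# Brokers, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_to_hdfs").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # assumed brokers
    .option("subscribe", "transactions")                # assumed topic
    .load()
)

parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("txn"))
    .select("txn.*")
)

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/landing/transactions")               # assumed HDFS landing path
    .option("checkpointLocation", "/data/checkpoints/transactions")
    .outputMode("append")
    .start()
)
query.awaitTermination()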
Must have
- 3 to 5 years of experience in Big Data & Data Engineering: Cloudera Hadoop (6.x), Spark, Hive, HUE, Impala, Kafka
- ETL & Data Integration: Informatica (BDM, IDQ, IDL), QuerySurge
- Databases: Teradata Inteliflex, Oracle, SQL Server, PostgreSQL
- Data Visualization: QlikSense Discovery, Microsoft Power BI
- Programming & Analytics: Python, R, R Studio
- Version Control & Automation: Jenkins, Bitbucket, Control-M
- OS: AS400, AIX, Linux, Windows
- Domain Knowledge: Minimum 1 of the following:
- Credit Risk Analytics
- Credit Scoring & Decision Support
- Treasury & Wealth Management (Murex)
- Trade Finance & Accounts Receivable (FITAS, ARF)
- Retail Banking & Cards (Silver Lake)
- Data Modeling (FSLDM / Data Marts)
Nice to have
AS400, Experian PowerCurve, SAS
AVP, Data Integration (pySpark, Nifi, Hadoop)
Posted 2 days ago
Job Description
Maybank, WP. Kuala Lumpur, Federal Territory of Kuala Lumpur, Malaysia
- Implement ETL systems that are operationally stable, efficient, and automated. This includes technical solutions that are scalable, aligned with the enterprise architecture, and adaptable to business changes.
- Collaborate with internal and external teams to define requirements for data integrations, specifically for Data Warehouse/Data Marts implementations.
Responsibilities of the Role
- Review business and technical requirements to ensure the data integration platform meets specifications.
- Apply industry best practices for ETL design and development.
- Produce technical design documents, system testing plans, and implementation documentation.
- Conduct system testing: execute job flows, investigate and resolve system defects, and document results.
- Work with DBAs, application specialists, and technical support teams to optimize ETL system performance and meet SLAs.
- Assist in developing, documenting, and applying best practices and procedures.
- Strong SQL writing skills are required.
- Familiarity with ETL tools such as pySpark, NiFi, Informatica, and Hadoop is preferred.
- Understanding of data integration best practices, including master data management, entity resolution, data quality, and metadata management.
- Experience with data warehouse architecture, source system data analysis, and data profiling (a profiling sketch in Python follows this list).
- Ability to work effectively in a fast-paced, adaptive environment.
- Financial domain experience is a plus.
- Ability to work independently and communicate effectively across various levels, including product owners, executive sponsors, and team members.
- Experience working in an Agile environment is advantageous.
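For illustration only: a minimal PySpark profiling sketch of the data-quality and profiling work referenced above. It reports per-column null counts and duplicates on an assumed business key; the input path and the customer_id key are hypothetical.

# Illustrative data-profiling sketch: null counts per column and a duplicate-key check.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_profile").getOrCreate()

df = spark.read.parquet("/data/landing/customers")  # assumed source extract

# Per-column null counts, a common first-pass data quality report.
null_counts = df.select([
    F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns
])
null_counts.show(truncate=False)

# Duplicate check on the assumed business key.
dupes = df.groupBy("customer_id").count().filter(F.col("count") > 1)
print("Duplicate customer_id values:", dupes.count())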
Qualifications
- Bachelor’s Degree in Computer Science, Information Technology, or equivalent.
- Over 5 years of total work experience, with experience programming ETL processes using Informatica, NiFi, pySpark, and Hadoop.
- At least 4 years of experience in data analysis, profiling, and designing ETL systems/programs.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Banking