531 Spark jobs in Kuala Lumpur
Data Engineer (Spark & Hive)
Posted 9 days ago
Job Description
Unison Group Kuala Lumpur, Federal Territory of Kuala Lumpur, Malaysia
This range is provided by Unison Group. Your actual pay will be based on your skills and experience — talk with your recruiter to learn more.
Base pay range: SGD5,000.00/yr - SGD7,500.00/yr
Overview
We are seeking a skilled Data Engineer to join our team. The ideal candidate will have strong expertise in SQL, Python, Spark, and Hive, with hands-on experience building scalable data pipelines, data models, and analytical solutions. You will work closely with data scientists, analysts, and business stakeholders to deliver high-quality data solutions that support business decision-making.
Responsibilities
- Design, develop, and maintain robust, scalable data pipelines and ETL processes.
- Optimize data storage, retrieval, and processing using SQL, Spark, and Hive.
- Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
- Implement data quality, validation, and monitoring processes to ensure reliability.
- Support data modeling, warehousing, and integration for analytics and reporting.
- Write clean, efficient, and reusable Python code for automation and data processing.
- Ensure data security, compliance, and governance across all solutions.
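Responsibility items like "implement data quality, validation, and monitoring processes" usually translate into rule-based checks run against each batch before it is loaded. A minimal sketch in plain Python, independent of Spark or Hive (the function and field names here are invented for illustration):

```python
from typing import Any

def validate_batch(rows: list[dict[str, Any]], required: list[str],
                   unique_key: str) -> list[str]:
    """Run basic data-quality checks on a batch of records and
    return a list of human-readable violations (empty = clean)."""
    problems = []
    seen = set()
    for i, row in enumerate(rows):
        # Completeness: every required column must be present and non-null.
        for col in required:
            if row.get(col) is None:
                problems.append(f"row {i}: missing value for '{col}'")
        # Uniqueness: the business key must not repeat within the batch.
        key = row.get(unique_key)
        if key in seen:
            problems.append(f"row {i}: duplicate {unique_key}={key!r}")
        seen.add(key)
    return problems

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": None},   # duplicate id and missing amount
]
issues = validate_batch(rows, required=["id", "amount"], unique_key="id")
print(issues)
```

In a real pipeline the same checks would run per-partition inside Spark, with violations routed to a monitoring sink rather than printed.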
Requirements
- Bachelor's degree in Computer Science, Engineering, or related field.
- 4-7 years of experience as a Data Engineer (or similar role).
- Strong proficiency in SQL, Python, Spark, and Hive.
- Experience with distributed data processing and big data ecosystems.
- Hands-on experience with data pipeline tools (e.g., Airflow, NiFi, or similar) is a plus.
- Familiarity with cloud platforms (AWS, Azure, or GCP) is advantageous.
- Strong problem-solving and communication skills.
- Seniority level: Associate
- Employment type: Full-time
- Job function: Analyst
- Industries: IT Services and IT Consulting
Data Engineer (Spark, SQL)
Posted 9 days ago
Job Description
Overview
We are looking for a proficient Data Engineer with strong skills in Spark and SQL to join Unison Group. The successful candidate will be responsible for designing, building, and managing data pipelines that facilitate real-time data processing. This role involves optimizing data flow and enhancing data quality to support the analytical needs of various teams within the organization.
Base pay range: SGD48,000.00/yr - SGD95,000.00/yr
This range is provided by Unison Group. Your actual pay will be based on your skills and experience — talk with your recruiter to learn more.
Responsibilities
- Develop and maintain robust data pipelines using Apache Spark to process large datasets efficiently
- Utilize SQL for data querying, manipulation, and management across various relational database systems
- Collaborate with data scientists and analytics teams to understand data requirements and deliver suitable solutions
- Implement data transformation and aggregation processes to optimize data for analysis
- Monitor and troubleshoot data pipelines, ensuring high availability and reliability
- Perform data quality checks and maintain data integrity throughout the data lifecycle
- Stay updated on industry trends and best practices in data engineering, particularly in Spark and SQL technologies
Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Experience as a Data Engineer, with a focus on Spark and SQL
- Experience in the Banking/Finance domain is good to have
- Proficiency in Spark for distributed data processing and ability to write efficient Spark jobs
- Strong SQL skills, with experience working with relational databases (e.g., MySQL, PostgreSQL, Oracle)
- Experience with data integration, ETL processes, and data warehousing concepts
- Knowledge of big data technologies and frameworks is a plus
- Strong analytical and problem-solving skills
- Excellent communication and teamwork capabilities
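The SQL side of the role — querying, manipulation, and aggregation over relational stores — can be sketched with Python's built-in sqlite3 module standing in for MySQL/PostgreSQL/Oracle (the orders table and its columns are made up for this example):

```python
import sqlite3

# In-memory database standing in for a production relational store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL);
    INSERT INTO orders (region, amount) VALUES
        ('APAC', 120.0), ('APAC', 80.0), ('EMEA', 50.0);
""")

# A typical analytical aggregation: revenue per region, largest first.
rows = conn.execute("""
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
""").fetchall()

print(rows)  # [('APAC', 200.0), ('EMEA', 50.0)]
conn.close()
```

The same GROUP BY/ORDER BY shape carries over directly to Spark SQL and Hive, only the connection and execution engine change.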
Benefits
- Salary: SGD 6000 - 8000
- Great Work Culture
- Includes Standard Annual and Sick Leave
Job details
- Seniority level: Associate
- Employment type: Full-time
- Job function: Information Technology
- Industries: IT Services and IT Consulting
SPARK & REACT UI-Malaysia
Posted 3 days ago
Job Description
We're Hiring: SPARK & REACT UI Developer!
We are seeking an experienced SPARK & REACT UI Developer to join our dynamic team. The ideal candidate will have extensive expertise in Apache Spark and React.js to build scalable data processing solutions and create intuitive user interfaces that deliver exceptional user experiences.
Location: Kuala Lumpur, Malaysia
Work Mode: Work From Office
Role: SPARK & REACT UI Developer
What You'll Do
- Develop and optimize Apache Spark applications for large-scale data processing
- Build responsive and interactive user interfaces using React.js
- Design and implement efficient data pipelines and ETL processes
- Integrate frontend applications with backend data services
- Optimize application performance and ensure scalability
- Collaborate with cross-functional teams to deliver high-quality solutions
What We're Looking For
- 7+ years of experience in software development
- Strong expertise in Apache Spark and big data technologies
- Proficiency in React.js, JavaScript, HTML, and CSS
- Experience with data processing frameworks and distributed systems
- Knowledge of modern development tools and practices
- Strong problem-solving and analytical skills
Ready to make an impact? Apply now and let's grow together!
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Other
Industries: IT Services and IT Consulting
Senior Software Engineer – Cloud, Spark, ML
Posted today
Job Description
We are looking for an experienced, Java-based, Spark-focused Senior Software Engineer to join our data and identity delivery platform group. You should be familiar with machine learning concepts; experience with H2O.ai libraries is preferred.
You will be joining the team responsible for developing business-critical systems to support our data modeling efforts. As a software engineer, you will have the opportunity to grow and sharpen your development skills as we develop the next generation of highly performant, cost-efficient, secure, scalable cloud components for our latest products.
Key Responsibilities
• Develop and maintain our enterprise AI/ML/GenAI platforms and components
• Facilitate hyper-scale MLOps
• Fulfill enhancement requests from the product team
• Remediate identified code and dependency chain vulnerabilities
• Maintain systems compliance
• Facilitate systems privacy and security reviews
• Work closely with stakeholders in an agile development methodology
• Resolve any functional or performance deficiencies
• Provide formal and informal mentorship to junior engineers
• Actively participate in peer code review and approvals
• Ensure high quality coding practices are followed
Requirements
• Experience developing in leading cloud providers (AWS, Azure, GCP)
• Experience building Java/Scala based Spark components
• Experience using the H2O.ai library
• Demonstrated understanding of machine learning concepts
• Pipeline/workflow/MLOps orchestration (Airflow, Dagster, MLflow, …)
• Experience developing following Agile methodologies
• Build/DevOps (Jenkins, Git, Jira, Terraform)
Preferred Qualifications:
• Experience with hyper scale MLOps
• Extensive experience with the H2O.ai library
• AWS Associate / Professional Certification (or GCP equivalent)
• Generative AI experience
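The orchestration tools named in the requirements (Airflow, Dagster, and similar) all reduce to running tasks in dependency order over a DAG. A toy illustration of that core idea using only the standard library — this is not the Airflow API, and the task names are invented:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (a tiny ML pipeline DAG).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train_model": {"transform"},
    "publish_metrics": {"train_model"},
}

def run_pipeline(dag: dict[str, set[str]]) -> list[str]:
    """Execute tasks in an order that respects every dependency."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")  # a real orchestrator would dispatch work here
    return order

order = run_pipeline(dag)
```

Real orchestrators add scheduling, retries, and distributed execution on top, but the dependency-ordering core is the same.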
Data Engineer (SQL, Python, Spark, Hive, Hadoop)
Posted 6 days ago
Job Description
Unison Group Kuala Lumpur, Federal Territory of Kuala Lumpur, Malaysia
We are looking for a skilled Data Engineer with 5+ years of hands-on experience in designing, developing, and optimizing big data pipelines and solutions. The ideal candidate will have strong expertise in SQL, Python, Apache Spark, Hive, and Hadoop ecosystems and will be responsible for building scalable data platforms to support business intelligence, analytics, and machine learning use cases.
Key Responsibilities
- Design, develop, and maintain scalable ETL pipelines using Spark, Hive, and Hadoop.
- Write efficient SQL queries for data extraction, transformation, and analysis.
- Develop automation scripts and data processing workflows using Python.
- Optimize data pipelines for performance, reliability, and scalability.
- Work with structured and unstructured data from multiple sources.
- Ensure data quality, governance, and security throughout the data lifecycle.
- Collaborate with cross-functional teams (Data Scientists, Analysts, and Business stakeholders) to deliver data-driven solutions.
- Monitor and troubleshoot production data pipelines.
Required Skills & Qualifications
- 5+ years of experience in Data Engineering / Big Data development.
- Strong expertise in SQL (query optimization, performance tuning, stored procedures).
- Proficiency in Python for data manipulation, scripting, and automation.
- Hands-on experience with Apache Spark (PySpark/Scala) for large-scale data processing.
- Solid knowledge of Hive for querying and managing data in Hadoop environments.
- Strong working knowledge of Hadoop ecosystem (HDFS, YARN, MapReduce, etc.).
- Experience with data pipeline orchestration tools (Airflow, Oozie, or similar) is a plus.
- Familiarity with cloud platforms (AWS, Azure, or GCP) is preferred.
- Excellent problem-solving, debugging, and communication skills.
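"Proficiency in Python for data manipulation, scripting, and automation" typically means small glue scripts between pipeline stages. A self-contained sketch (the field names are illustrative, not from the posting) that cleans a raw CSV extract in memory:

```python
import csv
import io

RAW = """user_id,signup_date,country
 101 ,2024-01-05,my
102,2024-02-11,SG
103,,my
"""

def clean_rows(raw_csv: str) -> list[dict]:
    """Normalise a raw extract: strip whitespace, upper-case country
    codes, and drop rows with a missing signup_date."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row["signup_date"]:
            continue  # incomplete record: exclude from the load
        cleaned.append({
            "user_id": int(row["user_id"].strip()),
            "signup_date": row["signup_date"],
            "country": row["country"].strip().upper(),
        })
    return cleaned

rows = clean_rows(RAW)
print(rows)
```

At scale the same normalisation logic would typically live in a PySpark transformation rather than a single-process script.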
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Other
Industries: IT Services and IT Consulting
Senior Software Engineer – Cloud, Spark, ML & H2O.ai
Posted 4 days ago
Job Description
We are looking for an experienced, Java-based, Spark-focused Senior Software Engineer to join our data and identity delivery platform group. You should be familiar with machine learning concepts; experience with H2O.ai libraries is preferred.
You will be joining the team responsible for developing business-critical systems to support our data modeling efforts. As a software engineer, you will have the opportunity to grow and sharpen your development skills as we develop the next generation of highly performant, cost-efficient, secure, scalable cloud components for our latest products.
Key Responsibilities
- Develop and maintain our enterprise AI/ML/GenAI platforms and components
- Facilitate hyper-scale MLOps
- Fulfill enhancement requests from the product team
- Remediate identified code and dependency chain vulnerabilities
- Maintain systems compliance
- Facilitate systems privacy and security reviews
- Work closely with stakeholders in an agile development methodology
- Resolve any functional or performance deficiencies
- Provide formal and informal mentorship to junior engineers
- Actively participate in peer code review and approvals
- Ensure high quality coding practices are followed
Requirements
- Experience developing in leading cloud providers (AWS, Azure, GCP)
- Experience building Java/Scala based Spark components
- Demonstrated understanding of machine learning concepts
Preferred Qualifications
- Experience with hyper-scale MLOps
- Extensive experience with the H2O.ai library
- AWS Associate / Professional Certification (or GCP equivalent)
Brain Station 23 is an equal opportunities employer and welcomes applications from diverse candidates.