1,960 Senior Data Engineer jobs in Malaysia
Big Data Engineer
Posted 1 day ago
Job Description
Position Summary
We are looking for a full-time Big Data Engineer II to join our team at Shirlyn Technology in Petaling Jaya, Malaysia and Palo Alto, California. The Big Data team is at the heart of our operations, playing a critical role in scaling and optimizing our data infrastructure. As we continue to grow, we are seeking an experienced and driven Big Data Engineer to help us tackle complex data challenges, enhance data solutions, and ensure the security and quality of our data systems. You will be instrumental in building and deploying scalable data pipelines to ensure seamless data flow across systems, driving business success through reliable data solutions.
Job Responsibilities
- Collaborate with global teams in data, security, infrastructure, and business functions to understand data requirements and provide scalable solutions.
- Design, implement, and maintain efficient ETL (Extract, Transform, Load) pipelines to enable smooth data flow between systems.
- Conduct regular data quality checks to ensure accuracy, consistency, and completeness within our data pipelines.
- Continuously improve data pipeline performance and reliability, identifying and addressing any inefficiencies or bottlenecks.
- Ensure data integrity, security, privacy, and availability across all systems.
- Proactively monitor data pipelines, quickly diagnosing issues and taking action to resolve them and maintain system uptime.
- Conduct root cause analysis of data-related issues and work on long-term solutions to prevent recurring problems.
- Document pipeline designs, data processes, and troubleshooting procedures, keeping stakeholders informed with clear communication of updates.
- Provide on-call support for critical data operations, ensuring systems are running 24/7, with rotational responsibilities.
Job Requirements
- Bachelor’s degree in Computer Science, Information Systems, or a related technical field.
- 5+ years of hands-on experience building and optimizing data pipelines using big data technologies (Hive, Presto, Spark, Flink).
- Expertise in writing complex SQL queries and proficiency in Python programming, focusing on maintainable and clean code.
- Solid knowledge of scripting languages (e.g., Shell, Perl).
- Experience working in Unix/Linux environments.
- Familiarity with cloud platforms such as Azure, AWS, or GCP.
- High personal integrity and professionalism in handling confidential information, exercising sound judgment.
- Ability to remain calm and resolve issues swiftly during high-pressure situations, adhering to SLAs.
- Strong leadership skills, including the ability to guide junior team members and lead projects across the team.
- Excellent verbal and written communication skills, with the ability to collaborate effectively with remote teams globally.
Big Data Engineer
Posted 17 days ago
Job Description
Job Summary:
We are seeking a highly skilled Big Data Engineer with 8+ years of experience in data migration, data setup, and data systems development. The ideal candidate will have deep expertise in Apache Spark, SQL, and Java (with Scala) for large-scale data processing, reporting, and system development. Strong knowledge of data architecture and semantic layer development, plus experience in regression testing and cutover activities for enterprise-level migrations, is essential.
Key Responsibilities:
Spark:
- Design, develop, and optimize Spark-based ETL pipelines for large-scale data processing and analytics.
- Utilize Spark SQL, DataFrames, RDDs, and Streaming for efficient data transformations.
- Tune Spark jobs for performance, including memory management, partitioning, and execution plans.
- Implement real-time and batch data processing using Spark Streaming or Structured Streaming.
SQL:
- Write and optimize complex SQL queries for data extraction, transformation, and aggregation.
- Perform query performance tuning, indexing, and partitioning for efficient execution.
- Develop stored procedures, functions, and views to support data operations.
- Ensure data consistency, integrity, and security across relational databases.
Java/Scala:
- Develop backend services and data processing applications using Java and Scala.
- Optimize JVM performance, including memory management and garbage collection, for Spark workloads.
- Leverage Scala’s functional programming capabilities for efficient data transformations.
- Implement multithreading, concurrency, and parallel processing in Java for high-performance systems.
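The SQL responsibilities above (extraction, transformation, aggregation, and query tuning) can be sketched with a small self-contained example. Python's built-in sqlite3 stands in for a warehouse engine here, and the table and column names are illustrative, not from the posting; the query shape (GROUP BY plus a window function for a running total) is the kind of aggregation the role describes.

```python
import sqlite3

# In-memory database stands in for a warehouse; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, order_day TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("MY", "2024-01-01", 120.0), ("MY", "2024-01-02", 80.0),
     ("SG", "2024-01-01", 200.0), ("SG", "2024-01-02", 50.0)],
)

# Aggregation plus a window function: daily totals and a per-region running total.
rows = conn.execute("""
    SELECT region, order_day,
           SUM(amount) AS daily_total,
           SUM(SUM(amount)) OVER (
               PARTITION BY region ORDER BY order_day
           ) AS running_total
    FROM orders
    GROUP BY region, order_day
    ORDER BY region, order_day
""").fetchall()

for r in rows:
    print(r)
```

In a production warehouse the same statement would be paired with partitioning and indexing on `region` and `order_day` so the engine can prune data before aggregating.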
Requirements:
- 8+ years of experience in data engineering, with a focus on big data technologies.
- Strong proficiency in Apache Spark, SQL, and Java/Scala.
- Experience in data migration, data setup, and semantic layer development.
- Solid understanding of data architecture, ETL frameworks, and data governance.
- Hands-on experience with regression testing and cutover planning in large-scale data migrations.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Experience with Hadoop ecosystem tools (Hive, HDFS, Oozie, etc.).
- Knowledge of containerization and orchestration (Docker, Kubernetes).
- Exposure to CI/CD pipelines and DevOps practices.
- Relevant certifications in Big Data or Cloud technologies.
Big Data Engineer
Posted today
Job Description
Job Description:
1. Gain insights into the competitiveness and serviceability requirements of HUAWEI CLOUD Big Data service products, such as MRS, DWS, DataLake Insights, Elasticsearch, etc.
2. Be responsible for the delivery of global key projects, including solution design, resource management, data migration, and expanding and optimizing resources based on HUAWEI CLOUD Big Data service products.
3. Manage requirements of key projects, develop project plans, identify project risks, ensure timely project delivery, and take charge of operations of some HUAWEI CLOUD sites.
4. Provide technical support for HUAWEI CLOUD customers and ensure the proper running of customer services at the technical level.
5. Troubleshoot faults of Big Data service products in various scenarios, communicate solutions with customers, and track implementation results.
Job Requirement:
1. Solid project management experience in the ICT or cloud industry.
2. Familiarity with Big Data technologies such as Hadoop, MapReduce, Kafka, HBase, Spark, Datalake Insights, etc.
3. Experience in cloud development, delivery, or O&M; a background in the cloud industry is preferred.
4. HCIA/HCIP/HCIE certification in cloud computing, or equivalent industry certificates (e.g., AWS, Azure, GCP cloud certifications), preferred.
5. A strong sense of teamwork and good organizational and coordination skills.
Big Data Engineer
Posted today
Job Description
Are you a dedicated IT Operations Executive with a passion for maintaining and optimizing IT systems? Do you thrive in a fast-paced, dynamic environment where your expertise ensures seamless operations and supports millions of users? If so, we want YOU on our team!
Why Join Us?
- 100% work from home
- 13th-month salary plus attractive bonuses and increments
- Career growth opportunities: promotion reviews every half year and professional development
- Generous public holiday entitlement
- Annual leave starting from 14 days, with more to come
- Medical claims, annual dinner, team building, sports activities, meal gatherings, and more
- Birthday gifts and celebrations, festival gifts, and many more surprises
- Annual dinner cash-prize lucky draw and many more rewards waiting for you
We are expanding our business and need more talent to join our team. If the above matches what you are looking for, click the "Quick apply" button now; missing out on such a remarkable and stable company would be regrettable, and not trying would be a lost opportunity. Take the chance and join our team!
Big Data Responsibilities
Design, develop, and tackle technical challenges for core components of the big data platform;
Participate in data middleware construction, including the development and upgrades of components such as data integration, metadata management, and task management;
Research big data technologies (e.g., ELK, Flink, Spark, ClickHouse) to optimize cluster architecture, troubleshoot issues, and resolve performance bottlenecks.
Requirements
Proficient in big data platform principles, with expertise in building and optimizing platforms using Hadoop ecosystem tools (e.g., Spark, Impala, Flume, Kafka, HBase, Hive, ZooKeeper) and ClickHouse;
Hands-on experience in developing and deploying ELK-based big data analytics platforms;
Strong grasp of big data modeling methodologies and techniques;
Skilled in Java/Scala programming, design patterns, and big data technologies (Spark, Flink);
Experience in large-scale data warehouse architecture/model/ETL design, with capabilities in massive data processing and performance tuning;
Extensive database design/development experience, familiar with relational databases and NoSQL.
Fluent in Mandarin in order to liaise with Mandarin-speaking associates.
Candidates with PHP or Java experience will be prioritized.
Big Data Engineer
Posted today
Job Description
Job Description:
• Gather operational intelligence on business processes and policies from multiple sources
• Prepare periodical and ad-hoc reports using operational data
• Develop semantic core to align data with business processes
• Support operations team's work streams for data processing, analysis and reporting
• Analyze data and create dashboards for senior management
• Design and implement optimal processes
• Regression testing of the releases
Skills Required:
• Big Data: Spark, Hive, Databricks, Scala, Hadoop
• Languages: SQL, Java/Python
• BI & Analytics: Power BI (DAX), Tableau, Dataiku
• Operating System: Unix
• Experience with data migration, data engineering, and data analysis
• Tools: DbVisualizer, JIRA, Git, Bitbucket, Control-M
• Strong problem-solving skills and the ability to work independently and in a team environment.
• Excellent communication skills and the ability to work effectively with cross-functional teams
Big Data Engineer
Posted today
Job Description
- Design, develop, and optimize big data pipelines using Apache Spark (batch and streaming).
- Implement data ingestion, transformation, and processing frameworks to handle structured, semi-structured, and unstructured data.
- Work with NoSQL databases (Cassandra, MongoDB, HBase, DynamoDB, or Couchbase) for large-scale data storage and retrieval.
- Integrate big data solutions with data lakes, cloud platforms (AWS, Azure, GCP), and traditional RDBMS systems.
- Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field.
- 3–6+ years of hands-on experience in Big Data development.
- Strong expertise in Apache Spark (PySpark/Scala/Java) for data processing and optimization.
- Proficiency in NoSQL databases (Cassandra, MongoDB, HBase, DynamoDB)
Big Data Engineer
Posted today
Job Description
Responsibilities
- ETL & Data Pipeline Development (80%): Design and implement ETL processes, data pipeline development, and data warehouse management using Airflow, Spark, and Python/Java.
- Data Extraction (20%): Provide data tables or CSVs as per the requirements of game operators.
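The extract-transform-load split described above can be sketched as a minimal pipeline. This is a stand-in using only the Python standard library with hypothetical game-score data; the production version the posting describes would run as Airflow-scheduled Spark jobs, but the three-stage shape is the same.

```python
import csv
import io

def extract(raw_csv: str) -> list:
    """Extract: parse raw CSV into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list) -> list:
    """Transform: cast types and drop rows that fail a basic quality check."""
    out = []
    for row in rows:
        try:
            out.append({"player": row["player"], "score": int(row["score"])})
        except (KeyError, ValueError):
            continue  # skip malformed rows rather than fail the whole batch
    return out

def load(rows: list) -> dict:
    """Load: aggregate into a dict standing in for a warehouse table."""
    table = {}
    for row in rows:
        table[row["player"]] = table.get(row["player"], 0) + row["score"]
    return table

# Hypothetical input: one row is malformed and is filtered out in transform.
raw = "player,score\nalice,10\nbob,oops\nalice,5\n"
result = load(transform(extract(raw)))
print(result)
```

Keeping each stage a pure function makes the pipeline easy to unit-test and to port stage-by-stage into Spark transformations or Airflow tasks later.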
Qualifications
- Degree in Computer Science or related technical and engineering fields
- Experience with a programming language such as Python, Java, Scala, or Go; SQL is a must.
- Good understanding of various types of databases, including relational databases and distributed databases
- Tableau, Power BI, or Looker experience will be a plus.
- Good analytical and technical skills in building batch or streaming data pipelines for big data
- Business proficiency (written and spoken) in Mandarin to communicate both verbally and in writing with non-English speaking counterparts based in China for gathering requirements and analysis