267 Hadoop jobs in Kuala Lumpur
Deployment Engineer (Hadoop)
Posted 10 days ago
Job Description
Position Overview
Job Title: Deployment Engineer (Hadoop)
Department: Services Delivery
Reporting To: Regional Head of Service Delivery
The Deployment Engineer is a critical technical role within Tookitaki’s Services Delivery team. This position is responsible for deploying Tookitaki’s FinCense platform across both on-premise and cloud-hosted (CaaS) environments. The role requires a strong understanding of infrastructure, system integrations, and big data technologies to ensure successful deployments for banking and fintech clients.
Position Purpose
The Deployment Engineer ensures smooth and efficient deployment of Tookitaki’s FinCense platform by working closely with internal teams and client stakeholders. This includes end-to-end deployment, configuration, and initial troubleshooting, ensuring the platform is fully functional and integrated into the client’s infrastructure. This role is critical for achieving a seamless transition during the implementation phase, setting the foundation for client success.
Key Responsibilities
1. Deployment of FinCense Platform
- On-Premise Deployments: Install and configure the FinCense platform on the client’s infrastructure; ensure full integration with client systems and databases; conduct rigorous testing to validate deployment success.
2. Cloud (CaaS) Deployments
- Install and configure client tenants on Tookitaki’s cloud-hosted infrastructure (AWS/GCP); ensure scalability and seamless operation within the hosted cloud environment.
3. System Configuration and Integration
- Configure system settings, APIs, and data pipelines to meet client-specific requirements; collaborate with Data Engineers to integrate client data into the platform; optimize system performance for stability and scalability.
4. Client Collaboration and Support
- Work with the Client Enablement team to gather deployment requirements; provide technical guidance to client teams during the deployment phase; act as the primary technical point of contact for deployment-related queries.
5. Post-Deployment Validation
- Conduct end-to-end system tests to validate platform performance and functionality; resolve deployment issues and ensure the platform meets agreed-upon SLAs; document deployment processes and configurations for future reference (a minimal scripted health check of this kind is sketched after this list).
6. Cross-Functional Collaboration
- Collaborate with Product Management and Engineering teams to address technical challenges during deployment; provide feedback to improve product features and deployment efficiency; support the handover to Client Enablement and Support teams.
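Purely as an illustration of the post-deployment validation and scripting work described above, a minimal smoke test might look like the sketch below. The service names and health-check URLs are placeholders for this example, not actual FinCense endpoints.

# Hypothetical post-deployment smoke test; services and URLs are placeholders.
import sys
import urllib.request

# Endpoints to probe after an installation; adjust to the client environment.
CHECKS = {
    "api-gateway": "http://localhost:8080/health",
    "hdfs-namenode": "http://localhost:9870",
    "spark-history": "http://localhost:18080",
}

def probe(name, url, timeout=5):
    """Return True if the endpoint answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except OSError:
        ok = False
    print(f"{name:15s} {'OK' if ok else 'FAIL'}  {url}")
    return ok

if __name__ == "__main__":
    results = [probe(name, url) for name, url in CHECKS.items()]
    sys.exit(0 if all(results) else 1)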
Qualifications and Skills
- Education: Required: Bachelor’s degree in Computer Science, IT, or a related technical field; Preferred: Master’s degree in IT, Cloud Computing, or Big Data Analytics.
- Experience: Minimum 6 years of experience in system deployment, cloud computing, or IT infrastructure; proven expertise deploying SaaS platforms or big data systems for financial or regulated industries.
- Technical Expertise (all required):
- Big Data Technologies: Hadoop, Spark, Hive, Kubernetes, and Docker.
- Cloud Infrastructure: Hands-on experience with AWS (preferred) or GCP, including EC2, S3, and VPC configuration.
- System Integration: Proficient in integrating systems via APIs, connectors, and middleware solutions.
- Scripting and Automation: Proficiency in Python, Bash, or PowerShell.
- Soft Skills: Excellent problem-solving and troubleshooting skills; strong communication to interact with technical and non-technical stakeholders; ability to manage multiple deployments with deadlines.
- Preferred: Certifications in AWS, Kubernetes, or Big Data technologies; AML and fraud detection systems experience is a strong plus.
Key Competencies
- Client-Centric Approach; Technical Acumen in big data and cloud technologies; Collaboration across cross-functional teams; Ownership of deployment activities; Adaptability in dynamic environments.
Benefits
- Competitive salary; professional development opportunities; comprehensive benefits including health insurance and flexible working options; growth opportunities within Tookitaki’s Services Delivery team.
About Tookitaki
Tookitaki is transforming financial services by building a robust trust layer focused on fraud prevention and AML compliance. Our solutions leverage collaborative intelligence and federated AI for real-time detection and regulatory compliance.
Big Data Hadoop Developer
Posted 2 days ago
Job Description
Job Summary:
We are looking for a Big Data Hadoop Developer to design, develop, and maintain large-scale data processing solutions. The ideal candidate should have strong hands-on experience with the Hadoop ecosystem and integration with relational databases such as MariaDB or Oracle DB for analytics and reporting.
Key Responsibilities:
- Design, develop, and optimize Hadoop-based big data solutions for batch and real-time data processing.
- Work with data ingestion frameworks to integrate data from MariaDB/Oracle DB into Hadoop (Sqoop, Apache NiFi, Kafka); an illustrative ingestion sketch follows this list.
- Implement Hive, Spark, and MapReduce jobs for data transformation and analytics.
- Optimize Hive queries, Spark jobs, and HDFS usage for performance and cost efficiency.
- Create and maintain ETL pipelines for structured and unstructured data.
- Troubleshoot and resolve issues in Hadoop jobs and database connectivity.
- Collaborate with BI, analytics, and data science teams for data provisioning.
- Ensure data security, governance, and compliance in all solutions.
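As a rough illustration of the ingestion responsibility above, the sketch below pulls a table from MariaDB over JDBC with PySpark and lands it in HDFS as Parquet. The host, credentials, table, and paths are assumptions for the example; in practice the same flow could equally be built with Sqoop or NiFi.

# Minimal PySpark ingestion sketch: MariaDB table -> HDFS Parquet.
# Connection details, table name, and output path are illustrative assumptions.
# The MariaDB JDBC driver jar must be on the Spark classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("mariadb_ingest")
    .enableHiveSupport()
    .getOrCreate()
)

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mariadb://db-host:3306/sales")
    .option("driver", "org.mariadb.jdbc.Driver")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "etl_password")
    # Split the read into parallel partitions on a numeric key.
    .option("partitionColumn", "order_id")
    .option("lowerBound", "1")
    .option("upperBound", "10000000")
    .option("numPartitions", "8")
    .load()
)

# Land the raw extract in HDFS, partitioned by order date for downstream Hive/Spark use.
(
    orders.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("hdfs:///data/raw/sales/orders")
)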
Technical Skills:
- Big Data Ecosystem: Hadoop (HDFS, YARN), Hive, Spark, Sqoop, MapReduce, Oozie, Flume.
- Databases: MariaDB and/or Oracle DB (SQL, PL/SQL).
- Programming: Java, Scala, or Python for Spark/MapReduce development.
- Data Ingestion: Sqoop, Kafka, NiFi (for integrating RDBMS with Hadoop).
- Query Optimization: Hive tuning, partitioning, bucketing, indexing (an illustrative partitioning/bucketing sketch follows this list).
- Tools: Ambari, Cloudera Manager, Git, Jenkins.
- OS & Scripting: Linux/Unix shell scripting.
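To make the query-optimization point above concrete, here is a hedged sketch of a partitioned, bucketed Hive table created through Spark SQL, followed by a query that can prune partitions. The table and column names are invented for the example, and real tuning choices would depend on data volumes and the cluster.

# Illustrative Hive tuning sketch via Spark SQL: partitioning + bucketing.
# Table and column names are invented; assumes a Hive metastore is available.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive_tuning_demo")
    .enableHiveSupport()
    .getOrCreate()
)

# Partition by event_date so date-filtered queries scan only matching partitions;
# bucket by customer_id to help joins and sampling on that key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS events_demo (
        customer_id BIGINT,
        event_type  STRING,
        amount      DOUBLE
    )
    PARTITIONED BY (event_date DATE)
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC
""")

# A filter on the partition column lets the engine prune all other partitions.
daily_totals = spark.sql("""
    SELECT event_type, SUM(amount) AS total_amount
    FROM events_demo
    WHERE event_date = DATE '2024-01-15'
    GROUP BY event_type
""")
daily_totals.show()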
Soft Skills:
- Strong analytical skills and problem-solving abilities.
- Good communication skills for working with cross-functional teams.
- Ability to manage priorities in a fast-paced environment.
Nice to Have:
- Experience with cloud-based big data platforms (AWS EMR, Azure HDInsight, GCP Dataproc).
- Knowledge of NoSQL databases (HBase, Cassandra).
- Exposure to machine learning integration with Hadoop/Spark.
ETL Developer (Informatica/Hadoop)
Posted 10 days ago
Job Description
Overview
Maybank, Federal Territory of Kuala Lumpur, Malaysia
Responsibilities
- Troubleshoot errors on Informatica, Oracle Data Integrator (ODI), Teradata, and Hadoop platforms, and support applications built on top of them.
- Strong problem-solving knowledge of data platforms such as Hadoop, Teradata, and SQL.
- Monitor jobs/workflows and work with technical teams to derive permanent fixes.
- Hands-on experience building and troubleshooting Informatica mappings.
- Working experience handling and managing ETL processes and data warehousing platforms.
- Hands-on experience writing, debugging, and testing shell scripts.
- Hands-on experience with Teradata utilities such as BTEQ, FastLoad, MultiLoad, and TPT.
- Experience writing, debugging, and testing Hive scripts.
- Hands-on experience with a job scheduling tool.
- Ability to logically prioritize tasks and schedule work accordingly.
- Deep understanding of data warehouse concepts.
- Hands-on experience with incident management, problem management, and change management processes.
- Strong team player, also able to work as an individual contributor.
- Ability to interact with both technical and non-technical users and address their queries.
- Strong analytical and problem-solving skills.
- Good working experience in a support role.
- Flexibility with working hours.
Requirements
- Hands-on experience with Informatica and Oracle Data Integrator.
- Big Data/Hadoop.
- Java/Python.
- UNIX shell scripting.
- Experience with any scheduling tool.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Banking
Deployment Engineer- MY (Big Data / Hadoop Admin)
Posted 7 days ago
Job Description
Position Overview
Job Title: Deployment Engineer
Department: Services Delivery
Reporting To: Regional Head of Service Delivery
The Deployment Engineer is a critical technical role within Tookitaki’s Services Delivery team. This position is responsible for deploying Tookitaki’s FinCense platform across both on-premise and cloud-hosted (CaaS) environments. The role requires a strong understanding of infrastructure, system integrations, and big data technologies to ensure successful deployments for banking and fintech clients.
Position Purpose
The Deployment Engineer ensures smooth and efficient deployment of Tookitaki’s FinCense platform by working closely with internal teams and client stakeholders. This includes end-to-end deployment, configuration, and initial troubleshooting, ensuring the platform is fully functional and integrated into the client’s infrastructure.
This role is critical for achieving a seamless transition during the implementation phase, setting the foundation for client success.
Key Responsibilities
1. Deployment of FinCense Platform
On-Premise Deployment:
Install and configure the entire FinCense platform on the client’s infrastructure.
Ensure full integration with the client’s existing systems and databases.
Conduct rigorous testing to validate deployment success.
CaaS Deployment:
Install and configure client tenants on Tookitaki’s cloud-hosted infrastructure (AWS/GCP).
Ensure scalability and seamless operation within Tookitaki’s hosted cloud environment.
2. System Configuration and Integration
Configure system settings, APIs, and data pipelines to meet client-specific requirements.
Collaborate with Data Engineers to integrate client data into the platform.
Optimize system performance to ensure stability and scalability.
3. Client Collaboration and Support
Work closely with the Client Enablement team to gather deployment requirements and ensure alignment with client needs.
Provide technical guidance to client teams during the deployment phase.
Act as the primary technical point of contact for all deployment-related queries.
4. Post-Deployment Validation
Conduct end-to-end system tests to validate the platform’s performance and functionality.
Resolve any deployment issues and ensure the platform meets agreed-upon SLAs.
Document deployment processes and configurations for future reference.
5. Collaboration with Cross-Functional Teams
Work closely with Product Management and Engineering teams to address technical challenges during deployment.
Provide feedback on deployment experiences to improve product features and deployment efficiency.
Support the Client Enablement and Support teams during the handover process.
Qualifications and Skills
Education
Required: Bachelor’s degree in Computer Science, IT, or a related technical field.
Preferred: Master’s degree in IT, Cloud Computing, or Big Data Analytics.
Experience
Minimum: 6 years of experience in system deployment, cloud computing, or IT infrastructure roles.
Proven expertise in deploying SaaS platforms or big data systems for financial or regulated industries.
Technical Expertise
Big Data Technologies: Strong knowledge of Hadoop, Spark, Hive, Kubernetes, and Docker.
Cloud Infrastructure: Hands-on experience with AWS (preferred) or GCP, including EC2, S3, and VPC configurations.
System Integration: Proficient in integrating systems via APIs, connectors, and middleware solutions.
Scripting and Automation: Proficiency in scripting languages such as Python, Bash, or PowerShell.
Soft Skills
Excellent problem-solving and troubleshooting skills.
Strong communication skills to interact with technical and non-technical stakeholders.
Ability to manage multiple deployments simultaneously while meeting deadlines.
Preferred
Certifications in AWS, Kubernetes, or Big Data technologies.
Experience with AML and fraud detection systems is a strong plus.
Key Competencies
Client-Centric Approach: Focused on delivering high-quality deployments tailored to client needs.
Technical Acumen: Expertise in big data and cloud technologies to ensure flawless deployments.
Collaboration: Works effectively with cross-functional teams to ensure deployment success.
Ownership: Takes full responsibility for deployment activities and outcomes.
Adaptability: Thrives in dynamic environments with changing requirements.
Success Metrics
Deployment Accuracy: 100% of deployments completed successfully without post-deployment issues.
Timeliness: Deployments delivered within agreed timelines for both on-premise and CaaS clients.
System Performance: Achieve target SLAs for platform performance and stability post-deployment.
Client Satisfaction: Positive feedback from clients on deployment experience and system functionality.
Knowledge Sharing: Maintain and share deployment documentation to improve team efficiency.
AVP, Data Integration (pySpark, Nifi, Hadoop)
Posted 4 days ago
Job Description
Maybank WP. Kuala Lumpur, Federal Territory of Kuala Lumpur, Malaysia
- Implement ETL systems that are operationally stable, efficient, and automated. This includes technical solutions that are scalable, aligned with the enterprise architecture, and adaptable to business changes.
- Collaborate with internal and external teams to define requirements for data integrations, specifically for Data Warehouse/Data Marts implementations.
Responsibilities of the Role
- Review business and technical requirements to ensure the data integration platform meets specifications.
- Apply industry best practices for ETL design and development.
- Produce technical design documents, system testing plans, and implementation documentation.
- Conduct system testing: execute job flows, investigate and resolve system defects, and document results.
- Work with DBAs, application specialists, and technical support teams to optimize ETL system performance and meet SLAs.
- Assist in developing, documenting, and applying best practices and procedures.
Requirements of the Role
- Strong SQL writing skills are required.
- Familiarity with ETL tools such as pySpark, NiFi, Informatica, and Hadoop is preferred.
- Understanding of data integration best practices, including master data management, entity resolution, data quality, and metadata management.
- Experience with data warehouse architecture, source system data analysis, and data profiling (a small profiling sketch follows this list).
- Ability to work effectively in a fast-paced, adaptive environment.
- Financial domain experience is a plus.
- Ability to work independently and communicate effectively across various levels, including product owners, executive sponsors, and team members.
- Experience working in an Agile environment is advantageous.
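As a hedged illustration of the source-system analysis and profiling mentioned above, a quick PySpark pass over a staged extract might compute row counts plus null and distinct counts per column before the ETL design is finalized. The input path and columns are assumptions for the example.

# Minimal source-data profiling sketch in PySpark; the input path is an assumption.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("source_profiling").getOrCreate()

# Assumed staging location for a source-system extract.
df = spark.read.parquet("hdfs:///data/raw/source_extract")

print(f"rows: {df.count()}")

# One aggregate pass: null count and approximate distinct count per column.
profile = df.agg(*(
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}__nulls") for c in df.columns]
    + [F.approx_count_distinct(c).alias(f"{c}__distinct") for c in df.columns]
))
profile.show(truncate=False)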
Qualifications
- Bachelor’s Degree in Computer Science, Information Technology, or equivalent.
- Over 5 years of total work experience, with experience programming ETL processes using Informatica, NiFi, pySpark, and Hadoop.
- At least 4 years of experience in data analysis, profiling, and designing ETL systems/programs.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Banking