Key information
Big Data Engineer - Apache Kafka and Big Data Technologies
Position: Not specified
Start: As soon as possible
End: 31 December 2025
City: Ludwigshafen am Rhein, Germany
Type of cooperation: Project work only
Hourly rate: CZK 2,375
Last updated: 1 November 2024
Task description and requirements
Responsibilities:
Designing and implementing real-time data processing workflows using Apache Kafka.
Developing and managing data pipelines with Kafka Streams, Kafka Connect, and KSQL to facilitate data flow between systems.
Writing and maintaining code in Java, Python, and Scala for data integration and transformation tasks.
Ensuring efficient data processing and integration with big data technologies for large-scale data handling.
Troubleshooting data pipeline issues, optimizing performance, and resolving any data flow interruptions or bottlenecks.
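For illustration, the Kafka Connect pipeline work described above typically revolves around connector configurations like the following minimal sketch of a JDBC source connector; the connector name, connection URL, table, and topic prefix are placeholders, not details from this posting:

```json
{
  "name": "orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/shop",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "orders",
    "topic.prefix": "pg."
  }
}
```

A config of this shape is submitted to the Connect REST API, after which Connect streams new rows from the table into a Kafka topic without custom code.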
Skills:
Proficiency in Apache Kafka, including Kafka Streams, Kafka Connect, and KSQL.
Strong programming skills in Java, Python, and Scala.
Experience in real-time data processing and big data technologies.
Problem-solving and troubleshooting skills, particularly with data pipeline performance.
Ability to optimize and fine-tune data flow processes for efficiency and reliability.
Tools:
Apache Kafka (Kafka Streams, Kafka Connect, KSQL).
Programming languages: Java, Python, Scala.
Big data technologies and processing frameworks (e.g., Hadoop, Spark).
Performance monitoring and debugging tools for real-time data processing.
Collaboration tools (e.g., Microsoft Teams, Slack).
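As a sketch of the kind of real-time transformation KSQL is used for in such a role, the following hypothetical statements declare a stream over a topic and derive a filtered, transformed stream from it (all stream, column, and topic names are illustrative assumptions):

```sql
-- Register an existing Kafka topic as a queryable stream
CREATE STREAM orders_raw (id BIGINT, amount DOUBLE, country VARCHAR)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- Continuously derive a filtered, transformed stream
CREATE STREAM orders_de AS
  SELECT id, amount * 0.92 AS amount_eur
  FROM orders_raw
  WHERE country = 'DE';
```

The second statement runs as a persistent query: every new record on the source topic is filtered and transformed as it arrives, which is the day-to-day substance of the stream-processing duties listed above.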
—
Start: 11/2024
Duration: 12 Months+
Location: REMOTE