Caring for the world, one person at a time, inspires and unites the people of Johnson & Johnson. We embrace innovation - bringing ideas, products and services to life to advance the health and well-being of people around the world. We believe in collaboration which has led to breakthrough after breakthrough, from medical miracles that have changed lives, to the simple consumer products that make every day a little better. Our over 125,000 employees in 60 countries are united in a common mission: To help people everywhere live longer, healthier, happier lives.
The software & data engineering team is a team of young professionals working on complex web, digital, and data solutions for the J&J Technology Services organization and its IT business units. Day to day, this team focuses on software & data engineering, advanced analytics, and automation using technologies and platforms like AWS, Azure, Cloudera Hadoop, Kafka, Neo4j, Spark, Python, Jenkins, Docker, and Kubernetes.
We currently have an internship position for a data engineer.
In general, the intern will work on technical implementations of data engineering solutions focused on big data ingestion, streaming data integration & processing, or the setup of CI/CD and test automation pipelines. Depending on the candidate's field of interest, there's a variety of work and implementations with varying levels of complexity to choose from. During the course of the internship, we give the intern the opportunity to get acquainted with how teams in companies like J&J collaborate on large, complex project implementations in the web, digital, and analytics space.
Responsibilities and duties:
There's a variety of activities for the intern to work on. A few examples:
- End-to-end development in Python of a data ingestion framework using Sqoop, Hive, Impala, Oozie, and YAML configuration files.
- Executing and documenting PoCs on technologies like Airflow, Azure EventHub, Confluent Kafka, StreamSets, Apache NiFi, ...
- Implementation of streaming data pipelines.
- Implementation of CI/CD pipelines for technologies like Cloudera Hadoop, AWS Redshift, Kafka, Talend, Control-M, ...
- ...
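To give a flavor of what configuration-driven ingestion work can look like, here is a minimal, illustrative Python sketch (not J&J's actual framework): it parses a flat YAML-style configuration and assembles a Sqoop import command from it. All table names, paths, and configuration keys below are hypothetical; a real framework would use a full YAML parser such as PyYAML and submit the job through an orchestrator like Oozie.

```python
from textwrap import dedent

# Hypothetical ingestion configuration; a real framework would load a
# YAML file with PyYAML (yaml.safe_load) rather than this inline text.
CONFIG_TEXT = dedent("""\
    source_jdbc: jdbc:oracle:thin:@db-host:1521/ORCL
    source_table: SALES.ORDERS
    target_dir: /data/raw/sales/orders
    num_mappers: 4
""")

def parse_simple_yaml(text: str) -> dict:
    """Parse flat 'key: value' lines (a tiny YAML subset, for illustration)."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            # partition splits only on the first colon, so JDBC URLs survive
            key, _, value = line.partition(":")
            config[key.strip()] = value.strip()
    return config

def build_sqoop_command(config: dict) -> list[str]:
    """Assemble a Sqoop import command line from the parsed configuration."""
    return [
        "sqoop", "import",
        "--connect", config["source_jdbc"],
        "--table", config["source_table"],
        "--target-dir", config["target_dir"],
        "--num-mappers", config["num_mappers"],
    ]

if __name__ == "__main__":
    cfg = parse_simple_yaml(CONFIG_TEXT)
    print(" ".join(build_sqoop_command(cfg)))
```

Keeping table names, connection strings, and parallelism in configuration rather than code is what lets one framework ingest many source tables without per-table code changes.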
Qualifications:
- Well-practiced programming and debugging skills.
- Most importantly, eagerness to learn and apply new technologies and practices.
The candidate must be a registered student for the entire course of the internship. Unfortunately, graduates who are not currently enrolled in a study program are not eligible for this internship. If you are interested in applying for this challenging internship, please send an e-mail to RA-RNDBE-InternBelg@ITS.JNJ.com, including your resume and a short motivation.