Regular Data Engineer
SAP
Hybrid
Regular employment
4 - 6 years of experience
Full Time
Sofia, Bulgaria
Description
The development center of SAP in Bulgaria plays a key role in defining and developing the SAP Business Technology Platform. With more than 1400 professionals, SAP Labs Bulgaria also makes strong contributions to life-cycle management and user interface & user experience across the broader portfolio of SAP products, and has recently welcomed new teams focusing on business applications. Over its 24-year history, the company has established itself as a preferred employer in the IT sector in Bulgaria.
ABOUT US
The purpose of the SAP CPIT Product, Data and Technology organization is to "Deliver and Run the Intelligent Enterprise". CPIT is SAP's technology backbone and most compelling reference customer, and as such we influence SAP product development and strategy, with customer experience at heart. To support the updated SAP AI Strategy and the company's vision on Generative AI, we need to strengthen our team to accelerate delivery of Data & AI Cloud Solutions.
THE ROLE
We’re looking for a Data Engineer with an entrepreneurial spirit to join our Enterprise AI CoE team and work on innovation projects. As a Data Engineer, you will take responsibility for the development and deployment of new functionality for the company's data platforms. You will have the opportunity to practice and develop in-depth knowledge of the following technologies:
- Azure-based Cloud platform: Azure Data Lake, Databricks.
- Data Engineering with Delta Lake, PySpark, Databricks SQL.
- Applications with SAP Business Technology Platform (HANA Cloud, SAP Analytics Cloud).
You will work in an agile development environment (SAFe) following DevOps best practices (GitHub, Jenkins CI/CD).
As a Regular Data Engineer, you will:
- Contribute to the implementation of best-in-class services for our SAP Cloud Data & ML Platform, industrializing end-to-end data engineering pipelines.
- Get involved in all aspects of the entire solution lifecycle, from idea creation to productization.
- Utilize Azure and Azure Databricks to evolve and extend our Data Pipeline platform services.
- Demonstrate excellent skills in product design, architecture, and development.
- Implement high-quality ideas efficiently, with a focus on technical expertise and attention to detail.
What you will bring
- Bachelor’s or Master’s degree in Business Informatics, Computer Science, Engineering, or a related technical field.
- Ability to communicate complex concepts in simple terms to a non-technical audience.
- Strong analytical and critical-thinking skills.
- 4+ years of experience in Big Data and data engineering pipeline design.
- Strong proficiency in Python and Git.
- Experience in Cloud & AI technologies (e.g. Microsoft Azure, Databricks, Spark).
- Hands-on experience with Jenkins to automate software development processes.
- Deep understanding of security design concepts and their integration into system architecture.