Senior Data Management Professional - Data Engineering - Private Credit

Bloomberg runs on data. Our products are fueled by powerful information. We combine data and context to paint the whole picture for our clients, around the clock - from around the world. In Data, we are responsible for delivering this data, news and analytics through innovative technology - quickly and accurately. We apply problem-solving skills to identify innovative workflow efficiencies, and we implement technology solutions to enhance our systems, products and processes - all while providing platinum customer support to our clients.
Our Team:
Our Securities Credit Data teams are responsible for the timely and accurate entry of new bonds and loans being brought to market, secondary market review and updates, as well as extensive client-facing support. In addition, responsibilities include developing new data pipelines, implementing data validations, reviewing processes for efficiencies, and finding opportunities to expand data sets. Our Private Credit team works closely with internal partners, including the Engineering, Core Product, News and Enterprise teams.
We are committed to delivering best-in-class quality and improving and expanding our Private Credit offerings through a deep understanding of market dynamics alongside our clients' current and future needs.
What's the role?
As a Data Engineer for Private Credit, you will use your domain expertise to manage and improve the financial data that feeds Bloomberg products, identify innovative workflow efficiencies, implement technical solutions to enhance our systems, products and processes, and establish relationships with key players in the financial markets.
You will be responsible for the data management of publicly filed Private Credit data from BDCs (Business Development Companies), as well as for creating pipelines from additional sources and establishing and implementing our automation, data alignment and data quality strategies. You will ensure the best use of our existing Private Credit data and lead the technical implementation of onboarding new datasets. You will use your experience to define what a fit-for-purpose data product looks like. You will plan and deliver the data modeling and technology strategy and roadmap, which should align with the overall product vision.
We'll trust you to:
- Develop data-driven strategies, balancing the best of technical, product, financial and dataset knowledge, and work with our engineering and product departments to craft solutions
- Be responsible for the end-to-end data ingestion of Ownership datasets, including long and short position holdings data, transactional data, and derived datasets such as the free float field
- Analyze internal processes to find opportunities for improvement, and implement innovative solutions
- Design data models to create data storage solutions for our raw and enriched data, and contribute to our data alignment strategy by following the FAIR data principles
- Develop data quality strategies for the Ownership datasets and implement scalable, lasting solutions
- Use data visualization skills to report on the results of ongoing operations and projects, as required
You'll need to have:
*Please note we use years of experience as a guide, but we certainly will consider applications from all candidates who are able to demonstrate the skills necessary for the role.
- 3+ years of programming experience in a development and/or production environment*
- Proficiency using scripting languages to build pre-processing services that can be integrated into our data pipelines
- A proven grasp of data modeling principles and technologies to perform requirements analysis as well as conceptual, logical, and physical modeling
- Experience profiling new datasets and implementing high-volume, low-latency ETL pipelines
- An understanding of standard data quality methodologies to improve the value of a dataset
- A proven track record of effective project management and a customer-focused mentality
We'd like to see:
- An advanced degree (Master's or equivalent experience) in a Finance or STEM subject, and/or a CFA designation (or working towards it)
- Familiarity with use cases of sophisticated statistical methods such as Machine Learning, Artificial Intelligence, and Natural Language Processing
- Experience designing optimal database structures
- Experience working in an Agile development team