Research Scientist - Acoustic and Multi-Modal Scene Understanding
At Meta’s Reality Labs Research, our goal is to build world-class consumer virtual, augmented, and mixed reality experiences. Come work alongside industry-leading scientists and engineers to create the technology that makes VR, AR, and smart wearables pervasive and universal. We are developing all the technologies needed to enable breakthrough Smartglasses, AR glasses, and VR headsets, including optics and displays, computer vision, audio, graphics, brain-computer interfaces, haptic interaction, eye/hand/face/body tracking, perception science, and true telepresence.
Responsibilities
1. Design innovative solutions for challenging multi-modal egocentric recognition problems under resource constraints.
2. Communicate research results internally and externally in the form of technical reports and scientific publications.
3. Implement state-of-the-art models and techniques in PyTorch, TensorFlow, or other frameworks.
4. Identify, motivate, and execute on medium- to large-scale hypotheses for model improvements, drawing on data analysis and domain knowledge.
5. Design, perform, and analyze online and offline experiments with specific hypotheses in mind.
6. Generate reliable, correct training data with great attention to detail.
7. Identify and debug common issues in training machine learning models.
8. Design acoustic or audio-visual models with a small computational footprint on mobile devices and wearables.
Minimum Qualifications
1. PhD or postdoctoral experience in Deep Learning, Machine Learning, Computer Vision, Computer Science, or a related field.
2. 4+ years of experience developing and implementing signal processing and deep learning algorithms.
3. 4+ years of experience with scientific programming languages such as Python or C++.
4. 3+ years of experience with research-oriented software engineering.
5. Demonstrated experience implementing and evaluating end-to-end prototype learning systems.
6. Ability to independently resolve most online and offline issues affecting hypothesis testing.
7. Experience in communicating effectively with a broad range of stakeholders.
Preferred Qualifications
1. Experience with audio-visual learning, computer vision, and source localization.
2. Experience with building low-complexity models for low-power mobile devices.
3. Proven track record of achieving significant results and driving innovation.
About Meta
Meta builds technologies that help people connect and grow businesses. We are moving beyond 2D screens toward immersive experiences like augmented and virtual reality.
Meta is proud to be an Equal Employment Opportunity employer. We do not discriminate based on any legally protected characteristics.
Meta is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures.
Apply for this job and take the first step toward a rewarding career at Meta.