Research Scientist, PRIOR - Seattle, WA
Job Description
Hybrid Work Arrangement
Persons in these roles are expected to spend part of their time on-site in our Seattle offices and may occasionally work remotely from their homes in the Greater Seattle area. On-site requirements vary by position and team. If you have questions about hybrid work arrangements for this role, please ask your recruiter.

Compensation Range: $167,030 - $260,570

Who You Are:
As a Research Scientist on our team, you'll have the unique opportunity to work on research problems that can improve vision and multimodal models, contribute to the development of completely open foundation models, and experiment with mobile robots in the real world. You will contribute to projects similar to our recent Molmo model release. We are looking for applicants interested in computer vision, natural language processing, machine learning, robotics, agents, and/or embodied AI. International candidates are welcome to apply. Pay is competitive, and visa sponsorship is available.

We collaborate frequently with researchers on other Ai2 teams focused on natural language processing and HCI, among others. Our main office has close ties and unique access to researchers at the University of Washington, only 1.5 miles away. Researchers excited about such internal and external collaborations are encouraged to apply.

Research Scientists on the PRIOR team will work on multiple exciting research projects in the areas below, and will also have the opportunity to contribute to and engage in a variety of other activities critical to our research and mission. These include opportunities to lead select projects, mentor interns and predoctoral candidates, author and present scientific papers for peer-reviewed journals and conferences, and collaborate with researchers at other organizations, including universities.

We are considering applications on a rolling basis. The deadline for the first round of applications is March 15, 2025.

Ai2's Perceptual Reasoning and Interaction Research team (PRIOR) is looking for strong researchers to join our team. PRIOR seeks to advance computer vision to create AI systems that see, explore, learn, reason, and interact with the world.

Research Areas:
Embodied AI / Robotics: Enable agents that safely navigate, manipulate objects, and follow instructions in simulation and in the real world (with real robots).
Language & Vision: Make vision and language models more general, efficient, and capable.
Agents: Develop multimodal agents that automate real user tasks.
AI for Common Good: Apply AI to help address global challenges such as climate change, illegal fishing, and wildlife poaching.

We regularly publish in high-profile conferences and journals in computer vision (CVPR, ICCV, ECCV), robotics (CoRL, RSS, IROS, ICRA), machine learning (e.g., NeurIPS, ICLR), and NLP (e.g., ACL, EMNLP), among others. At Ai2, you will be working with world-class AI researchers and talented software engineers. We perform team-based, ambitious research, with a mandate to strive for big breakthroughs, not just incremental progress, in an exciting and interactive workplace.

What You'll Need:
Qualifications:
A strong foundation (typically PhD level) in one or more of the following areas: computer vision, machine learning, (M-)LLMs, foundation models, robotics, embodied AI, natural language processing, knowledge representation and reasoning, and agents.
A strong publication record in AI-related areas. Example venues include CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, CoRL, RSS, ICRA, ACL, EMNLP, etc. Contributions to research communities (e.g., workshop organization, tutorials) are a plus.
Strong software engineering skills. Experience with deep learning frameworks (e.g., PyTorch, TensorFlow, JAX).

Bonus Qualifications:
Research experience in related areas such as training dynamics, efficiency, data curation, large-scale multi-node training, multimodal model debugging, post-training methods (e.g., SFT, RLHF, DPO, PPO), reinforcement learning, imitation learning, real-world robot experimentation, and simulation-based training, among others. Contributions to open-source research libraries (e.g., AllenNLP, AllenAct, AI2-THOR) are a plus.

Physical Demands and Work Environment:
The physical demands described here are representative of those that must be met by a team member to successfully perform the essential functions of this position. Reasonable accommodations may be made to enable individuals with disabilities to perform these functions. Must be able to remain in a stationary position for long periods of time. The ability to communicate information and ideas so others will understand. Must be able to exchange accurate information in these situations. The ability to observe details at close range.

A Little More About Ai2:
The Allen Institute for Artificial Intelligence (Ai2) is a non-profit research institute in Seattle founded by Paul Allen. The core mission of Ai2 is to contribute to humanity through high-impact research in artificial intelligence.

Ai2 is proud to be an Equal Opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.

This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.

We are committed to providing reasonable accommodations to employees and applicants with disabilities to the full extent required by the Americans with Disabilities Act (ADA). If you feel you need a reasonable accommodation pursuant to the ADA, you are encouraged to contact us.