In a surprising twist for the robotics world, a Chinese enthusiast has spent a hefty 300,000 RMB (about $42,000) on a Unitree G1 humanoid robot, not to have it moonwalk like Michael Jackson, but to take on the glamorous task of washing cars! This extravagant purchase highlights the growing demand for household robots while also raising questions about the future of home automation.
While the Unitree G1 may be overkill for soap suds and squeegees, it has certainly made a splash in the robotics world. The big question remains: who will become the "Henry Ford" of humanoid robots and make these mechanical marvels affordable for ordinary households? Until then, we can only settle for less fancy ways of keeping our rides shiny.
About the company:
- Successful venture-backed AI company, post Series A
- Growing to over 20 team members 
- Building models that focus on strategic thinking for complex problems, where existing models struggle to find high-quality solutions 
- Our tech applies across several industries; we work with industry leaders in gaming, defense, biotech, and others
The role:
- Join a small, high-caliber ML research team
- Focus on state-of-the-art training, deployment, and improvement of large language models 
- Work with our small team of co-founders, researchers, and engineers to tackle problems at the frontier of AI capabilities 
- Specific responsibilities include running inference and fine-tuning on cutting-edge language models; developing tools that help identify, measure, and improve model performance; designing and implementing novel training approaches and architectures; and conducting research into alignment and capability advances
Requirements:
- Strong ML engineering experience
- Experience with large language models (LLMs) or deep learning at scale 
- Track record of training and deploying ML models in production 
- Contributions to open-source ML tools 
- Experience with frameworks like PyTorch and/or TensorFlow 
- Strong software engineering abilities 
- Understanding of ML infrastructure 
Interview process:
- Initial call with our technical co-founder
- Technical interview (1 hour)
- Take-home project (2-3 hours): deploy and run an evaluation on LLMs
- Final discussion with our CEO
Compensation and benefits:
- Competitive base salary: $170-200k+, depending on experience
- Generous equity package: 0.2-0.5% 
- Flexible PTO, health insurance, remote-friendly culture 
My background:
- MSc in Computer Science from Stanford, specialized in Machine Learning (2019)
- 4 years at Google as ML Engineer, working on language model development for Google Assistant 
- 1 year at a small AI startup working on model fine-tuning and deployment pipelines 
- Published 2 papers at NeurIPS on efficient training methods for transformer models 
- Contributor to Hugging Face Transformers library 
- Proficient in PyTorch and JAX; familiar with distributed training on GPU clusters
- Experience with model quantization, distillation, and RLHF techniques 
- GitHub profile shows several open-source projects related to LLM optimization 
I want the letter to be engaging and professional, and to highlight my relevant experience for this specific role. Please draft the full cover letter.
 
 
