<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:language="http://purl.org/dc/elements/1.1/language" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>RoboHorizon Robot Magazine - AI you can touch</title><link>https://robohorizon.uk/en-gb/</link><description>A compass for modern technologies, primarily robotics, serving both business and private sectors with fresh news, comprehensive analyses, and tests.</description><generator>Hugo -- gohugo.io</generator><language>en-gb</language><lastBuildDate>Sat, 14 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://robohorizon.uk/en-gb/index.xml" rel="self" type="application/rss+xml"/><item><title>Ex-Googler's Robot 'Emma' Hunts for Stealthy Viruses in Vineyards</title><link>https://robohorizon.uk/en-gb/news/2026/03/emma-vineyard-virus-robot/</link><pubDate>Fri, 13 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/emma-vineyard-virus-robot/</guid><description>Budbreak's autonomous robot, Emma, is now deployed in 14 vineyards, using AI to detect diseases early and improve crop yields, revolutionizing sustainable farming.</description><content:encoded>&lt;p&gt;While the tech world remains fixated on bipedal robots performing clumsy pirouettes for the cameras, a far more practical revolution is quietly trundling through the vineyards of America. Jonathan Moon, a robotics heavyweight with a &lt;strong&gt;Google&lt;/strong&gt; pedigree, has spent the past year perfecting &lt;strong&gt;Emma&lt;/strong&gt;: an autonomous rover designed not to pick grapes, but to sniff out the diseases that kill them long before they’re even visible to the human eye.&lt;/p&gt;
&lt;p&gt;The machine is the debut offering from Moon’s new ag-tech venture, &lt;strong&gt;Budbreak&lt;/strong&gt;. According to a recent update from Moon, Emma is already putting in the hard yards across 14 vineyards and orchards in California and New York. Functioning as a high-tech &amp;ldquo;AI scout,&amp;rdquo; the robot meticulously scans every vine for the earliest whispers of viral infection and other biological threats, all while keeping a digital tally of crop yields. It’s a sharp pivot from Moon’s previous life developing robotics for strawberry fields at Google. As he wryly noted, &amp;ldquo;grapes have converted me from strawberries.&amp;rdquo;&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/jmoonio/status/2031865002295169343"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;Emma is more than just a glorified, high-tech scarecrow; she is a critical linchpin for the future of precision agriculture. For viticulturists, disease is a relentless, wallet-draining nightmare that can ruin everything from the sheer volume of the harvest to the delicate profile of the final pour. By detecting these issues in their infancy, Emma allows farmers to intervene with surgical precision, salvaging crops and significantly bolstering the sustainability of their operations.&lt;/p&gt;
&lt;p&gt;This kind of early-warning system is a genuine game-changer for the industry. Research from institutions like Cornell University suggests that advanced robotic monitoring could slash the need for pesticides by up to 90%—a staggering win for environmental stewardship. As chronic labour shortages and spiralling costs continue to squeeze the agricultural sector, autonomous solutions like Emma are rapidly shifting from futuristic curiosities to essential kit for the modern grower.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>AI</category><category>agricultural-robotics</category><category>agtech</category><category>ai</category><category>vineyard</category><category>sustainability</category><category>budbreak</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-13-image-beb7a054.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Geely's ADAS is first Chinese system to win EU certification</title><link>https://robohorizon.uk/en-gb/news/2026/03/geely-adas-eu-certification/</link><pubDate>Fri, 13 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/geely-adas-eu-certification/</guid><description>Chinese car giant Geely has secured landmark EU certification for its G-ASD assisted driving tech, clearing the road for a European roll-out.</description><content:encoded>&lt;p&gt;In a move that should have the European old guard checking their rear-view mirrors with increasing anxiety, the Chinese automotive titan &lt;strong&gt;Geely Holding Group&lt;/strong&gt; announced on Friday that its smart driving assistance system has officially cleared the EU’s regulatory hurdles. This marks a watershed moment: the first time a Chinese-developed advanced driver-assistance system (ADAS) has met the bloc’s notoriously stringent safety and technical requirements.&lt;/p&gt;
&lt;p&gt;The system, dubbed &lt;strong&gt;G-ASD (Geely-Advanced Driving Assistance System)&lt;/strong&gt;, has been given the regulatory green light, allowing vehicles equipped with the tech to be sold across certain EU markets without the headache of additional local certification. While Geely is playing its cards close to its chest regarding which specific version of G-ASD clinched the deal, the approval is a massive tactical and political win. It essentially greases the wheels for the company’s expansion into one of the most fiercely contested car markets on the planet.&lt;/p&gt;
&lt;h4 id="why-this-actually-matters"&gt;Why this actually matters&lt;/h4&gt; &lt;p&gt;This is about far more than just a bit of paperwork and a new certificate for the trophy cabinet. Geely’s success is a loud-and-clear signal that Chinese automotive software has matured to a level of sophistication that can satisfy some of the world’s most demanding regulators. For years, the ADAS sector has been the private playground of Western and Israeli tech firms, but this certification kicks the door off its hinges for other Chinese powerhouses like BYD, NIO, and XPeng.&lt;/p&gt;
&lt;p&gt;It fundamentally shifts the competitive landscape, proving that Chinese carmakers can navigate the labyrinth of international standards just as well as—if not better than—the incumbents. The message being sent from Hangzhou to the boardrooms in Wolfsburg and Stuttgart is unmistakable: the tech race is well and truly on, and China has moved out of the inside lane. European drivers can likely expect a fresh wave of feature-heavy, competitively priced vehicles arriving on these shores much sooner than the traditional manufacturers would like.&lt;/p&gt;</content:encoded><category>News</category><category>Automotive</category><category>AI</category><category>geely</category><category>adas</category><category>autonomous-driving</category><category>eu-regulation</category><category>china</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-13-image-09c408ac.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Renault Deploys 350 Headless Humanoids to Factory Floors</title><link>https://robohorizon.uk/en-gb/news/2026/03/renault-350-humanoid-robots/</link><pubDate>Fri, 13 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/renault-350-humanoid-robots/</guid><description>Renault is deploying 350 Calvin-40 humanoid robots from French startup Wandercraft to tackle repetitive, strenuous tasks in its automotive plants.</description><content:encoded>&lt;p&gt;In a move that suggests the humanoid revolution on the factory floor is shifting from a crawl to a sprint, &lt;strong&gt;Renault Group&lt;/strong&gt; has confirmed it is set to deploy 350 bipedal robots from French startup &lt;strong&gt;Wandercraft&lt;/strong&gt; over the next 18 months. The automotive giant, which has also snapped up a minority stake in the robotics firm, is putting the decidedly headless &lt;strong&gt;Calvin-40&lt;/strong&gt; humanoid to work, starting with the heavy lifting of tyres and various components at its Douai plant in France.&lt;/p&gt;
&lt;p&gt;Wandercraft, a firm founded in 2012 that built its reputation on world-class medical exoskeletons designed to help those with mobility impairments walk again, has pivoted its expertise toward the industrial sector. The Calvin-40 was reportedly brought to life in a mere 40 days, a feat made possible by leveraging over a decade of research into self-balancing robotics. Its design is unapologetically functional: a headless torso perched on two legs, featuring modular hands that can be swapped for grippers or suction cups depending on the shift&amp;rsquo;s requirements. The robot relies on a blend of advanced computer vision and AI-driven reasoning to navigate and operate autonomously within environments originally built for humans.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/TheHumanoidHub/status/2032150269199597985"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;This partnership is very much a two-way street. Renault provides a massive, real-world testing ground and the industrial muscle needed to help Wandercraft scale production and drive down costs. In return, Renault secures a fleet of tireless workers designed to liberate human staff from what the company describes as &amp;ldquo;painful and non-ergonomic tasks.&amp;rdquo; This deployment marks one of the largest confirmed orders for humanoid robots in the automotive sector—an industry that is rapidly becoming the primary proving ground for this technology.&lt;/p&gt;
&lt;h4 id="why-this-matters"&gt;Why this matters&lt;/h4&gt; &lt;p&gt;While the likes of &lt;strong&gt;BMW&lt;/strong&gt;, &lt;strong&gt;Mercedes-Benz&lt;/strong&gt;, and &lt;strong&gt;Tesla&lt;/strong&gt; are all currently flirting with humanoids from partners like Figure AI and Apptronik, Renault’s 350-strong order represents a significant escalation. We are moving past the &amp;ldquo;pilot program&amp;rdquo; phase into genuine large-scale integration. It suggests that the business case for humanoid robots in manufacturing is finally starting to stack up.&lt;/p&gt;
&lt;p&gt;The &amp;ldquo;headless&amp;rdquo; philosophy of the Calvin-40 is also a masterstroke of pragmatism, prioritising utility over aesthetics. For repetitive industrial slogs, a head packed with expensive, delicate sensors is often just an unnecessary cost and a potential point of failure. Wandercraft’s strategy is laser-focused on the use-case, building a specialised tool rather than a jack-of-all-trades android. This investment not only validates Wandercraft’s tech but firmly positions Europe as a heavyweight contender in an industrial robotics arena often dominated by American and Asian giants.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>Automotive</category><category>renault</category><category>wandercraft</category><category>humanoid-robot</category><category>manufacturing</category><category>automation</category><category>calvin-40</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-13-pastedgraphic-1-8cbfd5ed.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Physical AI's Growing Pains: The Hurdles Facing Robotics</title><link>https://robohorizon.uk/en-gb/news/2026/03/physical-ai-growing-pains/</link><pubDate>Thu, 12 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/physical-ai-growing-pains/</guid><description>From glitchy sims to lack of standardisation, a new study reveals the real-world hurdles holding back the next generation of physical AI.</description><content:encoded>&lt;p&gt;While software-based AI is busy acing medical exams and churning out Shakespearean sonnets, its physical counterparts are still struggling to navigate a doorway without having a bit of a wobble. A refreshingly frank post by Diego Prats of &lt;strong&gt;Haptic Labs&lt;/strong&gt; has shone a light on the &amp;ldquo;pain points&amp;rdquo; currently plaguing physical AI research, serving as a blunt reminder that building robots for the real world is a properly messy, complicated business.&lt;/p&gt;
&lt;p&gt;The crux of the issue, as Prats explains, is the massive chasm between virtual training and physical reality. This &amp;ldquo;simulation-to-reality&amp;rdquo; (or sim2real) gap is a notorious headache for roboticists. A policy that looks flawless in a clean, predictable simulator often falls apart the moment it encounters the chaotic friction of the real world. Simulators simply struggle to replicate the nitty-gritty details of physics, sensor noise, and material properties. The result? A robot that can gracefully stack blocks in a virtual sandbox might end up flailing like a startled octopus when faced with a real-life object.&lt;/p&gt;
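&lt;p&gt;One common mitigation for the sim2real gap is &amp;ldquo;domain randomisation&amp;rdquo;: rather than training in a single pristine simulator, you deliberately jitter its physics every episode so a policy can’t overfit to one idealised world. The Python sketch below illustrates the idea only; the &lt;code&gt;env&lt;/code&gt; object and its attributes are hypothetical stand-ins for a gym-style simulator, not any particular framework.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import random

def randomise_physics(env):
    # Jitter the parameters simulators tend to get wrong.
    env.friction = random.uniform(0.4, 1.2)       # contact friction
    env.motor_gain = random.uniform(0.8, 1.2)     # actuator strength
    env.sensor_noise = random.uniform(0.0, 0.05)  # observation noise level

def train(env, policy, episodes=1000):
    for _ in range(episodes):
        randomise_physics(env)  # a slightly different world each episode
        obs = env.reset()
        done = False
        while not done:
            obs, reward, done = env.step(policy(obs))
&lt;/code&gt;&lt;/pre&gt;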
&lt;p&gt;Prats also highlights a frustrating lack of hardware standardisation. Research teams are often working with bespoke, one-off robots, making it a nightmare to replicate or compare results across different labs. It’s a fragmented ecosystem where almost every new project has to reinvent the wheel—or, more accurately, the actuator and the sensor suite. Furthermore, the sheer cost and time required to gather high-quality, real-world data is a massive bottleneck. Unlike LLMs, which can scrape the entire internet for text, robots have to get their hands dirty, generating data through slow, expensive, and often failure-prone physical interactions.&lt;/p&gt;
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;These &amp;ldquo;pain points&amp;rdquo; aren&amp;rsquo;t just academic grumbles; they are the primary hurdles standing in the way of truly autonomous, general-purpose robots. Bridging the sim2real gap is essential if we want to train robots safely and efficiently without trashing expensive hardware in the process. Likewise, establishing hardware standards could kick innovation into high gear, allowing researchers to build on each other&amp;rsquo;s work rather than starting from scratch every time. Ultimately, as Prats’s article makes clear, the road to capable physical AI isn&amp;rsquo;t just about bigger models—it’s about tackling the gritty, fundamental challenges of existing in a physical world. For a deeper dive, you can check out the original post on the &lt;a href="https://www.hapticlabs.ai/blog/2026/03/06/plenty-of-room-in-physical-ai-research"&gt;Haptic Labs blog&lt;/a&gt;.&lt;/p&gt;</content:encoded><category>News</category><category>AI</category><category>Robotics</category><category>physical-ai</category><category>robotics-research</category><category>ai-development</category><category>haptic-labs</category><category>simulation</category><category>sim2real</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-12-image-d5128be2.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Sunday Robotics Bags $165M to End Demos and Ship Home Robots</title><link>https://robohorizon.uk/en-gb/magazine/2026/03/sunday-robotics-home-robots/</link><pubDate>Thu, 12 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/magazine/2026/03/sunday-robotics-home-robots/</guid><description>Sunday Robotics has $165M to end the era of flashy demos. Can they actually ship autonomous home robots this year? We're sceptical but intrigued.</description><content:encoded>&lt;p&gt;The robotics industry has a grubby little secret: it lives and breathes on the &amp;ldquo;spectacular demo&amp;rdquo;. For years, we’ve been fed a highlight reel of robots performing backflips, busting moves, and delicately plating up Michelin-star meals in pristine laboratory settings. The trouble is, most of these mechanical marvels are about as autonomous as a Punch and Judy show, and their chances of surviving five minutes in your cluttered, unpredictable kitchen are practically zero. Now, a startup by the name of &lt;strong&gt;Sunday Robotics&lt;/strong&gt; has burst onto the scene with a $165 million Series B war chest and a bold promise: to kill off the &amp;ldquo;demo culture&amp;rdquo; for good.&lt;/p&gt;
&lt;p&gt;Their claim is either incredibly brave or spectacularly foolish: they intend to deploy the &amp;ldquo;world’s first autonomous home robots into households this year.&amp;rdquo; Yes, &lt;em&gt;this year&lt;/em&gt;. Backed by a heavyweight roster including &lt;strong&gt;Coatue&lt;/strong&gt;, &lt;strong&gt;Bain Capital Ventures&lt;/strong&gt;, and &lt;strong&gt;Tiger Global&lt;/strong&gt;, Sunday isn’t just tinkering with another lab toy. They’re placing a nine-figure bet that they’ve finally cracked the code to making robots genuinely useful outside of a PowerPoint presentation. The company’s new $1.15 billion valuation suggests some very serious players are convinced they’re onto something.&lt;/p&gt;
&lt;h3 id="the-demo-to-dead-end-pipeline"&gt;The &amp;ldquo;Demo-to-Dead-End&amp;rdquo; Pipeline&lt;/h3&gt; &lt;p&gt;For those of us who have tracked this industry for a decade, a healthy dose of scepticism is part of the job. The road to domestic robotics is littered with the wreckage of ambitious projects that looked brilliant on YouTube but fell apart the moment they encountered reality. The core hurdle has never just been the hardware; it’s the brains. A real home is a chaotic minefield of stray socks, erratic pets, and coffee tables that seem to move of their own accord. An effective home robot needs to navigate this mess with grace, not just repeat a pre-programmed script.&lt;/p&gt;
&lt;p&gt;This is what makes Sunday’s declaration so audacious. In their announcement, they hit the nail on the head: &amp;ldquo;deploying autonomous, dexterous manipulation in real-world homes has never been achieved.&amp;rdquo; They aren’t just acknowledging the elephant in the room; they’re claiming to have tamed it. And they’re inviting the public to watch the process, promising to &amp;ldquo;document the journey for all&amp;rdquo; as they roll out a public beta.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/sundayrobotics/status/2032131717402960135"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;h3 id="sundays-secret-ingredient-no-puppeteers-just-practice"&gt;Sunday’s Secret Ingredient? No Puppeteers, Just Practice.&lt;/h3&gt; &lt;p&gt;So, why does Sunday reckon they can succeed where so many others have hit a brick wall? Their approach sidesteps the industry’s over-reliance on &amp;ldquo;teleoperation&amp;rdquo;—where humans remotely pilot robots to generate training data. As we’ve explored previously,
&lt;a href="https://robohorizon.uk/en-gb/magazine/2025/12/sunday-ai-chore-learning/" hreflang="en-gb"&gt;Sunday AI Teaches Chores: No Robot Puppets Needed&lt;/a&gt;
, Sunday’s method is far more hands-on.&lt;/p&gt;
&lt;p&gt;Founded by Stanford PhDs Tony Zhao and Cheng Chi, the company has developed a proprietary &amp;ldquo;Skill Capture Glove.&amp;rdquo; Rather than fiddling with joysticks, human data collectors wear these gloves to perform actual household chores, generating a massive, high-quality dataset of how tasks are performed in the wild. This data, harvested from over 500 homes, serves as the neural foundation for their robot, &lt;strong&gt;Memo&lt;/strong&gt;. By owning the entire stack—from the bespoke hardware to the data collection and model training—Sunday claims it can iterate at a speed that leaves the rest of the industry in the dust.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Data has always been the biggest bottleneck in robotics,&amp;rdquo; said Tony Zhao, CEO of Sunday. &amp;ldquo;We’ve built the only pipeline that transforms the chaos of real-world homes into autonomous intelligence at scale.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 id="putting-165-million-on-the-line"&gt;Putting $165 Million on the Line&lt;/h3&gt; &lt;p&gt;This massive funding round is more than just a pat on the back; it’s rocket fuel for an incredibly tight schedule. Deploying a beta version of a complex, autonomous robot into real homes within months is a logistical and technical nightmare. It’s a high-stakes test of safety, reliability, and managing the sky-high expectations of the public.&lt;/p&gt;
&lt;p&gt;The company’s robot, Memo, has been designed with these hurdles in mind. It sits on a rolling base for stability, dodging the inherent balance issues of bipedal designs that have a habit of, well, falling over. The goal isn&amp;rsquo;t a flashy humanoid showpiece, but a practical assistant capable of tackling the &amp;ldquo;drudge work&amp;rdquo;: stacking the dishwasher, folding the laundry, and tidying up the lounge.&lt;/p&gt;
&lt;p&gt;The ultimate question remains: can Sunday’s data-first approach truly bridge the gap between a controlled demo and the beautiful chaos of a family home? The robotics industry has spent years over-promising on the &amp;ldquo;home of the future.&amp;rdquo; Sunday Robotics has just raised $165 million and started a very public countdown to actually delivering it. Your move, Sunday. We’ll be watching closely.&lt;/p&gt;</content:encoded><category>AI</category><category>Robotics</category><category>Industry</category><category>sunday-robotics</category><category>venture-capital</category><category>home-automation</category><category>series-b</category><category>autonomous-robots</category><media:content url="https://robohorizon.uk/images/shared/magazine/2026-03-12-image-54ad014b.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>ABB Robotics gives factory bots an NVIDIA AI brain transplant</title><link>https://robohorizon.uk/en-gb/news/2026/03/abb-nvidia-factory-ai/</link><pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/abb-nvidia-factory-ai/</guid><description>ABB is tapping into NVIDIA Omniverse to hit 99% simulation accuracy, slashing the time it takes to get AI-powered robots onto the factory floor.</description><content:encoded>&lt;p&gt;Industrial titan &lt;strong&gt;ABB&lt;/strong&gt; is joining forces with &lt;strong&gt;NVIDIA&lt;/strong&gt; to give its factory robots a high-octane shot of AI-powered simulation. The duo has announced that NVIDIA’s Omniverse libraries are being baked directly into ABB’s RobotStudio software—a platform already used by more than 60,000 engineers. The new offering, christened &lt;strong&gt;RobotStudio HyperReality&lt;/strong&gt;, aims to finally conquer the industry’s notorious &amp;ldquo;sim-to-real&amp;rdquo; gap with simulations that ABB claims are a staggering 99% accurate.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/nvidiarobotics/status/2031059151103651850"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;Historically, programming industrial robots has been a painstaking process of trial and (often eye-wateringly expensive) error. A simulation might look the business on a screen, but the messy reality of physics, lighting, and material quirks usually had other ideas. By marrying ABB’s virtual robot controllers with NVIDIA’s physically based rendering and AI simulation, developers can now design, stress-test, and validate entire production lines within a hyper-realistic digital twin before a single bolt is tightened on the factory floor. The tech is already being put through its paces by &lt;strong&gt;Hon Hai Technology Group (Foxconn)&lt;/strong&gt; for intricate electronics assembly, and by robotics startup &lt;strong&gt;Workr&lt;/strong&gt; to help smaller firms embrace automation.&lt;/p&gt;
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;This partnership marks a fundamental pivot from merely programming robots to actually &lt;em&gt;training&lt;/em&gt; them. Rather than manually hard-coding every twitch and turn, manufacturers can now generate mountains of synthetic data in simulation to train AI models capable of handling real-world chaos and complexity. ABB reckons this approach could slash deployment costs by up to 40% and get products to market 50% faster. For sectors ranging from automotive to logistics, it means that smarter, more flexible automation is no longer just a digital pipedream—it’s about to get very real on the shop floor.&lt;/p&gt;</content:encoded><category>News</category><category>AI</category><category>Robotics</category><category>abb</category><category>nvidia</category><category>omniverse</category><category>robotstudio</category><category>industrial-robotics</category><category>digital-twin</category><category>ai</category><category>foxconn</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-11-image-76f0913e.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>ChangingTek's X2 Robot Hand: Scarily Nimble and Ambidextrous</title><link>https://robohorizon.uk/en-gb/news/2026/03/changingtek-x2-robot-hand/</link><pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/changingtek-x2-robot-hand/</guid><description>ChangingTek's X2 is a tendon-driven robotic hand that switches between left and right configurations with a 50N grip.</description><content:encoded>&lt;p&gt;While human evolution settled on the &amp;ldquo;one left, one right&amp;rdquo; arrangement, robotics engineers are proving they aren&amp;rsquo;t bound by such biological red tape. &lt;strong&gt;ChangingTek Robotics Technology (Suzhou) Co., Ltd.&lt;/strong&gt; has pulled a proper rabbit out of the hat with its &lt;strong&gt;X2 Left-Right Dexterous Hand (LRD Hand)&lt;/strong&gt;. It’s a bit of a marvel: an end-effector with fingers that can bend in both directions, effectively giving it two palms. This &amp;ldquo;ambidextrous-on-steroids&amp;rdquo; design allows the hand to flip between left- and right-handed configurations on the fly—a clever bit of engineering that could make automation lines significantly more efficient.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/cyberrobooo/status/2031738667107336560"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;Lest you think this is just a creepy party trick, the X2 is a serious piece of kit with some properly impressive specs. The company says the hand is tendon-driven, allowing for a blistering joint movement speed of 230° per second. Despite its lightweight frame, it delivers a maximum grip force of 50N—strong enough to make you think twice about a handshake—with a remarkably fine force control of just ±0.1N. This blend of speed, strength, and finesse is managed by a high-precision control system and a coordinated vision system, allowing it to get a grip on just about anything you throw at it.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/cyberrobooo/status/2031741953088614643"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
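&lt;p&gt;To put those force figures in context, here’s a toy proportional grip regulator that respects the published numbers: a 50N ceiling and a ±0.1N resolution below which there’s little point chasing the error. It’s a minimal sketch of the control problem, not ChangingTek’s actual interface, which hasn’t been published.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;MAX_GRIP_N = 50.0   # published maximum grip force
DEADBAND_N = 0.1    # published force-control resolution

def grip_adjustment(target_n, measured_n, gain=0.5):
    # Clamp the request to the hand's ceiling, then act on the error.
    error = min(target_n, MAX_GRIP_N) - measured_n
    if abs(error) &amp;lt;= DEADBAND_N:
        return 0.0       # within resolution: hold the current tension
    return gain * error  # proportional tendon-tension adjustment
&lt;/code&gt;&lt;/pre&gt;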
&lt;h4 id="why-should-we-care"&gt;Why should we care?&lt;/h4&gt; &lt;p&gt;The X2 represents a significant shift away from simple biomimicry. Instead of just trying to build a better human hand, ChangingTek has created a tool that leans into the inherent advantages of being a machine. A single robotic arm equipped with the X2 could tackle complex assembly tasks that would usually require two separate robots or a clunky tool-changing process. By ditching the distinction between &amp;ldquo;left&amp;rdquo; and &amp;ldquo;right&amp;rdquo; grippers, the X2 boosts operational flexibility in everything from aerospace to laboratory automation. It’s a stark reminder that the future of robotics isn&amp;rsquo;t just about copying us; it’s about building machines that are fundamentally more versatile. Frankly, it makes our own opposable thumbs look a bit basic.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>changingtek-robotics</category><category>x2-robot-hand</category><category>robotic-hand</category><category>ambidextrous-robot</category><category>robotics</category><category>automation</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-11-image-ab4dae43.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Rhoda AI: Video-Trained Robots Secure $450M at $1.7B Valuation</title><link>https://robohorizon.uk/en-gb/news/2026/03/rhoda-ai-video-robots-funding/</link><pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/rhoda-ai-video-robots-funding/</guid><description>Emerging from stealth, Rhoda AI has raised $450 million to scale its robot intelligence platform, using internet videos to train machines for industrial tasks.</description><content:encoded>&lt;p&gt;In a move that suggests the AI gold rush is far from hitting its peak, robotics intelligence startup &lt;strong&gt;Rhoda AI&lt;/strong&gt; has emerged from 18 months in the shadows with a staggering &lt;strong&gt;$450 million Series A funding round&lt;/strong&gt;. The investment, led by &lt;strong&gt;Premji Invest&lt;/strong&gt;, catapults the Palo Alto-based firm to an eye-watering &lt;strong&gt;$1.7 billion valuation&lt;/strong&gt; and officially pulls back the curtain on its plan to give industrial robots a brain trained on the chaos of the open internet.&lt;/p&gt;
&lt;p&gt;Rhoda AI’s platform, christened &lt;strong&gt;FutureVision&lt;/strong&gt;, seeks to crack the toughest nut in robotics: building machines that can navigate the messy, unpredictable nature of the real world rather than being tethered to rigid, pre-programmed scripts. The company’s ace up its sleeve is a &amp;ldquo;Direct Video Action&amp;rdquo; model. Instead of relying solely on the slow, expensive process of humans remotely &amp;ldquo;teaching&amp;rdquo; robots through teleoperation, Rhoda pre-trains its AI on hundreds of millions of public internet videos. This allows the system to develop a foundational grasp of physics, motion, and cause-and-effect before it ever touches a factory floor. This &amp;ldquo;common sense&amp;rdquo; is then fine-tuned with specific robotic data, enabling machines to handle the unexpected in manufacturing and logistics hubs.&lt;/p&gt;
&lt;p&gt;This strategy of leveraging vast, unstructured video data to build generalist models is a sharp pivot from traditional robotics, echoing the &amp;ldquo;foundation model&amp;rdquo; approach championed by the likes of &lt;strong&gt;Nvidia&lt;/strong&gt; and &lt;strong&gt;Tesla&lt;/strong&gt;. While Tesla uses its massive fleet of vehicles to train Optimus, and Nvidia builds the plumbing for others via its Isaac platform and GR00T model, Rhoda is positioning itself as the universal &amp;ldquo;brains&amp;rdquo; provider. It’s a hardware-agnostic play designed to breathe new life into existing robotic fleets.&lt;/p&gt;
&lt;h4 id="why-this-matters"&gt;Why this matters&lt;/h4&gt; &lt;p&gt;The sheer scale of a Series A for a software-first robotics company is a massive vote of confidence from heavyweights like Premji Invest, &lt;strong&gt;Khosla Ventures&lt;/strong&gt;, and Temasek. It signals a growing consensus in Silicon Valley: the real value in the next wave of automation isn&amp;rsquo;t in the shiny metal arms or the grippers, but in the sophisticated &amp;ldquo;grey matter&amp;rdquo; that controls them.&lt;/p&gt;
&lt;p&gt;By training robots on the infinite variety of the internet, Rhoda AI is betting it can bypass the bottlenecks of traditional programming. If FutureVision can successfully translate YouTube-level observations into reliable, high-stakes factory actions, it could lower the barrier to automating tasks that have, until now, required a human touch. It is a bold, well-funded attempt to build the &amp;ldquo;Android&amp;rdquo; operating system for a world of increasingly capable robotic bodies.&lt;/p&gt;</content:encoded><category>News</category><category>AI</category><category>Robotics</category><category>rhoda-ai</category><category>venture-capital</category><category>robot-intelligence</category><category>industrial-automation</category><category>foundation-models</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-11-image-1c30a911.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Test NormaCore’s Open-Source Robot Arm in Your Browser</title><link>https://robohorizon.uk/en-gb/news/2026/03/normacore-browser-robot-simulator/</link><pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/normacore-browser-robot-simulator/</guid><description>NormaCore’s ElRobot Playground lets you simulate their 7-DOF robotic arm online. Test kinematics before you 3D print a single component.</description><content:encoded>&lt;p&gt;There’s nothing quite like the crushing disappointment of a three-day 3D printing marathon that yields a robot arm with the kinetic grace of a shopping trolley with a wonky wheel. &lt;strong&gt;NormaCore&lt;/strong&gt; feels your pain. The open-source hardware collective has just dropped the &lt;strong&gt;ElRobot Playground&lt;/strong&gt;, a browser-based simulator that lets you take their robotic arm for a spin before you’ve even preheated your nozzle.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/norma_core_dev/status/2031043293832790345"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;The star of the show is the ElRobot, NormaCore’s highly accessible, fully 3D-printable 7-DOF limb designed to democratise physical AI research, which we first covered in
&lt;a href="https://robohorizon.uk/en-gb/news/2026/02/norma-core-elrobot-open-source/" hreflang="en-gb"&gt;Norma-Core Unveils ElRobot: The 3D-Printed 7-DOF Open-Source Arm&lt;/a&gt;
. The new playground serves up a slick, interactive model of the arm right in your browser, allowing you to tweak every joint, stress-test the range of motion, and generally get a feel for its capabilities without spending a single quid on filament or servos. You can have a go yourself right here: &lt;a href="https://normacore.dev/elrobot-urdf/"&gt;ElRobot Playground&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The simulation is powered by the arm’s Unified Robot Description Format (URDF) file—the XML-based industry standard used in the Robot Operating System (ROS) to define a robot’s physical properties. By bringing this simulation to a simple webpage, NormaCore has essentially removed a massive technical hurdle.&lt;/p&gt;
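&lt;p&gt;If you’ve never peeked inside a URDF, it’s just declarative XML: links, joints, and limits. The hand-written toy file below (its values are invented for illustration, not lifted from the real ElRobot description) can be pulled apart with nothing more than Python’s standard library:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import xml.etree.ElementTree as ET

URDF = """
&amp;lt;robot name="toy_arm"&amp;gt;
  &amp;lt;link name="base"/&amp;gt;
  &amp;lt;link name="upper_arm"/&amp;gt;
  &amp;lt;joint name="shoulder" type="revolute"&amp;gt;
    &amp;lt;parent link="base"/&amp;gt;
    &amp;lt;child link="upper_arm"/&amp;gt;
    &amp;lt;limit lower="-1.57" upper="1.57" effort="10.0" velocity="2.0"/&amp;gt;
  &amp;lt;/joint&amp;gt;
&amp;lt;/robot&amp;gt;
"""

# List each joint's type and range of motion (radians).
root = ET.fromstring(URDF)
for joint in root.iter("joint"):
    limit = joint.find("limit")
    print(joint.get("name"), joint.get("type"),
          "range:", limit.get("lower"), "to", limit.get("upper"))
&lt;/code&gt;&lt;/pre&gt;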
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;This isn&amp;rsquo;t just a bit of digital window shopping; it’s a proper step forward for the accessibility of open-source robotics. It offers a frictionless &amp;ldquo;try before you build&amp;rdquo; workflow that’s going to save developers, students, and hobbyists a significant amount of time and grief. Instead of the usual rigmarole of installing and configuring heavyweight simulation environments like Gazebo just to see how an arm moves, anyone with a web browser can now experiment with the kinematics instantly. It’s a move that lowers the barrier to entry, making sophisticated robotics feel less like a dark art and more like an approachable reality for the wider community.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>Open Source</category><category>normacore</category><category>elrobot</category><category>robotic-arm</category><category>3d-printing</category><category>simulation</category><category>web-based</category><category>urdf</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-11-image-a8139458.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>EngineAI dangles $1.4M prize for 'non-violent' robot fighting</title><link>https://robohorizon.uk/en-gb/news/2026/03/engineai-humanoid-fighting-league/</link><pubDate>Tue, 10 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/engineai-humanoid-fighting-league/</guid><description>Shenzhen-based EngineAI has launched the Ultimate Robot Knock-out Legend (URKL), a global humanoid fighting league with a ¥10 million top prize.</description><content:encoded>&lt;p&gt;In a move that seems perfectly engineered to deprive developers of their beauty sleep, Shenzhen-based &lt;strong&gt;EngineAI Robotics Technology Co., Ltd.&lt;/strong&gt; has officially pulled the curtain back on the &lt;strong&gt;Ultimate Robot Knock-out Legend (URKL)&lt;/strong&gt;. It’s a global humanoid fighting league boasting a winner’s cheque of ¥10,000,000 (roughly £1.1 million). But before you start looking for a blowtorch and a spare chassis in the garage, there’s a rather significant catch: the rules are strictly &amp;ldquo;non-violent.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t your typical Sunday afternoon &lt;em&gt;Robot Wars&lt;/em&gt; carnage. Instead of spinning blades and hydraulic flippers, this league is a high-stakes software showdown. All teams will compete using a standardised humanoid platform, the aptly named &lt;strong&gt;T800&lt;/strong&gt;, meaning the battle will be won through superior code, more elegant motion control, and clever protective gear rather than raw destructive power. The T800, which stands at 5ft 8in (1.73m) and weighs in at 75kg, serves as the ultimate litmus test for balance and control algorithms under genuine duress.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/XRoboHub/status/2030327853049643318"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;The prize pool is hefty enough to make even the most comfortable Silicon Valley engineer reconsider their career path. The winner walks away with the ¥10 million grand prize, while second and third place pocket a tidy ¥2 million (£220,000) and ¥1 million (£110,000) respectively. As a proper incentive, any team that reaches the Top 16 gets to keep their T800 robot, and the Top 8 finalists are fast-tracked to final-round job interviews at &lt;strong&gt;EngineAI&lt;/strong&gt;. Registration is open from 1 March to 30 April, with the global finals slated for December 2026 through January 2027.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;Let&amp;rsquo;s be clear: this is less about creating a new spectator sport and more about launching the world&amp;rsquo;s most intense, gamified recruitment and R&amp;amp;D programme. By standardising the hardware, &lt;strong&gt;EngineAI&lt;/strong&gt; has cleverly pivoted the competition away from a resource-draining hardware arms race into a pure contest of software and AI ingenuity.&lt;/p&gt;
&lt;p&gt;The URKL serves as a high-stress, real-world laboratory for the very technologies—balance, perception, and motion control—that are critical for deploying humanoid robots in factories or homes. Essentially, the company is crowdsourcing solutions to some of robotics&amp;rsquo; most stubborn problems, dangling a life-changing prize, and securing a front-row seat to scout the world&amp;rsquo;s best talent. It’s a brilliant, if slightly cheeky, way to accelerate development while the rest of the industry is still stuck in the simulation phase.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>AI</category><category>engineai</category><category>urkl</category><category>humanoid-robot</category><category>robot-competition</category><category>robotics-engineering</category><category>t800</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-10-image001-8e8f80cb.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Move Over Exoskeletons: Centaur Bot Cuts Load Effort by 35%</title><link>https://robohorizon.uk/en-gb/news/2026/03/centaur-bot-load-reduction/</link><pubDate>Tue, 10 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/centaur-bot-load-reduction/</guid><description>Scientists have developed a wearable 'centaur' robot that acts as extra legs, drastically cutting the energy needed to haul heavy loads.</description><content:encoded>&lt;p&gt;Just when you thought wearable robotics was all about squeezing into a rigid metal cage, researchers at the &lt;strong&gt;Southern University of Science and Technology (SUSTech)&lt;/strong&gt; have decided to give us a robotic rear end instead. No, it’s not a joke; it’s a remarkably clever new take on helping humans shift heavy loads without breaking their backs.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/xrobohub/status/2030955231891313061"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;Led by Professor Chenglong Fu, the team has cooked up a wearable &amp;ldquo;Centaur&amp;rdquo; system—essentially a pair of independent robotic legs that hitch a ride on your back. The aim isn&amp;rsquo;t to turn you into a superhero, but to offload the hard graft more intelligently. By shouldering the vertical weight, the device cuts a user&amp;rsquo;s metabolic cost by a whopping 35% and slashes foot pressure by 52% when lugging a 20kg pack.&lt;/p&gt;
&lt;p&gt;Unlike standard exoskeletons that run alongside your legs and can feel a bit cumbersome, this quadrupedal setup works &amp;ldquo;in series&amp;rdquo; with the wearer. A specialised elastic coupling links the robotic limbs to the pilot, staying stiff for snappy movements but softening to soak up shocks. This &amp;ldquo;dynamics decoupling&amp;rdquo; means the human handles the navigation and balance, while the robot provides the steady, load-bearing grunt. It’s surprisingly nimble, too, capable of nipping through figure-eight turns and tackling stairs or uneven ground with ease.&lt;/p&gt;
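&lt;p&gt;That &amp;ldquo;stiff for snappy movements, soft for shocks&amp;rdquo; behaviour can be approximated by a spring whose stiffness drops when the relative velocity between wearer and robot spikes. The few lines below sketch only that intuition; the constants and the switching rule are invented for illustration and are not the SUSTech team’s actual control law.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;def coupling_force(displacement_m, rel_velocity_ms,
                   k_stiff=800.0, k_soft=200.0, shock_speed=0.5):
    # Stay stiff for deliberate motion; soften to absorb sudden jolts.
    if abs(rel_velocity_ms) &amp;lt;= shock_speed:
        k = k_stiff
    else:
        k = k_soft
    return -k * displacement_m  # restoring force in newtons
&lt;/code&gt;&lt;/pre&gt;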
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;The research, recently featured in the &lt;em&gt;International Journal of Robotics Research&lt;/em&gt;, suggests that the best way to boost our carrying capacity isn&amp;rsquo;t to encase our limbs in metal, but to sprout new ones entirely. By splitting the labour—human for the brains, robot for the brawn—the centaur concept could revolutionise logistics, disaster relief, and any job where heavy lifting is part of the daily grind. It’s less &lt;em&gt;Iron Man&lt;/em&gt; and more of a very handy mythological beast, ready to do the heavy lifting so you don&amp;rsquo;t have to.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>Wearables</category><category>exoskeleton</category><category>human-robot-interaction</category><category>robotics</category><category>research</category><category>sustech</category><category>load-carrying</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-10-image-6e48f802.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Self-Evolving AI: Agents That Learn from Their Own Mistakes</title><link>https://robohorizon.uk/en-gb/news/2026/03/self-evolving-ai-agents/</link><pubDate>Tue, 10 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/self-evolving-ai-agents/</guid><description>MetaClaw and Karpathy’s AutoResearch allow AI agents to evolve and create new skills autonomously. Discover how self-evolving AI is changing the game.</description><content:encoded>&lt;p&gt;The long-cherished dream of AI that pens its own upgrades has finally jumped off the sci-fi shelf and landed firmly in a GitHub repository near you. While the idea of self-evolving agents has been bubbling away for some time, a fresh crop of open-source projects is turning the concept into a practical—if slightly eerie—reality. Leading the charge are &lt;strong&gt;MetaClaw&lt;/strong&gt;, a framework for agents that forge new skills from their own failures, and &lt;strong&gt;AutoResearch&lt;/strong&gt;, a minimalist tool from AI luminary Andrej Karpathy that effectively puts LLM development on autopilot.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MetaClaw&lt;/strong&gt;, developed by the &lt;strong&gt;AIMING Lab&lt;/strong&gt; at UNC-Chapel Hill, is built to learn on the fly from real-world conversations with users. Instead of waiting for a massive offline patch, MetaClaw dissects failed interactions and uses an LLM to automatically generate new &amp;ldquo;skills&amp;rdquo; to ensure it doesn&amp;rsquo;t trip over the same stone twice. Essentially, it’s a system that allows an agent to evolve by learning from its own blunders—a feature many of us are still waiting for in ourselves, let alone our software. The entire project is detailed in the &lt;a href="https://github.com/aiming-lab/MetaClaw"&gt;MetaClaw GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/bowang87/status/2031094971630235941"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
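&lt;p&gt;In outline, the loop is simple: when an interaction fails, hand the transcript to an LLM, get back a named skill, and file it away for next time. The sketch below captures only that shape; the function names and skill format are placeholders rather than the repository’s real API.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;from dataclasses import dataclass

@dataclass
class Interaction:
    transcript: str
    failed: bool

skills = {}  # maps skill name to skill body; grows with each blunder

def generate_skill(transcript):
    # Placeholder for the LLM call that turns a failure into a fix.
    return "clarify_before_acting", "# learned from: " + transcript

def learn_from(interaction):
    if interaction.failed:
        name, body = generate_skill(interaction.transcript)
        skills[name] = body  # available in the next conversation

learn_from(Interaction("user asked twice, agent timed out", failed=True))
print(list(skills))  # ['clarify_before_acting']
&lt;/code&gt;&lt;/pre&gt;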
&lt;p&gt;Adding fuel to the fire is &lt;strong&gt;Andrej Karpathy&lt;/strong&gt;, the former head of AI at Tesla and a founding member of OpenAI. He recently open-sourced &lt;strong&gt;AutoResearch&lt;/strong&gt;, a deceptively simple framework that lets an AI agent autonomously conduct machine learning experiments. The agent tweaks the training code, runs a snappy five-minute experiment, weighs up the results, and decides whether to keep the change or bin it before starting the next loop. As Karpathy dryly noted, the era of &amp;ldquo;meat computers&amp;rdquo; doing the heavy lifting in AI research may be drawing to a close. The project is available on the &lt;a href="https://github.com/karpathy/autoresearch"&gt;AutoResearch GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/karpathy/status/2031135152349524125"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
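&lt;p&gt;The loop itself is strikingly small once written down. Here is the keep-or-revert cycle the framework is described as running, with stub functions standing in for the LLM edit and the five-minute training run; treat it as a paraphrase of the idea, not Karpathy’s actual code.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import random

def run_experiment(code):
    # Stand-in for a short, fixed-budget training run returning a metric.
    return random.random()

def propose_patch(code):
    # Stand-in for the LLM tweaking the training script.
    return code + "\n# candidate tweak"

def evolve(code, steps=20):
    best = run_experiment(code)  # baseline score
    for _ in range(steps):
        candidate = propose_patch(code)
        score = run_experiment(candidate)
        if score &amp;gt; best:  # keep improvements, bin the rest
            code, best = candidate, score
    return code

print(evolve("# training script v0"))
&lt;/code&gt;&lt;/pre&gt;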
&lt;p&gt;The idea isn&amp;rsquo;t entirely new, with developers like Máté Benyovszky noting their work on &amp;ldquo;second generation&amp;rdquo; self-evolving agents as early as February 2026. However, the arrival of robust, open-source frameworks signals a major inflection point for the industry.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/matebenyovszky/status/2026917113529532828"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;Static AI models that are effectively obsolete the moment they’re deployed have long been a massive bottleneck. Self-evolving agents represent a tectonic shift from shipping a finished product to unleashing a system that can continuously adapt and improve in the wild. For robotics, the implications are staggering. Instead of painstakingly programming every possible action and edge case, a robot could master new physical skills on its own after a bit of trial and error. It’s the difference between a simple appliance and a truly autonomous entity—and it looks like the toolkit for that future has finally arrived.&lt;/p&gt;</content:encoded><category>News</category><category>AI</category><category>Robotics</category><category>self-evolving-agents</category><category>metaclaw</category><category>autoresearch</category><category>andrej-karpathy</category><category>aiming-lab</category><category>large-language-models</category><category>llm</category><category>open-source</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-10-image-f1a59297.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>China’s $5bn humanoid AI blitz: Spending $70m every day</title><link>https://robohorizon.uk/en-gb/news/2026/03/china-humanoid-ai-blitz/</link><pubDate>Mon, 09 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/china-humanoid-ai-blitz/</guid><description>China’s humanoid robotics sector is seeing an unprecedented $5bn funding boom as state-backed funds supercharge the global race for AI dominance.</description><content:encoded>&lt;p&gt;While the West remains bogged down in existential debates over AI ethics, China has decided to simply build the thing—and bankroll it into the stratosphere. In the first two months of 2026, the country’s humanoid robotics and embodied AI sector has hoovered up more than &lt;strong&gt;$5 billion (£4 billion)&lt;/strong&gt; in funding. That isn&amp;rsquo;t a typo. Capital is flowing at an average clip of over &lt;strong&gt;$70 million (£55 million)&lt;/strong&gt; &lt;em&gt;per day&lt;/em&gt;, signalling a strategic tsunami aimed squarely at dominating the next generation of physical AI.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/ruima/status/2030790877917200504"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;The sheer velocity of this investment is staggering. The first eight weeks of the year saw nine separate funding rounds exceeding RMB 1 billion (approximately £115m), eclipsing the six such deals recorded in the entirety of 2025. The headline act in this fiscal blitz is &lt;strong&gt;Galbot Robotics&lt;/strong&gt;, which secured a massive RMB 2.5 billion (£280m) round on 2nd March, catapulting its valuation to the $3 billion mark. Crucially, the round was co-led by China&amp;rsquo;s national &amp;ldquo;Big Fund III,&amp;rdquo; the country&amp;rsquo;s heavyweight semiconductor investment vehicle. This marks the fund&amp;rsquo;s first-ever foray into an embodied AI firm—a move that screams &amp;ldquo;national strategic priority&amp;rdquo; louder than a drill sergeant on a Monday morning.&lt;/p&gt;
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;This isn&amp;rsquo;t just another venture capital bubble; it&amp;rsquo;s a calculated, state-endorsed industrial manoeuvre. The involvement of the &amp;ldquo;Big Fund&amp;rdquo;—an entity forged to secure China&amp;rsquo;s dominance in microchips—is the most telling signal yet. Beijing is now treating humanoid robotics with the same strategic gravity as semiconductors. The investment frenzy appears to have hit an inflection point in July 2025, when firms like &lt;strong&gt;Unitree Robotics&lt;/strong&gt; and Agibot secured modest but significant commercial orders from &lt;strong&gt;China Mobile&lt;/strong&gt;. That small taste of commercial viability seems to have convinced both the state and private investors that the era of theoretical play is over.&lt;/p&gt;
&lt;p&gt;While Western firms generate headlines with slick, polished demos, China is quietly—or rather, quite loudly—laying the industrial and financial foundations to deploy humanoid robots at an unprecedented scale. The message is clear: the race for embodied AI supremacy isn&amp;rsquo;t just about clever algorithms; it&amp;rsquo;s about brute-force economic and industrial might. And right now, China has its foot flat to the floor.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>AI</category><category>humanoid-robot</category><category>china</category><category>venture-capital</category><category>embodied-ai</category><category>galbot-robotics</category><category>unitree-robotics</category><category>state-investment</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-09-image001-31deab6f.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>EON Uploads Fruit Fly Brain – And It Actually Works</title><link>https://robohorizon.uk/en-gb/magazine/2026/03/eon-fruit-fly-brain/</link><pubDate>Mon, 09 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/magazine/2026/03/eon-fruit-fly-brain/</guid><description>EON has successfully emulated a fruit fly's brain with 91% behavioural accuracy. Is this the first real step towards mind uploading?</description><content:encoded>&lt;p&gt;In a move that feels plucked straight from the pages of a yellowing sci-fi paperback, San Francisco-based startup EON has performed a feat of genuine digital necromancy. They have taken the complete brain map of a fruit fly, uploaded it into a simulated body, and watched it spring to life. This isn’t a mere animation or a machine-learning algorithm mimicking an insect; it is a direct emulation of a biological brain’s physical wiring. According to EON’s founder, Michael Andregg, it achieved 91% behavioural accuracy straight out of the gate.&lt;/p&gt;
&lt;p&gt;The era of whole-brain emulation has, it seems, buzzed into existence—not with a grand, cinematic pronouncement, but with the twitch of a virtual fly’s leg. For decades, the concept of &amp;ldquo;uploading&amp;rdquo; consciousness has been the ultimate philosophical carrot dangled by futurists. But EON’s demonstration suggests that the technical foundations are not just being laid; they are already functional, albeit at a scale that won&amp;rsquo;t be threatening our biological supremacy quite yet.&lt;/p&gt;
&lt;h3 id="the-ghost-in-the-machine"&gt;The Ghost in the Machine&lt;/h3&gt; &lt;p&gt;So, how did they pull off this bit of techno-wizardry? The project stands on the shoulders of &lt;strong&gt;FlyWire&lt;/strong&gt;, a massive collaborative effort that painstakingly mapped the entire connectome—a neuron-by-neuron, synapse-by-synapse wiring diagram—of an adult fruit fly. This connectome comprises nearly 140,000 neurons and over 50 million connections, a dizzying labyrinth of biological circuitry now available as open data.&lt;/p&gt;
&lt;p&gt;EON took this pristine map and applied a deceptively simple neuron model known as &amp;ldquo;leaky-integrate-and-fire&amp;rdquo; (LIF). LIF models are a staple of computational neuroscience, stripping back the complex biophysics of a neuron into a few fundamental rules: integrate incoming signals, leak some charge over time, and fire a spike when a specific threshold is hit. This digital brain was then lashed to &lt;strong&gt;NeuroMechFly&lt;/strong&gt;, a hyper-realistic, physics-simulated fly body running within the &lt;strong&gt;MuJoCo&lt;/strong&gt; physics engine.&lt;/p&gt;
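&lt;p&gt;Those few rules fit in a handful of lines. Here is a generic, textbook LIF step in Python (integrate, leak, fire, reset), offered to show how spartan the neuron model really is; it is not EON’s implementation.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;def lif_step(v, input_current, leak=0.1, threshold=1.0, v_reset=0.0):
    v = v + input_current - leak * v  # integrate input, leak some charge
    if v &amp;gt;= threshold:              # threshold crossed...
        return v_reset, True          # ...fire a spike and reset
    return v, False

v, spikes = 0.0, 0
for _ in range(100):
    v, fired = lif_step(v, input_current=0.15)
    spikes += fired  # True counts as 1
print(spikes, "spikes in 100 steps")
&lt;/code&gt;&lt;/pre&gt;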
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/michaelandregg/status/2030764512488677736"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;The astonishing part, as Andregg points out, is that this Rube Goldberg contraption of neuroscience data and simulation software actually worked. &amp;ldquo;This shows how much information is captured by the architecture itself, rather than the neuron model,&amp;rdquo; he stated. It is a powerful vindication for the field of connectomics, suggesting that the wiring diagram is indeed the most critical piece of the intelligence puzzle.&lt;/p&gt;
&lt;h3 id="the-fine-print-on-immortality"&gt;The Fine Print on Immortality&lt;/h3&gt; &lt;p&gt;Before we all rush to digitise our own grey matter, it is worth reading the caveats, which are significant. Firstly, the original &lt;strong&gt;FlyWire&lt;/strong&gt; scan was of the brain alone, not the full nervous system and body. This meant EON had to make some highly educated guesses about how to wire the brain&amp;rsquo;s motor outputs to the simulated muscles of &lt;strong&gt;NeuroMechFly&lt;/strong&gt;. It’s a genuine limitation, and one the company intends to fix by scanning both brain and body in tandem for future projects.&lt;/p&gt;
&lt;p&gt;Secondly, the simple LIF neuron model has a major drawback: it lacks plasticity. This digital fly cannot form new long-term memories. It is a ghost trapped in a loop, its behaviour dictated entirely by the frozen architecture of its biological past. It can react, but it cannot learn. Andregg acknowledges this, while also touching upon the thorny ethical questions. &amp;ldquo;We don&amp;rsquo;t know what its experience is—nobody does,&amp;rdquo; he admits. &amp;ldquo;But we take the possibility seriously, and we&amp;rsquo;re working to give it a rich environment, not just a test box.&amp;rdquo;&lt;/p&gt;
&lt;h3 id="from-digital-flies-to-ai-overlords"&gt;From Digital Flies to AI Overlords?&lt;/h3&gt; &lt;p&gt;This fruit fly is merely the first note in what EON envisions as a symphony of future emulation. Andregg has laid out a grand, three-pronged manifesto:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Decoding the Brain:&lt;/strong&gt; Creating perfect models to study and treat neurological diseases.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reverse-Engineering Intelligence:&lt;/strong&gt; Discovering the algorithms that evolution produced during &amp;ldquo;the most expensive training run in history.&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uploading Humanity:&lt;/strong&gt; Offering a path to artificial superintelligence that is fundamentally aligned with human values—because it &lt;em&gt;is&lt;/em&gt; human.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That final point is a direct shot across the bow for today’s AI giants. Andregg frames whole-brain emulation as a democratic alternative to a future dominated by &amp;ldquo;opaque AI systems&amp;rdquo; built behind the closed doors of secretive labs. The promise is a high-fidelity upload that preserves your memories and personality, but liberates you from biological decay, allowing you to run &amp;ldquo;faster than real time&amp;rdquo; to keep pace with purely synthetic minds.&lt;/p&gt;
&lt;h3 id="what-this-means-for-robotics"&gt;What This Means for Robotics&lt;/h3&gt; &lt;p&gt;For the robotics industry, the implications are less about digital immortality and more about a radical shift in control systems. For decades, roboticists have struggled to replicate the fluid, reactive grace of even the simplest animals. This work suggests a new way forward. Instead of trying to programme intelligence from the top down, why not simply copy the schematics that nature has already perfected?&lt;/p&gt;
&lt;p&gt;Imagine an autonomous drone navigating a dense forest with the agility of a dragonfly because its control system is a direct emulation of one. Or a multi-legged robot scrambling over rubble with the unthinking confidence of a cockroach. By emulating these nervous systems, we could unlock control algorithms for locomotion and navigation that are far more efficient and robust than anything produced by conventional machine learning.&lt;/p&gt;
&lt;p&gt;This digital fly is a proof-of-concept. It proves that closing the loop between a fully emulated brain and a physically simulated body is not just possible, but viable. The challenge now is one of scale. EON has its sights set on a mouse brain next—a leap from 140,000 neurons to roughly 70 million. It is an audacious goal. But if they succeed, the line between biology and robotics will begin to blur in ways we are only just beginning to contemplate. The ghost is officially out of the machine.&lt;/p&gt;</content:encoded><category>AI</category><category>Robotics</category><category>Neuroscience</category><category>brain-emulation</category><category>connectome</category><category>eon</category><category>flywire</category><category>neuroscience</category><category>robotics</category><category>mind-uploading</category><category>artificial-intelligence</category><media:content url="https://robohorizon.uk/images/shared/magazine/2026-03-09-image001-4503a61a.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Neura Robotics Eyes $1.2B as Tether Joins Funding Fray</title><link>https://robohorizon.uk/en-gb/news/2026/03/neura-robotics-tether-funding/</link><pubDate>Sun, 08 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/neura-robotics-tether-funding/</guid><description>German firm Neura Robotics targets a $1.2B windfall with Tether's backing, marking a major leap for embodied AI and the future of humanoid robots.</description><content:encoded>&lt;p&gt;In a plot twist that feels ripped straight from a high-concept sci-fi thriller, crypto capital is making a massive leap into the physical realm. German humanoid heavyweight &lt;strong&gt;Neura Robotics&lt;/strong&gt; is reportedly closing in on a colossal €1 billion (approximately £830 million) funding round, with a backer that is certainly raising eyebrows: &lt;strong&gt;Tether Holdings SA&lt;/strong&gt;, the powerhouse behind the world’s most dominant stablecoin. Should the deal cross the finish line, it would catapult Neura’s valuation to roughly €4 billion (£3.3 billion), firmly planting it in the top tier of the white-hot humanoid robotics market.&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t just a speculative punt on a few shiny prototypes. Unlike many of its venture-backed rivals, Neura Robotics already boasts a solid roster of industrial heavyweights like &lt;strong&gt;Kawasaki Heavy Industries Ltd.&lt;/strong&gt; and &lt;strong&gt;Omron Corp.&lt;/strong&gt;, with an order book reportedly nudging the $1 billion mark. This fresh injection of capital is earmarked for accelerating Neura’s &amp;ldquo;Cognitive Robotics&amp;rdquo; roadmap—a mission to build machines that don&amp;rsquo;t just move, but perceive, hear, and learn from their surroundings via multimodal AI. The company is weaving this intelligence into an expansive ecosystem platform it calls the &amp;ldquo;Neuraverse&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;The move marks the latest chapter in Tether’s aggressive &amp;ldquo;frontier tech&amp;rdquo; spending spree. The stablecoin giant has been busy diversifying its eye-watering cash reserves into AI, data startups, and even brain-computer interfaces. This isn&amp;rsquo;t their first rodeo in the sector either, having previously backed the Italian robotics outfit &lt;strong&gt;Generative Bionics&lt;/strong&gt;.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/cyberrobooo/status/2029905364628689101"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;Tether’s foray into robotics is more than just a curious investment; it’s a flashing siren for a massive shift in where the world’s &amp;ldquo;smart money&amp;rdquo; is heading. Over the past twelve months, venture capital has been pouring into Embodied AI at a staggering rate. With Nine-figure rounds becoming the norm for firms like &lt;strong&gt;Figure AI&lt;/strong&gt; and &lt;strong&gt;Apptronik&lt;/strong&gt;, the narrative is shifting: the next great AI frontier won&amp;rsquo;t be confined to browser tabs and chatbots. It’s going to have legs, and it’ll be the one building your next car&amp;hellip; or, more likely, finally figuring out how to assemble that flat-pack wardrobe without any leftover screws. The boundary between digital assets and physical automation is dissolving, and it seems the revolution is being bankrolled by the blockchain.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>AI</category><category>neura-robotics</category><category>humanoid-robot</category><category>funding</category><category>tether</category><category>embodied-ai</category><category>investment</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-08-image-446cdaa1.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Northwestern's AI-Evolved Robots Simply Refuse to Die</title><link>https://robohorizon.uk/en-gb/news/2026/03/northwestern-ai-resilient-robots/</link><pubDate>Sun, 08 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/northwestern-ai-resilient-robots/</guid><description>Northwestern University engineers have developed AI-evolved robots that recover from catastrophic damage and march on despite losing limbs.</description><content:encoded>&lt;p&gt;Engineers at &lt;strong&gt;Northwestern University&lt;/strong&gt; have unleashed a new breed of robot that seems to have one singular, stubborn directive: refuse to die. These spindly, stick-like &amp;ldquo;legged metamachines&amp;rdquo; are the first of their kind to be evolved entirely within a digital womb before ever setting foot—or rather, strut—into the physical world. And once they’re out? They’re practically unkillable. They can shrug off a limb being lopped off—an injury that would be &amp;ldquo;curtains for any other robot in the wild&amp;rdquo;—and simply keep on marching.&lt;/p&gt;
&lt;p&gt;The process, which lead researcher Sam Kriegman calls &amp;ldquo;instant evolution,&amp;rdquo; is as brilliant as it is slightly eerie. An AI algorithm designs the bots from scratch in a computer simulation with one simple mission: locomotion. The AI churns through designs that no human engineer would likely ever dream up, and once a successful blueprint is generated, the Lego-like modules are rapidly snapped together and &amp;ldquo;quite literally hit the ground running.&amp;rdquo;&lt;/p&gt;
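&lt;p&gt;The team’s actual pipeline is rather more sophisticated, but the general shape of such an evolutionary search is easy to sketch. In the toy loop below, &lt;code&gt;simulate_locomotion&lt;/code&gt; is a hypothetical stand-in for the physics simulator that scores how far a candidate body travels.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import random

# Rough shape of an evolutionary design loop, purely illustrative.
# simulate_locomotion() is a hypothetical stand-in for a physics
# simulator that scores a candidate body's walking distance.
MODULES = ("pitch", "yaw", "roll")   # joint orientation per snap-on module
POP_SIZE = 32

def random_design(n_modules=6):
    return [random.choice(MODULES) for _ in range(n_modules)]

def mutate(design):
    child = list(design)
    child[random.randrange(len(child))] = random.choice(MODULES)
    return child

def simulate_locomotion(design):
    return random.random()           # placeholder fitness: distance walked

population = [random_design() for _ in range(POP_SIZE)]
for generation in range(50):
    ranked = sorted(population, key=simulate_locomotion, reverse=True)
    parents = ranked[: POP_SIZE // 2]            # keep the fittest half
    children = [mutate(random.choice(parents)) for _ in parents]
    population = parents + children

best = max(population, key=simulate_locomotion)  # blueprint to assemble
&lt;/code&gt;&lt;/pre&gt;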
&lt;div class="video-container youtube-facade"
data-youtube-src="https://www.youtube.com/embed/kelysQlBnao?autoplay=1"
role="button"
tabindex="0"
aria-label="Play video"&gt;&lt;img class="youtube-facade-thumbnail"
src="https://img.youtube.com/vi/kelysQlBnao/hqdefault.jpg"
srcset="https://img.youtube.com/vi/kelysQlBnao/mqdefault.jpg 320w,
https://img.youtube.com/vi/kelysQlBnao/hqdefault.jpg 480w,
https://img.youtube.com/vi/kelysQlBnao/sddefault.jpg 640w,
https://img.youtube.com/vi/kelysQlBnao/maxresdefault.jpg 1280w"
sizes="(max-width: 320px) 320px, (max-width: 480px) 480px, (max-width: 640px) 640px, 1280px"
alt="Video thumbnail"
loading="lazy"
decoding="async"&gt;&lt;button class="youtube-facade-play-icon" aria-label="Play video" type="button"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;script&gt;
(function() {
var container = document.currentScript.previousElementSibling;
if (!container || !container.classList.contains('youtube-facade')) return;
function loadVideo() {
var src = container.dataset.youtubeSrc;
if (!src) return;
var iframe = document.createElement('iframe');
iframe.src = src;
iframe.title = 'YouTube video player';
iframe.frameBorder = '0';
iframe.allow = 'accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share';
iframe.referrerPolicy = 'strict-origin-when-cross-origin';
iframe.allowFullscreen = true;
container.innerHTML = '';
container.classList.remove('youtube-facade');
container.removeAttribute('role');
container.removeAttribute('tabindex');
container.removeAttribute('aria-label');
container.appendChild(iframe);
}
container.addEventListener('click', loadVideo);
container.addEventListener('keydown', function(e) {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault();
loadVideo();
}
});
})();
&lt;/script&gt;
&lt;p&gt;What’s truly remarkable is their sheer simplicity and grit. These robots have no eyes, no ears, and no external sensors to speak of. Each module is a self-contained unit with its own motor, battery, and computer, capable only of rotating around a single joint. Yet, they possess an innate &amp;ldquo;athletic intelligence.&amp;rdquo; They instinctively know when they’ve been flipped over or when a part of them has been severed, adapting their movement on the fly to continue their relentless forward march. Even the amputated leg doesn&amp;rsquo;t know when to quit, often rolling away on its own.&lt;/p&gt;
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;Let’s be clear: you won’t see these things delivering your Deliveroo order anytime soon. The researchers are upfront about the fact that they are &amp;ldquo;not yet useful&amp;rdquo; in a commercial sense. But this project isn&amp;rsquo;t about immediate utility; it’s a monumental leap toward creating truly robust machines. Most modern robots are notoriously fragile; a single broken leg can turn a multi-million-pound piece of kit into an expensive paperweight.&lt;/p&gt;
&lt;p&gt;These metamachines, however, demonstrate a path toward robots that can survive and adapt in unpredictable, hostile environments without a human holding their hand. By combining modularity with AI-driven design, this research could pave the way for resilient robots in search-and-rescue, deep-space exploration, and might even help us answer fundamental questions about the very nature of evolutionary biology.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>AI</category><category>northwestern-university</category><category>evolutionary-robotics</category><category>ai</category><category>soft-robotics</category><category>research</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-08-image-5e670715.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Allonic’s $7.2m Bet: Weaving Robot Bodies Like Muscle Tissue</title><link>https://robohorizon.uk/en-gb/magazine/2026/03/allonic-robot-weaving/</link><pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/magazine/2026/03/allonic-robot-weaving/</guid><description>How Hungarian start-up Allonic is fixing the robotics bottleneck by weaving hardware like textiles, following a record-breaking $7.2m pre-seed round.</description><content:encoded>&lt;p&gt;In the cacophony of AI hype, where digital minds are being birthed at a dizzying pace, a quiet, stubborn truth has been holding back the robot revolution: the bodies are still a proper nightmare to build. While software is busy eating the world, robotic hardware remains largely stuck in a Victorian-era paradigm of painstaking, manual assembly. A Budapest-based startup, &lt;strong&gt;Allonic&lt;/strong&gt;, thinks this is patently absurd, and they’ve just secured a tidy $7.2 million in pre-seed funding to prove it. This isn&amp;rsquo;t just any old investment round; it’s the largest of its kind in Hungarian history, and it’s squarely aimed at solving the industry&amp;rsquo;s most tedious—and arguably most vital—bottleneck.&lt;/p&gt;
&lt;p&gt;The problem is one of sheer complexity. Crafting advanced robotic hands that mimic human dexterity is a fiddly business involving hundreds of tiny screws, bearings, cables, and delicate joints, all pieced together by hand. This makes them eye-wateringly expensive, fragile, and incredibly slow to iterate. Allonic&amp;rsquo;s founders, Benedek Tasi, Dávid Pelyva, and David Holló, experienced this frustration first-hand while researching biomimetic hands at a university in Budapest. &amp;ldquo;We&amp;rsquo;d spend weeks assembling hundreds of tiny parts&amp;hellip; getting stuck with archaic manufacturing methods,&amp;rdquo; says Tasi. &amp;ldquo;That’s when we realised the real problem wasn&amp;rsquo;t the design; it was how we were actually making the thing.&amp;rdquo;&lt;/p&gt;
&lt;h3 id="weaving-the-future-with-3d-tissue-braiding"&gt;Weaving the Future with 3D Tissue Braiding&lt;/h3&gt; &lt;p&gt;Allonic&amp;rsquo;s solution sounds like something plucked straight from the pages of a high-concept sci-fi novel, and they call it &lt;strong&gt;3D Tissue Braiding&lt;/strong&gt;. Forget the traditional assembly line; imagine a high-tech loom weaving a robotic limb into existence instead. The system starts with a basic skeletal frame and then automatically braids high-strength fibres, elastics, tendons, and even sensor wiring around it in one continuous, automated flow. The result is a monolithic, fully-formed robotic part that is strong, flexible, and ready for its actuators.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Instead of faffing about with hundreds of individual components like bearings, screws, and cables, we&amp;rsquo;re forming tendons, joints, and load-bearing tissues directly over a skeletal core,&amp;rdquo; explains CEO Benedek Tasi.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This approach effectively collapses the entire manufacturing supply chain. A design can move from a CAD file to a functional prototype in hours rather than weeks. Allonic claims its second-generation machinery is already five times faster and half the size of its predecessor. For an industry where hardware iteration is usually a costly, soul-crushing ordeal, this is a monumental claim.&lt;/p&gt;
&lt;div class="video-container youtube-facade"
data-youtube-src="https://www.youtube.com/embed/OjzATbgg5sc?autoplay=1"
role="button"
tabindex="0"
aria-label="Play video"&gt;&lt;img class="youtube-facade-thumbnail"
src="https://img.youtube.com/vi/OjzATbgg5sc/hqdefault.jpg"
srcset="https://img.youtube.com/vi/OjzATbgg5sc/mqdefault.jpg 320w,
https://img.youtube.com/vi/OjzATbgg5sc/hqdefault.jpg 480w,
https://img.youtube.com/vi/OjzATbgg5sc/sddefault.jpg 640w,
https://img.youtube.com/vi/OjzATbgg5sc/maxresdefault.jpg 1280w"
sizes="(max-width: 320px) 320px, (max-width: 480px) 480px, (max-width: 640px) 640px, 1280px"
alt="Video thumbnail"
loading="lazy"
decoding="async"&gt;&lt;button class="youtube-facade-play-icon" aria-label="Play video" type="button"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;script&gt;
(function() {
var container = document.currentScript.previousElementSibling;
if (!container || !container.classList.contains('youtube-facade')) return;
function loadVideo() {
var src = container.dataset.youtubeSrc;
if (!src) return;
var iframe = document.createElement('iframe');
iframe.src = src;
iframe.title = 'YouTube video player';
iframe.frameBorder = '0';
iframe.allow = 'accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share';
iframe.referrerPolicy = 'strict-origin-when-cross-origin';
iframe.allowFullscreen = true;
container.innerHTML = '';
container.classList.remove('youtube-facade');
container.removeAttribute('role');
container.removeAttribute('tabindex');
container.removeAttribute('aria-label');
container.appendChild(iframe);
}
container.addEventListener('click', loadVideo);
container.addEventListener('keydown', function(e) {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault();
loadVideo();
}
});
})();
&lt;/script&gt;
&lt;h3 id="from-niche-labs-to-an-infrastructure-player"&gt;From Niche Labs to an &amp;ldquo;Infrastructure Player&amp;rdquo;&lt;/h3&gt; &lt;p&gt;The $7.2 million round, led by &lt;strong&gt;Visionaries Club&lt;/strong&gt; with participation from &lt;strong&gt;Day One Capital&lt;/strong&gt; and angel investors from AI heavyweights like &lt;strong&gt;OpenAI&lt;/strong&gt; and &lt;strong&gt;Hugging Face&lt;/strong&gt;, is a serious vote of confidence. It’s a recognition that without better hardware, all the brilliant AI in the world will remain trapped in clumsy, impractical shells. &amp;ldquo;Hardware remains one of the most significant sticking points in robotics,&amp;rdquo; says Marton Sarkadi Nagy, a partner at Visionaries Club. &amp;ldquo;We won&amp;rsquo;t get to the finish line if the hardware isn&amp;rsquo;t up to scratch.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Allonic isn’t necessarily trying to build the next Atlas or Optimus themselves. Instead, they see themselves as an &amp;ldquo;infrastructure player,&amp;rdquo; providing the manufacturing backbone for the entire robotics sector. Their business model involves customers designing bespoke robot bodies on Allonic&amp;rsquo;s platform, which the company then manufactures and delivers. They’ve already completed a pilot project in electronics manufacturing—a sector crying out for manipulators that are more dexterous than simple grippers but less pricey than a full humanoid.&lt;/p&gt;
&lt;p&gt;The company is also attracting significant interest from humanoid robotics firms and Big Tech players who know that scaling their ambitious projects depends entirely on cracking the manufacturing code.&lt;/p&gt;
&lt;h3 id="the-end-of-assembly-as-we-know-it"&gt;The End of Assembly as We Know It?&lt;/h3&gt; &lt;p&gt;Of course, a record-breaking pre-seed round and a slick demo don&amp;rsquo;t guarantee a revolution. The road from a brilliant manufacturing process to a global industry standard is long and littered with obstacles. Allonic will need to prove that its &amp;ldquo;woven&amp;rdquo; limbs can withstand the rigours of industrial use, match the precision of traditionally machined parts, and be produced at a cost that makes sense at scale.&lt;/p&gt;
&lt;p&gt;Still, the concept is undeniably compelling. By tackling the least glamorous but most fundamental problem in robotics, Allonic is making a bold statement. While the world is mesmerised by the ghost in the machine, this Hungarian startup is quietly redesigning the machine itself. If they succeed, the future of robotics might not be put together with a screwdriver, but woven on a loom.&lt;/p&gt;</content:encoded><category>Robotics</category><category>Startups</category><category>Manufacturing</category><category>allonic</category><category>robot-bodies</category><category>3d-tissue-braiding</category><category>automation</category><category>hungarian-tech</category><category>venture-capital</category><media:content url="https://robohorizon.uk/images/shared/magazine/2026-03-07-image001-dd75cd0f.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>New AI Sim Runs 10-Minute Robot Tasks at 15 FPS on an RTX 4090</title><link>https://robohorizon.uk/en-gb/news/2026/03/ai-robot-sim-rtx4090/</link><pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/ai-robot-sim-rtx4090/</guid><description>The Interactive World Simulator provides stable, long-horizon physics at 15 FPS on a single GPU, potentially revolutionising robot training and evaluation.</description><content:encoded>&lt;p&gt;World models in robotics have historically had the physical consistency of a soggy paper bag once you push them past a few seconds of simulation. However, a new project dubbed the &lt;strong&gt;Interactive World Simulator&lt;/strong&gt; is looking to change the game, boasting the ability to generate over 10 minutes of stable, interactive video predictions at 15 FPS—all while running on a single &lt;strong&gt;NVIDIA&lt;/strong&gt; RTX 4090. That isn&amp;rsquo;t a typo: ten minutes of complex, contact-rich physics, purring along on a standard consumer-grade GPU.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/yxwangbot/status/2029608309960130624"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;Developed by researcher Yixuan Wang, this action-conditioned world model isn&amp;rsquo;t just a fancy pre-rendered clip; it’s a fully interactive simulation that you can &amp;ldquo;drive&amp;rdquo; in real-time. Perhaps the most impressive bit? You can actually take it for a spin yourself via a browser-based demo right now, with none of the usual Python library headaches or &lt;code&gt;pip install&lt;/code&gt; faff required. The model masterfully handles a variety of tricky tasks, from fiddly cable routing to sweeping up piles of objects, all generated purely in pixel space. To be clear: these aren&amp;rsquo;t recordings from a real camera; they are entirely open-loop predictions dreamt up by the model itself.&lt;/p&gt;
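&lt;p&gt;To make the &amp;ldquo;open-loop&amp;rdquo; point concrete, here is a minimal sketch of how an action-conditioned rollout works in principle; the &lt;code&gt;predict&lt;/code&gt; callable is a hypothetical stand-in, not the simulator’s published interface.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from collections import deque
import numpy as np

def rollout(predict, first_frame, actions, context=8):
    """Open-loop, action-conditioned rollout: every new frame is generated
    from previously *predicted* frames plus an action, never a real camera."""
    frames = deque([first_frame], maxlen=context)  # sliding context window
    video = [first_frame]
    for action in actions:
        nxt = predict(list(frames), action)  # model imagines the next frame
        frames.append(nxt)                   # feed the prediction back in
        video.append(nxt)
    return video

# Dummy predictor standing in for the real network (hypothetical interface):
def dummy_predict(frames, action):
    return frames[-1]                        # placeholder "physics"

video = rollout(dummy_predict, np.zeros((64, 64, 3)), actions=range(150))
# At 15 FPS, a ten-minute episode is 15 * 600 = 9,000 such steps.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The hard part, of course, is keeping those 9,000 fed-back predictions physically coherent rather than drifting into mush, which is exactly the stability claim being made here.&lt;/p&gt;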
&lt;h4 id="why-does-this-actually-matter"&gt;Why does this actually matter?&lt;/h4&gt; &lt;p&gt;This is far more than just a shiny tech demo; it’s a potential fix for two of the biggest bottlenecks in modern robotics. Firstly, it offers &lt;em&gt;scalable data generation&lt;/em&gt;. Rather than relying on slow, eye-wateringly expensive real-world robots to scrape together training data, developers can now churn out mountains of physically plausible data within the simulator. Secondly, it allows for &lt;em&gt;faithful policy evaluation&lt;/em&gt;, giving researchers a way to stress-test a robot’s &amp;ldquo;brain&amp;rdquo; in a safe, consistent, and infinitely repeatable virtual environment before letting it loose on actual hardware. In short: it makes training robots cheaper, faster, and significantly less likely to end with a five-figure robotic arm smashing a hole in the lab wall.&lt;/p&gt;</content:encoded><category>News</category><category>AI</category><category>Robotics</category><category>world-model</category><category>robotics-simulation</category><category>ai-training</category><category>nvidia</category><category>rtx-4090</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-07-image-fdee9e74.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Origami Robotics’ ‘Digital Twin’ Glove Gives AI Nimble Fingers</title><link>https://robohorizon.uk/en-gb/news/2026/03/origami-robotics-dexterity/</link><pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/news/2026/03/origami-robotics-dexterity/</guid><description>YC-backed Origami Robotics launches a high-DOF robotic hand and data glove to solve AI's dexterity bottleneck.</description><content:encoded>&lt;p&gt;The &amp;ldquo;embodiment gap&amp;rdquo; is essentially a high-brow way of saying that even the most sophisticated AI robots are still incredibly clumsy. &lt;strong&gt;Origami Robotics, Inc.&lt;/strong&gt;, a fresh graduate from the prestigious &lt;strong&gt;Y Combinator&lt;/strong&gt; accelerator, is tackling this hurdle not just with clever code, but with superior hardware. The startup has engineered a high-degree-of-freedom (DOF) robotic hand paired with a bespoke data-collection glove, creating a near-perfect &amp;ldquo;digital twin&amp;rdquo; system designed to teach robots how to navigate the physical world with finesse.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/ycombinator/status/2029678343629476249"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;The real sticking point for robotic dexterity is data—specifically, the massive disconnect between the fluid movements of a human hand and the clunky mechanics of a robotic one. Trying to train a robot using video footage of humans is a bit of a bodge, and simulation data rarely survives its first contact with reality. Origami’s solution is refreshingly direct: make the robot hand and the data-glove hardware a one-to-one match. This allows a human operator to generate high-quality, perfectly mapped training data simply by performing a task themselves. It’s the classic &amp;ldquo;garbage in, garbage out&amp;rdquo; dilemma, and Origami is determined to ensure the input is nothing short of Michelin-star quality.&lt;/p&gt;
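&lt;p&gt;Origami hasn’t published its data pipeline, but the appeal of a one-to-one mapping is easy to see in code: each glove reading doubles as both the live robot command and the training label. Every device function in the sketch below is a hypothetical stand-in.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import json, time

# Toy teleoperation logger. Because glove and robot hand share a joint
# layout, one reading is both the live command and the training label.
# read_glove_joints() and command_robot_hand() are hypothetical stand-ins.
def read_glove_joints():
    return [0.0] * 21                # one angle (radians) per joint

def command_robot_hand(joint_angles):
    pass                             # the hand mirrors the operator 1:1

def record_demonstration(seconds=10.0, hz=50):
    log = []
    for _ in range(int(seconds * hz)):
        q = read_glove_joints()
        command_robot_hand(q)                        # drive the robot live
        log.append({"t": time.time(), "joints": q})  # same values become data
        time.sleep(1.0 / hz)
    return log

with open("demo.json", "w") as f:
    json.dump(record_demonstration(seconds=1.0), f)
&lt;/code&gt;&lt;/pre&gt;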
&lt;p&gt;The company’s ultimate ambition is to build a &amp;ldquo;manipulate anything&amp;rdquo; model, with plans to deploy these dexterous digits across factories, logistics hubs, and research labs. Proving they aren&amp;rsquo;t just another startup with a fancy gadget, Origami has already started shipping hardware to the heavy hitters, with &lt;strong&gt;Amazon’s&lt;/strong&gt; physical AI labs reportedly among their early adopters.&lt;/p&gt;
&lt;h4 id="why-does-this-matter"&gt;Why does this matter?&lt;/h4&gt; &lt;p&gt;While much of the industry is distracted by bipedal robots performing backflips for social media, Origami Robotics is quietly solving the far less glamorous—but arguably more vital—problem of manipulation. Dexterous hands remain a critical bottleneck for general-purpose robotics. By creating a system that dramatically simplifies the collection of high-fidelity data, Origami isn&amp;rsquo;t just building a better hand; it’s potentially providing the foundational tool that could help the entire field finally get a grip.&lt;/p&gt;</content:encoded><category>News</category><category>Robotics</category><category>AI</category><category>origami-robotics</category><category>y-combinator</category><category>robot-hand</category><category>ai-training</category><category>dexterity</category><media:content url="https://robohorizon.uk/images/shared/news/2026-03-07-image-060d7c54.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item><item><title>Cortical Labs Plugs Living Human Brain Cells into an LLM</title><link>https://robohorizon.uk/en-gb/magazine/2026/03/cortical-labs-brain-llm/</link><pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.uk/en-gb/magazine/2026/03/cortical-labs-brain-llm/</guid><description>From DOOM to dialogue: Cortical Labs links lab-grown human neurons to an LLM, using biological pulses to generate text. Is this the future of biocomputing?</description><content:encoded>&lt;p&gt;Just when you thought the AI hype cycle couldn&amp;rsquo;t get any more surreal, a biotech outfit from Melbourne has decided to bin the GPUs and plug an AI directly into a living, biological brain. Well, in a manner of speaking. &lt;strong&gt;Cortical Labs&lt;/strong&gt;, the firm that famously taught a petri dish of 800,000 human neurons to play a decent game of &lt;em&gt;Pong&lt;/em&gt;, has moved on to rather more hellish pursuits. After successfully training a fresh batch of 200,000 neurons to navigate the demon-infested corridors of &lt;em&gt;DOOM&lt;/em&gt;, they’ve now wired their &amp;ldquo;DishBrain&amp;rdquo; into a Large Language Model (LLM).&lt;/p&gt;
&lt;p&gt;That’s right. Actual, living human brain cells, firing electrical impulses atop a silicon chip, are now the ones choosing the words an AI speaks. This isn&amp;rsquo;t just another incremental tweak to machine learning; it’s a bizarre, brilliant, and slightly unsettling leap into the realm of &amp;ldquo;wetware&amp;rdquo; and biological computing. Frankly, it makes your average chatbot look about as sophisticated as a pocket calculator.&lt;/p&gt;
&lt;h3 id="from-pixelated-paddles-to-hellish-landscapes"&gt;From Pixelated Paddles to Hellish Landscapes&lt;/h3&gt; &lt;p&gt;To grasp how we reached a point where brain cells are effectively co-authoring text, we have to look back at &lt;strong&gt;Cortical Labs&amp;rsquo;&lt;/strong&gt; greatest hits. In 2022, the team made global headlines with their &amp;ldquo;DishBrain&amp;rdquo; experiment. They grew neurons on a microelectrode array capable of both stimulating the cells and reading their activity. By sending electrical signals to indicate the position of the ball in &lt;em&gt;Pong&lt;/em&gt;, the neurons learned to fire in a way that controlled the paddle, demonstrating goal-directed learning in a mere five minutes. It was a staggering proof-of-concept for synthetic biological intelligence.&lt;/p&gt;
&lt;p&gt;But &lt;em&gt;Pong&lt;/em&gt; is easy mode. In the tech world, there is an ancient, unwritten law for judging new hardware: &amp;ldquo;Can it run &lt;em&gt;DOOM&lt;/em&gt;?&amp;rdquo; Naturally, &lt;strong&gt;Cortical Labs&lt;/strong&gt; took the bait. The jump from the flat, 2D world of &lt;em&gt;Pong&lt;/em&gt; to the 3D environment of &lt;em&gt;DOOM&lt;/em&gt; is a massive step up, requiring spatial navigation, threat detection, and split-second decision-making. Yet, the neurons rose to the occasion. The game&amp;rsquo;s video feed was translated into patterns of electrical stimulation, and the neurons&amp;rsquo; responses were decoded into in-game actions like strafing and shooting. While the performance was more &amp;ldquo;clumsy amateur&amp;rdquo; than &amp;ldquo;pro gamer,&amp;rdquo; it proved the system could handle incredibly complex, dynamic tasks.&lt;/p&gt;
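&lt;p&gt;Stripped of the wetware, the experimental loop itself is surprisingly simple. The sketch below shows the general stimulate-read-decode cycle; all of the electrode functions and encodings are hypothetical stand-ins for Cortical Labs’ actual rig.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import time

# Shape of a closed-loop DishBrain-style session, purely illustrative:
# encode game state as stimulation, read spikes back, decode an action.
# All electrode functions are hypothetical stand-ins for the real rig.
def stimulate(electrode_pattern):
    pass                                  # deliver the encoded game state

def read_spikes():
    return [0] * 64                       # per-channel spike counts (toy)

def encode_state(game_state):
    return [1 if game_state["threat"] else 0] * 8   # toy encoding

def decode_action(spike_counts):
    return "shoot" if sum(spike_counts) &gt; 100 else "strafe"

game_state = {"threat": True}
for _ in range(10):                       # one decision per loop tick
    stimulate(encode_state(game_state))
    time.sleep(0.02)                      # let the culture respond
    action = decode_action(read_spikes())
&lt;/code&gt;&lt;/pre&gt;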
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/corticallabs/status/2027118584779231241"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;h3 id="giving-an-llm-a-biological-ghost-in-the-machine"&gt;Giving an LLM a Biological Ghost in the Machine&lt;/h3&gt; &lt;p&gt;Having conquered the shareware classics, the next logical step was apparently to give the neurons a voice. The latest experiment, highlighted by tech luminaries like Robert Scoble, reveals the brain cells interfaced directly with an LLM. Instead of moving a paddle or a Space Marine, the electrical impulses fired by the neurons are now being used to select each token—the bits of words or characters—that the AI generates.&lt;/p&gt;
&lt;p&gt;A first-look video shows the process in action: a grid displays the channels being stimulated and the corresponding feedback from the neurons as they collectively &amp;ldquo;decide&amp;rdquo; on the next piece of text. It is a raw, unfiltered look at biological matter performing a cognitive task that, until now, has been the exclusive domain of power-hungry algorithms running on massive server farms.&lt;/p&gt;
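&lt;p&gt;Cortical Labs hasn’t disclosed its decoding scheme, so treat the following as one illustrative possibility: split the electrode channels into buckets, one per candidate token proposed by the LLM, and let the most active bucket choose.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import numpy as np

# Purely illustrative decoder; Cortical Labs' actual scheme is not public.
# The electrode array's spike counts are split across the LLM's candidate
# tokens, and the most active group of channels selects the next token.
def pick_token(candidates, channel_spike_counts):
    counts = np.asarray(channel_spike_counts, dtype=float)
    groups = np.array_split(counts, len(candidates))  # one bucket per token
    activity = [group.sum() for group in groups]
    return candidates[int(np.argmax(activity))]

# The LLM proposes its most likely next tokens; the dish "votes".
candidates = [" the", " a", " robot", " brain", " cell"]
spikes = np.random.poisson(3.0, size=64)   # 64 recording channels (toy)
print(pick_token(candidates, spikes))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Even this toy makes the division of labour clear: the LLM still supplies the candidates, while the neurons merely arbitrate between them.&lt;/p&gt;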
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;We have shown we can interact with living biological neurons in such a way that compels them to modify their activity, leading to something that resembles intelligence,&amp;rdquo; stated Dr Brett Kagan, Chief Scientific Officer of &lt;strong&gt;Cortical Labs&lt;/strong&gt;, regarding their earlier work.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This new development takes that interaction to a whole new level. It’s one thing to react to a bouncing ball; it’s another entirely to participate in the fundamental construction of language.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/scobleizer/status/1716312250422796590"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;h3 id="why-bother-with-brains"&gt;Why Bother With Brains?&lt;/h3&gt; &lt;p&gt;At this point, you might be wondering: why go through the immense hassle of keeping 200,000 neurons alive in a dish when a high-end NVIDIA chip can run an LLM perfectly well? The answer lies in efficiency and the fundamental limits of silicon. The human brain performs mind-boggling computations on about 20 watts of power—roughly the same as a dim lightbulb. A supercomputer attempting to simulate that same level of activity would require millions of times more energy.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Cortical Labs&lt;/strong&gt; and others in the field are betting that this incredible energy efficiency can be harnessed. Biological systems excel at parallel processing and adaptive learning in ways that traditional, deterministic computers often struggle to replicate. By merging living neurons with silicon, they are creating a hybrid computing architecture that could one day power systems that learn faster and consume a fraction of the electricity.&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t just about building a more eccentric chatbot. The team at &lt;strong&gt;Cortical Labs&lt;/strong&gt;, led by CEO Dr Hon Weng Chong, envisions a future where this technology revolutionises robotics, personalised medicine, and drug discovery. Imagine a robot that doesn&amp;rsquo;t just follow pre-programmed scripts but learns and adapts to a new environment with the fluid intelligence of a living organism. Or consider using a patient&amp;rsquo;s own neurons on a chip to test the efficacy of new treatments for neurological conditions like epilepsy.&lt;/p&gt;
&lt;p&gt;The road ahead is long, and biological systems are notoriously temperamental compared to the reliable consistency of silicon. But as &lt;strong&gt;Cortical Labs&lt;/strong&gt; has shown, a cluster of cells in a dish has already graduated from video games to conversation. The prospect of these same neurons one day controlling a physical robot is no longer the stuff of science fiction—it&amp;rsquo;s the next item on the roadmap. And that is a thought that is simultaneously terrifying and exhilarating.&lt;/p&gt;</content:encoded><category>AI</category><category>Biotechnology</category><category>Robotics</category><category>cortical-labs</category><category>brain-computer-interface</category><category>llm</category><category>biological-computing</category><category>neurons</category><category>doom</category><media:content url="https://robohorizon.uk/images/shared/magazine/2026-03-05-image-bd817a4a.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-gb</dc:language></item></channel></rss>