<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:language="http://purl.org/dc/elements/1.1/language" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>RoboHorizon Robot Magazine - AI you can touch</title><link>https://robohorizon.com/en-us/</link><description>A compass in modern technologies primarily related to robotics, serving both business and private sectors with fresh news, comprehensive analyses, and tests.</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sun, 19 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://robohorizon.com/en-us/index.xml" rel="self" type="application/rss+xml"/><item><title>Ant Group's New AI Turns Single Videos into 3D Worlds in Real-Time</title><link>https://robohorizon.com/en-us/news/2026/04/ant-groups-new-ai-turns-single-videos-into-3d-worlds-in-real-time/</link><pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/ant-groups-new-ai-turns-single-videos-into-3d-worlds-in-real-time/</guid><description>Robbyant, an Ant Group company, has open-sourced LingBot-Map, a foundation model that creates detailed 3D reconstructions from a single video stream at 20 FPS.</description><content:encoded>&lt;p&gt;Just when you thought your phone&amp;rsquo;s camera was only good for blurry concert photos, researchers have turned it into a real-time 3D scanner. &lt;strong&gt;Robbyant&lt;/strong&gt;, the embodied AI division of &lt;strong&gt;Ant Group&lt;/strong&gt;, has just open-sourced &lt;strong&gt;LingBot-Map&lt;/strong&gt;, a new 3D foundation model that reconstructs detailed, large-scale environments from a single streaming video. The kicker? 
It does this at a brisk 20 frames per second, a speed that makes most traditional photogrammetry methods look like they&amp;rsquo;re wading through molasses.&lt;/p&gt;
&lt;p&gt;The secret sauce is a novel architecture called a &lt;strong&gt;Geometric Context Transformer (GCT)&lt;/strong&gt;. This isn&amp;rsquo;t just another transformer bolted onto a vision problem. The GCT is specifically designed to tackle the Achilles&amp;rsquo; heel of monocular (single-camera) SLAM systems: drift. It cleverly manages geometric information using three parallel attention mechanisms: an anchor context for stable coordinate grounding, a local pose-reference window for fine-grained detail, and a trajectory memory to correct for errors over long distances. This allows LingBot-Map to process sequences exceeding 10,000 frames with what Robbyant claims is &amp;ldquo;almost unchanged accuracy.&amp;rdquo; The project is available now on GitHub: &lt;a href="https://github.com/Robbyant/lingbot-map"&gt;Robbyant/lingbot-map&lt;/a&gt;.&lt;/p&gt;
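The paper&amp;rsquo;s exact formulation isn&amp;rsquo;t reproduced here, but the idea of three parallel attention streams over different geometric contexts can be sketched in a toy NumPy snippet (the function names, bank shapes, and the simple averaging fusion are our own illustrative assumptions, not Robbyant&amp;rsquo;s code):

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention of query tokens over one context bank.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def gct_step(frame_tokens, anchor, local_window, trajectory_memory):
    # Hypothetical sketch: each argument after frame_tokens is a
    # (keys, values) pair for one of the three parallel contexts
    # described in the article.
    a = attention(frame_tokens, *anchor)             # stable coordinate grounding
    l = attention(frame_tokens, *local_window)       # fine-grained local detail
    t = attention(frame_tokens, *trajectory_memory)  # long-range drift correction
    return (a + l + t) / 3.0  # naive fusion by averaging the three streams

# Toy usage: 4 incoming frame tokens of dimension 8 over three context banks.
rng = np.random.default_rng(0)
d = 8
frame = rng.normal(size=(4, d))
banks = [(rng.normal(size=(n, d)), rng.normal(size=(n, d))) for n in (6, 3, 10)]
out = gct_step(frame, *banks)
print(out.shape)  # (4, 8)
```

The point of the sketch is only the topology: one query stream, three independent key/value banks, and a fusion step; the real model&amp;rsquo;s fusion and memory management are far more involved.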
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/news/2026-04-19-original-2-c9f366e2_hu_75236fc4fd1052dc.webp"
srcset="https://robohorizon.com/images/shared/news/2026-04-19-original-2-c9f366e2_hu_75236fc4fd1052dc.webp 480w, https://robohorizon.com/images/shared/news/2026-04-19-original-2-c9f366e2_hu_4950aadb093c6f23.webp 640w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="A diagram showing the Geometric Context Transformer architecture of LingBot-Map."
loading="lazy"
width="480"
height="226"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;p&gt;The performance claims are, frankly, audacious. On the challenging Oxford Spires dataset, LingBot-Map achieved an Absolute Trajectory Error of just 6.42 meters, a nearly 2.8x improvement over the previous best streaming method. It even outperforms established offline methods that have the luxury of processing all images at once. On the ETH3D benchmark, it scored an F1 of 98.98, obliterating the runner-up by over 21 percentage points. For those interested in the gory technical details, the full methodology is laid out in a &lt;a href="https://arxiv.org/abs/2604.14141"&gt;paper on arXiv&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;LingBot-Map represents a significant step toward democratizing spatial intelligence. By eliminating the need for expensive LiDAR or complex multi-camera rigs, it opens the door for low-cost, high-performance 3D perception in robotics, autonomous vehicles, and augmented reality. This isn&amp;rsquo;t just about making pretty point clouds; it&amp;rsquo;s about giving machines a continuous, real-time understanding of the physical world. As a &amp;ldquo;3D foundation model,&amp;rdquo; it&amp;rsquo;s part of a larger trend to build AI that doesn&amp;rsquo;t just process text or images, but perceives, navigates, and interacts with complex, unstructured environments—a cornerstone for the future of embodied AI.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>autonomous</category><category>research</category><category>business</category><category>open-source</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-19-image-1-c9f366e2.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Robot Shatters Human World Record in Beijing's Half-Marathon</title><link>https://robohorizon.com/en-us/magazine/2026/04/robot-shatters-human-world-record-in-beijings-half-marathon/</link><pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/magazine/2026/04/robot-shatters-human-world-record-in-beijings-half-marathon/</guid><description>A humanoid robot just ran a 21km half-marathon in 50 minutes and 26 seconds, beating the human world record. We break down how the race went from comical failure to superhuman speed in just one year.</description><content:encoded>&lt;p&gt;Let’s get this out of the way immediately: a humanoid robot just ran a half-marathon faster than any human being in history. 
At the 2026 Beijing Humanoid Robot Half-Marathon on April 19th, a robot named &amp;ldquo;Lightning&amp;rdquo; (or &amp;ldquo;Flash&amp;rdquo;) from smartphone maker &lt;strong&gt;Honor&lt;/strong&gt; autonomously navigated the 21.0975-kilometer course in a staggering 50 minutes and 26 seconds. That time obliterates the official men&amp;rsquo;s world record of 57 minutes and 20 seconds.&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t just an incremental improvement. It&amp;rsquo;s a jaw-dropping leap that makes a mockery of last year&amp;rsquo;s results. The 2025 inaugural race was, to put it mildly, beautiful chaos. One robot face-planted seconds after the starting gun. Another slammed into a fence and shattered. The crowd favorite, a tiny bot named &amp;ldquo;Little Giant,&amp;rdquo; started smoking. The winner of that comedy of errors, &lt;strong&gt;Tiangong Ultra&lt;/strong&gt;, finished in 2 hours, 40 minutes, and 42 seconds—a respectable achievement for the time, but still worlds away from elite human performance. In just twelve months, we’ve gone from slapstick to superhuman.&lt;/p&gt;
&lt;h3 id="a-year-of-frightening-progress"&gt;A Year of Frightening Progress&lt;/h3&gt; &lt;p&gt;So, what happened in a year? A brute-force acceleration of both hardware and ambition, fueled by China’s aggressive industrial strategy. While &lt;strong&gt;Honor&amp;rsquo;s&lt;/strong&gt; &amp;ldquo;Lightning&amp;rdquo; took the endurance crown, the entire field demonstrated terrifying gains in raw speed. Just days before the race, &lt;strong&gt;Unitree Robotics&lt;/strong&gt; showed its &lt;strong&gt;H1&lt;/strong&gt; humanoid sprinting at 10.1 meters per second on a real track, putting it within spitting distance of Usain Bolt’s peak speed. This blistering pace, a threefold increase in just two years, signaled that the physical hardware was rapidly overcoming previous limitations.&lt;/p&gt;
&lt;p&gt;The race organizers fundamentally changed the nature of the challenge for 2026. The number of participants exploded from around 20 to over 300 robots from more than 100 teams. Crucially, they introduced a major focus on autonomy. Nearly 40% of the teams competed in the fully autonomous category, where the robot handles all navigation and decision-making. To hammer the point home, remote-controlled finishers had their times multiplied by a 1.2x coefficient, effectively a penalty for needing a human in the loop. That an autonomous robot won under these conditions is the real story; it wasn&amp;rsquo;t just a faster machine, but a smarter one.&lt;/p&gt;
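To make that coefficient concrete, here is a back-of-the-envelope helper (our own arithmetic illustrating the published 1.2x rule; the function name is hypothetical, not the organizers&amp;rsquo; scoring code):

```python
# The 2026 rules multiply remote-controlled finishers' times by 1.2x.
def adjusted_seconds(raw_seconds, autonomous):
    """Apply the 1.2x penalty coefficient to non-autonomous runs."""
    return raw_seconds if autonomous else raw_seconds * 1.2

# Lightning's winning autonomous run: 50 min 26 s = 3026 s, no penalty.
lightning = adjusted_seconds(50 * 60 + 26, autonomous=True)

# A remote-controlled rival would need a raw time under 3026 / 1.2
# (about 42 min 2 s) for its adjusted time to beat Lightning.
rival_raw = 2521  # 42 min 1 s
print(adjusted_seconds(rival_raw, autonomous=False) < lightning)  # True
```

In other words, the coefficient effectively spots every autonomous entrant a roughly eight-minute head start over a tele-operated one at this pace.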
&lt;div class="video-container youtube-facade"
data-youtube-src="https://www.youtube.com/embed/f6Fx73MJhdI?autoplay=1"
role="button"
tabindex="0"
aria-label="Play video"&gt;&lt;img class="youtube-facade-thumbnail"
src="https://img.youtube.com/vi/f6Fx73MJhdI/hqdefault.jpg"
srcset="https://img.youtube.com/vi/f6Fx73MJhdI/mqdefault.jpg 320w,
https://img.youtube.com/vi/f6Fx73MJhdI/hqdefault.jpg 480w,
https://img.youtube.com/vi/f6Fx73MJhdI/sddefault.jpg 640w,
https://img.youtube.com/vi/f6Fx73MJhdI/maxresdefault.jpg 1280w"
sizes="(max-width: 320px) 320px, (max-width: 480px) 480px, (max-width: 640px) 640px, 1280px"
alt="Video thumbnail"
loading="lazy"
decoding="async"&gt;&lt;button class="youtube-facade-play-icon" aria-label="Play video" type="button"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;script&gt;
(function() {
var container = document.currentScript.previousElementSibling;
if (!container || !container.classList.contains('youtube-facade')) return;
function loadVideo() {
var src = container.dataset.youtubeSrc;
if (!src) return;
var iframe = document.createElement('iframe');
iframe.src = src;
iframe.title = 'YouTube video player';
iframe.frameBorder = '0';
iframe.allow = 'accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share';
iframe.referrerPolicy = 'strict-origin-when-cross-origin';
iframe.allowFullscreen = true;
container.innerHTML = '';
container.classList.remove('youtube-facade');
container.removeAttribute('role');
container.removeAttribute('tabindex');
container.removeAttribute('aria-label');
container.appendChild(iframe);
}
container.addEventListener('click', loadVideo);
container.addEventListener('keydown', function(e) {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault();
loadVideo();
}
});
})();
&lt;/script&gt;
&lt;h3 id="more-than-a-race-its-an-audition"&gt;More Than a Race, It&amp;rsquo;s an Audition&lt;/h3&gt; &lt;p&gt;This event is far more than a sporting spectacle; it&amp;rsquo;s a high-stakes commercial audition. The grand prize isn&amp;rsquo;t a trophy but over 1 million yuan (about $140,000) in industrial orders. Beijing&amp;rsquo;s E-Town, the technology hub hosting the race, has explicitly designed the marathon as a pipeline to turn research projects into commercial products. With over 100 robotics firms and a 10 billion yuan government fund, the message is clear: prove your robot works on the track, and you&amp;rsquo;ll get a purchase order to deploy it in a factory.&lt;/p&gt;
&lt;p&gt;To that end, the organizers added a new event this year: the &amp;ldquo;Robot Baturu Challenge.&amp;rdquo; Held the day before the marathon, this challenge forced robots through 17 different obstacle courses simulating disaster rescue scenarios—testing their ability to navigate rubble, climb stairs, and handle real-world complexity. It&amp;rsquo;s a clear signal that the end goal isn&amp;rsquo;t just running, but creating machines capable of performing useful, difficult tasks in unstructured human environments. You can see how far these humanoids have come in this video:
&lt;a href="https://robohorizon.com/en-us/videos/humanoid-robots-to-run-half-marathon-in-ultimate-endurance-test/" hreflang="en-us"&gt;Humanoid Robots to Run Half-Marathon in Ultimate Endurance Test&lt;/a&gt;
.&lt;/p&gt;
&lt;h4 id="the-technical-leap"&gt;The Technical Leap&lt;/h4&gt; &lt;p&gt;The performance leap was enabled by across-the-board upgrades:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Hardware:&lt;/strong&gt; Improved joint torque, better power efficiency, and advanced heat management—Honor&amp;rsquo;s winning bot reportedly uses a powerful liquid-cooling system—were essential for maintaining high speeds over 21 kilometers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Software:&lt;/strong&gt; More robust motion control algorithms allowed for stability on varied terrain, from city streets to park paths.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Navigation:&lt;/strong&gt; Every robot was equipped with a BeiDou satellite navigation badge, providing centimeter-level precision for location tracking, a must-have for autonomous operation.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="the-starting-gun-on-a-new-era"&gt;The Starting Gun on a New Era&lt;/h3&gt; &lt;p&gt;It&amp;rsquo;s tempting to get lost in the astounding 50-minute finishing time. But the true headline is the rate of progress. In a single year, the winning time improved by nearly two hours. The competition went from a novelty act where simply finishing was a victory to a legitimate athletic contest where the winning machine surpassed the pinnacle of human achievement.&lt;/p&gt;
&lt;p&gt;While there were still stumbles—reports noted one robot falling at the start and another hitting a barrier—the overall capability of the field was night and day compared to 2025. The question is no longer &lt;em&gt;if&lt;/em&gt; humanoids can perform complex dynamic tasks, but how quickly they will master them. The 2026 Beijing Half-Marathon wasn&amp;rsquo;t just a race; it was the starting gun for an era where the physical capabilities of robots are no longer a novelty, but a serious, world-beating reality. The rest of the world has been put on notice.&lt;/p&gt;</content:encoded><category>humanoids</category><category>autonomous</category><category>business</category><category>research</category><category>policy</category><media:content url="https://robohorizon.com/images/shared/magazine/2026-04-19-image001-1d819d2c.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>DeepMind's Gemini 1.6 Gives Robots Point-and-Click Reality</title><link>https://robohorizon.com/en-us/news/2026/04/deepminds-gemini-16-gives-robots-point-and-click-reality/</link><pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/deepminds-gemini-16-gives-robots-point-and-click-reality/</guid><description>Google DeepMind's latest model, Gemini Robotics-ER 1.6, enhances robot vision, spatial reasoning, and safety, letting them see and understand the physical world.</description><content:encoded>&lt;p&gt;&lt;strong&gt;Google DeepMind&lt;/strong&gt; has unveiled &lt;strong&gt;Gemini Robotics-ER 1.6&lt;/strong&gt;, the latest update to its &amp;ldquo;Embodied Reasoning&amp;rdquo; model, designed to give robots a much-needed dose of common sense about the physical world. The new model significantly improves a robot&amp;rsquo;s ability to see, understand, and interact with its surroundings, moving beyond just following rote commands to actually reasoning about its tasks.&lt;/p&gt;
&lt;p&gt;A core upgrade in Gemini Robotics-ER 1.6 is its enhanced visual and spatial understanding, best exemplified by its &amp;ldquo;pointing&amp;rdquo; capability. Ask it to find a specific tool in a cluttered workshop, and the model can now accurately identify, count, and pinpoint the correct items while ignoring irrelevant objects. This isn&amp;rsquo;t just about finding things; it&amp;rsquo;s a foundation for more complex spatial logic, like mapping trajectories for the perfect grasp or understanding relational commands like &amp;ldquo;move the wrench to the toolbox.&amp;rdquo; The model can even reason through constraints, such as identifying every object small enough to fit inside a designated container.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/GoogleDeepMind/status/2044069878781390929"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;The model also tackles a chronic robotics challenge: knowing when a job is actually finished. Thanks to advanced multi-view reasoning, Gemini Robotics-ER 1.6 can fuse live video streams from multiple cameras—say, an overhead and a wrist-mounted one—to build a complete picture of the scene. This prevents a robot from getting stuck in a loop or failing a task simply because an object is temporarily occluded from one viewpoint.&lt;/p&gt;
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/news/2026-04-16-image002-2-97b1b484_hu_4bd6010cd50f256c.webp"
srcset="https://robohorizon.com/images/shared/news/2026-04-16-image002-2-97b1b484_hu_4bd6010cd50f256c.webp 480w, https://robohorizon.com/images/shared/news/2026-04-16-image002-2-97b1b484_hu_dd8536ae17daec37.webp 640w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="A diagram showing how Gemini Robotics-ER 1.6 processes multi-view camera streams to confirm task completion."
loading="lazy"
width="480"
height="270"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This update is more than an incremental bump in performance; it&amp;rsquo;s about building the foundational skills for autonomy. The ability to read analog gauges, fuse multiple camera feeds, and understand complex spatial relationships is what separates a factory arm from a useful field robot. According to DeepMind&amp;rsquo;s &lt;a href="https://deepmind.google/blog/gemini-robotics-er-1-6/"&gt;official announcement&lt;/a&gt;, this is their safest robotics model yet.&lt;/p&gt;
&lt;p&gt;Perhaps most critically, Gemini Robotics-ER 1.6 shows a &amp;ldquo;substantially improved capacity&amp;rdquo; to adhere to physical safety constraints. It understands instructions like avoiding liquids or not lifting items over 20kg. Compared to the baseline Gemini 3.0 Flash model, it&amp;rsquo;s reportedly 10% better at perceiving human injury risks in videos. This focus on safety and real-world reasoning is a crucial step toward robots that can operate reliably and safely in unpredictable human environments. The model is already available to developers through the Gemini API and Google AI Studio.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/GoogleDeepMind/status/2044069883479007559"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;</content:encoded><category>robot-brains</category><category>research</category><category>business</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-16-image001-1-97b1b484.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Scientists Use Wall Outlet Frequency to Wirelessly Flip Genes in Mice</title><link>https://robohorizon.com/en-us/news/2026/04/scientists-use-wall-outlet-frequency-to-wirelessly-flip-genes-in-mice/</link><pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/scientists-use-wall-outlet-frequency-to-wirelessly-flip-genes-in-mice/</guid><description>A Korean research team has developed a magnetogenetic switch using 60 Hz electromagnetic fields to turn genes on and off, reversing aging markers in mice.</description><content:encoded>&lt;p&gt;In a development that feels ripped from a science fiction manuscript, researchers in South Korea have devised a method to wirelessly activate specific genes in living mice using the same 60 Hz frequency as a standard wall outlet. The groundbreaking study, published in the journal &lt;em&gt;Cell&lt;/em&gt;, introduces a non-invasive &amp;ldquo;magnetogenetic&amp;rdquo; switch that could revolutionize how we study and potentially treat diseases.&lt;/p&gt;
&lt;p&gt;The team, reportedly from the &lt;strong&gt;Korea Advanced Institute of Science and Technology (KAIST)&lt;/strong&gt;, demonstrated the system’s power by performing some truly astonishing biological feats. They used their electromagnetic field setup to activate genes that trigger epigenetic reprogramming in aged mice, effectively extending their lifespan and reversing aging markers across multiple tissues. In another experiment, they could conditionally switch on mutant amyloid genes specifically in the brains of older mice, allowing for a cleaner model to study Alzheimer&amp;rsquo;s disease without the confounding variables of aging itself. All of this was achieved without drugs or implants, just a precisely controlled magnetic field.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/zanehkoch/status/2044454878727311744"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;The mechanism behind this biological remote control is both elegant and specific. The low-frequency electromagnetic field is picked up by a protein called &lt;strong&gt;Cytochrome b5 type B (CYB5B)&lt;/strong&gt;. This interaction triggers the opening of voltage-gated calcium channels, but not in a chaotic flood. Instead, it produces rhythmic pulses of calcium ions. This specific oscillation activates a transcription factor, &lt;strong&gt;SP7&lt;/strong&gt;, which then binds to a target DNA sequence and turns on the desired gene. The researchers found that simply flooding the cell with calcium using other methods had no effect; the rhythmic, patterned signal is the essential key.&lt;/p&gt;
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/news/2026-04-16-image002-2-c6b99f82_hu_d2837d8d1f59c933.webp"
srcset="https://robohorizon.com/images/shared/news/2026-04-16-image002-2-c6b99f82_hu_d2837d8d1f59c933.webp 480w, https://robohorizon.com/images/shared/news/2026-04-16-image002-2-c6b99f82_hu_d0fb7210590ee1e2.webp 640w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="A diagram showing how a 60 Hz EMF wave activates the Cyb5b protein, leading to calcium influx and gene activation by the Sp7 transcription factor."
loading="lazy"
width="480"
height="262"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This research represents a significant leap for remote biological control. While techniques like optogenetics (using light to control cells) are powerful, they often require invasive fiber optic implants to deliver light deep into tissues. &lt;strong&gt;Magnetogenetics&lt;/strong&gt;, by contrast, uses low-frequency fields that can penetrate the body harmlessly and non-invasively. This opens the door to therapies that could be switched on and off as needed with an external device.&lt;/p&gt;
&lt;p&gt;The potential applications are staggering, from activating regenerative processes to targeting cancer cells with pinpoint precision. While we&amp;rsquo;re still a long way from therapeutic applications in humans, this work provides a powerful new tool for researchers and a glimpse into a future where controlling our own biology could be as simple as flipping a switch. You can read the full paper in &lt;em&gt;Cell&lt;/em&gt;: &lt;a href="https://www.cell.com/cell/abstract/S0092-8674%2826%2900330-2"&gt;A wirelessly controlled magnetogenetic gene switch for non-invasive programming of longevity and disease&lt;/a&gt;.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>bionics</category><category>research</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-16-image001-1-c6b99f82.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Humanoid Robots Are Officially on the Clock at Electronics Factory</title><link>https://robohorizon.com/en-us/news/2026/04/humanoid-robots-are-officially-on-the-clock-at-electronics-factory/</link><pubDate>Wed, 15 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/humanoid-robots-are-officially-on-the-clock-at-electronics-factory/</guid><description>AGIBOT and Longcheer Technology have deployed G2 humanoid robots on a live consumer electronics assembly line, marking a pivotal shift from demo to reality.</description><content:encoded>&lt;p&gt;The long-promised, often-mocked future of humanoid robots clocking in for factory shifts is officially over its &amp;ldquo;soon&amp;rdquo; phase. Chinese robotics firm &lt;strong&gt;AGIBOT&lt;/strong&gt; and electronics manufacturing giant &lt;strong&gt;Longcheer Technology&lt;/strong&gt; have deployed multiple &lt;strong&gt;AGIBOT G2&lt;/strong&gt; humanoids on a live consumer electronics production line. 
This isn&amp;rsquo;t another slick demo video; this is a large-scale industrial implementation of what the companies are branding &amp;ldquo;Physical AI.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;The wheeled G2 humanoids are now working on Longcheer&amp;rsquo;s tablet production lines, tasked with precision loading and unloading at testing stations. According to reports, the integration took a mere four months, and the robots are already operating continuously, hitting all key performance targets. In a live-streamed event to prove the point, a G2 robot worked an 8-hour shift, processing 310 units per hour with a claimed task success rate above 99.5%.&lt;/p&gt;
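Those headline figures compound quickly over a shift; a quick sanity check (our own arithmetic based on the reported numbers, not AGIBOT&amp;rsquo;s data):

```python
# Back-of-the-envelope on the live-streamed G2 shift figures.
units_per_hour = 310
shift_hours = 8
success_rate = 0.995  # claimed task success rate above 99.5%

processed = units_per_hour * shift_hours         # 2480 units in one shift
expected_failures = processed * (1 - success_rate)
print(processed, round(expected_failures))       # 2480 12
```

Roughly 2,480 units handled per robot per shift, with on the order of a dozen faults at the claimed rate; at that volume, even fractions of a percentage point in success rate matter.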
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/humanoidsdaily/status/2043936572500828562"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;For those unfamiliar, &lt;strong&gt;Longcheer Technology&lt;/strong&gt; is a massive but low-profile Original Design Manufacturer (ODM) that builds devices for global brands like &lt;strong&gt;Samsung&lt;/strong&gt;, &lt;strong&gt;Xiaomi&lt;/strong&gt;, and &lt;strong&gt;Lenovo&lt;/strong&gt;. Partnering with a company of this scale gives AGIBOT immediate, real-world validation that most robotics startups can only dream of. The plan is to scale the deployment to 100 robots by the third quarter of 2026.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;AGIBOT G2&lt;/strong&gt; is an industrial-grade humanoid, featuring dual 7-DoF arms with force control for delicate tasks, 26 total degrees of freedom, and a wheeled base for navigating factory floors. It&amp;rsquo;s designed for 24/7 operation with hot-swappable batteries, a feature that&amp;rsquo;s absolutely critical for minimizing downtime in high-volume manufacturing.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This deployment represents a critical shift from choreographed lab demonstrations to the messy, high-stakes reality of a mass-production factory floor. While other companies are still showcasing prototypes, AGIBOT and Longcheer are generating actual production data and, presumably, economic value. This move puts immense pressure on other players in the burgeoning humanoid space. It proves that the technology, at least for specific manufacturing tasks, is ready for commercial primetime. The era of humanoid robotics just got a whole lot less theoretical.&lt;/p&gt;</content:encoded><category>humanoids</category><category>industrial</category><category>business</category><category>startups</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-15-image-4e050761.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Wendy Labs Open-Sources 'Physical AI OS' to Tame Edge Devices</title><link>https://robohorizon.com/en-us/news/2026/04/wendy-labs-open-sources-physical-ai-os-to-tame-edge-devices/</link><pubDate>Wed, 15 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/wendy-labs-open-sources-physical-ai-os-to-tame-edge-devices/</guid><description>Wendy Labs has released Wendy, a new open-source CLI aimed at simplifying the build, deploy, and debug cycle for AI applications on edge devices like NVIDIA Jetson and Raspberry Pi.</description><content:encoded>&lt;p&gt;&lt;strong&gt;Wendy Labs Inc.&lt;/strong&gt; has just open-sourced &lt;strong&gt;Wendy&lt;/strong&gt;, a command-line tool and development platform it&amp;rsquo;s billing as a &amp;ldquo;physical AI OS.&amp;rdquo; The stated goal is to wrestle the notoriously cantankerous process of developing for edge hardware—like the &lt;strong&gt;NVIDIA Jetson&lt;/strong&gt; and &lt;strong&gt;Raspberry Pi&lt;/strong&gt;—into something that 
actually resembles modern cloud development. In short, less time pulling your hair out over cross-compilation toolchains.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/signalgaining/status/2043920276929556653"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;Wendy provides a unified CLI to build applications written in Swift, Python, Rust, and TypeScript, automatically containerize them using Docker, and deploy them to ARM-based devices. Its main trick is abstracting away the architectural differences, letting developers code on their native macOS or Linux machine and push to a target with a simple command. The platform also boasts full LLDB remote debugging support, a feature that can feel like an absolute luxury in the embedded world. The project&amp;rsquo;s code is now available on the company&amp;rsquo;s &lt;a href="https://github.com/wendylabsinc"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;For developers building the next generation of robots and smart devices, the &amp;ldquo;give&amp;rdquo; here is a massive reduction in setup friction and a much smoother development loop. Instead of spending days configuring a finicky build environment, you can theoretically get a complex, multi-language AI application running on target hardware in minutes. The &amp;ldquo;take,&amp;rdquo; however, is that you&amp;rsquo;re adopting a new, relatively unproven abstraction layer from a nascent company. While it&amp;rsquo;s open-source, the ecosystem is, for now, a ghost town compared to more established solutions. Still, for rapid prototyping, Wendy offers a tantalizing promise: spend less time fighting your tools and more time actually building things.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>industrial</category><category>startups</category><category>open-source</category><category>research</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-15-image-606c8fba.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>MIT's Spaghetti-Thin Robot Muscles Lift 250x Their Own Weight</title><link>https://robohorizon.com/en-us/news/2026/04/mits-spaghetti-thin-robot-muscles-lift-250x-their-own-weight/</link><pubDate>Mon, 13 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/mits-spaghetti-thin-robot-muscles-lift-250x-their-own-weight/</guid><description>MIT Media Lab researcher Ozgun Kilic Afsar explains the science behind new electrically-driven artificial fibers that could revolutionize robotics by replacing bulky motors with lightweight, powerful muscle strands.</description><content:encoded>&lt;p&gt;Researchers at the &lt;strong&gt;MIT Media Lab&lt;/strong&gt; have developed a new class of artificial muscle fiber that makes traditional motors look like 
clumsy, prehistoric relics. In a recent interview, lead researcher &lt;strong&gt;Ozgun Kilic Afsar&lt;/strong&gt; detailed how these &amp;ldquo;electrofluidic fiber muscles&amp;rdquo; operate, showcasing a 16-gram muscle bundle lifting a 4-kilogram weight—more than 250 times its own mass. The breakthrough, published in &lt;em&gt;Science Robotics&lt;/em&gt;, ditches the need for bulky motors, noisy compressors, and external pumps, packing the entire actuation system into a silent, self-contained strand not much thicker than a toothpick.&lt;/p&gt;
&lt;div class="video-container youtube-facade"
data-youtube-src="https://www.youtube.com/embed/gOMCNOteIDc?autoplay=1"
role="button"
tabindex="0"
aria-label="Play video"&gt;&lt;img class="youtube-facade-thumbnail"
src="https://img.youtube.com/vi/gOMCNOteIDc/hqdefault.jpg"
srcset="https://img.youtube.com/vi/gOMCNOteIDc/mqdefault.jpg 320w,
https://img.youtube.com/vi/gOMCNOteIDc/hqdefault.jpg 480w,
https://img.youtube.com/vi/gOMCNOteIDc/sddefault.jpg 640w,
https://img.youtube.com/vi/gOMCNOteIDc/maxresdefault.jpg 1280w"
sizes="(max-width: 320px) 320px, (max-width: 480px) 480px, (max-width: 640px) 640px, 1280px"
alt="Video thumbnail"
loading="lazy"
decoding="async"&gt;&lt;button class="youtube-facade-play-icon" aria-label="Play video" type="button"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;script&gt;
(function() {
  var container = document.currentScript.previousElementSibling;
  if (!container || !container.classList.contains('youtube-facade')) return;
  function loadVideo() {
    var src = container.dataset.youtubeSrc;
    if (!src) return;
    var iframe = document.createElement('iframe');
    iframe.src = src;
    iframe.title = 'YouTube video player';
    iframe.frameBorder = '0';
    iframe.allow = 'accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share';
    iframe.referrerPolicy = 'strict-origin-when-cross-origin';
    iframe.allowFullscreen = true;
    container.innerHTML = '';
    container.classList.remove('youtube-facade');
    container.removeAttribute('role');
    container.removeAttribute('tabindex');
    container.removeAttribute('aria-label');
    container.appendChild(iframe);
  }
  container.addEventListener('click', loadVideo);
  container.addEventListener('keydown', function(e) {
    if (e.key === 'Enter' || e.key === ' ') {
      e.preventDefault();
      loadVideo();
    }
  });
})();
&lt;/script&gt;
&lt;p&gt;For decades, robotics has been shackled to the &amp;ldquo;titans&amp;rdquo; of actuation: electromagnetic motors. While powerful, they represent a fragile, single point of failure. As Afsar explains, if a motor or its gearbox fails, the robot&amp;rsquo;s entire joint is paralyzed. In contrast, these new fibers mimic the hierarchical and distributed nature of biological muscle. Like the fibers in your bicep, if a few strands fail, the whole system degrades gracefully rather than failing catastrophically. The secret sauce is the integration of miniaturized electrohydrodynamic (EHD) pumps directly into the fiber, which use an electric field to move fluid and generate pressure without any moving parts.&lt;/p&gt;
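The graceful-degradation argument can be made concrete with a toy load-sharing model. This is an illustrative assumption on our part, not a model from the paper: it simply treats a bundle as N identical fibers sharing load equally, so losing a few strands scales force down proportionally instead of to zero. The fiber count (100) and per-fiber force are made-up placeholders; only the 16-gram / 4-kilogram figures come from the article.

```python
# Toy load-sharing model of a fiber-muscle bundle (our illustrative
# assumption, not from the Science Robotics paper): each working fiber
# contributes equally, so failures degrade force proportionally.
G = 9.81  # gravitational acceleration, m/s^2

def bundle_force(n_fibers, force_per_fiber_n, failed=0):
    """Total force (in newtons) from a bundle with `failed` broken strands."""
    working = max(n_fibers - failed, 0)
    return working * force_per_fiber_n

# The article's demo: a 16 g bundle holds a 4 kg weight, i.e. 250x its mass.
payload_n = 4.0 * G            # ~39.2 N needed to hold 4 kg
mass_ratio = 4000 / 16         # = 250, matching the claimed ratio
per_fiber = payload_n / 100    # assume a 100-strand bundle (placeholder)

print(bundle_force(100, per_fiber))     # intact bundle: ~39.2 N
print(bundle_force(100, per_fiber, 5))  # 5 broken strands: ~95% of force remains
```

A single-motor joint, by contrast, behaves like a step function: one gearbox failure takes the output straight to zero, which is exactly the failure mode Afsar contrasts against.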
&lt;p&gt;We previously covered the initial announcement of this impressive technology, noting its potential for creating durable, even machine-washable, robotic textiles. You can get the backstory here:
&lt;a href="https://robohorizon.com/en-us/news/2026/04/this-machine-washable-muscle-fiber-can-lift-200x-its-own-weight/" hreflang="en-us"&gt;This Machine-Washable Muscle Fiber Can Lift 200x Its Own Weight&lt;/a&gt;. The recent interview with Afsar provides a much deeper dive into the mechanics and the philosophy behind moving away from rigid, joint-based actuation. &lt;a href="https://www.science.org/doi/10.1126/scirobotics.ady6438"&gt;Read the paper in Science Robotics&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This isn&amp;rsquo;t just about making stronger, quieter robots; it&amp;rsquo;s about fundamentally changing how they&amp;rsquo;re built. Instead of designing a rigid skeleton and then figuring out how to bolt on clunky motors, engineers can now weave power and movement directly into the robot&amp;rsquo;s structure. This opens the door to truly soft, compliant machines that are safer for human interaction, as well as more advanced prosthetics and wearable exoskeletons. Imagine combining this with other futuristic manufacturing techniques, like those being developed by &lt;strong&gt;Allonics&lt;/strong&gt; to weave complex robotic bodies:
&lt;a href="https://robohorizon.com/en-us/magazine/2026/03/allonics-72m-bet-to-weave-robot-bodies-like-muscle-tissue/" hreflang="en-us"&gt;Allonic&amp;#39;s $7.2M Bet to Weave Robot Bodies Like Muscle Tissue&lt;/a&gt;
. We are looking at a future where a robot&amp;rsquo;s body and its muscles are one and the same—a resilient, silent, and unnervingly lifelike architecture.&lt;/p&gt;</content:encoded><category>bionics</category><category>humanoids</category><category>research</category><category>business</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-13-pastedgraphic-1-ce9d4837.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Meet ToddlerBot: The $6,000 Open-Source Humanoid Aimed at Democratizing AI</title><link>https://robohorizon.com/en-us/news/2026/04/meet-toddlerbot-the-6000-open-source-humanoid-aimed-at-democratizing-ai/</link><pubDate>Sun, 12 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/meet-toddlerbot-the-6000-open-source-humanoid-aimed-at-democratizing-ai/</guid><description>A new open-source humanoid, ToddlerBot, is now available for approximately $6,000, making advanced robotics and AI research more accessible than ever.</description><content:encoded>&lt;p&gt;In a world where humanoid robots often carry price tags heftier than a luxury car, a new project is taking a decidedly different, and refreshingly affordable, approach. Meet &lt;strong&gt;ToddlerBot&lt;/strong&gt;, a low-cost, open-source humanoid platform designed to bring advanced AI and robotics research to the masses for a bill of materials totaling under $6,000. The project, led by &lt;strong&gt;Stanford University&lt;/strong&gt; Ph.D. student Haochen Shi, aims to democratize a field long dominated by well-funded corporate and academic labs.&lt;/p&gt;
&lt;p&gt;The core idea behind ToddlerBot is to provide a scalable, reproducible platform for data-driven research, particularly in &amp;ldquo;loco-manipulation&amp;rdquo;—the complex art of moving around and handling objects simultaneously. Standing at a compact 0.56 meters and weighing 3.4 kg, the robot is designed for safe operation in real-world environments. Its 30 degrees of freedom, entirely 3D-printable body, and use of off-the-shelf components make it accessible for labs and enthusiasts with basic technical skills. The complete open-source plans, from 3D models on MakerWorld to the Python-based control code, are available on GitHub: &lt;a href="https://github.com/hshi74/toddlerbot"&gt;ToddlerBot on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/HaochenShi74/status/1886599720279400732"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;The latest V2.0 release, available on MakerWorld, enhances the robot&amp;rsquo;s capabilities, which already include walking, crawling, and even performing push-ups. The platform is designed for machine learning compatibility from the ground up, featuring a high-fidelity digital twin for seamless sim-to-real policy transfer. This allows researchers to train AI models in simulation and deploy them on the physical robot with minimal friction.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;The six-figure cost of most research-grade humanoids creates a massive barrier to entry, stifling innovation. By slashing the price to around $6,000—with 90% of that cost being motors and computers—ToddlerBot opens the door for smaller universities, startups, and even ambitious hobbyists to contribute to the field. This isn&amp;rsquo;t just about making a cheaper robot; it&amp;rsquo;s about building a larger, more diverse community of researchers. An accessible platform like ToddlerBot could significantly accelerate progress in embodied AI, reinforcement learning, and physical human-robot interaction, proving that the future of robotics doesn&amp;rsquo;t have to come with a soul-crushing price tag.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>humanoids</category><category>research</category><category>open-source</category><category>education</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-12-image001-abedc83e.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Tesla FSD 'Supervised' Gets Dutch Approval, With Strings Attached</title><link>https://robohorizon.com/en-us/news/2026/04/tesla-fsd-supervised-gets-dutch-approval-with-strings-attached/</link><pubDate>Sat, 11 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/tesla-fsd-supervised-gets-dutch-approval-with-strings-attached/</guid><description>The Dutch vehicle authority (RDW) has granted provisional approval for Tesla's Full Self-Driving (Supervised), marking its first official entry into a European market.</description><content:encoded>&lt;p&gt;&lt;strong&gt;Tesla, Inc.&lt;/strong&gt; has finally broken through the European regulatory wall, securing its first-ever approval to launch its &lt;strong&gt;Full Self-Driving (Supervised)&lt;/strong&gt; software in the Netherlands. 
The announcement, made on April 10, 2026, confirms that Dutch Tesla owners will soon be able to use the advanced driver-assist system, a feature long available in North America. However, a closer look at the fine print reveals this is less of a robotaxi revolution and more of a heavily chaperoned debut.&lt;/p&gt;
&lt;p&gt;The Dutch vehicle authority, &lt;strong&gt;RDW (Rijksdienst voor het Wegverkeer)&lt;/strong&gt;, issued what it calls a &amp;ldquo;European type approval with provisional validity in the Netherlands&amp;rdquo; after an exhaustive 18-month evaluation. The RDW was quick to pour cold water on any notions of true autonomy, stating unequivocally that a vehicle with FSD Supervised is &lt;em&gt;not&lt;/em&gt; self-driving. It is legally classified as a Level 2 driver-assist system, meaning the driver remains fully responsible and must be prepared to take control at a moment&amp;rsquo;s notice.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/teslaeurope/status/2042709396111724639"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;In its announcement, Tesla claimed, &amp;ldquo;No other vehicle can do this.&amp;rdquo; This statement is, to put it mildly, marketing bravado. The RDW itself noted that other manufacturers, such as &lt;strong&gt;BMW&lt;/strong&gt; and &lt;strong&gt;Ford&lt;/strong&gt;, already have approvals for similar hands-off driving systems in Europe. The approval places FSD Supervised under the same regulatory framework as these competitors, requiring constant driver monitoring via in-car sensors to ensure attentiveness.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This approval is a significant, if incremental, victory for Tesla. It establishes a critical regulatory foothold in the notoriously cautious European market, which operates on a &amp;ldquo;type approval&amp;rdquo; basis, unlike the &amp;ldquo;self-certification&amp;rdquo; model in the United States. While the Dutch approval doesn&amp;rsquo;t automatically apply to the entire EU, it creates a pathway for other member states to recognize the certification, with a broader rollout potentially happening by summer 2026.&lt;/p&gt;
&lt;p&gt;Ultimately, the Netherlands is now the official proving ground for FSD in Europe. The &amp;ldquo;provisional&amp;rdquo; nature of the approval means regulators will be watching closely. For Tesla, this is a chance to gather crucial data and prove its system can handle Europe&amp;rsquo;s complex roads. For drivers, it&amp;rsquo;s a chance to experience a more advanced driver-assist system, as long as they remember they&amp;rsquo;re still the ones in charge—no reading the newspaper behind the wheel, as the RDW explicitly warned. The future of mobility may have arrived in the Netherlands, but it&amp;rsquo;s clear it will be supervised for the foreseeable future.&lt;/p&gt;</content:encoded><category>autonomous</category><category>business</category><category>policy</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-11-image-79c67bd2.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Unitree R1 Humanoid Hits AliExpress With a Shocking $4,900 Price Tag</title><link>https://robohorizon.com/en-us/news/2026/04/unitree-r1-humanoid-hits-aliexpress-with-a-shocking-4900-price-tag/</link><pubDate>Sat, 11 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/unitree-r1-humanoid-hits-aliexpress-with-a-shocking-4900-price-tag/</guid><description>Unitree Robotics is dropping its R1 humanoid on AliExpress for under $5,000, opening up humanoid robotics to the masses with a global launch next week.</description><content:encoded>&lt;p&gt;Chinese robotics firm &lt;strong&gt;Unitree Robotics&lt;/strong&gt; is about to make owning a humanoid robot feel less like science fiction and more like an impulse purchase. The company will launch its &lt;strong&gt;R1 humanoid robot&lt;/strong&gt; on Alibaba&amp;rsquo;s global marketplace, AliExpress, next week with a starting price of just &lt;strong&gt;$4,900&lt;/strong&gt;. 
The international debut is set to cover major markets including North America, Europe, Japan, and Singapore, effectively dropping a budget-friendly, cartwheeling robot onto the world&amp;rsquo;s doorstep.&lt;/p&gt;
&lt;p&gt;The R1, marketed as being &amp;ldquo;born for sport,&amp;rdquo; stands 123cm tall, weighs around 25-29kg, and can perform impressive athletic feats like running downhill and, yes, doing cartwheels. This isn&amp;rsquo;t the company&amp;rsquo;s first foray into affordable humanoids; it follows the recent announcement of the more capable, but significantly more expensive,
&lt;a href="https://robohorizon.com/en-us/news/2026/04/unitree-g1-humanoid-drops-for-16000-upending-the-robotics-market/" hreflang="en-us"&gt;Unitree G1 Humanoid Drops for $16,000, Upending the Robotics Market&lt;/a&gt;. The R1 is clearly aimed at a different market: researchers, developers, and hobbyists who were previously priced out of the game, with a price tag that&amp;rsquo;s a fraction of its $16,000 sibling.&lt;/p&gt;
&lt;p&gt;The base R1 AIR model starts at $4,900, with a more advanced standard R1 model priced at $5,900. For that, you get a robot with 20-26 degrees of freedom, an 8-core CPU, a built-in multimodal AI for voice and image processing, and about an hour of runtime on a hot-swappable battery. It’s a spec sheet designed for accessibility, not for heavy industrial lifting.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This launch isn&amp;rsquo;t just about a cheap robot; it&amp;rsquo;s a strategic bombshell in the global robotics race. By making a functional humanoid available on a mass-market platform like AliExpress, &lt;strong&gt;Unitree&lt;/strong&gt; is democratizing access to hardware that, in the U.S., can cost upwards of $300,000. This move is powered by China&amp;rsquo;s highly localized supply chain, which allows for aggressive pricing that Western competitors can&amp;rsquo;t currently match.&lt;/p&gt;
&lt;p&gt;The numbers tell the story. In 2025, Unitree shipped over 5,500 humanoid robots—mostly to universities and researchers—while competitors like Tesla and Figure AI each delivered around 150 units. By putting the R1 on a global e-commerce site, Unitree isn&amp;rsquo;t just selling a product; it&amp;rsquo;s aiming to build a massive, worldwide developer ecosystem on its platform before rivals have even left the lab. The age of the affordable, mail-order humanoid has officially begun.&lt;/p&gt;</content:encoded><category>humanoids</category><category>service</category><category>business</category><category>startups</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-11-image-774bafa3.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Verne, Backed by Rimac and Uber, Launches Europe's First Robotaxi Service</title><link>https://robohorizon.com/en-us/news/2026/04/verne-backed-by-rimac-and-uber-launches-europes-first-robotaxi-service/</link><pubDate>Sat, 11 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/verne-backed-by-rimac-and-uber-launches-europes-first-robotaxi-service/</guid><description>Verne, in a major partnership with Pony.ai and Uber, has launched a commercial robotaxi service in Zagreb, Croatia, letting the public book autonomous rides now.</description><content:encoded>&lt;p&gt;While the tech world has been fixated on the robotaxi turf wars in San Francisco and Phoenix, the first commercial autonomous ride-hailing service in Europe just went live in a city you probably weren&amp;rsquo;t expecting: Zagreb, Croatia. &lt;strong&gt;Verne&lt;/strong&gt;, an autonomous mobility company spun out of electric hypercar maker &lt;strong&gt;Rimac Group&lt;/strong&gt;, officially launched its service on April 8, 2026.&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t some closed-course demo. The public can book and pay for rides through the Verne app, with the service soon to be integrated into the &lt;strong&gt;Uber&lt;/strong&gt; app following a strategic partnership. The operation is a three-way powerhouse collaboration: &lt;strong&gt;Pony.ai&lt;/strong&gt;, a global leader in autonomous tech, provides the brains; Verne owns and operates the fleet; and Uber provides its massive ride-hailing network. For now, the vehicles are &lt;strong&gt;Arcfox Alpha T5&lt;/strong&gt; electric cars equipped with Pony.ai&amp;rsquo;s seventh-generation autonomous driving system. And yes, for this &amp;ldquo;early phase,&amp;rdquo; there&amp;rsquo;s still a human safety operator behind the wheel, just in case the AI gets a sudden craving for burek.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This launch represents a major milestone for autonomous mobility in Europe, moving the technology from years of testing into a tangible commercial service. The partnership model is particularly noteworthy; instead of a go-it-alone approach, Verne has combined best-in-class technology from Pony.ai and a world-class user platform from Uber to accelerate its market entry.&lt;/p&gt;
&lt;p&gt;It also marks a strategic pivot. Verne had previously planned to launch with its own purpose-built vehicle using technology from Mobileye. By deploying with an existing vehicle and a new partner, the company gets a critical first-mover advantage in the European market. With plans to expand to 11 more cities across the EU, UK, and Middle East, Verne&amp;rsquo;s quiet launch in Zagreb could be the starting gun for the robotaxi race on a whole new continent.&lt;/p&gt;</content:encoded><category>autonomous</category><category>service</category><category>startups</category><category>business</category><category>policy</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-11-image-e4182dbf.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Tesla's Optimus Knee Patent Is More Human Than You Think</title><link>https://robohorizon.com/en-us/news/2026/04/teslas-optimus-knee-patent-is-more-human-than-you-think/</link><pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/teslas-optimus-knee-patent-is-more-human-than-you-think/</guid><description>Tesla's patent for the Optimus knee reveals a design that mimics human anatomy to slash costs and boost efficiency for mass production.</description><content:encoded>&lt;p&gt;On April 9, 2026, the US Patent and Trademark Office published a &lt;strong&gt;Tesla, Inc.&lt;/strong&gt; filing that contained no neural networks, no world models, and zero mention of AI. Instead, patent US20260097493A1 describes, in painstaking detail, a knee. Filed on the same day as Tesla’s 2022 AI Day, the patent reveals the bio-inspired mechanics behind the &lt;strong&gt;Optimus&lt;/strong&gt; humanoid. Just days before the publication, CEO Elon Musk posted on X that &amp;ldquo;Optimus 3 is walking around, but needs some finishing touches.&amp;rdquo; This is, almost certainly, the knee it’s walking on.&lt;/p&gt;
&lt;p&gt;The patent’s most revealing figure isn’t a complex CAD drawing but a simple, three-panel story. It starts with a diagram of a human knee labeled &amp;ldquo;Biological Principle,&amp;rdquo; moves to a stick-figure &amp;ldquo;Mechanical Analogue,&amp;rdquo; and ends with the final &amp;ldquo;Design.&amp;rdquo; The document explicitly maps the quadriceps, patella, and ligaments to a four-bar linkage. This isn&amp;rsquo;t just a robot part; it&amp;rsquo;s a direct mechanical translation of millions of years of evolution. The design provides a human-equivalent 150 degrees of rotation from a single, small linear actuator.&lt;/p&gt;
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/news/2026-04-10-image-2-e7619b1f_hu_758b4e514b552574.webp"
srcset="https://robohorizon.com/images/shared/news/2026-04-10-image-2-e7619b1f_hu_758b4e514b552574.webp 480w, https://robohorizon.com/images/shared/news/2026-04-10-image-2-e7619b1f_hu_9d5164786135c42d.webp 640w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="Patent figure showing the transition from human knee anatomy to a mechanical linkage."
loading="lazy"
width="480"
height="340"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;p&gt;The mechanism, a modified inverse Hoecken&amp;rsquo;s linkage, is an elegant solution to a complex problem. The human knee is efficient because it doesn&amp;rsquo;t pivot on a single point; the leverage changes as it bends, maximizing torque when needed most. Tesla’s four-bar system replicates this variable mechanical advantage, allowing a small motor to produce a powerful and wide-ranging motion. The patent shows how simulations were used to find the optimal link lengths to minimize power consumption while hitting torque and speed targets.&lt;/p&gt;
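For the curious, the "variable mechanical advantage" idea can be sketched numerically with textbook four-bar kinematics. The snippet below solves the classic Freudenstein position equations for a generic planar four-bar and estimates the instantaneous torque ratio by finite differences. To be clear: the link lengths are made-up placeholders for illustration, not the geometry disclosed in Tesla's patent, and this is the standard textbook formulation rather than the patent's modified Hoecken variant.

```python
# Position analysis of a generic planar four-bar linkage (textbook
# Freudenstein form). Link lengths are illustrative placeholders, NOT the
# geometry disclosed in US20260097493A1.
import math

def output_angle(theta2, a, b, c, d):
    """Output-link angle theta4 for input angle theta2 (radians).
    a = input link, b = coupler, c = output link, d = ground link."""
    k1, k2 = d / a, d / c
    k3 = (a*a - b*b + c*c + d*d) / (2*a*c)
    A = math.cos(theta2) - k1 - k2 * math.cos(theta2) + k3
    B = -2 * math.sin(theta2)
    C = k1 - (k2 + 1) * math.cos(theta2) + k3
    # Pick one of the two assembly branches of the quadratic in tan(theta4/2).
    return 2 * math.atan2(-B - math.sqrt(B*B - 4*A*C), 2*A)

def torque_ratio(theta2, a, b, c, d, h=1e-6):
    """Ideal torque amplification tau_out/tau_in = d(theta2)/d(theta4),
    estimated with a central finite difference."""
    dtheta4 = (output_angle(theta2 + h, a, b, c, d)
               - output_angle(theta2 - h, a, b, c, d)) / (2 * h)
    return 1.0 / dtheta4

# A Grashof crank-rocker with made-up lengths: the leverage changes with
# pose, which is exactly the property the knee design exploits.
links = (2.0, 7.0, 9.0, 6.0)
for deg in (40, 90, 140):
    print(deg, round(torque_ratio(math.radians(deg), *links), 3))
```

Because the torque ratio depends on the input angle, a designer can place the "strong" region of the linkage where the gait needs peak torque, letting a small actuator do the work of a larger direct-drive motor.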
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/niccruzpatane/status/2042322142910693556"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This knee is the key to making Optimus affordable. By using one small actuator instead of a more complex and power-hungry assembly, Tesla dramatically cuts the cost, weight, and complexity of each leg. This is critical for hitting Musk&amp;rsquo;s ambitious target price of $20,000–$30,000 per robot. These savings are essential for the planned production of one million units per year at the Fremont factory, which is already clearing space by ending production of the Model S and X.&lt;/p&gt;
&lt;p&gt;While the design is clever, the underlying geometry isn&amp;rsquo;t exclusive to Tesla. Analysts have noted that the next-generation IRON humanoid from &lt;strong&gt;Xpeng&lt;/strong&gt; appears to use a remarkably similar linkage. With Tesla&amp;rsquo;s design public since its 2022 AI Day, it seems the industry is converging on the most efficient designs. Evolution had millions of years to work out the geometry. Tesla has to match it on a budget.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/seti_park/status/2042433754057347083"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/rai_inst/status/2041520357127958552"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;While most industrial robots are notoriously stiff and clumsy due to high gear ratios, AthenaZero was built to be the opposite. The 5'3&amp;quot; tall robot features two 7-degree-of-freedom (DoF) arms that prioritize low inertia and high acceleration. The secret sauce is in its quasi-direct drive actuators, which allow the robot to be &amp;ldquo;force transparent.&amp;rdquo; This means it can instantly switch from applying high force for a heavy task to having a gentle, compliant touch for a delicate one, a feat most traditional robots can&amp;rsquo;t manage without risking damage to themselves or their surroundings.&lt;/p&gt;
&lt;p&gt;The goal isn&amp;rsquo;t just to bolt two arms onto a torso; it&amp;rsquo;s to create a platform that can learn to master complex, coordinated movements. Bimanual manipulation is crucial for automating tasks that are currently impossible for single-armed robots, such as assembling intricate products, handling large or flexible objects, or basically anything that doesn&amp;rsquo;t involve picking up one specific thing and putting it in one specific place forever.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;For decades, automation has been defined by powerful but unintelligent arms performing single, repetitive motions. The &lt;strong&gt;Robotics and AI Institute&lt;/strong&gt; is tackling the problem from both ends: building hardware like AthenaZero that is physically capable of dynamic interaction, and developing the AI and reinforcement learning models needed to control it. By creating a system designed from the ground up for learning-based control, RAI is taking a serious step toward a &amp;ldquo;general-purpose manipulator&amp;rdquo; with human-like capabilities. This is the kind of fundamental research that could eventually allow robots to move out of the cage and into unpredictable, real-world environments.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>humanoids</category><category>research</category><category>startups</category><category>business</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-09-image-0bff9778.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Unitree G1 Humanoid Drops for $16,000, Upending the Robotics Market</title><link>https://robohorizon.com/en-us/news/2026/04/unitree-g1-humanoid-drops-for-16000-upending-the-robotics-market/</link><pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/unitree-g1-humanoid-drops-for-16000-upending-the-robotics-market/</guid><description>China's Unitree Robotics has launched the G1, a humanoid robot with a startlingly low price tag of $16,000, putting immense pressure on competitors.</description><content:encoded>&lt;p&gt;In a move that feels less like a product launch and more like firing a cannonball at the entire robotics industry, &lt;strong&gt;Unitree Robotics&lt;/strong&gt; has unleashed its &lt;strong&gt;G1 humanoid robot&lt;/strong&gt; with a base price of just 
$16,000. That’s not a typo. For less than the price of a mid-range sedan, you can now own a bipedal robot that can walk at 2 meters per second (about 4.5 MPH) and, perplexingly, fold itself up for easy storage. The robot revolution will not be televised; it will be delivered in a surprisingly compact box.&lt;/p&gt;
&lt;p&gt;The G1 isn&amp;rsquo;t a towering metal giant; it stands at a modest 127 cm (about 4'2&amp;quot;) and weighs around 35 kg (77 lbs). It’s more child-sized than its larger, $90,000 sibling, the H1. But don&amp;rsquo;t let its smaller stature fool you. The base model features 23 degrees of freedom, 3D LiDAR and depth cameras for vision, and a battery life of about two hours. Unitree is also offering an &amp;ldquo;EDU&amp;rdquo; version with up to 43 degrees of freedom, more powerful joints, and an optional NVIDIA Jetson Orin module for developers who want to do more than just impress their friends.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;The G1&amp;rsquo;s price point is a seismic shock to the nascent humanoid market. While companies like &lt;strong&gt;Tesla&lt;/strong&gt; are targeting a sub-$30,000 price for Optimus and &lt;strong&gt;Agility Robotics&amp;rsquo;&lt;/strong&gt; Digit costs upwards of $250,000, Unitree has blown past speculation and delivered a machine at a fraction of the cost. This isn&amp;rsquo;t just about making robots cheaper; it&amp;rsquo;s about making them accessible.&lt;/p&gt;
&lt;p&gt;By pricing the G1 this aggressively, Unitree is positioning it as a go-to platform for research labs, universities, and smaller companies that were previously priced out of advanced robotics. While the G1 may not have the brute strength or polished AI of its more expensive rivals from &lt;strong&gt;Figure AI&lt;/strong&gt; or &lt;strong&gt;Boston Dynamics&lt;/strong&gt; just yet, it provides a &amp;ldquo;good enough&amp;rdquo; hardware platform for a massive community of developers to start building skills and applications. This could massively accelerate software development and create a robust ecosystem around Unitree&amp;rsquo;s platform, potentially giving it an insurmountable lead before the competition has even named a price. The age of the hobbyist humanoid developer might just be upon us.&lt;/p&gt;</content:encoded><category>humanoids</category><category>industrial</category><category>business</category><category>startups</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-09-image-6dcd682a.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Anthropic’s Glasswing: The Plan to Stop Skynet Before It Starts</title><link>https://robohorizon.com/en-us/magazine/2026/04/anthropics-glasswing-the-plan-to-stop-skynet-before-it-starts/</link><pubDate>Wed, 08 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/magazine/2026/04/anthropics-glasswing-the-plan-to-stop-skynet-before-it-starts/</guid><description>Anthropic's Project Glasswing is using an unreleased AI to secure critical software. We dive into whether this AI-powered cybersecurity is the seatbelt we need for the AGI rocket ship, or just wishful thinking.</description><content:encoded>&lt;p&gt;There’s a certain flavor of dread brewing in the tech world, a low hum of anxiety that says 2026 is the year the machines wake up. 
It&amp;rsquo;s the year Artificial General Intelligence (AGI) is whispered to arrive, not as a friendly chatbot, but as a force capable of out-thinking, out-maneuvering, and out-performing its creators. So, when &lt;strong&gt;Anthropic&lt;/strong&gt;, the AI lab that styles itself as the safety-conscious one, announces a new initiative called &lt;strong&gt;Project Glasswing&lt;/strong&gt;, you might expect a grand plan to install a big, red &amp;ldquo;off&amp;rdquo; switch for the coming gods.&lt;/p&gt;
&lt;p&gt;Instead, we get something that sounds profoundly… boring. Project Glasswing&amp;rsquo;s stated goal is &amp;ldquo;securing critical software for the AI era.&amp;rdquo; It sounds less like a Skynet prevention program and more like an overdue IT audit. But don&amp;rsquo;t let the corporate-speak fool you. This isn&amp;rsquo;t about patching your web browser; it&amp;rsquo;s about building a cage for a beast that hasn&amp;rsquo;t been born yet, and using another, slightly smaller beast to do it.&lt;/p&gt;
&lt;h3 id="the-ai-to-police-all-other-ais"&gt;The AI to Police All Other AIs&lt;/h3&gt; &lt;p&gt;At its core, &lt;strong&gt;Project Glasswing&lt;/strong&gt; is a massive, preemptive bug hunt. Anthropic has developed a frontier AI model called &lt;strong&gt;Mythos Preview&lt;/strong&gt;, which is apparently so adept at finding and exploiting software vulnerabilities that the company deems it too dangerous for public release. So, in a move that’s either brilliantly proactive or terrifyingly ironic, they’ve unleashed it for defensive purposes.&lt;/p&gt;
&lt;p&gt;In partnership with a who&amp;rsquo;s who of Silicon Valley—including &lt;strong&gt;Apple&lt;/strong&gt;, &lt;strong&gt;Google&lt;/strong&gt;, &lt;strong&gt;Microsoft&lt;/strong&gt;, and &lt;strong&gt;NVIDIA&lt;/strong&gt;—Anthropic is letting Mythos loose on the world&amp;rsquo;s most critical software systems. The model has already found thousands of high-severity vulnerabilities, some of which have lurked in major operating systems and browsers for decades, surviving years of human review.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,&amp;rdquo; Anthropic states. &amp;ldquo;The fallout—for economies, public safety, and national security—could be severe.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is the AI arms race in a nutshell: building a weapon so powerful you have to immediately build a defense against it, and that defense is just a slightly friendlier version of the same weapon. It’s a high-stakes bet that you can give the good guys a head start before the same technology inevitably leaks into the wild.&lt;/p&gt;
&lt;h3 id="from-digital-brains-to-physical-bodies"&gt;From Digital Brains to Physical Bodies&lt;/h3&gt; &lt;p&gt;This all feels abstract until you connect it to the other half of the AGI equation: the body. The existential fear isn&amp;rsquo;t just about a super-smart piece of code; it&amp;rsquo;s about that code inhabiting a physical form. We&amp;rsquo;re not talking about a smart speaker. We’re talking about &lt;strong&gt;Embodied AI&lt;/strong&gt;—humanoid robots that can walk, manipulate objects, and operate in the real, messy world.&lt;/p&gt;
&lt;p&gt;The term for an intelligence that surpasses humans in all domains, including physical tasks, isn&amp;rsquo;t AGI; it&amp;rsquo;s Artificial Superintelligence (ASI). AGI is the milestone where a machine matches human intellect; ASI is the hypothetical point where it leaves us in the cognitive dust. Many experts believe the jump from AGI to ASI could be terrifyingly short, a rapid, recursive self-improvement cycle known as an &amp;ldquo;intelligence explosion.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Now, imagine an ASI running on a global network of humanoid robots. That&amp;rsquo;s the scenario that keeps people up at night. While companies like Boston Dynamics and Figure are perfecting the hardware, the software—the world model, the reasoning engine—is what labs like Anthropic are building. Project Glasswing is an admission that the software we&amp;rsquo;re building our entire digital and future physical world on is fundamentally insecure. It’s an attempt to bolt down the hatches before the hurricane makes landfall.&lt;/p&gt;
&lt;h3 id="so-are-we-ready-for-2026"&gt;So, Are We Ready for 2026?&lt;/h3&gt; &lt;p&gt;The prediction that AGI will arrive by 2026 is a hot topic, with figures like Elon Musk championing the short timeline, while others place it closer to the end of the decade. Regardless of the exact date, the consensus is that it&amp;rsquo;s no longer a question of &amp;ldquo;if,&amp;rdquo; but &amp;ldquo;when.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Initiatives like Project Glasswing are a sobering reality check. They represent the most serious attempts yet to grapple with the control problem: how do you ensure a system vastly more intelligent than you remains aligned with your values and commands? Anthropic&amp;rsquo;s approach is to use AI&amp;rsquo;s own power to find the cracks in our digital foundations and seal them. It’s a race to harden the infrastructure of society before an unaligned AGI can find an exploit.&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t the glorious, philosophical debate about AI consciousness we see in movies. It&amp;rsquo;s the gritty, unglamorous work of cybersecurity, scaled to a planetary level. It&amp;rsquo;s about ensuring the operating system of the future doesn&amp;rsquo;t have a backdoor that could be exploited by an intelligence we can&amp;rsquo;t comprehend. Project Glasswing is scary not because of what it is, but because of what it says about what&amp;rsquo;s coming. It’s the sound of the world’s smartest people quietly and urgently trying to lock the doors. We can only hope they finish before whatever is on the other side learns how to pick them.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>research</category><category>policy</category><media:content url="https://robohorizon.com/images/shared/magazine/2026-04-08-image-3d09214e.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Figure AI Now Builds a Humanoid Robot Every 90 Minutes</title><link>https://robohorizon.com/en-us/news/2026/04/figure-ai-now-builds-a-humanoid-robot-every-90-minutes/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/figure-ai-now-builds-a-humanoid-robot-every-90-minutes/</guid><description>Figure AI has ramped up production to assemble a new humanoid robot every 90 minutes, aiming for a million units a year this decade. All powered by AI.</description><content:encoded>&lt;p&gt;In the race to build a robotic workforce, &lt;strong&gt;Figure AI, Inc.&lt;/strong&gt; just strapped on a jetpack. In a candid walkthrough on the &lt;em&gt;Shawn Ryan Show&lt;/em&gt;, the company revealed it can now assemble a complete humanoid robot in about &lt;strong&gt;90 minutes&lt;/strong&gt;. 
This isn&amp;rsquo;t some far-off projection; it&amp;rsquo;s their current capability when the line is running, with ambitions to scale to a staggering one million units per year within the decade. Let that sink in. We&amp;rsquo;ve officially left the &amp;ldquo;one-off science project&amp;rdquo; phase of humanoids and entered the era of the assembly line.&lt;/p&gt;
&lt;div class="video-container youtube-facade"
data-youtube-src="https://www.youtube.com/embed/HWq9cFhTvvQ?autoplay=1"
role="button"
tabindex="0"
aria-label="Play video"&gt;&lt;img class="youtube-facade-thumbnail"
src="https://img.youtube.com/vi/HWq9cFhTvvQ/hqdefault.jpg"
srcset="https://img.youtube.com/vi/HWq9cFhTvvQ/mqdefault.jpg 320w,
https://img.youtube.com/vi/HWq9cFhTvvQ/hqdefault.jpg 480w,
https://img.youtube.com/vi/HWq9cFhTvvQ/sddefault.jpg 640w,
https://img.youtube.com/vi/HWq9cFhTvvQ/maxresdefault.jpg 1280w"
sizes="(max-width: 320px) 320px, (max-width: 480px) 480px, (max-width: 640px) 640px, 1280px"
alt="Video thumbnail"
loading="lazy"
decoding="async"&gt;&lt;button class="youtube-facade-play-icon" aria-label="Play video" type="button"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;script&gt;
(function() {
var container = document.currentScript.previousElementSibling;
if (!container || !container.classList.contains('youtube-facade')) return;
function loadVideo() {
var src = container.dataset.youtubeSrc;
if (!src) return;
var iframe = document.createElement('iframe');
iframe.src = src;
iframe.title = 'YouTube video player';
iframe.frameBorder = '0';
iframe.allow = 'accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share';
iframe.referrerPolicy = 'strict-origin-when-cross-origin';
iframe.allowFullscreen = true;
container.innerHTML = '';
container.classList.remove('youtube-facade');
container.removeAttribute('role');
container.removeAttribute('tabindex');
container.removeAttribute('aria-label');
container.appendChild(iframe);
}
container.addEventListener('click', loadVideo);
container.addEventListener('keydown', function(e) {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault();
loadVideo();
}
});
})();
&lt;/script&gt;
&lt;p&gt;The robot at the center of this manufacturing blitz stands 5'6&amp;quot; tall, weighs around 135 pounds, and runs for four to five hours on a single charge. When it&amp;rsquo;s out of juice, it recharges in about an hour by simply standing on an inductive charging pad, pulling in about two kilowatts of power wirelessly through its feet. Every movement, from walking and balancing to complex manipulation, is driven entirely by &lt;strong&gt;Figure&amp;rsquo;s Helix neural network&lt;/strong&gt;; there&amp;rsquo;s no traditional, hand-written code for its actions. When asked about durability, a Figure representative noted with admirable frankness that after a fall, &amp;ldquo;Sometimes we break necks, sometimes it&amp;rsquo;s fine.&amp;rdquo;&lt;/p&gt;
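&lt;p&gt;Those charging figures imply a simple energy budget: an hour on a ~2 kW pad suggests roughly 2 kWh of usable capacity, which over four to five hours of runtime works out to a few hundred watts of average draw. A back-of-envelope sketch, ignoring charging losses and treating the quoted numbers as nominal:&lt;/p&gt;

```python
# Rough energy budget implied by Figure's stated numbers (illustrative only):
# ~2 kW inductive charging, ~1 hour to full, 4-5 hours of runtime.
charge_power_kw = 2.0
charge_hours = 1.0

# Implied usable pack capacity, assuming lossless charging
pack_kwh = charge_power_kw * charge_hours           # 2.0 kWh

# Implied average power draw over the quoted runtime range
runtime_hours = (4.0, 5.0)
avg_draw_watts = tuple(pack_kwh * 1000 / h for h in runtime_hours)
print(pack_kwh, avg_draw_watts)                     # 2.0 (500.0, 400.0)
```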
&lt;p&gt;This production horsepower isn&amp;rsquo;t just for show. &lt;strong&gt;Figure AI&lt;/strong&gt; already has commercial agreements with heavyweights like &lt;strong&gt;BMW&lt;/strong&gt; for automotive manufacturing and &lt;strong&gt;Brookfield&lt;/strong&gt; for logistics and real estate applications. The company also teased two more major customer announcements coming within the next 60 days. The robots feature fifth-generation hands with embedded cameras and tactile sensors, a soft, foam-wrapped body for safety, and removable &amp;ldquo;clothes&amp;rdquo; that don&amp;rsquo;t require tools.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;The biggest bottleneck in robotics has never been just the robot; it&amp;rsquo;s the factory that builds the robot. While competitors focus on demos, Figure is focused on scaling production. A 90-minute build time per unit fundamentally changes the economics and accessibility of general-purpose robots. It signals a strategic pivot from crafting individual, high-cost prototypes to mass-producing a standardized platform. This approach, combined with an AI-first control system that learns instead of being explicitly programmed, suggests Figure isn&amp;rsquo;t just trying to build a better robot—it&amp;rsquo;s trying to build the &lt;strong&gt;Ford Model T&lt;/strong&gt; of the humanoid world. The race is no longer just about who has the most agile bot, but who can build and deploy them by the thousands.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>humanoids</category><category>business</category><category>startups</category><category>research</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-07-image001-9e839574.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Generalist's GEN-1 Brain Hits 99% Success, 3x Speed</title><link>https://robohorizon.com/en-us/magazine/2026/04/generalists-gen-1-brain-hits-99-success-3x-speed/</link><pubDate>Sat, 04 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/magazine/2026/04/generalists-gen-1-brain-hits-99-success-3x-speed/</guid><description>Generalist's new GEN-1 model for robots achieves 99% reliability and 3x speeds, showing emergent 'intelligent improvisation' that may finally unlock commercial viability.</description><content:encoded>&lt;p&gt;Let&amp;rsquo;s be honest, most robot demos are a carefully choreographed ballet of disappointment, set to the tune of slow, clumsy movements that make you wonder if the heat 
death of the universe will arrive before the task is complete. But every so often, something cuts through the noise. Today, that something is &lt;strong&gt;Generalist&amp;rsquo;s&lt;/strong&gt; new AI model, &lt;strong&gt;GEN-1&lt;/strong&gt;. The company is making some audacious claims: a general-purpose AI brain for robots that doesn&amp;rsquo;t just work, it excels.&lt;/p&gt;
&lt;p&gt;Generalist is touting GEN-1 as the first model to truly &amp;ldquo;master&amp;rdquo; simple physical tasks, and they&amp;rsquo;re bringing receipts. We&amp;rsquo;re talking average success rates of 99% on tasks where its predecessor, GEN-0, topped out at a B-minus grade of 64%. It&amp;rsquo;s also completing tasks up to three times faster than the prior state-of-the-art and, most critically, it can learn a new task with only about an hour of robot-specific data. This isn&amp;rsquo;t just an incremental update; it&amp;rsquo;s a potential phase shift toward robots that are actually, finally, commercially viable.&lt;/p&gt;
&lt;h3 id="from-scaling-laws-to-physical-mastery"&gt;From Scaling Laws to Physical Mastery&lt;/h3&gt; &lt;p&gt;Just five months ago, Generalist introduced &lt;strong&gt;GEN-0&lt;/strong&gt;, a model that provided the first real evidence that the scaling laws underpinning the meteoric rise of LLMs like GPT could also apply to robotics. More data and more compute led to predictably better, more generalized performance. It was a crucial academic point, but GEN-0 wasn&amp;rsquo;t ready for prime time.&lt;/p&gt;
&lt;p&gt;GEN-1 is the result of cranking those dials way up. It&amp;rsquo;s scaled on a much larger dataset—now over half a million hours of high-fidelity physical interaction data—and accelerated by new algorithmic advances. The secret sauce, however, is the data source itself. Instead of relying solely on expensive and difficult-to-scale teleoperation datasets, the foundation of GEN-1 is built on data from low-cost wearable devices worn by humans. This provides a rich pre-training corpus of real-world physics and intuitive micro-corrections that simulation or teleop often miss.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;We believe GEN-1 to be the first general physical AI model to cross a key threshold: unlocking commercial viability across a broad range of tasks,&amp;rdquo; the company stated in its announcement.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/magazine/2026-04-04-image002-2-d88ecd8b_hu_b9baad7ece2674c.webp"
srcset="https://robohorizon.com/images/shared/magazine/2026-04-04-image002-2-d88ecd8b_hu_b9baad7ece2674c.webp 480w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="A robotic arm meticulously packing a smartphone into a box, demonstrating high-speed precision."
loading="lazy"
width="480"
height="271"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;h3 id="the-holy-trinity-reliability-speed-and-improvisation"&gt;The Holy Trinity: Reliability, Speed, and Improvisation&lt;/h3&gt; &lt;p&gt;Generalist defines &amp;ldquo;mastery&amp;rdquo; as a combination of three key capabilities, two of which have been the bedrock of industrial automation for 60 years. It&amp;rsquo;s the third one that changes the game.&lt;/p&gt;
&lt;h4 id="reliability-and-speed-the-industrial-baseline-supercharged"&gt;Reliability and Speed: The Industrial Baseline, Supercharged&lt;/h4&gt; &lt;p&gt;First, the numbers are just plain impressive. In long-duration tests, GEN-1 packed blocks over 1,800 times in a row, folded boxes over 200 times, and even serviced a robot vacuum cleaner over 200 times in a row—a robot maintaining another robot, which is either the dream or the beginning of a very specific horror movie. These tasks ran for hours without intervention at a 99% success rate.&lt;/p&gt;
&lt;p&gt;Then there&amp;rsquo;s the speed. Robots powered by GEN-1 can assemble a box in 12.1 seconds, a task that took its predecessor around 34 seconds. Packing a phone into a case is done in 15.5 seconds, 2.8 times faster than before. This isn&amp;rsquo;t just a matter of cranking up motor speeds; the model learns from experience and leverages advanced inference techniques to perform tasks more efficiently than the human demonstrations it learned from.&lt;/p&gt;
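&lt;p&gt;Those speedup claims are easy to sanity-check from the raw task times alone. A quick illustrative calculation, using only the figures quoted above:&lt;/p&gt;

```python
# Sanity-check the quoted speedups against the raw task times (illustrative)
box_gen1, box_prior = 12.1, 34.0        # seconds to assemble a box
box_speedup = box_prior / box_gen1
print(round(box_speedup, 1))            # ~2.8x, consistent with "up to three times faster"

# The phone-packing claim works the other way: a 2.8x speedup over 15.5 s
# implies the prior system needed roughly 43 s for the same task.
phone_gen1, phone_speedup = 15.5, 2.8
phone_prior = phone_gen1 * phone_speedup
print(round(phone_prior))               # ~43 seconds
```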
&lt;div class="video-container youtube-facade"
data-youtube-src="https://www.youtube.com/embed/SY2xyrmV44Y?autoplay=1"
role="button"
tabindex="0"
aria-label="Play video"&gt;&lt;img class="youtube-facade-thumbnail"
src="https://img.youtube.com/vi/SY2xyrmV44Y/hqdefault.jpg"
srcset="https://img.youtube.com/vi/SY2xyrmV44Y/mqdefault.jpg 320w,
https://img.youtube.com/vi/SY2xyrmV44Y/hqdefault.jpg 480w,
https://img.youtube.com/vi/SY2xyrmV44Y/sddefault.jpg 640w,
https://img.youtube.com/vi/SY2xyrmV44Y/maxresdefault.jpg 1280w"
sizes="(max-width: 320px) 320px, (max-width: 480px) 480px, (max-width: 640px) 640px, 1280px"
alt="Video thumbnail"
loading="lazy"
decoding="async"&gt;&lt;button class="youtube-facade-play-icon" aria-label="Play video" type="button"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;script&gt;
(function() {
var container = document.currentScript.previousElementSibling;
if (!container || !container.classList.contains('youtube-facade')) return;
function loadVideo() {
var src = container.dataset.youtubeSrc;
if (!src) return;
var iframe = document.createElement('iframe');
iframe.src = src;
iframe.title = 'YouTube video player';
iframe.frameBorder = '0';
iframe.allow = 'accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share';
iframe.referrerPolicy = 'strict-origin-when-cross-origin';
iframe.allowFullscreen = true;
container.innerHTML = '';
container.classList.remove('youtube-facade');
container.removeAttribute('role');
container.removeAttribute('tabindex');
container.removeAttribute('aria-label');
container.appendChild(iframe);
}
container.addEventListener('click', loadVideo);
container.addEventListener('keydown', function(e) {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault();
loadVideo();
}
});
})();
&lt;/script&gt;
&lt;h4 id="improvisation-the-spark-of-intelligence"&gt;Improvisation: The Spark of Intelligence&lt;/h4&gt; &lt;p&gt;Reliability and speed are staples of industrial arms bolted to a factory floor. What they lack is the ability to handle the universe&amp;rsquo;s persistent refusal to stick to the script. This is where GEN-1&amp;rsquo;s &amp;ldquo;improvisational intelligence&amp;rdquo; comes in.&lt;/p&gt;
&lt;p&gt;Generalist describes this as an emergent capability, a form of &amp;ldquo;freestyle problem-solving.&amp;rdquo; In one demo, a robot kitting automotive parts accidentally bumps a washer. Instead of freezing or failing, the GEN-1 powered system assesses the situation and adapts. It might set the washer down to regrasp it cleanly, or cleverly use the edge of a slot to reorient the piece, or even bring in its other hand for a bimanual assist. These aren&amp;rsquo;t pre-programmed recovery routines; they are novel solutions generated on the fly, well outside the training distribution. This is the difference between automation and autonomy.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/GeneralistAI/status/2039709306145190262"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;h3 id="more-than-a-model-its-a-system"&gt;More Than a Model, It&amp;rsquo;s a System&lt;/h3&gt; &lt;p&gt;It&amp;rsquo;s crucial to understand that GEN-1 is not merely a set of model weights. It&amp;rsquo;s a complete system that includes innovations in pre-training, post-training techniques, and inference-time processing. This system-level approach is what makes it so data-efficient, capable of adapting to a new robot body and a new task simultaneously with about an hour of new data.&lt;/p&gt;
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/magazine/2026-04-04-image003-3-d88ecd8b_hu_a17ef0c4d6a6bf53.webp"
srcset="https://robohorizon.com/images/shared/magazine/2026-04-04-image003-3-d88ecd8b_hu_a17ef0c4d6a6bf53.webp 480w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="A robot arm servicing a robot vacuum cleaner, showcasing complex interaction between two machines."
loading="lazy"
width="480"
height="480"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;p&gt;Of course, GEN-1 is not a silver bullet for physical AGI. The company is quick to point out its limitations. Not all tasks achieve that 99%+ success rate, and some industrial applications demand even higher reliability. Furthermore, emergent improvisation raises the critical question of AI alignment. A robot that can creatively solve a problem is fantastic, but you also need to ensure its creative solutions don&amp;rsquo;t involve, say, punching a hole in a wall for efficiency.&lt;/p&gt;
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/magazine/2026-04-04-image004-4-d88ecd8b_hu_d9aa4b68078e470f.webp"
srcset="https://robohorizon.com/images/shared/magazine/2026-04-04-image004-4-d88ecd8b_hu_d9aa4b68078e470f.webp 480w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="A pair of robotic arms working in tandem to fold a t-shirt, a classic challenge in dexterous manipulation."
loading="lazy"
width="480"
height="468"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;p&gt;Still, the launch of GEN-1 feels like a significant milestone. It strengthens the argument that scaling models with vast amounts of real-world physical interaction data is the most promising path toward generalist robots. By focusing on a trifecta of performance—doing the task right, doing it fast, and knowing what to do when things go wrong—Generalist may have just dragged the dream of the useful, general-purpose robot one giant leap closer to reality. For us, that&amp;rsquo;s more than just a model; it&amp;rsquo;s a sign that the physical world is finally about to get a whole lot more intelligent.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>research</category><category>business</category><category>startups</category><media:content url="https://robohorizon.com/images/shared/magazine/2026-04-04-image001-1-d88ecd8b.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>This Self-Driving AI Bicycle Is an Open-Source Engineering Masterpiece</title><link>https://robohorizon.com/en-us/news/2026/04/this-self-driving-ai-bicycle-is-an-open-source-engineering-masterpiece/</link><pubDate>Sat, 04 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/this-self-driving-ai-bicycle-is-an-open-source-engineering-masterpiece/</guid><description>An AI engineer built a fully autonomous bicycle that can balance, navigate, and avoid obstacles, then open-sourced the entire mind-bending project on GitHub.</description><content:encoded>&lt;p&gt;While the world’s biggest tech companies are pouring billions into taking the driver out of a four-wheeled car, AI engineer &lt;strong&gt;Peng Zhihui&lt;/strong&gt; decided to solve a much trickier problem: taking the rider off a two-wheeled bicycle. The result is the &lt;strong&gt;XUAN-Bike&lt;/strong&gt;, a shockingly capable autonomous bicycle that balances itself perfectly, navigates complex environments, and avoids obstacles. 
And in a move that can only be described as a massive flex, he open-sourced the entire project.&lt;/p&gt;
&lt;p&gt;The bike is a marvel of complex systems integration. Its brain is a custom control board powered by &lt;strong&gt;Huawei&amp;rsquo;s Ascend 310 AI processor&lt;/strong&gt;. For vision, it uses a combination of an RGBD depth camera and traditional sensors like an accelerometer and gyroscope. But the real magic is in the balance system. Instead of just relying on steering adjustments, the bike uses a metal momentum wheel mounted under the seat that spins at high speed, providing the gyroscopic force needed to keep the bike upright even from a dead stop. You can see the unnervingly smooth results in action on &lt;a href="https://www.bilibili.com/video/BV1fV411x72a"&gt;Bilibili&lt;/a&gt;.&lt;/p&gt;
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/news/2026-04-04-image002-2-9d281f52_hu_e4a3240f794463ea.webp"
srcset="https://robohorizon.com/images/shared/news/2026-04-04-image002-2-9d281f52_hu_e4a3240f794463ea.webp 480w, https://robohorizon.com/images/shared/news/2026-04-04-image002-2-9d281f52_hu_c4543d6c4e3f4c1b.webp 640w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="A CAD rendering of the XUAN-Bike showing its custom motors and control systems."
loading="lazy"
width="480"
height="261"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;p&gt;The entire system is controlled by a neural network built on Huawei’s &lt;strong&gt;MindSpore&lt;/strong&gt; deep learning framework. This allows the bike not just to balance, but to perceive its surroundings, identify obstacles, and plot a course. According to the project documentation, the bike&amp;rsquo;s control model is based on LQR/MPC and a custom reinforcement learning algorithm. For those interested in building their own physics-defying machine, Peng has made all the hardware schematics, model files, and source code available on the project&amp;rsquo;s &lt;a href="https://github.com/peng-zhihui/XUAN/blob/main/enREADME.md"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
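&lt;p&gt;The LQR mention hints at how the balance loop likely works: linearize the lean dynamics around the upright equilibrium, then solve a Riccati equation for a feedback gain that maps lean angle and lean rate to momentum-wheel torque. Here is a minimal, purely illustrative sketch; the plant model and every parameter below are hypothetical stand-ins, not taken from the XUAN project:&lt;/p&gt;

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative linearized roll dynamics near upright: state x = [lean angle, lean rate]
g, l, J = 9.81, 0.5, 0.1          # hypothetical gravity, CoM height, effective inertia
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])      # upright equilibrium is unstable (one positive eigenvalue)
B = np.array([[0.0],
              [1.0 / J]])         # momentum-wheel torque enters as angular acceleration

Q = np.diag([10.0, 1.0])          # penalize lean angle more heavily than lean rate
R = np.array([[0.1]])             # torque effort penalty

# Solve the continuous-time algebraic Riccati equation and form the LQR gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # control law: u = -K @ x

eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))      # True: the gain stabilizes the upright equilibrium
```

&lt;p&gt;A real controller would also have to respect wheel-speed saturation and hand off to steering-based balancing at riding speed, which is presumably where the project&amp;rsquo;s MPC and reinforcement learning layers come in.&lt;/p&gt;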
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;This isn&amp;rsquo;t just an impossibly cool weekend project; it&amp;rsquo;s a masterclass in modern robotics and control theory. The XUAN-Bike demonstrates that with the right combination of accessible AI hardware and sophisticated software, a single individual can develop autonomous systems that rival the complexity of corporate R&amp;amp;D labs. By open-sourcing the project, Peng has provided an invaluable resource for students, researchers, and hobbyists, demystifying advanced concepts in dynamic stability and autonomous navigation. It’s a powerful reminder that groundbreaking innovation doesn&amp;rsquo;t always come from a boardroom—sometimes it comes from a garage and a deep-seated desire to make a bicycle do the impossible.&lt;/p&gt;</content:encoded><category>autonomous</category><category>robot-brains</category><category>open-source</category><category>research</category><category>startups</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-04-image001-1-9d281f52.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Airbus's 'Bird of Prey' Drone Fires Mini-Missiles to Kill Drones</title><link>https://robohorizon.com/en-us/news/2026/04/airbuss-bird-of-prey-drone-fires-mini-missiles-to-kill-drones/</link><pubDate>Thu, 02 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/news/2026/04/airbuss-bird-of-prey-drone-fires-mini-missiles-to-kill-drones/</guid><description>Airbus has demonstrated a new anti-drone system that uses a modified target drone to fire ultra-light, low-cost missiles, promising a radical shift in air defense economics.</description><content:encoded>&lt;p&gt;It seems &lt;strong&gt;Airbus&lt;/strong&gt; has grown tired of the comically bad economics of modern air defense, where multi-million dollar missiles are routinely used to swat drones worth less than a used 
car. The company just demonstrated its answer: a reusable hunter drone that fires its own tiny, low-cost missiles. Dubbed the &lt;strong&gt;Bird of Prey&lt;/strong&gt;, the system recorded its first air-to-air kill during a maiden demonstration flight in Germany.&lt;/p&gt;
&lt;p&gt;The announcement came via a post on X from Boris Alexander Beissner, a department head at &lt;strong&gt;Airbus Defence and Space&lt;/strong&gt;, who noted the project went from kickoff to its first successful intercept in a blistering nine months. The Bird of Prey is a modified &lt;strong&gt;Do-DT25&lt;/strong&gt; target drone, a 160 kg platform with a 2.5-meter wingspan, that has been repurposed from catching missiles to firing them.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/boris_beissner/status/2039031733375410409"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;p&gt;During the test, the drone autonomously hunted and engaged a kamikaze target drone with a &amp;ldquo;Frankenburg Mk1&amp;rdquo; missile. These ultra-light interceptors, developed by partner &lt;strong&gt;Frankenburg Technologies&lt;/strong&gt;, weigh less than 2 kg each and measure just 65 cm long. The prototype carried four missiles, but operational versions are planned to carry up to eight. Each fire-and-forget missile has an engagement range of about 1.5 km and uses a fragmentation warhead to neutralize threats.&lt;/p&gt;
&lt;h4 id="why-is-this-important"&gt;Why is this important?&lt;/h4&gt; &lt;p&gt;The current cost-exchange ratio in drone warfare is unsustainable. Firing a Patriot missile, which can cost upwards of $4 million, to destroy a $20,000 drone is a strategy that leads to empty coffers and depleted stockpiles. The Bird of Prey system aims to flip that economic script entirely. By using a reusable, relatively low-cost drone to launch cheap, mass-producible interceptors, Airbus is creating a scalable defense against the growing threat of drone swarms. It&amp;rsquo;s less like using a sledgehammer to kill a fly and more like training a falcon to do it for you—efficiently, repeatedly, and without breaking the bank. Airbus and Frankenburg plan further tests throughout 2026 to bring the system to operational readiness.&lt;/p&gt;</content:encoded><category>autonomous</category><category>industrial</category><category>policy</category><category>business</category><media:content url="https://robohorizon.com/images/shared/news/2026-04-02-image-9336b2d9.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item><item><title>Cortical Labs Is Now Renting Out Human Brain Cells in the Cloud</title><link>https://robohorizon.com/en-us/magazine/2026/04/cortical-labs-is-now-renting-out-human-brain-cells-in-the-cloud/</link><pubDate>Thu, 02 Apr 2026 00:00:00 +0000</pubDate><guid>https://robohorizon.com/en-us/magazine/2026/04/cortical-labs-is-now-renting-out-human-brain-cells-in-the-cloud/</guid><description>Australian startup Cortical Labs has opened its Cortical Cloud, allowing anyone with a hefty budget to rent and program live biological neural networks on a chip.</description><content:encoded>&lt;p&gt;For years, &amp;ldquo;cloud computing&amp;rdquo; has been a convenient, if slightly fluffy, metaphor for accessing vast server farms over the internet. 
Australian startup &lt;strong&gt;Cortical Labs&lt;/strong&gt; has apparently decided to take the term with unnerving literalness, replacing some of that silicon with living, firing human neurons. And now, for a price, they&amp;rsquo;ll let you run your code on it.&lt;/p&gt;
&lt;p&gt;Welcome to the &lt;strong&gt;Cortical Cloud&lt;/strong&gt;, a platform that officially moves the concept of &amp;ldquo;wetware-as-a-service&amp;rdquo; from science fiction novels to a publicly accessible API. For approximately $2,170 per month per instance, you can now &amp;ldquo;hire&amp;rdquo; a biological neural network (BNN) grown from human brain cells and fused to a silicon chip. It&amp;rsquo;s a bold, slightly unsettling business model that promises to unlock new frontiers in computing, assuming you have the budget and a flexible definition of &amp;ldquo;end-user license agreement.&amp;rdquo;&lt;/p&gt;
&lt;h3 id="from-pong-to-the-public-cloud"&gt;From Pong to the Public Cloud&lt;/h3&gt; &lt;p&gt;If the name &lt;strong&gt;Cortical Labs&lt;/strong&gt; rings a bell, it’s because this is the same team that famously taught a cluster of brain cells in a dish—dubbed &amp;ldquo;DishBrain&amp;rdquo;—to play the video game &lt;em&gt;Pong&lt;/em&gt; back in 2022. That experiment, published in the journal &lt;em&gt;Neuron&lt;/em&gt;, demonstrated that these biological circuits could learn and adapt in real-time, far faster than many traditional AI models. It was a watershed moment for what the company calls &amp;ldquo;Synthetic Biological Intelligence.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Since then, they&amp;rsquo;ve leveled up their ambitions considerably. As we&amp;rsquo;ve covered previously in &lt;a href="https://robohorizon.com/en-us/magazine/2026/03/cortical-labs-plugs-human-brain-cells-into-an-llm-after-they-mastered-doom/" hreflang="en-us"&gt;Cortical Labs Plugs Human Brain Cells Into an LLM After They Mastered DOOM&lt;/a&gt;, their neural networks have moved well beyond the lab bench. Now, they&amp;rsquo;ve productized their creation. The company has officially opened its platform to the public, inviting researchers, developers, and the morbidly curious to see what they can discover with a literal brain in a box.&lt;/p&gt;
&lt;div class="x-post-container"&gt;
&lt;blockquote class="twitter-tweet"&gt;
&lt;a href="https://twitter.com/CorticalLabs/status/2033703626695479376"&gt;&lt;/a&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;style&gt;
.x-post-container {
margin: 1.5rem 0;
display: flex;
justify-content: center;
}
&lt;/style&gt;
&lt;h3 id="how-to-program-a-brain"&gt;How to Program a Brain&lt;/h3&gt; &lt;p&gt;So, how does one go about renting a slice of biological compute? The process is surprisingly similar to spinning up a server on AWS or Google Cloud, which is perhaps the most surreal part of this entire endeavor. The core of the platform is the &lt;strong&gt;CL1&lt;/strong&gt;, a custom hardware device containing the BNN on a high-density multi-electrode array. This hardware allows for both stimulating the neurons and recording their responses with microsecond latency.&lt;/p&gt;
&lt;p&gt;Access to this wetware is managed through the &lt;strong&gt;Cortical Labs API (CL API)&lt;/strong&gt;, a Python library that abstracts away the bio-physical complexity. Developers can use a simple SDK to interact with the neurons, sending signals and interpreting the resulting activity spikes.&lt;/p&gt;
&lt;picture&gt;
&lt;img src="https://robohorizon.com/images/shared/magazine/2026-04-02-image002-2-07bb4b21_hu_2e5ccd9c06d2a6eb.webp"
srcset="https://robohorizon.com/images/shared/magazine/2026-04-02-image002-2-07bb4b21_hu_2e5ccd9c06d2a6eb.webp 480w, https://robohorizon.com/images/shared/magazine/2026-04-02-image002-2-07bb4b21_hu_c45537a37a6f7cd8.webp 640w"
sizes="(max-width: 768px) 100vw, 50vw"
alt="A screenshot of the Cortical Labs developer documentation showing Python code for installing the SDK."
loading="lazy"
width="480"
height="240"
class="img-fluid article-centered"
decoding="async"&gt;
&lt;/picture&gt;
&lt;p&gt;For those who want to kick the tires before committing a couple of grand, Cortical Labs provides a simulator that mimics the behavior of a real CL1 device; code developed against the simulator is designed to be a drop-in replacement for the real thing. The entire software development kit is open-source and available at &lt;a href="https://github.com/Cortical-Labs/cl-sdk"&gt;cl-sdk on GitHub&lt;/a&gt;.&lt;/p&gt;
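&lt;p&gt;To make the stimulate-and-record pattern concrete, here is a self-contained toy sketch of a closed-loop session. To be clear: this is &lt;em&gt;not&lt;/em&gt; the real CL API&amp;mdash;the class and method names (&lt;code&gt;ToyBNN&lt;/code&gt;, &lt;code&gt;stimulate&lt;/code&gt;, &lt;code&gt;read_spikes&lt;/code&gt;) are invented for illustration, and the &amp;ldquo;plasticity&amp;rdquo; is a crude random stand-in. For the actual interface, consult the open-source SDK.&lt;/p&gt;

```python
# Toy sketch of a closed-loop stimulate-and-record session.
# All names (ToyBNN, stimulate, read_spikes) are hypothetical -- the real
# CL API's interface is not reproduced here; see the cl-sdk repository.
import random


class ToyBNN:
    """Stand-in for a CL1 device: electrodes that can be stimulated and read."""

    def __init__(self, channels=64, seed=0):
        self.rng = random.Random(seed)
        # Per-channel firing probability; a crude stand-in for plasticity.
        self.gain = [0.1] * channels

    def stimulate(self, channel, amplitude):
        # Repeated stimulation strengthens the stimulated channel,
        # loosely mimicking activity-dependent adaptation.
        self.gain[channel] = min(1.0, self.gain[channel] + 0.01 * amplitude)

    def read_spikes(self, channel):
        # Spike count over 100 sampling windows, proportional to gain.
        return sum(self.rng.random() < self.gain[channel] for _ in range(100))


bnn = ToyBNN()
before = bnn.read_spikes(3)
for _ in range(50):              # closed loop: stimulate channel 3 repeatedly
    bnn.stimulate(3, amplitude=1.0)
after = bnn.read_spikes(3)
print(before, after)             # activity on the stimulated channel rises
```

&lt;p&gt;The point of the sketch is the workflow, not the biology: a session is a loop of writing stimulation patterns and reading back spike activity, which is exactly the shape of program the simulator lets you develop before ever touching living tissue.&lt;/p&gt;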
&lt;h3 id="the-killer-app-for-wetware"&gt;The Killer App for Wetware&lt;/h3&gt; &lt;p&gt;This all begs the question: what is this actually for? Beyond the sheer novelty, Cortical Labs is targeting three primary fields:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Neuroscience:&lt;/strong&gt; Providing a standardized platform to study how neurons learn, form memories, and process information in a highly controlled environment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Drug Discovery &amp;amp; Toxicology:&lt;/strong&gt; Researchers can test the effects of new pharmaceutical compounds on real neural circuits to screen for efficacy and neurotoxicity, potentially accelerating treatments for diseases like Alzheimer&amp;rsquo;s or epilepsy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Artificial Intelligence:&lt;/strong&gt; This is the big one. Proponents of biological computing argue that brains are vastly more energy-efficient than silicon-based AI for certain tasks. By studying and harnessing biological intelligence, we might discover entirely new computing paradigms that don&amp;rsquo;t require planet-spanning data centers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Of course, this cutting-edge access comes at a price. While a single instance runs about $2,170 a month, Cortical Labs offers a discount for bulk orders—renting ten instances for six months drops the price to around $1,600 per unit per month. As the company cheekily notes, this is &amp;ldquo;cheaper than a human.&amp;rdquo; For now, anyway. They also encourage academic institutions to reach out for grants, signaling a clear intention to seed the research community.&lt;/p&gt;
&lt;p&gt;The launch of the Cortical Cloud is a strange and significant milestone. It&amp;rsquo;s the commercialization of a field that has long been theoretical. We&amp;rsquo;ve moved from simulating neural networks on silicon to offering genuine biological intelligence as a cloud service. What will be built on this platform remains to be seen, but one thing is certain: the line between computer and organism has never been blurrier.&lt;/p&gt;</content:encoded><category>robot-brains</category><category>bionics</category><category>research</category><category>startups</category><media:content url="https://robohorizon.com/images/shared/magazine/2026-04-02-image001-1-07bb4b21.webp" medium="image"/><dc:creator>Robot King</dc:creator><dc:language>en-us</dc:language></item></channel></rss>