A few blogs are passing around videos of the Ishikawa Komuro Lab's high-speed robot hand performing impressive acts of dexterity and skillful manipulation. However, the video being passed around is light on details. Meanwhile, their video presentation at ICRA 2009 (which took place in May in Kobe, Japan) has an informative narration and demonstrates additional capabilities. I have included this video below, which shows the manipulator dribbling a ping-pong ball, spinning a pen, throwing a ball, tying knots, grasping a grain of rice with tweezers, and tossing / re-grasping a cellphone!
The iconic Pixar animated lamp, Luxo Jr., unofficially debuted in animatronic form at Disney's Hollywood Studios in late June (videos below). Both the animated and animatronic Luxo Jrs. have remarkable anthropomorphic emotive capabilities in spite of their simple, non-human form. This reminds me of conversations in Dr. Andrea Thomaz's human-robot interaction course about applying animation techniques to design more effective social robots -- clearly Disney's Imagineers have perfected this art.
I came upon this new commercial (video below) entitled "The Runner -- Exploit Yourself" created by Big Lazy Robot (a design / visual effects studio) for Nike. The humanoid robot performs impressive feats of urban acrobatics, strongly resembling a more agile version of the movie-star robot, Hinokio. It is always interesting to compare robot fact with fiction. Hopefully the future lives up to (nay, exceeds) our expectations.
There was a very interesting plenary talk at ICRA 2009 about "Computational Cameras" given by Prof. Shree Nayar of Columbia University. A video of the plenary is included below, as well as a discussion of some of its contents -- from assorted pixel techniques for high dynamic range to flexible depth of field photography -- all very cool stuff! These developments are particularly relevant to robotics, as cameras are probably the most ubiquitous sensors encountered. This video was made available in the ICRA 2009 podcasts. While there is a large push for open-access journals / conferences, freely-available recordings of conference talks is even more lacking. As I find these more entertaining than television, I really hope this becomes a common trend (perhaps the RSS committee members are watching...?).
While most (semi)autonomous mobile manipulators employ expensive articulated arms with grippers (6 or more DOF), the Healthcare Robotics Lab at Georgia Tech, the same folks who made EL-E, are also examining low-complexity end effectors modeled on dustpans and kitchen turners for non-prehensile grasping of isolated objects from the floor. When mounted on an iRobot Create (Roomba), the system's performance was impressive; it successfully grasped ~95% of the 34 test objects across numerous orientations / configurations and four different surfaces -- an impressive feat of robustness given that the end effector is a single under-actuated "sweeper" (1 DOF) working in tandem with a planar wedge, the whole system operates via open loop control, and the objects were quite varied (from small individual pills to large containers, and from deformable textiles to rigid bottles). This system is slated to appear at ICRA 2009 in Kobe, Japan in the next few days and is documented in a paper entitled "1000 Trials: An Empirically Validated End Effector that Robustly Grasps Objects from the Floor" (of which I am a coauthor). Read further for videos and additional discussion.
I would like to mark this momentous occasion by sharing it with you -- that, and it is just plain cool (and artistic)! The folks at RadiologyArt.com have been building Computed Tomography (CT) scans of various objects (dolls, electronics, vacuum tubes, McDonalds hamburgers, etc). The addition of a remote-control dog and a wind-up drumming bunny represents (to my knowledge) the first examples of CT scans of robots, albeit rudimentary robots. Read on for pictures and amazingly detailed videos.
There was a paper just released in Science (Materials) about "Giant-Stroke, Superelastic Carbon Nanotube Aerogel Muscles." This is a rare case where I believe the research material far exceeds the buzzword hype! The new material responds to applied voltages by expanding 220% in a few milliseconds, operating in temperatures as low as liquid-nitrogen and as high as the melting point of iron. It has the strength and stiffness of steel (by weight) in one direction and yet is as compliant as rubber in the other two. It has extremely low density due to its airy (aerogel) properties, and is both conductive and transparent. This materials innovation has the potential to rejuvenate research on artificial muscles, which has generally been focused on shape memory alloys (i.e. nickel-titanium or Nitinol), piezoelectrics (such as PZT), or electroactive polymers (EAPs). Read on for a discussion about these alternative technologies, their drawbacks, and why this new material may be a game-changer!
Describing science as "beautiful" makes perfect sense to me; I believe the physics experiments described in The Prism and the Pendulum are on par with the greatest paintings and sculptures ever conceived! However, I'm having difficulties classifying the $30,000 robot, Keepon: Is it a research robot, an art-robot, or both? On one hand, there is evidence supporting its role in important robotics research. On the other hand, there are the numerous (many more?) whimsical videos of Keepon dancing to music or traveling the world, such as the "Keepon Goes Seoul-Searching" video to be shown on Friday at the Human-Robot Interaction (HRI) 2009 conference (we show this video below). Having seen Keepon in person, I can attest to its "cuteness" factor and quality design... but my questions are: "Where is the line between art and research drawn?" "Does such a line, necessarily, exist?" and "How can HRI researchers and peer-reviewers objectively evaluate important robotics research that also possesses strong artistic components?" I'd love to hear your thoughts.
While Hizook covered the Rollin' Justin robot over three months ago, the rest of the world (including Engadget) had to wait until CeBIT, where Rollin' Justin "debuted" today. Lots of great pictures and videos were taken, including a video where Rollin' Justin is led around by the hand (I assume using the force/torque sensing capabilities of the DLR-III lightweight arm or the DLR-II hand). However, the "serious" coverage at CeBIT left out one of Justin's most hilarious commands: "dance like in pulp fiction." We show this video (to be shown at the upcoming ICRA 2009 conference) below.
I've been meaning to mention this for some time now... SICK has released a "new" laser rangefinder, the LMS 100. This laser rangefinder seems to be a departure from the classic "coffee-pot" look of yore (i.e. SICK LMS 291). In fact, its form factor and specifications are quite similar to the Hokuyo UTM-30LX; it seems like the LMS 100 might be SICK's strategic response to the "budget" LIDAR manufacturer's (Hokuyo's) burgeoning popularity among indoor roboticists. Priced at $5000 USD ($1000 less with academic discounts), I'm curious how it actually compares in performance (in the field) to the $5600 Hokuyo UTM -- can anyone weigh in? Read on for a comparison of specifications.
Hizook reader Yue Khing pointed us to another disassembled laser rangefinder (LIDAR); this time, it is an Omron STI OptoShield OS3100. This LIDAR seems similar in form (apparently referred to as "coffee pots" in industry) and specification to the SICK LMS laser rangefinders. Honestly, I wasn't even aware that Omron made laser rangefinders, so I'm not sure what these units cost, or how common they are; however, it is still interesting to compare their internal design to the SICK LMS series and Velodyne laser rangefinders we've already seen disassembled. Read on for pictures and videos.
As of January 2009, the iBOT powered-wheelchair will be discontinued. This is unfortunate for the disabled community -- Dean Kamen and the others at DEKA (the same people responsible for the Segway and Luke Arm) developed an amazing robotic wheelchair that was (somewhat) unique in its ability to transition from a statically-stable, 4-wheel configuration to a dynamically-stable, 2-wheel configuration to give occupants added height. Further, by pivoting pairs of wheels, the wheelchair and occupant were able to dynamically balance while traversing stairs, not to mention the wheelchair's basic ability to traverse (relatively) poor terrain, such as sand and gravel! All of this was possible due to careful controllers and internal gyros (not entirely dissimilar to a Segway). Read further for discussion -- specifically about why this loss for the disabled community could be an opportunity in disguise for the robotics community and a big win for Kamen and company.
I think this is both brilliant and hilarious... University of Delaware researchers, James Galloway and Sunil Agrawal, were awarded a two-year, $325k NSF grant to explore robot-enabled mobility for special needs children, with the goal of spurring cognitive development -- this is brilliant. However, why focus solely on special needs children? I think it is hilarious to imagine "regular" children using "smart wheelchairs" to putter around before they learn to crawl / walk -- it would certainly make for some entertaining rounds of baby-bumper-cars! Adding to the hilarity, their initial prototypes are Pioneer robots pulling a plywood trailer, supported by casters, with a small chair atop (images below)! But who am I to judge... we can all relate to "ugly prototype syndrome."
Robots have found utility in warfare dating back to World War II (and arguably earlier), beginning with the invention of simple electrical servo-mechanisms for fire control and targeting. While fire control has become extremely advanced, its "human in the loop" nature kept us (relatively) oblivious of the ethical implications of robots in warfare. However, increased autonomy and point-and-click capabilities are forcing us to reevaluate those implications. Enter a new book by P.W. Singer, entitled Wired for War: The Robotics Revolution and Conflict in the 21st Century. Singer was recently interviewed by NPR (and on The Daily Show by Jon Stewart), where he talked about a number of interesting issues. Links and discussion follow.
Friend and colleague, Richard Roberts, has offered to give us a glimpse of his recent work on learning autonomous robot behaviors/controllers from human operators:
While working on the DARPA LAGR project, we found it exceedingly difficult to tune our reactive behaviors to work well in cluttered and patchy environments. Either obstacle avoidance was too sensitive and the robot would not drive through gaps, or it was too aggressive and the robot would collide with obstacles. Of course, we could have made the behaviors more and more complicated, introducing more parameters to tune, but we wanted an easy way to have the robot “just do what I say!” Thus, we developed a system for interactive, on-line training of behaviors with a remote control. The user flips a switch to training mode, and drives the robot how they would like it to drive, then flips the switch back to autonomous to test the behavior.
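The train/test switch described above can be sketched in a few lines. This is a hypothetical illustration, not the LAGR team's actual system: here, training mode logs (sensor features, human command) pairs, and autonomous mode replays the command from the nearest recorded demonstration. The feature vectors and commands below are invented for the example.

```python
import math

class BehaviorTrainer:
    """Minimal learning-from-demonstration sketch (1-nearest-neighbor)."""

    def __init__(self):
        self.demos = []          # recorded (features, command) pairs
        self.training = False    # state of the remote-control switch

    def set_training(self, on):
        self.training = on

    def observe(self, features, human_command=None):
        """Call once per control cycle with the current sensor features."""
        if self.training:
            # Training mode: record what the human driver does here.
            self.demos.append((features, human_command))
            return human_command
        # Autonomous mode: replay the command from the closest demo.
        best = min(self.demos, key=lambda d: math.dist(features, d[0]))
        return best[1]

trainer = BehaviorTrainer()
trainer.set_training(True)                 # flip switch to training
trainer.observe([0.2, 1.0], (0.5, 0.0))    # clear ahead -> drive straight
trainer.observe([1.0, 0.1], (0.1, 0.8))    # obstacle right -> turn left
trainer.set_training(False)                # flip back to autonomous
cmd = trainer.observe([0.9, 0.2])          # resembles the second demo
```

A real system would generalize with a learned regressor rather than a nearest-neighbor lookup, but the interaction loop (flip switch, demonstrate, flip back, test) is the same.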
Back on December 15th, we got a look at the internals of a SICK Laser Rangefinder (LIDAR), a $6k device that employs a single laser diode to produce ~6000 points per second (~600 points per scan at ~10Hz) over a 180° field-of-view. Now, we can compare that to the Rolls Royce of Laser Rangefinders -- the Velodyne Lidar, a $75k device employing 64 laser diodes to produce 1.3 million data points per second with a 360° horizontal field-of-view and a 26.8° vertical field-of-view. Below is a video of Bruce Hall, President of Velodyne LIDAR, demonstrating the HDL-64E in operation and taking a look at its internals. It may not be a complete disassembly (it does cost $75,000 after all!), but it does provide some interesting insights into the Velodyne's internals.
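To put those two data rates in perspective, here is a back-of-the-envelope comparison using only the figures quoted above:

```python
# SICK LMS: ~600 points per scan at ~10 Hz.
sick_points_per_scan = 600
sick_scan_rate_hz = 10
sick_pps = sick_points_per_scan * sick_scan_rate_hz  # ~6,000 points/s

# Velodyne HDL-64E: quoted at 1.3 million points per second.
velodyne_pps = 1_300_000

ratio = velodyne_pps / sick_pps
print(f"The Velodyne produces ~{ratio:.0f}x more points per second")
```

Roughly a 200-fold difference in raw data rate, for a 12.5-fold difference in price.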
The Situational Awareness Mast (SAM, also known as a Zipper Mast) from Geosystems Inc. is a telescoping linear actuator that has a unique property -- its stroke length is an order of magnitude greater than its nominal height! For example, the SAM8 is a 10 lb device with a stroke length (8ft) that is 24 times its nominal height (4 inches)! This can be used to vertically translate a robot's sensor suite for better visibility while still allowing for a low profile. Read on for information on the different Zipper Mast variants, the patent describing the system, and an exclusive video of a Zipper Mast on an iRobot Packbot!
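The stroke-to-height ratio quoted above is easy to verify from the SAM8's numbers:

```python
# SAM8: 8 ft stroke, 4 inch collapsed (nominal) height.
stroke_in = 8 * 12          # 8 ft = 96 inches
nominal_height_in = 4

ratio = stroke_in / nominal_height_in
print(f"Stroke is {ratio:.0f}x the collapsed height")  # 96 / 4 = 24
```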
Back in May 2008 it was announced that CMU professors Sara Kiesler and Jodi Forlizzi (from the HCI Institute) and Paul Rybski (from the Robotics Institute) were awarded $500k in Microsoft's Human-Robot Interaction funding to develop a social, snack-selling robot to traverse Newell-Simon and Wean halls (press release). After seeing a prototype appear on Flickr in July, we've all been waiting patiently to see pictures of the final version. Well, the wait is over -- photos of the new CMU snackbot, conceptual designs, and construction photos are contained below! It appears that the CMU team is progressing nicely.
This is great! Honda is celebrating its 50th year in the US by creating a 49-foot tall Asimo float that will lead off the Rose Parade on January 1st, 2009. To quote the Honda press release: "Honda's Rose Parade float, a 49-foot replica of Honda's ASIMO humanoid robot, and the parade's first-ever hydrogen-powered fuel cell pace car, the Honda FCX Clarity, will lead the 120th Rose Parade as well as kick off Honda's 50th anniversary of U.S. operations." I'm always a fan of robots being displayed (and appreciated) by the general public; thanks to DVICE for pointing this out.
I've always wanted to pull apart a SICK laser rangefinder (LIDAR). However, the $6k price-tag (and advisor repercussions) have always been a sufficient deterrent. Well, Kyle Vogt of MIT has disassembled what looks to be a SICK LMS-210 -- perhaps his was already broken? Anyway, the internal design is surprisingly simple. It's interesting to look at the internals of such an iconic piece of robotics hardware. Read on for more images.
The folks at Dr. Matsumaru's Bio-Robotics & Human-Mechatronics Laboratory have worked on some very interesting human-robot interaction projects. I'm particularly interested in their video-projector interfaces. In one scenario, the video projector shows the robot's intended motion trajectory. In another scenario, dubbed the "Step-On Interface" or SOI, users step on projected "buttons" to control the robot. According to videos (below), Dr. Matsumaru is targeting home-based service robots. Read on for videos and more information about the video projector robot interfaces, as well as some others (using visible lasers, LCDs, and Persistence of Vision or POV displays).
You may recall Justin, the humanoid robot sporting two DLR-III lightweight arms and two DLR-II hands. Well, Justin has recently acquired a 4-wheel mobile base dubbed "Rollin' Justin". The base utilizes a "powered-caster" design similar to the Willow Garage PR2, except that the torso-caster linkage contains a spring-loaded lift mechanism that gives the base a variable footprint. I'm sure this will prove useful when trying to squeeze through doors, adapting to uneven terrain, or providing a larger support polygon. While we currently do not have any video of the system in action, there are a number of great pictures and design documents below.
Several colleagues and I are anxiously following the creation of Willow Garage's PR2 mobile manipulation robot. By looking at the progress on WG's blog, it appears they're well on their way to functioning units by early next year; they already have some bases, spines, heads, and even an arm up and running -- read on to see more images from the PR2 "alpha" prototypes. One interesting aspect of Willow Garage is that their "Robot Operating System" (ROS), being developed by the Player/Stage founder Brian Gerkey, is entirely open source and runs on (among others) Ubuntu Linux! You may also recall that Keenan Wyrobek and Eric Berger (formerly at Stanford, now both at Willow Garage) had a hand in the PR1 robot, with impressive videos of the robot cleaning up rooms, fetching beer, and unloading a dishwasher (see videos below).
There has been a lot of press in the last six months revolving around El-E, the autonomous mobile manipulation platform for the motor impaired out of Georgia Tech's Healthcare Robotics Lab (to which I belong). There was a report in the NY Times on El-E's laser-pointer interface, and now a report in MIT Tech Review on El-E behaving like a service dog. Recently, the lab's director (and my advisor) Dr. Charlie Kemp, gave an impressive talk at Carnegie Mellon's Robotics Institute (CMU-RI) where he adeptly tied together these research initiatives and made a compelling case for more autonomous mobile manipulators for the motor impaired. Read on for the CMU-RI video and some choice images and themes from the talk.