Velodyne HDL-64E Laser Rangefinder (LIDAR) Pseudo-Disassembled

Back on December 15th, we got a look at the internals of a SICK Laser Rangefinder (LIDAR), a $6k device that employs a single laser diode to produce ~6,000 points per second (~600 points per scan at ~10 Hz) over a 180° field-of-view.  Now, we can compare that to the Rolls-Royce of laser rangefinders -- the Velodyne LIDAR, a $75k device employing 64 laser diodes to produce 1.3 million data points per second with a 360° horizontal field-of-view and a 26.8° vertical field-of-view.  Below is a video of Bruce Hall, President of Velodyne LIDAR, demonstrating the HDL-64E in operation and taking a look at its internals.  It may not be a complete disassembly (it does cost $75,000, after all!), but it does provide some interesting insights into the Velodyne's internals.

You may recall that the Velodyne (below left) is a popular fixture on DARPA Urban Challenge vehicles, producing the characteristic concentric laser scans (below right) that proved useful in everything from obstacle avoidance to curb and lane detection.

Velodyne Laser Range Finder (LIDAR)  Velodyne Laser Range Finder (LIDAR) Concentric Circle Visualization

So let's dig a little deeper and show how this amazing sensor functions.  First (below left) is an image showing the characteristic front lens assembly.  Notice that there are two "blocks" -- a top and a bottom -- each containing 32 laser diodes (for a total of 64).  The laser beams exit the device through the outer lenses and return to photo-detectors through the middle lenses; range is determined by time-of-flight (TOF).  Below right is a view of the rear of the device.
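
Since ranging is straightforward time-of-flight, the underlying math fits in a few lines.  Here's a minimal sketch in Python (the 800 ns round-trip time is illustrative, chosen to match the ~120 m maximum range in the specs later in this article):

    C = 299_792_458.0                  # speed of light, m/s

    def tof_range_m(round_trip_s: float) -> float:
        """Range implied by a round-trip pulse time: the pulse travels
        out and back, so range is half the total path length."""
        return C * round_trip_s / 2.0

    # A pulse returning after ~800 nanoseconds implies a ~120 m target:
    print(tof_range_m(800e-9))         # ~119.9 m

Note that the < 2 cm accuracy quoted in the specs implies resolving round-trip timing to roughly 0.13 ns.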

Velodyne Insides: Front Lens Blocks  Velodyne Insides: Backside Internals

There are a couple of interesting structures to note in the rear of the Velodyne.  For example, there are four banks of laser diodes, each containing 16 lasers; in the image (below left), Bruce is pointing to the "top right" laser diode bank.  The lasers are precisely (and painstakingly?) aligned with avalanche photodiodes (a semiconductor approximation to a photo-multiplier tube) on a PCB behind the central lens.  Bruce is pointing to the top avalanche photodiode board in the image (below center).  All of the timing, control, and reception signals are routed to a "main PCB" just under the top of the device.  Finally, counter-balancing weights keep the entire (spinning) system stable -- Bruce points them out in the image (below right).

Velodyne Insides: 16 Top Right Laser Diodes   Velodyne Insides: Top Avalanche Photodiode Assembly  Velodyne Insides: Counter-Balancing Weights

 

OK, enough chatter.  You can watch the video if you like.

[Embedded video: Bruce Hall demonstrating the HDL-64E and showing its internals]

So, I have a few questions about the resiliency of these sensors.  Among them:

  • What is the mean-time-between-failure (MTBF)?
  • How critical is the laser diode and photodiode alignment?  What happens if the alignment drifts, and how often does this happen?  (How about cost of repair or maintenance?)
  • Obviously, mass-balancing is pretty important for something with so much rotational inertia.  What happens if the device's balance is disrupted (say, it's hit and deformed by a ball) -- where does all that stored energy go, and how dangerous is it to bystanders?  (A back-of-the-envelope estimate follows this list.)
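
For some intuition on that last question, here is a rough, worst-case estimate of the stored rotational energy.  The simplifying assumptions are mine, not Velodyne's: the entire ~29 lb unit spins as a uniform solid cylinder at the maximum 900 RPM from the specs below.

    import math

    # Worst-case stored rotational energy.  Assumptions (mine): the whole
    # ~29 lb (~13 kg) unit spins as a uniform solid cylinder of 8" (0.20 m)
    # outer diameter at the maximum 900 RPM spin rate.
    m = 13.0                          # kg (~29 lbs)
    r = 0.102                         # m (8" OD / 2)
    I = 0.5 * m * r**2                # solid cylinder: I = (1/2) m r^2
    omega = 900 * 2 * math.pi / 60    # 900 RPM -> ~94.2 rad/s
    E = 0.5 * I * omega**2
    print(f"{E:.0f} J")               # ~300 J -- about what the unit would
                                      # have after falling ~2.3 m

So the stored energy is real but modest; the bigger practical worry is probably the unbalanced wobble tearing up the bearings.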

 

Anyway, it is a very compelling sensor -- I wish I could afford one.

Credit to Robot Central for pointing out this video.

Comments

I am excited to see that Velodyne's official homepage links to this post (see the link for "HDL-64E Product Demonstration").  While on their homepage, I was also reminded of the very cool, Grammy-nominated Radiohead music video (see below) for the song "House of Cards," which features Velodyne-generated 3D point-clouds in addition to spatial data from Geometric Informatics' camera system.

[Embedded video: Radiohead's "House of Cards" music video]

To me, the most curious part of the video is that you can clearly make out the power lines in the urban point-clouds!  Power lines have very small cross-sections, making them difficult to resolve at long range, yet the Velodyne seems to pick them up just fine -- impressive!  For those curious how the music video was produced, check out the making-of video below.

[Embedded video: the making of the "House of Cards" music video]

Finally, the Processing source code and LIDAR scan data needed to build your own visualizations are available from Google Code, along with a fun web-app that lets you interactively explore a LIDAR scan.  When I get some spare time, I'll have to take a look at how they're doing their cloud visualization in the browser -- most of the systems I've used are based on OpenGL desktop applications.
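
If you want to poke at the data yourself in the meantime, here is a minimal viewer sketch in Python, with matplotlib standing in for a proper OpenGL app.  Assumptions: each frame is a CSV file of x, y, z, intensity rows, as in the Google Code release, and "frame_0001.csv" is a hypothetical filename.

    import numpy as np
    import matplotlib.pyplot as plt

    # "frame_0001.csv" is a hypothetical filename -- substitute any frame
    # from the release (rows of x, y, z, intensity).
    pts = np.loadtxt("frame_0001.csv", delimiter=",")
    x, y, z, intensity = pts.T                          # unpack the four columns

    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(x, y, z, c=intensity, cmap="gray", s=1)  # shade by return intensity
    ax.set_axis_off()
    plt.show()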

While I'm at it, I think it may be prudent to include more comprehensive specifications for the Velodyne HDL-64E (also available in PDF form).

Specifications for the Velodyne HDL-64E:

Sensor

  • 64 lasers
  • 360 degree field of view (azimuth)
  • 0.09 degree angular resolution (azimuth)
  • 26.8 degree vertical field of view (elevation): +2° up to -24.8° down, with 64 equally spaced angular subdivisions (approximately 0.4°)
  • < 2 cm distance accuracy
  • 5-15 Hz field of view update (user selectable)
  • 50 meter range for pavement (~0.10 reflectivity)
  • 120 meter range for cars and foliage (~0.80 reflectivity)
  • >1.333M points per second (see the sanity check after this list)
  • < 0.05 milliseconds latency
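
As a quick sanity check on those numbers, here is a back-of-the-envelope sketch in Python.  One assumption of mine: the 0.09° azimuth resolution corresponds to the slowest (5 Hz) update rate.

    lasers = 64
    az_res_deg = 0.09                    # azimuth resolution (spec above)
    rev_per_sec = 5                      # slowest user-selectable update rate

    firings_per_rev = 360 / az_res_deg   # 4000 azimuth steps per revolution
    points_per_sec = lasers * firings_per_rev * rev_per_sec
    print(points_per_sec)                # 1,280,000 -- in line with the >1.333M spec

    vert_res_deg = 26.8 / 64             # 64 lasers spread over the 26.8 deg FOV
    print(round(vert_res_deg, 2))        # ~0.42 deg, the "approximately 0.4" above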

 

Laser

  • Class 1 - eye safe
  • 4 X 16 laser block assemblies
  • 905 nm wavelength
  • 5 nanosecond pulse
  • Adaptive power system for minimizing saturation and blinding

 

Mechanical

  • 12V input (16V max) @ 4 amps
  • < 29 lbs.
  • 10" tall cylinder of 8" OD diameter
  • 300 RPM - 900 RPM spin rate (user selectable)

 

Output

  • 100 Mbps UDP Ethernet packets (a parsing sketch follows below)
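
To make that concrete, here is a sketch of a minimal packet reader.  The constants reflect my reading of Velodyne's published packet format (1206-byte UDP payloads on port 2368; twelve 100-byte firing blocks of block ID, azimuth, and 32 three-byte returns; a 6-byte trailer) -- treat them as assumptions and check the manual for your unit.

    import socket
    import struct

    PORT = 2368                  # factory-default data port (assumption)
    PACKET_SIZE = 1206           # 12 x 100-byte firing blocks + 6-byte trailer

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))

    data, _addr = sock.recvfrom(2048)
    if len(data) != PACKET_SIZE:
        raise ValueError("unexpected packet size")

    for i in range(12):                                    # firing blocks
        base = i * 100
        block_id, azimuth = struct.unpack_from("<HH", data, base)
        upper_bank = (block_id == 0xEEFF)                  # 0xDDFF = lower 32 lasers
        az_deg = azimuth / 100.0                           # hundredths of a degree
        for ch in range(32):
            dist_raw, power = struct.unpack_from("<HB", data, base + 4 + ch * 3)
            dist_m = dist_raw * 0.002                      # 2 mm units (per manual)
            # ...hand (upper_bank, ch, az_deg, dist_m, power) to a cloud builder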

 

 

—Travis Deyle
Somehow it doesn't surprise me that a company that was first known for applying motion-feedback technology to sub-bass speakers is now looking into motion control technology for robots. This is the way I always hoped Velodyne would branch out, but I hope they'll continue to give attention to their range of audiophile and home theater subwoofers as well.
Super interesting
—JORGENICOLAS
Great article! Still sad to see such an interesting piece of technology being wasted on that whining person in the second video.
—m.s.
The video is awesome and you know it, m.s.
—guy
Amazing... Americans using meters!
—roffe
As for the question of how durable these things are... well, they're pretty dense bits of machinery. If you mounted one to something really solid and took a sledgehammer to it, I'd bet that the motor bearings would be fine. The only thing that I think might fly off would be shards of glass, and/or lens fragments.
—scarp

When we were working on that video, we actually discovered a lot of great applications for modern filmmaking that we've been working on putting into films.  I think it's only a matter of time before we can start capturing better results with "non-participating" media like smoke and glass.  On the film we're working on right now for Martin Scorsese, we're capturing most of the sets with lidar and including color values for all the vertices so we can reconstruct not just the XYZ values of geometry in the scene, but also RGB.

 One of the biggest things we wanted to do after shooting that video was set up a lidar system at 24Hz synced with a traditional RGB motion picture camera via beamsplitter so we could capture RGBZ data.  It's possible that we could one day automate a lot of things that are presently done in visual effects with manual labor.

Sadly it's on the back burner for lack of research funds, and we're too damn busy with the work we've already got in front of us. 

Glad to see someone appreciated that video at least on a technical level.  It was a pretty major gamble that we only had 5 weeks to pull off.  And forget trying to explain to the "creatives" what we were doing.  No one really had much of a clue what was going on; they just wanted to know if they'd get something cool at the end.  Props to them for taking the chance.

-ben (VFX Supervisor on that thing)

PS:  Most of the noise and static you see in the performance captures we did with the Geometrics system are the result of shooting through glass that had water drizzling on it.  And when you see chunks of his head flying around, that's because we were whacking the scanner periodically because the director thought it looked too clean and real.  We also ended up decimating the data set to make it look pixelated.  We were recording so much data from his face that it looked like a complete mesh, and with the intensity values applied, it just looked like we shot him with a regular camera in black and white.

PPS: We always use the metric system in VFX. ;) 

—ben grossmann

Would someone be kind enough to point me to a dataset captured with a Velodyne? (Preferably one that goes from simple to complex.)

 

Thanks

—A.S

@ A.S.

A quick Google search for "velodyne data set" turned up one dataset from the University of Osnabruck in Germany (yeah, I had never heard of them either) and another dataset from the University of Washington.  There are probably others too...

—Travis Deyle

This is a great article. One thing I want to know is the MTBF for this type of LIDAR system. Can Velodyne LIDAR sensors run on their own, without any interruption (rebooting or maintenance), for long periods of time? Say, weeks or months? Thanks.

 

—Anonymous

Blip removed your video (actually, your whole account). Can you upload it somewhere else?

—R.G.

@R.G.,

Apparently Blip was undergoing some "shrinking pains"?  This video was actually created and hosted by Scivestor, but it seems their Blip account (and all their videos!) was removed during Blip's downsizing.  Worst of all, Scivestor seems to be defunct.

Thankfully, I try to retain copies of all source material (including videos) when writing Hizook articles for this very reason!  I've uploaded my local copy to YouTube for posterity and embedded it in place of the Blip player in the article.

If anyone from Scivestor has issues with me doing this... please let me know via the contact form.

—Travis Deyle

This is indeed an amazing piece of equipment. Isn't this the camera Google used for creating Street View? Anyway, the brains behind this machine are, I would say, just superficial. Thanks a lot for the read. I would like to know more about LIDARs.

—Kumilapole

Great article! I wonder whether the APDs are also arranged in two groups of 32 each?

The optics seem fantastic here; I hope I can get a look at the lens assembly.

If you know, can you post some pictures? Thanks.

 

—Maodou
