I recently toured the factory here in Santa Barbara where a flash LADAR system is manufactured. Part of the tour included a look at the laser subcomponent construction where Advanced Scientific Concepts builds their own lasers because there are no "off the shelf" lasers available with suitable characteristics. This is how the founder, Roger Stettner, deals with "getting things done". If he can't find what he needs, he makes it himself.
Early LIDAR (Light Detection And Ranging) systems were developed to take advantage of shorter wavelengths than typical radar. Lasers in the ultraviolet, visible and near infrared have advantages in imaging non-metallic objects and small, even molecular, targets. 3D imaging of a scene was accomplished by scanning a pulsed laser across the scene and, for every pulse, measuring the time between the output pulse and the return signal. In this configuration, every pulse gives a direction (where were you aiming the laser?) and a distance (calculated from how long it took to get a signal back). By accumulating enough of these data points, you can build a full 3D scene and store a "cloud" of such points in 3D space in your computer.
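That direction-plus-distance bookkeeping is simple enough to sketch in a few lines of Python. This is only an illustration of the geometry, not any particular system's code; the function name and angle convention are mine.

```python
# Sketch of how a scanned-LIDAR point cloud is built: each pulse
# contributes one (direction, distance) pair. Names are illustrative.
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_to_point(azimuth_rad, elevation_rad, round_trip_s):
    """Convert one pulse's aim direction and round-trip time to an (x, y, z) point."""
    r = C * round_trip_s / 2.0  # the light travels out and back, so halve it
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A target straight ahead whose echo returns after about 133 ns is 20 m away.
x, y, z = pulse_to_point(0.0, 0.0, 2 * 20.0 / C)
```

Accumulating one such point per pulse while sweeping the two angles across the scene is what fills in the point cloud.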
Dr. Stettner and his company, Advanced Scientific Concepts, advanced the art to the next level. In their system, a single pulse from the laser is spread over the entire scene and then imaged by a lens onto an array. They get the entire 3D point cloud for every pulse. By realizing that each pixel of a CCD array corresponds to a small solid angle (a direction from the camera) in a typical imaging camera system, they figured the only thing missing from the information output from such a CCD video camera was the time of flight, or distance to the object point being imaged. That information would be pretty difficult to obtain from a CCD-based camera. Usually, a CCD accumulates light for some integration period (often measured in ms) and the charge developed in each pixel is read out from the array without regard to when that charge was developed during the integration period. The amount of charge, of course, corresponds to the amount of light absorbed. For some monochromatic cameras, a frame rate of 1 kHz is possible, which means reading out the array every ms, with integration times as small as tens of microseconds (µs). But, if you want to be able to tell the difference between a laser light scattering point that is 20 m away and one that is 21 m away (not very good resolution, by the way) then you need to resolve the difference between the times of flight for these two image points: 2×21m/c − 2×20m/c = 2m/c ≈ 6.7 ns.
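The arithmetic behind that timing requirement is worth spelling out, since it drives the whole design. A minimal check (variable names are mine):

```python
# Distinguishing returns from 20 m and 21 m means resolving a
# round-trip time difference of 2 m / c.
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m):
    """Time for a pulse to reach a target and scatter back."""
    return 2.0 * distance_m / C

dt = round_trip_time(21.0) - round_trip_time(20.0)
print(f"{dt * 1e9:.2f} ns")  # about 6.67 ns
```

Compare that to the tens-of-microseconds integration windows of a fast CCD: the timing you need is roughly a thousand times finer than what a conventional readout gives you.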
So, as I see it, the real magic developed by the team at Advanced Scientific Concepts is an imaging array that gives time information for each pixel. Precision for this timing information is in the sub-nanosecond range! ASC's solution is a multifunction ROIC (read-out integrated circuit) based upon both analog and digital processing. What their solution makes possible is a new toy that I would love to play with - a 3D video camera. The images are in purely IR light, so they are monochrome for us humans, but the image we would actually want to see is the 3D model created by the computer from the information fed to it by the flash LIDAR video camera.
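To see how per-pixel timing turns a single flash into a point cloud, here is a sketch assuming a simple pinhole camera model: each pixel fixes a ray direction, and that pixel's time of flight fixes the range along the ray. The focal length, optical center, and array size below are made-up numbers for illustration, not ASC's specifications.

```python
# Sketch of converting one flash-LIDAR frame into a point cloud under a
# pinhole model. All constants here are assumed, illustrative values.
import math

C = 299_792_458.0    # speed of light, m/s
FOCAL_PX = 500.0     # assumed focal length in pixel units
CX, CY = 64.0, 64.0  # assumed optical center of a 128x128 array

def pixel_to_point(i, j, time_of_flight_s):
    """Map one pixel's timing sample to an (x, y, z) point in camera coordinates."""
    r = C * time_of_flight_s / 2.0  # range from round-trip time
    # Unit ray through pixel (i, j) for a pinhole camera.
    dx, dy, dz = (i - CX) / FOCAL_PX, (j - CY) / FOCAL_PX, 1.0
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (r * dx / norm, r * dy / norm, r * dz / norm)

# The center pixel with a ~133 ns return maps to a point 20 m straight ahead.
point = pixel_to_point(64, 64, 2 * 20.0 / C)
```

Run this over every pixel of one frame and you have a full point cloud from a single laser pulse, which is exactly what makes the flash approach a video-rate 3D camera.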
I think the next step for the flash LIDAR video camera is an improvement to the image reconstruction algorithms that would, on the fly, recognize the surfaces represented by the "point clouds" and display a more recognizable image. This won't be an easy task, but I suspect that the team at ASC will "get it done" in the near future.
Note: The acronym LADAR (Laser Detection and Ranging) is often used in military contexts.