The Penultimate Visualization System?
By Art Kalinski, GISP
Last month we looked at old and new providers of oblique imagery. I mentioned what a strong proponent I am of oblique imagery because it’s such a powerful visualization tool, easily comprehended by non-GIS users. My experience with police, firefighters and the Atlanta Regional Commission demonstrates that many first responders and politicians have difficulty reading blueprints, technical drawings or maps, but can visualize an area of interest much faster with oblique imagery.
Jack Maguire, a colleague and GIS Manager for Lexington County, South Carolina, coined a very descriptive phrase. He said that most non-GIS people have “map blindness,” in that they have difficulty comprehending maps even when the maps are merged with ortho imagery. However, those same users have no difficulty getting oriented with an oblique image. (See my July article for a more detailed explanation.) That’s why both Google Earth and MS Bing now include oblique views, and even interactive 3D models, for a growing number of urban areas.
Most oblique imagery data sets are limited to the four cardinal directions plus an ortho view. That’s why I believe 3D models are a notch above: they offer infinitely adjustable oblique views for even better visualization. The oblique views are the key attraction of 3D models. Watch someone using an interactive 3D model and you’ll see they almost always look at multiple oblique views; I’ve never seen a user switch to the ortho view and stay there while moving around a city.
There are many ways to create 3D models, ranging from manually produced models built with CAD/CAM/BIM/GIS programs to fast, simple 3D modeling tools such as Google SketchUp. Over the years there have been many vendors in the business of building 3D models, some extremely detailed and sophisticated. In my opinion the best 3D models being produced are from PLW Modelworks. Their models are very detailed, photorealistic and photo-accurate. There is a precision and “correctness” to their models that is missing from many other models I’ve seen.
Most of their models are built from measurements taken directly from Pictometry metric oblique imagery. The same oblique imagery is then “draped” on each building face resulting in 3D models that are true to life and fully measurable, including length, width, height and even angular measurements from one building roof to another. This YouTube video will give you an appreciation for their models.
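Because the models are metric, the measurements the article mentions come down to simple 3D geometry once you have two model coordinates. The sketch below is purely illustrative, not PLW's software; the roof-corner coordinates are hypothetical values I made up to show how a straight-line distance and a roof-to-roof angular measurement would be computed.

```python
import math

def distance_3d(p, q):
    """Euclidean distance between two 3D points (meters)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def elevation_angle_deg(p, q):
    """Angle above horizontal from point p to point q, in degrees."""
    horizontal = math.hypot(q[0] - p[0], q[1] - p[1])
    return math.degrees(math.atan2(q[2] - p[2], horizontal))

# Hypothetical roof corners (x, y, z) in meters, local coordinates.
roof_a = (0.0, 0.0, 30.0)   # rooftop on one building
roof_b = (40.0, 30.0, 55.0) # rooftop across the street

print(distance_3d(roof_a, roof_b))          # straight-line distance
print(elevation_angle_deg(roof_a, roof_b))  # look-up angle from roof to roof
```

Length, width and height of a single building fall out of the same distance function applied to its own corner points.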
One aspect of PLW models important to first responders and military operators is that no part of any building in their models is cloned, textured or faked. The buildings are draped with the actual building imagery. If all or part of a building is occluded, PLW indicates that with a black “no-data” area that looks like a black shadow. That way operators know that any window or door visible on a building is actually there and measurable.
An example from a PLW model showing a “no-data” area on the building across the street from the highly detailed Milan cathedral.
A recent addition to 3D modeling is Street Factory by Astrium Services, which produces automated 3D models as complex TINs built from existing oblique imagery. The process is advertised as photogrammetrically corrected for high accuracy, with a quick turnaround in the range of several hours. Unlike PLW models, where each building is a separate object in the database, Street Factory models are one continuous surface, requiring extra processing tools to extract individual buildings/features and link them to attributes. See the brochure for additional information. I hope to personally see their system and products soon and will let you know what I learn and observe.
Although PLW and Street Factory models are the state of the art, there are some limitations. It does take time to build the models, ranging from hours to weeks if the area is large and complex. If new imagery has to be captured, the aerial flights can add significantly more time to the entire process. So, for my GIS budget, the ultimate “holy grail” of visualization would be accurate, high-resolution, full-color, interactive and measurable 3D models that are easy to produce and close to real time.
Well, hang on to your surveyor’s helmet; that time has arrived.
Ball Aerospace FLASH LiDAR
For several years, I’ve observed refinements of a technology developed by Ball Aerospace called FLASH LiDAR. Simply put, Ball Aerospace created the ability to capture rapid, continuous LiDAR frames/point clouds and merge them with continuous high-resolution optical imagery to create full-color 3D models in real time. Yes, real-time full motion video resulting in interactive, geo-referenced, metric 3D models.
Shown here are screen shots of the system software showing the LiDAR data colored by height, the optical image captured at the same time, and the resultant full-color 3D model of the merged data in real time.
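The core of that merge step is well understood: each LiDAR point is projected into the simultaneously captured camera frame, and the pixel it lands on supplies the point's color. The sketch below shows the general point-cloud colorization technique with a basic pinhole camera model; it is my own illustration, not Ball's pipeline, and the focal length, principal point and tiny "image" are hypothetical.

```python
def project_point(point, focal_px, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates.
    Assumes the point is already in the camera frame, z-axis forward."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera; cannot be colored from this frame
    u = focal_px * x / z + cx
    v = focal_px * y / z + cy
    return (u, v)

def colorize(points, image, focal_px, cx, cy):
    """Attach an RGB color to each LiDAR point that projects into the image."""
    h, w = len(image), len(image[0])
    colored = []
    for p in points:
        uv = project_point(p, focal_px, cx, cy)
        if uv is None:
            continue
        u, v = int(round(uv[0])), int(round(uv[1]))
        if 0 <= u < w and 0 <= v < h:
            colored.append((p, image[v][u]))
    return colored

# Hypothetical 4x4 "image" whose pixel value encodes its own (row, col).
image = [[(r, c, 0) for c in range(4)] for r in range(4)]
points = [(0.0, 0.0, 5.0),   # in front of the camera
          (0.0, 0.0, -1.0)]  # behind the camera, skipped
result = colorize(points, image, focal_px=2.0, cx=2.0, cy=2.0)
print(result)  # one colored point, colored from the center pixel
```

Points that fall outside the frame, or behind the camera, simply stay uncolored until another view covers them, which is consistent with the article's point that the model is assembled from multiple views.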
Screenshot of the system software showing the resultant full-color 3D model of the merged data in real time.
The first time I saw the system was at GEOINT 2010, where the Ball engineers had their FLASH LiDAR running in sync with a video camera, creating continuous 3D fused images. That first demonstration was somewhat crude, but I could see the significant potential. They’ve continued to refine the system to the point where the models now look extremely good. This is one technology that needs to be seen as video clips, which you can access through the Ball Aerospace website.
Since the capture process is fully automated, complexity is not an issue: simple buildings and complex trees are modeled at the same speed. Since the resultant 3D model is assembled from multiple views, trees look like trees and not like bushes. Additionally, since the very accurate LiDAR point cloud is an intrinsic part of the capture process, relative and absolute positional accuracy suitable for targeting is continuously maintained. Another benefit of the integrated system design is that mounting the camera pod is not complex, nor does the aircraft have to be modified. Installation is quick and easy on large or small fixed-wing aircraft and helicopters.
The optical sensor can be an RGB, IR, low-light, night-vision or multi-spectral camera. The resultant models can be down-linked to ground computers or handheld devices for real-time viewing and analysis.
According to Roy Nelson, Ball’s Senior Advanced Systems manager, FLASH LiDAR is tailor-made for time-critical 3D mapping for critical missions, enhanced situational awareness, battlefield characterization, tactical mission planning and improved targeting. For emergency responders it can help with disaster response planning and event forensics. Roy also cited a discussion he had with an EOC manager, who indicated that the real-time models could be a valuable tool for communicating with the public via television, kiosks or the Internet. Since the real-time 3D/oblique images are easily comprehended by the public, he could show the actual progress of a fire or flood and communicate evacuation needs and routes to the public.
So, what will be the ultimate word in visualization? I saw two possibilities at recent GEOINT conferences. First, immersive virtual reality and augmented reality keep improving and are making deep inroads in many different applications. Second, Zebra Imaging, producers of compelling 3D holograms, may eventually have the real “killer” visualization product. Their ZScape holographic motion displays are full motion holographic 3D video displays that are still in the early stages of development. I can easily imagine where this Star Wars technology will be in five years when combined with real-time full motion 3D models.