OpenGeo has recently released OpenGeo Suite 3.0. Slashgeo sat down with Rolando Peñate, the Product Manager for the OpenGeo Suite, to discuss this new release. Here are the results of our interview:
Slashgeo: What are some of the most exciting features available in the OpenGeo Suite 3.0 release?
Rolando: We’re most excited about bringing spatial processing to the OpenGeo Suite. With processing, our customers can work with larger datasets and perform just-in-time analysis with fewer bottlenecks than a desktop-based GIS workflow allows. We believe that exposing processing on the web will give IT professionals the flexibility to solve unique problems in innovative ways: by directly invoking existing WPS processes, writing new processes in common scripting languages, using rendering transformations to style data, or even in ways we haven’t yet considered.
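As a rough sketch of what "directly invoking a WPS process" involves, the snippet below builds a WPS 1.0.0 Execute request for GeoServer's JTS buffer process using only the Python standard library. The process name and input names follow GeoServer's bundled JTS processes; the exact identifiers available depend on your deployment, so treat them as assumptions to verify against your server's GetCapabilities response.

```python
# Sketch: constructing a WPS 1.0.0 Execute request document.
# Process name "JTS:buffer" and its inputs "geom"/"distance" are taken from
# GeoServer's bundled JTS process set; verify against your own server.
from xml.etree import ElementTree as ET

WPS = "http://www.opengis.net/wps/1.0.0"
OWS = "http://www.opengis.net/ows/1.1"

def build_execute_request(process_id, inputs):
    """Return a WPS Execute XML document invoking `process_id` with
    literal `inputs` (a dict mapping input name to value)."""
    ET.register_namespace("wps", WPS)
    ET.register_namespace("ows", OWS)
    root = ET.Element(f"{{{WPS}}}Execute",
                      {"service": "WPS", "version": "1.0.0"})
    ET.SubElement(root, f"{{{OWS}}}Identifier").text = process_id
    data_inputs = ET.SubElement(root, f"{{{WPS}}}DataInputs")
    for name, value in inputs.items():
        inp = ET.SubElement(data_inputs, f"{{{WPS}}}Input")
        ET.SubElement(inp, f"{{{OWS}}}Identifier").text = name
        data = ET.SubElement(inp, f"{{{WPS}}}Data")
        ET.SubElement(data, f"{{{WPS}}}LiteralData").text = str(value)
    return ET.tostring(root, encoding="unicode")

# Buffer a WKT point by 10 units; the resulting XML would be POSTed to the
# server's WPS endpoint (e.g. /geoserver/wps on a default install).
xml_doc = build_execute_request("JTS:buffer",
                                {"geom": "POINT(0 0)", "distance": 10})
```

In practice a client library such as OWSLib can handle the request/response cycle, but the document above is all the protocol fundamentally requires.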
One place we’ve implemented this is a prototype for the USGS National Hydrography Dataset that allows editing of data while validating against strict topology rules across multiple layers. Their current tools require up to six hours for a single edit, but our prototype drastically reduced the effort—down to about one minute—by using a web-based editor.
We are also working to make processing operations easier in browser-based visualizations. Rendering transformations enable just-in-time use of any WPS process as part of a layer’s style. The process is applied on the fly to only the area being viewed, rather than the full data set (as a desktop-based GIS would), and provides immediate visual feedback. For example, NASA’s Global Learning and Observations to Benefit the Environment (GLOBE) program leverages this feature for fast, dynamic presentation of interpolated surfaces derived from environmental measurements collected at schools around the world. Similarly, one could create topographic maps in real time by applying a process to a digital elevation model and defining a style for the resulting contour lines.
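To make the contour example concrete, here is a sketch of what such a style can look like in GeoServer's SLD, using the gs:Contour rendering transformation. The contour levels and stroke color are illustrative values, not part of the original interview:

```xml
<FeatureTypeStyle>
  <!-- Contour the DEM on the fly; only the viewed area is processed -->
  <Transformation>
    <ogc:Function name="gs:Contour">
      <ogc:Function name="parameter">
        <ogc:Literal>data</ogc:Literal>
      </ogc:Function>
      <ogc:Function name="parameter">
        <ogc:Literal>levels</ogc:Literal>
        <ogc:Literal>100</ogc:Literal>
        <ogc:Literal>200</ogc:Literal>
        <ogc:Literal>300</ogc:Literal>
      </ogc:Function>
    </ogc:Function>
  </Transformation>
  <Rule>
    <!-- Style the contour lines produced by the transformation -->
    <LineSymbolizer>
      <Stroke>
        <CssParameter name="stroke">#8B4513</CssParameter>
        <CssParameter name="stroke-width">0.5</CssParameter>
      </Stroke>
    </LineSymbolizer>
  </Rule>
</FeatureTypeStyle>
```

Because the transformation runs inside the rendering chain, the contours are recomputed for each requested map extent rather than materialized as a derived dataset.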
Slashgeo: How does OpenGeo Suite 3.0 measure up to comparable proprietary solutions?
Rolando: One of our goals for the OpenGeo Suite is to solve many of the same problems as proprietary solutions, though not necessarily by providing the same tools. Our product and proprietary solutions can both publish data from a variety of enterprise databases, including Oracle Spatial and Microsoft SQL Server. Both offer web services for querying and editing features, publishing and caching map tiles, and running processes on spatial data. Both provide tools for building web mapping applications or mobile applications.
The primary differences are in approach. While proprietary solutions use a client-server architecture that has been adapted to accommodate web services, the OpenGeo Suite was built for the web from the ground up and incorporates best practices from the IT field. Rather than build a kitchen-sink product with many highly specialized features, we focus on developing a powerful base of functionality and providing tools for building applications that solve specific problems. While traditional GIS requires ‘certified’ experts with extensive training to pull data into expensive desktop tools and produce derivative data, the OpenGeo Suite exposes similarly powerful functionality on the web in ways that integrate with the tools IT professionals use every day.
Slashgeo: How does cloud computing factor into OpenGeo’s plans for the future?
Rolando: Many of our customers already use our software in the cloud, and we offer support for those who wish to deploy the OpenGeo Suite on Amazon Web Services (AWS). As cloud computing steadily gains acceptance, we will continue to ensure that our products are reliable and scalable. We’re keen to expand these offerings, so stay tuned for updates on our plans.
Slashgeo: What major trends in the geospatial technology arena do you see developing in the next 5 years, and how will those trends impact OpenGeo?
Rolando: Much as source code is managed with distributed version control, we anticipate a future where spatial data will live in a collaborative infrastructure that can track the data’s origin and evolution. Although spatial data is really just one aspect of the greater information technology landscape for any given enterprise, it has traditionally been siloed and forced through specialized workflows. As with other types of ‘big data’, we foresee increasing difficulty storing, managing, and maintaining spatial data across enterprises. Thanks in large part to geospatial crowd-sourcing efforts like OpenStreetMap and Ushahidi, we are already seeing a significant shift in how geospatial data is conceived of, stored, and distributed.
Adopting the distributed version control model pioneered for source code will play a critical role in alleviating the difficulties that have historically plagued users of geospatial data. A distributed version control model can better address problems such as collaboration between users or organizations, maintenance of authoritative data, and work in offline, low-bandwidth, or intermittently connected environments. Just as access to source code enables a developer to change software by adding to or changing its functionality and appearance, access to underlying geospatial data enables cartographers and analysts to fix mistakes, conduct analysis and modeling, and update datasets. We foresee true collaboration around geospatial information having profound implications for users of geospatial data, and we have begun investing in solutions to support it.
Slashgeo: Is OpenGeo an example of a successful business model built around open source software? If so, what can other organizations learn from this business model?
Rolando: Given how much we’ve grown in the last several years, we believe that a business built around open source software can not only succeed but thrive. The appetite for open source software is growing; companies like Red Hat and JBoss have proven that an open source development model does not inhibit growth, but can often enable it.
In a recent presentation at the Texas GIS Forum, Paul Ramsey delivered a compelling introduction to open source and why it’s being adopted faster than ever. Among the many takeaways was the simple fact that startups love open source. Why? Because startups cannot afford artificial limits on their growth. While the cost of computing hardware falls every year, the cost of proprietary software licenses does not. If you’re using software licensed per CPU or core, the primary driver of scaling cost is the software itself, and that math does not benefit the consumer. Larger enterprises, too, are being strangled by the immense license costs they pay year over year. Today we’re all expected to do more with less, and with open source functionality meeting or exceeding that of proprietary solutions, license costs are quickly being targeted as an obvious place to cut.
While open source software lacks explicit license costs, all software has maintenance, operating, and other related costs. Unsupported open source software shifts these costs to the end user, which isn’t an issue if the end user or their enterprise has expertise in the relevant software and is willing to pay for support with staff time. Commercial open source provides the option to save time and reduce direct labor costs by outsourcing maintenance and support to experts. OpenGeo steps in when enterprises lack the time, resources, or internal expertise to maintain open source software. OpenGeo’s mission is to lower costs while continually enhancing the functionality of open source software, and our customers value that highly. We expand on these ideas in our white paper, “The Value of the OpenGeo Suite”, which outlines our model and why it benefits all parties involved.
We appreciate the participation of OpenGeo, and of Rolando, in this interview. We hope you enjoyed reading it.