Social Media Strategy for Product Launches

September 7th, 2010

Where does social media play into a Product Manager’s go-to-market strategy?  Earlier this year Pragmatic Marketing released their 2009 Annual Product Management and Marketing Survey.  While the survey covers many aspects of product management, I noticed that it included a section on social media.  In particular, one question asked: “How much influence is social media on your go-to-market activities?”  The results:

  • 51% Considered it but it was not a significant factor.
  • 40% None.
  • 9% Major part of the program.

With regard to Twitter, 62% of respondents did not use it.  The survey also includes a number of sample answers, which you can check out for yourself on the survey results page.  These cover a broad spectrum of opinions, from optimistic to highly pessimistic.

So where does social media fit in with Product Management?  Recently the folks at Brainmates shared the results of a panel discussion they hosted on the challenges and opportunities that social media poses for product managers and marketers.  There are some great points (that I won’t rehash here), and I’d recommend checking out the post.  Instead of discussing social media and Product Management in general, I’d like to focus on social media and product launches.  Here are a few considerations that come to mind:

  • Planning: how does social media play into your product launch plan?  For example, will you use any social media channels to discuss the launch (blogs/videos/forums/groups/micromedia/etc.)?
  • Feedback: along with client visits, conferences, phone calls, and email, social media is another venue to connect with your user community.  While your feedback strategy may vary depending on how mainstream or niche your product offering is, social media can be a source for immediate feedback on your product launch.
  • Engagement: Measuring feedback is great when you have an online community, but what do you do when that community starts reaching out to you?  What if your customers hate your new product or update?  Adding social media to your launch plan translates into engaging with your community.  So what does that mean?  Twitter isn’t an RSS feed for your product updates – use it (and other channels) to engage in discussions, thank your community, and open up new lines of communication into your organization.

Circling back to the question about where social media fits into product launch strategy: what can a well-executed strategy look like?  The Social Media Examiner recently published a post on Cisco’s social media launch strategy.  While cost savings may not be a measurable outcome in all cases, the post does highlight the broad spectrum of social media channels a launch can span.  Getting back to the original question posed by the Pragmatic Marketing survey: is social media impacting your launch strategy?

Photo Courtesy of Kryten


Mapping Census Tweets

March 1st, 2010

Steven Romalewski of the CUNY Mapping Service at the Center for Urban Research reached out after my previous post, which outlined UMapper’s approach for mapping real-time Twitter posts.  The CUNY Mapping Service recently launched the Census Hard to Count 2010 site, designed to assist in increasing participation in the 2010 census.  The application displays areas determined to be “hard to count” (methodology here), along with several layers of thematic data.

The site was recently updated with a Twitter feed that uses the #census and #census2010 hashtags.  This serves two purposes: it acts as a geo-referenced feed for census information, and it helps increase communication between census advocates and citizens interested in the 2010 census.  It’s also a great real-world application of georeferenced Twitter data.  While the percentage of users that have enabled geotagging is estimated to be under 1%, the total number of Twitter users in the USA is estimated to be 18 million – so even a small fraction of users still represents a large quantity of georeferenced data.  Geotagging is also still relatively new to Twitter, so the adoption rate may grow as more people turn it on and as more client applications add the option to geotag posts.
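
To give a sense of how little plumbing a feed like this needs, here is a minimal Python sketch that filters a batch of tweets down to geotagged posts carrying one of the census hashtags.  The dictionary layout loosely mirrors the Twitter JSON of the time (a text field plus an optional geo point), but the field names and sample data are assumptions for illustration only.

```python
# Minimal sketch: keep only geotagged tweets that mention a tracked hashtag.
# The dictionary layout loosely mirrors Twitter's JSON of the era ("text"
# plus an optional "geo" point); treat the exact field names as assumptions.

HASHTAGS = ("#census", "#census2010")

def geotagged_census_tweets(tweets):
    """Yield (lat, lon, text) for tweets that are geotagged and on-topic."""
    for tweet in tweets:
        text = tweet.get("text", "")
        geo = tweet.get("geo")  # None when the author has not enabled geotagging
        if geo is None:
            continue
        if not any(tag in text.lower() for tag in HASHTAGS):
            continue
        lat, lon = geo["coordinates"]
        yield lat, lon, text

if __name__ == "__main__":
    sample = [
        {"text": "Filled out my #census2010 form today!",
         "geo": {"type": "Point", "coordinates": [40.71, -74.01]}},
        {"text": "#census reminder: forms arrive in March", "geo": None},
    ]
    for lat, lon, text in geotagged_census_tweets(sample):
        print("%.2f, %.2f  %s" % (lat, lon, text))
```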

To display the Twitter feed in the Hard to Count site, you just need to click on the Twitter tab in the upper left portion of the application (see screenshot to the left).  You can also choose the hashtag you would like to display, and the relevant posts are shown on the side panel as well as on the map.  Additionally, you can select a post on the left and choose the “Show on Map” option, which will display the selected post on the map.

It’s great to see a real-world use case for geotagged micromedia data – so feel free to check it out at www.censushardtocountmaps.org.  Hopefully this is a sign of more to come!


Mapping Realtime Tweets

February 25th, 2010

I had the opportunity to check out the new update from UMapper today, which adds the ability to layer real-time tweets over your maps.  While there are a lot of real-time tweet-mapping applications out there, the interesting thing about UMapper’s implementation is that you can create a map with a specific search term built into the map metadata.

See below for an example: this map focuses on the Washington, DC area, showing tweets containing the word “Toyota.”  You can also pan and zoom to view posts from any area of interest.

UMapper has come a long way since the last time I looked at it, around a year ago.  The map above uses OpenStreetMap as a base layer, and there are quite a few improvements: different base data providers, several templates, and a flexible user permissions system that allows for collaborative mapping.  While I didn’t include anything fancy in the map above, it is also possible to mash in other data sources (e.g. KML, GeoRSS, GPX), which makes it easy to design and share maps.

The ability to embed maps, such as the one above, is also nice.  For example, it would be great for event monitoring – where people all over the world may be using a Twitter hashtag to talk about the event, but you are only interested in tweets from the specific event location.  While geotagged tweets have been viewable in Google Maps for a while now, I think the UMapper approach presents an attractive alternative for homing in on specific topics of interest.


Customer Tours and Product Management

February 15th, 2010

A few weeks ago I had the good fortune of going on my first customer tour in quite a while.  I’ve done a lot of these over the years, but it is a refreshing eye-opener to meet customers for the first time in a new role.  While phone calls and email discussions are valuable, there’s no substitute for learning first-hand how customers are interacting with your product.  I want to touch on a few key areas that live visits have helped me with, namely context, feedback, and insights.

A useful component of an in-person visit is an understanding of context.  For example, during the recent trip I was able to visit major corporations along with small and medium-sized businesses.  The needs of organizations of these sizes can vary greatly, and on-site visits can help you understand how barriers to success differ between organizations of varying size.  For example, a large enterprise deployment may require a comprehensive plan for user adoption, initial training, and ongoing refreshers (e.g. training on new features, or workshops to get new employees up to speed quickly).  There may also be several groups within an organization that have different needs and may need to use the system in different ways.  This differs from smaller organizations, where the number of use cases may not be as broad and the business needs for the software can be met by a small number of power users.

Feedback and insights are also valuable reasons for customer visits.  Here are a few considerations:

  • How are customers using the software (actually watch)?  There’s no replacement for seeing how people interact with software first-hand, as it may not be in ways you would expect.
  • Points of pain: are there any “but if it only did this….” moments?
  • Learn about the role your software plays at a customer site: how does it add value to their business?  What problems does it solve for them?  What are the usage patterns (e.g. casual usage, project-based, a constant part of a key workflow, etc.)?
  • Ideas generation: great ideas can come from listening and brainstorming with customers.
  • Communications: how is the relationship between the customer and your organization?  Are they getting the level of service they desire?  Are they getting timely updates about new features?  Visits are an opportunity to learn how customers want to interact with your organization.

While events such as trade shows, workshops, and seminars are also great places to interact with customers, going on-site and understanding the full experience through a customer’s eyes can provide a great deal of value.  There’s also another great reason for customer tours: relationships.  Much like we (product management) appreciate feedback, customers tend to appreciate all the tips, ideas, and insights into the software that you can provide.  And when you’re not there to sell them anything, the trust factor can be high.


Web-based CAD With Project Butterfly

February 1st, 2010

After reading about Autodesk’s Project Butterfly, I took some time to give it a whirl.  Project Butterfly is an on-demand system designed for users that want to experience AutoCAD through a web browser.  Being a technology preview with limited functionality at present, it isn’t a product available for sale – but you can try it out for free and see how the experience differs from the usual desktop (on-premise) experience.

I’m not a heavy CAD user, but there are several reasons why I think Project Butterfly is compelling.

1) The fact that it is an on-demand solution:

  • No installation required.  A secondary benefit is that updates are pushed out rather than downloaded and installed.
  • Platform independence: enables accessibility on desktops, laptops (netbooks!), and a number of operating systems.
  • Project Butterfly utilizes Amazon Web Services.  This enables online data storage, but also allows users to download models to local computers.

2) SaaS solutions are nothing new, but I’m not aware of a precedent for a solidly entrenched geo-related desktop application being offered as an on-demand service in addition to the traditional desktop ownership model.  I wouldn’t bet on the decline of AutoCAD as a desktop solution anytime soon, but Project Butterfly provides an attractive glimpse of future possibilities in terms of CAD and geospatial data production and editing.  It will be interesting to see if and when Project Butterfly can graduate into a commercial product!

3) Online data hosting:

Data Download Capability Via Amazon Web Services

Files are stored using the Amazon Simple Storage Service.  It is possible to upload your own data (e.g. I was easily able to add a JPG image as a backdrop), or download data in a number of formats.  These include DWG, PNG, JPG or Zip (with Xrefs).

4) Data production and editing tools: I played around with the geospatial sample data in Project Butterfly.  While the current functionality is limited (it is a tech preview after all), the available features are presented in a nice user interface that is easy to navigate.  Basic drawing and editing tools are available, as well as modes such as snapping and ortho.  It is also possible to upload (import) local data.

Editing in Project Butterfly

5) Collaboration: I didn’t actually try it out, but there is an option to invite others in and to set permissions for each invitee.  These permissions include whether an invitee can edit or download the data – great for sharing a drawing with someone without actually allowing them to change or export it.

It’s great to see a major vendor moving in this direction.  From a product perspective, an application like this should be able to dramatically reduce the typical software release cycle (e.g. 6 months or more for a heavy desktop application), and create greater efficiencies in terms of product rollouts, technical support, removing the need to ship traditional media discs, and more…


What’s In a New Version Number?

January 20th, 2010

Now that 2010 has arrived, press releases and announcements for new “2010” and “10” version numbers are appearing with increasing frequency.

Why the change?  Why didn’t we see “2007” version numbers a few years ago?  While there are a lot of 2010 versions coming out, an excellent post on “The Amazing World of Version Numbers” suggests the trend started way back in 1966 with Fortran 66 (although there are even earlier suggestions in the comments).  However, it didn’t register as a common naming practice until the release of Windows 95.

There are many methodologies for software version numbers.  Most of them operate under the premise that a big jump indicates a major release (e.g. moving from 8.0 to 9.0) while a minor increment indicates a minor release or perhaps even a maintenance release (e.g. 9.0 to 9.1).
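
To make that premise concrete, here is a small Python sketch that classifies the jump between two traditional major.minor version strings.  The rules encode only the common convention described above – they are not any particular vendor’s policy.

```python
# Sketch: classify the jump between two "major.minor" version strings using
# the common convention (major bump = major release, minor bump = minor or
# maintenance release). Illustrative only; real schemes vary by vendor.

def parse(version):
    major, _, minor = version.partition(".")
    return int(major), int(minor or 0)

def classify_release(old, new):
    old_major, old_minor = parse(old)
    new_major, new_minor = parse(new)
    if new_major > old_major:
        return "major release"
    if new_minor > old_minor:
        return "minor or maintenance release"
    return "no change (or a relabelled release)"

print(classify_release("8.0", "9.0"))  # -> major release
print(classify_release("9.0", "9.1"))  # -> minor or maintenance release
```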

The connotations surrounding “2010” are plentiful: a new year, a new decade, and hopefully a new start after the biggest economic crisis of our time.  It’s an attractive reason to switch from an ordered numbering system to a date-based one.  It also represents a shift from engineering-based ordering to a marketing-based system.  While a 2010 label may give a product a fresh look, vendors switching to a date-based method should ensure the release matches expectations in terms of value for customers.  A minor update with little value repackaged as a “2010 Edition” has the potential to damage brand credibility.  On the flip side, a release packed full of valuable capabilities can warrant such a change and deliver some extra oomph during the launch.  Like many product management decisions, software version methodologies deserve careful consideration when planning a release.

Image Credit: Marcin Wichary


The Trouble With Location-Based Social Media…

January 19th, 2010


The big news in the location-enabled social media biz this past week was a new update from Yelp.  As described in the TechCrunch post, the latest iPhone update from Yelp now allows check-ins.  This is a great development: check-ins allow people to share their location, connect, and say what they think about the places they check in to.  All of these things are good.  They allow us to learn about particular spaces, share our own information and experiences about them, and provide a scheme that makes us feel good about “checking in.”  At first blush, these systems work fine…

But here’s the problem: locked-in environments.  The first location-based system I started using was Gowalla.  Why?  Because, like many other people, I don’t live in NYC or Los Angeles.  I live in a small city that wasn’t on the initial Foursquare list.  That’s fine: I started using Gowalla because it doesn’t care what city you’re in and allows you to create “spots” anywhere.  Then, in a recent development, Foursquare allowed check-ins from any city.  OK – great news, and I started to try it out.  But then came Yelp – yet another system that allows me to check in.  So I now have three systems that I can check in on.  All of them will let me update Twitter or Facebook, but they are still independent of each other.  I can’t add a “place” to Foursquare and Gowalla at the same time.  Choosing one system means ignoring another.  And by investing my time in one system, I’ll be less inclined to join the next system that comes along allowing check-ins.

Sooooo, here’s a call for a universal check-in system.  Why should I have to choose between Yelp, Foursquare, and Gowalla?  Shouldn’t I be able to check in on my phone and have that data shared with every location-based social media service I’ve subscribed to?  Interoperability would push these products to differentiate on features other than the ability to check in.  And I suppose that’s the good thing about such a dynamic market space: greater competition and adoption will (hopefully) reward providers that support and promote interoperability.
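
The plumbing for this wouldn’t be complicated – the hard part is the business incentive.  As a purely hypothetical Python sketch, a “universal check-in” could fan a single check-in out to whichever services a user has connected; the service classes and their check_in methods below are invented for illustration and don’t correspond to any real API.

```python
# Hypothetical sketch of a "universal check-in": one check-in fanned out to
# every service a user has connected. The CheckinService interface and the
# concrete clients are invented for illustration; none of this maps to a
# real Foursquare, Gowalla, or Yelp API.

class CheckinService:
    name = "abstract"

    def check_in(self, venue, lat, lon, note=""):
        raise NotImplementedError

class FoursquareClient(CheckinService):
    name = "foursquare"

    def check_in(self, venue, lat, lon, note=""):
        print("[foursquare] checked in at %s (%.4f, %.4f)" % (venue, lat, lon))

class GowallaClient(CheckinService):
    name = "gowalla"

    def check_in(self, venue, lat, lon, note=""):
        print("[gowalla] checked in at %s (%.4f, %.4f)" % (venue, lat, lon))

def universal_check_in(services, venue, lat, lon, note=""):
    """Send a single check-in to every connected service."""
    for service in services:
        service.check_in(venue, lat, lon, note)

universal_check_in([FoursquareClient(), GowallaClient()],
                   "Local Coffee Shop", 38.8951, -77.0364)
```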


Haiti Earthquake Mapping

January 14th, 2010

I initially learned about MapAction during an Infoterra event in the UK about a year and a half ago.  Impressed by their work, I’ve been following their activities ever since.  After reading that they had a team en route to Haiti to assist in the humanitarian effort there, I took a look at their Field Guide to Humanitarian Mapping.  It’s an excellent introduction to mapping methodologies and GIS on a budget, with a heavy focus on concepts, data collection, and specific workflows.

Looking through their list of Open Source GIS applications, I thought it might be interesting to run through a small project: gather data for the affected parts of Haiti and think about data sources, workflows, and considerations in the field.

A few observations at the start, and then I’ll walk through the process:

  • Satellite Imagery: Very difficult to obtain…  GeoEye has graciously released post-earthquake imagery, but it is still difficult to get the full-resolution processed imagery (as opposed to an ungeoreferenced JPEG).  I think that remote sensing satellite operators would want to (a) process post-catastrophe imagery as quickly as possible, and (b) get the imagery into the public domain as soon as possible.  It’s great that we can see post-earthquake imagery as a network link in Google Earth, but people on the ground are not necessarily going to have internet access.  I know we cannot rely on private companies to provide free data as a service, but I do believe there is a need to acquire imagery quickly and get it into the public domain.
  • SRTM terrain data is a tremendous resource, but getting at the data can be tricky.  More on that below…
  • OpenStreetMap only needs a one-word description: fantastic.  I read earlier today that there have been over 400 edits made to the Haiti map since the earthquake.  People around the world are donating their time to help the cause.  Another great thing is that it is extremely easy to check out data and then pull it into another application – more on that below.

The Goal

Nothing fancy: I just wanted to see how long it would take, and how challenging it would be, to pull together base mapping data and view it all together.  No real geoprocessing – just data acquisition and setup using open source software and publicly available data.  This is similar to the real-world workflow one could use for in-the-field mapping applications.  With this software and data configuration, it would be possible to begin updating data in the field, performing analysis (e.g. slope analysis for areas that could have a greater potential for mudslides), and providing spatial resources to other humanitarian groups in the field.

Ingredients

Software: no better time to try out the new Quantum GIS release, 1.4.0 ‘Enceladus.’  It’s a desktop GIS application, and the download and installation process is quick and easy.

Terrain: I decided to use SRTM as a terrain layer and primary base data layer.  Why?  Height information is valuable when combined with vector data.  For example: a road network layered on top of an orthophoto won’t tell you that the road is on a steep slope – and terrain will.
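
As a quick example of the kind of terrain analysis mentioned in the goal above (e.g. flagging steep slopes with higher mudslide potential), the gdaldem utility that ships with GDAL can derive a slope raster from the SRTM tile.  The sketch below simply shells out to it; the file names are placeholders, and for SRTM data in geographic coordinates a scale factor is typically supplied.

```python
# Sketch: derive a slope raster from the SRTM GeoTIFF by shelling out to the
# gdaldem command-line tool (part of GDAL, which installs alongside QGIS).
# File names are placeholders. For SRTM tiles in geographic coordinates a
# scale factor is typically supplied so the slope comes out in degrees.
import subprocess

srtm_tile = "srtm_21_09.tif"        # placeholder path to the downloaded tile
slope_out = "srtm_21_09_slope.tif"

subprocess.check_call([
    "gdaldem", "slope", srtm_tile, slope_out,
    "-s", "111120",                 # approx. metres per degree at the equator
])
print("Wrote slope raster to %s" % slope_out)
```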

Vector data: OpenStreetMap data was an obvious choice here.  The coverage is pretty good, and it is also being rapidly updated.

Imagery: I thought about downloading some Landsat imagery but took a pass: for an urban application, the medium/low resolution publicly available data isn’t very helpful.  What’s needed is 0.5 meter resolution imagery from the latest generation of sensors, which isn’t yet public at the time of writing.

Process

Here’s a step-by-step overview of the process:

1) Download and install Quantum GIS from www.qgis.org.  No extra instructions needed: this is easy.

2) Find the appropriate SRTM data.  As much as I love working with it, this is my pet peeve with SRTM: it’s available from multiple host sites, with multiple processing levels, and it can be quite a challenge to find the data you need.  Maybe it’s just me, but every time I grab some I’m left thinking about how much easier it could be to access.  In this case I first went to the Consortium for Spatial Information site and downloaded the Google KML link, displayed below. 

Looking at the KML link identifies srtm_21_09 as the file required for Haiti.  Instead of navigating the maze of SRTM sites, I just Googled the filename, which took me here: http://collections.sdsc.edu/dac2/telascience/telascience_data/elevation/cgiar_srtm_v4/tiff/.  I then downloaded the appropriate TIF file.
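
If you’d rather script the download, something like the sketch below works.  Note that the exact archive name inside that directory is an assumption on my part (I grabbed the file manually), so check it against the directory listing first.

```python
# Sketch: fetch the SRTM tile for Haiti from the directory listed above.
# The archive name is an assumption based on the tile ID (srtm_21_09);
# verify it against the directory listing before relying on it.
from urllib.request import urlretrieve

BASE = ("http://collections.sdsc.edu/dac2/telascience/telascience_data/"
        "elevation/cgiar_srtm_v4/tiff/")
TILE = "srtm_21_09.zip"  # assumed file name for the srtm_21_09 GeoTIFF archive

urlretrieve(BASE + TILE, TILE)
print("Downloaded %s" % TILE)
```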

3) Acquire vector data.  This was very straightforward: I’ve never tried to download or use OpenStreetMap data offline, but fortunately it is a fairly simple process.  I went to www.openstreetmap.org, zoomed into Port-au-Prince, and then selected the Export button at the top.  I exported the data in the OpenStreetMap XML format.
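
The same extract can also be scripted against the standard OpenStreetMap API 0.6 “map” call by requesting a bounding box directly; the Port-au-Prince bounding box below is approximate, and very large boxes will be rejected by the server.

```python
# Sketch: pull the same OpenStreetMap extract via the standard API 0.6 "map"
# call. The Port-au-Prince bounding box below is approximate, and the server
# rejects boxes that are too large.
from urllib.request import urlretrieve

# bbox order: min longitude, min latitude, max longitude, max latitude
bbox = (-72.40, 18.50, -72.25, 18.60)
url = "http://api.openstreetmap.org/api/0.6/map?bbox=%.2f,%.2f,%.2f,%.2f" % bbox

urlretrieve(url, "port_au_prince.osm")
print("Saved OSM extract to port_au_prince.osm")
```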

4) The next step is to start assembling the data in QGIS.  I launched the application and loaded the SRTM data.  If you’ve used a desktop GIS application before, QGIS is fairly intuitive.

The image above shows the SRTM data with a MinMax contrast stretch applied.
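
For those who prefer scripting, the same raster load can be done from the QGIS Python console.  This sketch reflects the QGIS 1.x API (the map layer registry), so treat the exact calls as version-dependent and the file path as a placeholder.

```python
# Run from the QGIS Python console. Reflects the QGIS 1.x API (map layer
# registry); later releases moved this to QgsProject, so treat the exact
# calls as version-dependent. The file path is a placeholder.
from qgis.core import QgsRasterLayer, QgsMapLayerRegistry

srtm = QgsRasterLayer("srtm_21_09.tif", "SRTM terrain")
if srtm.isValid():
    QgsMapLayerRegistry.instance().addMapLayer(srtm)
else:
    print("Could not load the SRTM layer - check the file path")
```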

5) I had to use “Manage Plugins” and load the OpenStreetMap plugin prior to adding the OSM data (Plugins > OpenStreetMap > Load OSM from file).  Now it is possible to view the vectors over the terrain data.

6) Once the OSM data is loaded, we’re ready for field mapping.  Note that it is possible to query the OSM data as well.  The image below shows a query on a hospital, which is identified in the table on the right.

This workflow isn’t very sophisticated, but it does demonstrate the ability to get up and running relatively quickly.  SRTM and OSM data are both invaluable resources – ideal for humanitarian work in disaster areas.  As for the timing: if you know where to get the data, I think the simple example above could be completed in under an hour.  That includes software installation, data downloads, and then assembling the data in a GIS.


Pricing and Product Management

January 12th, 2010

Most software vendors know that pricing can be a complex beast to tackle.  There are so many strategies and perspectives on pricing that you could fill a book.  Wait a second, there ARE books on pricing…  I recently came across “Software Product Management and Pricing” and gave it a read.  The latter part of the book has some valuable insights, and I particularly liked this one:

“Customer expectations must also be managed: it makes economic sense for a vendor to sacrifice some growth for a longer term commitment; it makes no sense for a vendor to cut price for a renewal of business with no growth.”

The quote pertains to enterprise offerings, where the authors discuss special bids for large accounts.  The point is that price negotiation and discounting strategies should take the larger context of the business relationship into consideration.  Sacrificing long-term revenue for short-term gain is something that happens all the time.  While it may help with quarterly results, it rarely pays off in the long run.

Full reference: Kittlaus, Hans-Bernd, and Peter Clough. Software Product Management and Pricing. Springer-Verlag New York Inc, 2009. Print.


Hosted Imagery and Web-Enabled Data Generation

January 10th, 2010

Last October I reviewed Google’s new Building Maker application, and just recently I gave it another whirl.  While there are some new technical improvements (freeform polygons, new block options, and six new cities), it’s the concepts and implications of the system that continue to impress me…

3D Feature Extraction in Google's Building Maker

A few more thoughts on the implications:

  • One of the best things about the system (and hopefully geospatial vendors are thinking about this as well) is that it’s 100% web-based.  All you require is a browser plug-in.  Compare this with other government or private mid to large-scale mapping efforts: these typically involve setting up local image servers in each office, and then shipping imagery around on hard drives.  Not ideal, but these are the realities when a single raw image can be 1GB in size.  While the geospatial market has a lot of image server solutions, not many organizations are delving into the imagery hosting/warehousing/serving business.  The current model is largely based on setting up your own infrastructure and hosting environment.
  • I have a feeling this may represent the beginning of a shift in how large/mid-scale mapping is performed.  While Building Maker is a rudimentary toolset for 3D feature extraction, the idea of delivering browser-based tools instead of desktop apps will open up a lot of opportunities.  For one thing, Building Maker is a great proof of concept for web-enabled mapping tools that don’t require thick desktop software installations.
  • While a SOCET SET or PRO600 user may find Building Maker tools to be relatively basic, we shouldn’t underestimate the level of complexity in developing such a solution.  Mapping technology pre-dated software, and the commercial tools that are currently available have a high level of sophistication.  Hence, I don’t have any sort of expectation for a web-based replacement anytime soon.  The use of oblique imagery instead of creating some sort of stereo WMS viewer is a clever move by Google though.

Automatic Building Textures

Certainly interested in thoughts on this – and if you haven’t tried it yet, give Building Maker a whirl!
