Today I had the pleasure of talking with the Special Libraries Association’s Competitive Intelligence Division about tricks and Internet tools for researching private companies. I always enjoy presenting to SLA audiences because they are so engaged and tend to teach me new tricks.
Here are the slides from that presentation. I know the CI Division will have replay details shortly. If you have information you would like to add to the discussion, or questions you would like to ask, please feel free to do so in the comments on this blog entry.
In preparation for my presentation on RSS at the SLA 2009 conference, I created a set of screencasts demonstrating simple, straightforward RSS workflows.
This is the simplest workflow for finding RSS feeds for standard sources such as on-line publications and blogs:
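For the curious, the mechanism behind this workflow is RSS auto-discovery: publications advertise their feeds with "link rel=alternate" tags in the page header, which is how browsers and readers spot them automatically. Here is a minimal Python sketch of that discovery step; the page HTML and feed URL are made-up examples, not any real publication.

```python
from html.parser import HTMLParser

# RSS auto-discovery: a page's <head> advertises its feeds with
# <link rel="alternate"> tags carrying an RSS or Atom MIME type.
class FeedFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in ("application/rss+xml",
                                      "application/atom+xml")):
            self.feeds.append(a.get("href"))

# A made-up page header, for illustration only.
page = """<html><head>
<title>Example Publication</title>
<link rel="alternate" type="application/rss+xml"
      title="Headlines" href="https://example.com/feed.xml">
</head><body>...</body></html>"""

finder = FeedFinder()
finder.feed(page)
print(finder.feeds)  # ['https://example.com/feed.xml']
```

In practice your RSS reader does exactly this when you paste in a site's home page address instead of a feed address.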
Next I demonstrate how simple it is to create a custom RSS feed from a search. In this case I want to ride Twitter’s coattails and show the potential of custom RSS feeds built on Twitter searches to give you near real-time tracking of company reputation or of developing events.
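There is not much magic to a search-based feed: it is just a query string appended to a search endpoint that returns results as RSS or Atom. A quick Python sketch of building such a subscription URL; the search.twitter.com endpoint reflects Twitter's search interface as it stood at the time, and the company name in the query is a made-up example.

```python
from urllib.parse import urlencode

def search_feed_url(base, query, **params):
    """Build a subscription URL for a search-backed RSS/Atom feed."""
    params["q"] = query
    return base + "?" + urlencode(params)

# Twitter's search exposed results as an Atom feed; the same pattern
# works for any search engine that offers RSS output of its results.
url = search_feed_url("http://search.twitter.com/search.atom",
                      '"Acme Corp"', lang="en")
print(url)  # http://search.twitter.com/search.atom?lang=en&q=%22Acme+Corp%22
```

Drop a URL like that into your aggregator and every new matching tweet arrives as a feed item, which is what makes the near real-time tracking possible.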
One of the reasons I am such a big fan of Google Reader is the ease with which users can share the items they find interesting (witness the ever-changing list of my latest shared items from Google Reader that graces the right side of this very blog). For the information professional, this ease has real potential to facilitate team collaboration on research projects or to power information products such as corporate news portals.
I’m not entirely thrilled with the video quality of the screencasts as they made their way from QuickTime files on my Mac to Flash-based videos on YouTube. Your viewing experience will be improved if you expand the video to full screen.
Posted in CI
Tagged research, rss, SLA
A week from this coming Monday I will be reprising my presentation on how competitive intelligence professionals can best use RSS as a low-cost way to cast a wide research net. I’ve updated the material to discuss the potential of Twitter to track sentiment, issues and breaking events in near real-time.
I’ve also updated the material to highlight one of my favorite features of Google Reader: the ease with which users can share news items of interest, and how the RSS feed of a user’s shared items can simplify collaboration and the publishing of relevant news items. Anybody who tracks my shared Google Reader items will quickly see that I am a promiscuous sharer of items related to telecom, competitive intelligence, technology, politics, economics and other topics. Between this and Twitter, this blog has really become more of an aggregation point for me (as, I suppose, has my Facebook page) than a site I write for frequently (and never as frequently as I would like).
As much as I think Google Reader is a great tool and the best RSS aggregator around, there is one feature that is sorely missing. The SmartList feature in NetNewsWire (a Macintosh RSS reader) is a sophisticated way to filter all of the news items in your RSS aggregator based on the occurrence of user-defined keywords, including some Boolean functionality.
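For readers who want to approximate SmartLists outside NetNewsWire, the underlying idea is just a keyword filter with simple Boolean logic applied to your aggregated items. A rough Python sketch follows; the function name, rule structure and headlines are my own illustrations, not NetNewsWire's actual implementation.

```python
# SmartList-style filtering: keep items that match ANY "wanted" term
# and NONE of the "blocked" terms -- a simple Boolean rule.
def smart_filter(items, any_of, none_of=()):
    def matches(text):
        t = text.lower()
        return (any(term.lower() in t for term in any_of)
                and not any(term.lower() in t for term in none_of))
    return [item for item in items if matches(item)]

# Made-up headlines standing in for aggregated feed items.
headlines = [
    "Acme Corp announces fiber buildout",
    "Acme Corp quarterly earnings call",
    "Local bakery wins award",
]
result = smart_filter(headlines, any_of=["acme"], none_of=["earnings"])
print(result)  # ['Acme Corp announces fiber buildout']
```

Run across hundreds of subscriptions, a filter like this is what turns a fire hose of feeds into a focused monitoring tool.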
Feel free to take a look at my slides and let me know what you think. I would actually appreciate feedback in the next few days that might help me deliver an even better presentation to the SLA audience.
This weekend I’ve been working feverishly to recreate my presentation on “Using the Internet to Research Private Companies” for my upcoming SCIP webinar. I’ve been applying Andrew Abela’s Extreme Presentation method, which produces a more coherent “story” as well as much more attractive and meaningful graphic slides. I used this approach for my presentation at the Frost & Sullivan Competitive Intelligence MindXChange in January and was very happy with the results. One of my goals for this presentation is to encourage researchers to move from thinking about requests for specific information toward focusing on the motivating decision they are trying to inform.
I have been looking at this concept recently, based on observations I’ve been pulling together about the cognitive biases that occur when competitive intelligence tasking is focused strictly on finding specific information about the market or a competitor, versus inquiries framed around decision support. While I’m not going to go into this topic in detail in the webinar, I did take some time to capture some quick thoughts on the cognitive biases I have observed behind information-driven inquiry from CI customers:
- Over-estimate the level of specificity required
- Over-estimate the level of precision required/possible
- Over-value quantitative information
- Over-estimate the need for “up-to-the-minute” facts over historical trends
- Value the tactical and devalue the strategic
- Under-estimate ethical considerations, up to and including advocating for industrial espionage
- Under-estimate cost
- Under-estimate timeframe required for information collection
- Emphasize adherence to requirements over results
- Emphasize information over analysis, reject external opinion
- Over-reliance on individual pieces of data or information, often from unqualified or unverified sources
- Significant confirmation bias: seek specific information to prove intuitive conclusions or justify decisions already made
- Over-emphasize the need to move quickly over confirmation of accuracy of information or quality of analysis
Admittedly there is a lot of redundancy and overlap in that list. As I refine the concept for a new project and really get down to specific cases and examples I am sure the list will be both narrowed and focused. In a nutshell I relate these cognitive biases back to the tyranny of the urgent over the important.
Good CI managers and practitioners will always be challenged to push back against the information-driven approach to client inquiry. The sometimes subjective nature of “good” collection and “quality” analysis actually gives me a degree of sympathy for the client who expresses his or her requests for support in terms of access to information. It is much easier to answer the question “Did I get what I requested?” if I express my request in terms of tangible information. The decisions that need to be made are often very sensitive in nature, and the desire to compartmentalize those considerations is certainly justifiable. All of these very understandable preferences lead us to a sub-optimal destination where practitioner time and effort are wasted delivering something that doesn’t really address the client’s need.
New CI practices and employees effectively have to earn permission to be decision support consultants by going above and beyond traditional information-driven requests. They must also somehow do this without falling into the trap of becoming so good at meeting information-driven expectations that they become typecast as purveyors of information, as opposed to filling the true decision support role that CI is really intended to play. Key to doing this is anticipating the decision requirement that drives the information request: do “well enough” on the information, but go above and beyond by providing it in a form that also delivers some quick-win analysis.
There’s a lot more here, and probably more than I can go into in one blog entry. I’m particularly interested in seeing if any of my fellow CI practitioners and vendors have any thoughts, experiences or cases along these lines of moving a client from information-driven requests to an inquiry framework based on decision support.
On February 25 at 12:00 noon Eastern I will be reprising an updated version of the very successful webinar I delivered for the Society of Competitive Intelligence Professionals on using the Internet to research private companies. Interested parties can register for the event at the SCIP web site. I’m particularly excited about some of the updates, in which I’ll detail new methods and tools for researching private companies using social networks.
I really enjoy delivering training such as this and sharing some of the secrets that I use for researching private companies of all sizes. Conducting research on small, private companies is much more challenging than researching large, public companies. The obvious distinction is the availability of SEC and other securities filings for public companies that contain a wealth of information about operations and performance. Using secondary sources to research private companies requires a lot of creativity.
The principal message that I try to convey in each of my webinars or presentations about Internet research is to have a plan. Being smart about how you are going to spend your time, what sources you are going to use and what you can realistically expect to find on the Internet is critical to success. Sometimes stakeholders need to be reminded that not everything is available on The Google. My first rule of thumb for Internet research is that if there is no reason for a person to put a piece of information on-line it won’t be on-line. People and companies make information available for self-serving reasons such as promotion, recruiting or because they need to comply with legal requirements.
Finally, secondary research and OSINT do not stand alone. Primary collection and HUMINT are critical for gaining real insight about private companies, and that really cannot be avoided. Secondary sources can provide some great guidance on the best primary resources you should be interviewing, what questions you should be asking them and how you should evaluate the information they provide to you. No amount of primary or secondary collection should stand alone without analysis: what does the information mean to us, what might come of all this and what should we do about it?
I hope readers of this blog will be able to join us for this webinar. I’m going to try to find an opportunity to present the webinar live to a group of local attendees to offer some face-to-face interaction. Watch this space for updates.
Once again I am stealing shamelessly from Slashdot…
When I first arrived in Washington, DC to attend university, I would see many of my classmates who had internships on the Hill carrying around reports with “CRS” on the cover. It was then that I first discovered the Congressional Research Service, an organization with a $100 million budget to support the research requirements of members of Congress.
Reports published by the CRS enter into the public domain once released by a member of Congress. A number of libraries have been building collections of these reports. The wonderful folks at the Center for Democracy & Technology have been kind enough to create a searchable compendium of these reports as part of the Open CRS project.
CRS reports provide some great background for researchers, particularly those trying to get a handle on a new subject area or a sense of what facts legislators are reading. The quality of the reports I’ve been looking at today might not be on par with industry-specific analysts, but that is mostly because the reports are clearly written for lay audiences such as legislators and their staff. One really great aspect of the reports I am discovering is the citations, which often point to freely available deep web sources.
The Open CRS site even comes complete with its own RSS feed, an excellent touch.