“Swiftriver is a custom built framework and server application that is responsible for collecting and collating content from multiple sources (such as SMS, Twitter etc.). We have a suite of Web Services; each one specifically designed to add value to content (an example of this would be our NLP Service). Then finally we have our Web Applications (such as Sweeper) that allow the content processed by Swiftriver and the Web Services to be visualised and used by various end users. In short, when you download one of our software packages (such as our recent Sweeper V0.3 release) you are getting: A web application called Sweeper, that sits over a server framework called Swiftriver that communicates with cloud hosted Web Services.”
It’s important for a user to understand that the platform is multi-faceted: a remote server tags and stores data, while software tools feed information into that server and extract it for insertion directly into a dataset on a local computer. So what do I need to be able to tell my IT support colleague? What are the keywords she or he needs when setting up a filtering system using the Swiftriver platform? To start with, how does the Swiftriver platform determine where and which data to pull in? Matthew explained how this is done:
“Our core platform has several points of easy extension and one of these is the plug-in system we call Parsers. Each parser knows how to communicate with one type of source and how to process data coming from that source. Examples of existing parsers are: the ‘Twitter Search Parser’, the ‘Frontline SMS Parser’ and the ‘Google News Parser’. The Parser plug-in architecture is very simple to programme for, meaning that new Parsers for any new source are simple to produce and then leverage. It would therefore be a relatively simple task for a developer to create a Parser that ‘understood’ how to use discrete data such as geo-coordinates (or in fact any other type of data) and knew how to receive that data from a source such as an SMS Gateway. Once written, the Parser can literally be dropped into the correct folder of the software install and this new Channel (combination of source and data type) would instantly become available for use.”
A Parser does what its name describes: it parses data based on the end user’s parameters. If we only want data that is west of 29 degrees east and east of 25 degrees east longitude, and south of 7 degrees south and north of 10 degrees south latitude, it will only allow data from that block of territory to enter the Swiftriver platform. If we also only want that geographic data to be more than one hour old, but less than 48 hours old from a set time, it will parse out data that is too new or too old so that only temporally relevant data goes to the platform.
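To make that concrete, here is a minimal sketch of what such a filtering Parser might look like. This is illustrative only: the class and method names are hypothetical, not the actual Swiftriver plug-in API, but the logic mirrors the example above (a bounding box between 25 and 29 degrees east and 7 to 10 degrees south, and a 1-to-48-hour age window).

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a Swiftriver-style Parser plug-in.
# GeoSmsParser, accepts() and parse() are illustrative names,
# not part of the real Swiftriver codebase.
class GeoSmsParser:
    """Pass through only items inside a bounding box and a time window."""

    def __init__(self, west, east, south, north, min_age, max_age):
        self.west, self.east = west, east      # longitudes, degrees east
        self.south, self.north = south, north  # latitudes, degrees north
        self.min_age, self.max_age = min_age, max_age  # timedeltas

    def accepts(self, item, now):
        in_box = (self.west <= item["lon"] <= self.east
                  and self.south <= item["lat"] <= self.north)
        age = now - item["timestamp"]
        in_window = self.min_age <= age <= self.max_age
        return in_box and in_window

    def parse(self, items, now):
        # Only geographically and temporally relevant items go downstream.
        return [i for i in items if self.accepts(i, now)]

# The bounding box and time window from the example above.
parser = GeoSmsParser(west=25.0, east=29.0, south=-10.0, north=-7.0,
                      min_age=timedelta(hours=1), max_age=timedelta(hours=48))
now = datetime(2011, 9, 1, 12, 0)
items = [
    {"lon": 27.5, "lat": -8.2, "timestamp": now - timedelta(hours=3)},    # kept
    {"lon": 27.5, "lat": -8.2, "timestamp": now - timedelta(minutes=10)}, # too new
    {"lon": 31.0, "lat": -8.2, "timestamp": now - timedelta(hours=3)},    # outside box
]
relevant = parser.parse(items, now)  # only the first item survives
```

A real Parser would also handle talking to its source (an SMS gateway, a search API), but the filtering step is essentially this simple.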
But what about getting data into a system like Excel or SPSS so that it can be analyzed? Like my colleagues, much of what I’ve seen of Swiftriver is focused on managing Twitter or RSS content. Again, Matthew gives an explanation of the mechanism that takes your data from the Swiftriver platform and plugs it into a dataset.
“Our Core platform has several points of easy extension and one of these is the plug-in system we call Reactor Turbines. These Reactor Turbines react to system events and have the ability to change, control or redirect the flow of content within Swiftriver. For example, we already have an ‘Ushahidi Reactor Turbine’ that is responsible for sending content items that have been collected and processed by Swiftriver (and by processed I mean that they have been passed to our Web Services for Auto NLP Tagging, Auto GEOLocation etc.) directly to the Ushahidi mapping platform. It is very possible to link Swiftriver with any other data driven platform or in fact any other application of any kind.”
The Reactor Turbine pulls data off the Swiftriver platform after it has been tagged and puts it into a working format for analysis. It could be feeding Excel, SPSS or another data program. While this is going on, the Parsers are collecting data from your preferred information streams. Much of this work is done on an online ‘cloud’ platform, which makes it particularly useful to peacekeeping and governance professionals who are highly mobile and working in low-infrastructure environments.
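As a rough illustration of that export step, the sketch below writes tagged content items out as a CSV file, the kind of format Excel or SPSS can open directly. Again, the function name and item fields are hypothetical, not the real Reactor Turbine API; the point is simply that a turbine reacts to processed content and redirects it somewhere useful.

```python
import csv
import io

# Hypothetical sketch of a Reactor-Turbine-style export: take items that
# Swiftriver has already tagged and write them as spreadsheet-ready CSV.
# export_to_csv and the field names are illustrative assumptions.
def export_to_csv(items, fileobj):
    writer = csv.DictWriter(
        fileobj, fieldnames=["source", "text", "tags", "lat", "lon"])
    writer.writeheader()
    for item in items:
        row = dict(item)
        row["tags"] = ";".join(item["tags"])  # flatten tag list for spreadsheets
        writer.writerow(row)

buf = io.StringIO()
export_to_csv(
    [{"source": "sms", "text": "road blocked", "tags": ["infrastructure"],
      "lat": -8.2, "lon": 27.5}],
    buf,
)
```

In practice the turbine would be triggered by a system event (new processed content arriving) rather than called by hand, and could just as easily POST the items to another platform such as Ushahidi.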
Because of the volume of data that can be pulled in and organized quickly, professionals working in peacekeeping and governance development can begin to see how events are shaping up around them, and their impact on those events. As technology expands to play a larger role in these fields, web based data gathering technology will be a powerful tool for achieving operational success and developing a better understanding of our impact in fragile social settings.
Charles Martin-Shields Charles is in charge of TechChange’s New York City and Private Sector development. His work focuses on developing applications and training programs that can help private sector entities invest in developing countries in a way that is both profitable and socially responsible. Prior to TechChange, Charles worked for the U.S. Institute of Peace in the Education and Training program and later the Academy for International Conflict Management and Peacebuilding. He can be reached at charles [at] techchange.org.
- Tech Tools and Skills for Emergency Management. (Sept 5-23)
- Global Innovations for Digital Organizing: New Media Tactics for Democratic Change. (Sept 26-Oct 14)
- Mobiles for International Development: New Platforms for Public Health Finance and Education. (Oct 17-Nov 4)