A screen of Raytheon's RIOT analytics software

Multinational defence contractor Raytheon has developed analytics software called RIOT that they claim is capable of tracking people’s movements and predicting their future behaviour by “mining” data from social media websites such as Facebook, Twitter, and Foursquare.

No-one should be surprised that the intelligence community mines this publicly available data on members of the public, and the courts have consistently found it legal for them to do so, as users share the information openly and freely with the world. Social media has become part of digital life for a large part of the world’s population, and when a person is under surveillance the police and/or secret services would be remiss not to track their movements online. Where RIOT (Rapid Information Overlay Technology) differs, however, is that it continually analyses huge swathes of the digital population, including millions of innocent people, collecting the messy data from social media and using it to predict future events.
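Raytheon has not published RIOT’s internals, but the general technique – predicting where a person will be from the timestamps of their public check-ins – can be sketched in a few lines. The records and venue names below are invented purely for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical public check-in records: (venue, ISO timestamp).
# All venues and dates here are invented for illustration.
checkins = [
    ("gym",    "2013-01-07T06:05:00"),  # a Monday
    ("gym",    "2013-01-14T06:10:00"),  # a Monday
    ("gym",    "2013-01-21T06:02:00"),  # a Monday
    ("office", "2013-01-07T09:00:00"),
    ("cafe",   "2013-01-12T13:30:00"),  # a Saturday
]

def most_likely_venue(checkins, weekday, hour):
    """Return the venue most often visited in a given (weekday, hour) slot.

    weekday follows datetime.weekday(): Monday = 0 ... Sunday = 6.
    """
    counts = Counter()
    for venue, ts in checkins:
        t = datetime.fromisoformat(ts)
        if t.weekday() == weekday and t.hour == hour:
            counts[venue] += 1
    return counts.most_common(1)[0][0] if counts else None

# Where is this person likely to be on a Monday at 6am?
print(most_likely_venue(checkins, weekday=0, hour=6))  # prints "gym"
```

This naive frequency count captures the basic idea – routine behaviour leaves a predictable trail in public data – though a real system would clearly need far richer modelling than this.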

Big data has become a watchword among information technology startups recently, with many companies entering the ring with ideas for analysing the huge datasets surrounding health, transport, or communications to produce useful conclusions that result in greater efficiency or more sales. RIOT differs from these other big data projects because it uses people’s social media data without their knowledge or direct consent to create profiles and find patterns that stand out and may relate to future crimes.

Preventing “future crime” is not a new idea: it was the basis of the 2002 blockbuster film Minority Report, with Tom Cruise’s “Pre-Crime” division. However, just as in that film, people are not purely rational and logical objects. Finding the causes of our actions is notoriously difficult, as correlation ≠ causation. Police and security services do need to make use of every tool at their disposal to prevent crimes and terrorist acts from occurring, but if a crime or act has not yet occurred, then proving in the courts that it would have happened were it not for their intervention is very difficult – especially if the intervention was prompted solely by a computer algorithm.

Moreover, whilst the open data from social media services does offer a huge amount of information about location and group interactions, it is limited to what people are openly sharing. Many people do “overshare” on social media, with a large proportion of Facebook users leaving their profiles and photographs freely accessible to the world. However, one would assume that a person or group plotting a crime or terrorist act would be more careful about what they share. Yes, people were arrested after the London riots for sharing photographs of themselves with their looted goods, but these are not the large-scale destructive crimes that interest the intelligence community. The social media dataset may be huge, but its value for mining to predict crime is limited. The data would be very useful for marketing agencies looking for deeper information on their current or potential clients, and possibly for governments wanting big-picture maps of how people interact – but when targeted down to a single user, unless they are over-sharing, the data is sparse at best.

Digital privacy and human rights groups are up in arms about this new technology, and for good reason: it is a piece of software that tracks innocent people all the time without their knowledge or permission. However, without access to private data such as email, Skype, and instant messenger conversations, the data is rarely relevant to preventing crimes at all.


About Author


TechFruit is a UK-focused blog with news and analysis covering the latest technology, gadgets, science, and start-ups.