Durbin says organizations can reduce the effect of misinformation through proactive means: monitoring what others say about the organization online and tracking changes made to internal information to provide early warning signals.
Automated misinformation gains instant credibility
Advances in artificial intelligence (AI) personas allow for the creation of chatbots that will soon be indistinguishable from humans. Attackers will be able to use these chatbots to spread misinformation targeting commercial organizations: Without ever breaching an organization's digital boundary, an attacker could damage that organization's reputation by spreading convincing misinformation about its working practices or products. A single attacker could deploy hundreds of chatbots, each spreading malicious information and rumors over social media and news sites.
Attacks won't just target reputation. Fake news can also be used to manipulate a company's share price. German payments company Wirecard AG found that out the hard way in February of last year, when a fake report 'detailed' fraudulent activities by the company. Although the report was proven fake, the company's share price plummeted and took three months to recover.
You won't be able to stop chatbots from disseminating misinformation about your company, but recognizing the threat and planning your incident response can mitigate the damage.
To protect your organization, the ISF recommends you do the following:
- Build scenarios covering the spread of misinformation into your overall incident management process.
- Extend monitoring of social media before and after big organizational announcements or events.
- Combine forces with industry bodies to lobby governments and regulators to investigate ways of identifying and prosecuting those spreading fake news and misinformation.
- Consider increasing existing social media output to proactively counter the spread of misinformation (e.g., encourage employees to spread legitimate news and report suspicious posts).
Falsified information compromises performance
Organizations are increasingly reliant on data to drive their decision-making, and that means criminals and competitors can add information distortion to their toolbox of threats. The ISF believes three types of attack on the integrity of information will become commonplace over the next two years:
- Distorting big data sets used by analytics systems.
- Manipulating financial records and reports, or bank account details.
- Modifying information before leaking it.
For instance, consider a utility company that analyzes data from smart meters to balance the amount of electricity it generates against current demand. An attacker could manipulate smart meter data to falsely show high demand. Such manipulation could cause a surge in electricity generation. If that surge is significant enough, it could cause the electricity supply grid to fail.
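To make the smart-meter scenario concrete, here is a minimal sketch (all numbers invented, standard library only) of how tampering with even a small fraction of meter readings can sharply inflate a naive average-demand estimate, while a robust statistic such as the median barely moves:

```python
import random

# Hypothetical illustration: 1,000 household smart-meter readings,
# roughly 1.0-3.0 kW of demand each. Values are invented.
random.seed(42)
readings = [random.uniform(1.0, 3.0) for _ in range(1000)]

# An attacker compromises just 5% of the meters and reports
# falsely inflated demand of 50 kW per compromised meter.
tampered = readings.copy()
for i in range(50):
    tampered[i] = 50.0

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

print(f"honest mean demand:   {mean(readings):.2f} kW")
print(f"tampered mean demand: {mean(tampered):.2f} kW")   # pulled sharply upward
print(f"tampered median:      {median(tampered):.2f} kW")  # barely changes
```

A grid operator scaling generation to the tampered mean would significantly over-generate, which is the failure mode described above; comparing mean against median (or other robust checks) is one simple way such distortion could be flagged.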
Bogus or distorted data could also significantly affect pharmaceutical research, which is increasingly turning to big data analytics to improve the speed of modeling and trialing new drugs.
Durbin says organizations need to start preparing now to ensure information risk assessments address the likelihood and impact of attacks on integrity.