It's the old "garbage in, garbage out" problem, said David Molnar, IEEE member and senior researcher at Microsoft.
Security pros need to have a strategy in place for figuring out whether an attacker is attempting to trick an AI into making wrong decisions, he said. "If you did make the wrong decision based on bad data, how long would it take you to find out?"
Human judgment will play a big role here, said Elizabeth Lawler, CEO and co-founder at security firm Conjur. "There's no magic bullet here."
In particular, companies need to be careful not to set up systems, get them running, and then forget about them.
"Things drift over time," she said.
Checking to make sure that systems haven't become miscalibrated can be a routine and tedious job, especially if employees have forgotten how the systems work, and companies might not be able to afford multiple systems that approach problems from different directions to check up on one another.
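One way to make that routine check less dependent on memory is to automate it. The sketch below is a minimal, illustrative drift check using only made-up numbers and a hypothetical `drift_alert` helper, not any vendor's actual tooling: it compares a baseline window of model confidence scores against a recent window and flags the system for human review when the recent average shifts well outside the baseline's normal spread.

```python
# Minimal drift-monitoring sketch (hypothetical helper and data).
from statistics import mean, stdev

def drift_alert(baseline, recent, threshold=3.0):
    """Return True when recent scores have drifted from the baseline.

    Uses a simple z-score of the recent mean against the baseline
    distribution -- a stand-in for production-grade drift tests.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    z = abs(mean(recent) - mu) / sigma
    return z > threshold

# Made-up example: a stable window vs. a drifted (or poisoned) window.
baseline = [0.90, 0.92, 0.88, 0.91, 0.89, 0.90, 0.93, 0.87]
stable   = [0.91, 0.89, 0.90, 0.92]
drifted  = [0.55, 0.60, 0.58, 0.52]

print(drift_alert(baseline, stable))   # False -- no alert
print(drift_alert(baseline, drifted))  # True  -- flag for review
```

A check like this doesn't replace human judgment; it just makes "things drift over time" something a person is prompted to look at rather than something they have to remember to look for.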
This is a good area in which to consider a managed security services provider, she added, one with expertise in those particular systems, and plenty of opportunities to learn the tricks that the bad guys are using to get around them.
"A managed service would be awesome for this particular domain, because you'd have a broader set of data than [from] just one institution's," she said.
Although the machine learning systems might be new, the tactics used against them are evolutions of tried-and-true methods, said Dale Meredith, author and cybersecurity trainer at Pluralsight.
"Flooding and poisoning -- that's what they did with routers and firewalls," he said.
Another old-school technique that will continue to work is social engineering, he added.
It doesn't matter how good the AI is if there's someone in headquarters who can be talked into flipping a switch and turning it off.
"The users are always going to be the weakest link no matter what we put in place," he said.