"Logistically, it's time consuming and painful to interrupt your authoring process to get feedback from those tests," he said.
And it's almost impossible to get good results during these early stages, he added.
"Until you build the software product and understand the relationship between components, you're just guessing," he said.
It's the difference between static and dynamic analysis, he explained.
For example, if a developer calls a particular open source library, the exact version of the library that's used isn't locked in until the build takes place, once the package managers resolve all the dependencies -- and the dependencies of those dependencies.
"We can get much better insights and can tell you exactly where in your software you're linking to vulnerable methods or vulnerable libraries," he said. "And you're not going to do a build every time you type a word. It just wouldn't be efficient."
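The version-pinning point above can be sketched in a few lines. This is a minimal, illustrative toy (the version numbers, range syntax, and resolver logic here are invented for the example); real package managers such as npm or pip implement full semver and transitive-dependency resolution, which is why the concrete version only becomes known at build time.

```python
def satisfies(version: str, spec: str) -> bool:
    """Check a version against a simple '>=X.Y,<A.B' range (illustrative only)."""
    lo_s, hi_s = spec.split(",")
    lo = tuple(int(p) for p in lo_s.lstrip(">=").split("."))
    hi = tuple(int(p) for p in hi_s.lstrip("<").split("."))
    v = tuple(int(p) for p in version.split("."))
    return lo <= v < hi

def resolve(spec: str, available: list[str]) -> str:
    """Pick the highest available version matching the declared range,
    mimicking what a package manager does at build/install time."""
    candidates = [v for v in available if satisfies(v, spec)]
    return max(candidates, key=lambda s: tuple(int(p) for p in s.split(".")))

# The developer declares only a range in the manifest...
declared = ">=1.2,<2.0"
# ...but the registry's contents decide which concrete version lands.
registry = ["1.1.0", "1.2.3", "1.4.0", "2.0.1"]
print(resolve(declared, registry))  # → 1.4.0
```

Because the resolved version depends on what the registry offers on the day of the build, a static scan of the source alone can only guess which library code will actually ship, which is the gap the build-time analysis described above tries to close.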
But a tool that checks for problems during the writing stage doesn't have to catch all potential vulnerabilities, said Cahill, the ESG analyst.
"This is just the first step," he said. "There are no silver bullets in security, but you can at least reduce the mistakes and the attack surface area along the way."
The optimal approach is to use each kind of security tool at the point where it works best, he said.
"Static and dynamic analysis should happen at the appropriate stage of the software life cycle," he said. "There should be scanning done in each environment. If you layer, you can dramatically reduce the security attack surface in production."