Artificial intelligence (AI) is transforming the world, weaving itself into ever more aspects of our lives. The latest frontier is programming, where the Lightning Cat AI model detects vulnerabilities in smart contracts with an F1-score of roughly 94%.
In a recent paper, five AI experts delved into the application of deep learning in detecting vulnerabilities in smart contracts.
Currently, developers rely on human review, which can be tedious, and on static analysis tools, which tend to produce false positives and false negatives because they rely on predefined rules and cannot analyze complex semantics. Additionally, new code patterns usually render the predefined rules obsolete.

Deep learning methods don't require predefined detection rules and can adapt by learning new vulnerability features.
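The rule-based limitation described above is easy to illustrate. The toy scanner below (not from the paper; the rule names and Solidity snippet are invented for this example) matches fixed text patterns the way a naive static analyzer might, and flags a line that merely mentions `tx.origin` in a comment: a false positive that no amount of additional patterns can eliminate, because the rule has no notion of context or semantics.

```python
import re

# Toy pattern-based scanner in the spirit of rule-driven static analysis.
# The rule list and contract snippet are illustrative only.
RULES = {
    "tx-origin-auth": re.compile(r"tx\.origin"),
    "delegatecall": re.compile(r"\.delegatecall\("),
}

def scan(source: str):
    """Return (rule_name, line_number) for every pattern hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((rule, lineno))
    return findings

snippet = """\
// Never use tx.origin for authentication
function withdraw() public {
    require(msg.sender == owner);
}
"""

# The comment on line 1 triggers the rule even though the code is safe.
print(scan(snippet))  # → [('tx-origin-auth', 1)]
```

A learned model, by contrast, classifies code from features it has inferred from labeled examples, so it is not bound to a fixed pattern list.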
Enter Lightning Cat, a deep learning-based model. The paper revealed that Optimized-CodeBERT, the best-performing of the models that make up Lightning Cat, beat the best existing solution for detecting code vulnerabilities by at least 11%.
“Based on the experimental evaluation results, the Lightning Cat proposed in this paper shows better detection performance than other vulnerability detection tools. [It] achieves a recall rate of 93.55%, which is 11.85% higher than Slither, a precision rate of 96.77%, and an f1-score of 93.53%,” the researchers stated.
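For readers unfamiliar with the metrics in the quote: precision measures how many flagged contracts were truly vulnerable, recall measures how many vulnerable contracts were flagged, and the F1-score is their harmonic mean. The sketch below uses invented confusion-matrix counts chosen to land near the paper's figures; it is arithmetic illustration only, not the paper's evaluation data.

```python
# Illustrative counts only: chosen so precision/recall roughly match
# the 96.77% / 93.55% figures quoted from the paper.
tp, fp, fn = 29, 1, 2  # true positives, false positives, false negatives

precision = tp / (tp + fp)   # of flagged contracts, how many were truly vulnerable
recall = tp / (tp + fn)      # of vulnerable contracts, how many were flagged
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}")
# → precision=0.9667 recall=0.9355 f1=0.9508
```

Note that the harmonic mean penalizes imbalance: a detector that flags everything gets perfect recall but poor precision, and its F1-score drops accordingly.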
Lightning Cat's scope can be expanded beyond smart contracts to detect vulnerabilities in other types of code. It also collects data on new and emerging vulnerabilities and updates its model parameters to tackle the new threats.
But while it can be an indispensable asset to developers, it can be lethal in the hands of malicious actors. These parties can use Lightning Cat to detect undisclosed vulnerabilities in smart contracts and launch attacks before the developers have a chance to patch up their code.
To prevent such attacks, developers should conduct regular human audits, the researchers advise.
But while AI is making inroads into programming, experts say it's still miles away from working independently. As with general queries, AI tends to miss obvious answers and sometimes invents solutions to non-existent problems.
According to blockchain security company CertiK, AI should be used to assist developers and can't be trusted to build on its own. The company's Chief Security Officer sounded a warning in September over amateur developers delegating their work to AI, as the resulting products would be easy for any competent attacker to penetrate.