The line between human and machine-generated threats is starting to blur. Aqua Nautilus recently uncovered a malware campaign that hints at this unsettling shift. Koske, a sophisticated Linux threat, shows clear signs of AI-assisted development, likely with help from a large language model. With modular payloads, evasive rootkits, and delivery through weaponized image files, Koske represents a new breed of persistent and adaptable malware built for one purpose: cryptomining. It is a warning of what is to come.
This is the part that leaves me speechless:
Indicators of AI-Generated Code
Several script components suggest LLM involvement:
- Verbose, well-structured comments and modularity
- Best-practice logic flow with defensive scripting habits
- Obfuscated authorship using Serbian phrases and neutralized syntax

Such code may have been designed to appear “generic”, frustrating attribution and analysis.
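To make the first two indicators concrete, here is a hypothetical sketch (not actual Koske code) of the kind of verbose, modular, defensively written shell that the report treats as an LLM-style fingerprint; the function names and comments are my own illustration:

```shell
#!/usr/bin/env bash
# Hypothetical example of "LLM-style" shell: verbose comments,
# small single-purpose functions, and defensive error handling.

set -euo pipefail   # exit on error, undefined variables, or pipe failure

# check_dependency: verify a required binary exists before using it.
check_dependency() {
    local bin="$1"
    if ! command -v "$bin" >/dev/null 2>&1; then
        echo "missing dependency: $bin" >&2
        return 1
    fi
}

# main: orchestrate the steps in a clear, top-down flow.
main() {
    check_dependency "sh"
    echo "all dependencies present"
}

main "$@"
```

The irony the report points at is that none of this is malicious in itself; it is simply careful scripting style.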
Are we not producing good programmers anymore? I have never seen code written by an LLM that is actually useful (not saying it doesn’t exist; I just haven’t seen it). Treating modularity and well-written comments as evidence of AI authorship is a sad state of affairs.