AI: The need for transparency in the cybersecurity industry

Over the last decade, AI, once a far-fetched theme of old-school science-fiction movies, has quickly grown into one of the most prolific emerging technologies – and, by association, one of the most recognizable buzzwords out there. Virtually every industry today – healthcare, transportation, manufacturing, agriculture, banking, retail, finance – has either implemented or is planning to implement AI in some way. It’s the major technology trend of our era, driving everything from voice-controlled consumer tech to factory robots. And we know this because many of the companies operating in these spaces publicize their use of AI. What looks more cutting-edge than announcing your new AI-powered initiative? It’s as much a marketing gimmick as it is a matter of product and service efficiency.

As in other fields, AI has emerged as an accelerator of cybersecurity innovation over the past five years. However, that progress has been uneven and, at times, inefficient. For example, cybersecurity firms still rely predominantly on non-AI systems to detect vulnerabilities in their codebases and sophisticated adversaries on their networks. This lack of advancement stems from a culture of secrecy, compounded by hollow marketing hype, that inhibits the cybersecurity industry’s adoption of AI. This stands in contrast to other AI application areas – like computer vision, voice recognition, and natural language understanding – where firms have created an “intellectual commons,” in the form of open benchmarks, conferences, and workshops, in which innovations are shared openly with the goal of elevating AI research in those industries.

The effects of this culture are significant. On one hand, firms engaged in genuine AI research have little incentive to share their findings, because they know other firms likely won’t share theirs either. On the other hand, because firms are not open about their AI technologies, free riders who ship poorly performing AI systems are never held accountable for them.

The tech giants of the world, such as Google, Amazon, and Facebook, are increasingly open about their AI research. A good reason for this is the need to recruit top-notch talent, including AI researchers from academia. Attracting those candidates often means allowing them to continue publishing their work, which naturally leads to more openness about their AI R&D.

By comparison, there is far more uncertainty and embellishment in the security space. There’s a group of cybersecurity companies that claim to be working in AI but are really practicing something closer to Stats 101. These companies act as if they’re building some algorithmic secret sauce behind closed doors, when in reality they’re just hoping people will pay no attention to the wizard behind the curtain.

Coupled with that is a certain level of mystification and even pomposity. Many security firms claim to be deploying AI and machine learning, but when asked to explain what’s actually going on, you’ll get replies along the lines of: “You wouldn’t understand it, just trust us. It’s too sophisticated to explain. We can’t reveal it because our competitors would just copy us.” This defensive posture doesn’t just erode the credibility of the security industry as a whole; it also means legitimate AI innovators have to carry the weight of these bad actors.
