Felix Jiang, on May 15 2019
Will Smith Was Wrong About the Robots
I, Robot was first released to theaters back in 2004. In it, the movie’s filmmakers paint a fictional future (2035) where humanoid robots serve humanity’s needs. Our beloved Fresh Prince of Bel Air superstar is cast as a Chicago police detective named Del Spooner, who harbors a deep skepticism of robots after his experience with one that failed to navigate a moral conundrum. Throughout the film, he condescends to these mechanical stewards for being unable to empathize and emote the way he believes only humans can. In one memorable scene, Del proclaims defiantly that robots are incapable of writing a symphony or turning a canvas into a beautiful masterpiece.
Fast-forward to 2019, not even two decades after the film came out, and “robots” can perform complex, multifaceted tasks with jaw-dropping results.
For years, art was thought to emerge only from a creative and non-quantifiable ether in the human psyche. But in today’s era of machine learning and artificial intelligence, the sky is the limit.
Consider this: a decade ago, you could take a picture of your friends on vacation, manually tag everyone in the album, and watch the ‘likes’ roll in. Today, it seems like manual photo-tagging is a thing of the past. Even a blurry selfie you took with sunglasses and that beard you regretfully grew out in the spring of 2017 can be auto-tagged with your identity.
Object recognition (OR) – the ability of computers to find and identify objects in an image or video – is just one field of research that has recently been transformed. For years, the slow evolution of OR techniques was restricted by:
- Small labeled sample sets
- Inadequate computer processing power
- Long “training” times required to optimize AI models
Then, in 2012, a team from the University of Toronto led by Alex Krizhevsky delivered a breakthrough. In the premier global OR contest, known as the ImageNet competition, their deep neural network dominated the field, beating the next highest score by an astounding 11% margin.
This propelled the AI world to double down on fast data processing solutions, and embrace new techniques with deep neural networks. Overnight, applications as obvious as photo-tagging and as salient as autonomous vehicle navigation experienced critical version upgrades.
Enter Natural Language Understanding (NLU)
In late 2018, natural language understanding (NLU) - a subset of natural language processing focused on AI-hard problems - experienced an ImageNet moment of its own. Until then, NLU had focused on siloed, shallow tasks such as:
- Reading comprehension, and
- Coreference resolution
However, 2018 offered the first successful glimpse of applying a pre-trained model and transfer learning across a group of tasks: embeddings from language models (aka ELMo) were used to demolish previous state-of-the-art benchmarks by 10-20% across the board. Other NLU techniques that leverage transfer learning have since yielded similar success. As with the ImageNet moment for OR, an explosion in language-based software advancements is already underway. The Google Duplex demo last year is just one example of how much further voice assistants can go with the right NLU backend.
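The transfer-learning recipe behind ELMo-style results can be sketched in miniature: reuse a pretrained encoder as a frozen feature extractor, and train only a small task-specific "head" on top of it. The snippet below is an illustrative toy, not a real language model: `pretrained_embed` is a hypothetical stand-in that assigns each word a fixed random vector, where ELMo would supply contextual embeddings learned from massive text corpora.

```python
import numpy as np

# Toy stand-in for a pretrained encoder such as ELMo: maps a sentence to a
# fixed-size vector. In real transfer learning this comes from a large
# language model and stays frozen; here each word just gets a fixed random
# vector, and a sentence is the average of its word vectors.
_rng = np.random.default_rng(0)
_vocab = {}

def pretrained_embed(sentence, dim=16):
    rows = []
    for word in sentence.lower().split():
        if word not in _vocab:
            _vocab[word] = _rng.normal(size=dim)
        rows.append(_vocab[word])
    return np.mean(rows, axis=0)

# Tiny labeled task (sentiment): train only a logistic-regression head on
# top of the frozen embeddings -- the transfer-learning recipe in miniature.
texts = ["great movie loved it", "terrible boring film",
         "loved the acting", "boring and terrible"]
labels = np.array([1, 0, 1, 0])
X = np.stack([pretrained_embed(t) for t in texts])

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid probabilities
    g = (p - labels) / len(labels)             # gradient of the log loss
    w -= 1.0 * (X.T @ g)
    b -= 1.0 * g.sum()

preds = (X @ w + b > 0).astype(int)
```

The point of the recipe is the division of labor: the expensive part (the encoder) is trained once on unlabeled text, while each downstream task needs only the cheap head and a small labeled set.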
Figure 1 - The GLUE benchmark in the graphic above is one measure used to depict how different models perform on a variety of language-focused tasks. Recent radical improvements have opened the doors to tackling sophisticated cybersecurity attacks that could not previously be addressed at scale. [Image credit Dannielle Dan.]
Using NLU to Automate Tasks, Increase Productivity
Useful applications born of advanced NLU are already apparent. Email apps can guess the phrase you’ll type next, saving time and increasing productivity. Likewise, the automation of customer support and call centers has allowed workers to move up-market. And yet large, increasingly detrimental issues still permeate the modern enterprise.
Changing the Game for Cybersecurity
As a field, cybersecurity emerged even before the very first episode of the Fresh Prince of Bel Air aired. For years, companies both small and large have reacted and adapted to increasingly complex cyberattacks and vulnerabilities. But the core problem of attacks and data leaks still remains. The average cost of a cyber-data breach rose 50% from 2017 to 2018, reaching $7.5 million per attack. Spurred by this, as well as a 250% increase in spoofing attacks, venture capital investment in cybersecurity set a new record of $5.3B in 2018 alone.
Initially, modest metadata-based approaches sufficed to block unsophisticated attacks. But today’s advanced, socially engineered cyberattacks, delivered mostly over email, exploit the humans behind the keyboard. Understanding the content, not just the metadata, of these messages is the only way to nip these attack vectors in the bud. Understanding context and language gives cybersecurity practitioners the ability to:
- Separate the signal from the noise: NLU can flag the communications most likely to be suspicious
- Automatically detect and protect information that is truly sensitive, without manually configuring specific keywords
- Save time for SecOps teams, freeing them up to triage other issues
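As a rough illustration of content-level (rather than metadata-level) analysis, the sketch below scores an email body against a few phrases expressing common attack intents. Everything here is a hypothetical stand-in: the phrase list, the bag-of-words cosine similarity, and the 0.4 threshold are simplifications for illustration, not how any real product works.

```python
import math
from collections import Counter

# Hypothetical phrases expressing intents common in socially engineered
# email attacks. A real NLU system would use learned semantic
# representations, not word overlap -- this is only a toy stand-in.
SUSPICIOUS_INTENTS = [
    "please verify your account password immediately",
    "urgent wire transfer needed before end of day",
    "click this link to reset your credentials",
]

def bag(text):
    """Bag-of-words representation: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suspicion_score(body):
    """Highest similarity between the email body and any attack intent."""
    b = bag(body)
    return max(cosine(b, bag(phrase)) for phrase in SUSPICIOUS_INTENTS)

def flag_email(body, threshold=0.4):
    """Flag the message for review if its content resembles an attack intent."""
    return suspicion_score(body) >= threshold
```

In practice, an NLU model compares meanings rather than literal words, so a paraphrased phishing request with no word overlap would still score high; the toy above is only meant to show where content-based scoring plugs into the pipeline, upstream of SecOps triage.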
Enterprise cybersecurity is an age-old problem. Here at Armorblox, we believe that NLU is today’s unequivocal solution. If you’re interested in seeing the power of NLU in detecting attacks that other solutions miss, saving you time and optimizing your security posture, request a demo of Armorblox.