October is Cybersecurity Awareness Month. All month long, ITS News will highlight how ITS — and you — keep the University safe. In this guest post, Josh Jenkins with the Information Security Office shares how machine learning technologies are changing cyber and physical security. Check out all Cybersecurity Awareness Month events, or for year-round tips on staying cybersafe, visit Safe Computing at UNC.
The integration of machine learning (ML) technologies into the various facets of security has ushered in a new era of proactive threat detection, risk mitigation and enhanced decision-making processes.
From cybersecurity to physical security, these ML technologies have proven invaluable in safeguarding individuals, organizations and societies against evolving threats.
The advent of ML technologies has revolutionized the way security is approached across different domains.
This technology offers the promise of dynamic and adaptive solutions that can address both known and emerging security challenges.
The primary domains closely linked with ML technologies can be divided into two overarching areas: cybersecurity and physical security.
Within these two major categories, we can further dissect the subject into subcategories, which we will explore in detail below.
About the guest writer
Josh Jenkins, AKA “Huggable Hacker,” has spent the last few years physically breaking into and testing financial institutions and private industry. Before coming to UNC-Chapel Hill, Jenkins worked for such companies as Trend, Qualys and Wells Fargo. When not breaking into places, Jenkins enjoys tending to his blueberries and chickens in Lexington, North Carolina.
Machine learning as a function of cybersecurity
Machine learning, as a function of cybersecurity, covers everything within the digital realm. Think of cybersecurity in terms of the following:
- Threat detection and prevention: ML and artificial intelligence (AI) are instrumental in identifying and neutralizing cyber threats in real time. They can analyze vast datasets to detect anomalous activities and patterns, helping security experts respond swiftly to potential breaches.
- Vulnerability management: AI-driven systems can automate vulnerability assessments and prioritize them based on potential impact, enabling organizations to proactively address weaknesses before they are exploited.
- User and entity behavior analytics (UEBA): ML algorithms can profile normal user behavior and identify deviations that may indicate unauthorized access or suspicious activities, enhancing user security.
- Phishing and malware detection: AI models can recognize phishing emails and malware signatures, providing a crucial defense against cyberattacks.
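The anomaly-detection idea behind several of these bullets can be sketched with a toy example: profile a user's normal behavior from historical data, then flag activity that deviates sharply from that baseline. This is a minimal illustration using invented data and a simple z-score rule, not a production detector.

```python
import statistics

def build_profile(daily_logins):
    """Baseline a user's normal behavior from historical daily login counts."""
    return statistics.mean(daily_logins), statistics.stdev(daily_logins)

def is_anomalous(count, mean, stdev, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the mean."""
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > threshold

history = [4, 5, 6, 5, 4, 6, 5]  # a typical week of logins (invented data)
mean, stdev = build_profile(history)
print(is_anomalous(5, mean, stdev))   # normal activity -> False
print(is_anomalous(40, mean, stdev))  # sudden spike -> True, worth investigating
```

Real UEBA systems learn far richer models (time of day, location, device, access patterns), but the core loop is the same: learn "normal," then score deviations.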
Machine learning as a function of physical security
AI and ML as a general function of physical security can conversely be thought of as everything that extends into the real world (e.g., doors, locks, keys, badges, gates):
- Video surveillance and facial recognition: AI-powered cameras can analyze video feeds in real time, flagging unusual activities or identifying individuals of interest. This technology is critical in public safety and law enforcement.
- Access control: ML algorithms can enhance access control systems by continuously adapting to changes in user behavior, reducing the risk of unauthorized entry.
- Identity proofing and automation: ML is increasingly utilized in identity proofing processes, which are essential for secure access control and authentication. However, relying on these technologies for identity verification introduces the risk of identity theft and fraud if they are compromised. Similarly, in machine room automation and critical infrastructure control, the integration of AI and ML must be carefully managed to prevent unauthorized access and potential cyberattacks, making security and robust authentication mechanisms paramount.
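The adaptive access-control idea above can be illustrated with a small sketch: learn a typical badge-in time window from a user's history, then flag swipes outside it. The data, the two-standard-deviation window, and the helper names are all invented for illustration; a real system would weigh many more signals before denying entry.

```python
from statistics import mean, stdev

def learn_window(swipe_hours, k=2.0):
    """Learn a typical badge-in window (mean +/- k std devs), hours on a 24h clock."""
    m, s = mean(swipe_hours), stdev(swipe_hours)
    return (m - k * s, m + k * s)

def swipe_suspicious(hour, window):
    """Flag a badge swipe that falls outside the learned window."""
    low, high = window
    return not (low <= hour <= high)

history = [8.5, 9.0, 8.75, 9.25, 8.0, 9.5]  # invented badge-in times
window = learn_window(history)
print(swipe_suspicious(9.0, window))   # within normal hours -> False
print(swipe_suspicious(23.0, window))  # late-night entry -> True, flagged for review
```

As the user's history grows, the window updates with it, which is the "continuously adapting to changes in user behavior" property described above.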
Challenges and ethical considerations
While AI and machine learning offer substantial benefits in security, they also present challenges and ethical dilemmas.
- Bias and fairness: AI algorithms can inherit biases from training data, leading to unfair discrimination. Ensuring fairness in AI models is a critical ethical consideration.
- Privacy concerns: The use of AI in surveillance and data analysis raises privacy concerns. Striking a balance between security and privacy is a constant challenge.
- Adversarial attacks: AI systems can be vulnerable to adversarial attacks that manipulate their input data, undermining their reliability.
- Regulation and accountability: As AI becomes more integrated into security, the need for regulations and accountability mechanisms grows. Ensuring responsible AI use is paramount.
So, what can we glean from this? Machine learning technologies have unquestionably become indispensable in the realm of security. They provide advanced threat detection, facilitate proactive risk management, and significantly enhance decision-making processes. However, their integration comes with challenges related to bias, privacy and accountability, as well as the potential for false or misleading results.
To harness the full potential of ML in security, it is essential to develop ethical frameworks and regulatory guidelines that ensure responsible and fair use. The ongoing evolution of these technologies will continue to shape the landscape of security, driving innovation and enhancing our ability to protect individuals, organizations and societies from evolving threats.
Editor’s note: ChatGPT, a machine learning tool, assisted the author with outlining and writing this article.