Center for Cybersecurity Research
Labs & Centers
SMILES Lab
Director: Khalid Malik
The Secure Modeling and Intelligent Learning in Engineering Systems (SMILES) Lab is a forward-thinking, interdisciplinary group of faculty and student researchers embracing outside-the-box thinking to develop cutting-edge artificial intelligence-based solutions to some of the most pressing problems of our time. The translational research put forth by the SMILES team has an impact that extends beyond our community, with marketable solutions in cybersecurity and healthcare that will benefit us all.
Center on Pervasive Personalized Intelligence
Director: Marouane Kessentini
The PPI Center is a multi-university, industry-focused research center supported by the US National Science Foundation under the NSF Industry-University Cooperative Research Centers (IUCRC) model. The pre-competitive (i.e., of interest to many companies), industry-applied research projects we work on are funded by industry members, our universities, and the NSF.
Current Projects
Development of an Explainable and Robust Detector of Forged Multimedia and Cyber Threats using Artificial Intelligence
Funded by the National Science Foundation and Michigan Translational Research and Commercialization
Our team at the cybersecurity center is tackling the growing threat posed by deepfaked multimedia, which undermines our ability to trust digital content. Our research focuses on creating cutting-edge solutions to detect and combat this form of cyber threat. The centerpiece of that work is the Deep Forgery Detector (DFD), a tool designed to identify audio-visual forgeries, including sophisticated deepfakes. Built on more than six years of research and bolstered by nearly $1M in grants, the DFD minimum viable product (MVP) counters the malicious use of AI-generated media that threatens global security, democratic institutions, and personal safety.
Our interdisciplinary team has worked to harden the DFD against anti-forensic measures and to ensure it delivers transparent, explainable, and user-friendly assessments. This is critical for applications in fields such as the legal system, where the need for verifiable evidence is paramount. The project also gives students valuable hands-on experience with advanced AI technologies such as deep learning and neurosymbolic AI, and with countering anti-forensic attacks, contributing to the development of systems usable by non-experts. Expanding on previous awards and efforts, the project aims to refine and strengthen the DFD's capabilities to stay ahead of evolving digital forgery techniques, and thereby to make significant contributions to the integrity of digital content and cybersecurity.
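To make the general idea concrete, the sketch below shows one building block that audio-forgery detectors commonly use: spectral features summarizing each clip, fed to a learned classifier. This is a simplified illustration only, not the DFD's actual architecture; the file paths and labels are placeholders.

```python
# Minimal sketch of a common audio-forgery detection baseline:
# spectral (MFCC) features plus a learned classifier.
# Illustration only -- NOT the Deep Forgery Detector's architecture.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def spectral_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=16000)               # resample to 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder corpus: labeled clips (0 = genuine, 1 = forged).
paths  = ["real_01.wav", "real_02.wav", "fake_01.wav", "fake_02.wav"]
labels = [0, 0, 1, 1]

X = np.stack([spectral_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Score a new clip: estimated probability that it is forged.
print(clf.predict_proba(spectral_features("clip_under_test.wav")[None])[0, 1])
```

A production detector would replace both pieces with deep, multimodal models and would also need defenses against the anti-forensic attacks mentioned above.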
For more details, see:
- NSF Award Abstract: ForensicExaminer: Testbed for Benchmarking Digital Audio Forensic Algorithms
- NSF Award Abstract: Deep Forgery Detection Technology
- MEDC Press Release: MTRAC Innovation Hub for Advanced Computing Welcomes Third Cohort of Early-Stage Deep Tech Innovation Projects
Data Donation for AI Model Training | Douglas Zytko
Effective functioning of risk-detection AI is contingent on high-quality data, which is often hard to acquire at scale. At the same time, modern implementations of AI are often trained on data collected from users without their awareness or consent, which poses its own set of cybersecurity concerns. This project seeks to actualize a data-donation paradigm through which the end-users most susceptible to cybersecurity and other risks consciously donate personal data, in order to 1) improve their autonomy over personal data and 2) improve the quality of risk-detection AI. The project has attracted $600,000 in funding from the National Science Foundation.
Hidden Sources of Technical Debt from Software Security Concerns | Jeffrey J. Yackley
How technical debt affects software security is currently poorly understood. Technical debt is an abstraction describing the costs of choosing an expedient solution over a more prudent or thorough approach during software development. This project aims to advance the fundamental understanding of the relationship between technical debt and software security. That knowledge will serve as a scientific foundation for defining new categories of security-related technical debt and software quality issues, which will feed the PI's future research on using machine learning to predict and resolve software security and quality issues.
Analyzing the Effectiveness of Static Analysis in Detecting Security Violations | Mohamed Wiem Mkaouer
Linting is the process of using static analysis tools to scan source code for coding patterns that are considered bad programming practice. Beyond compiler errors, these patterns can be responsible for future bugs and stylistic anomalies. Given their importance, linters have been introduced in classrooms to teach students to detect, and potentially avoid, these code anti-patterns. Yet little is known about the extent to which these static analysis tools contribute to the detection and correction of common vulnerabilities in code. This project therefore aims to understand how widely popular static analysis tools are adopted by practitioners. Such analysis will reveal which types of vulnerabilities developers typically detect and correct, and which are ignored or remain unpatched. The results will help researchers better articulate the rationale for fixing such vulnerabilities and alert educators to the need to support their correction. In this project, students will learn how to run static analysis tools, then how to identify vulnerability-related issues in popular open-source projects before extracting insights about their correction.
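As a concrete illustration of this workflow, the sketch below runs Bandit, one popular open-source static analyzer for Python, over a project directory and tallies its security findings by severity. The project path is a placeholder, and Bandit stands in for whichever analyzers the study ultimately examines.

```python
# Sketch: scan a project with Bandit and aggregate its security findings,
# the kind of data this study would collect across many open-source repos.
import json
import subprocess
from collections import Counter

def bandit_findings(project_dir: str) -> list:
    """Run Bandit recursively and return its JSON 'results' list."""
    proc = subprocess.run(
        ["bandit", "-r", project_dir, "-f", "json", "-q"],
        capture_output=True, text=True,   # Bandit exits non-zero on findings
    )
    return json.loads(proc.stdout).get("results", [])

findings = bandit_findings("path/to/open-source/project")    # placeholder path
print(Counter(f["issue_severity"] for f in findings))        # LOW/MEDIUM/HIGH counts
for f in findings[:5]:
    print(f["filename"], f["line_number"], f["test_id"], f["issue_text"])
```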
Non-Technical Cyber-Attacks and International Cybersecurity: The Case of Social Engineering | Amal Alhosban
This project provides an overview of social engineering attacks and their impact on cybersecurity, including national and international security, and develops detection techniques and methods for countering them. How do social engineering attacks affect national and international security, and why are they so hard to cope with?
Neuro-symbolic AI-based Web Filtering, Sponsored by Netstar, Inc. | Khalid Malik
Web filtering solutions are a vital component of cybersecurity. They block access to malicious websites, prevent malware from infecting machines, and stop sensitive data from leaving organizations. They offer a secure, efficient, and controlled online experience across various sectors, addressing concerns related to security, productivity, and content appropriateness. Growing Internet usage for data and knowledge sharing calls for dynamic classification of web content, particularly at the edge of the Internet.
Companies today need these solutions to offer multilingual capabilities and to protect their employees' data privacy. Meeting these challenges requires a reliable solution that can accurately classify URLs into the correct categories.
To meet these needs, UM-Flint has partnered with Netstar Inc., a leading Japanese URL filtering company, to develop a machine learning-based solution. The team consists of several PhD students and postdocs from the Secure Modeling and Intelligent Learning in Engineering Systems (SMILES) Lab, together with Netstar employees.
Students involved in this project will learn advanced techniques in natural language processing, multilingual content processing, and knowledge graph development. They will gain experience with explainable neurosymbolic and multimodal AI that supports reasoning. They will also have opportunities to build the many soft skills required for collaborating with a global corporation.
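For a sense of what the underlying task involves, here is a deliberately simplified URL-classification baseline using character n-gram features and a linear model. The URLs and category labels are invented for illustration; the actual Netstar collaboration targets far richer multilingual, neurosymbolic models.

```python
# Toy URL classifier: character n-gram TF-IDF features + linear model.
# Invented sample data; a real system trains on millions of labeled URLs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

urls = [
    "http://news.example.com/world/story123",
    "http://login-verify-account.example.ru/secure",
    "http://shop.example.com/cart/checkout",
    "http://free-prizes.example.tk/claim-now",
]
classes = ["news", "phishing", "shopping", "phishing"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),  # sub-word URL patterns
    LogisticRegression(max_iter=1000),
)
model.fit(urls, classes)
print(model.predict(["http://account-update.example.tk/login"]))
```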
Integrity Verification of Vehicle Sensors using Digital Twin and Multimodal AI | Khalid Malik
Training and recruiting cybersecurity professionals is one of the most pressing issues in workforce development today. The global Institution of Engineering and Technology has released Automotive Cyber Security, a thought-leadership review of risk perspectives for connected vehicles. The review explains that the trajectory toward more connected vehicles has greatly increased the need for cybersecurity professionals in the automotive industry.
Filling those roles poses a challenge, though: the learning curve. Cybersecurity education, as it's done today, can be a little dry and theoretical, making the field seem more inaccessible than it actually is. But it doesn't have to be that way.
Our research group is developing tools, such as virtual "digital twins" and a visual question-answering system, to teach complex interdisciplinary subjects such as automotive cybersecurity. These tools will give students a VR experience of the complex ways that IoT sensors, driving systems, and the networks of systems and software inside and outside the vehicle interact in a physical car.
In summary, using neurosymbolic logic, AI, and the flipped classroom, we're redesigning classes from the ground up, starting with a new offering in automotive cybersecurity. The class will feature hands-on exercises on the digital twin of a real car system and will offer 24/7 assistance from a chatbot based on a large language model like ChatGPT.
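As a rough sketch of what such a course assistant could look like, the snippet below wraps a hosted LLM behind a single question-answering function. The OpenAI client, model name, and system prompt are illustrative assumptions, not the course's actual implementation.

```python
# Hypothetical course-assistant chatbot built on a hosted LLM.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a teaching assistant for an automotive-cybersecurity course. "
    "Answer questions about CAN bus security, IoT sensors, and the course's "
    "digital-twin lab exercises. If unsure, say so."
)

def ask(question: str) -> str:
    """Send one student question and return the assistant's reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model choice, for illustration
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("Why is the CAN bus vulnerable to message spoofing?"))
```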
Detecting Malicious Behavior in Android Apps | Halil Bisgin
Mobile apps are at the center of everyone's daily lives, and users give them access to personal data. It is therefore important to develop methods for determining how much data an app can detect and collect about its users, and whether that access is in line with their privacy expectations. Several methods have been devised to determine app intrusiveness, including analyzing app descriptions and checking their conformity with programmed behavior. Most existing approaches depend on static analysis, which is not easily done on the go. To this end, we aim to develop machine learning and artificial intelligence solutions that determine whether an app is intrusive based on app features ranging from its source code to its description, allowing users to make informed decisions before downloading.
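To illustrate one feature family such a model might use, the toy sketch below binarizes an app's declared Android permissions and feeds them to a classifier. The permission sets and intrusiveness labels are invented placeholders; the project combines many more features, from source code to store descriptions.

```python
# Toy intrusiveness classifier over declared Android permissions.
# Invented labels; real training data would come from large app corpora.
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

apps = [
    {"android.permission.INTERNET"},
    {"android.permission.INTERNET", "android.permission.READ_CONTACTS",
     "android.permission.ACCESS_FINE_LOCATION"},
    {"android.permission.CAMERA"},
    {"android.permission.READ_SMS", "android.permission.RECORD_AUDIO",
     "android.permission.ACCESS_FINE_LOCATION"},
]
intrusive = [0, 1, 0, 1]   # 1 = intrusive, 0 = benign

mlb = MultiLabelBinarizer()
X = mlb.fit_transform(apps)                      # one column per permission
clf = RandomForestClassifier(n_estimators=100).fit(X, intrusive)

new_app = [{"android.permission.INTERNET", "android.permission.READ_CONTACTS"}]
print(clf.predict_proba(mlb.transform(new_app))[0, 1])   # estimated P(intrusive)
```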
Coalescing Research into Modular and Safe Educational Cybersecurity Labs with AI Solutions | Halil Bisgin
The demand for cybersecurity professionals is surging, with jobs projected to increase by more than 35% over the next decade, according to the U.S. Bureau of Labor Statistics. Simultaneously, advances in artificial intelligence, data science, and machine learning are reshaping industries and creating challenges, notably job displacement for the under-skilled. This project addresses the critical need to integrate AI/DS/ML into cybersecurity education to meet these evolving demands. We are developing a methodology that enables instructors and researchers to transform their work into practical, safe, and interactive teaching labs, enhancing student learning across a range of cybersecurity topics.