Secure Modeling and Intelligent Learning in Engineering Systems Lab
![](https://www-s3.umflint.edu/wp/uploads/2024/02/SMILE-Page-24.jpg)
AI Tools for Cybersecurity & Neurodegenerative Diseases
Director and PI: Khalid Malik, Director of Cybersecurity Programs, Professor, Computing Division, CIT
The Secure Modeling and Intelligent Learning in Engineering Systems (SMILES) Lab is a forward-thinking interdisciplinary group of faculty and student researchers who are embracing outside-the-box thinking to develop cutting-edge AI-based solutions to some of the most pressing problems of our time. The translational research put forth by the SMILES team has an impact that extends beyond our community with marketable solutions in cybersecurity and healthcare that will benefit us all.
Guided by a bold vision, the SMILES Lab has identified pressing needs and focused unwaveringly on building and improving AI tools to solve them. Malik and his team have published many journal and conference articles, and they continually build on that foundation. Through rich relationships with industry and medical experts, the team has been able to meet very specific needs with relevant solutions.
In addition to the external impact of SMILES research, the students working with SMILES projects are gaining a wealth of unique experiences. They are exposed to the latest in AI and cybersecurity tools, while constantly being supported to practice nimble, critical thinking that unlocks life-changing growth. With thoughtful mentorship from Malik, the students are empowered to practice persistence toward important tangible goals. These skills and the relationships they form will be lifelong and prepare them for a world that has much need for creative individuals who know how to bridge the gap between research and practice.
On This Page
- Deepfake Detector
- NeuroAssist
- AI-based Web Filtering
- Automated Knowledge Graph Curation
- Automotive Cybersecurity Education
Development of an Explainable and Robust Detector of Forged Multimedia and Cyber Threats using Artificial Intelligence
Funded by the National Science Foundation and Michigan Translational Research and Commercialization
![](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/White-BG.jpg)
![Deep fake mask that indicates a swap](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Deep-Fake.png)
![](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Dk-to-Lt-Blue-Gradient-1.png)
Deep Forgery Detector (DFD)
Reveal Truth. Find Justice
![demask logo](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/demask.png)
Secure Modeling and Intelligent Learning in Engineering Systems Lab
![Seeing is believing - but for how long? demask logo. Snapshots of recent news articles about the public concerns with Deepfakes. Deepfake Audio is a Political Nightmare, Microsoft's new AI can simulate anyone's voice with 3 seconds of audio, AI Scam: Canadian Couple loses K to Fake Son's voice, AI Generated Deepfake of Japan's Prime Minister Sparks Concern](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Deepfake-News-1024x576.png)
Disinformation is a growing concern for society, fueled by a new weapon: deepfaked multimedia. We have been told all our lives to believe what we see with our own eyes, and for the first time, we can no longer trust what we see. AI-generated deepfakes have left the realm of science fiction and are an unsettling reality that demands our immediate attention.
A deepfake is essentially a piece of media that has been either manipulated or entirely generated by AI to appear as though it is an original artifact. With recent developments in generative AI tools, the capabilities have grown to the point where humans can no longer detect the difference without assistance.
Fake multimedia is a growing threat on the global stage. Misinformation is not a new tactic, but the tools today are far more advanced. A well-made AI video of a political or industry leader can spread false narratives about public or corporate policy and have a devastating public impact. Imagine a viral video in which some foreign head of state threatened an impending attack on the U.S. – but that video is indistinguishable from a real one.
![Deepfakes: The Most Dangerous Cyber Weapon. Below that, The destabilizing political impact of deepfakes... A deepfake of Ukranian President Volodymyr Zelensky calling on his soldiers to lay down their weapons was reportedly uploaded to a hacked Ukranian news website. To the right of that is an image of POTUS Joe Biden with a news banner that says, the Realities of Nuclear War. Caption says Biden announcing that men and women would be drafted to fight in Ukraine.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Deepfake-Destabalize-1024x576.png)
Using deepfaked audio and video in scams is increasingly possible. On Feb 4, 2024, a finance worker at a multinational firm was tricked with a Deepfake ‘chief financial officer’ video call and paid out $25 million to a scammer.
If large financial institutions can fall prey to these things, consider the vulnerability of an average citizen. According to a 2022 survey of 16,000 people across eight countries, 71% of people said that they don’t even know what a deepfake is.
While deepfakes threaten global security and democratic institutions and enable scams on an international scale, it is important to note that verified audio and video artifacts are now the norm as evidence in our judicial system. Deepfakes pose a significant threat to the integrity of that process.
Meeting court evidentiary standards is a challenging task, especially in the absence of underlying metadata, like digital watermarks, or if the media is post-processed with anti-forensic intent. In early February 2024, social media platforms like Meta announced that they will require AI-generated content to be labeled as such, but that falls under the category of ‘locks only keep out the honest.’ Those intent on using these advanced tools for deception will not be putting labels on them.
As the ability to create convincing fake videos has significantly increased, our need to authenticate legitimate digital media artifacts has grown as well. Beyond that, the tools needed to authenticate these media artifacts need to deliver assessments in an accessible way. Our judicial system, for example, is designed around a ‘jury of peers’ who won’t have deep knowledge of AI and cybersecurity systems.
To meet this essential demand, we have developed a Deep Forgery Detector. This research has been ongoing for over 6 years, backed by nearly $1M in grants from agencies like the National Science Foundation and MTRAC. This funding has enabled us to develop the DFD MVP with the appropriate tools and knowledge and we are working to further develop them into a product that will be usable by companies and individuals without a major background in cybersecurity.
Student researchers associated with this project will gain the opportunity to learn how to use deep learning, Neurosymbolic AI, and Multimodal AI to develop tools to authenticate digital multimedia. The students will also learn how to protect detectors from anti-forensic attacks and gain experience in designing AI-based detectors to be transparent and explainable with accessible outputs. They will get the opportunity to work in interdisciplinary teams and solve problems beyond what they would encounter in a classroom setting.
![](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/White-BG.jpg)
![SMILES Lab Journey. demask logo. Below that is a box with a timeline that moves from left to right, as follows: Purple, 2018, NSF, Testbed for Benchmarking Digital Audio Forensic Algorithms, Blue, 2019, NSF, REU Supplement to Forensic Examiner, Grey, 2020, MSGC, NSF, Towards Development of Deepfake Detection Framework, Green, 2021, NSF, REU Supplement to Forensic Examiner, Yellow, 2022, MTRAC, Deep Forgery Detector, Red, 2023, NSF, MTRAC, Explainable and Robust Detector (NSF), DFD (MTRAC).
Below this to left are four circles stacked diagonally. From bottom-left, Purple with fingerprint and magnifying glass, Report Generation, Visual, Textual. Red, Gear, Robustness, Ensembled Decision, Multiple Modal Verification, Blue, Brain with circuits, Explainability, NeuroSymbolic, Interpretable Features, Green, Generalizable, Common Knowledge, Human Psychology, Multimodal.
Bottom right has diagrams. SpoTNet: A Spoofing-aware Transformer Network for Effective Synthetic Speech Detection.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Deepfake-Journey-1-1024x576.png)
![Flow chart leads from Icon of person speaking with arrow
down to speech signal icon. From there, arrow points to box labeled Data
Cleansing and Acoustics Signal Processing. In this box, Framing & Windowing
leads to Non Silence speech indices, leads to Speech indices normalization,
leads to band pass filtering, leads to pre-emphasis, leads to Mel Spectrogram,
followed by an image of a mel spectrogram. An arrow leads to the right of the
box to two Spectral graphs, one labeled Envelops, and another labeled contrast.
A bracket to the right of these graphs indicates the Knowledge Based
Representation pulls an S data point from Envelops and a P data point from Contrast
to create an SP data set. Arrow leads down to a box labeled Logical Spoofing
Transformer Encoder (LSTE) in which an SP matrix leads through four Conv. And BN
filters to Token Encoding, labeled tokens Tk 1-n, then an arrow points to Transformer
Encoder, and another arrow leads to Attentive Audio Representation. From this
box, an arrow points to a diagram labeled Spoofing Multi layer classification. This
diagram shows the data to flow through three layers labeled Dense, BN, and DO.
DO is shown to be .5, and Dense is 128, relu in the first pass, 64 in the
second, and 32 in the third. From these three, the arrow points to a box
labelled Flatten, which contains AE data 1-n. This then leads to one Dense box
with 128 Relu, one DO box with .5, and one final Dense box with 1, sigmoid. An
arrow leads to Speech Verification Output – real or spoof.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Architecture-of-SpotNet-Framework.jpg)
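The audio front-end described in the figure above (pre-emphasis followed by framing and windowing) can be sketched in plain Python. This is an illustrative sketch, not the lab's actual code; the frame length, hop size, and pre-emphasis coefficient are assumed values commonly used for 16 kHz speech:

```python
import math

def pre_emphasis(signal, alpha=0.97):
    """Boost high frequencies: y[t] = x[t] - alpha * x[t-1].
    alpha=0.97 is a conventional choice, assumed here."""
    return [signal[0]] + [signal[t] - alpha * signal[t - 1]
                          for t in range(1, len(signal))]

def frame_and_window(signal, frame_len=400, hop=160):
    """Split the signal into overlapping frames (25 ms / 10 ms at 16 kHz,
    assumed) and apply a Hamming window to each frame."""
    window = [0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        frames.append([s * w for s, w in zip(frame, window)])
    return frames
```

In a full pipeline like the one in the figure, each windowed frame would then pass through an FFT and a mel filterbank to produce the mel spectrogram fed to the transformer encoder.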
This NSF Partnership for Innovation (NSF-PFI) and MTRAC-funded project seeks to further improve Deep Forgery Detector (DFD) technology built on NSF lineage award #1815724, SaTC: CORE: ForensicExaminer: Testbed for Benchmarking Digital Audio Forensic Algorithms, and the MTRAC project titled “Deep Forgery Detector.” The DFD detects audio-visual forgeries, including the various types of deepfakes used to manipulate digital multimedia, but new types continuously appear. Improvements to the DFD MVP will make it more robust against anti-forensics while also making it more accessible and explainable.
For details, see:
- NSF Award Abstract: ForensicExaminer: Testbed for Benchmarking Digital Audio Forensic Algorithms
- NSF Award Abstract: Deep Forgery Detection Technology
- MEDC Press Release: MTRAC Innovation Hub for Advanced Computing Welcomes Third Cohort of Early-Stage Deep Tech Innovation Projects
NeuroAssist: An Intelligent Secure Decision Support System for the Prediction of Brain Aneurysm Rupture
Funded by the Brain Aneurysm Foundation
Cerebrovascular accident, or stroke, is the leading cause of disability worldwide and the second leading cause of death. Additionally, stroke is the fifth leading cause of death for all Americans and a leading cause of serious long-term disability. Annually, 15 million people worldwide suffer a stroke, and of these, 5 million die and another 5 million are left permanently disabled.
![Circle to the left and right. Smiles logo in top right. Title says: Subarachnoid Hemorrhage Prediction:Problems and Unmet Needs in Healthcare
Left circle says Challenges of AI in medicine and has a heart icon. 5 bubbles come from it. Purple, Limited Performance and High Training Cost
Blue, Demand for Human in
Loop Methods
Orange, Data Privacy and
Integrity
Green, Demand for Multimodal
Representation Learning
Red, Data scarcity, small
Datasets, and Non-IID
Right has concentric circles with Unmet needs in center. Outer circle, green, 1. Lack of Multimodal and Large Training Samples
Current dataset deficiencies limit the ability of AI models to generalize effectively and hinder performance.
Dark Blue, 2, Privacy Preserving Decentralized Solutions
Existing challenges revolve around the need for decentralized solutions that maintain the confidentiality of sensitive health data.
Yellow, 3. Lack of Explainability & Human in Loop AI
The knowledge gap between AI and domain experts needs to be filled by explainable predictions.
Red, 4. Lack of Interdisciplinary Approach
Addressing complex cardio/neuro vascular issues demands a synthesis of expertise from multiple disciplines.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Aneurysm-Unmet-Needs-1024x576.png)
In order to prevent these deaths and disabilities, neurologists and neurosurgeons must be able to diagnose the root causes early and improve their clinical management. They also need to determine an individual’s overall risk across multiple complex considerations, including cerebral aneurysms, arteriovenous malformations (AVM), and Cerebral Occlusive Disease (COD).
Clinical management of diseases causing stroke is very complex. To illustrate the complexity, take the factor of Unruptured Intracranial Aneurysms by itself. Treating them is a complex decision-making process because the risk of rupture is not solely determined by the size of the aneurysm. Location/artery matters a great deal; small aneurysms on certain arteries may rupture, while larger ones on other arteries may not.
Beyond the isolated case of the aneurysms themselves, various degrees of arteriovenous malformations and plaque accumulation inside the carotid arteries can add other risk factors to the overall stroke risk. Our current assessments are not enough to meet this complexity.
Lack of proper data often leads to a decision-making process that could aptly be described as ‘better safe than sorry.’ It is certainly true that surgical intervention is a successful method for eliminating the risk of stroke. However, these surgeries are invasive and may result in severe iatrogenic complications or neurological deficits, so treating all aneurysms/AVMs/COD is not always worth that risk.
![Diagrams. Title reads Location Specific Vs. Global Models
Diagram has three sections. Left has four areas flowing down from the Datasets, through Under Sampling, then through three Location Specific a Global Trained Model, leading down to two rule sets.
The middle section starts with the same datasets, but filters only through an Apriori Algorithm (50% Confidence & 20% Support), and leads down to four rule sets.
The third portion shows how the cumulative data and rule sets combine with the 5-year modeling experience of the project and domain expert weights optimization down to one greatly improved rule set.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Aneurysm-Location-Specific-1024x576.png)
On the other hand, delayed intervention in the face of combined risk factors increases the chance of a stroke, and the consequence can be death or permanent disability. When the overall risk is high, it is imperative to perform the correct treatment right away.
Without a dependable clinical risk/severity score available, neurosurgeons must rely on heuristics compiled from unreliable data and their previous experience.
![Four diagrams. Title says Knowledge Infused Model for Aneurysm
Detection & Segmentation
SMILES logo in top right.
Top left, Pre-processing, has various comparisons of aneurysm images. Arrows lead down to Knowledge Infused DNN and to Knowledge Extraction at top right. That box shows medical-expert training of the model, with feedback back and forth to ROI Extraction and Categorization. Portions of this lead down to the bottom-right box and feed into Infusion Level Selection and Equivalent Feature Maps Calculation; there are some black-box illustrations and Adaptive Weight Exploitation and Adaptive Layer Infusion sections, as well as Level Specific Deep Features Extraction. These circle back across to the Knowledge Infused DNN section in the bottom left and lead to Detection Output and Segmentation Output.
The bottom says, "DeepInfusion: A dynamic infusion based-neuro-symbolic AI model for segmentation of intracranial aneurysms." Neurocomputing 551 (2023): 126510.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Aneurysm-Knowledge-Infused-1024x576.png)
Between those two extremes are many cases where the risk warrants monitoring over time, not action. Doctors struggle with the decision of when to treat and when to watch, and every year thousands of unnecessary procedures are performed because they just aren't sure. Quantifying the overall stroke risk based on a group of risk factors in similar patients can make this crucial decision much easier for neurosurgeons.
This means the tool needs to be a trusted one that clinicians can use to explain the individual situation. The patient and family are imagining the worst outcomes. They are worried about a devastating stroke and the financial burden of treatment. Being able to clearly explain why the best option is to wait and monitor would be a wonderful benefit to those families.
![Diagram with title, StrokeNet: An Automated Approach for Segmentation
and Rupture Prediction of Intracranial Aneurysm
SMILES logo in top right. Left of two boxes reads Step 1: Aneurysm Segmentation. Has a scan image and many representative icons with small print on them. An arrow leads to box two, Step 2: Aneurysm Rupture Prediction. Four images on left lead to four parts, deep features, geometrical features, blood flow pattern, and Fourier Descriptor, they lead into a box showing weighting of those factors, then feature selection, then classifier, then Green, mild, yellow moderate, pink severe, and red critical.
Bottom reads: "StrokeNet: An automated approach for segmentation and rupture risk prediction of intracranial aneurysm." Computerized Medical Imaging and Graphics 108 (2023): 102271.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/StrokeNet-1024x576.jpg)
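The second step in the StrokeNet figure above fuses four feature groups through a weighting stage before classification into four severity labels. As a purely illustrative sketch (the weights, thresholds, and score ranges below are invented for explanation and have no clinical meaning), the fusion-and-bucketing idea looks like this:

```python
def severity(deep, geometric, flow, fourier, weights=(0.4, 0.3, 0.2, 0.1)):
    """Weighted fusion of four normalized feature scores (each in 0..1)
    into a single risk score, bucketed into the figure's four labels.
    All numbers here are hypothetical, not clinical values."""
    score = sum(w * f for w, f in zip(weights, (deep, geometric, flow, fourier)))
    if score < 0.25:
        return "mild"
    if score < 0.5:
        return "moderate"
    if score < 0.75:
        return "severe"
    return "critical"
```

In the published system, the weighting and classifier stages are learned from data rather than hand-set as above.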
To meet this need, we have developed tools with a decentralized and highly explainable AI-based approach. These tools use a wide array of techniques: Multimodal AI on Digital Subtraction Angiography, Magnetic Resonance Angiography, and Computed Tomography Angiography image modalities along with clinical text, federated learning, RAG-based Neuro-symbolic AI, computational fluid dynamics, and multimodal explainable AI.
Ultimately, this project will deliver tools that will reduce fatalities and long-term disabilities, defray high costs for patients and our healthcare system, and alleviate much psychological stress for patients. It will also help to develop and share more robust data with other researchers to advance our understanding of brain aneurysms going forward.
For details, see: Brain Aneurysm Foundation: Meet Research Grant Recipient: Khalid Malik, PhD
Neuro-symbolic AI-based Web Filtering
Sponsored by Netstar Inc.
![](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/White-BG.jpg)
![SMILES Lab logo](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/SMILE-Lab-logo.png)
![](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Dk-to-Lt-Blue-Gradient-1.png)
Explainable Multimodal Neurosymbolic
Edge AI Models for Web Filtering
Secure Modeling and Intelligent Learning in Engineering Systems Lab
Web filtering solutions are a vital component of cybersecurity. They block access to malicious websites, prevent malware from infecting our machines, and keep sensitive data from leaking out of organizations. They offer a secure, efficient, and controlled online experience across various sectors, addressing concerns related to security, productivity, and content appropriateness. The growing trend of Internet usage for data and knowledge sharing calls for dynamic classification of web content, particularly at the edge of the Internet.
![Title bar: Problem: Industrial Relevance and Novelty
Left: List with vertical NetSTAR Needs: and the following extending to the right: 1. Develop accurate trustworthy Multimodal AI URL filtering for dynamic contents
2. Demand for multimodal representation learning
3. Multilingual small datasets
4. Demand for human in loop methods
5. Data privacy
Right: SMILES Lab with bubbles sticking out to the left, from top, 1. Neuro-symbolic AI for Diverse Contents
2. Multimodal Learning
3. Knowledge Infusion
4. Multilingual
Representation Learning
5. Federated Learning](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Netstar-Needs-1024x576.png)
Companies today need these solutions to have multilingual capabilities and to protect the data privacy of their employees. Meeting these challenges requires a reliable solution that can effectively classify URLs into the correct classes.
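To make the neuro-symbolic idea concrete, a URL filter can combine hand-written symbolic rules with a learned classifier's confidence score. The sketch below is a loose illustration only; the keyword rules, category names, and the stubbed `neural_score` input are invented for explanation, not Netstar's actual method:

```python
def classify_url(url, neural_score, rules=None):
    """Combine symbolic keyword rules with a (stubbed) neural score.
    A rule wins when it fires; otherwise the model's score decides."""
    rules = rules or {
        "bank": "finance",     # hypothetical keyword -> category rules
        "casino": "gambling",
        "news": "news",
    }
    for keyword, category in rules.items():
        if keyword in url.lower():
            return category, "rule"
    # Fall back to a neural classifier's confidence (stubbed as an argument).
    return ("suspicious" if neural_score > 0.5 else "benign"), "model"
```

The appeal of this hybrid shape is explainability: when a rule fires, the system can state exactly why a URL was categorized, and the neural fallback handles content the rules never anticipated.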
To meet these needs, UM-Flint has partnered with the leading Japanese URL-filtering company Netstar Inc. to develop a machine learning-based solution. The team consists of multiple PhD students and postdocs from the Secure Modeling and Intelligent Learning in Engineering Systems (SMILES) Lab and employees of Netstar.
Students involved in this project will learn advanced techniques of Natural Language Processing, multilingual content processing, and development of knowledge graphs. They will gain experience with neurosymbolic and multimodal AI that is explainable and offers reasoning. They will also have opportunities to gain the many soft skills required for collaboration with a global corporation.
Automated Neuro Knowledge Graph Curation
To develop Neurosymbolic AI systems, it’s essential to have knowledge graphs that represent all the entities of the domains and the relationships between them.
The rapid growth of Knowledge Graphs (KGs) in recent years is indicative of a resurgence in knowledge engineering. Using KGs to distill usable information from the published literature for neuro-symbolic models and expert-based systems is one of the most promising approaches to the data-consumption problem; it also provides explanations for AI techniques such as machine learning and deep learning.
Most companies today recognize that data is their most valuable asset, but it can come in many different forms and formats. Making that data usable for ML and AI tools is challenging. Currently, knowledge graph creation and curation are mostly manual or, at best, semi-automated, and thus labor-intensive. In many cases, this manual process takes a person with a high level of expertise away from the core product or scientific work they could be doing.
The automated curation of knowledge graphs from voluminous unstructured data can extract machine-readable, actionable information and can help drive knowledge discovery from big data. To get actionable information, it is necessary to identify the sources and meanings of, and relationships between, entities in the given domains.
Furthermore, automatically extracting reliable and consistent knowledge at scale, particularly from both structured and unstructured sources, is a formidable challenge. Very few attempts have been made at the automated construction of health knowledge graphs, and those that exist have limited their focus to creating triples with only one type of relationship.
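The triple structure discussed above is simple to illustrate: each fact is a (subject, relation, object) tuple, and the limitation of prior work is allowing only one relation type. A minimal sketch, with hypothetical biomedical facts invented purely for illustration, shows a graph that supports many relation types:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy knowledge graph storing (subject, relation, object) triples,
    allowing many relation types rather than a single one."""
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(list)

    def add(self, subject, relation, obj):
        self.triples.add((subject, relation, obj))
        self.by_subject[subject].append((relation, obj))

    def relations(self, subject):
        """All (relation, object) pairs known for a subject."""
        return self.by_subject[subject]

kg = KnowledgeGraph()
# Hypothetical facts for illustration only.
kg.add("aspirin", "treats", "headache")
kg.add("aspirin", "interacts_with", "warfarin")
kg.add("aspirin", "is_a", "NSAID")
```

A production system would populate such a structure automatically from text via concept and relationship extraction, which is exactly the pipeline the framework figures below describe.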
![Title: Automated Knowledge Graph Curation Framework
Conceptual view
Three bubbles to left, each with arrows pointing to right, leading to a large bubble with three interconnected bubbles inside.
From Left: Blue. Document icon. Text Preprocessing, Tokenization, Normalization, Tagging
Blue-green, Cluster icon, Categorization, clustering.
Green-blue, family tree icon, Classification, PICO.
Large, light-green circle labeled Knowledge Modeling.
Three green circles. Top-left, checklist icon, Concept Extraction, OBIE. arrow to top-right, cluster icon, Relationship Extraction, BioBERT, CNN-BiLSTM. Arrow down to third circle, KG Generation, Extract Triples
Bottom left has Legend of Abbreviations: PICO - Patient/Population, Intervention, Comparison and Outcomes. OBIE - Ontology Based Information Extraction, BioBERT - Biomedical BERT](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Knowledge-Graph-Framework-1024x576.png)
![Automated Knowledge Graph Curation Framework.
Functional View
Below this is the following list:
Data Preprocessing
Clustering - Neuro-Symbolic
PICO Classification
Relationship Extractor
Knowledge Graph Generator
To the right is a complex diagram with four boxes on top and one below all of them. Across the top, from left: Data Preprocessing includes Dataset and a Text Preprocessing bubble including Sentence Detection, Tokenization, Cleaning, and Lemmatization. This leads to box two, Neuro Symbolic Clustering, which has three vertical boxes labeled Generate Training Data, Cluster Model, and Knowledge Infusion. Each of these has a news-page icon as part of an arrow pointing toward the four boxes in section three (Cluster Model has two coming off).
Box three is labeled PICO classifier and each of the four boxes within show that each cluster feeds through a P, I, C, and O Classification process.
These four boxes have arrows that converge to one large server icon with some classification symbols in it.
That server icon is one of three elements in box four, labeled Relationship extractor and it points toward each of the other two elements, One is Tax. RE, which has Concept Mapping to Semantic Groups, Semantic Group Identification, and Relationship Extraction parts in it. The last element in the fourth box is Non-tax. RE, which takes the same clusters and leads them through Pairs Creation, Paired Relationships Detection, then Pair Relationship Identification. These final parts of box four converge with an arrow down to the bottom section of the image, which has four boxes leading from bottom right to the left. the first is labeled Triple Extractions and includes Tax. RE Triples, Concept-Cluster-Concept, Concept-Topic-Concept, Concept-PICO-Concept, and Concept-Relation-Concept.
An arrow leads left to a cluster relationship image titled Initial KG Generation. an arrow leads over to KG Completion, which has an additional arrow between the concept and the relationship items, This leads to a final image labeled Ambiguity Removal with fewer arrows between elements. This bottom box is titled KG Fusion.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Knowledge-Graph-Functional-View-1024x576.png)
Traditional models do not consider semantic, correlative, and causal relationships among domain concepts in knowledge graphs. None of the existing approaches has focused on building hierarchical relationships among extracted concepts. Additionally, concept extraction using either word embeddings or ontology-based information extraction does not give reliable accuracy, which in turn affects the accuracy of relationship extraction. Lastly, efforts have not been made to develop predictive knowledge that is interpretable to both machines and humans, enabling true symbiotic human-machine and machine-machine interactions.
This project attempts to solve the above-mentioned challenges by proposing automated domain-specific knowledge graph construction that makes use of both structured and unstructured data. This process is being repeated across multiple industries in the Deepfake Detector MVP, NeuroAssist, and Netstar AI-based web filtering projects.
Automotive Cybersecurity Education
![](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/White-BG.jpg)
![SMILES Lab logo](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/SMILE-Lab-logo.png)
![](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/Dk-to-Lt-Blue-Gradient-1.png)
Integrity Verification of Vehicle Sensors
using Digital Twins and Multimodal AI
Secure Modeling and Intelligent Learning in Engineering Systems Lab
![Top Center: Sensing layer and signal icon surrounded by two arrows pointing inward
Chat icon and Feedback layer surrounded by two arrows pointing outward. A real-looking vehicle is on left, a sketched vehicle is on right. They are pointing inward. Above the left vehicle is Physical space and Physical input is to the left with an arrow pointing at the vehicle. Beneath this car is a box. Systems - four icons, labeled In Vehicle Network, Sensors, External Network, and Driving Systems. The car on the right says digital space above and network input to the right with an arrow pointing towards it. The box below says Modules. There is a computer-with-brain icon labeled Perceptive Cognitive Layer between the two boxes and pointing at this Modules box. All four icons are repeated in this box with the addition of a fifth icon representing two gears and arrows completing a circuit between them. To the right is an icon of a person with an arrow pointing towards the Modules box labeled Cybersecurity SME.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/DTwins-image_2-1024x590.png)
Training and recruiting cybersecurity professionals is one of the most pressing issues in workforce development today. The global Institution of Engineering and Technology has released Automotive Cyber Security, a thought-leadership review of risk perspectives for connected vehicles, which explains that our trajectory toward more connected vehicles has greatly increased the need for cybersecurity professionals in the automotive industry.
Filling those roles comes with a challenge, though: the learning curve. Cybersecurity education as it's done today can be dry and theoretical, making the field seem more inaccessible than it actually is. It doesn't have to be that way.
![Top center, a cloud icon with nodes coming out is labeled Cloud-based Interconnection. Three boxes below. Left: Physical Input, Real car, sensor icon, steering wheel icon, and four wheel drivetrain icon. Image of dash view and Physical Experience label.
Arrow points to middle box: Digital Twins, ghost car, gear-circuit icon, car w/ nodes icon, and graph icon, image of virtual dashboard labelled twin module.
Arrow to box on right with hacker / cybersecurity expert icon below. Box on right is labelled, AR/VR Gaming Engine - car that looks between real and digital, three icons representing 360 vr, Image shows and is labeled VR User Experience.](https://www-s3.umflint.edu/wp/blogs.dir/10/files/2024/02/DTwins-Map-1024x576.png)
Our research group is developing tools such as virtual 'digital twins' and a visual question-answer system to teach complex interdisciplinary subjects such as automotive cybersecurity. These tools will give students a VR experience of the intricate ways that IoT sensors, driving systems, and the networks of systems and software in and out of the vehicle interact in a physical vehicle.
In summary, using neuro-symbolic logic, AI, and the flipped classroom, we're working to redesign classes from the ground up, starting with a new offering in automotive cybersecurity. The class will feature hands-on exercises on the digital twin of a real car system and will also offer 24/7 assistance from a chatbot based on a large language model like ChatGPT.