Uninformed AI use can lead to ‘Dear Diary’ moments

by Christopher Wright ([email protected])

Diaries are designed to exist under lock and key. Many of us who grew up in the 1980s or ’90s remember the frustration and adolescent angst we felt when a loved one peeked at or, even worse, read one of our journal entries. Whether recorded with pen and paper or, now, through an iPhone app, almost no one would want to make these innermost thoughts public—at least not voluntarily.

However, by using artificial intelligence (AI) carelessly, we might unintentionally broadcast our most sensitive “Dear Diary” moments from our personal lives and workplaces. Unfortunately, this lack of awareness is not only cringe-inducing but potentially dangerous. Without proper training on the responsible use of AI, we may be opening ourselves or our companies up to real security risks.

Heartland Forward, a think tank based in Northwest Arkansas, recently unveiled a poll gauging Middle America’s perceptions of AI. While respondents recognized the “positive difference” the technology can make in our lives and in industries like agriculture and manufacturing, the survey also found widespread hesitation and, at times, trepidation about its use. Most respondents agreed that employees should receive proper training on leveraging AI in the workplace, given doubts about the technology’s ability to make “unbiased, ethical decisions” or “safeguard privacy and data.”


This skepticism about AI is understandable. Over the past several years, the technology has increasingly enabled malicious behavior around the globe. AI systems run on training data: scrapers continuously crawl the open internet, collecting information to feed the models. The technology also depends on the data users put into it. The National Institute of Standards and Technology (NIST) noted, “Datasets used to train an AI are far too large for people to monitor and filter successfully.” According to a NIST report, cybercriminals can deliberately introduce untrustworthy or corrupt data into these systems, causing them to malfunction and produce unintended effects.

Along with these so-called bad actors, we have also seen unintentional threats arise from uninformed consumer use of AI. The Federal Trade Commission notes that machine learning models are incentivized to “constantly ingest additional data,” sometimes to the detriment of users’ data privacy or even “competitively significant data.” Take, for example, a query in ChatGPT that includes proprietary company information—financial spreadsheets, future expansion plans or yet-to-be-patented innovations. If an employee enters that data into an open platform, it’s fair game.

As individuals, we should be careful about what we upload, particularly personally identifiable information, and practice caution when posting on our social media accounts. The more we share, the more source material we hand to open AI platforms. The same applies in the workplace. As IBM has noted, it’s crucial for workers to “exercise caution when feeding data into algorithms to avoid exposing [their] company” and to “protect [the] information of others.”
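
For teams that want to put that advice into practice, one simple safeguard is to scrub obvious identifiers from text before it ever leaves the organization for an external AI service. The Python sketch below is a minimal, hypothetical illustration; the pattern list and the redact helper are assumptions chosen for demonstration, not a substitute for a full data-loss-prevention tool.

```python
import re

# Hypothetical, minimal redaction pass: mask a few obvious identifier
# patterns before a prompt is sent to any external AI platform.
# Real data-loss-prevention tools cover far more cases than this sketch.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching the patterns above with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    # Example prompt containing made-up personal details.
    prompt = (
        "Summarize this note: Jane Doe ([email protected], 501-555-0147) "
        "approved the Q3 expansion budget."
    )
    print(redact(prompt))
    # -> Summarize this note: Jane Doe ([EMAIL REDACTED], [PHONE REDACTED])
    #    approved the Q3 expansion budget.
```

In practice, a company might route every outbound prompt through a gate like this and log what was masked, so employees still get the benefit of the tools without handing over the crown jewels.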

If we don’t take precautions, AI-based platforms can serve up our data for public use. In our personal lives, we must be mindful of this threat and make smart choices, including avoiding oversharing. In the workplace, we can mitigate risk by promoting collective responsibility among those using the technology. As part of a comprehensive cybersecurity strategy, we can train and equip employees to select secure AI-powered platforms and to process data safely. With greater awareness and continued education, we can reduce the likelihood of leaving sensitive data—our own, our companies’ or our customers’—up for grabs.

Editor’s note: Chris Wright is co-founder and partner at Sullivan Wright Technologies, an Arkansas-based firm that provides cybersecurity, IT and security compliance services. The opinions expressed are those of the author.