Large Liability Model

Hello everyone, I hope your week is going well. Today, I want to share a thought that surfaced during a great offline conversation about the inherent risks of LLMs (Large Language Models) in the corporate workplace.

Before I begin, I should clarify: while I’ve previously spoken about using LLMs in your personal journey, this article will focus more on their implementation at the business level. At that scale, the impact often extends beyond individuals and can affect external entities as well.


The LLM boom:

Before diving into the main concern, I want to touch on the LLM craze currently sweeping through nearly every company. When ChatGPT was released on November 30, 2022, it was like a snowball being pushed down a hill: in its early stages, the technology simply revealed an opportunity to the public, one that eventually took the world by storm.

But the real question is: “When did LLMs become the center of everything?” Researching this does not yield a clear-cut answer. Dozens of articles attribute their popularity to market demand; others credit the growth in hardware power that made training ever more complex models feasible. Either way, I still wonder how we got here, as LLMs are known to fail, provide incorrect information, and collect data from their users.

Regardless of the path that led to our current state, we now reside in a world where LLMs are being added to everything, even items that have no need for the technology. For example, why in the world does a clothes dryer need an integrated LLM? Some will say, “So you can know when the clothing is dry or when something broke.” My simple reply: a basic sensor can tell me the clothes are dry, and I’ll know the machine is broken when it simply stops working.
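To make the “basic sensor” point concrete, here is a minimal sketch of the alternative I have in mind. Everything in it is hypothetical: read_moisture() stands in for whatever a real dryer’s hardware exposes, and the threshold is arbitrary.

```python
import random
import time

DRYNESS_THRESHOLD = 5.0  # percent moisture; tune for the actual sensor


def read_moisture() -> float:
    """Placeholder for a real sensor read (e.g., over GPIO or I2C)."""
    return random.uniform(0.0, 20.0)  # simulated reading for this sketch


def dryer_cycle() -> None:
    """Tumble until the moisture reading falls below the dryness threshold."""
    while True:
        moisture = read_moisture()
        if moisture < DRYNESS_THRESHOLD:
            print("Clothes are dry. Stopping drum.")
            break
        print(f"Still damp ({moisture:.1f}%), keep tumbling...")
        time.sleep(1)  # a real controller would poll far less often


if __name__ == "__main__":
    dryer_cycle()
```

No model weights, no network calls, no user data leaving the machine: just a reading and a threshold.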

Before I get too off-topic, let’s dive into the main concern.


Inviting in an unknown:

Now let’s speak to the corporate risk: these LLMs are being added to everything, and C-suites are simply approving it all on the promise that it will increase their company’s output.

That is a nice thought; however, you’re adding a complex algorithm that you don’t control. When I speak of control, I mean both the algorithm itself and the platform it’s integrated into. On the algorithm side, we simply accept the risk that the company building the LLM won’t mess anything up. On the platform side, the common issue is that you do not control the data the algorithm uses, largely because you do not control the platform either, such as GitBook. (Nothing against GitBook; I have enjoyed their platform.)

All this theoretical talk is great, but let’s talk about real impact. Say you have a team using a GitBook-like tool with the LLM enabled for internal procedures, while employees also have the ability to create their own pages.

  • Situation 1: The LLM collects all of your procedures, and they end up in a competitor’s hands (through a vendor breach or the provider’s training pipeline, for example).
    • Now you need to rebuild your procedures from the ground up to maintain an edge over that competitor, which means a potential loss of revenue.
  • Situation 2: A new employee writes their own duplicate procedures page with some “unapproved” items included. Another employee then asks the LLM a question about the procedure, and the poisoned/skewed data results in the wrong procedure being given (one mitigation is sketched after this list).
    • Once a procedure is broken, this can create legal liability, depending on what the procedure leads the employee to do.
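To make Situation 2 concrete, here is a minimal sketch of one mitigation: only reviewer-approved pages ever reach the corpus the LLM answers from. The Page structure and the approved flag are assumptions for illustration, not any particular vendor’s API.

```python
from dataclasses import dataclass


@dataclass
class Page:
    title: str
    body: str
    approved: bool  # set by a reviewer, never by the page's author


def build_retrieval_corpus(pages: list[Page]) -> list[Page]:
    """Return only reviewer-approved pages for the LLM to draw answers from."""
    indexed = [page for page in pages if page.approved]
    for page in pages:
        if not page.approved:
            print(f"Skipped unapproved page: {page.title!r}")
    return indexed


if __name__ == "__main__":
    pages = [
        Page("Official Shutdown Procedure", "Step 1: ...", approved=True),
        Page("Shutdown Procedure (my copy)", "Step 1: ...", approved=False),
    ]
    corpus = build_retrieval_corpus(pages)
    print(f"Indexed {len(corpus)} of {len(pages)} pages.")
```

The point is not this exact code but the gate itself: an unapproved duplicate page never becomes an answer the LLM hands to another employee.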

The path forward:

Now, moving forward, we need to stop throwing an LLM into every product. In fact, we need to treat these LLMs as unknowns, or even, in some cases, as potential threat actors. I do not say this lightly: at the corporate level, companies need to vet these technologies far more carefully rather than granting these algorithms a surplus of access.
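In practice, treating the LLM as a potential threat actor can start with something as simple as an outbound gate: nothing leaves your boundary for a third-party model unless its sensitivity tags allow it. This is a minimal sketch under assumed names; send_to_llm() and the tagging scheme are placeholders, not a real API.

```python
BLOCKED_TAGS = {"internal-only", "trade-secret", "customer-pii"}


def send_to_llm(prompt: str) -> str:
    """Stand-in for a real call to an external model provider."""
    return f"(model response to {len(prompt)} chars of input)"


def gated_query(prompt: str, tags: set[str]) -> str:
    """Refuse to send anything carrying a blocked sensitivity tag."""
    blocked = tags & BLOCKED_TAGS
    if blocked:
        raise PermissionError(f"Refusing to send data tagged {sorted(blocked)}")
    return send_to_llm(prompt)


if __name__ == "__main__":
    print(gated_query("Summarize our public FAQ.", {"public"}))
    try:
        gated_query("Summarize our pricing strategy.", {"trade-secret"})
    except PermissionError as err:
        print(err)
```

Deny by default, allow by exception: the model gets exactly the access it needs and nothing more.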

Now, again, I want to stress that this is simply my opinion on the subject.