
Spot the AI Lie

Everyone wants to be an LLM AI expert; few have actually done it. It shows.

By Bradley Clerkin, BreakFree Solutions' CTO


Navigating the labyrinthine world of LLM-driven AI can be a daunting endeavor. As more businesses embrace AI, distinguishing between genuine understanding and jargon-spewing pretenders becomes paramount. We felt it necessary to illuminate key signals indicating a lack of deep understanding or potential misinformation about LLM-driven AI.


Fallacy 1: The Misplaced Emphasis on Fine-Tuning


When embarking on an AI journey, you may encounter partners or providers touting the value of "fine-tuning." This perspective comes from a legacy machine learning mindset. While fine-tuning was essential in traditional machine learning, it's far less applicable to large language models (LLMs), which encompass billions of parameters trained on vast amounts of data. The idea of meaningfully reshaping that enormous network with your comparatively tiny speck of data is not only unrealistic but also reflects a fundamental misunderstanding of the task at hand. Modern AI transcends the conventional data challenge and becomes a systems problem, closer to robotics than to data science, requiring a more sophisticated approach than mere fine-tuning.
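As a rough sketch of the alternative, rather than fine-tuning, your internal data can simply ride along in the prompt at request time while the model itself stays untouched. The call_model stand-in below is a hypothetical placeholder for whichever LLM provider you use, not a prescription:

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (swap in whichever provider and SDK you use)."""
    return "<model answer>"

def answer_with_company_context(question: str, company_docs: list[str]) -> str:
    # The model itself stays frozen; your comparatively small internal data
    # simply rides along in the request as context.
    context = "\n\n".join(company_docs)
    prompt = (
        "Answer the question using only the company context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_model(prompt)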


Fallacy 2: Viewing Prompt Engineering as the End State


Effective interaction with models like ChatGPT through prompt engineering is part of the solution, not the end goal. Focusing solely on prompt engineering means missing the forest for the trees. Yes, we need to engage with the LLM, but that's only one of the four to six key components of current AI platforms. Treating it as the end state exposes you to companies that only scratch the surface of AI transformation and fail to deliver a holistic solution.
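Here's a minimal sketch of that point. The stages shown (retrieval, prompt assembly, model call, validation) are illustrative assumptions standing in for platform components, not our official list, and every helper is a trivial placeholder:

from dataclasses import dataclass

def retrieve_documents(question: str) -> list[str]:
    return ["<relevant internal document>"]      # placeholder data / retrieval layer

def build_prompt(question: str, docs: list[str]) -> str:
    context = "\n".join(docs)
    return f"Context:\n{context}\n\nQuestion: {question}"   # prompt engineering: one stage, not the end state

def call_model(prompt: str) -> str:
    return "<model answer>"                      # placeholder model invocation

def validate_output(raw: str) -> str:
    return raw.strip()                           # placeholder guardrails / validation

@dataclass
class PlatformResponse:
    answer: str
    sources: list[str]

def run_request(question: str) -> PlatformResponse:
    docs = retrieve_documents(question)
    prompt = build_prompt(question, docs)
    raw = call_model(prompt)
    return PlatformResponse(answer=validate_output(raw), sources=docs)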


Fallacy 3: Software Development Becoming Obsolete


AI can’t fully replace developers, but it will replace some of them. If you think that’s not possible, you likely have a flawed approach, or you haven’t tried to engineer a solution that would make it happen. If you think it’s going to replace all developers, you haven’t tried to build such a solution either; that outcome is unlikely anytime soon. The reality is that we will see a reduction in the number of developers needed for lower-value development work, and a productivity increase from our best and brightest developers.


Fallacy 4: Unfounded Security Concerns About Online Models


A common misconception is the alleged security threat posed by online models, which leads some companies to dismiss these powerful tools outright. The companies training models like ChatGPT invest enormous resources in their development, making the capabilities of offline models pale in comparison. An effectively designed system can engage with online models while maintaining rigorous data security protocols, dispelling the myth that online models inherently put data at risk. If you’ve built a capable AI platform, you recognize that AI can actually increase your company’s ability to operate securely. LLM AI platforms provide more security, not less.
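As one illustrative control among the many an effectively designed system would layer in, here is a minimal sketch that masks obviously sensitive values before anything leaves for the online model. The patterns and the call_model placeholder are assumptions for the example, not a complete security program:

import re

REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    # Mask sensitive values before the prompt ever leaves your environment.
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def call_model(prompt: str) -> str:
    return "<model answer>"   # placeholder for the online model call

def secure_ask(question: str) -> str:
    return call_model(redact(question))

print(redact("Summarize the ticket from jane.doe@example.com, SSN 123-45-6789."))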


Fallacy 5: The Premature Focus on Data Platform Capabilities


A pervasive myth suggests that robust data platform capabilities are a prerequisite for effective AI execution. We see no fewer than ten new data and AI-related memes shared weekly on LinkedIn by big data SMEs and companies. While AI can produce impressive results when engaging with well-governed data lakes or Snowflake environments, they are not an absolute necessity. To meet business users' demands, AI can begin by engaging with source data, deferring the larger task of grappling with the big data problem to a later stage. This approach allows for a more agile and effective implementation of AI.
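A minimal sketch of "start with source data," assuming a raw CSV export and the same hypothetical call_model placeholder; the file handling here is illustrative, not a prescription:

import csv

def call_model(prompt: str) -> str:
    return "<model answer>"   # placeholder for your LLM call

def answer_from_source(question: str, csv_path: str, max_rows: int = 50) -> str:
    # Read straight from an operational export; no data lake required yet.
    with open(csv_path, newline="") as f:
        rows = [", ".join(row) for _, row in zip(range(max_rows), csv.reader(f))]
    data = "\n".join(rows)
    prompt = f"Raw source data:\n{data}\n\nQuestion: {question}"
    return call_model(prompt)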


The Truth: The Future is Agent Engineering


The key to comprehending and leveraging AI lies in grasping agent engineering. The future rests on creating autonomous agents, akin to a microservice architecture, each with a clear directive. With a solid, AI-enabled cloud platform, these agents, each endowed with an LLM brain, can achieve specific objectives. Designing this intricate network is challenging and demands considerable thought and design work, but it is the path to truly harnessing AI.
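To make the idea concrete, here is a minimal sketch: each agent gets a single directive and an LLM brain, and an orchestrator composes them the way you'd compose microservices. The agent names, the two-agent hand-off, and the call_model placeholder are illustrative assumptions, not a reference implementation:

from dataclasses import dataclass

def call_model(prompt: str) -> str:
    return "<model output>"   # placeholder for the LLM "brain"

@dataclass
class Agent:
    name: str
    directive: str            # one clear objective per agent

    def run(self, task: str) -> str:
        return call_model(f"Directive: {self.directive}\nTask: {task}")

# Compose small, single-purpose agents the way you'd compose microservices.
researcher = Agent("researcher", "Gather the facts relevant to the request.")
writer = Agent("writer", "Draft a concise answer from the researcher's notes.")

def handle(request: str) -> str:
    notes = researcher.run(request)
    return writer.run(f"Request: {request}\nNotes: {notes}")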


Don’t believe me? Hit us up for our deep dive on AI platforms and agents, and we can validate everything in this article. We’ve done the hard work and built working AI platforms.





