Tech leaders around Australia are considering how to integrate AI into their organisations to drive efficiency, innovation and competitive advantage. But should open or closed models be used as the foundation for AI-powered applications and workflows? The decision matters because the choice of model affects scalability, control, cost and adaptability.
It’s not an easy choice. Today’s AI leader can become tomorrow’s laggard overnight: only weeks separated the releases of OpenAI’s o1 and DeepSeek’s R1, and further cutting-edge models, some already leaked, are expected soon from both open and closed sources. So, what are tech leaders to do? This article explores the strengths and limitations of both approaches, and why we believe it’s important to build for flexibility, not exclusivity, to future-proof your AI investment.
Open source
Similar to the crypto mantra “not your keys, not your coins,” open models give you ownership and control but require infrastructure management. You’ll need to handle GPU resources, memory sizing and compute monitoring, but you gain the ability to fine-tune and own your AI capabilities.
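As a rough illustration of what owning the infrastructure means in practice, the sketch below loads an open-weights model locally with the Hugging Face transformers library. The model name and generation settings are placeholders rather than recommendations, and a production deployment would typically sit behind a serving layer with GPU, memory and throughput monitoring.

```python
# Minimal sketch of self-hosting an open-weights model with Hugging Face
# transformers. Model name, precision and generation settings are illustrative
# placeholders; assumes a GPU with enough memory for the chosen model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # any open-weights model you control

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map="auto",   # spread weights across available GPUs
    torch_dtype="auto",  # pick a precision appropriate for the hardware
)

prompt = "Summarise the key risks of vendor lock-in for AI platforms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```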
Closed source
Closed models from tech giants offer cutting-edge capabilities without the burden of maintenance, but you're dependent on their pricing, policies and decisions. In short, you gain power, but with less control.
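For contrast, calling a hosted (closed) model is typically a few lines of API code, with the provider handling serving and scaling; the trade-off is that pricing, rate limits and model deprecations sit outside your control. The snippet below uses the OpenAI Python client as one example; the model name and prompt are placeholders.

```python
# Minimal sketch of calling a closed model through a vendor API (OpenAI's
# Python client shown as one example). No infrastructure to manage, but the
# provider controls pricing, availability and the model lifecycle.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; subject to the provider's roadmap
    messages=[
        {"role": "user", "content": "Summarise the key risks of vendor lock-in for AI platforms."},
    ],
)
print(response.choices[0].message.content)
```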
Exercise caution when deciding to fine-tune models, and do so only at the right time for the right use case. Fine-tuning is warranted only when:
The off-the-shelf model is genuinely insufficient (often, enhanced engineering around the model would suffice).
You possess truly differentiated data for fine-tuning or scalable feedback mechanisms for reinforcement learning.
A cautionary tale from the financial sector demonstrates this trade-off: after investing millions in a specialised financial model, a major firm saw its custom AI quickly outpaced by generalist models such as GPT-4. Even significant investments in specialised models can become obsolete rapidly.
If you’re considering fine-tuning, you might assume open models are the best choice for keeping full ownership. However, closed models also allow fine-tuning through APIs. In those cases, your real IP isn’t the model itself but the data, processes and feedback loops used to refine it.
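As an illustration of fine-tuning without owning the weights, the sketch below uses OpenAI’s fine-tuning API: you upload chat-formatted training examples and start a job against a base model. The file name and base model are placeholders; the durable asset is the curated dataset and the evaluation loop around it, which you can reuse if you later switch providers or move to an open model.

```python
# Minimal sketch of fine-tuning a closed model through a provider API
# (OpenAI's fine-tuning endpoints shown as one example). File name and base
# model are placeholders; the reusable IP is the training data itself.
from openai import OpenAI

client = OpenAI()

# 1. Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("finetune_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder base model
)
print(job.id, job.status)
```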
Standard benchmarks are like timing a 100m sprint – they don't tell you who's best at soccer, cricket, or table tennis! Your AI needs to be tested on tasks that match your real-world use case.
We have been testing large language models (LLMs) for some time and have found that models performing similarly on public benchmarks deliver strikingly different real-world results. When testing our AI Agents on data migration use cases – specifically code conversion between database systems – some top-rated LLMs blindly translated syntax (such as indexes) that didn’t even exist in the target system, while others adapted intelligently. This suggests that some models may have overfit to public benchmarks while lacking real-world adaptability. Only trust testing and benchmarks relevant to your real-world use cases.
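One lightweight way to operationalise this is a small, domain-specific evaluation harness: a set of your own test cases, run against every candidate model through the same interface and scored with checks that reflect your use case. The sketch below is illustrative only; `call_model` is a hypothetical stand-in for whichever open or closed model client you use, and the SQL conversion cases are deliberately simplified.

```python
# Illustrative harness for benchmarking candidate models on your own use case
# (here, a toy version of SQL dialect conversion). `call_model` is a
# hypothetical adapter over whichever open or closed model is being tested.
from typing import Callable

TEST_CASES = [
    # (prompt, substring that must appear, substring that must NOT appear)
    ("Convert to PostgreSQL: SELECT TOP 10 * FROM sales;", "LIMIT 10", "TOP 10"),
    ("Convert to PostgreSQL: SELECT GETDATE();", "NOW()", "GETDATE"),
]

def evaluate(model_name: str, call_model: Callable[[str, str], str]) -> float:
    """Return the fraction of domain-specific checks a model passes."""
    passed = 0
    for prompt, must_contain, must_not_contain in TEST_CASES:
        output = call_model(model_name, prompt)
        if must_contain in output and must_not_contain not in output:
            passed += 1
    return passed / len(TEST_CASES)

# Usage: compare any mix of open and closed models with the same harness.
# for name in ["open-model-a", "closed-model-b"]:
#     print(name, evaluate(name, call_model))
```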
Opportunities: Multi-expert capacity, language mastery, reasoning, super-human abilities.
Risks: Black-box operations, potential for confident-sounding errors, hallucinations and bias.
Rather than betting on a single model, create systems that can work with any LLM:
Field-test models beyond public benchmarks using your specific use cases and real-world scenarios that reflect your actual business needs.
Decouple your LLM workflows and/or AI Agentic solutions from any specific model – modularity is queen (see the sketch after this list).
Use the right LLMs (whether open or closed) to uplift your data assets: transform them from “gold” to “diamond” quality, perform data remediation, and build knowledge graphs to better understand the relationships in your data.
Implement meaningful human oversight with transparency.
Implement responsible AI tools and practices that bring policies to life at the system level, creating model-agnostic safeguards that consistently validate outputs across any AI model you deploy.
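As referenced above, the sketch below shows one way to keep workflows decoupled from any single model and to apply model-agnostic output checks: the application depends on a small interface, and each provider (open or closed) sits behind its own adapter. The class and function names are hypothetical illustrations, not a prescribed design.

```python
# Illustrative model-agnostic design: the workflow depends on a small
# interface, each provider is an adapter behind it, and output validation is
# applied uniformly. All names here are hypothetical, not a prescribed design.
from abc import ABC, abstractmethod

class LLMClient(ABC):
    """Thin interface the rest of the workflow depends on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedAPIClient(LLMClient):
    """Adapter for a closed, vendor-hosted model (API call elided)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wrap your vendor API call here")

class SelfHostedClient(LLMClient):
    """Adapter for an open-weights model you serve yourself (call elided)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wrap your local inference call here")

def validated_completion(client: LLMClient, prompt: str) -> str:
    """Run any model through the same model-agnostic safeguards."""
    output = client.complete(prompt)
    if not output.strip():
        raise ValueError("empty response")
    # Add domain-specific checks here (policy rules, schema validation,
    # human-review triggers) so they apply regardless of the model behind it.
    return output
```

Because the workflow only ever sees `LLMClient`, swapping a closed model for an open one (or vice versa) becomes a configuration change rather than a rewrite, and the safeguards travel with the system instead of the model.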
Success in AI won’t come from having the biggest models but from building the smartest, most adaptable systems. Prioritise flexibility over exclusivity to keep your AI investment relevant and competitive in a rapidly evolving landscape.
If you would like to find out more about how best to adopt flexible AI in your organisation, please contact Murad Khan, Jahanzeb Azim, Samir Ghoudrani or Musab Anwar.
Samir Ghoudrani
Senior Manager, Advisory, PwC Australia
Musab Anwar
Manager, Advisory, PwC Australia