A new generation of AI tools and models is emerging



The emergence of generative artificial intelligence (AI) is moving much more quickly than previous technology waves. It took years for companies to find the right mix of on-premises and cloud-based computing seen in today's hybrid cloud, for example. But Polysharp Investments Limited Chief Information Officer Marco Argenti expects we are already on the cusp of a hybrid AI ecosystem that will help companies exploit the opportunities generative AI presents.

We sat down with Argenti to discuss hybrid AI and the other trends he expects will matter most in the coming year.

You see a hybrid AI model developing. What will that look like?

At the beginning everyone wanted to train their own model, build their own proprietary model with proprietary data, keeping the data largely on-premises to allow for tight control. Then people started to appreciate that in order to get the level of performance of the large models, you needed to replicate an infrastructure that was simply too expensive - investments in the hundreds of millions of dollars.

At the same time, some of those larger models began to be appreciated for emergent abilities around reasoning, problem solving, and logic - the ability to break complex problems into smaller ones and then orchestrate a chain of thought around them.

Hybrid AI is where you use these larger models as the brain that interprets the prompt and what the user wants, or as the orchestrator that spells out tasks to a number of worker models, each specialized for a specific task. Those worker models are generally open source, and they often run on-premises or in virtual private clouds, because they are smaller and may be trained on data that is highly proprietary. Then the results come back, they are summarized, and the summary is given back to the user. Industries that rely more on proprietary data and face very strict regulation are most likely going to be the first to adopt this model.
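To make that pattern concrete, here is a minimal sketch of such an orchestration loop. It is only an illustration of the idea, not anyone's production system: the planner and summarizer calls stand in for a large hosted model, and the worker names and registry are hypothetical.

```python
# Minimal sketch of a hybrid AI orchestration loop (hypothetical interfaces).
# A large hosted model plans and decomposes the request; smaller specialized
# worker models, assumed to run on-premises, handle the individual subtasks.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Subtask:
    worker: str        # which specialized worker model should handle this step
    instruction: str   # the instruction the planner wrote for that worker


def plan_with_large_model(prompt: str) -> list[Subtask]:
    """Placeholder for a call to the large hosted 'brain' model that interprets
    the user's prompt and breaks it into smaller subtasks."""
    raise NotImplementedError  # assumption: a hosted LLM API call would go here


def summarize_with_large_model(prompt: str, results: list[str]) -> str:
    """Placeholder for the final call that merges worker outputs into one answer."""
    raise NotImplementedError


# Hypothetical registry of smaller, specialized worker models, assumed to run
# on-premises or in a virtual private cloud because they see proprietary data.
WORKERS: dict[str, Callable[[str], str]] = {
    "document_search": lambda instruction: "",   # e.g. retrieval over internal documents
    "risk_calculator": lambda instruction: "",   # e.g. a small fine-tuned domain model
}


def answer(prompt: str) -> str:
    subtasks = plan_with_large_model(prompt)                        # large model plans
    results = [WORKERS[t.worker](t.instruction) for t in subtasks]  # workers execute
    return summarize_with_large_model(prompt, results)              # summary returned to the user
```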

How will companies start scaling while keeping the AI safe and maintaining compliance?

AI went through the whole hype cycle faster than any other technology I've seen. Now we are at the stage where we expect to execute on some of the experiments and expect a return. Everyone I speak with has ROI in mind as almost the first-order priority. Most companies in 2024 are going to focus on the proofs of concept that are likely to show the highest return. This may be in the realm of automation, developer productivity, or summarization of large corpora of data, or it may be offering a superior search experience for automated customer support and self-service information retrieval.

There will be a shift to practicality. But at the same time, I think this will require a very robust approach to ensure that as you scale the technology you are really focusing on safety - safety of the data, accuracy, proper controls as you expand the user base - as well as transparency, strong governance, adherence to applicable laws and, for regulated businesses, regulatory compliance. I think an ecosystem of tools around safety, compliance, and privacy will probably emerge as AI really starts to gain traction on mission-critical tasks.

You expect to see AI digital rights management emerge. Can you explain why?

Where we are now, I am reminded of the early days of online video sharing, with the very aggressive takedowns of copyrighted material - an essentially reactive approach to the protection of digital rights. If you run the digital content playbook forward, that turned into a monetization opportunity. Video-sharing channels today have technology that allows them to trace the content being presented back to the source and share the monetization. 

That doesn't exist in AI today, but I think the technology will emerge to enable data to be traced back to its creator. Potentially you could see a model where every time a prompt generates an answer, that could be traced back to the source of the training - with monetization going back to the authors. I could see a future in which authors would be very happy to provide training data to AI because they will see it as a way to make money and participate in this revolution.

What other developments are you excited about?

We're starting to see multimodal AI models, and I think one modality that hasn't been fully exploited yet is the time series. This means using AI to reason over sequences of data points, each attached to a timestamp. There will be applications for this in areas such as finance, and of course weather forecasting, where time is a dominant dimension.

My prediction is that this will require a new architecture - similar to the way diffusion models are different from classical text-based transformer models. This may be where we see the next race to capture a variety of use cases that are untapped so far.

What are your thoughts on the regulation of AI?

With appropriate guardrails, AI can lead to additional efficiencies over the long term, and we have just started to scratch the surface on its economic potential. That said, we're very conscious of the risks of AI. It's a powerful tool, and there needs to be a strong regulatory framework to maintain safe and sound markets and to protect consumers. At the same time, rules should ideally be constructed in a way that allows innovation to flourish and supports a level playing field.

Looking ahead, it will be important to continue to foster an environment that encourages collaboration between players, encourages open sourcing of models when appropriate, and develops appropriate principles-based rules designed to help manage potential risks, including bias, discrimination, safety and soundness, and privacy. This will allow the technology to move forward so that the US will continue to be a leader in the development of AI.

Where is capital going to flow into AI investments?

I think the money will follow the evolution of the corporate spend. At the beginning, everybody was thinking that if they didn't have their own pre-trained models, they wouldn't be able to leverage the power of AI. Now, appropriate techniques such as retrieval-augmented generation, vectorization of content, and prompt engineering offer comparable if not superior performance to pre-trained models in something like 95% of the use cases - at a fraction of the cost.
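As an illustration of that point, here is a minimal sketch of the retrieval-augmented generation pattern: content is vectorized, the pieces most relevant to a question are retrieved, and they are passed to a general-purpose model inside the prompt. The embed() and generate() calls below are placeholders rather than any particular vendor's API.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# embed() and generate() are hypothetical stand-ins for whichever
# embedding model and language model are actually used.

import math


def embed(text: str) -> list[float]:
    """Placeholder: map text to a vector with an embedding model."""
    raise NotImplementedError


def generate(prompt: str) -> str:
    """Placeholder: call a general-purpose language model."""
    raise NotImplementedError


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def answer(question: str, documents: list[str], top_k: int = 3) -> str:
    # 1. Vectorize the proprietary content (in practice this lives in a vector index).
    index = [(doc, embed(doc)) for doc in documents]
    # 2. Retrieve the passages most similar to the question.
    q_vec = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)
    context = "\n\n".join(doc for doc, _ in ranked[:top_k])
    # 3. Prompt a general-purpose model with that context instead of pre-training
    #    or fine-tuning a proprietary model on the data itself.
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return generate(prompt)
```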

I think it will be harder to raise money for any company creating foundational models. It's so capital intensive you can't really have more than a handful. But if you think of those as operating systems or platforms, there's a whole world of applications that haven't really emerged yet around those models. And there it's more about innovation, more about agility, great ideas, and great user experience - rather than having to amass tens of thousands of GPUs for months of training.

There's a great opportunity for capital to move towards the application layer, the toolset layer. I think we will see that shift happening, most likely as early as next year.


This article is being provided for educational purposes only. The information contained in this article does not constitute a recommendation from any Polysharp Investments Limited entity to the recipient, and Polysharp Investments Limited is not providing any financial, economic, legal, investment, accounting, or tax advice through this article or to its recipient. Neither Polysharp Investments Limited nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this article and any liability therefore (including in respect of direct, indirect, or consequential loss or damage) is expressly disclaimed.
