AI - #8 (Generative AI Economics - Ouch!)


With all of the significant investments and bets being made on GAI, there may be quite a bit of leaping before looking going on here. I am using Bard more often now than ever, and for certain tasks it is genuinely valuable. Beyond that, I have hated Google Search for at least the last decade … and this is absolutely a way to get rid of the ad-filled, click-baity search engines I am sick of. Search is truly worthless at this point, and as Nietzsche famously (almost) said, “Search is dead”.

So what is the look-before-you-leap economics of the GAI business model?

Let’s ask Bard ... in its own words, what it costs Google to serve each GAI query:

"The incremental cost per query for Google Bard is estimated to be $0.003 to $0.028. This means that for every Bard query that Google processes, it incurs an additional cost of $0.003 to $0.028, on top of the base cost of processing a regular search query. (note the “on top of” estimate. In other words, these numbers obfuscate the total cost; Also - the “on top” number is not subsidized, i.e. paid for like search with ad revenues. At least not yet).

The exact cost per query will vary depending on a number of factors, including the size and complexity of the query, the model size, and the hardware used to run the model. However, analysts at UBS have estimated that the annualized incremental cost bill for Bard in 2023 could be $1.2 billion to $12.6 billion, depending on how widely Bard is used. (OUCH!!!!)

It is important to note that these are just estimates, and the actual cost of Bard queries could be higher or lower. Google has not publicly disclosed the cost of running Bard, and the company is constantly working to improve the efficiency of its AI models.

(Infomercial:) Despite the cost, Google believes that Bard is a valuable investment. The company believes that Bard can help to improve the quality of search results and make search more engaging and informative for users."
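Bard quotes a per-query cost and UBS quotes an annual bill, but neither shows the implied query volume. Here is a rough back-of-envelope check (a sketch, not anything Google or UBS published; the mid-range per-query cost is my own assumption):

    # Rough check: what query volumes do the UBS annualized figures imply
    # at Bard's quoted incremental cost per query?
    low, high = 0.003, 0.028                 # quoted $ per Bard query
    per_query = (low + high) / 2             # assumed mid-range: $0.0155
    for annual_bill in (1.2e9, 12.6e9):      # quoted UBS range, $ per year
        queries_per_day = annual_bill / per_query / 365
        print(f"${annual_bill / 1e9:.1f}B/yr -> ~{queries_per_day / 1e6:,.0f}M queries/day")

Under those assumptions, the low end of the UBS range corresponds to a couple hundred million Bard queries a day and the high end to a couple of billion. Given that Google Search is commonly estimated to handle several billion queries a day, the high end is basically the “what if everyone actually uses this” scenario, which is exactly what makes the bill scary.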

It is intuitively obvious that really large models (which is the trend, plus combining multiple models for certain applications) require HUGE amounts of compute AND energy. I touched on this in Blog post #5. Bard’s figures above likely do not include the energy component, just the compute.

As we continue, I ask Bard how GPUs and their costs scale on the infrastructure and hosting side of the equation. Bard says:

“The cost of LLMs per GPU varies depending on a number of factors, including the type of GPU, the cloud provider, and the on-demand or reserved pricing option. However, as a general rule of thumb, you can expect to pay around $1-$2 per hour per GPU for training or deploying an LLM.

For example, an Nvidia A100 GPU costs around $1-$2 per hour on AWS on-demand pricing. This means that training a 10B parameter model on a single A100 GPU would cost around $150,000, while training a 100B parameter model would cost around $1,500,000.

It is important to note that these are just estimates, and the actual cost of using an LLM can vary depending on your specific needs. For example, if you need to train a very large model or deploy it on a large scale, you may need to use multiple GPUs or even a dedicated GPU cluster.

Here are some tips for reducing the cost of using LLMs:

  • Use spot pricing to get discounted GPUs on AWS.

  • Use pre-trained models whenever possible.

  • Fine-tune pre-trained models instead of training from scratch.

  • Use a distributed training framework to train your model on multiple GPUs or even a GPU cluster.

  • Optimize your model to reduce its memory and compute requirements.

  • Consult your doctor. (Bard did not say this)
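Taking Bard’s dollar figures at face value, it is worth seeing how much raw GPU time they imply. A quick sanity check, assuming the midpoint of the quoted $1–$2 per GPU-hour (my assumption, purely to make the arithmetic concrete):

    # Convert Bard's quoted training costs into implied GPU time.
    hourly_rate = 1.5                        # assumed midpoint of the quoted $1-$2/hr per GPU
    for label, cost in [("10B-parameter model", 150_000), ("100B-parameter model", 1_500_000)]:
        gpu_hours = cost / hourly_rate
        gpu_years = gpu_hours / (24 * 365)
        print(f"{label}: ~{gpu_hours:,.0f} GPU-hours (~{gpu_years:.0f} years on a single A100)")

That works out to roughly 11 GPU-years and 114 GPU-years respectively on one card, which is exactly why real training runs are spread across hundreds or thousands of GPUs; the dollar figure stays roughly the same, only the calendar time shrinks.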

So what about energy costs? It’s so cheap now, go buy an EV! LOL. Of course, if you are cloud computing, that is the host's problem ... however, rest assured that cost will get back to you one way or the other.

The energy usage of LLM development is directly proportional to the cost of development. This is because the vast majority of the cost of developing an LLM is the cost of computing power.

To train an LLM, you need to feed it a massive dataset of text or code. The model then learns to generate new text or code that is similar to the data it was trained on. This process requires a lot of computing power, as the model needs to perform trillions of calculations per second.

The more parameters an LLM has, the more computing power it requires to train. For example, GPT-3, which has 175 billion parameters, required 284,000 kWh of energy to train. This is equivalent to the energy consumption of an average American household for over 9 years, or just one of Al Gore’s 4-course dinners. This is manageable for a single LLM. However, for this scheme to work, it will require many hundreds if not thousands of LLMs to make it useful and worth paying for.
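Neither Bard nor I showed where a number like 284,000 kWh might come from, so here is a rough sketch using the widely cited “about 6 FLOPs per parameter per token” training rule of thumb. Every input below (token count, GPU utilization, power draw, datacenter overhead, household consumption) is an assumption for illustration, not a measured figure:

    # Back-of-envelope training energy for a GPT-3-scale model. All inputs are
    # illustrative assumptions, not measured values.
    params = 175e9                    # parameter count (GPT-3 scale)
    tokens = 300e9                    # assumed training tokens (GPT-3 paper reports ~300B)
    train_flops = 6 * params * tokens # rule of thumb: ~6 FLOPs per parameter per token

    gpu_peak_flops = 312e12           # A100 peak at 16-bit precision, per Nvidia's spec sheet
    utilization = 0.35                # assumed fraction of peak actually sustained
    gpu_hours = train_flops / (gpu_peak_flops * utilization) / 3600

    gpu_power_kw = 0.4                # ~400 W per A100, ignoring host and cooling detail
    pue = 1.1                         # assumed datacenter power usage effectiveness
    energy_kwh = gpu_hours * gpu_power_kw * pue

    household_kwh_per_year = 10_600   # assumed average US household electricity usage
    print(f"~{gpu_hours:,.0f} GPU-hours, ~{energy_kwh:,.0f} kWh, "
          f"~{energy_kwh / household_kwh_per_year:.0f} household-years")

Under these assumptions the estimate lands in the same few-hundred-thousand-kWh ballpark as the figure quoted above (and the household comparison comes out considerably larger than nine years), which is the real point: a single frontier-scale training run is a utility bill, not a rounding error, and that is before anyone re-trains.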

The energy consumption of LLMs is a growing concern, as the size and complexity of LLMs continue to increase. Researchers are working on developing more efficient training algorithms and hardware, but it is likely that LLMs will remain a significant energy consumer for the foreseeable future. Yes, GAI could possibly end up in a bitcoin-like energy loop.

The big question is: can GAI create a business model that supports the very significant investments required for this to work? My guess is no. However, my second guess is that GAI will morph into something better (that we can’t quite see yet) with a business model that supports it. It’s a journey worth trying! Prompt away, it’s only 2.8 cents a pop!


Jim Pyers

Multi-variant human.  Value for Value.

https://www.linkedin.com/in/jamespyers/