AI's Napster Moment

DeepSeek may be the Napster of AI.


Napster’s fundamental disruption was building a world of abundance in the music industry, forever changing the access to music that users expected.

Simultaneously, it destroyed the economic value of creating music in the first place.

It would be seven years between the launch of Napster (1999) and the business model that would save music, brought to us by Spotify. And it would take a full 22 years, until 2021, for the music industry to fully recover from Napster’s fundamental disruption.

This fundamental disruptive tension has made its way to artificial intelligence.

What happens to artificial intelligence and the hundreds of billions of dollars being invested if there’s no payback on the initial breakthrough?

What happens when we all expect AI to be free, and available anywhere anytime with any capability?

We may need to start asking that question because of an innovation that came from an open-source developer in China.

DeepSeek Becomes AI’s Napster

DeepSeek is a Chinese AI company creating open-source models. But it isn’t spending billions of dollars to build them; operating under Chinese chip export constraints, it has had to develop with much more modest means.

It’s what they’re creating with those means that is so striking.

Chinese AI startup DeepSeek, known for challenging leading AI vendors with open-source technologies, just dropped another bombshell: a new open reasoning LLM called DeepSeek-R1.

Based on the recently introduced DeepSeek V3 mixture-of-experts model, DeepSeek-R1 matches the performance of o1, OpenAI’s frontier reasoning LLM, across math, coding and reasoning tasks. The best part? It does this at a much more tempting cost, proving to be 90-95% more affordable than o1.

The release marks a major leap forward in the open-source arena. It showcases that open models are further closing the gap with closed commercial models in the race to artificial general intelligence (AGI). To show the prowess of its work, DeepSeek also used R1 to distill six Llama and Qwen models, taking their performance to new levels. In one case, the distilled version of Qwen-1.5B outperformed much bigger models, GPT-4o and Claude 3.5 Sonnet, in select math benchmarks.

So, the model is cheaper AND better than the original.

And the process and economics are what’s innovative here.

Over Christmas break, DeepSeek released a GPT-4-level model called V3 that was notable for how efficiently it was trained, using only about 2.79 million H800 GPU-hours, which costs around $5.6 million, a shockingly low figure.
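That headline number holds up to simple arithmetic. A rough sanity check, assuming a rental rate of about $2 per H800 GPU-hour (the rate DeepSeek’s own V3 technical report uses; an assumption here, not stated in this article):

```python
# Back-of-the-envelope check on the reported training cost.
# The $2/hour H800 rental rate is an assumption, not a quoted fact.
gpu_hours = 2_790_000      # ~2.79 million H800 GPU-hours
cost_per_hour = 2.00       # assumed USD rental rate per GPU-hour
total_cost_millions = gpu_hours * cost_per_hour / 1e6
print(f"~${total_cost_millions:.1f}M")  # prints ~$5.6M
```

For comparison, frontier labs are widely reported to spend hundreds of millions of dollars training models of similar capability, which is what makes this figure so striking.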

The key part here is that R1 was used to generate synthetic data to make V3 better; in other words, one AI was training another AI. This is a critical capability for the progress of these models.
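The "one AI training another" loop can be sketched in toy form. Everything below is illustrative, not DeepSeek's actual pipeline: the teacher function stands in for a strong model generating synthetic labeled data, and the student is a far cheaper model trained only on the teacher's outputs.

```python
import random

# Toy stand-in for a strong "teacher" model: it labels a number as
# positive or negative. In a real pipeline this would be a frontier
# model like R1 generating synthetic reasoning data.
def teacher(x):
    return "positive" if x >= 0 else "negative"

# Step 1: generate synthetic training data with the teacher.
random.seed(0)
synthetic_data = [(x, teacher(x))
                  for x in (random.uniform(-10, 10) for _ in range(200))]

# Step 2: "train" a much cheaper student on the teacher's outputs.
# Here the student just learns a decision threshold; a real pipeline
# would fine-tune a smaller LLM on the teacher-generated text.
def train_student(data):
    positives = [x for x, y in data if y == "positive"]
    negatives = [x for x, y in data if y == "negative"]
    threshold = (min(positives) + max(negatives)) / 2
    return lambda x: "positive" if x >= threshold else "negative"

student = train_student(synthetic_data)

# The student now mimics the teacher without ever seeing the
# teacher's internals -- the essence of distillation.
agreement = sum(student(x) == teacher(x) for x in range(-10, 11)) / 21
print(f"student/teacher agreement: {agreement:.0%}")
```

The economic point survives the simplification: once the expensive model exists, its outputs are enough to train a cheap imitator, so the original investment leaks out through the model's own answers.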

Just as Napster took the output of musicians, DeepSeek took the output of AI labs and gave away a better, more accessible version for free.


AI Finds Its Napster

Napster’s innovation wasn’t creating content, it was the speed and abundance that it brought to distribution. I remember downloading thousands of songs in my college dorm room in 2000 when Napster was at its peak.

The change had a chilling effect on music.

  1. Lowered the cost of content for users to zero.

  2. Magnified distribution by making music abundant rather than scarce.

  3. Destroyed monetary value for the original creator.

As I mentioned earlier, it wouldn’t be until Spotify made music subscriptions popular that the music industry truly recovered. In the meantime, at least musicians could tour because, without that, no money would be left.

What’s Next?

Here’s the conundrum in AI: If there’s no reward for spending multiple billions of dollars building foundation models because those models will just be used to train cheaper, better models, what’s the point of building the foundation model in the first place?

This is a question Alphabet, Meta, OpenAI, Anthropic, and even NVIDIA will need to answer.

Disclaimer: Asymmetric Investing provides analysis and research but DOES NOT provide individual financial advice. Travis Hoium may have a position in some of the stocks mentioned. All content is for informational purposes only. Asymmetric Investing is not a registered investment, legal, or tax advisor or a broker/dealer. Trading any asset involves risk and could result in significant capital losses. Please, do your own research before acquiring stocks.
