Learning AI From Zero Should Be a Priority for Any Business Owner

This realization was only possible when I decided to ignore all the buzz about AI and really tried to understand what the heck was going on.

Gui Renno
7 min read · Jan 19, 2024
Source: SpaceX

Hype or Reality?

ChatGPT was probably the most successful product launch ever: 100 million users in 2 months (the runner-up, TikTok, took 9 months).

OpenAI, the company behind it, is currently valued at around US$ 80 billion.

OpenAI has barely 500 employees.

That's US$ 160 million of business value per employee.

Ludicrous.

But how much of this is hype and how much is real?

The only way to find out is to resist the temptation to stay on the surface, with entertaining new launches and conspiracy theories, and go deep into the boring stuff to find out what is really going on.

I've been doing exactly that for the last couple of years, organizing what I learned through first-principles reasoning, and now I'm sharing the essentials with you, especially if you lack a computer science background like I used to.

Don't worry, understanding AI is simpler than it may seem.

First Principles of AI

AI is "just another" computer algorithm

A computer algorithm is simply a formula, like the ones we learn at school.

The difference is that once this "school formula" is inside a computer, it will execute an action, usually something like:

If "x" happens (someone presses the power button), then "y" happens (computer starts).

We can think of it as a fundamental formula for traditional computing:

Input + Algorithm = Output

The algorithm is the set of rules, defined by a human, that bridges specific inputs (stimuli) to outputs (results).
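To make this concrete, here is a tiny, purely illustrative Python sketch of a rules-based algorithm. The function name and the rules are made up just for this example:

```python
# Rules-based (traditional) computing: a human writes every rule in advance.

def power_button_algorithm(input_event):
    """A human-defined rule set: if "x" happens, then "y" happens."""
    if input_event == "power_button_pressed":
        return "computer_starts"        # rule decided by a human
    elif input_event == "power_button_held":
        return "computer_shuts_down"    # another human-decided rule
    else:
        return "nothing_happens"        # anything the human didn't foresee

# Input + Algorithm = Output
print(power_button_algorithm("power_button_pressed"))  # -> computer_starts
```

Notice that every possible outcome had to be anticipated by the person who wrote the rules; the machine adds nothing of its own.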

This may sound like “starting to get technical” but trust me, it’s simple and will be key to understanding the power of AI soon.

This "fundamental formula" of traditional computing, aka rules-based, has generated massive technological progress for the last decades.

However, in the 1950s at Dartmouth College (New Hampshire, USA), a group of nerds started wondering if machines could do more.

What if computers could think like humans?

First AI Conference, Dartmouth College, 1956

Instead of solely executing what humans wanted, was it possible to make machines think and decide for themselves?

Those first studies at Dartmouth coined the term “Artificial Intelligence” and gave rise to a field known as “Cognitive Computing”, whose goal is to emulate how the human brain works.

The First AIs were actually not very smart

The first AIs were mostly hyper-specialized machines for specific tasks (like playing chess) that operated with traditional rules-based computing.

Those AIs may look "smart" in action, but they are just executing human knowledge written into their code.

This means it is impossible for a traditional AI, like IBM's chess player "Deep Blue", to come up with a new play that wasn't in its original program, meaning:

It’s not creative at all.

Even though Deep Blue shocked the world in 1997 by beating Garry Kasparov, the #1 chess player in the world, it was still a 100% human-crafted technology.

Definitely impressive, but not yet intelligent. If we call it intelligent, every computer before it was also intelligent, which would make AI an empty concept.

Deep Blue vs Garry Kasparov, 1997

The rise of really smart AI

In the 1990s everything changed with the advance of disruptive innovations, notably:

Big Data, strong processing power, and Machine Learning.

We won't get into details here, but what matters is that it allowed a new approach to computing as a whole.

Remember the "fundamental formula" for traditional computing?

Input + Algorithm = Output

Well, it was no longer untouchable.

If it's true that "the speed of the fleet is not determined by the fastest ship but by the slowest one" (Wen Jiabao), then the ship being left behind was definitely the human element.

Mapping and coding every instruction for the machines to run is very resource-intensive and time-consuming.

What if the machines could do it by themselves?

Well, not completely alone: humans would still need to give learning instructions (through code, usually in Python), but all the "dirty work" would be done by the machine.

And a crucial detail: in a far faster and far deeper way.

Let's think of an analogy: an overweight guy (the computer) trying to get fit as fast as possible (solve a problem).

So instead of simply hiring a regular "dictator personal trainer" (a human coder), you have another option:

Step into a time machine in which 1,000 years go by in one hour. Inside is the largest gym you can imagine. You receive a few instructions from the best fitness influencers (examples) and are told to explore as many possibilities as you can in your hardcore training.

After that much experimentation, you will probably come out with the best get-fit strategies in the world, including innovative, super-effective ones that are true game-changers.

That's kind of how Machine Learning operates when training models.
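If you prefer code to analogies, here is a minimal sketch of that flip, using scikit-learn and a made-up toy dataset purely for illustration (the feature names and numbers are invented for this example, not taken from any real study):

```python
# A minimal sketch of the Machine Learning flip (illustrative only):
# instead of a human writing the rules, we give the machine inputs AND
# the desired outputs (examples), and it works out the "algorithm" itself.
# Assumes scikit-learn is installed.

from sklearn.tree import DecisionTreeClassifier

# Inputs: [hours_of_exercise_per_week, daily_calories]
examples = [[0, 3500], [1, 3000], [5, 2200], [7, 2000], [10, 1800]]
# Outputs we want the machine to learn: 0 = "not fit", 1 = "fit"
labels = [0, 0, 1, 1, 1]

# Traditional computing: Input + Algorithm = Output
# Machine Learning:      Input + Output    = (learned) Algorithm
model = DecisionTreeClassifier().fit(examples, labels)

# The learned "algorithm" can now handle cases no human explicitly coded.
print(model.predict([[6, 2100]]))  # most likely [1]: it inferred the rule itself
```

The key difference: nobody wrote an "if hours_of_exercise > x" rule. The model worked out its own rules from the examples, which is exactly the "dirty work" mentioned above.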

It's more about discoveries than inventions

In this new approach of training AI models through Machine Learning, a whole new world has been opened.

Remember the IBM Chess AI Deep Blue?

Well, 20 years after its victory against Kasparov, it was obsolete compared to a new Chess AI that operated on this new Machine Learning paradigm.

Google's AlphaZero "taught itself" to play chess and beat the best "human-programmed AI" (Stockfish) 155 times, losing only 6.

The most impressive thing, though, is that it invented a bunch of new strategies and moves. It innovated deeply in a game that's been around for 1,500 years.

Today, many chess masters study AlphaZero's "out of the box" chess solutions and try to learn from them.

In a sense, we are "discovering" these AIs more than "inventing" them, because humans don't really comprehend them.

Another groundbreaking example was the announcement by the Massachusetts Institute of Technology (MIT) that a novel antibiotic, capable of killing a highly resistant bacterium, had been discovered.

The AI provided this antibiotic candidate, MIT researchers tested it, and it worked, but they (who happen to be very smart people) don't really know how it came to that conclusion or why exactly it worked.

This new field of "cognitive computing" had a massive challenge from the start:

we still have relatively little knowledge about how the brain works, and even less about the mind.

So, how could we replicate something we don't really understand?

The funny thing is, we kind of managed to replicate it through multiple Machine Learning techniques (supervised, unsupervised, reinforcement, etc.), but just like with our brains or minds, we don't fully understand how it works.

Nor fully control it.

So, is it doomsday or an age of abundance?

I don't believe any honest person can provide that answer.

We are probably dealing with the largest power source in human history.

It can go both ways:

An age of abundance in which:

Humans no longer need to work because machines take care of most things, technology evolves to make the Earth sustainable and repairable, and wars over resources become senseless because there is no scarcity. A big-time utopia, sure, but why not?

The other option is a cyberpunk dystopian world, in which AI deepens the income-inequality gap and causes more concentration of power, poverty, and wars.

It may sound like an exaggeration, but maybe it’s not.

OpenAI has been sponsoring major Universal Basic Income (UBI) studies, in case human work really becomes obsolete, and has been touching on this subject more and more in its recent public statements.

The beauty of it is that it's up to us humans to decide. Will we let our elevated potential or our darker instincts speak louder?

It may be the first big choice of a new era for humans, or the last bad choice we ever make.

Considering the zeitgeist and this once-in-a-lifetime historical window, I decided to go all in on this transformation and be a microscopic piece, but still a piece, that nudges people towards the world-of-abundance option, coexisting with (and keeping watch over) our new AI friends.

But that's just me.

The First Principles Synthesis

  1. AI is just another computer algorithm
  2. New technologies allowed AI to break the “input + algorithm = output” rule and make machines “creative”
  3. The creativity of AIs has already surpassed human creativity in some cases.
  4. AI creativity is pushing science and tech innovations to an exponential growth rate.
  5. This creative potential may be used to increase or reduce human well-being overall and no one knows for sure what will happen.

Feeling lost?

Navigating confidently in these times of uncertainty is definitely not easy.

Machines seem to be getting smarter and people, well…less smart.

I created a community so we can help each other to be healthier, wealthier, and happier during these challenging times.

It’s especially useful if you want to start or grow a business, which I have been doing all my life.

Oh, it’s totally free (and will always be).

Click Here to Join My Skool Community.

Main References:

  1. Kartik Hosanagar — A Human’s Guide to Machine Intelligence (2019) (book)
  2. Henry Kissinger — The Age of AI (2021) (book)
  3. Harvard University — CS50 AI (course)
  4. IBM — AI fundamentals (course)
  5. Wharton School (University of Pennsylvania) — AI Fundamentals for Non-Data Scientists (course)
