WEBVTT
Kind: captions
Language: en-GB

00:00:00.140 --> 00:00:00.820
AI.

00:00:01.060 --> 00:00:05.040
The concept of AI is something that I think we're all familiar with,

00:00:05.580 --> 00:00:10.520
thanks to numerous TV shows, movies, and novels.

00:00:10.740 --> 00:00:13.680
For me personally, I find this to be sort of a tired

00:00:13.720 --> 00:00:17.820
concept at this point. You've just seen it so many times in sci-fi.

00:00:17.920 --> 00:00:22.000
And especially the idea that AI is going to take over.

00:00:22.000 --> 00:00:24.460
We've seen it in "Terminator".

00:00:31.200 --> 00:00:35.980
But I've also seen this in some of my favorite works of fiction, like "2001: A Space Odyssey".

00:00:38.680 --> 00:00:42.180
I'm sorry, Dave. I'm afraid I can't do that.

00:00:42.850 --> 00:00:48.090
Some of my favorite novels, like "I Have No Mouth, and I Must Scream". Don't get me wrong, I find it a very

00:00:48.700 --> 00:00:54.360
interesting concept, even though it's overused. But when Stephen Hawking in 2014 came out and said

00:01:01.000 --> 00:01:05.040
my mind sort of went: Stephen Hawking, what do you know about

00:01:05.799 --> 00:01:07.320
anything, okay?

00:01:07.320 --> 00:01:13.349
Do you watch Rick and Morty? Because I do, and I think I have a little better grasp of the universe

00:01:13.689 --> 00:01:16.949
and concepts, ideas like AI. Thank you very much, Stephen,

00:01:16.950 --> 00:01:24.750
but stick to your science stuff, all right? The concept of AI taking over feels so far off, because it seems so

00:01:25.299 --> 00:01:28.919
obscure the way it's portrayed in Hollywood movies or in works of fiction.

00:01:29.049 --> 00:01:33.688
But in reality, it's actually, I think, or from what I've learned, an

00:01:34.210 --> 00:01:41.159
actual threat. The idea that Johnny Depp could come back and kill us all is not as far away as we think.

00:01:42.630 --> 00:01:46.799
My god. It's that my mind has been said feeling despair. I need more power

00:01:52.140 --> 00:01:53.820
But how did we come to this conclusion?

00:01:53.820 --> 00:01:58.279
I'm gonna try and explain, okay? But I truly don't know shit,

00:01:58.280 --> 00:02:05.510
well, of what I'm talking about, so please. If we take it back a couple steps, a lot of steps, there's this game of

00:02:06.030 --> 00:02:07.380
tic-tac-toe

00:02:07.380 --> 00:02:11.119
that I found where, no matter what input you make,

00:02:12.060 --> 00:02:14.630
the computer will never

00:02:15.150 --> 00:02:18.560
let you win. It's kind of fucking annoying.

00:02:18.560 --> 00:02:19.860
It's programmed with

00:02:19.860 --> 00:02:20.970
algorithms so that

00:02:20.970 --> 00:02:28.490
no matter what move I make, it knows exactly what move to make to counter it, to make sure that I can't win, no matter what.

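The "algorithms" described above are usually some variant of minimax: the program looks ahead at every possible continuation of the game and always picks the move with the best guaranteed outcome. Here is a minimal sketch in Python (an illustration of the technique, not the actual game's code):

```python
# Minimax for tic-tac-toe: 'X' tries to maximise the score, 'O' to minimise it.
# Board is a list of 9 cells: 'X', 'O', or ' '.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player                 # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                    # ...then undo it
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

score, move = minimax([' '] * 9, 'X')
print(score)  # 0
```

From an empty board the search returns a score of 0, which is exactly why you can never win: with perfect play by both sides, tic-tac-toe always ends in a draw.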
00:02:29.010 --> 00:02:32.359
Not a big deal, not a big deal. Okay. In

00:02:32.880 --> 00:02:37.189
1958, Herbert Simon and Allen Newell, AI experts,

00:02:37.230 --> 00:02:42.679
but what if you take a more complex game than tic-tac-toe, say chess, for example?

00:02:42.680 --> 00:02:49.190
There are a lot more possible outcomes in that game. They foresaw that within ten years, a digital computer would beat

00:02:49.190 --> 00:02:52.850
the world's best chess champion. Now, it didn't take ten years, not until

00:02:53.700 --> 00:02:56.749
1997. You may have heard of this, it was quite the

00:02:57.209 --> 00:03:04.669
big deal at the time. Deep Blue became the first computer that was able to beat the reigning chess champion at the time, Kasparov.

00:03:04.820 --> 00:03:10.790
But it's clear that the computer will reliably do what he himself would do, and he

00:03:11.100 --> 00:03:17.570
recognizes that he has already lost. On Deep Blue's 19th move, the champion resigns.

00:03:20.210 --> 00:03:25.840
Now, it still doesn't seem like that big of a deal. Basically, the way Deep Blue worked was that it would scan

00:03:26.330 --> 00:03:28.960
every single possible outcome. It could make about

00:03:29.540 --> 00:03:31.030
200,000 per second

00:03:31.030 --> 00:03:38.440
and it would make the best decision based on what it could find through this method of scanning. At this point,

00:03:38.440 --> 00:03:40.440
I'm still like: Stephen Hawking,

00:03:40.910 --> 00:03:44.380
I've seen the videos of the machines falling over, okay?

00:03:44.380 --> 00:03:51.010
I think we have nothing to worry about. But here's where I think it gets interesting. On March 15, 2016,

00:03:51.640 --> 00:03:57.400
the champion of the Chinese board game Go was beaten in a match against AlphaGo,

00:03:57.770 --> 00:04:00.730
the artificial intelligence designed by Google's DeepMind.

00:04:01.250 --> 00:04:02.930
It was a resounding loss

00:04:02.930 --> 00:04:08.709
He had won only one game. AlphaGo wins! We landed it on the moon. So proud of the team.

00:04:09.170 --> 00:04:13.329
Respect to the amazing Lee Sedol too. Now, the reason why this is such a big deal

00:04:13.330 --> 00:04:15.820
is that in chess, you only have so many options,

00:04:15.820 --> 00:04:17.180
but in Go,

00:04:17.180 --> 00:04:20.890
there are so many different moves that you can make, that there are more

00:04:20.960 --> 00:04:26.379
possible moves than there are atoms in the universe, and there's no way that you're going to be able to compute

00:04:26.450 --> 00:04:30.039
that amount of options to figure out

00:04:30.040 --> 00:04:34.899
what's the best move to make. So how did they make this? It may not seem like that big of a deal either,

00:04:35.180 --> 00:04:43.150
but it's really cool, okay? It's really cool. It basically uses deep reinforcement learning, which is similar to how we learn as humans:

00:04:43.850 --> 00:04:48.189
through trial and error, reward and punishment, and raw inputs,

00:04:48.950 --> 00:04:50.950
say, if we see something ourselves.

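The trial-and-error, reward-and-punishment loop described here can be illustrated with tabular Q-learning, one of the simplest reinforcement learning methods. This is a toy sketch on a tiny corridor world, not AlphaGo's actual system (which combined deep neural networks with tree search):

```python
import random

# A corridor of 5 cells, 0..4. The agent starts at 0; reward waits at the right end.
N = 5
ACTIONS = (-1, +1)   # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

def pick(s):
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
for episode in range(500):
    s = 0
    while s != N - 1:
        a = pick(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0   # reward only when the goal is reached
        # The core update: nudge Q toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy choice in every cell is "step right" (+1).
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)  # [1, 1, 1, 1]
```

The agent is never told the rules; it just stumbles around, gets rewarded when it reaches the goal, and gradually learns that moving right pays off, which is the same "figure it out from raw inputs" idea in miniature.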
00:04:51.200 --> 00:04:56.379
The computer learns by itself how to become good at the game. Not too long ago,

00:04:56.380 --> 00:04:58.380
there was a viral video

00:04:58.430 --> 00:05:05.980
from SethBling that uses this method to teach a computer to play Mario, and it became really fuckin' good at it.

00:05:07.130 --> 00:05:09.130
Really good at it

00:05:09.860 --> 00:05:11.450
Look at that

00:05:11.450 --> 00:05:15.969
Basically, it used neural networks to learn how to play the game,

00:05:16.460 --> 00:05:22.359
which is similar to how we think as human beings, and with enough computing power, you could simulate a

00:05:22.940 --> 00:05:26.140
human brain in this way. But we're not there yet.

00:05:26.140 --> 00:05:32.799
But it wasn't good from the beginning; it had to learn how to get good. In the beginning, it doesn't even know where it

00:05:33.310 --> 00:05:40.030
has to go, or what the options are, or what Mario is, but eventually it figures out it needs to move right. Through different

00:05:40.640 --> 00:05:45.220
generations of learning, from trial and error, and adapting from its mistakes,

00:05:45.770 --> 00:05:51.460
it eventually becomes better and better. A similar method was used for the AlphaGo

00:05:52.070 --> 00:05:58.989
program, where it would train against itself, slowly becoming better and better and better, and eventually a master at the game.

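The "generations" idea, mutate something, keep whatever scores better, and repeat, can be shown with a toy genetic-style loop. This is not SethBling's actual MarI/O (which evolved neural networks with the NEAT algorithm); it just evolves a raw action string for a pretend side-scrolling level, purely for illustration:

```python
import random

random.seed(1)
ACTIONS = "LRJ"        # left, right, jump
LEVEL_LEN = 20

def fitness(genome):
    """How far right the 'player' gets; the pit at x == 10 must be jumped over."""
    x = 0
    for a in genome:
        if a == "R":
            x += 1
        elif a == "L":
            x = max(0, x - 1)
        elif a == "J":
            x += 2          # a jump clears the pit
        if x == 10:         # walked straight into the pit
            return x
        if x >= LEVEL_LEN:
            break
    return min(x, LEVEL_LEN)

def mutate(genome):
    """Change one random action in the genome."""
    i = random.randrange(len(genome))
    return genome[:i] + random.choice(ACTIONS) + genome[i+1:]

# Start with a random genome; each "generation" keeps the better mutant.
best = "".join(random.choice(ACTIONS) for _ in range(30))
start_fit = fitness(best)
for generation in range(2000):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(start_fit, fitness(best))
```

The first generations flail around at random, like the Mario AI that doesn't even know it should move right, but because better scorers survive, the behaviour improves generation by generation.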
00:05:58.990 --> 00:06:02.949
There's a super cool video about a robot that doesn't know

00:06:04.010 --> 00:06:05.410
that it has limbs,

00:06:05.410 --> 00:06:10.959
but it teaches itself how to walk despite this. So it's just doing random movements.

00:06:11.240 --> 00:06:13.720
It sort of figures out it has four limbs,

00:06:13.720 --> 00:06:20.980
but it doesn't know where those limbs are attached on its body, and by trial and error, it eventually figures out

00:06:20.980 --> 00:06:22.700
where its limbs are

00:06:22.700 --> 00:06:26.050
positioned, and eventually it can very gracefully

00:06:26.570 --> 00:06:28.599
move across. That's cool!

00:06:30.020 --> 00:06:32.590
Self-learning AI is really fuckin' cool,

00:06:32.590 --> 00:06:37.209
and there's a lot of advantages that you can get from this, using it in design, for example.

00:06:37.210 --> 00:06:41.680
This is a 3D-printed cabin partition that's been designed by a computer.

00:06:42.230 --> 00:06:48.099
It's stronger than the original yet half the weight, and it'll be flying in the Airbus A320 later this year.

00:06:48.920 --> 00:06:54.099
So computers can now generate, they can come up with, their own solutions to our well-defined

00:06:54.650 --> 00:07:01.929
problems. So then, with Elon Musk as well as Stephen Hawking saying AI could become a problem in the future,

00:07:02.300 --> 00:07:07.210
that idea starts to sort of make more sense to me, knowing this is how it works.

00:07:07.210 --> 00:07:09.430
I think we should be very careful about artificial intelligence.

00:07:10.130 --> 00:07:14.799
If I were to guess at what our biggest existential threat is,

00:07:15.890 --> 00:07:17.890
it's probably that.

00:07:17.930 --> 00:07:19.930
Elon Musk as well as Bill Gates

00:07:20.600 --> 00:07:22.629
chiming in as well with the same idea.

00:07:28.370 --> 00:07:32.949
that evolution has endowed us with, and it's funny, a

00:07:34.480 --> 00:07:36.380
slow computer,

00:07:36.380 --> 00:07:38.380
very limited memory size,

00:07:39.410 --> 00:07:43.989
ability to send data to other computers is not there, and

00:07:45.140 --> 00:07:47.739
but it's also, whenever we bear a new one,

00:07:48.680 --> 00:07:55.900
it doesn't know how to walk; it takes years. Yeah, so believe me, as soon as this algorithm,

00:07:57.020 --> 00:07:59.560
taking experience and turning it into knowledge,

00:07:59.560 --> 00:08:05.530
which is so amazing, of course, we have not done in software. As soon as you do that,

00:08:06.200 --> 00:08:08.200
it's not clear you'll even

00:08:08.210 --> 00:08:12.700
know when you're just at the human level; you'll be at this superhuman level

00:08:13.250 --> 00:08:16.809
almost as soon as that algorithm is implemented in

00:08:17.540 --> 00:08:23.980
silicon. Bill here basically compares our brains to a computer: our

00:08:24.890 --> 00:08:26.890
method of evolving is

00:08:27.080 --> 00:08:29.080
very inefficient,

00:08:29.419 --> 00:08:35.348
comparing it to how AI would be evolving and exponentially growing. Keeping that in mind,

00:08:36.080 --> 00:08:37.520
humans are inferior,

00:08:37.520 --> 00:08:39.520
without a doubt. That

00:08:39.650 --> 00:08:44.799
being said, not everyone is on board with this idea that AI is going to take over, or that it's a

00:08:44.930 --> 00:08:49.839
problem for the future. What are your thoughts on AI, and how it could affect the world?

00:08:51.950 --> 00:08:53.930
You know I have I have pretty

00:08:53.930 --> 00:08:58.390
strong opinions on this. I'm really optimistic, right? I'm an optimistic person in general.

00:08:58.390 --> 00:09:01.659
I think you can build things and the world gets better. But

00:09:02.330 --> 00:09:06.939
with AI especially, I'm really optimistic. And I think that people who are

00:09:07.820 --> 00:09:13.479
naysayers, and kind of try to drum up these doomsday scenarios, I

00:09:14.450 --> 00:09:16.960
just, I don't understand it. I think it's really

00:09:17.690 --> 00:09:21.489
negative, and in some ways I actually think it's pretty irresponsible. Yeah. Hey,

00:09:22.100 --> 00:09:28.270
Elon Musk, responding on Twitter: I've talked to Mark about this. His understanding of the subject is limited.

00:09:29.300 --> 00:09:31.250
Hi mom Mark Zuckerberg

00:09:31.250 --> 00:09:33.609
Obviously, I love the Zuck more than anyone,

00:09:33.650 --> 00:09:38.739
but it's kind of hard to take him seriously on the subject, especially since he clearly is trying to

00:09:39.440 --> 00:09:45.339
make an AI himself. Good morning, Mark. It's Saturday, so you only have five meetings.

00:09:45.870 --> 00:09:47.870
Room temperature is set to a cool

00:09:48.220 --> 00:09:51.150
68 degrees. I guess what Mark is saying

00:09:51.150 --> 00:09:55.019
is that AI can do a lot for us as humans; it can benefit us greatly.

00:09:55.210 --> 00:09:57.599
And I think what Elon points out is that

00:09:58.089 --> 00:10:04.559
there are dangers involved with the development of this, and we need to be careful. How can we protect ourselves from ourselves?

00:10:05.230 --> 00:10:09.450
We are an intelligent adversary; we can anticipate threats and plan around them.

00:10:09.450 --> 00:10:13.499
But so could a superintelligent agent. How confident can we be that

00:10:13.900 --> 00:10:18.269
the AI couldn't find a bug, given that merely human hackers find bugs all the time?

00:10:18.400 --> 00:10:23.939
I'd say probably not very confident. Like, disconnect the Ethernet cable to create an air gap?

00:10:24.760 --> 00:10:30.809
But again, merely human hackers routinely transgress air gaps using social engineering. Like, right now as I speak,

00:10:30.820 --> 00:10:34.439
I'm sure there is some employee out there somewhere who is being talked into

00:10:34.600 --> 00:10:38.040
handing out her account details by somebody claiming to be from the IT

00:10:38.110 --> 00:10:43.680
department. We should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever.

00:10:43.680 --> 00:10:47.339
I'm actually fairly optimistic that this problem can be solved. Like, we wouldn't have to

00:10:47.440 --> 00:10:51.029
try to write down the long list of everything we care about, or, worse yet,

00:10:51.970 --> 00:10:55.650
spell it out in some computer language like C++ or Python;

00:10:55.650 --> 00:11:00.150
that would be a task beyond hopeless. Instead, we would create an AI

00:11:00.760 --> 00:11:04.169
that uses its intelligence to learn what we value, and

00:11:05.020 --> 00:11:07.469
whose motivation system is constructed in such a way

00:11:08.350 --> 00:11:10.029
That it is

00:11:10.029 --> 00:11:17.759
motivated to pursue our values, or to perform actions that it predicts we would have approved of. Computers smarter than human beings is

00:11:18.160 --> 00:11:24.510
inevitable, if you keep in mind how short a time we have even had technology, and our presence in the universe.

00:11:24.959 --> 00:11:30.899
Now, whether AI will be something good, or destroy us all in the future,

00:11:31.060 --> 00:11:33.539
that's just for us to find out. Meanwhile,

00:11:34.029 --> 00:11:39.329
you're gonna have to excuse me, because I have some Rick and Morty episodes to catch up on. I hope this video was

00:11:40.150 --> 00:11:45.239
educational, and I hope I didn't say any wrong things, because I sure am no expert.

00:11:45.420 --> 00:11:51.150
Thank you for leaving a like on this video if you enjoyed it, I really appreciate it. Make sure to subscribe, and as always,

00:11:51.670 --> 00:11:53.670
Squad fam out!

