Insights Into Tomorrow: Episode 18 “Artificial Intelligence”

https://www.podbean.com/media/share/pb-hvn5q-1363f92

Is the future of AI more like HAL 9000, Skynet and the Matrix?

Or is it more like Wall-E, Star Trek’s Commander Data or C-3PO from Star Wars?

Some of the world’s most brilliant minds, like Michio Kaku and Stephen Hawking, recognize the tremendous potential of artificial intelligence but also offer words of caution about its development.

Is artificial intelligence the next step in our evolution, or the first step in our ultimate destruction?

How will AI impact what humans create, and could humans be entirely removed from the creative process?

Show Notes

[INTRO THEME]
[INTRODUCTIONS] (3-5 minutes)
Show introduction:
Insights Into Tomorrow Episode 18: “Artificial Intelligence”

Host introductions
I’m your host Joseph Whalen
And my co-host this week is Sam Whalen

[SUMMARY]
Is the future of AI more like HAL 9000, Skynet and the Matrix?
Or is it more like Wall-E, Star Trek’s Commander Data or C-3PO from Star Wars?
Some of the world’s most brilliant minds, like Michio Kaku and Stephen Hawking, recognize the tremendous potential of artificial intelligence but also offer words of caution about its development.
Is artificial intelligence the next step in our evolution, or the first step in our ultimate destruction?
How will AI impact what humans create, and could humans be entirely removed from the creative process?
That’s what we’ll be discussing on today’s episode of Insights Into Tomorrow
But before we get into that I’d like to take a moment to invite our viewing and listening audience to subscribe to the podcast

[Show Plugs] (2-3 minutes)
Subscriptions:
Google
Apple
Spotify
Stitcher
Amazon

Contact Info
Email us at: Comments@insightsintothings.com
Twitter: @insights_things
Facebook: https://www.facebook.com/InsightsIntoThingsPodcast/
Instagram: https://www.instagram.com/insightsintothings/
Links to all these on the web: https://www.insightsintothings.com

[TRANSITION]

[SEGMENT 1: (10-15 minutes)]

What is Artificial Intelligence
https://www-formal.stanford.edu/jmc/whatisai.pdf
https://bit.ly/3COXCFw

It is the science and engineering of making intelligent machines, especially intelligent computer programs.
It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
John McCarthy, Head of the Computer Science Department, Stanford University (2007)

The Turing Test
https://en.wikipedia.org/wiki/Turing_test
http://bit.ly/3kiI8mW

Originally called the Imitation Game by Alan Turing in 1950, it is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses.
The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another.
The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine’s ability to render words as speech.
If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test.
The test results would not depend on the machine’s ability to give correct answers to questions, only on how closely its answers resembled those a human would give.
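The decision rule described above (the machine "passes" if the evaluator cannot reliably tell it apart) can be sketched as a toy simulation. Everything here, the canned replies and the random-guessing evaluator, is invented for illustration; it is not from the episode or from Turing's paper:

```python
import random

def imitation_game(evaluator, machine, human, rounds=1000):
    """Run repeated text-only sessions. Each session, the evaluator sees one
    reply from each hidden participant (in random order) and must guess which
    one is the machine. Returns the evaluator's guessing accuracy."""
    correct = 0
    for _ in range(rounds):
        replies = [("machine", machine()), ("human", human())]
        random.shuffle(replies)  # evaluator doesn't know which is which
        guess = evaluator([text for _, text in replies])  # returns index 0 or 1
        if replies[guess][0] == "machine":
            correct += 1
    return correct / rounds

# Hypothetical participants: both produce indistinguishable canned text,
# so no evaluator can do better than chance.
machine = lambda: "The weather is lovely today."
human = lambda: "The weather is lovely today."
guessing_evaluator = lambda replies: random.randint(0, 1)

accuracy = imitation_game(guessing_evaluator, machine, human)
# Turing's criterion: the machine "passes" if the evaluator cannot
# reliably beat chance (accuracy stays near 0.5).
print(f"evaluator accuracy: {accuracy:.2f}")
```

Note how the sketch encodes the rules from the description: the channel carries text only, the pairing is hidden, and the verdict depends on the evaluator's reliability, not on whether any answer was "correct."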

Artificial Intelligence: A Modern Approach
https://aima.cs.berkeley.edu/
http://bit.ly/3IRhcVE

Stuart Russell, professor of computer science at the University of California, Berkeley, and Peter Norvig, Director of Research for Google, published Artificial Intelligence: A Modern Approach.
It became one of the leading textbooks in the study of AI.
In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting:

Human approach:
Systems that think like humans
Systems that act like humans

Ideal approach:
Systems that think rationally
Systems that act rationally

Alan Turing’s definition would have fallen under the category of “systems that act like humans.”

At its simplest, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving.
It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence.
These disciplines comprise AI algorithms that seek to create expert systems that make predictions or classifications based on input data.
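As a minimal illustration of "classifications based on input data," here is a toy nearest-neighbor classifier in plain Python. The fruit dataset and its feature names are invented for the example; real systems use far larger datasets and more sophisticated models:

```python
import math

def nearest_neighbor(train, query):
    """Classify `query` by the label of the closest training example --
    prediction driven entirely by previously seen input data."""
    features, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

# Hypothetical dataset: (feature vector, label), where the features are
# made-up [weight_g, smoothness] measurements for two fruit classes.
training_data = [
    ((150.0, 0.9), "apple"),
    ((170.0, 0.8), "apple"),
    ((120.0, 0.2), "lemon"),
    ((110.0, 0.3), "lemon"),
]

print(nearest_neighbor(training_data, (160.0, 0.85)))  # -> apple
print(nearest_neighbor(training_data, (115.0, 0.25)))  # -> lemon
```

The point of the sketch is the shape of the pipeline, not the algorithm: a dataset goes in, and the system emits a classification for new input it has never seen.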

[AD 1]

[SEGMENT 2: (8-12 minutes)]

Types of Artificial Intelligence
https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/types-of-artificial-intelligence
http://bit.ly/3knxPxG

Artificial Intelligence can be divided based on capabilities and functionalities.

There are three types of artificial intelligence based on capabilities:
Narrow AI
General AI
Super AI

Under functionalities, we have four types of artificial intelligence:
Reactive Machines
Limited Memory
Theory of Mind
Self-awareness

Capability-Based AIs

Narrow AI
Narrow AI, also called Weak AI, focuses on one narrow task and cannot perform beyond its limitations.
Examples:
Apple Siri, IBM Watson

General AI
General AI, also known as strong AI, can understand and learn any intellectual task that a human being can.
Examples:
Fujitsu’s K Computer, one of the fastest supercomputers in the world
Tianhe-2, Chinese supercomputer

Super AI
Super AI surpasses human intelligence and can perform any task better than a human.
Examples:
None currently exist; this is purely theoretical.

Functionality-Based AIs

Reactive Machine
A reactive machine is the most basic form of artificial intelligence; it does not store memories or use past experiences to determine future actions.
It works only with present data.
It perceives the world and reacts to it.
Examples:
IBM’s Deep Blue chess playing computer

Limited Memory
Limited Memory AI trains from past data to make decisions.
The memory of such systems is short-lived.
They can use this past data for a specific period of time, but they cannot add it to a library of their experiences.
Examples:
This kind of technology is used in self-driving vehicles.
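The contrast between a reactive machine and limited-memory AI can be sketched in a few lines of Python. The driving scenario, distance readings, and thresholds are all made up for illustration; real self-driving stacks are vastly more complex:

```python
from collections import deque

def reactive_policy(reading):
    """Reactive machine: maps the *current* sensor reading straight to an
    action; nothing is stored between calls."""
    return "brake" if reading < 10.0 else "cruise"

class LimitedMemoryAgent:
    """Limited-memory AI: keeps only a short, fixed-size window of past
    readings (not a permanent library of experience) and decides from it."""
    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # old readings fall off the end

    def act(self, reading):
        self.recent.append(reading)
        closing_fast = (len(self.recent) == self.recent.maxlen
                        and self.recent[-1] < self.recent[0])  # gap shrinking
        return "brake" if closing_fast or reading < 10.0 else "cruise"

agent = LimitedMemoryAgent()
for distance in [50.0, 40.0, 30.0]:  # gap to the car ahead, in meters
    action = agent.act(distance)
print(action)                  # -> brake (the window shows the gap shrinking)
print(reactive_policy(30.0))   # -> cruise (30 m alone looks safe)
```

The same reading (30 m) produces different actions: the reactive policy sees a safe snapshot, while the limited-memory agent sees a trend across its short window, which is exactly the distinction the two categories draw.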

Theory of Mind
Theory of mind AI represents an advanced class of technology and exists only as a concept.
Such a kind of AI requires a thorough understanding that the people and things within an environment can alter feelings and behaviors.
It should understand people’s emotions, sentiments, and thoughts.
Examples:
No practical examples yet; attempts at mimicking human emotion in automatons are the closest thing.

Self Awareness
Self-awareness AI only exists hypothetically.
Such systems understand their internal traits, states, and conditions and perceive human emotions.
These machines will be smarter than the human mind.
This type of AI will not only be able to understand and evoke emotions in those it interacts with, but also have emotions, needs, and beliefs of its own.
Examples
No practical examples exist in the real world; think science fiction, like HAL from 2001, but hopefully not as psychotic.

[AD 2]

[SEGMENT 3: (10-15 minutes)]

Is Artificial Intelligence Dangerous
https://bernardmarr.com/is-artificial-intelligence-dangerous-6-ai-risks-everyone-should-know-about/
http://bit.ly/3kc9RFD

Elon Musk wrote: “The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.”

Should we be scared of artificial intelligence (AI)?

Some notable individuals such as legendary physicist Stephen Hawking and Tesla and SpaceX leader and innovator Elon Musk suggest AI could potentially be very dangerous; Musk at one point was comparing AI to the dangers of the dictator of North Korea.
Microsoft co-founder Bill Gates also believes there’s reason to be cautious, but that the good can outweigh the bad if managed properly.
Since recent developments have made super-intelligent machines possible much sooner than initially thought, the time is now to determine what dangers artificial intelligence poses.

How can artificial intelligence be dangerous?
While we haven’t achieved super-intelligent machines yet, the legal, political, societal, financial and regulatory issues are so complex and wide-reaching that it’s necessary to take a look at them now so we are prepared to safely operate among them when the time comes.
Outside of preparing for a future with super-intelligent machines now, artificial intelligence can already pose dangers in its current form.
Let’s take a look at some key AI-related risks.

Autonomous weapons
Russia’s president Vladimir Putin said:
“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
AI programmed to do something dangerous, as is the case with autonomous weapons programmed to kill, is one way AI can pose risks.
It might even be plausible to expect that the nuclear arms race will be replaced with a global autonomous weapons race.
Aside from being concerned that autonomous weapons might gain a “mind of their own,” a more imminent concern is the dangers autonomous weapons might have with an individual or government that doesn’t value human life.
Once deployed, they will likely be difficult to dismantle or combat.

Social manipulation
Social media, through its AI-powered algorithms, is very effective at target marketing.
They know who we are, what we like and are incredibly good at surmising what we think.
Investigations are still underway to determine the fault of Cambridge Analytica and others associated with the firm who used the data from 50 million Facebook users to try to sway the outcome of the 2016 U.S. Presidential election and the U.K.’s Brexit referendum, but if the accusations are correct, it illustrates AI’s power for social manipulation.
By spreading propaganda to individuals identified through algorithms and personal data, AI can target them with whatever information its operators like, in whatever format each person will find most convincing—fact or fiction.

Invasion of privacy and social grading
It is now possible to track and analyze an individual’s every move online as well as when they are going about their daily business.
Cameras are nearly everywhere, and facial recognition algorithms know who you are. In fact, this is the type of information that is going to power China’s social credit system that is expected to give every one of its 1.4 billion citizens a personal score based on how they behave—things such as do they jaywalk, do they smoke in non-smoking areas and how much time they spend playing video games.
When Big Brother is watching you and then making decisions based on that intel, it’s not only an invasion of privacy; it can quickly turn into social oppression.

Misalignment between our goals and the machine’s
Part of what humans value in AI-powered machines is their efficiency and effectiveness.
But if we aren’t clear about the goals we set for AI machines, it could be dangerous when a machine isn’t armed with the same goals we have.
For example, a command to “Get me to the airport as quickly as possible” might have dire consequences.
Without specifying that the rules of the road must be respected because we value human life, a machine could quite effectively accomplish its goal of getting you to the airport as quickly as possible and do literally what you asked, but leave behind a trail of accidents.
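The airport example amounts to an objective function that is missing a term for the values we never stated. A toy sketch of that idea, with invented routes and penalty weights (none of this comes from the article being summarized):

```python
# Hypothetical candidate routes for "get me to the airport."
routes = [
    {"name": "reckless", "minutes": 18, "violations": 7},  # runs red lights
    {"name": "lawful",   "minutes": 25, "violations": 0},
]

def plan(routes, violation_penalty):
    """Pick the route minimizing cost = travel time + penalty * violations."""
    return min(routes,
               key=lambda r: r["minutes"] + violation_penalty * r["violations"])

# Misaligned goal: "as quickly as possible," with no value placed on safety.
print(plan(routes, violation_penalty=0)["name"])    # -> reckless
# Aligned goal: the unstated human values are made explicit in the objective.
print(plan(routes, violation_penalty=100)["name"])  # -> lawful
```

The machine optimizes exactly what it is given in both cases; the danger lives entirely in the gap between the stated objective and what we actually wanted.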

Discrimination
Since machines can collect, track and analyze so much about you, it’s very possible for those machines to use that information against you.
It’s not hard to imagine an insurance company telling you you’re not insurable based on the number of times you were caught on camera talking on your phone.
An employer might withhold a job offer based on your “social credit score.”

[AD 3]

[SEGMENT 4: (10-15 minutes)]

The Impact of AI on our World

AI Facts and Figures
https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/artificial-intelligence-applications
http://bit.ly/3XAt8Po

Revenue from the artificial intelligence (AI) software market worldwide is expected to reach $126 billion by 2025.
According to Gartner, 37% of organizations have implemented AI in some form.
The percentage of enterprises employing AI grew 270% over the past four years.
According to Servion Global Solutions, by 2025, 95% of customer interactions will be powered by AI.
A 2020 report from Statista reveals that the global AI software market is expected to grow approximately 54% year-on-year, reaching a forecast size of US$22.6 billion.

Special AI Projects to Mention/ Sam’s Soapbox About How Art is Dying

ChatGPT: Optimizing Language Models for Dialogue
We’ve trained a model called ChatGPT which interacts in a conversational way.
The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

Canva
Convert text to image with an AI image generator
Get creative with your words and watch them transform into stunning pictures that tell a story.
Turn text into an image using Canva’s free AI image generator app and use them to add visual flavor to your designs.

Lensa
Lensa is an all-in-one image editing app that takes your photos to the next level.
Improve facial retouching with a single tap of Magic Correction.
Perfect the facial imperfections with tons of cool tools.
Lensa has also faced criticism for its AI stealing large swaths of art from the internet and repurposing it for its users. In some cases, artists’ signatures can still be seen in Lensa images, albeit with some distortion.
https://www.smithsonianmag.com/smart-news/is-popular-photo-app-lensas-ai-stealing-from-artists-180981281/
Sam’s Conclusion
What does this mean for art going forward? With the increased use of de-aging technology in popular films like those of the Marvel Cinematic Universe, and with the upcoming Indiana Jones 5 featuring a wholly digital, younger Harrison Ford, what are the implications?
With the popularity of apps like Lensa, are we devaluing the human touch when it comes to art? If the masses can look at a piece of AI-generated art next to one made by humans and value them the same, what does that mean for the future of creating things?
For this reporter, this is maybe the scariest thing about AI. Sure, we talked about its implications for warfare and social manipulation, and that’s also bad, but those are real, tangible things. When it comes to AI being able to replicate human creativity, something so nebulous, what does that leave us with?
I recognize that at a certain point our world got way too big and that AI is necessary to keep things running. I also understand the extreme temptation to become reliant on it. Cue every sci-fi movie ever.

[OUTRO AND CREDITS]

Show Plugs
Subscriptions:
Apple Podcasts
Spotify
Google Podcasts
Stitcher
iHeart Radio
Tunein
Amazon
Pandora

Contact Info
Email us at:
Comments@insightsintothings.com
Twitter:
@insights_things
Twitch (Twitch Prime/Amazon Prime)
http://www.twitch.tv/insightsintothings
Facebook:
https://www.facebook.com/InsightsIntoThingsPodcast/
Instagram:
@insightsintothings
Links to all these on the web
Web: https://www.insightsintothings.com

Transcription

00:00:01:21 – 00:00:56:16
Michelle
Insightful podcasts by informative hosts: Insights Into Things, a podcast network. Welcome to Insights Into Tomorrow, where we take a deeper look into how the issues of today will impact the world of tomorrow. From politics and world news to media and technology, we discuss how today’s headlines are becoming tomorrow’s reality.

00:00:59:19 – 00:01:10:24
Joseph
Welcome to Insights Into Tomorrow. This is Episode 18, “Artificial Intelligence.” I’m your host, Joseph Whalen, and my co-host is Sam Whalen.

00:01:10:26 – 00:01:13:21
Sam
Everybody, can you believe we’ve done 18 of these? 18. Wow.

00:01:14:11 – 00:01:15:04
Joseph
Oh.

00:01:15:15 – 00:01:17:05
Sam
Wow. We’ll be at 100 in no time.

00:01:18:26 – 00:01:24:01
Joseph
Yeah, we’ve kind of fallen off our monthly podcast cadence on this, and there’s.

00:01:24:01 – 00:01:25:12
Sam
Just nothing to talk about.

00:01:25:19 – 00:01:30:18
Joseph
Yeah, well, that’s what happens when you steer away from all the controversial topics.

00:01:30:18 – 00:01:31:07
Sam
Exactly.

00:01:32:09 – 00:01:37:15
Joseph
So this week, we’re going to talk about a nice, safe topic, and that is artificial intelligence.

00:01:37:23 – 00:01:39:19
Sam
No, no dire implications with that.

00:01:41:05 – 00:02:17:15
Joseph
So you know how this one is going to go. Is the future of A.I. more like HAL 9000, Skynet and The Matrix? Or is it more like WALL-E, Star Trek’s Commander Data or C-3PO from Star Wars? Some of the world’s most brilliant minds, like Michio Kaku and Stephen Hawking, recognize the tremendous potential of artificial intelligence, but also offer words of caution about its development. Is artificial intelligence the next step in our evolution or the first step in our ultimate destruction?

00:02:18:07 – 00:02:20:13
Joseph
How will A.I. impact humans?

00:02:21:04 – 00:02:31:23
Joseph
How will A.I. impact what humans create? And could humans be entirely removed from the creative process? That’s what we’ll be discussing on today’s episode of Insights Into Tomorrow.

00:02:32:13 – 00:02:33:17
Joseph
But before we do that.

00:02:33:22 – 00:02:51:06
Joseph
I’d like to take a moment to invite our listening and viewing audience to subscribe to the podcast. You can find audio versions of this podcast listed as Insights Into Tomorrow. You can find both audio and video versions of all the network’s podcasts listed as Insights Into Things.

00:02:52:04 – 00:03:04:13
Joseph
And we can be found anywhere you get a podcast these days: Apple, Spotify, Google, Stitcher, etc. I would also encourage you to write in. Give us your feedback, tell us how we’re doing, give us suggestions on what you’d like us to talk about.

00:03:05:08 – 00:03:11:28
Joseph
You can email us your comments at comments at insightsintothings dot com. You can find us on Twitter at insights underscore things.

00:03:12:25 – 00:03:19:12
Joseph
Or on Facebook at facebook.com slash Insights Into Things Podcast. Are we ready?

00:03:19:18 – 00:03:20:05
Sam
Let’s get into it.

00:03:20:12 – 00:03:22:19
Joseph
Let’s go.

00:03:26:25 – 00:03:28:22
Joseph
So what is artificial intelligence?

00:03:29:08 – 00:03:45:04
Joseph
So John McCarthy, head of the Computer Science Department at Stanford University, in 2007 defined artificial intelligence as the science and engineering of making intelligent machines, especially intelligent computer programs.

00:03:45:27 – 00:03:57:16
Joseph
It’s related to the similar task of using computers to understand human intelligence, but A.I. does not have to confine itself to methods that are biologically observable.

00:03:57:16 – 00:03:58:06
Sam
Like a brain.

00:03:58:19 – 00:03:59:11
Joseph
Exactly.

00:04:00:19 – 00:04:38:00
Joseph
And this is kind of a different approach to artificial intelligence than what has traditionally been the measure of artificial intelligence, and that was really something called the Turing test, created by Alan Turing back in the 1950s. So the Turing test basically said: if it’s observable and convincing enough to be a human... the test basically was two people having a conversation, with a third party watching that conversation. It’s all text-based. And if the third party watching that conversation can’t distinguish the A.I. from the human, then it’s passed the Turing test.

00:04:38:00 – 00:04:48:07
Joseph
Kind of a loose definition of what A.I. was back at the time. So it was basically how well it could emulate a human being.

00:04:49:01 – 00:04:52:27
Joseph
But I think A.I. has changed significantly since then, don’t you think?

00:04:53:01 – 00:05:03:09
Sam
Yeah, definitely. I mean, we have to, you know, change how we define it as well. It’s not just text-based; it can do way more than that now. It’s expanded, you know, pretty exponentially since then.

00:05:03:25 – 00:05:07:11
Joseph
So what’s a modern take on artificial intelligence?

00:05:07:16 – 00:05:27:08
Sam
So we have some information here from Stuart Russell, professor of computer science at the University of California, Berkeley, and Peter Norvig (I apologize, Peter, if I mispronounce your name there), the director of research for Google. They published Artificial Intelligence: A Modern Approach. It became one of the leading textbooks, I believe this is Norvig’s book, in the study of A.I.

00:05:27:09 – 00:05:55:28
Sam
In it, they delve into four potential goals or definitions of A.I., which differentiates computer systems on the basis of, forgive me, rationality and thinking versus acting. So we have the human approach systems that think like humans, systems that act like humans as well. And the ideal approach systems that think rationally and then systems that act rationally. Turing’s definition would have fallen under the category of systems that act like humans.

00:05:56:10 – 00:06:17:11
Sam
And there’s a little bit more here that I’m going to keep reading. So really, at its simplest form, artificial intelligence is a field which combines computer science and robust data sets to enable problem solving. Basically, we’re making these things as a sort of extra brain to help us out. It also encompasses subfields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence.

00:06:17:24 – 00:06:25:15
Sam
These disciplines are comprised of A.I. algorithms which seek to create expert systems which make predictions or classifications based on that input data.

00:06:26:00 – 00:06:39:09
Joseph
So A.I. right now... some of the biggest things in A.I. right now probably are algorithms. You know, you have the famed Google algorithm for searches and, you know, they apply it to YouTube.

00:06:39:09 – 00:06:53:11
Joseph
And is it really the algorithm itself that basically looks at usage patterns and what you’re doing online and then determines what else you would like? Is that considered a form of artificial intelligence?

00:06:53:21 – 00:07:12:03
Sam
I think it would be. I mean, it’s interpreting data and then, I guess, drawing its own conclusions based on that data, and then sort of showing you that conclusion. Right? So if you click on something on Instagram and you read the page long enough and you go back to Instagram, it’ll have more things like that.

00:07:12:03 – 00:07:25:15
Sam
Or, it happens to me all the time, because every device is listening to you all the time. Say you mention a product, and then 5 seconds later an ad for it pops up on your Instagram, or you text somebody about it, same thing. So it’s definitely always watching and always learning.

00:07:26:02 – 00:07:58:15
Joseph
Yeah, no, I agree. And I think we’ve run into that in the house here, because I’ve got a whole boatload of Amazon “A” products... I don’t want to say the name because I don’t want to actually activate them. But it’s the same sort of thing, and Siri, and, what’s the Android version of the voice assistant? Bixby. Bixby is another. So all of these assistants are some type of artificial intelligence as well. But how intelligent are they really?

00:07:58:24 – 00:08:00:12
Sam
Not very. Yeah.

00:08:00:13 – 00:08:29:08
Joseph
That gets kind of... like, none of them are thinking on their own. They all require some level of input, and they all require some source of data that they go out and get. Like, if you ask the Amazon devices for information on something, they’ll come back and tell you “a Google contributor added this.” So they don’t even seem to compile their own set of knowledge. Mm hmm.

00:08:30:04 – 00:08:45:18
Joseph
So it’s almost like... it’s more of an interface to the Web than it is really an artificial intelligence. It might be an intelligent way to interface with it, but we’re not talking HAL 9000, right?

00:08:45:18 – 00:08:46:29
Sam
It’s not learning and adapting.

00:08:47:08 – 00:09:12:02
Joseph
Right. And it can’t take its own actions or anything like that. You know, when we get into stuff like that, we might be looking at something like a Tesla or a self-driving car, where it takes input from outside sensors and has the ability to make decisions on its own at that point in time. Right. That’s where we tend to get a little bit more down the rabbit hole of how dangerous things could be.

00:09:12:02 – 00:09:21:20
Sam
Yeah, once you start getting into that territory (and we’ll transition out of this into our next segment) of more developed intelligence, and where that could possibly go once something starts thinking for itself.

00:09:21:27 – 00:09:22:09
Joseph
Right.

00:09:23:04 – 00:09:38:07
Joseph
So let’s take a quick break. We’ll come back and we’ll dig a little bit deeper into the types of artificial intelligence that are out there. We’ll be right back.

00:09:38:27 – 00:10:09:02
Joseph
For over seven years, the Second Sith Empire has been the premier community guild in the online game Star Wars: The Old Republic. With hundreds of friendly and helpful active members, a weekly schedule of nightly events, annual guild meet-and-greets, and connections with the community both on the Web and on Discord, the Second Sith Empire is more than your typical gaming group.

00:10:09:27 – 00:10:41:07
Joseph
We’re family. Join us on the Star Forge server for nightly events such as operations, flashpoints, world boss hunts, Star Wars trivia, guild lottery and much more. Visit us on the Web today at www dot the second sith empire dot com.

00:10:41:12 – 00:10:46:04
Joseph
Welcome back to Insights Into Tomorrow. We are talking artificial intelligence today.

00:10:46:23 – 00:11:54:17
Joseph
So, I don’t want to go through every line that we have here; I’ll just kind of throw a few things out there. The types of artificial intelligence that we’re really referring to here can be broken into two categories: one is capabilities and the other functionalities. So under your capabilities, you’ve got narrow A.I., and an example of narrow A.I. is sort of like what we talked about with Siri. It’s also referred to as weak A.I.; it focuses on a narrow task. You can even classify IBM’s Watson, that was used for chess playing, and things like this as well. It’s not robust; it’s not thinking on its own. You then get into your general A.I.s, and your general A.I.s tend to be a little bit more sophisticated. These are your strong A.I.s that can understand and learn intellectual tasks that humans can do. So you’re getting a little bit smarter with these. But from a hardware standpoint you need a supercomputer; the couple that are listed here are two of the fastest supercomputers in the world, just to accomplish this.

00:11:55:01 – 00:12:03:07
Sam
Yeah, and do we know what these are used for? We have the Fujitsu K; do you know what type of computing we’re using these intense supercomputers for?

00:12:03:07 – 00:12:11:24
Joseph
Most of these are being used for simulating weather, or simulating nuclear explosions without actually having to fire one.

00:12:11:24 – 00:12:12:09
Sam
So that’s helpful.

00:12:12:16 – 00:12:42:15
Joseph
They’re highly complicated and powerful devices, just to get to this next level of A.I. And then the last level of A.I. from a capability standpoint is what’s referred to as super A.I. This is your Skynet. This is where it surpasses human intelligence. None exist at this point in time, thankfully. But is this the direction that we’re going, as we’re seeing this natural progression? Are we going to see super A.I.s?

00:12:42:21 – 00:13:14:01
Sam
I don’t know. I mean, I don’t know if I want that. And then you get into, like, the Blade Runner discussion of what makes something human when you give something a will of its own. We’re getting into deep sci-fi territory here. But I don’t know if we would ever get to that point. You know, what would we need something like that for? I mean, maybe to run a city: monitor trains and public works and stuff like that on a city level. I don’t know. I don’t know what we’d ever need something that smart for, honestly.

00:13:14:09 – 00:13:47:12
Joseph
Yeah. I mean, I could certainly see it from an infrastructure standpoint. You know, most of what we do now... positive train control is a great example, which you just mentioned with trains, where if the human operator fails to make certain checks, the computer can kick in and run things safely, or shut the system down safely. So there are lesser versions of that now, but I think seeing something like that on a grand scale might be somewhere a super A.I. would work. What about functionality-based A.I.s? Tell us about that.

00:13:47:20 – 00:14:04:06
Sam
So we’ve got the reactive machine. A reactive machine is the primary form of A.I. that does not store memories or use past experiences to determine future actions. It only works in the present. So an example of this would be IBM’s Deep Blue chess-playing computer. That thing’s really cool, by the way. Yeah, I’ve seen some videos of it. It’s really neat.

00:14:04:06 – 00:14:05:28
Joseph
Garry Kasparov would differ.

00:14:07:17 – 00:14:30:14
Sam
We’ve also got limited memory AI, which trains from the past to make decisions. They do have a memory, but it’s very short-lived, and an example of this would be self-driving vehicles, like we mentioned, Teslas, things like that. And then finally we have Theory of Mind. On the show we like to say “theater of the mind,” but Theory of Mind AI represents an advanced class of technology and exists only as a concept.

00:14:31:09 – 00:14:46:05
Sam
They would require a thorough understanding of the people and the things in an environment, including understanding feelings and behaviors. There are no examples because they don’t exist. But, you know, attempts at mimicking human emotion in automatons are as close as we can get.
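
The reactive vs. limited-memory distinction Sam lays out can be made concrete with a toy sketch. This is illustrative only; the class names and the advance/hold actions are invented for the example.

```python
# Toy illustration of the reactive vs. limited-memory distinction.
# A reactive agent maps the current input to an action with no memory;
# a limited-memory agent also consults a short, rolling history.
from collections import deque

class ReactiveAgent:
    """Acts only on the present observation, the way Deep Blue
    evaluates the board in front of it."""
    def act(self, observation: int) -> str:
        return "advance" if observation > 0 else "hold"

class LimitedMemoryAgent:
    """Keeps a short rolling window of past observations, loosely like
    the recent sensor frames a self-driving car uses to track traffic."""
    def __init__(self, window: int = 3):
        self.history = deque(maxlen=window)   # older frames fall off
    def act(self, observation: int) -> str:
        self.history.append(observation)
        avg = sum(self.history) / len(self.history)
        return "advance" if avg > 0 else "hold"

reactive = ReactiveAgent()
memory = LimitedMemoryAgent()
for obs in (5, -1, -1):
    r, m = reactive.act(obs), memory.act(obs)
# After seeing 5, -1, -1: the reactive agent says "hold" (present only),
# while the limited-memory agent still says "advance" (average is 1).
print(r, m)  # hold advance
```

The "very short-lived" memory Sam mentions is exactly the `maxlen` window: the agent is shaped by recent experience, but nothing older survives.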

00:14:46:05 – 00:14:47:26
Joseph
One of the things regarding that.

00:14:47:26 – 00:14:49:08
Joseph
Is people are trying.

00:14:49:08 – 00:14:51:17
Joseph
To experiment with this and.

00:14:52:06 – 00:14:58:21
Joseph
And they’re doing it through the same type of training that they’re using for

00:14:59:00 – 00:15:05:12
Joseph
ChatGPT and the other AI engines that are out there. And we’ll talk more about those in a little bit.

00:15:05:12 – 00:15:10:01
Joseph
But what they’re doing is they’re having human beings

00:15:10:06 – 00:15:16:23
Joseph
Emulate emotions, facial expressions of emotions: sadness, happiness, that type of thing. And they’re

00:15:16:23 – 00:15:17:08
Joseph
Having.

00:15:18:01 – 00:15:39:05
Joseph
Multiple iterations of people doing these. They’re sticking them in a database, and then they’re having a computer look at these and interpret them. So just by facial expressions, they’re trying to get computers to understand emotions. How effective do you think that would be? How effective would it be for you to have no other communication method whatsoever

00:15:39:16 – 00:15:43:04
Joseph
But just being able to see someone’s face? Do you think they would be able to

00:15:43:04 – 00:15:45:05
Joseph
Convey emotions accurately enough?
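
The training pipeline Joseph just described, labeled examples of acted expressions stored in a database and matched against new faces, can be caricatured as a nearest-centroid classifier. This is a deliberately tiny sketch; the two-number "feature vectors" are made-up stand-ins for real facial measurements like mouth curvature or brow position.

```python
# Rough caricature of the setup described above: people act out
# emotions, each face is reduced to a feature vector (the numbers
# below are fabricated), and the computer labels a new face by the
# nearest average of the labeled examples.
import math

labeled_faces = {                      # fabricated example "database"
    "happy": [(0.9, 0.1), (0.8, 0.2)],
    "sad":   [(0.1, 0.9), (0.2, 0.8)],
}

# Average each emotion's examples into a centroid.
centroids = {
    emotion: tuple(sum(dim) / len(vecs) for dim in zip(*vecs))
    for emotion, vecs in labeled_faces.items()
}

def classify(face: tuple) -> str:
    """Return the emotion whose centroid is closest to this face."""
    return min(centroids, key=lambda e: math.dist(face, centroids[e]))

print(classify((0.85, 0.15)))   # happy
print(classify((0.15, 0.85)))   # sad
```

Sam's objection lands exactly here: the classifier is only as good as the baseline features and the people who posed for the database, which is also the failure mode of the Xbox camera example discussed later.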

00:15:45:19 – 00:16:08:07
Sam
I don’t know. It would depend on how they’re measuring it. Right. Like if you’re looking at specific muscle movements for a frown versus a smile, but everybody expresses emotion in different ways. So you have to have some kind of baseline to do that. And I forget the name, but we learned it in college. It’s a thing where they show you an image of a guy’s face, and it’s that effect where you can interpret it however you want; it’s the same face over and over again.

00:16:08:13 – 00:16:24:04
Sam
But if you interpret it as he’s happy, you can kind of see it; or he’s sad, you can kind of see it. And so I wonder if you get into something like that, where it’s sort of a gray area of, well, is this person really happy? You know, that’s when you have to go back to the baseline of whether we’re measuring facial tics or

00:16:24:07 – 00:16:27:20
Sam
Right. Certain, I don’t know visual ratios on your face.

00:16:27:29 – 00:16:30:28
Joseph
Yeah, and I agree. And then like I don’t know if I as a.

00:16:30:29 – 00:16:37:12
Joseph
As a person who doesn’t need artificial intelligence would be able to accurately understand emotion. Right.

00:16:37:14 – 00:16:37:23
Sam
Right.

00:16:38:03 – 00:16:40:03
Joseph
And then I look at the technology that we’ve had.

00:16:40:03 – 00:16:44:00
Joseph
With facial recognition already and some of the challenges we’ve had where.

00:16:44:13 – 00:16:45:07
Joseph
You had.

00:16:46:02 – 00:16:51:22
Joseph
For instance, Microsoft’s camera technology that they had on their Xbox series.

00:16:52:05 – 00:16:53:27
Joseph
When that came out not.

00:16:53:27 – 00:16:54:29
Joseph
That many years ago.

00:16:56:02 – 00:16:58:15
Joseph
They only used Caucasian.

00:16:58:15 – 00:17:06:12
Joseph
People to train it. And as people with darker skin, Asians, African-Americans and so forth started using it.

00:17:07:06 – 00:17:08:21
Joseph
The technology didn’t hold up.

00:17:08:26 – 00:17:09:07
Sam
Right.

00:17:09:17 – 00:17:13:15
Joseph
And it was because of the way that the technology itself.

00:17:13:15 – 00:17:16:01
Joseph
Was developed and trained and and the.

00:17:16:01 – 00:17:16:19
Sam
Foundation.

00:17:16:19 – 00:17:18:12
Joseph
Of it. Exactly. So it’s like.

00:17:19:06 – 00:17:22:18
Joseph
If you can’t get that right, then I have a.

00:17:22:18 – 00:17:24:24
Joseph
Very difficult time thinking you’re going to get emotional.

00:17:25:07 – 00:17:35:24
Sam
Yeah, that’s the thing, right? You get into like user error at that point. If you’re if you’re trying to emulate human behavior but you don’t account for the whole world, how are we going to apply this in any practical sense?

00:17:35:24 – 00:17:36:09
Joseph
Right.

00:17:36:19 – 00:17:39:12
Joseph
And that’s why you always have to think you need multiple.

00:17:39:12 – 00:17:47:04
Joseph
Forms of input, voice inflection, body movement, you know, the the micro tics that people have.

00:17:47:11 – 00:17:52:23
Joseph
You’d I can certainly see you coming up with enough data.

00:17:52:23 – 00:17:53:12
Joseph
Points.

00:17:53:27 – 00:17:55:20
Joseph
That you could.

00:17:56:06 – 00:18:04:25
Joseph
Infer what emotions are, but you need to have those data points there. And it has to be more sensors than just trying to read someone’s face.

00:18:04:27 – 00:18:07:09
Sam
Yeah. And do we need this? Why would we need it?

00:18:08:13 – 00:18:25:02
Joseph
Right. Right. And that’s another point. You know, one of the implementations that people have talked about with this is customer service. You know, they had the one hotel in Tokyo, which didn’t last very long, where the front desk was entirely run by automatons.

00:18:25:09 – 00:18:26:10
Sam
I’m sure that was a blast.

00:18:26:18 – 00:18:38:01
Joseph
And it was because they thought the automatons would interact better with people than people would, which I’m not really sure why you would think that. But the problem you run into there is

00:18:38:01 – 00:18:40:13
Joseph
You get into that uncanny valley

00:18:40:28 – 00:18:44:02
Joseph
Scenario where people are just creeped out by it. It looks.

00:18:44:02 – 00:18:44:20
Joseph
Enough.

00:18:44:27 – 00:18:47:17
Joseph
Like a human to make

00:18:47:17 – 00:18:49:05
Joseph
You think it is, but not enough.

00:18:49:10 – 00:19:06:26
Joseph
To know it is. So it made people feel very uncomfortable. But I could see, in a situation like that where you’re implementing emotion detection, where you want to know if someone’s angry, you know, do I really need to get a manager? Are you yelling? Stuff

00:19:06:26 – 00:19:07:11
Joseph
Like that.

00:19:07:25 – 00:19:09:17
Joseph
Where you’re interacting with.

00:19:09:17 – 00:19:10:05
Sam
I could see that.

00:19:11:25 – 00:19:30:13
Joseph
It’s funny that they’re exploring that path now, when you have grocery stores that are doing away with their self-checkouts because people dislike not having a human to deal with at a register. So it’s like, I don’t know if it’s a cultural thing or if it’s a regional thing or what, but it seems people are kind of finicky.

00:19:30:17 – 00:19:40:16
Sam
Yeah, I don’t know. And also, it’s like, is it cost effective to build a robot to do this rather than to just hire someone for minimum wage to work at the front desk?

00:19:40:28 – 00:19:41:11
Joseph
Right.

00:19:41:13 – 00:19:53:24
Sam
That’s a broad... I know the argument is that, well, the robot could do it forever, whereas we have to worry about this person, you know, their livelihood, and, you know, they need to go to the bathroom, to take a break, things like that. Robots don’t have to do that. But it’s like, I mean, come on.

00:19:54:07 – 00:19:56:25
Joseph
You don’t have a contract with them, for one. Robots can’t quit.

00:19:56:25 – 00:20:10:04
Sam
Yeah, exactly. Yeah. You just got to swap out a few parts every once in a while. It’s the practicalities where I get hung up: how are we really going to use these things, and are they going to be worth the effort and time we’re putting in to trying to figure it out?

00:20:10:14 – 00:20:14:27
Joseph
I agree. And I think it’s almost like it’s a fun school project people are

00:20:14:27 – 00:20:15:11
Sam
Working on. Yeah.

00:20:16:20 – 00:20:37:05
Joseph
But you know, the golden ring, the, you know, the prize of artificial intelligence, at least as far as sci-fi tells us, is self-awareness. Self-aware AIs, you know, only exist hypothetically right now, and only in science fiction for the most part.

00:20:37:05 – 00:20:37:12
Sam
Yeah.

00:20:38:03 – 00:20:44:16
Joseph
But these are the types of systems that could understand internal traits and states and conditions and

00:20:44:16 – 00:20:46:04
Joseph
Perceive actual.

00:20:46:04 – 00:20:47:20
Joseph
Human emotions.

00:20:48:18 – 00:20:50:03
Joseph
And these machines are smarter.

00:20:50:03 – 00:21:13:05
Joseph
Than the human mind, which is really what’s terrifying, I think, to a lot of people. They’ll be able to understand and evoke emotions in those they interact with, but they’ll also have their own emotions, their needs, their beliefs. You know, this is where we get into things like a HAL 9000 from 2001, or Skynet, or something like that.

00:21:14:08 – 00:21:17:29
Joseph
I don’t know if anyone’s shooting

00:21:18:02 – 00:21:26:03
Joseph
For self-awareness. Is self-awareness possible with today’s technology?

00:21:26:08 – 00:21:42:16
Sam
I was actually just thinking about that, because when you hear about it having its own emotions and needs... I think it would be more terrifying if it didn’t have emotions, if it was purely logical, right? Because, you know, you make an AI and it instantly determines that humans are bad for the planet and decides to wipe us out.

00:21:42:26 – 00:22:00:20
Sam
Right. Because, like, I mean, we could make that conclusion, but we’re also, you know... are we humans going to wipe ourselves out, you know, intentionally? So it’s like, I don’t know if we would want it, if we would get to the point where that’s possible. I don’t know. I obviously don’t know anything about programming, but how would we program emotion?

00:22:00:20 – 00:22:22:15
Sam
I guess, you know, you’re seeing the baby steps with this facial recognition software, but at what point are we able to program emotions? Is it something like using a human brain as the blueprint? I have no idea where this would go, but it sounds like fantasy, but one that is close enough to reality that, you know, maybe. But I don’t I just don’t know how they would do it.

00:22:23:01 – 00:22:26:14
Joseph
And you make a very valid point there. I think humans in.

00:22:26:14 – 00:22:35:14
Joseph
General have a very difficult time dealing with emotions, recognizing emotions and coping with them.

00:22:35:14 – 00:22:37:11
Joseph
Already. I don’t know.

00:22:37:11 – 00:22:41:02
Joseph
How you would possibly translate that into software.

00:22:41:03 – 00:22:58:24
Sam
No, I think it would have to be... you can’t, I mean, it’s all math-based, right? So you can’t quantify that. That’s the whole point of emotion: you can’t quantify it, right? Unless you measure, like, heart rate, I guess, if you’re angry, or sweating, or, you know, biological reactions. But when it gets to interpreting emotions, I just think it’s too nebulous.

00:22:58:24 – 00:23:06:19
Sam
And if we did have a self-aware A.I., it would have to be, you know, maybe based off of personality or it would just be purely logical.

00:23:07:10 – 00:23:18:06
Joseph
So then we’re talking something more along the lines of RoboCop, you know, where we take the ex-cop, we stick his brain into the robot, and that’s the artificial intelligence.

00:23:18:06 – 00:23:31:24
Sam
Yeah, I was thinking that, or in Halo, if you guys are familiar with that, they use a human brain to then make an AI, right? So the AI is, like, part human, something like that. But I don’t think we can do that yet, and I don’t know how we would.

00:23:32:12 – 00:23:34:10
Joseph
You know what I think is probably

00:23:34:10 – 00:23:44:08
Joseph
The biggest limiting factor right now, I think is our computing power. Yeah. You know, in order to produce the amount of computational power that the human brain has.

00:23:44:27 – 00:23:46:11
Joseph
You need the fastest.

00:23:46:11 – 00:23:58:13
Joseph
Supercomputer right now to even approach it. And I think until we get to, you know, real practical quantum computing, I don’t think we have much of a hope of even reproducing it.

00:23:58:24 – 00:23:59:18
Joseph
People I think.

00:23:59:27 – 00:24:04:22
Joseph
Don’t understand how remarkably capable the human brain is.

00:24:05:00 – 00:24:05:12
Sam
Like going.

00:24:05:12 – 00:24:05:18
Joseph
On.

00:24:05:25 – 00:24:07:28
Joseph
Yeah, how fast it operates.

00:24:07:28 – 00:24:24:18
Joseph
How many different things it can do at one time. And computers just haven’t gotten to that point yet. Yes, the computers are very good at doing the same thing over and over again, and they’re very good at doing math, but they’re only that good because humans make them that good.

00:24:24:23 – 00:24:32:18
Sam
Yeah. And it’s when you cross that line where what we’re creating is superseding us that it’s like, okay, well, maybe we’ve gone a little bit too far.

00:24:32:18 – 00:24:33:22
Joseph
Yeah, yeah.

00:24:34:10 – 00:24:36:13
Joseph
Well, you know, it’s a good point.

00:24:36:13 – 00:24:53:23
Joseph
Where we can take a quick break, and we’ll come back and talk about whether artificial intelligence is dangerous in its current form and future forms. We’ll be right back.

00:24:53:23 – 00:25:23:29
Joseph
For over seven years, the Second Sith Empire has been the premier community guild in the online game Star Wars: The Old Republic, with hundreds of friendly and helpful active members, a weekly schedule of nightly events, annual guild meet-and-greets, and a community both on the Web and on Discord. The Second Sith Empire is more than your typical gaming group.

00:25:24:23 – 00:25:56:07
Joseph
We’re family. Join us on the Star Forge server for nightly events such as operations, flashpoints, world boss hunts, Star Wars trivia, guild lottery and much more. Visit us on the web today at www.thesecondsithempire.com.

00:25:56:21 – 00:25:58:20
Joseph
So Elon Musk...

00:25:58:20 – 00:26:21:04
Joseph
Elon Musk wrote about the pace of progress in artificial intelligence. He says, and I’m not referring to narrow AI, it is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast; it’s growing at a pace close to exponential. The risk of something

00:26:21:04 – 00:26:23:06
Joseph
Seriously dangerous happening.

00:26:23:14 – 00:26:28:09
Joseph
Is in the next five-year time frame. Ten years at most.

00:26:29:25 – 00:26:30:14
Sam
When is this quote from?

00:26:30:21 – 00:26:33:18
Joseph
So this was from Elon Musk in an interview, where the

00:26:33:18 – 00:26:35:29
Sam
Quote I just wanted to see if we were still in the window or not.

00:26:36:28 – 00:26:42:02
Joseph
I didn’t get a date on that, but I have to imagine we’re pretty close to that window. Any day now.

00:26:42:02 – 00:26:43:18
Sam
The robot uprising is going to happen.

00:26:43:22 – 00:26:46:22
Joseph
Well, the fact that he made cars that are self-driving.

00:26:46:22 – 00:26:51:16
Joseph
And running people over tells me that, right now, it actually, physically, is dangerous.

00:26:51:16 – 00:26:55:21
Sam
He can make himself sound smarter by just becoming a self-fulfilling prophecy.

00:26:55:21 – 00:27:00:14
Joseph
So what do you think of that? I mean, I think there’s a lot of.

00:27:00:14 – 00:27:03:00
Joseph
Scholars, there’s a lot of scientists who talk about

00:27:03:06 – 00:27:05:02
Joseph
AI. Elon Musk

00:27:05:02 – 00:27:06:00
Joseph
Is a guy who’s.

00:27:06:18 – 00:27:07:14
Joseph
Neck deep in it.

00:27:07:15 – 00:27:12:02
Joseph
You know, he’s building them, he’s releasing them. You see what they do, you know.

00:27:12:13 – 00:27:13:10
Joseph
Space X has.

00:27:13:10 – 00:27:25:11
Joseph
Rockets that fly back to Earth and land themselves. Tesla has cars that can drive themselves and park themselves and stuff. So that’s real, practical AI that’s in the world now.

00:27:26:20 – 00:27:28:12
Joseph
How seriously do

00:27:29:02 – 00:27:31:24
Joseph
You take this kind of warning from Elon Musk?

00:27:32:06 – 00:27:52:19
Sam
Not that seriously, just because I don’t see any real-world application of this idea, right? I also don’t know what DeepMind is, but it sounds very scary. But I just don’t think... I don’t know. I feel like we would be seeing more examples of it, other than the Teslas running people over and causing traffic.

00:27:52:19 – 00:28:18:00
Sam
But that’s just faulty hardware. It’s not, you know, the AI thinking for itself and choosing to kill all humans; that’s a functionality issue. Right. So I don’t know, I don’t think it would be this quick. I just think we would see more implementation of AI in our day-to-day lives before it turned into something dangerous. The most I could see it being is, like, an overreliance on computers and AI instead.

00:28:18:00 – 00:28:33:27
Sam
And then when they malfunction, then you have problems. Like the recent airport thing, what was it, the FAA systems went down, things like that. You know, things where we have large-scale operations being run by AI, and then the AIs break, and then we have a problem.

00:28:33:27 – 00:28:35:24
Joseph
So Musk was quoted.

00:28:36:04 – 00:28:37:04
Joseph
On a separate occasion,

00:28:37:24 – 00:28:40:03
Joseph
Comparing A.I. to the dangers.

00:28:40:03 – 00:28:43:18
Joseph
Of the dictator of North Korea.

00:28:43:18 – 00:28:45:27
Joseph
Now, yes, it’s Elon Musk.

00:28:45:27 – 00:28:47:01
Joseph
So there’s a lot of talk.

00:28:47:01 – 00:28:48:00
Sam
With a big old grain of salt.

00:28:49:06 – 00:28:50:12
Joseph
But there’s a lot of.

00:28:50:12 – 00:28:54:15
Joseph
Self-Promoting, you know, angles that he goes through here.

00:28:55:03 – 00:28:57:10
Joseph
But we’ve got artificial.

00:28:57:10 – 00:29:04:09
Joseph
Intelligence in machines that are capable of killing people but not designed to, okay?

00:29:04:14 – 00:29:07:05
Joseph
And we also have AI in the

00:29:07:17 – 00:29:17:26
Joseph
Devices that are designed to kill people. You know, you’ve got Predator drones that are AI driven. You’ve got Tomahawk cruise missiles that are AI driven. You’ve got.

00:29:18:20 – 00:29:21:12
Joseph
Phalanx CIWS,

00:29:21:28 – 00:29:25:25
Joseph
Close-in weapon systems on ships that are AI-driven.

00:29:27:15 – 00:29:28:21
Joseph
These are machines that.

00:29:28:21 – 00:29:35:17
Joseph
Are built already to kill and are powered by A.I. Are those potentially dangerous or do you think we have enough control over those?

00:29:35:26 – 00:29:53:10
Sam
Well, I mean, they’re weapons-grade, so they’re inherently dangerous. It would be more about, again... are we talking about the notion of them, like, rising up and actively trying to kill people? Because I think it would be much more realistic for them to just be taken over by another human, right, to have somebody hack into them, or to have them malfunction.

00:29:53:10 – 00:30:08:17
Sam
And instead of targeting here, you’re targeting somewhere else you’re not supposed to. I just don’t think it would be the AI, like, choosing to, you know, kill a bunch of people. I think instead it would be a malfunction.

00:30:09:05 – 00:30:19:14
Joseph
So, to paraphrase, we’re hoping that it’s humans that are the problem, not the AI.

00:30:19:15 – 00:30:21:20
Sam
And we can count on that, yeah.

00:30:22:01 – 00:30:23:13
Joseph
You know, to a certain extent, that.

00:30:23:13 – 00:30:32:13
Joseph
Logic, that logic makes sense, because you can count on the human factor. Like, it’s very likely that you would have a foreign

00:30:32:13 – 00:30:34:17
Joseph
Entity attack.

00:30:34:24 – 00:30:42:07
Joseph
Our infrastructure here before, I think, you would have an AI go rogue and shut down the power grid.

00:30:42:13 – 00:31:01:11
Sam
That’s exactly what I’m talking about. Like you talked about the Predator missiles. The Predator missile is not going to wake up one day and go, hey, I’m going to go blow up that town. Someone’s going to have to tell it to go do that. We’re not at the point where these things can, like, actively, you know, pursue missions or pursue goals on their own unless they’re told to by somebody else,

00:31:01:11 – 00:31:02:00
Sam
An operator.

00:31:02:05 – 00:31:04:00
Joseph
So. All right, let’s run with that.

00:31:04:00 – 00:31:09:16
Joseph
For a second. So they’re not going to rise up. You’re not going to have Skynet take over the world and nuke the entire world.

00:31:10:16 – 00:31:12:01
Joseph
But you have machines that.

00:31:12:01 – 00:31:15:11
Joseph
Are trusted to go do a job.

00:31:15:14 – 00:31:21:02
Joseph
That may involve violence. If those machines can then be usurped.

00:31:21:02 – 00:31:25:08
Joseph
By a nefarious third party and are.

00:31:25:08 – 00:31:28:18
Joseph
Trusted, that trust allows

00:31:28:18 – 00:31:28:26
Joseph
Them to.

00:31:28:26 – 00:31:32:21
Joseph
Continue to do their job when they get usurped. We have a.

00:31:32:21 – 00:31:34:00
Joseph
New target designator.

00:31:34:01 – 00:31:52:08
Sam
Right, exactly. And that’s, I think, where you get into the real-world danger of this. But I mean, our world in general has a heavy reliance on computers, right? And so I think that just comes with the risk, right, of the ease of use and the scale of our operations.

00:31:52:08 – 00:32:08:10
Sam
You need something like AI to run it, because it’s just not feasible for people to do it all the time. And I think that just comes with the territory: if this is how we’re going to run things, then there’s a chance that someone could take it over. It’s no different than... well, it’s different, but I’m reducing it a little bit.

00:32:08:16 – 00:32:11:26
Sam
But if you had a gun and someone took your gun and killed you with it, it’s the same thing.

00:32:12:21 – 00:32:14:21
Joseph
So AI is the tool.

00:32:15:09 – 00:32:17:00
Joseph
It’s not the intent for now.

00:32:17:00 – 00:32:44:15
Sam
Yeah. Okay. Until we get to the point where, you know, we have a fleet of drones that are flying around the globe just taking potshots wherever they want, that aren’t programmed to... well, I guess they would be programmed at that point. But I’m saying, instead of a fleet of jets flown by pilots, if it was a fully automated bunch of drones where they could all communicate and make their own decisions, once they get into that level of autonomy, then that’s when you’re running into more of an AI-based issue.

00:32:45:05 – 00:32:45:19
Sam
AI.

00:32:46:02 – 00:32:47:12
Joseph
So Russia’s president.

00:32:47:12 – 00:32:48:27
Joseph
Vladimir Putin, everyone’s favorite.

00:32:49:10 – 00:32:51:13
Sam
We’re getting all the banner people out here today.

00:32:52:11 – 00:32:53:11
Joseph
He said at.

00:32:53:11 – 00:33:06:06
Joseph
One point in time artificial intelligence is the future, not only for Russia but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict.

00:33:06:06 – 00:33:07:28
Joseph
Whoever becomes the leader in this.

00:33:07:28 – 00:33:10:28
Joseph
Sphere will become the ruler of the world.

00:33:10:29 – 00:33:14:07
Sam
That’s a little extreme.

00:33:14:12 – 00:33:15:04
Joseph
First off, do.

00:33:15:04 – 00:33:16:02
Joseph
You agree with that?

00:33:16:15 – 00:33:36:08
Sam
I mean, I guess, because at that point, wouldn’t you just have the strongest army too? If it’s, he says, got enormous opportunity, right? So then by that logic you would, unless those unpredictable threats, like, make it worse for you? I don’t know. I don’t think it’s any different than having the strongest artillery or standing army.

00:33:36:20 – 00:33:48:21
Sam
Right. Whoever has that, you’re, by default, the most powerful. Unless you control, like, the economy, that’s different. But in terms of military force, if you have a bigger or more effective army, yeah, you’re going to be more powerful for it.

00:33:49:06 – 00:34:03:15
Joseph
So do you think that in order to combat nations like that, assuming Russia’s gearing up to develop artificial intelligence, the only way to combat them is with your own artificial intelligence?

00:34:03:27 – 00:34:21:21
Sam
I think so. I mean, that’s how these things usually go, right? Like an arms race, which, sci-fi authors out there, that’s a free title. But yeah, I just think you’re going to dig yourself into a hole, right? Because the deeper we go into this, the more you’re getting into these threats that are difficult to predict, like he’s talking about.

00:34:22:04 – 00:34:37:00
Sam
So the further the technology goes, the more we’re getting to that edge of accidentally making something that could either gain its own intelligence, or something that is more likely to be, like you said, usurped and used against, you know, whatever country’s making it.

00:34:37:13 – 00:34:37:23
Joseph
Okay.

00:34:38:03 – 00:34:39:27
Joseph
Now, that’s... we’re talking AI

00:34:39:29 – 00:34:42:03
Joseph
In actual active weapons there, but

00:34:42:26 – 00:34:44:01
Joseph
There’s potential danger.

00:34:44:01 – 00:34:45:03
Joseph
Elsewhere, too.

00:34:46:01 – 00:34:46:24
Joseph
What about social.

00:34:46:24 – 00:34:52:02
Joseph
Manipulation? What do you think AI could do from a social manipulation standpoint? I mean,

00:34:52:04 – 00:34:55:09
Joseph
So far we’ve had social media.

00:34:56:00 – 00:35:03:04
Joseph
And I don’t want to say usurped, because it’s being used as it’s designed, right? But you’ve had situations where you’ve

00:35:03:04 – 00:35:04:01
Joseph
Got Facebook and.

00:35:04:01 – 00:35:15:19
Joseph
The whole Cambridge Analytica thing, and you’ve got Twitter, you know, all the Twitter files coming out now telling us how people try to manipulate elections and stuff.

00:35:16:24 – 00:35:19:17
Joseph
Where do you think AI falls in there? You know, is

00:35:19:17 – 00:35:21:27
Joseph
That algorithm as you go through your.

00:35:22:11 – 00:35:26:02
Joseph
Social feed, is that AI manipulating you in such

00:35:26:02 – 00:35:27:15
Joseph
A way that could be dangerous?

00:35:27:22 – 00:35:43:02
Sam
Absolutely. And I think that this is where you get into the things that are much more real world dangers. Right. Especially when it comes to things like social media. You’ve run into that echo chamber issue where you’re only hearing ideas and things that that reinforce what you think. So you never hear anybody else’s ideas, for better or worse.

00:35:43:11 – 00:36:03:09
Sam
But generally, I think that’s a bad thing. I think that you should be able to, you know, see ideas from everybody and then interpret them how you want. But when you get into this with social media, where you’re seeing the same things over and over again, it just reinforces your own beliefs and makes you double down. And it makes everybody close minded or more isolated from each other and more extreme in their beliefs, too, in certain situations.

00:36:03:28 – 00:36:20:17
Sam
So I think when you’re getting into things like that, and like you said, the Cambridge Analytica thing where it’s straight up manipulation of people, I think those when those have real world consequences, which they did and still do, I think that’s much more dangerous than worrying about a drone going rogue or something like that.

00:36:21:15 – 00:36:23:03
Joseph
Well, surely manipulating.

00:36:23:03 – 00:36:42:07
Joseph
The mindset of common citizens isn’t something that’s unique to a I mean, you had the Nazi Germany head of the entire propaganda ministry that was designed to do that. You’ve you’ve had this type of thing that was driven by humans for hundreds of years now.

00:36:43:21 – 00:36:46:01
Joseph
What does I bring to the.

00:36:46:01 – 00:36:50:27
Joseph
Plate that makes it any more dangerous than just humans doing this type of activity?

00:36:51:02 – 00:37:10:22
Sam
I think it’s the scale and the pace of it, right? So AI is very good at learning. That’s what it does, or at least, I’m talking about the algorithm in social media, right? It’s good at learning what you do, what you look at. And I think it can do it at a pace and with an effectiveness that humans just can’t, or can’t do as fast or on as big a scale.

00:37:10:22 – 00:37:28:05
Sam
I mean, a human could look at what you do on Twitter, what you click on, what you like, but it would take them much longer to figure out, okay, well, what should I serve them next in their timeline that they’ll click on, versus an AI looking at it and figuring it out like that. So I think it just makes it more effective and efficient, honestly.
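
The feedback loop Sam describes, where the algorithm tallies engagement and serves more of whatever you clicked, boils down to something like this. A minimal sketch only; the topics and the most-clicked-wins rule are invented simplifications of real recommender systems.

```python
# Minimal sketch of the engagement loop described above: every click
# reinforces a topic, and the feed serves the most-reinforced topic,
# which is how the echo-chamber effect compounds over time.
# Topics and the selection rule are made up for illustration.
from collections import Counter

clicks = Counter()

def record_click(topic: str) -> None:
    clicks[topic] += 1            # each click reinforces that topic

def next_post(candidates) -> str:
    """Serve the candidate topic the user has engaged with most."""
    return max(candidates, key=lambda t: clicks[t])

for topic in ("politics", "politics", "sports"):
    record_click(topic)

print(next_post(["politics", "sports", "cooking"]))   # politics
```

Note the loop has no notion of diversity or truth, only engagement, which is exactly the echo-chamber concern raised here: the more you click, the narrower the feed gets.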

00:37:28:12 – 00:37:29:12
Joseph
That’s a very good point.

00:37:30:23 – 00:37:34:06
Joseph
So the other one they talk about is invasion of.

00:37:34:06 – 00:37:57:14
Joseph
Privacy, and social grading. I don’t know that we’ve ever really talked about social grading in the past here. It’s being done in some Asian countries already, where your reputation on social media determines how you’re treated in society. But I’ve seen this in sci-fi as well, where it’s extremely emphasized in how people treat you.

00:37:58:04 – 00:38:05:12
Joseph
But on the invasion of privacy, you brought up a very good point of, well, everything’s listening, and if I say something, I start getting advertisements.

00:38:06:19 – 00:38:09:21
Joseph
Is that a bad thing that you get targeted advertising.

00:38:10:11 – 00:38:19:09
Sam
Uh, I think it depends on how you look at it. Um, I know people that feel like they’re being manipulated. They’d rather it not be targeted: just give me what I want.

00:38:19:25 – 00:38:22:18
Joseph
Uh, well, that’s what targeted advertising is: they

00:38:22:18 – 00:38:24:12
Joseph
Give you what they think you want.

00:38:24:29 – 00:38:32:15
Joseph
Without targeted advertising, you’re just going to get a bunch of crap thrown at you. I mean, is that really a bad thing, that you get targeted advertising?

00:38:32:15 – 00:38:39:01
Sam
I think the people that have issues with it see it as being manipulated, even if it’s something like... well.

00:38:39:01 – 00:38:40:15
Joseph
Advertising is manipulative.

00:38:40:15 – 00:38:41:21
Sam
It’s true. Which is why it’s so the.

00:38:41:25 – 00:38:43:21
Joseph
Definition of advertising.

00:38:43:21 – 00:39:08:04
Sam
I know, I know. I just think... I think people have an issue with it because it’s that invasion of privacy: even in my own home, I’m being sampled for data, and it’s being regurgitated back at me so I can become a better consumer. You know, eventually it just feels like it all boils down to you pumping money into, you know, the economy.

00:39:08:08 – 00:39:09:21
Joseph
Okay. Well, by that.

00:39:10:06 – 00:39:15:01
Sam
It is effective, though, if you’re a business owner. Absolutely. From a capitalist standpoint, it’s great.

00:39:15:12 – 00:39:16:08
Joseph
Exactly.

00:39:17:03 – 00:39:34:06
Joseph
One of the other things they talk about is misalignment between our goals and the machine's. And this one, I don't know how much of a problem this is. The great example they give is: you get into a self-driving cab and you say, get me to the airport as quickly as possible. Okay.

00:39:35:06 – 00:39:50:28
Joseph
But if the machine itself is doing what it thinks you asked it to do, that could involve exceeding the speed limit. It could involve dangerous driving. It could involve driving into the river that's between you and the airport, to try to get you there.
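
The airport example is the textbook "misspecified objective" problem: if the planner is only scored on time, every unstated constraint is free to be violated. A toy sketch, with all names and numbers invented for illustration:

```python
# Toy route scorer: the optimizer picks whichever plan minimizes cost.
# With a speed-only objective, the reckless plan wins; pricing in the
# constraints the rider never stated changes the winner.

def cost(plan, speed_weight=1.0, safety_weight=0.0):
    """Lower is better; 'plan' describes one candidate route."""
    time_cost = plan["minutes"] * speed_weight
    # Speeding and off-road shortcuts cost nothing unless we weight them.
    violation_cost = plan["violations"] * safety_weight
    return time_cost + violation_cost

plans = [
    {"name": "legal route",       "minutes": 30, "violations": 0},
    {"name": "speeding",          "minutes": 22, "violations": 3},
    {"name": "through the river", "minutes": 15, "violations": 10},
]

naive = min(plans, key=lambda p: cost(p))                      # time only
aligned = min(plans, key=lambda p: cost(p, safety_weight=50))  # constraints priced in

print(naive["name"])    # "through the river": fastest, most dangerous
print(aligned["name"])  # "legal route"
```

The `safety_weight` term is a stand-in for the actual alignment work: enumerating the constraints the rider never thought to say out loud.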

00:39:52:06 – 00:40:00:12
Joseph
I think this is kind of a silly example. I mean, this is no different than people following their GPS off the route, you know.

00:40:02:10 – 00:40:24:00
Joseph
Is this something we need to worry about, if we're putting our lives in the hands of these machines at some point in time? They're already talking about having air taxis to the airport and stuff like that. So you're ramping up the level of complexity and the level of danger associated with reliance on these machines.

00:40:24:27 – 00:40:32:12
Joseph
Is it possible that somebody misses a line of code that says "stop at stop signs"? Like, is that a danger?

00:40:32:12 – 00:40:49:19
Sam
I mean, not really, right? If these things are being put out on a global scale, surely they would test them and see if that's an issue. You've had some issues with this with self-driving cars, where they have malfunctions. You know, I saw one the other day where a Tesla was in like a tunnel and just stopped, and I don't know why.

00:40:49:19 – 00:41:09:11
Sam
And then like eight cars hit it. So things like that will happen. But if you're talking about the foundation of the code getting out there with something wrong in it, I don't really think you have to worry about that. I feel like it would be extremely irresponsible of whatever company is making it to put it out there without having it fully vetted, especially when you're getting into transportation and things like that.

00:41:09:29 – 00:41:23:25
Joseph
There was one example of a Tesla that was driving on a highway at high speed, which they supposedly excel at. They're very good at that, because that's how the sensors work best and the computer works best, with everything going

00:41:23:25 – 00:41:24:07
Sam
Fast.

00:41:25:13 – 00:41:30:18
Joseph
On a highway situation where they're not doing stop-and-go traffic a lot. And what happened

00:41:30:18 – 00:41:39:04
Joseph
was there was a vehicle stopped in the left lane that this thing was driving in, and it wasn't expecting a vehicle; it wasn't expecting a stationary target.

00:41:39:25 – 00:41:50:21
Joseph
And because they use light and they use radar, they'll gauge the distance between themselves and the next machine, and the assumption is that machine is going to be moving,

00:41:50:21 – 00:42:03:13
Joseph
So they can make calculations based on timing. And what happened was, this vehicle was stopped in the left-hand lane. The smart car saw that the machine was there,

00:42:03:27 – 00:42:12:18
Joseph
but because it wasn't moving, it assumed that it was a glitch in the sensor, never tried to hit the brakes, and literally accelerated right through the vehicle.

00:42:13:22 – 00:42:29:10
Joseph
And it did everything it was supposed to do based on the rules that it had. Someone had missed the rule that said: hey, there's a chance that something could be stopped in your lane; you have to stop for it. Yeah. And that got out there
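
The failure described here, a rule set missing a case for stationary obstacles, fits in a few lines. This is an illustrative toy, not Tesla's actual logic:

```python
# Perception filter that gates radar returns by relative velocity.
# Stationary returns look like roadside clutter (signs, bridges), so a
# naive rule discards them -- exactly the missing-rule failure above.

def should_brake(returns, ignore_stationary=True, min_distance=50.0):
    """Each return is (distance_m, relative_speed_mps); 0.0 = stationary."""
    for distance, rel_speed in returns:
        if ignore_stationary and rel_speed == 0.0:
            continue  # assumed clutter: the rule nobody wrote an exception for
        if distance < min_distance:
            return True
    return False

scene = [(40.0, 0.0)]  # a car stopped dead in our lane, 40 m ahead
print(should_brake(scene))                           # False: drives through it
print(should_brake(scene, ignore_stationary=False))  # True: brakes
```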

00:42:29:10 – 00:42:37:06
Joseph
after all this extensive testing. That still went out there into production. So there's a chance that you can get code out there that's broken.

00:42:36:22 – 00:42:37:06
Sam
That’s true.

00:42:38:18 – 00:42:41:05
Joseph
How do you deal with that? Who's liable at that point in time?

00:42:41:13 – 00:42:58:27
Sam
I mean, probably the company, still, right? It's still their self-driving car that rammed into it. I think the onus is still on them. Even if it's something that they could never predict, then don't make the technology. If there's a chance something like this could happen, then maybe it's just not meant to be.

00:42:58:27 – 00:43:09:07
Sam
Or we just go through the trial and error of, you know, potentially people losing their lives until we figure it out and put it in the software updates if you want to look at it in like a morbid sense. But I mean, there’s things.

00:43:09:07 – 00:43:10:13
Joseph
That could be a high attrition.

00:43:10:13 – 00:43:27:09
Sam
Rate. That’s what I’m saying. Like, I just think maybe at a certain point, if the body count gets high enough before you hit version 4.0, maybe we just don’t do it. Maybe we just stick to driving ourselves around or we limit it to, you know, certain public transportation options instead of every consumer having their own self-driving car.

00:43:27:18 – 00:43:30:24
Joseph
And that's certainly a consideration to solve some of these things.

00:43:31:18 – 00:43:38:16
Joseph
The last area I wanted to talk about was discrimination. You know, we had already kind of talked about this to a certain extent with

00:43:38:16 – 00:43:39:15
Sam
The social grading, the.

00:43:39:15 – 00:43:46:04
Joseph
Social grading and with the malfunction of the Microsoft camera system for minorities.

00:43:46:26 – 00:44:10:16
Joseph
How do you think this would have an impact on society itself from a discrimination standpoint, via facial recognition AI? We had one situation very recently in our area here where somebody was arrested and held for, I think, 48 hours based on facial recognition that was completely wrong. So it's

00:44:10:16 – 00:44:21:15
Joseph
already happening. Is that a danger that we need to be aware of? Are there checks and balances that can be put in place? Should we not be relying on AI for this type of thing?

00:44:21:26 – 00:44:48:29
Sam
I mean, yeah, we should not be, if this is going to happen. Sure, there are mix-ups, with people getting held when they're not supposed to be, and that's an issue for another day, of the justice system and things like that. But I think that when it comes to things like this, where you're affecting people's lives, you should probably have like a double-check system, or like a backup, to be like: okay, the robot figured it out, but let's just use our own eyes and see if this is the guy.

00:44:50:00 – 00:45:13:18
Joseph
And I see something like this being used extensively in high-traffic areas, airports, right? So you want to be safe going through the airport, but you want to be expeditious in moving people through. So for a lot of things that we do, we rely on technology to accomplish that. If you get a certain failure rate on that system itself,

00:45:14:06 – 00:45:18:07
Joseph
is that tolerable? Or should that be 100% accurate?

00:45:18:19 – 00:45:41:05
Sam
Um, that's a good question. I mean, if you're going to be, you know, 99% effective, then for that 1% there still needs to be a human element to it, so that a person isn't held for three days because they got the wrong person. If someone gets pulled up on facial recognition, someone needs to look at it, you know, or have that person come in.

00:45:41:06 – 00:45:48:00
Sam
And I'm just saying, there needs to still be a human element. We can't let it totally be run by the machines. They're not going to be 100% accurate.
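
Sam's "human element" amounts to a review gate plus some base-rate arithmetic: even a 99%-accurate system produces a steady stream of wrong matches at airport volumes. A sketch, with made-up thresholds and traffic numbers:

```python
# A face match is treated as a lead, never a final decision: weak scores
# are dropped, everything else goes to a human reviewer.

def route_match(score, review_threshold=0.90):
    if score < review_threshold:
        return "discard"       # too weak to bother a reviewer with
    return "human_review"      # a person confirms before any action is taken

def daily_errors(travelers, accuracy):
    """How many people a day get misclassified at a given accuracy."""
    return travelers * (1 - accuracy)

print(route_match(0.50))            # discard
print(route_match(0.97))            # human_review
print(daily_errors(50_000, 0.99))   # roughly 500 mistaken matches per day
```

At 50,000 travelers a day, "99% effective" still means hundreds of errors daily, which is why the resolution path matters more than the headline accuracy.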

00:45:48:00 – 00:45:49:26
Joseph
So you need a path of resolution.

00:45:50:00 – 00:46:00:29
Joseph
Yeah, with a human involved. And I have my own story. About two, three years ago, I got a ticket in the mail for a vehicle that I didn't even own.

00:46:01:22 – 00:46:16:24
Joseph
And it came back as a red light camera somewhere in North Jersey, in a town I knew I'd never even been to, like Amboy or something like that. So they send it with a picture, and there's a link to the video for it. And you look at the video,

00:46:17:10 – 00:46:39:27
Joseph
and it had, what is there, five or six characters on the license plate? It had all the characters of a license plate from a vehicle I hadn't had in 20 years. That plate was originally registered to me, but there was a glitch, either on the camera or on the plate:

00:46:40:06 – 00:47:04:27
Joseph
you couldn't read one of the digits in the middle. So the AI just assumed that it was my plate, generates a ticket, and sends the ticket out to me. And I got the order to appear in court. That's an AI-driven problem that should have been looked at by a human being. But because of the number of tickets that they produce,

00:47:05:14 – 00:47:20:25
Joseph
you can't. The whole point of having an automated system is because you can't have a human looking at all these things. However, to your point, I was able to call them up and explain to them what it was. They were able to resolve the ticket on the phone; I didn't have to go for the court appearance.

00:47:21:05 – 00:47:30:18
Joseph
So that system at least had that path of resolution. I think as long as we have that in place, we're safe.
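
The ticket story is what happens when a lookup quietly treats an unreadable character as a wildcard and auto-picks a hit anyway. A sketch of the ambiguity, with invented plates and owners:

```python
import re

# One unreadable digit turns an exact plate lookup into a wildcard match.
registry = {
    "ABC1234": "Joseph (plate retired 20 years ago)",
    "ABC1734": "the actual driver",
}

def candidates(partial_plate, registry):
    """'?' marks the character the camera couldn't read."""
    pattern = re.compile("^" + partial_plate.replace("?", ".") + "$")
    return [owner for plate, owner in registry.items() if pattern.match(plate)]

hits = candidates("ABC1?34", registry)
print(len(hits))  # 2: ambiguous, which is exactly where a human should look
```

Auto-issuing a ticket to the first hit is the bug; flagging any multi-hit lookup for human review is the "path of resolution" version.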

00:47:31:15 – 00:47:34:12
Joseph
But do you want to be that guy that they make that mistake with?

00:47:34:22 – 00:47:59:04
Sam
No. And that's, you know, that's the territory, right? If we're going to rely on these things, it's something similar to getting on a plane, where you're giving up an amount of control for convenience, honestly. If you're going to give up that element of what you can control, then there's a risk that comes with that, no matter what it is.

00:47:59:12 – 00:48:00:26
Sam
And I think that this is an example of that.

00:48:00:26 – 00:48:04:19
Joseph
That just sounds like a really poor way of explaining away airplane crashes.

00:48:05:12 – 00:48:08:09
Sam
When you get on the plane, you've got to expect it. All right.

00:48:09:03 – 00:48:12:15
Joseph
What do you expect to live the least expect?

00:48:12:16 – 00:48:17:02
Sam
I bring a parachute every time I travel.

00:48:17:02 – 00:48:28:11
Joseph
All right. I think we've kind of exhausted where we can go wrong with AI at this point from a danger standpoint. Let's take our last break, and we'll come back and talk about

00:48:28:15 – 00:48:42:05
Joseph
the impact of AI on our world today and moving forward. We'll be right back.

00:48:42:05 – 00:49:12:10
Joseph
For over seven years, the Second Sith Empire has been the premier community guild in the online game Star Wars: The Old Republic, with hundreds of friendly and helpful active members, a weekly schedule of nightly events, annual guild meet-and-greets, and a sense of community both on the Web and on Discord. The Second Sith Empire is more than your typical gaming group.

00:49:13:05 – 00:49:38:16
Joseph
We're family. Join us on the Star Forge server for nightly events such as flashpoints, world boss hunts, Star Wars trivia, guild lottery, and much more. Visit us on the web today at www.thesecondsithempire.com.

00:49:43:28 – 00:49:49:20
Joseph
Welcome back to Insights Into Tomorrow, and thank you to today's sponsor, the Second Sith Empire.

00:49:49:21 – 00:49:50:29
Sam
They really paid for all that airtime.

00:49:50:29 – 00:49:56:18
Joseph
Hey, they are an awesome guild. I recommend everyone look them up. Anyway, moving right along.

00:49:56:22 – 00:49:59:21
Sam
One more.

00:49:59:21 – 00:50:09:12
Joseph
Anyway, let's look at the impact of AI on the world today. Some quick figures I dug up here from Simplilearn.com:

00:50:10:06 – 00:50:30:20
Joseph
The revenue from the artificial intelligence software market worldwide is expected to reach $126 billion by 2025. That's significant. Not huge, but significant. Gartner predicts 37% of organizations have implemented AI in some form.

00:50:30:20 – 00:50:31:12
Sam
It’s got to be higher.

00:50:31:21 – 00:50:48:11
Joseph
It would have to be. The percentage of enterprises employing AI grew 270 percent over the last four years. And according to a survey by Servion Global Solutions, by 2025, 95 percent of customer interactions will be powered by AI.

00:50:48:12 – 00:50:50:09
Sam
So we’ve got two years to figure that out.

00:50:50:09 – 00:51:11:03
Joseph
That's right. And a recent 2020 report from Statista reveals the global AI software market is expected to grow approximately 54 percent year over year, and is expected to reach a forecast size of 22.6 billion, which doesn't really make sense.
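
As a quick check, the two figures may describe different points on the same curve rather than contradicting each other: 22.6 billion compounding at 54% a year for four years lands almost exactly on the 126 billion 2025 forecast (this assumes the 22.6 billion is a roughly-2021 baseline, which the quoted report doesn't state):

```python
# Compound growth check on the two market-size figures quoted above.
base = 22.6     # $B, near-term forecast size
growth = 1.54   # 54% year over year
years = 4

projected = base * growth ** years
print(round(projected, 1))  # about 127, close to the $126B-by-2025 figure
```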

00:51:12:01 – 00:51:16:04
Sam
Sure, the math is weird. They all got the numbers, right?

00:51:16:04 – 00:51:27:09
Joseph
Don't worry about it. I think the conclusion here is that AI is very much embedded in our lives now. Are we better off for that, or are we worse off for that?

00:51:27:09 – 00:52:12:07
Sam
That's a good question. We've looked at every aspect of AI, and I think it's good in some aspects, right? We're taking the load off of humans, and if we can run things with an AI just to make our lives a little bit easier, I think that's a good thing, because at a certain point, when we're talking about things on a global scale, it's just not practical to expect humans to be able to run all that twenty-four seven. But then when you get into the more dangerous aspects of social manipulation, and we're about to get into the discussion of how it affects creative endeavors, I think that's when you start getting into the dangerous territory. And it's really going to be a question of: where does this go in the next 5 to 10 years? Are we getting that, you know, full-on AI integration in every day of our lives? Or is it still like it is now, where it's more in the background, not necessarily as prominent?

00:52:12:08 – 00:52:33:13
Joseph
I agree. I think we're very, very much on the cusp of something significant here. I think we're kind of nipping around the edges right now until we get deeper into it. Of the specific projects that I did want to talk about, one that's in the news now is ChatGPT. Their tagline is "optimizing language models for dialogue."

00:52:34:02 – 00:53:04:15
Joseph
And this is a chatbot with human-like interaction. You can ask it things and it'll answer them. This happens to actually be something that I had directed my developers at the office to dig into a little bit. One of the functions that I see with this is helping to automate and personalize communication for our sales force. The concept that we had come up with a few years back, when the technology wasn't there yet, was: our salespeople know their customers.

00:53:04:15 – 00:53:15:07
Joseph
So the salespeople can go in and drop in notes: you know, Tom likes the Phillies, his wife's name is this, his kids' names are this. They can put in

00:53:15:23 – 00:53:45:16
Joseph
this metadata about the person. And eventually the idea was to have this technology take that metadata and write a monthly email to each one of your customers that's personalized, that might have specific information as far as offers that we might have, but that looks like it's coming from you. And we wanted to write these, hand them off to the salesperson, let the salesperson read it, okay it, and then send it out. And we're literally within months of being able to have the system start writing emails like that.
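
The workflow described here, CRM notes in, drafted email out, salesperson reviews before sending, reduces to a prompt-assembly step. A sketch with hypothetical field names; the model call itself is left abstract:

```python
# Turn structured CRM notes into a prompt for a language model.
# The draft always goes back to the salesperson for an OK before sending.

def build_prompt(customer):
    notes = "; ".join(f"{k}: {v}" for k, v in customer["notes"].items())
    return (
        f"Write a short, friendly monthly check-in email to {customer['name']} "
        f"from their sales rep {customer['rep']}. "
        f"Personal details to weave in naturally: {notes}. "
        f"Mention this offer: {customer['offer']}."
    )

tom = {
    "name": "Tom",
    "rep": "Joseph",
    "offer": "10% off spring orders",   # invented example offer
    "notes": {"team": "Phillies", "kids": "two in little league"},
}

draft_prompt = build_prompt(tom)
print(draft_prompt)
# draft = some_llm(draft_prompt)  # model call omitted; salesperson edits and OKs
```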

00:53:46:27 – 00:53:59:12
Joseph
In your opinion, is that a viable, good way of using AI? Or are we stealing some of the human element there, muddying the waters?

00:54:00:07 – 00:54:18:00
Sam
I mean, it depends on how much you care about these being handwritten, you know, these personalized emails. If you're okay sacrificing that, it's fine, and it still has the human element, right? The salespeople still have to know their customers. They still have to know exactly the things that make it personal, and they still okay it before it's sent off.

00:54:18:00 – 00:54:21:14
Sam
I think that’s fine. Yeah. I don’t really have any issues with that.

00:54:21:14 – 00:54:37:29
Joseph
But if I have a salesperson who doesn't compose emails very well, is it ethical for them to use this system to compose their emails for them? Is it the same as having an assistant do it for you, or is it different because it's an AI?

00:54:38:01 – 00:54:40:08
Sam
That's a good question. It's kind of like cheating on homework.

00:54:40:09 – 00:54:40:24
Joseph
Right.

00:54:41:08 – 00:54:54:29
Sam
But I mean, cheating on homework in the real world isn't as big of a deal as it is in high school. Ultimately, at the end of the day, you've got to make these sales, right? So if you can't write emails, first off, I don't know how far you would have gotten as a salesperson.

00:54:55:04 – 00:55:14:25
Sam
But second, yeah, I think in this instance it's like another tool to use. If emails aren't your thing, but in one-on-one interactions you can make the sale, go for it; you can use this as a tool. If you are relying on this to do your job for you, that's a little bit different, right?

00:55:14:25 – 00:55:27:17
Sam
If you're relying on AI to make sales pitches for you and to reach out to customers, and you're kind of just sitting back and maybe putting in a few inputs, then you're relying on it too much. And it's becoming an issue, because you're not doing the work.

00:55:28:07 – 00:55:32:06
Joseph
So it’s just a matter of pacing, how much you use the tool.

00:55:32:06 – 00:55:33:09
Sam
Yeah, and implementation.

00:55:33:26 – 00:55:45:04
Joseph
So the next one I wanted to talk about was Canva. Canva is an AI-based image-generating software, and there's a number of them out there now.

00:55:45:16 – 00:56:12:16
Joseph
And you basically give it text input and say: give me a picture, done up like Van Gogh, of a horse and a rainbow. And it'll go out there and source thousands of different images, and it'll come back and generate what's considered an original image for you, so you don't have to make it yourself.

00:56:13:17 – 00:56:29:19
Joseph
But that image itself is a compilation of all these other works. Is that something that is going too far? Are there any moral implications to that that we have to worry about?

00:56:29:25 – 00:56:50:12
Sam
Yeah, things like this are actually the whole reason I wanted to do this episode. I know we covered the doom and gloom of AI taking over the world, but this is where it really strikes a chord with me. It's essentially making art sausage, but you didn't make any of the components of the sausage to begin with; you kind of just put it in a blender and called it your own.

00:56:50:22 – 00:57:18:09
Sam
It's really devaluing art and the human element of it. And when people feel that they can get this app and make art with no effort, that kind of defeats the purpose of it in general. There's also Lensa, which is similar, and I had an article linked here where there were cases where you could see artists' signatures on the final product Lensa would create, because it was just stealing artwork from other people.

00:57:18:16 – 00:57:42:01
Sam
And that's how a lot of these AIs work: you put in what you want, and it scours the Internet and takes from other artists' pages, usually without permission. It just kind of steals it, incorporates it back into itself, and spits it out for you. But you feel like you've made this, and you really haven't; you didn't do any of it. I mean, you don't even know how to do digital art or anything like that.

00:57:42:01 – 00:57:51:20
Sam
It's like putting it on easy mode. But the bigger thing is, it's stealing from the people that did make something, and I really think it's an issue.

00:57:52:07 – 00:58:08:03
Joseph
So let me ask you something, and we'll kind of go down the moral path here. Say I don't have any artistic ability whatsoever, but I need to come up with a picture. Let's say I need a picture of a tree, and I don't know how to make a picture of a tree.

00:58:08:12 – 00:58:20:06
Joseph
So I'm going to go out and look at other people's samples, and I'm going to find one that I really, really like, and then I kind of try to recreate that as my image of a tree.

00:58:21:05 – 00:58:30:08
Joseph
Is that the same moral ambiguity as what Canva is doing? Because Canva had the same problem; they wound up getting sued by Getty Images.

00:58:30:14 – 00:58:31:00
Sam
Oh, really?

00:58:31:00 – 00:58:33:18
Joseph
Yeah, that’s pretty good. They weren’t even pulling the watermarks.

00:58:33:18 – 00:58:34:09
Sam
Oh, man.

00:58:36:03 – 00:58:56:23
Joseph
Is me using someone else's inspiration to make my own different than these guys going out there and grabbing these? Because they don't always take a 1-to-1 image. They take portions of an image, almost like sampling a track in music, and then they assemble it together from multiple different sources. What's the difference in the two of those?

00:58:57:06 – 00:59:17:12
Sam
Well, in your example, I'm assuming you're drawing it yourself. Exactly. Yeah, that's the thing; that's where I think the line is with AI-generated art. You're just typing in what you want, and it just appears, compiled from other images. With inspiration, you are taking inspiration from that thing that somebody else made and then making your own interpretation.

00:59:17:12 – 00:59:33:20
Sam
Um, but even if it's not very good, you still put in the time and the work to make it something original, even if it's inspired by something else. Like you said with sampling music: you can take a sample from another song that somebody else made, but then you incorporate that section of it and you make a whole new thing out of it.

00:59:33:20 – 00:59:40:17
Sam
It's its own thing, even if it's borrowing from something else. And I think that's kind of where the line is drawn.

00:59:41:18 – 00:59:42:11
Joseph
That’s a good point.

00:59:42:27 – 01:00:04:26
Joseph
The other thing with Lensa: one of the big functions Lensa adds is what they call their magic correction. So you take a picture of someone, you run it through Lensa's filtering, like the filters with Snapchat and stuff like that, that can make the googly eyes and all that crazy stuff.

01:00:05:29 – 01:00:23:19
Joseph
Is the AI that's doing that different? Or do you take exception to that from a creative standpoint, when it's taking your art and enhancing it because maybe you don't have the technical capabilities to do that?

01:00:24:07 – 01:00:43:15
Sam
I mean, that's a good question. It's similar to, you know, photoshopping an image or using a filter on Instagram. It's not authentically the thing, but, I don't know, as long as you still made it from the beginning, I think it's a little bit better than just straight-up stealing the art. But I still think it's messing with the authenticity of it.

01:00:43:15 – 01:01:00:01
Sam
Right. And it gets into our perception of what is made by humans and what isn't, and what does that mean for art if we can't tell the difference? I think that's sort of the road we're going down at this point. With the enhancement, it's not as bad, but it's still not great.

01:01:00:26 – 01:01:15:20
Joseph
Well, I'll counter that and say: if you can't tell the difference between whether it is man-made or machine-made, isn't that really the ultimate goal of AI at that point, to make it indistinguishable from human hands?

01:01:15:20 – 01:01:31:10
Sam
Well, that's the scary part, right? Because if you can't tell the difference, but it still stole a bunch of other people's art, then it is actively devaluing all the art that it just stole from. Even if you can't tell the difference, even if you don't know that it took from, you know, ten other artists to make this, it still happened, right?

01:01:31:10 – 01:01:38:08
Sam
And even if it's indistinguishable, the consequences of it are still real.

01:01:38:20 – 01:01:49:08
Joseph
So the dilemma that you're referring to here is more the sourcing of it, where it gets its material from, rather than the generation of that material.

01:01:49:09 – 01:02:03:00
Sam
Right. Because it's not magic. The AI is not, like, physically drawing it. It's not entirely original, from the mind of the AI; it's taking from other people to make it. And that's kind of where the issue is.

01:02:03:06 – 01:02:04:28
Joseph
So it's just creative copying.

01:02:04:28 – 01:02:05:07
Sam
Yeah.

01:02:05:13 – 01:02:20:12
Joseph
Yeah. So let me ask you this: if you were an artist and you had a vast portfolio of your own artwork, and you used that solely to power your AI, and you decided that you're going to recreate your art in the AI's style, is that legitimate?

01:02:20:28 – 01:02:34:07
Sam
Yeah, as long as it's only taking from your stuff, things that you made. I don't know, I'm not going to get into the legal aspect of it; I'm not a lawyer. But if you can guarantee that it's only importing from your art and you are totally signing off on it, then yeah, that's fine.

01:02:34:23 – 01:02:40:12
Sam
It gets into an issue when it's taking other people's art without their consent or knowledge.

01:02:40:24 – 01:03:19:14
Joseph
So, all right. Being a Star Wars fan and an Eagles fan, let me throw a couple of non-AI examples out there, where both of those entities have taken what they've had, modified it, and regurgitated it back out to the public strictly for money purposes. There wasn't anything to enhance the art, there wasn't anything to make it better or different or really anything; it was literally, I just want more money, so I'm going to rerelease it. Do you have a moral issue with that, with people basically exploiting their own material for money?

01:03:19:14 – 01:03:25:22
Joseph
They're trying to, I want to say, con people into buying it again, because I'm one of those idiots.

01:03:25:25 – 01:03:45:15
Sam
I was going to say. Not really, no. I mean, morally, yeah, it's not ideal, right? You shouldn't be scamming people. But these things have value, and there's a reason they're getting the money that they are; they wouldn't be rereleasing it if they weren't making money. But it's not, like, fundamentally impacting how we view art.

01:03:45:15 – 01:04:02:00
Sam
Right. It's just someone making a quick buck because they're rereleasing their greatest hits for the thousandth time. What we're talking about here would be like if the Eagles released a record that was all samples and didn't credit anybody, but everybody loved it and acted like it was their original music.

01:04:02:07 – 01:04:12:19
Joseph
So are we really to the point where Canva and ChatGPT and Lensa are making us fundamentally reconsider our art at this point?

01:04:12:28 – 01:04:29:26
Sam
I think we're getting there. And I mean, Lensa specifically, at least in my sphere, really took a lot of people by storm: like, this is awesome, look how cool I look, I'm a cowboy, I'm a robot. But then it came out how it really works, and everyone was like, oh, so it's not magic.

01:04:30:04 – 01:04:47:14
Sam
It's not a computer coming up with this art itself. It's a computer interpreting other forms of art, stealing them and chopping them up and then mashing them into one new thing. And that was kind of where the issue came in. So I think we're getting there. We're getting there. Right.

01:04:47:25 – 01:05:12:08
Sam
And I have, like, a soapbox I wrote down here, but I'm looking at things like the Indiana Jones 5 trailer, where it's a wholly digital younger Harrison Ford, and you really can't tell the difference. We're now there, where in movies these CGI creations are getting close enough that you can't really tell the difference. You know, we're past the days of Rogue One, where it's zombie Tarkin and all that, which was terrible.

01:05:12:15 – 01:05:32:22
Sam
But we're getting to the point where the CGI creations, or the de-aging in movies, is becoming so realistic that it's legitimately hard to tell the difference. And between that and the AI art generation, it's like, where is there room for people at this point, for the human touch of it all? And that's what really, you know, interests me and kind of scares me.

01:05:33:18 – 01:05:47:09
Joseph
So for years now, before A.I. existed, before computer-generated graphics in movies and TV existed, people have been using makeup to make you look older or make you look younger.

01:05:48:14 – 01:05:49:24
Joseph
That never caused.

01:05:49:24 – 01:05:54:17
Joseph
Us to question the purpose or the place of art in the world.

01:05:55:22 – 01:05:57:14
Joseph
Why would de-aging someone.

01:05:57:14 – 01:06:01:21
Joseph
Digitally cause that kind of conflict in the art world?

01:06:01:21 – 01:06:20:09
Sam
Um, well, because makeup you have to apply, right? It's a physical thing you have to do. And it's less about the de-aging and more about making someone young again so you can make more movies with them. Like, Harrison Ford is, like, 80 years old. He can't be Indiana Jones like he was in the eighties.

01:06:20:21 – 01:06:38:28
Sam
But if we can de-age him and like make him look young again, even though he’s an old man in real life, we can keep making these movies and keep pumping them out. That’s where I think the issue is. When you’re doing old age makeup or trying to make someone look young with makeup, it’s a little bit different because you still know underneath that makeup is still the person, right?

01:06:38:28 – 01:06:41:27
Sam
Whereas with this it’s not. It’s a robot.

01:06:42:05 – 01:06:42:18
Joseph
So.

01:06:42:19 – 01:06:43:15
Joseph
Okay, I will.

01:06:43:15 – 01:06:51:05
Joseph
Counter that with Chris Pine.

01:06:51:13 – 01:06:51:27
Joseph
In.

01:06:52:19 – 01:06:58:00
Joseph
Star Trek? So they didn’t use makeup, they didn’t use digital de-aging.

01:06:58:07 – 01:06:59:03
Joseph
They just changed the.

01:06:59:03 – 01:07:01:15
Joseph
Actor and they relaunched the entire franchise.

01:07:02:12 – 01:07:03:28
Joseph
So it’s it happens.

01:07:03:28 – 01:07:05:01
Joseph
There’s just there’s different.

01:07:05:01 – 01:07:08:15
Joseph
Methods of this happening in doing what they did.

01:07:08:22 – 01:07:15:28
Joseph
Because they did the same thing with Luke Skywalker, you know, where they didn't even bring Mark Hamill in to play the role.

01:07:16:09 – 01:07:19:01
Joseph
Or even to do the voice because in the last.

01:07:19:28 – 01:07:23:19
Joseph
In Boba Fett, whatever, The Book of Boba Fett.

01:07:24:07 – 01:07:24:12
Sam
Or.

01:07:24:16 – 01:07:25:20
Joseph
65 or.

01:07:26:03 – 01:07:30:24
Joseph
You know, in that episode where he comes in, his voice is even.

01:07:30:24 – 01:07:34:15
Joseph
Digitally redone by A.I. voice manipulation.

01:07:34:19 – 01:07:38:26
Sam
And now with Darth Vader, James Earl Jones is no longer the voice; it's all signed over to.

01:07:38:28 – 01:07:40:09
Joseph
So he signed it over.

01:07:40:09 – 01:07:45:23
Joseph
They can generate James Earl Jones's voice through A.I.

01:07:45:23 – 01:07:46:22
Joseph
I’m struggling to.

01:07:46:22 – 01:07:50:28
Joseph
Understand where the issue is with that.

01:07:51:08 – 01:08:08:09
Sam
I just, I think, when it comes to, we'll use James Earl Jones as the example, when you get into a vocal performance, when you get into the human element of it, there are things that he might do, where he does a read for the fifth take and does something different and it's way better, right? And it was a fluke.

01:08:08:14 – 01:08:25:17
Sam
But hey, he read it this way, and that's the line we're going to use. With A.I., I don't think you're going to get that. There's that human element where you can get that random spark of creativity or performance that elevates a role beyond what it would be otherwise.

01:08:25:17 – 01:08:29:19
Sam
I think it's really that loss of the human element that is the dangerous part.

01:08:30:10 – 01:08:32:03
Joseph
And I think the ultimate goal.

01:08:32:03 – 01:08:39:02
Joseph
Of A.I. is to not have the human element injected into it, but have its own unique element injected into it. So I.

01:08:39:02 – 01:08:40:02
Joseph
Think we still have.

01:08:40:22 – 01:08:41:21
Joseph
The potential for.

01:08:41:21 – 01:08:44:29
Joseph
That. It didn't just have.

01:08:45:12 – 01:08:52:16
Joseph
James Earl Jones record every word in the English dictionary, and it just assembles them together like a really bad A.I.

01:08:52:16 – 01:08:53:22
Sam
They should do that. That'd be pretty funny.

01:08:55:02 – 01:08:56:04
Joseph
It basically.

01:08:56:04 – 01:08:59:25
Joseph
Recorded his inflections and his ability to say.

01:08:59:25 – 01:09:00:22
Joseph
Certain syllables.

01:09:01:03 – 01:09:04:09
Joseph
You know, it was much more sophisticated than just recording words.

01:09:04:26 – 01:09:09:25
Joseph
And then the A.I. assembles those sounds, but it doesn’t.

01:09:09:25 – 01:09:11:10
Joseph
Do it the same way every time.

01:09:11:11 – 01:09:25:04
Sam
But, like, isn't that weird? Like, James Earl Jones is still alive, right? And unfortunately, he's going to die one day, but they're still going to use that after he's dead to make Star Wars movies with Darth Vader. Isn't that weird? It's like putting the ghost in the machine.

01:09:25:18 – 01:09:27:18
Joseph
You know what they did? They embalmed.

01:09:27:18 – 01:09:31:21
Joseph
Lenin's body and stuck it on display in the public square.

01:09:31:24 – 01:10:00:07
Sam
Yeah, but that's Weekend at Bernie's, him like a puppet in a movie after he was dead. Like, I don't know. It's just something about, like, real-life people being turned into AI-generated things, whether it be Harrison Ford in Indiana Jones 5, or James Earl Jones now with Darth Vader. When you're taking real people that are in real life and harnessing them using AI for creative things, it just doesn't sit right with me.

01:10:00:07 – 01:10:02:14
Sam
It just seems strange. I think. I don’t know.

01:10:02:18 – 01:10:03:27
Joseph
Well, you know, I want.

01:10:04:15 – 01:10:06:00
Joseph
To play devil’s advocate here.

01:10:06:01 – 01:10:10:06
Joseph
It wasn't just James Earl Jones who was the.

01:10:10:06 – 01:10:18:21
Joseph
Voice of Darth Vader. There was so much audio work that went into that voice that it wasn’t just him talking into a microphone.

01:10:19:01 – 01:10:20:28
Joseph
So there was already more.

01:10:20:28 – 01:10:22:20
Joseph
To it than just James Earl Jones.

01:10:23:02 – 01:10:24:24
Joseph
If they can get that out.

01:10:24:24 – 01:10:32:13
Joseph
Of a computer and James Earl Jones doesn’t have to show up and he already got his paycheck.

01:10:32:13 – 01:10:41:28
Sam
I get that. But, like, sure, there are other elements, but his voice is still the core of it, right? So if he died in a world where we don't have A.I., we wouldn't have it.

01:10:41:29 – 01:10:43:08
Joseph
So I will counter that.

01:10:43:08 – 01:10:45:03
Joseph
Argument with every.

01:10:45:03 – 01:10:45:20
Joseph
Other.

01:10:46:07 – 01:10:53:26
Joseph
Element, every other example of Star Wars that Darth Vader shows up in, and it's voiced by someone besides James Earl Jones.

01:10:54:19 – 01:10:55:25
Joseph
It’s not believable.

01:10:56:15 – 01:11:11:02
Sam
I don't know, man. We've got to figure something out then, I guess. At a certain point, the real world has to have some kind of impact. Like, was Carrie Fisher alive when they did that Rogue One thing? I think she was at the time, right? Like, that's so weird.

01:11:11:02 – 01:11:22:22
Sam
It's so weird that these are real people that are aging, and we're just plucking, like, the best version of them and putting them in CGI. And now with the voices, it's like they're eternal. Oh, it freaks me out.

01:11:23:00 – 01:11:26:11
Joseph
Well, all right, so I’ll give you a different, different angle on that.

01:11:26:21 – 01:11:28:14
Joseph
So you have as an.

01:11:28:14 – 01:11:37:04
Joseph
Actor, especially a female actor, you have a finite lifespan that the industry will tolerate as you're aging, right? So if.

01:11:37:04 – 01:11:38:28
Joseph
You get to a point, let’s say.

01:11:39:00 – 01:11:46:18
Joseph
You're 50, okay? The number of roles that you run into when you're 50 is severely limited.

01:11:47:09 – 01:11:48:18
Joseph
If someone could extend.

01:11:48:18 – 01:12:02:08
Joseph
Their acting career and their opportunity to make money in the business by allowing digital manipulation of their features, don't you think that's at least a worthwhile endeavor for.

01:12:02:17 – 01:12:15:11
Sam
You, I guess. But like, do we just want to completely give in to, like, ageism and not try to fix it? Like if we just want to submit to, like, the industry wide like issue, I don’t know. It just seems like we’re compromising just to not have to, like, tackle the real problem head on.

01:12:15:16 – 01:12:16:20
Joseph
Well, and I agree that it’s.

01:12:16:20 – 01:12:23:27
Joseph
A problem that needs to be tackled, but in the meantime, you’re robbing people of opportunities if you take that tool away.

01:12:24:17 – 01:12:28:23
Sam
Yeah, but you're robbing them of opportunities by, like, being ageist too.

01:12:28:25 – 01:12:34:14
Joseph
So it's like, I agree. But the problem is, a role may require somebody of a certain.

01:12:34:14 – 01:12:35:26
Joseph
Age to be convincing.

01:12:35:29 – 01:12:46:23
Sam
Wouldn't you be taking away opportunities from potentially another actress that is right for the part, if we're going to instead use our robot Carrie Fisher instead of recasting someone that might be good at it?

01:12:46:23 – 01:12:50:25
Joseph
It's possible. But if you were doing a Star Wars film that was after Rogue One

01:12:51:14 – 01:12:52:16
Joseph
And you needed Princess.

01:12:52:16 – 01:13:04:20
Joseph
Leia in there. Now, granted, it's a little different because she's passed away. But if you needed someone age-appropriate who was that actor, would it be easier and more convincing to de-age that actor so they could do that role?

01:13:05:05 – 01:13:05:20
Sam
No.

01:13:06:26 – 01:13:11:09
Joseph
Why not? Because then you'd wind up getting, like, somebody who looks nothing like Han.

01:13:11:09 – 01:13:13:12
Joseph
Solo showing up in a Han Solo movie.

01:13:13:12 – 01:13:32:18
Sam
Okay, that's true. That movie had other problems, and this is not a Solo podcast. But for me personally, I think it should be a recasting. I don't think it should be using AI, especially when it comes to Star Wars and properties like it. People cling to that past so hard.

01:13:32:27 – 01:13:46:03
Sam
"Let the past die. Kill it, if you have to," like Kylo Ren says. We're getting to, like, the metatextual level of this, but I think we can't just keep digging up these people and putting them into computers. And like Indiana

01:13:46:03 – 01:13:47:22
Joseph
Jones, you can. That’s the point.

01:13:47:26 – 01:13:51:03
Sam
That's the truth. That's the beauty of it. We never have to move on.

01:13:51:03 – 01:13:52:18
Joseph
Nobody has to die.

01:13:52:18 – 01:14:02:02
Sam
No one's ever really gone. I don't know, man. I guess we'll see when Indiana Jones 5 comes out and they just make a whole movie with CGI Harrison Ford and call it a day.

01:14:03:08 – 01:14:03:24
Joseph
Okay.

01:14:04:23 – 01:14:05:10
Joseph
So, I mean.

01:14:05:11 – 01:14:08:12
Sam
Support your local artists, all right? Before the robots come for us all.

01:14:08:16 – 01:14:10:06
Joseph
So that’s really what you’re getting at here.

01:14:10:06 – 01:14:14:15
Joseph
The risk that you're seeing here is that it

01:14:14:15 – 01:14:19:13
Joseph
Could severely, negatively impact actual artists. With what?

01:14:19:28 – 01:14:35:10
Sam
Yeah, it's like the old auto factories, right? Once they figured out how to automate all that stuff, all those people got fired. If we can automate writing a script for a movie, the people that are in the movie, their voices, the effects, if we can automate all that stuff, we don't even need people anymore. And that is so.

01:14:35:11 – 01:14:56:21
Joseph
That's a twisted idea of what history is. When they automated factories, people, instead of doing menial, repetitive tasks, were trained to do better tasks. What killed auto factories were unions that kept trying to get more and more money for people. Okay, so let's just put history aside for a second there and not distort it.

01:14:56:21 – 01:15:03:15
Sam
It would still be the same outcome, though. If we found a way to automate all this stuff where people couldn't tell the difference, we wouldn't need people to do it anymore.

01:15:03:15 – 01:15:05:05
Joseph
And I think but instead of being a makeup.

01:15:05:05 – 01:15:06:08
Joseph
Artist, you become a digital.

01:15:06:08 – 01:15:08:11
Joseph
Effects artist. But say they don't want to do that.

01:15:08:11 – 01:15:11:10
Sam
What if they like makeup? They shouldn’t be forced to change.

01:15:11:10 – 01:15:13:13
Joseph
So then you work in stage theater, okay.

01:15:13:14 – 01:15:14:25
Sam
They go work at Walmart.

01:15:15:05 – 01:15:18:05
Joseph
They’re not digitally enhancing anybody on the stage in a play. Okay.

01:15:18:09 – 01:15:19:10
Sam
Yeah, we have holograms.

01:15:20:06 – 01:15:20:29
Joseph
You’re killing.

01:15:20:29 – 01:15:38:29
Sam
Me. I'm just saying, it's something to think about. It's really important that we keep the human element in these things, because that's where we get sparks of creativity that you couldn't get from a robot. There's things that they don't understand. There's ways to perform, ways to say things, ways to create that A.I. will never know.

01:15:39:02 – 01:15:42:17
Joseph
Until we get to the software. And then you got to worry about the other things.

01:15:42:19 – 01:15:47:25
Sam
Yeah. Then they're coming here, and they're going to get me first.

01:15:48:16 – 01:15:52:16
Joseph
You've picked over most of your soapbox here. Was there anything else you wanted to sum up?

01:15:52:18 – 01:16:09:27
Sam
No, that's pretty much it. You know, watch out. Just be wary. Anything AI, I think you should be wary of. Make sure you know where this stuff is coming from. Don't just look at it like it's science magic. It's not. All these things have to come from somewhere, from real, tangible things.

01:16:10:01 – 01:16:12:00
Joseph
I agree. And to be honest with you.

01:16:12:00 – 01:16:30:22
Joseph
The point of "support your local artist," I absolutely agree with. That's why one of the things that I really enjoy when we go to these comic book shows and pop culture shows is interacting with some of these artists: looking at what they're doing, watching them work live, and talking to them about what inspires them.

01:16:31:22 – 01:16:35:11
Joseph
I find that to be one of the most rewarding things of going to these shows and.

01:16:35:11 – 01:16:36:11
Joseph
Actually seeing.

01:16:37:06 – 01:16:40:13
Joseph
The brilliance that's out there in a lot of the young artists today.

01:16:40:19 – 01:16:45:11
Sam
Absolutely. I don’t really have any artistic talent, so I have a lot of like admiration for people and a lot of.

01:16:45:11 – 01:16:46:20
Joseph
Pent up frustration.

01:16:46:20 – 01:17:05:10
Sam
Why couldn't it be me? I want to use a robot now. But people can, like, just pick up a pen and make something brilliant. It's incredible. And I just think we need to make sure that sticks around. Also, when you're going to these shows, make sure whoever you're buying from isn't using Lensa to just make this art and sell it, because that is for sure going to happen, because people will try to swindle you.

01:17:05:10 – 01:17:06:15
Sam
So just think about it.

01:17:06:20 – 01:17:12:29
Joseph
All right, I think that was all we had today. Unless there was anything else you were still working on? Did you get it all off your chest?

01:17:13:00 – 01:17:18:00
Sam
Hey, man, I think we fixed it, okay? As with every show, we've solved all the problems.

01:17:18:05 – 01:17:19:18
Joseph
You know what we would do?

01:17:19:26 – 01:17:21:19
Sam
That’s true. We led with it, and then we worked our way back.

01:17:21:19 – 01:17:23:23
Joseph
There you go. A little reverse psychology.

01:17:25:06 – 01:17:26:23
Joseph
All right. So that was it for.

01:17:27:01 – 01:17:53:03
Joseph
The show today. Before we do go, I do want to once again ask that you subscribe to the podcast. Audio versions of this podcast can be found listed as Insights Into Tomorrow. Audio and video versions of all the network's podcasts can be found listed as Insights Into Things. We are available on Pandora, Castro, Stitcher, Amazon, anywhere you get a podcast.

01:17:54:06 – 01:18:01:19
Joseph
I would also ask you to give us your feedback. You can email us at comments@insightsintothings.com. We do

01:18:01:19 – 01:18:03:14
Joseph
Stream on Twitch five days a.

01:18:03:14 – 01:18:22:23
Joseph
week at twitch.tv/insightsintothings. If you're an Amazon Prime subscriber, you do get a free monthly Twitch Prime subscription; we'd appreciate it if you threw that our way. You can find us on Instagram at @insightsintothings, or you can find links to all that and more on our website at

01:18:22:23 – 01:18:27:01
Joseph
www.insightsintothings.com. That's it, in the books.

01:18:27:02 – 01:19:01:01
Sam
Stay safe out there, everybody. Bye.
