Tilt Talks

Are You Thinking For Yourself?

Is there actually a problem with thinking for ourselves?

With artificial intelligence tools like ChatGPT, Google Bard or Jasper, people always say you should be thinking for yourself. Outsourcing our brain waves to these tools could be damaging, right? Where is the line?

But let’s be honest: has AI really changed how we use our brains to access information, generate ideas and build knowledge? Thinking for yourself often means using Google, Bing and other search engines, so how much thinking for ourselves have we actually done over the last 30 years? What does thinking for ourselves even mean? And key to this topic: how does our own bias play into it?

Watch ex-human rights lawyer, digital ethicist and all-round brilliant human mind, Uthman Ali, explain the problem with ‘thinking for yourself’.

Video Transcript

“So often we say with things like ChatGPT, you should fact check, you should think for yourself. But let’s be honest, no one really thinks for themselves these days. We all use Google or search engines, and there’s already bias within these. So for example, if you type ‘beautiful’ into Google and go on images, you get pictures of beautiful white women.

So the question is: why are they always white? Why are they women? Why are they even people? Why is it not beautiful hills, trees, mountains? Because this is our bias, our mirror looking back at us.

And for a while, the front page of search engines was just outright misogyny: these women need to be put in their place, be controlled, be disciplined. There’s a great book called ‘Algorithms of Oppression’ that talks about how much bias there is.

But it’s so pervasive that you don’t even notice it. Unlike other forms of bias, which are very human, search engines are just technologies all around you, and we just don’t acknowledge the bias in them as much as we should.

And this has some unintended consequences. With search engines and online content, we often end up with feedback loops: we feed algorithms our biases, and they reflect them back to us, because they know what we want to click on.

We click the link again, and this reinforces the bias. And during Covid, I think we can all agree, things online got a little bit weird, right? You got loads of conspiracy theories. And you found the algorithms outright promoted conspiracy theories, because it was good content: they got trending, loads of people liked to click on them, they’re eye-catching.

Then we had the issue of echo chambers. I myself am guilty of this, and I think we all are: you like to be in online communities or spaces where people kind of agree with you. You naturally gravitate politically, ideologically, religiously, as we’re all comfortable being around people that sort of reflect our views.

But the issue with echo chambers is that the most extreme, radical ideas can be absolutely normalised.

So for example, if you look at fringe white nationalists stuck in their own online chat servers, where everyone in that space agrees with them, it just seems like the norm. And because we live so much in online worlds now, the lines between what is virtual and what is real are blurred.

The lines are so blurred. And if you’re living in an online community where these harmful ideas are completely normal, it just feels like this is how the rest of society thinks.

And it’s going to get even harder to figure out what is true and what isn’t because of things like deepfakes. So none of these people are real; these are all examples of deepfakes, and you can make them instantly with certain websites. So in the future, fact checking and information hygiene are going to be so, so critical for us, to actually tell what is real anymore.”

Can I Kick My Robot Dog?

As AI rewrites the rules of what it means to be human, how do we avoid wandering into an ethical catastrophe of our own making? Ex-human rights lawyer and digital ethicist Uthman Ali took us on a brain-expanding journey through digital ethics — from AI bias to homicidal toasters.

Featuring Uthman Ali — digital ethicist, neuro-ethicist & ex-human rights lawyer

Are You Addicted To Your Phone?

Uthman Ali talks about our addiction to our phones, our dependence on technology and what that means for the future.

Are You Thinking For Yourself?

Ex-human rights lawyer, digital ethicist and all-round brilliant human mind, Uthman Ali, explains the problem with 'thinking for yourself'.

Predicting Bad Apples – AI & Bias

Bad apples in the data set: how can AI be used for good and help us predict bad things that could harm humanity?

Is This Our Black Mirror? – AI & Legislation

Do we need to legislate artificial intelligence?

Are We Being Ripped Off? AI & Copyright

Are AI tools just recycling our work and spitting it back out at us? Watch this video to hear what Uthman Ali says about AI and copyright.

When Do We Stop Being Human?

Where is the line between humans and robots? Watch Uthman Ali, neuro-ethicist, confront this important topic at the Tilt Talk 'Will You Stroke Your Robot Dog'.

Is A.I Sentient?

Digital ethicist, ex-human rights lawyer and A.I expert, Uthman Ali, spoke to Jon Maylon and our Tilt Talks audience about A.I sentience. Watch the video!

Are We Degenerating?

When life becomes too easy because of AI tools, will we start to lose our creative and problem-solving skills? Watch this video from Tilt Talks.

Do Ethics Hinder Innovation?

Can I Kick My Robot Dog?