Are You Thinking For Yourself?

13 Apr 23 Rachel Pearson


Is there actually a problem with thinking for ourselves?

With artificial intelligence tools like ChatGPT, Google Bard or Jasper, people always say you should be thinking for yourself. Outsourcing our brain waves to these tools could be damaging, right? Where is the line?

But, let’s be honest, has AI really changed how we use our brains to access information, generate ideas and build knowledge? Thinking for yourself often means using Google, Bing and other search engines, so how much thinking for ourselves have we actually done over the last 30 years? What does thinking for ourselves even mean? And key to this topic: how does our own bias play into it?

Watch ex-human rights lawyer, digital ethicist and all-round brilliant human mind, Uthman Ali, explain the problem with ‘thinking for yourself’.

Video Transcript

“So often we say with things like ChatGPT, you should fact-check, you should think for yourself. But let’s be honest, no one really thinks for themselves these days; we all use Google or other search engines, and there’s already bias within these. So for example, if you type ‘beautiful’ into Google and go on Images, you get pictures of beautiful white women.

So the question is: why are they always white? Why are they women? Why are they even people? Why is it not beautiful hills, trees, mountains? Because this is our bias, our mirror looking back at us.

And for a while the front page of search engines was just outright misogyny: these women need to be put in their place, be controlled, be disciplined. And there’s a great book called ‘Algorithms of Oppression’ that talks about how much bias there is.

But it’s so pervasive that you don’t even notice it, unlike other forms of bias, which are very human. Search engines are just technologies all around you, and we don’t really acknowledge them as much as we should.

And this has some unintended consequences. With search engines and online content, we often end up with feedback loops: we feed algorithms our biases, and they reflect them back to us because they know what we want to click on.

We click the link again, and this reinforces the bias. And during Covid, I think we can all agree, things online got a little bit weird, right? You got loads of conspiracy theories. And you found the algorithms actively promoted conspiracy theories because they were good content: they got trending, loads of people liked to click on them, they’re eye-catching.

Then we had the issue of echo chambers, and I myself am guilty of this, I think we all are: we often like to be in online communities or spaces where people kind of agree with us. You naturally gravitate to the politically, ideologically, religiously similar, as we’re all comfortable being around people that sort of reflect our views.

But the issue with echo chambers is that the most extreme, radical ideas can be absolutely normalised.

So for example, if you look at fringe white nationalists stuck in their own online chat servers, where everything in that space was made for them, it just seems like the norm. And because we live so much in online worlds now, the lines between what is virtual and what is real are blurred.

The lines are so blurred. And if you’re living in an online community where these harmful ideas are completely normal, it just feels like this is how the rest of society thinks.

And it’s going to get even harder to figure out what is true and what isn’t because of things like deepfakes. So none of these people are real; these are all examples of deepfakes, and you can make them instantly with certain websites. So in the future, fact-checking and information hygiene are going to be so, so critical for us, to actually tell what is real anymore.”