EN 48: Brief thoughts on AI

Hey! This week’s newsletter is severely delayed, as you can see. While I’ve made some improvements, I’m still struggling to settle into a stable routine, and my sleep hasn’t been great at times, which has made writing the newsletter a challenge.

Still, when I finally sit down to write, it’s enjoyable and time flies, so I’ve been trying to write even when I’m not in the best mood or the right mindset. Writing sometimes feels like an exercise in synthesising, clarifying, learning and discovering what’s on my mind. Often, an idea seems clear in my head, but once it’s on paper, things change, expand or go sideways.

Anyway, it’s been difficult to write, and while I’m satisfied with having committed to it these past two or three weeks, I need to take a week or two off. Here’s the plan: next week there won’t be a newsletter, and I’ll be back on Friday the 23rd.

Brief thoughts on AI

AI is taking the world by storm. It’s perceived as the next big thing, and I’ve already heard the advice that we should embrace it and learn to use these tools, or we’ll be left behind.

I’ve dabbled with ChatGPT a bit while coding: asking it clarifying questions, having it generate rough outlines of code I can use as a starting point, or asking it to explain what a piece of code is doing. See Thorsten Ball’s post “How I Use AI” for an example of what that process looks like. AI could potentially enhance our productivity, mainly through its ability to synthesise knowledge and make it accessible via a conversational interface. Before, you had to search through many links and read countless documents; now you can ask a question and get an answer (whether the answer is actually good or a hallucination is another topic), skipping the laborious search. Having said that, there’s still a lot to understand about the claims of enhanced productivity and quality, as Paul Raschky writes:

In parallel, a number of studies from computer science have shown that ChatGPT output for more complex tasks, such as code development, can often be faulty hallucinations. Kabir et al. (2024) found that 52% of ChatGPT answers to coding questions were incorrect and that the users presented with these answers overlooked these errors 39% of the time.

If tasks are more complex and the output requires accuracy to be ultimately useful (e.g. software development), relying on generative AI might prolong task completion and decrease workers’ output quality. In particular, if you are not a domain expert.

Paul Raschky, Professor at Monash University

I don’t see the arrival of these artificial intelligence tools as the beginning of the end of humanity. Just check out ChatGPT struggling to write Honda in ASCII and you tell me whether Skynet is here or not. What I do see are the negative parts, which have made me quite sceptical of it all and a bit hesitant to use the tools. The fabulous episode of the Tech Won’t Save Us podcast, AI Hype Distracted Us From Real Problems w/ Timnit Gebru, which I highly recommend listening to, covers a lot of this. Some ideas from the episode and a few of my own:

  • Labour is at the core of the issue. As Timnit Gebru puts it: “if they were not able to exploit labour, their market calculations would say that this is not going to work. And so they won’t be so quick to go to market with this stuff.”

  • Techno bros and grifters who were all on board with crypto are now all on board with AI, which, judging by what happened with the crypto hype, is not a good sign.

  • Suddenly the narrative shifts to AI being the next big thing, or to AI destroying the world, when we actually have far more pressing challenges right now (climate change, for example).

  • I’m worried about the carbon cost of training large machine learning models, which requires enormous amounts of resources and infrastructure.

  • What would society look like when people can’t differentiate between deepfakes and reality? Fake images and videos can impact politics and damage people’s lives (deepfake pornography comes to mind). What about fake or biased information?

On top of this, I’m starting to see companies selling their users’ data to AI companies for training, with inclusion as the default: if you don’t want your data to be sold, you have to actively opt out.

An example of exploiting labour can be seen in the use of AI to create images and videos. These companies train their models on stolen art. No artist gets credit, money, or even the option to opt out; the art is simply taken, and there’s nothing they can do about it. The same can be said of articles, books, blogs, etc. Anything publicly available could be used for training, with no way to opt out or to see which resources were used. Furthermore, what you share with the AI could be used for training as well. Everything the AI has consumed, it presents as if it were its own. In fact, OpenAI openly admits that it would be impossible to create tools like ChatGPT without copyrighted material. Essentially, they’re saying that developing these tools is so important for humanity that they should be allowed to steal and exploit labour. I don’t know; maybe if what your company is doing requires laws to be removed so you can freely steal and exploit, it’s not that good of a business model? Imagine a criminal organisation saying it would be impossible for them to make money if they weren’t allowed to commit crimes.

Off-topic

This week, I’ve discovered Indigo Girls, an incredible American duo. I can’t believe I didn’t know about them until now. Thanks to Rick Beato. The lyrics, the guitars, the harmonies… Check them out:
