The arrival of generative AI tools into the mainstream sent a real shudder through me. That's the best way I can describe it. As someone who has always been excited by technology and found some hope in the developments of my own lifetime, this was something very different. Perhaps it came down to timing, and the fact that the mainstream IT multinationals – overlaid on top of capitalism – have demonstrated, and continue to demonstrate, the worst aspects of how technology can be used. Knowledge that was developed by grassroots communities all over the world and shared freely has been appropriated by those with a one-eyed view of humanity, who cynically place monetary profit above all other concerns. This erosion has been rapid, and it intersects with (what I believe to be a minority of) rogue actors who have organised themselves around causing significant harm through the theft of data for nefarious ends. Added to this are governments who don't seem to understand the social and cultural significance and reach of technology, and who are hopelessly inept at providing regulation and protections for communities.
I could write a lot about regulation, but it's the disruption that generative AI has promised that has been the real concern. At first, when tools such as ChatGPT, Midjourney and DALL-E came onto the scene, my impression was, "okay, this is it, machines will now produce writing and art at such speed that they will simply render us irrelevant." Their presence shook to the core my sense of what it means to be a person. If what we are – our language – can be reproduced mechanically, what does that mean for us? I felt a kind of violation: anything I wrote could now be misconstrued as having been produced by an AI model. It made me feel vulnerable and exposed, and so I refused to use this technology at all. For the first time, I found an appreciation for what the Luddites must have felt when they saw their craft threatened by machines.
At the same time, it seemed inevitable that this would be a race to the bottom. We all use GPS tools every day now, and they are incredibly useful; but look what they've done to our spatial awareness! I, for one, used to be able to pick up a map, examine it, and retain enough information to get me close to a destination. These days, that part of my brain has been decommissioned and I rely entirely on GPS to get me anywhere. Wouldn't the adoption of LLMs for content generation do the same thing to our use of language? The thought was terrifying.
Then it emerged how resource-intensive these tools are. At a time when we are clearly witnessing the destructive impacts of climate change, it seems absurd that so much is thrown at training and running LLMs, with no pause for reflection on how things could be done more sustainably. The tech corporations seem to be in an arms race with each other to develop models that will ultimately dissolve the livelihoods of vast layers of society. Locked within a capitalist framework, the only value these organisations recognise is the accrual of economic wealth. What other conclusion can we reach than that they ultimately seek to extract value from societies globally, through subscriptions that replace workers? We've seen this model of scorched-earth capitalism play out already, with organisations such as Amazon and Uber exemplifying the toxicity of late-stage capitalism.
And yes, while the statements above are unfortunately true, what I've come to realise is that I had conflated generative AI with the paradigm of big tech, specifically OpenAI and Microsoft's vision of it. It was hard not to get caught up in the fervour of debate and speculation when virtually everywhere I looked was saturated with news of "AI". I immersed myself in discussion groups and other forums to try to contribute to regulating this development. Since then, however, it's also become apparent that what these tools generate out of the box is not that good. It might pass for the corporate documents that circulate around businesses, but fundamentally it's regurgitated, unoriginal content. I say "out of the box" because I've come to the view that generative AI can certainly be useful as a tool, but in specific, narrow applications that require careful thought, design and implementation. I've also discovered that anyone can tap into vast communities of folks who are developing tools in the open source way (along with some private companies that back these efforts). These tools can typically run on general hardware, and as such don't pose as high a risk to privacy and ecology as the big tech models. These tools and communities – Open WebUI, Hugging Face, Ollama, and others – have helped me better understand how generative AI models actually work.
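To give a sense of how accessible this is, here's a minimal sketch of chatting with a model running entirely on your own machine, using the community-built ollama Python client. It rests on a few assumptions: that the Ollama service is installed and running, that the ollama package is installed (pip install ollama), and that a model has been pulled locally; the model name llama3.2 is purely illustrative.

```python
# A minimal sketch of chatting with a locally hosted model via the
# community ollama Python client. Assumes the Ollama service is
# running and a model has already been pulled, e.g.:
#   ollama pull llama3.2
import ollama

response = ollama.chat(
    model="llama3.2",  # illustrative model name; use whatever you've pulled
    messages=[
        {
            "role": "user",
            "content": "In two sentences, who were the Luddites?",
        },
    ],
)

# The reply is generated locally; nothing is sent to a third-party service.
print(response["message"]["content"])
```

Nothing in that exchange leaves your computer, which is precisely what makes these tools feel so different from the subscription-driven big tech offerings.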
I could be very wrong here, but where I've landed today is that big tech would like us to believe that tools like ChatGPT and Copilot will solve all business problems and wipe out jobs. But from what I've seen so far – at least in my own industry – these tools can at best help with some tasks; for any fundamental transformation, they are either not up to the job, or significant groundwork has to be done in areas like knowledge and information management before generative AI delivers real benefit. That work is tedious, difficult and neglected, because it is time-intensive and offers organisations no immediately tangible value. What I'm hopeful for is that generative AI will become one more thread in the complex tapestry of organisations, creating its own ecosystem of problems, challenges and rewards, without upturning people's livelihoods. I know this is probably a kind of naive optimism, but it helps me get through the day.
But there are many other considerations to reflect on, such as ethics, intellectual property theft, misinformation and alignment. I hope to write more about these soon.