It's mildly disturbing that, as I think about what I'm planning to write, I feel a strong need to censor what I say because it may damage my employment prospects. But I'll write it anyway, because I think it's important.
I was recently at a work forum with about half a dozen colleagues whom I know and work with occasionally. In other words, we were in a trusted space, not strangers to one another. We are all at a similar 'level' within the organisation, too, so a certain frankness was possible. I also work at a university, where debate and critical thinking are the basis of what we do – at least that's the idea I hold onto.
The topic of using generative artificial intelligence (gen-AI) for work came up – specifically, for a task that requires detailed analysis and succinct writing skills. Honestly, the gen-AI solution proposed and demonstrated was quite primitive, and my feeling was that using it in this way was pure laziness. I acknowledge that most gen-AI has been conflated with the notion of the chat bot, and that there are very specific applications of these tools that can further research and knowledge; however, using chat bots to spit out text and avoid thinking is pure folly. I agree with Aaron French, who recently asked whether gen-AI chat bots are making us stupid. His conclusion was that there's a divide in how these tools are used. Some people use them 'as a substitute for creativity and thinking. Others use them to enhance their existing cognitive capabilities.' French likens the former use case to being 'stuck at the peak of Mount Stupid', whereas if such tools are used to leverage, enhance and augment our own intelligence, at least we retain some agency and cognitive control over what we're doing.

While I hadn't read French's article at the time, when I piped up with a similar position, the reaction was surprising. My comment was that I would much rather continue to sharpen my own skills and cognition, and that using a chat bot for the purpose demonstrated would weaken those skills.
You know those moments when you acutely realise you've said the wrong thing in public? The point may have been well-intentioned, and the logic reasonable, but it was not well received. I was immediately shot down by one colleague (in a borderline offensive fashion), and the others in the room were clearly at least mildly surprised and made uncomfortable by my stance.
I realised at that point that the use of these gen-AI tools has reached the status of doxa – at least in some areas of my organisation. Doxa, as used by Bourdieu, is the unquestioned truth of a social group. In other words, it is taken as given that one would use these tools and not offer any critical position or stance. And I know the arguments that prop this up: Pandora's box is open now, there's no going back, and so on. I completely reject this groupthink line of thought. As I've written previously, the particular mobilisation of these tools within the aspirations of multinational IT corporations, locked as they are within the forces of late-stage capitalism, represents distorted and highly problematic motivations. There may be useful applications of these tools, but they can be utilised in ways that don't align with big tech's preferred paradigm, and we don't need to dig our own graves by outsourcing our ability to think and be creative in the workplace.
Sure, you might say this article straddles a number of positions in a seemingly contradictory way. Should we reject the use of gen-AI tools wholesale on ethical grounds, or adopt them in a critical way that enhances our intelligence? I still feel uncomfortable with the ecological impacts and the jettisoning of jobs that is unfolding, but perhaps the answer lies in avoiding the mainstream, corporate-driven models, and in using these tools in a deliberate, critical fashion that augments our own intelligence and creativity.
It should be mentioned that, despite the reaction of my colleagues, there were many in the room for whom I have a lot of professional respect. I acknowledge that the use and adoption of generative artificial intelligence is complex, and that people are all thinking of their own self-preservation.