For over thirty years, I have helped to bring technological innovation to creative teams, yet I have never felt as conflicted as I do now regarding the rapid deployment of Artificial Intelligence (A.I.) tools.

Social media is awash with tutorials on how to make a children’s book in 7 days, a comic book in a day, or a novel in 10 days. With the current image and text generators, these are not false claims. On the surface, this looks to be exactly the kind of area I would normally get involved in: showing how powerful new technologies can make the creative process easier and more streamlined. So why am I so hesitant? I think there are four main concerns.

Speed of change.
Now that A.I. tools are publicly available, the speed of change is unlike anything I have seen before. Day by day, new innovations are released, changed and adapted. In some cases, hundreds of open-source developers are lending their weight to the work. People are fusing one algorithm with another, creating combinations undreamed of the previous week. A small example is the pairing of ChatGPT and Stable Diffusion, which reduces the need for carefully constructed prompts: in simple terms, plain English can now be used to generate and modify images. Work to make this interaction spoken rather than written is already under way, and the early results are simply mind-blowing. It will soon be possible to sit in front of a web browser, ask for an image to be generated and then, by drawing around areas with a mouse or stylus, ask for changes to be made, all in near real time.
In this constantly changing environment, where do you start? By the time you have learned a method for your workflow, it has been superseded by something faster and simpler.

Legal standing.
A.I. has already provoked several legal actions, the most notable recent examples being the class action over Stable Diffusion in the USA and the Getty Images case in the UK. Both cases question the right of companies to use media scraped from the internet to train models that can then generate new content. We have already seen examples where purely A.I.-generated images have been denied copyright status. Where these legal actions will take us is hard to predict. On the positive side, they should bring the clarity we all need before fully committing to integrating A.I. tools into our working environments.
Where people’s jobs are put at risk and human skills are made redundant, many more legal actions are inevitable.

Moral position.
The rapid deployment of high-quality A.I. image generators has called into question much more than the legal status of such tools. It has forced the creative communities to ask, once again, fundamental questions: what is art? And is it morally acceptable to train A.I. systems on images for which permission has been neither sought nor given? Contemporary artists and illustrators now find their style available to anybody with a browser. Even where their specific works are not replicated, and so no copyright is infringed, the generated images can easily be mistaken for the work of established creatives, potentially threatening their livelihoods. All this at the press of a button. The creative community is divided, with vocal advocates both for and against these new ways of generating media.
At a time when moral lines are being drawn, what are the risks to businesses hoping to capitalise on these innovative new tools? Will legal precedent decide whether A.I. is deployed, or will moral obligations? Or is it more likely that the cost of production will be the final decider?

Audience reaction.
It is probably too early to gauge how audiences will react to these new media-generating tools. The results can so closely match purely human endeavours that the public is likely to struggle to tell them apart. We are already seeing ChatGPT pass university exams and image generators like Midjourney win art competitions. If the experts cannot tell the difference, will anybody else? Apart from a few high-profile news stories, the general populace appears oblivious to the turmoil the creative communities now face. The uptake of these A.I. tools is likely to cause an explosion of new content; whether that is a good or bad thing is yet to be determined. History shows that a new medium usually creates a flood of low-quality content before superior work emerges. The real question is, will it be humans creating it or machines?

Having spent five months looking at the various A.I. tools, trying them out and establishing how they fit into traditional creative workflows, I have little doubt they will be part of all our creative futures. With established news outlets already stating that they use A.I. tools alongside writers and photographers, it feels like the die is cast.

Sean Briggs helps creative teams improve their workflows. He also makes a living as a traditional and digital artist.
