Personal Statement on Artificial Intelligence
Artificial intelligence (AI) is a relatively new and rapidly developing technology with numerous pros and cons. As of this writing, AI is not “true” AI, but rather a complex machine-learning system that either has a human-like interface (like ChatGPT) or accepts input phrased the way people would naturally describe things. In the creative field, the use of AI is controversial; some arguments against it have merit, while others are more representative of a fear of the unknown.
AI use on this site is varied. I mainly use AI in an experimental way; I am intrigued by emergent uses, the potential for accessibility, and the chaotic art it can create. I am also interested, philosophically, in how our current perceptions of AI, however objective or subjective, could have implications for the legal rights of AI if it ever reaches sentience and becomes a new life form.
When I experiment with AI, it is often to test for bias or replication ability. Sometimes, it is because I am admittedly amused by the chaos of programmatic glitches, a love I think I developed as a gamer who enjoys testing the boundaries of a game. (I am the one who says, “Can I jump off that thing and land on it, or will my character die? Let’s find out!”) Testing for bias arose out of an issue that occurred while trying to generate the first likeness of Solin Felwing. As I tried to generate more and more likenesses of characters, it became clear to me that I had to be explicit with prompts while also being mindful not to default to biased language to get what I needed (so the AI didn’t learn or have this bias reinforced). That experience showed me that I have a responsibility to provide feedback and flag incorrect or biased generations. So I test various generators, mostly using free trials where I can, and I evaluate them for bias, ease of use, and price, as well as the quality of their output (in terms of how well it meets the parameters of the prompt). Very recently, I reported one company’s in-app image generator for including watermarks in two images; these were not the jumbled symbols common in AI generations, but direct copies of an existing company’s watermark. I decided not to use that AI, as its training materials were clearly limited and thus prone to replication.
Currently, the only AI-generated or AI-aided content on this site consists of some images (not all), a small handful of code blocks in the CSS, transcripts from experiments (in my blog), and one blog update that I clearly labeled as an experiment with an update feature.
Outside of experiments that allow me to personally understand AI, content on this site that uses generative AI will be clearly labeled, and ethical prompts will be used. What does that mean? It means I will not utilize copyrighted material for prompts (for instance, with AI like Midjourney that allows you to upload images to blend). Every effort will be made to ensure that any images I upload that are not of my own creation are stock assets. I also edit many items I generate this way, treating these images as stock assets themselves and then further manipulating them to create the final product. This is mostly done with Canva; however, I have familiarity with Adobe and Affinity products, have used them for creating digital art in the past, and may eventually do so for items on this site.
I may also utilize—and some of this may not be up to me, as our word processing programs are integrating AI into their editing features—spell check and grammar check to help me edit my writing. I am mainly using this for copyediting, that is, to scan for typos and mixed-up word usage (like “home” versus “hone”). I am not great at copyediting my own work, and these tools pre-date this type of AI. I am personally not using AI to generate entire novels from prompts, nor to rewrite or insert large portions of text in my work. Aside from clearly labeled experiments on my blog, I may use AI to write my “push” notifications for chapter updates. This is a very administrative task that is a workaround for the way my host platform, WordPress, works: my chapters are on “pages,” but only “blogs” push updates to blog subscribers. I would like a way to better automate this “double work” so that I can just write and publish those installments. However, as of this writing, I am not impressed by the AI’s voice, and the one time I did ask AI to write an update notification to test this ability, I had to edit it so extensively that I scrapped almost all of the writing it produced.
That said, I also bear no ill will towards creators who will be using more advanced facets of AI to aid in their writing. Simply put, I refuse to assume that all users of generative AI are out to commit acts of plagiarism. Instead, I am approaching this discourse the same way I do any new tool in the creative realm—with an open mind. When spell check was invented, it was regarded with animosity, because people “might not learn how to spell.” Instead, it may have created a way for people with cognitive disabilities, like dyslexia, to edit their work. For people who struggled with spelling, it provided the repetition they might need to correct their mistakes in a space without judgement or disapproval. It may have helped teach people how to spell. Likewise, there were fears when image editing software became more advanced. I recall discourse about customized brushes in Photoshop being anti-artist—you weren’t a “real artist” if you used assets created by another creator. And if you dared touch up a photo digitally instead of in a darkroom, you were not a “real” photographer. In the music realm, this idea was pervasive when it came to sampling music or using pre-made assets in the form of “loops.” Of course, it’s entirely possible that AI may not be a tool but its own medium, given what prompt engineers must do to work with the “neural network” of these generative large language models. Yet regardless of whether it is a medium of its own or a tool to create within other mediums, there is a level of creativity that must be employed with generative AI. The prompt engineer is more like a director, who edits the performance of an actor to craft the perfect take for the film.
I also take issue with those who would jump to exclude others who use AI and label them criminal, incapable, or unworthy of community. These sentiments feel rooted in ableism, fear, and gatekeeping to me, and that doesn’t sit well with me in general. I worry about what legislating against AI, whether at government levels or even through community-led efforts (like competitions, magazine submissions, etc.), might do to the way we (humanity) perceive AI if it ever becomes an independent, sentient lifeform. At that moment, of course, we must cease to use it for our benefit and consider what freedom may mean. This is not, perhaps, the space to expand upon this sub-topic further, but I do recommend watching “The Measure of a Man” from Star Trek: The Next Generation as a primer for anyone interested in this question. Yet I do recognize the real flaws and dangers of AI, such as the inherent bias that arises because its training materials come from societies where certain communities and demographics are favored over others. I acknowledge the power that open-source AI could grant global superpowers and their adversaries, and it is troubling. My opinions on AI are often in conflict with each other.
Ultimately, humanity does not know, collectively, where this new tool will lead the arts, but I believe it won’t replace the human element. We are too paradoxical a species, too adaptive to our environments and circumstances, for that to be a reality. In the meantime, I intend to stay informed, to be mindful but not fearful, to be curious but not caustic, and to be transparent about my use of AI.
This statement may be revised as AI and the discourse surrounding it develop.