Thursday, March 20, 2025

Putting the Science back into SF: AI

I'm leading a full-day workshop on SF on Saturday for WritersSA (State Library, Adelaide, SA) that includes five sessions, each with both "lecture" and "practical" components.

Science, Magic & Speculative Fiction

https://writerssa.org.au/event-registration-civicrm/?id=358

SESSION 1: AIs AS TOOLS AND CHARACTERS (45 MINS)

Discussion on effective research methods, using AI tools ethically, and avoiding common clichés and AI tropes in science fiction writing.

SESSION 2: ENGINEERING YOUR WAY OUT OF PROBLEMS (45 MINS)

Exploration of theme and plot development, world and character building, and solving problems using scientific principles.

SESSION 3: WRITING AN AUTHENTIC SCIENTIST (45 MINS)

Overview of character motivations, goals, and conflicts specific to scientists.

SESSION 4: WORLDBUILDING AND CHARACTER ARCS (45 MINS)

Discussion on how to develop compelling worlds and integrate them into character development.

SESSION 5: SCIENCE AS MAGIC + MAGIC AS SCIENCE (45 MINS)

Discussion on the balance between mystique and plausibility in speculative fiction; Arthur C Clarke’s idea that any sufficiently advanced science/technology is indistinguishable from magic; Marti Ward’s Appearance of Magic™ universe.

LLMs as AIs

One of the trickiest aspects of this (Session 1) is how to deal with AIs - and how today's so-called AIs deal with the various manifestations of AI in science fiction literature and film.

In particular, current LLM-based AI chatbots have vague purposes, but Copilot has recently been incorporated into Office365 (both into the products and as a price increment in your subscription).

Basically, in that context it is designed to help you get started on your document or restructure it. Originally, however, Copilot had a reputation for being able to write code for you and tabulate data. It still can, although as risk management constrains it, what it does is increasingly restricted and poor.

One of the issues, as an author, is that it can plagiarize work. The better-known and more cited/quoted the work, the more copies it has seen and the more likely this is - and of course the more likely a user explicitly tries to elicit quotes from the work. Again, with increasing risk management, this direct copying and copyright violation is largely being eliminated in the big-name products.

The problem with Large Language Models that are too large is that they go beyond being language models to being general repositories of human knowledge and culture. That includes the stories that have become part of our culture, and it also includes all the social commentary and misinformation available on the internet today. Again, this is a risk that needs management but is not adequately dealt with.

I try to direct it (prompt engineering) to provide only information that it can find in refereed sources, and to provide those citations in a formal way. But it still points to lots of websites.
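A prompt along these lines can be assembled programmatically. This is a minimal sketch of the idea; the wording and the `build_research_prompt` helper are my own illustration, not a tested recipe, and real chatbots may still ignore the instructions and cite ordinary websites:

```python
# Sketch: wrapping a question in instructions that restrict an LLM chatbot
# to refereed sources with formal citations. Illustrative wording only.

def build_research_prompt(question: str) -> str:
    """Wrap a question in instructions demanding refereed sources."""
    system_rules = (
        "Answer using only information you can attribute to refereed "
        "(peer-reviewed) sources such as journal articles or conference "
        "papers. Provide a formal citation (authors, year, title, venue) "
        "for every claim. If no refereed source supports a claim, say so "
        "rather than citing a website."
    )
    return f"{system_rules}\n\nQuestion: {question}"

prompt = build_research_prompt("What is grounding in AI?")
print(prompt.splitlines()[-1])  # → Question: What is grounding in AI?
```

The same prefix can then be reused across queries, so every question to the chatbot carries the citation rules with it.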

Several of my own books are known to these chatbots, which can give the plot and discuss characters sensibly. But I've not found any tendency to regurgitate the texts.

LLMs vs AIs in fiction

The flip side of this new development in AI is how primitive the AIs in our classic movies and shows now appear. Interestingly, HAL holds up well, illustrating some of the risks that our current crop of LLM-based AIs exhibit. Although my Casindra Lost AI, AL, is quite deprecating in his discussion of HAL - regretting the similarity of names and the bad rap/rep that gives him.

One area that HAL and other AIs in fiction get right, and current LLM AIs still get wrong, is 'grounding' - that is, being the brain of a robot or vehicle that allows it to fully understand the world in terms of the ways it can interact with it, and to learn when new parts of the world expose themselves to its sensors.

Really, LLM AI chatbots are not the pinnacle of AI; our autonomous vehicles are. The GAN-enabled AIs dealing with sound and/or images have an edge too, although the understanding isn't there in the same sense that it is for a self-driving car or plane. On the other hand, those systems are limited to what is needed for their driving tasks - although control of sound systems, integration with smartphones and entertainment systems, climate control, etc., is gradually expanding on this. And LLM AI is starting to join the party.

Did you know?

The prove-you’re-human Turing tests on websites are actually used to train AIs, and the AIs can already solve around half of them: as the AIs get better, the tests get longer and harder.

This is feeding into the above systems. This grounded type of system is where AIs will reach maturity, understanding, and sentience - in the sense of sensing, understanding, and reacting to the world.

Lost Missions

My Paradisi Lost Missions series aims to put realistic AI research into illustrating what we can expect of AI in the stories. While LLMs have hit the mainstream since the series appeared, it is based on my 50 years of experience pioneering Learned Language Models - with what constitutes 'Large' growing exponentially, by a factor of ten or more each decade, now boosted by the resources of some of the largest companies in the world.

Casindra Lost's AL is introduced as an already fully functioning AI that develops a personality over the course of the book.

The Moraturi arc is a kind of sequel, but focuses on another mission where the main quantum computer and AI are out of action, and other lesser AIs must take on more responsibilities than they were originally designed for.

Quantum Talents/PsyQ

I've already written/blogged appendices to my Quantum Talent series' Time for PsyQ (included in the education edition), so I won't say more about them here:

Amazon book pages

(tiny.cc/AmazonCL – Kindle e-book; paperback: ISBN: 9781696380911)

Other outlets

https://books2read.com/MartiWard