Thursday, September 28, 2023

Quantum Physics and Psychology

[Airlie's essay on]
Quantum Physics and Psychology

— Online (bonus) appendix (with help from Marti Ward) to her story in




Quantum Science

Once upon a time, we believed that Physics was simple: what we could see at an everyday level was the same as what happened at the level of atoms and electrons which was the same as what happened at the level of stars and planets.

Then Einstein came along, and General Relativity. And Heisenberg came along, and Quantum Mechanics. But we still don’t quite know how to make these theories play nice with each other.

General Relativity limits the velocities of objects and information to the speed of light, but also allows the possibility of Einstein-Rosen bridges (postulated by them in 1935), that is to say wormholes from one part of the universe to another, and between different times. These come about as possible solutions to the field equations.

Quantum Mechanics limits how accurately you can measure location and velocity at the same time, with Einstein, Podolsky and Rosen proposing (also in 1935) an experiment where a pair of particles is entangled, so that when the quantum state of one is determined, the quantum state of the other is also determined — irrespective of how far apart they are.

These problems lead not just to time-travel paradoxes but also expose apparent internal inconsistencies within Quantum Mechanics, which can be resolved by adding the assumption that the wormholes take you between different universes: parallel universes, different dimensions.

So these theories provide plenty of ammunition for Science Fiction writers. And periodically we discover that things we thought were Science Fiction might actually be true.

The funny thing about Science is that it is actually all Fiction: that is, we don’t know that any of it is true; all we have is theories with varying degrees of certainty. And occasionally theories we thought were (almost certainly) true turn out to be (quite obviously) false.

In Science, we can never really prove a theory, but we can find contradictions. Indeed, Karl Popper’s (1934) definition of good science and good theory involves developing refutable theories — that is, making predictions into the unknown and then constructing experiments to test those predictions.

If the prediction is false, then it is back to the drawing board. If the prediction is true, all that that does is give us a bit more confidence and push us to find and test other predictions. Of course, in practice people don’t like their theories being disproved: Thomas Kuhn (1962) noted that in reality a whole lot of tension and invalidation has to build up, and usually the original proponents and defenders of the theory have to die off, and only then can there be a ‘paradigm shift’.

One quantum prediction that does seem to be borne out is quantum teleportation (which is really more like quantum telepathy). This is precisely what Einstein, Podolsky and Rosen were concerned about in 1935, because it is instantaneous, and nothing — not even information — is meant to go faster than the speed of light according to relativity. In a teleportation experiment, entangled particles are taken as far apart as possible, then probed. The act of observing a particle actually affects it according to quantum mechanics, and for entangled particles the statistical distribution for the twin predicted by quantum mechanics is different from what is predicted by classical physics (Bell, 1964) — and these differences have now been verified. Still, we are talking about recreating the state of the particle on its twin rather than teleporting the same physical entity — a bit like Star Trek's transporter, which recreates things/people in a different location. And the classical information still has to be sent by conventional means, limited by the speed of light.

Quantum Dimensions

The idea of different dimensions arises in a number of ways in Physics. For example, the hypothesis that there are ten dimensions arises from SuperString theory — to be precise, five different versions of SuperString theory. Rather than trying to identify which of these theories are right (impossibly hard) or wrong (slightly easier), another approach is to consider that they form another dimension.

This then leads to the idea that they could all be instances or limiting cases of a higher order theory, Edward Witten’s (1995) 11-dimensional M-theory (M for membrane, as in a 2D plane-like object that exists within 3 dimensions, generalizing to p-branes which sweep analogously through (p+1)-dimensional spacetime — jokingly, Witten also said it could stand for magic or mystery).

Theories of supergravity also suggest 11 dimensions as maximal, and in some sense optimal or parsimonious, in generalizations of Einstein’s theory of General Relativity, in which time (measured in seconds) acts like an imaginary dimension analogous to the three spatial dimensions (measured in light-seconds, the distance light travels in a second in a vacuum). The key equation is D² = X² + Y² + Z² − T², which itself generalizes the normal Euclidean idea of distance.
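The key equation can be checked numerically. Here is a minimal sketch (the function name interval_squared is my own) that computes D² with distances in light-seconds and time in seconds; for anything travelling at the speed of light, the interval comes out as exactly zero.

```python
def interval_squared(x, y, z, t):
    """Squared spacetime interval D^2 = X^2 + Y^2 + Z^2 - T^2,
    with x, y, z in light-seconds and t in seconds."""
    return x*x + y*y + z*z - t*t

# A light ray travelling 3 light-seconds in 3 seconds: interval is zero.
print(interval_squared(3.0, 0.0, 0.0, 3.0))  # 0.0

# A purely spatial separation reduces to ordinary Euclidean distance squared.
print(interval_squared(1.0, 2.0, 2.0, 0.0))  # 9.0
```

Note how the minus sign on T² is what makes time behave like an "imaginary" dimension: without it, this would just be the Pythagorean theorem in four dimensions.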

In essence, in these “Kaluza-Klein” theories, dimensions thus correspond to mathematical variables or physical constants that could potentially be different in another universe. But the supergravity idea suggests a way of testing for these orthogonal dimensions by looking for the missing gravitational force of higher order Euclidean generalizations. 

For this missing component to be unobserved, it would seem the dimensions must be very small, compactified dimensions. For example, we can conceive of circles or mini-spheres (with a 1D circumference or a 2D surface, respectively) at each point in 4D spacetime, giving us 5D or 6D. The Heisenberg uncertainty principle also means it is difficult for us to measure accurately in all dimensions simultaneously, so that small differences can be swamped by errors.

Whereas Kaluza (1919/1921) sought to extend Einstein’s theory of Gravity to include Maxwell’s model of electromagnetism (in which electrical and magnetic fields are orthogonal), Klein (1926) related it to the new Quantum Theory, generalizing Heisenberg’s particle-wave work and Schrödinger’s equation, and interpreting its solutions as particle-like waves moving gravitational and electromagnetic fields through 4D spacetime.

Be warned: many theories of multiple dimensions are pure mathematics, with the dimensions corresponding to variables, while others are pure speculation. But once we see orthogonal dimensions we can see the possibility of other sets of dimensions similar to ours, except probably much, much smaller. Finally, there is the quantum many-worlds idea that every choice point spawns a new universe.

A good starting point for understanding Quantum Dimensions is the Scientific American article by Freedman and van Nieuwenhuizen (1985), followed by the two-part sequence of Plus Maths articles by David Berman (2012).

For a more imaginative look at different kinds of possible Parallel Universe, see Max Tegmark’s (2003) Scientific American article and his Crazy universe website at https://space.mit.edu/home/tegmark/crazy.html along with the rather speculative descriptions in Matt Williams’ (2014) Universe Today and Phys.org article of what the 10 dimensions could represent.

Quantum Computing

Recently, the credibility of Quantum Mechanics has received a boost from quantum computers being able to compute some things faster than a conventional computer. These rely on the idea of superposition: quantum bits (qubits) can be in an indeterminate state that is not yet true or false, and normal logical (and hence arithmetical and algorithmic) operations can be performed on these superposed states, adding constraints on what the actual solution state can be until, if enough constraints are added to make it unique, we have the answer — computed in something closer to a linear number of steps (n) rather than an exponential one, because we don’t have to separately explore the two possible states of each bit/qubit, i.e. Ω(2^n) steps to explore the full tree of possibilities.
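To see the classical side of this contrast, here is a toy brute-force search over all 2^n bit-strings (the function and predicate are my own illustrations, not any real quantum algorithm; real quantum speed-ups, such as Grover's quadratic one or Shor's factoring, are more nuanced than a simple linear-vs-exponential picture).

```python
from itertools import product

def brute_force(n, predicate):
    """Classically, finding the bit-string satisfying a predicate can
    require checking all 2**n candidates in the worst case."""
    tried = 0
    for bits in product([0, 1], repeat=n):
        tried += 1
        if predicate(bits):
            return bits, tried
    return None, tried

# A toy predicate with a single satisfying assignment among 2**4 = 16.
target = (1, 0, 1, 1)
bits, tried = brute_force(4, lambda b: b == target)
print(bits, tried)
```

Doubling n from 4 to 8 multiplies the worst-case search space from 16 to 256: that exponential blow-up is exactly what makes problems like breaking encryption hard for conventional computers.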

This gives rise to the idea of quantum probability, and it also changes the game in terms of the time taken for problems like factorization of large numbers, or related tasks like breaking encryption. Except our quantum computers are still relatively small compared with current encryption keys.

Quantum Psychology

In a Behavioral and Brain Sciences (BBS) treatment of quantum probability as a new direction for Cognitive Modeling, Pothos & Busemeyer (2013) argue that quantum probability provides a descriptive model of behavior and can also provide a rational analysis of a task.

The underlying question here is whether quantum effects actually play a role in the brain, in thinking. In trying to formulate this as a refutable theory, it comes down to whether classical or quantum probability provides a better model for the observed data, and can make more accurate predictions. In particular, can it provide a better model for human decisions that don’t fit with conventional ideas of probabilistic reasoning?

Quantum entanglement has been proposed as potentially allowing some form of telepathy, and there are some interesting observations about similarities between quantum processes and cognitive processes — again in the mathematical equations — that have led people to suggest that quantum processes play a role in everyday cognition, including in particular free will and decision making.

If quantum effects do play a role in cognition, as quantum psychologists suggest, then this opens the way for entangled particles to explain psychic phenomena (Roll & Williams, 2010).

Particle Accelerators

Exploring the predictions of Quantum Science is very expensive. 

Quantum Physicists tend to be looking at very small things, like subatomic particles (and maybe the Higgs boson), or very large things, like stars and black holes (and maybe an Einstein-Rosen bridge). Large or small, large energies are involved — and it takes a lot to accelerate even the tiniest particles to close to the speed of light, not to mention the electricity budget of a small country.

The particles we are trying to discover are also very small and very fast and very penetrating.

There are a few accelerators and colliders around. Most people have probably heard of the Large Hadron Collider on the Swiss-French border (which is internationally funded). Most people have probably not heard of the Superconducting Super Collider begun in Texas, south of Dallas (which was cancelled in 1993 when it got too expensive for the national budget).

Sometimes the aim is to create entangled particles of various kinds — the kind that leads to so-called quantum teleportation or perhaps quantum telepathy.

Further Reading

  • Behavioural and Brain Sciences, Target Article by EM Pothos and JR Busemeyer. Can quantum probability provide a new direction for cognitive modeling? Volume 36, Issue 3, June 2013, pp. 255–274. https://doi.org/10.1017/S0140525X12001525
  • New Scientist, Miriam Frankel, 1 June 2024, pp.32-36. Time loops. https://www.newscientist.com/issue/3493/
  • New Scientist, Thomas Lewton, 16/23 December 2023, pp47-49. The mystery of the quantum lentils. https://www.newscientist.com/issue/3469/
  • New Scientist, Michael Marshall, 16/23 December 2023, pp44-46. In their dreams. https://www.newscientist.com/issue/3469/
  • New Scientist, Editorial, 9 September 2023, pp32-39. The Amazing Theory of Almost Everything. https://www.newscientist.com/issue/3455/
  • New Scientist, Michael Brooks on 25 August 2021. Beyond quantum physics: The search for a more fundamental theory.  https://www.newscientist.com/article/mg25133493-300
  • Phys.org, Universe Today’s Matt Williams on December 11, 2014. A universe of 10 dimensions. https://phys.org/news/2014-12-universe-dimensions.html       AND     https://www.universetoday.com/48619/a-universe-of-10-dimensions/
  • Physics Central on 21 July 2013. Migration via quantum mechanics. https://www.physicscentral.com/explore/action/pia-entanglement.cfm now archived at
    https://web.archive.org/web/20201112034950/https://www.physicscentral.com/explore/action/pia-entanglement.cfm
  • Plus Maths, David Berman on 10 October, 2012,  Kaluza, Klein and their story of a fifth dimension. https://plus.maths.org/content/kaluza-klein-and-their-story-fifth-dimension
  • Plus Maths, David Berman on 9 October, 2012, 10 Dimensions of String Theory. https://plus.maths.org/content/10-dimensions-and-more-string-theory
  • Plus Maths, Chris Budd and Cathryn Mitchell on 7 September 2023, Maths in a minute: Inverse problems.  https://plus.maths.org/content/maths-minute-inverse-problems
  • Scientific American, Daniel Z. Freedman and Peter van Nieuwenhuizen, The Hidden Dimensions of Spacetime, Vol. 252, No. 3 (March 1985), pp. 74-83. https://www.jstor.org/stable/pdf/24967594
  • Scientific American’s George Musser on The Strangeness of Physics and Telepathy. https://bigthink.com/hard-science/george-musser-on-the-strangeness-of-physics-and-telepathy/
  • Scientific American, Space.MIT’s Max Tegmark, Parallel Universes and Welcome to my Crazy Universe http://space.mit.edu/home/tegmark/multiverse.html
  • Universe Today, Jean Tate on November 11, 2009. Parallel Universe.  https://www.universetoday.com/44769/parallel-universe/
  • Universe Today, Nancy Atkinson on September 16, 2009. What! No Parallel Universe? Cosmic Cold Spot Just Data Artifact, https://www.universetoday.com/40413/what-no-parallel-universe-cosmic-cold-spot-just-data-artifact/
  • Universe Today, Nancy Atkinson on October 15, 2009. If We Live in a Multiverse, How Many Are There?  https://www.universetoday.com/42696/if-we-live-in-a-multiverse-how-many-are-there/

References

  1. Bell, J. S. (1964). On the Einstein Podolsky Rosen Paradox. Physics Physique Физика 1 (3): 195–200. 
  2. Fuss IG, Navarro DJ (2013). Open Parallel Cooperative and Competitive Decision Processes: A Potential Provenance for Quantum Probability Decision Models. Topics in Cognitive Science 5 (4), pp.818–843. https://doi.org/10.1111/tops.12045
  3. Freedman DZ, van Nieuwenhuizen P (1985). The Hidden Dimensions of Spacetime. Scientific American, 252 (3), pp.74-83. https://www.jstor.org/stable/pdf/24967594
  4. Kaku M (2006). Parallel Worlds: A Journey Through Creation, Higher Dimensions, and the Future of the Cosmos. Anchor.
  5. Kuhn TS (1962). The Structure of Scientific Revolutions. The University of Chicago Press.
  6. Kuhn TS (1970). Logic of Discovery or Psychology of Research? In Lakatos, Imre; Musgrave, Alan (eds.). Criticism and the Growth of Knowledge. Cambridge University Press. pp. 1–24.
  7. Popper K (1934/1959). The Logic of Scientific Discovery (2nd ed.). Martino Publishing.
  8. Roll WG, Williams BJ (2010). Quantum theory, neurobiology, and parapsychology. In Krippner S & Friedman HL (Eds.), Mysterious minds: The neurobiology of psychics, mediums, and other extraordinary people. Praeger/ABC-CLIO. Pp.1–33.


Availability of Time for PsyQ

Time for PsyQ is available from your favourite bookseller.

 

Awards for Time for PsyQ

Time for PsyQ won the Silver medal for Teen and Young Adult Sci-Fi Action & Adventure in the 2023 Global Book Awards.


Reviews of Time for PsyQ

4.7/5 Amazon
5.0/5 Emerald
4.8/5 Goodreads 
4.0/5 OnlineBookClub
5.0/5 Reedsy


OBC

★ ★ ★ ★ 

https://forums.onlinebookclub.org/viewtopic.php?f=21&t=490460  Merrit Fletcher
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=384028  Surekhna Krishnakumar
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=376706  Gerald Stewart



Brain Science and Technology

 [Airlie's essay/glossary on]
Brain Science and Technology

— Online (bonus) appendix (with help from Marti Ward) to Airlie's story in


There’s a great journal called Behavioral and Brain Sciences (BBS), published by Cambridge University Press. It allows people to present their theories and evidence, and get discussion on it from other researchers, from many different disciplines and perspectives. There are also many journals relating to Behavioural Science, Brain Science, Cognitive Science, Cognitive Linguistics, Cognitive Neuroscience and Cognitive Psychology that are surprisingly readable even to the lay person. But the advantage of BBS is the diversity of opinions and the richness of the discussion which helps clarify things more than a single author could possibly do.

Of course, popular magazines like Scientific American and New Scientist are targeted more to budding scientists, and you don’t have to have a PhD to subscribe to those — they are written by science journalists who are good at explaining things to non-scientists, and might just happen to have the goal of encouraging people to become scientists.

I guess with a father who is a neuroscience professor and a mother who is a clinical psychologist, both of whom have shelves full of relevant journals, it is not surprising that I have developed an interest in this area. And I’ve never looked back since the day my father first pointed me in the direction of BBS.

Then there’s my mother, who likes Science Fiction and has a good library of Science Fiction books. She particularly likes the kind of Science Fiction that is based on real science, not just shooting round the galaxy breaking the laws of physics, and especially the kind that makes you think about thinking, about how the brain works, or could work, or could work better, or how telepathy could work.

Yes, ironic isn’t it!

Since my aim here is to help school students like me understand various things about Brain and Quantum Science, I’m going to try to keep a balance between citing the popular press (see Further Reading) and the research literature (see References). Because it is always important to take things back to the original source, as otherwise the Chinese whisper effect can distort things (R. A. Sanderson, personal communication; see also entries on Chinese Whispers in Wikipedia and Britannica, as well as the Merriam-Webster and Collins dictionaries).

Definitions

Brain Science: The science of how the brain works, with a particular emphasis on the physical and neural processes involved, although the American Psychological Association, has a definition that focuses on Cognitive Psychology (https://www.apa.org/education-career/guide/subfields/brain-science).
Cognitive Science: The collective science that help us understand how the human brain and mind work, with a much broader emphasis on interdisciplinary science including the Behavioural and Biological Sciences, Linguistics, Cognitive Psychology and Cognitive Neuroscience, Psycholinguistics and Philosophy of Science (https://plato.stanford.edu/entries/cognitive-science/).
Human Computer Interface (HCI): The application of Cognitive Science to the study of how to optimize human interfaces to computers and their peripherals as well as embedded applications (like smart phones, cars and microwaves). HCI also includes the physical and software embodiment of such an interface plus the understanding of human communication that comes from investigation into and implementation of such interfaces. (https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/human-computer-interaction-brief-intro).
Brain Computer Interface (BCI) or Brain Machine Interface (BMI): A subfield of Human Computer Interface focussed on using Cognitive Neuroscience technologies to allow control of a computer or other device in a hands-free way. These may be invasive (involving electrodes inside the head) or non-invasive (involving electrodes on the head). Controversially, the term BCI is often used for what would more accurately be called a Brain Muscle Interface, because the electrodes are placed in places where it is impossible to pick up brain signal (EEG) and the BCI device is really operated by inducing the much stronger muscle signals (EMG) — e.g. through jaw clenching, eye or eyebrow movement, etc. Eye-tracking and eye-gaze interfaces can also be hard to distinguish from true BCI interfaces. (https://www.sciencedirect.com/topics/neuroscience/brain-computer-interface)

The term EEG can thus be misleading, and it is helpful to clarify some related terms. Some of these, like the Stentrode™ technology developed at Melbourne University, and being commercialized by Synchron, represent radical advances that blur the traditional invasive vs non-invasive distinction, and their website is helpful in tracing the history of the BCI technologies (https://synchron.com/history):

Electroencephalogram/Electroencephalography (EEG):  EEG has its origins in experiments by Richard Caton in 1875. The initialism ‘EEG’ refers both to the signal and the method of obtaining a signal non-invasively by placing electrodes on the head. Many different kinds of electrodes can be used: low density EEG may involve a dozen electrodes or fewer, while high density EEG may involve a couple of hundred electrodes.
Electromyogram/Electromyography (EMG): EMG measures electrical signals originating in muscles, rather than the brain, e.g. from jaw muscles.
Electrocardiogram (ECG/EKG):  ECG measures electrical signals originating in the muscles of the heart, and is thus a specific form of EMG.
Electrooculogram (EOG):  EOG refers to the unwanted ‘artefact’ when EEG equipment measures electrical signals (EMG) originating in the muscles that control the eyes, with very strong signals being generated by eyeblinks or eye movements (saccades).
Electrocorticogram/Electrocorticography (ECog):  ECog has its origins just over a century after the first EEG experiments, with Philip Kennedy implanting the first ECog electrodes in 1988 (in monkeys). A more general term for ECog is intracranial electroencephalography (iEEG). ECog generally involves placing dense grids of electrodes directly on the surface of the cortex, and thus involves cutting into the scalp and skull to place them. This invasive technique is thus not appropriate for use on volunteer or student subjects, and is limited to subjects with neurological disorders or brain injury which may be addressed with such invasive surgical intervention.
Stereotactic Electroencephalography (sEEG):  sEEG is actually a form of iEEG that involves drilling holes into the head to place individual needle or wire-like electrodes in specific places, and is being pioneered by the University of Pittsburgh and the associated Children’s Hospital Medical Centre (with patent application WO2021174061A1  filed in 2021).
Stentrode™:  A revolutionary approach pioneered by Melbourne University and Synchron (with patent application WO2017070252A1 filed in 2017) is to insert an expanding stent through a blood vessel. This device avoids both drilling and wires, using wireless power technology similar to what is used to charge a phone, allow wireless entry, or make contactless credit card transactions.

Then there are all sorts of other kinds of brain imaging technologies, some also used to scan the whole body, or parts other than the brain. All of the things we’ve discussed so far have been variations on the use of electrodes: sometimes inside the head; sometimes on the surface of the head (or the hair); sometimes on the skin (of the face or elsewhere); often above some kind of muscle…

These other imaging techniques are more like X-rays – but in some cases, safer.

Computer Assisted Tomography (CT/CAT)  — basically 3D X-rays where the X-ray source is rotated or spiralled around the patient.
Electron Beam Tomography (EBT)  — basically the same as CAT, but a beam of negatively-charged electrons (like in an old Cathode Ray Tube television) can be manipulated electronically, avoiding the physical rotation and allowing for much faster scanning.
Positron Emission Tomography (PET)  — basically the opposite of EBT in two senses, positrons are the positively charged siblings of electrons, and rather than being sent into the body by the scanner, they are emitted by radioisotopes (radiotracers) injected into the patient.
functional Magnetic Resonance Imaging (fMRI)  — this is a scan, but is more like EEG in the sense that it is non-invasive and doesn’t require dosing the patient with radiation, or injecting radioactive substances. Blood haemoglobin contains iron, and different levels of oxygenation cause it to be more magnetic (deoxygenated → paramagnetic) or less magnetic (oxygenated → diamagnetic). Since active neurons need oxygenated blood, blood races to feed them when they are active, so that for a few seconds afterwards there is more oxygenated blood.
Magnetoencephalography (MEG)  — involves using SQUIDs (not the fishy kind, but superconducting quantum interference devices). Again it is relatively non-invasive, and because magnetic fields are orthogonal to electric fields it provides different information from EEG, as well as higher spatial resolution than EEG and much higher temporal resolution than fMRI. But MEG equipment is relatively rare (there are only around a hundred installations worldwide, as they are very expensive).
functional Near-Infrared Spectroscopy (fNIRS)  — like fMRI this is focussed on the changes in the flow and oxygenation of haemoglobin, but using near-infrared light rather than magnetic effects (the skin is almost transparent to IR light in the 700-900nm range). It has the advantage of being portable, so is more of a competitor for EEG in that sense, and can be placed using the same 10-20 electrode arrangement as EEG (which is based around angular intervals of 10° and 20° for low density EEG, and interspersed electrode/sensor locations for higher densities). It has similar temporal resolution to EEG but higher latency, and thus uncertainty (blood flows much more slowly than electrical currents), but may suffer less from artefacts due to vibration/movement, whereas EEG electrodes can lose contact with the skin.

A helpful introduction to Brain Computer Interface and Brain Imaging technology more generally can be found at https://learn.neurotechedu.com/introtobci/.

Hardware


In addition to the medical and research grade EEG/BCI equipment which costs hundreds of thousands of dollars, there is also consumer grade or hobbyist BCI equipment costing as little as a couple of hundred dollars. This is largely oriented to the gaming market, and in fact some of these BCI companies have actively discouraged use for serious purposes by use of proprietary software libraries, nasty countermeasures, and exorbitant ongoing license fees, for access to the captured signal. Examples are noted as we discuss significant market entrants.

Furthermore, there are other kinds of HCI that are worth including in the discussion, including in particular eye-tracking or gaze-tracker hardware. Eye-gaze tracking has been used since the 1800s to understand how we read (Javal 1907; Huey, 1899/1908).

Eye-tracking hardware is now also available as much cheaper consumer grade products, but again manufacturers restrict usage for non-authorized applications with limited licenses for the game devices, and expensive licenses for the required software libraries needed for even amateur research and development purposes. Again, such restrictions on usage of the consumer level device are noted in the discussion of the specific product concerned.

Gaming EEG/BCI hardware may also include motion sensors to provide an additional form of input for games.

Electrodes

A superficial difference between EEG electrode setups is whether they use a cap (feeding through holes), or a band or headset. A more significant difference distinguishing different kinds of EEG hardware is whether the electrodes are wet or dry. In particular, traditional medical and research electrodes have used a messy conductive gel or expensive (silver-containing) paste, while many modern systems, professional or hobbyist, tend to be dry (no gel or paste or saline needed).

Electrodes can also be flat, ring-like (with the gel squirted into the hole), or comb-like (with multiple feet to get through your hair). Traditionally, electrodes had to make actual contact with the scalp, or at least low-impedance contact with the scalp through a conductive gel, paste or liquid. Some experimental electrodes make use of capacitive effects and don’t make actual contact with the scalp (here electrons on one side of a gap repel those on the other side, which means alternating current can effectively get through, and the higher the frequency the lower the impedance, because electrons are pushed a shorter distance before they are pushed back in the opposite direction).
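The frequency dependence of a capacitive contact follows the standard formula |Z| = 1/(2πfC). A small sketch (the 100 pF coupling capacitance is a purely illustrative value, not a measurement from any particular electrode):

```python
import math

def capacitive_impedance(freq_hz, capacitance_f):
    """Magnitude of a capacitor's impedance: |Z| = 1 / (2*pi*f*C).
    Higher frequency -> lower impedance, as described in the text."""
    return 1.0 / (2.0 * math.pi * freq_hz * capacitance_f)

# Hypothetical 100 pF electrode-scalp coupling:
for f in (10.0, 100.0):  # roughly alpha-band vs a higher frequency
    print(f, capacitive_impedance(f, 100e-12))
```

Because impedance falls inversely with frequency, a tenfold increase in signal frequency gives a tenfold drop in impedance, which is why capacitive electrodes favour higher-frequency components of the signal.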

Products

We will now move on to discuss some of the hobbyist/gaming brands and some important models of EEG/BCI headsets.

First, though, it is important to be careful. Many products claiming to collect EEG don’t seem to actually be able to pick up EEG, but instead rely on EMG — that is, they cheat and work off muscle signal rather than actual brain signal. While it is easier to make contact directly on skin rather than through hair, electrodes that are located on the skin at the sides of the head, and in particular on the forehead, cannot pick up clean EEG, as the signal is swamped by the much stronger muscle activity.

If there are multiple electrodes on the top of the head as well, then they will pick up satisfactory EEG, and it is then also possible to use signal processing techniques to clean up the signal from the peripheral electrodes, because the central electrodes give us a cleaner, less muscle-dominated version of the mixed-source signal.
We do not discuss the NeuroSky or Mindflex systems, as they only have peripheral electrodes, and it is dubious how much brain signal they are really able to extract.

The consumer product that did make a breakthrough in getting real EEG cheaply was the Emotiv (https://www.emotiv.com/) EPOC (a spinout of technology from Melbourne University). This uses conductive saline solution (salt water) to moisten flat electrode pads for very good EEG signal until the pads dry out. Its fourteen electrodes are well distributed around the head. Unfortunately, this is one of the companies/products where access to raw data and their software libraries involves very expensive subscription-style licenses, although there are some third parties that provide alternatives.

Emotiv also brought out a cheaper 5-electrode model that uses a really clever hydrophilic polymer electrode that pulls sweat from the user and doesn’t dry out, maintaining good connections continuously.  Unfortunately only one electrode is in a central position, and the others are in positions that are strongly contaminated by muscle. To get a voltage with real EEG you need to be able to measure it across two electrodes, one of which is typically chosen as a reference (or sometimes the average of several is used as a reference). But this reference needs to be in a clean signal position, and at least some of the other electrodes need to be in clean signal positions to allow signal processing and noise cancelling algorithms to work.
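The role of a reference electrode can be made concrete with a little re-referencing sketch (the rereference helper and the random data are my own illustration; this is the generic subtraction idea, not any particular vendor's processing):

```python
import numpy as np

def rereference(eeg, ref_channels):
    """Subtract the mean of the chosen reference channel(s) from every
    channel.  eeg: array of shape (channels, samples)."""
    ref = eeg[ref_channels].mean(axis=0, keepdims=True)
    return eeg - ref

rng = np.random.default_rng(0)
eeg = rng.normal(size=(5, 1000))            # 5 hypothetical channels
clean = rereference(eeg, ref_channels=[0])  # channel 0 as the reference
print(clean[0].max())  # 0.0 -- the reference channel becomes all zeros
```

This makes the point in the text visible: every "voltage" is really a difference between two electrodes, so if the reference electrode is contaminated by muscle, that contamination is injected into every channel.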

The other very important recent development is OpenBCI (https://openbci.com/). OpenBCI’s open-source hardware and software is based around the Texas Instruments ADS1299 chip specifically designed for EEG (although the ADS1298 family is also usable for EEG and EMG and is arguably more flexible).

Their classic design uses a rather ugly 3D-printed headset with screw-in electrodes, but these days they also offer caps and headbands, and third party electrodes. The system is used by researchers as much as hobbyists, and saves developers of new EEG sensors the hassle of designing their own circuits and printed circuit boards around the TI ADS129* family of chips.

Their headline product is the 8-channel Cyton board, with or without an 8-channel piggyback board, which together provide sixteen channels transmitted wirelessly, sampled at 125Hz. This is a rather slow sampling rate, due to its use of BLE (Bluetooth Low Energy) for the wireless link, and means it is only suitable for use below around 50-60Hz (mains frequency will interfere within that range).
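Why does a 125 Hz sampling rate cap usable content around 50-60 Hz? The Nyquist criterion says you can only represent frequencies up to half the sampling rate, and in practice you also stay clear of mains interference. A tiny sketch (the nyquist helper is my own):

```python
def nyquist(sample_rate_hz):
    """Highest frequency representable at a given sampling rate:
    the Nyquist limit is half the sampling rate."""
    return sample_rate_hz / 2.0

print(nyquist(125.0))  # 62.5
# In practice mains interference at 50/60 Hz sits just below this limit,
# which is why usable signal content tops out around 50-60 Hz.
```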

A typical use of EEG hardware for BCI is implementing grid-like keyboards based on Steady State Visual Evoked Potentials (SSVEPs). These are typically operated in a range from 12 to 24Hz, or more broadly from 4 to 48Hz, but the broader range brings more risk of confusion with alpha waves (below 12Hz), contamination by EMG (above 24Hz), or mains frequencies (between 48 and 62Hz).
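Carving out a usable SSVEP band while rejecting alpha below and EMG/mains above is a simple bandpass-filtering job. Here is a hedged scipy sketch on synthetic data; the 12-24Hz passband is just the operating range mentioned above, not a recommendation, and the sampling rate is assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0  # assumed sampling rate in Hz (must exceed twice the top frequency)
t = np.arange(0, 4.0, 1.0 / fs)

# Synthetic electrode signal: a 15Hz SSVEP component buried under
# 10Hz alpha and 50Hz mains interference.
signal = (1.0 * np.sin(2 * np.pi * 15 * t)
          + 2.0 * np.sin(2 * np.pi * 10 * t)
          + 3.0 * np.sin(2 * np.pi * 50 * t))

# 4th-order Butterworth bandpass for the 12-24Hz SSVEP operating range.
b, a = butter(4, [12.0, 24.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, signal)  # zero-phase filtering

# The surviving power should be dominated by the 15Hz component.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, 1.0 / fs)
print(freqs[spectrum.argmax()])  # expect roughly 15Hz
```

`filtfilt` runs the filter forwards and backwards, so the SSVEP timing is not smeared by filter delay, which matters when responses are being timed against stimulus onsets.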

The rows and columns of the grid are flashed in sequence, and when the row and column VEP or SSVEP responses are detected we have the coordinates of the square you are looking at. However, this can also be done with an eye-tracker.

Another SSVEP eye-tracking approach is to have strong LEDs (light-emitting diodes) flashing at different frequencies at the corners, and the mix of those (relatively prime) frequencies allows determination of the fixation point (Cottrell et al., 2012).

Microsoft introduced support for eye-trackers in Windows 10, and teamed up with leading eye-tracker manufacturer, Tobii, to provide a consumer grade product (the model 4C). However, for this product there are also license restrictions and technical impediments for using this for research or your own HCI project (although a legacy Matlab toolbox exists that is sufficient to allow developing a BCI-like typing interface).

Wetware

I’ve mentioned things like alpha waves and SSVEPs, and to understand what is going on when we use the above non-invasive hardware that sits outside our heads, it is important to understand something of the wetware that sits inside our heads.

The brain and central nervous system (CNS) are made up of neurons, some of which are configured as receptors (for the senses) and some as effectors (for the muscles). Individual neurons branch and connect to other neurons chemically across gaps called synapses, which may be excitatory or inhibitory. The neurons themselves open and close channels that allow ions to pass across their cell membranes, or hinder them.

Neurons

Once the activation of a neuron builds to a sufficient threshold level, it discharges suddenly, producing an electrical spike and transmitting its chemical neurotransmitters across the synaptic gaps to other neurons. It is this electrical activity, summed over many neurons, that EEG detects.

The neurons that are involved in a given perceptual or motor ‘event’ or ‘task’ are all involved in stimulating each other, with different kinds of neurons operating in different ways and with different speeds. The back and forward transmissions produce complex patterns, including specific frequencies that relate to the distances between the neurons involved, as well as the number of neurons involved.

Spikes

EEG spikes are labelled with N for negative or P for positive, depending on their ‘direction’ or ‘polarity’, along with a number that gives the approximate time in milliseconds after a triggering sensory event or intention. One important one is the P300, which is often used in BCI.
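The standard way such event-related potentials are extracted is by averaging many epochs time-locked to the stimulus, so that uncorrelated noise cancels while the response survives. The numbers below are entirely synthetic, with a made-up P300-like bump standing in for the real response.

```python
import numpy as np

fs = 250.0
epoch_len = int(0.6 * fs)                # 600ms of data after each stimulus
t = np.arange(epoch_len) / fs * 1000.0   # time axis in milliseconds

# A made-up P300-like template: a positive bump peaking near 300ms.
template = np.exp(-((t - 300.0) ** 2) / (2 * 40.0 ** 2))

# 200 noisy single trials; the response is invisible in any one of them.
rng = np.random.default_rng(2)
trials = template + rng.normal(0.0, 3.0, size=(200, epoch_len))

erp = trials.mean(axis=0)                # time-locked averaging
peak_ms = t[erp.argmax()]
print(round(peak_ms))                    # expect roughly 300ms
```

Averaging N trials improves the signal-to-noise ratio by roughly the square root of N, which is why P300 spellers traditionally need several repetitions per selection.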

Spikes can also be measured backwards from a motor event, such as pressing a key: the intention to press a key tends to occur around 300ms before the motor event happens, and can be detected in the premotor cortex.

For sensory events, we have SEPs (Sensory Evoked Potentials), with auditory and visual events being particularly important in BCI (i.e. AEP and VEP). A repeated event tends to produce spikes at a matching frequency. For a flashing light this is called a steady-state VEP (SSVEP), and SSVEPs are recorded over the V1 visual cortex at the back of the head (occipital lobe). SSVEP is also a key signal for BCI.

If you shut your eyes and there is no visual input, your brain tends to fall into a relaxed state with only low-frequency, long-distance interactions between neurons. This 8 to 12Hz rhythm is called an alpha wave, and an eyes-open/eyes-closed test is one of the standard ways of checking that EEG equipment is working properly, with the resulting low-frequency signals being detected most strongly in the occipital lobe around the primary visual cortex (V1).

EEG is typically measured in broad bands (although their precise definition varies according to source). These include notably alpha (7-14Hz), beta (14-28Hz) and gamma (28-56Hz). The beta band is most strongly associated with active processing and is commonly used in BCI. The Flinders University researchers showed that the gamma band tended to be very strongly contaminated by muscle, invalidating much of their previous work (Whitham et al., 2007-11; Pope et al. 2009).
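The band decomposition reduces to a small table of frequency ranges plus a band-power computation. The sketch below follows the octave-style band edges given above (other sources define them differently), and the eyes-closed-style signal is synthetic.

```python
import numpy as np

BANDS = {            # band edges in Hz, per the octave-style scheme above
    "alpha": (7.0, 14.0),
    "beta": (14.0, 28.0),
    "gamma": (28.0, 56.0),
}

def band_powers(eeg, fs):
    """Mean squared spectral magnitude of eeg inside each named band."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# A relaxed, eyes-closed-style signal: strong 10Hz alpha plus noise.
fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(3)
eeg = 2.0 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(0.0, 0.5, size=t.size)

powers = band_powers(eeg, fs)
print(max(powers, key=powers.get))  # expect "alpha"
```

Note that nothing in this computation knows whether the power in a band came from neurons or from muscle, which is exactly the gamma-band contamination problem the Flinders work exposed.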

To explore what was really muscle (EMG) and what was really neural (EEG), the Flinders researchers created a unique dataset in which they paralysed subjects (with two anaesthetists on hand to keep them breathing), allowing a direct contrast between muscle-free and muscle-contaminated EEG with the same subjects and tasks. The dangers to subjects were such that ethics approval was granted on condition that the study couldn't be advertised and that only volunteers with the medical understanding to appreciate the risks could be used as subjects; thus the subjects were generally drawn from the research group itself.

Using this dataset they were then able to characterize which electrodes and frequencies were worst contaminated, and to develop signal processing pipelines that removed all the contamination of central electrodes, and (given enough central electrodes) most of the contamination of peripheral electrodes (Fitzgibbon et al. 2007-2016; Janani et al. 2017-18).

Cortex

The cortex is kind of the skin of the brain, the outer layers. And this is pretty well all that EEG can pick up directly, although with advanced signal processing and a dense array of electrodes it is possible to probe deeper (without actually digging or drilling).

Working back to front, we have the occipital lobe and visual cortex at the back, and in front of that visual association areas and somatosensory association areas. We actually have far more than five senses, and the somatosensory system relates to our internal sensing of our own body, while association areas make connections between different parts of the body (which allows us to learn about and understand the whole sensorimotor landscape of an event).

At the sides, we have the auditory cortex towards the back and the speech cortex nearer the front, with language processing in association areas adjacent to them.

Generally, speech processing and logical thought take place in the left half of the brain for naturally right-handed people, while the right brain is more associated with artistic and intuitive thought and creativity. Some early BCI experiments on subliminal audio messages (messages that affect you but are not consciously heard) suggest that syntactic processing takes place in the left hemisphere while semantic processing takes place in the right, and that the syntactic processing was bypassed for the subliminal audio (Powers et al., 1996).

The two hemispheres of the brain are connected by the corpus callosum, and when this is severed there is a mismatch where things seen in one visual field (sent to the opposite hemisphere) may be understood practically/functionally but can't be named correctly, or vice-versa (Sacks, 1985). The corpus callosum is slightly different between the sexes (bigger in females), and also gets smaller in old age, and some functional differences between the sexes have been characterized (Davatzikos and Resnick, 1998).
Across the top, from ear-to-ear, we have a band of sensory cortex (about receptors/perception) then a band of motor cortex (about muscles/movement) then a band of pre-motor cortex (where intentions to move start). These are a kind of distorted human projection (homunculus) where the more important bits have more real estate (like hands and face).

Of course, the identified mouth area is adjacent to the areas for language, including grammar (syntax), and for producing speech; while the ears area is adjacent to the areas for understanding language and meaning (semantics) and for understanding speech.

In front of the motor areas, we have the prefrontal cortex — just behind the forehead. This is associated with attention, decision making, planning and working memory. 

See e.g. the article https://en.wikipedia.org/wiki/Cortical_homunculus in Wikipedia for details and homunculus illustrations.

Software

The management, running and testing of BCI/EEG systems is quite complex, and there are many software packages in Python or Matlab for performing the signal processing and managing stimulus presentation and synchronization.  

However, the most accessible and easy to use software is probably the BCI2000 software which directly implements a BCI SSVEP speller keyboard and has an active user community to help get things working: https://www.bci2000.org/bbs/.

It is not appropriate here to discuss particular software in detail, or provide set-up instructions, and generally the libraries and tutorial information provided by the hardware manufacturer and/or BCI2000 (plus the help of the user community) will be sufficient to get going.

So in this section, we discuss briefly the general principles of how the EEG is collected and processed, and what algorithms and neuroprocessing phenomena are used to implement BCI.

EEG signals are very weak, as they must pass through skull, scalp and hair, so high amplification is needed, as well as great care to maintain good contact.

Moreover, each electrode receives signal from multiple sources, neural and muscular, even signals from the eye muscles and from as far away as the heart. The basic approach to handling this is called Blind Signal Separation or Blind Source Separation (BSS), the aim being to separate out the sources mixed together in the electrodes. A basic principle is that you need at least as many electrodes as you have sources you want to separate out.

A secondary goal here is to localize the sources so that you can figure out where they all came from. This is the so-called inverse problem, since given location information about both the electrodes and the sources, and the speed and attenuation of the propagation between them, it is possible to calculate what signal we would expect — this is the much simpler forward problem, and if you know the location information you are no longer ‘blind’, so no longer doing BSS.
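A toy version of the forward/inverse distinction, assuming we somehow know the mixing (here a made-up matrix standing in for the propagation and attenuation): the forward problem is just a matrix multiply, and the inverse becomes a routine pseudo-inverse. The real EEG inverse problem is far harder precisely because the true mixing is unknown and underdetermined.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
sources = rng.normal(size=(3, n))       # three hypothetical neural sources

# Forward problem: known propagation/attenuation from sources to electrodes.
lead_field = rng.normal(size=(8, 3))    # 8 electrodes x 3 sources (made up)
electrodes = lead_field @ sources       # what we would expect to record

# Inverse problem (easy here only because the mixing is known and there are
# more electrodes than sources):
recovered = np.linalg.pinv(lead_field) @ electrodes

print(np.allclose(recovered, sources))  # expect True in this noise-free toy
```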

The simplest approach is called Principal Component Analysis (PCA), which minimizes the variances (or sum of squared errors) in the reconstruction of the signals from the extracted components. It is usually the first step in any analysis, and also allows estimation of the number of sources, and thus compression of the data to a smaller number of putative sources.
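PCA itself is only a few lines with an SVD: centre the channels, decompose, and count how many components carry meaningful variance. The sketch below mixes two synthetic sources into four "electrodes"; the 0.1% variance threshold is an arbitrary illustrative cut-off.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
t = np.arange(n) / 250.0

# Two underlying sources mixed into four 'electrodes', plus small sensor noise.
sources = np.vstack([np.sin(2 * np.pi * 6 * t),
                     np.sign(np.sin(2 * np.pi * 11 * t))])
mixing = rng.normal(size=(4, 2))
electrodes = mixing @ sources + 0.01 * rng.normal(size=(4, n))

# PCA: centre each channel, then SVD; squared singular values are the
# per-component variances, in decreasing order.
centred = electrodes - electrodes.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()

# Components carrying more than 0.1% of the variance are putative sources.
n_sources = int((explained > 1e-3).sum())
print(n_sources)  # expect 2
```

Keeping only those leading components is the "compression to a smaller number of putative sources" mentioned above.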

A higher-order family of approaches called Independent Component Analysis (ICA) seeks to minimize the higher-order (powers of) error as well. This is mathematically more difficult, although there are libraries implementing many such algorithms and their variants.

Other techniques also try to take into account the delays.

Similar principles apply to soundwaves, and are used by noise-cancelling microphones and microphone arrays; indeed they can allow you to home in on an arbitrary point in 3D space if you have at least four non-collinear microphones.
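The delay-and-sum idea behind such arrays can be sketched directly: shift each sensor's recording by its known propagation delay, then average, so the target adds coherently while independent noise averages out. This toy assumes integer-sample delays and synthetic data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
source = np.sin(2 * np.pi * np.arange(n) * 0.01)   # the signal of interest

# Four sensors hear the same source at different integer-sample delays,
# each with its own independent noise.
delays = [0, 3, 7, 12]
sensors = [np.roll(source, d) + rng.normal(0.0, 1.0, n) for d in delays]

# Delay-and-sum: undo each known delay, then average.
aligned = np.mean([np.roll(x, -d) for x, d in zip(sensors, delays)], axis=0)

def snr(x, clean):
    """Ratio of clean-signal power to residual-noise power in x."""
    noise = x - clean
    return float((clean ** 2).mean() / (noise ** 2).mean())

print(snr(sensors[0], source), snr(aligned, source))
# the beamformed SNR should be roughly four times the single-sensor SNR
```

With M sensors the coherent signal power stays fixed while the averaged noise power drops by a factor of M, which is the arithmetic behind beamforming's gain.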

Similarly for radio waves: modern cellular and WiFi systems use multiple antennae at different distances, seeking to localize the receivers, beam multiple low-power, delay-synchronized signals towards them, and receive them back the same way.

This means high power can be sent to the receiver rather than blasting it in all directions; and when receiving, the different sensors can combine signals together with the appropriate delays and increase noise resilience.

This approach is called beamforming, as it focuses the signal in both space and time, while the unwanted signals tend to cancel out because they are not in phase.

A closely related approach is the Surface Laplacian, which uses first- and second-order differences amongst a cluster of electrodes on the curved surface of the scalp to focus in on an internal point where signals of interest are expected to occur.
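A crude discrete version of the Surface Laplacian subtracts the average of the surrounding electrodes from the central one, cancelling anything common to the whole neighbourhood (such as a distant EMG source) while preserving a locally generated signal. This is a deliberately simplified flat-geometry sketch with synthetic data; real implementations use spline Laplacians over the curved scalp.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
local_eeg = np.sin(2 * np.pi * np.arange(n) * 0.04)   # generated under centre
distant_emg = 50.0 * rng.normal(size=n)               # huge far-field artefact

# The centre electrode sees both; the four neighbours see (nearly) only
# the far-field artefact, which reaches them all almost equally.
centre = local_eeg + distant_emg
neighbours = np.stack([distant_emg + 0.1 * rng.normal(size=n)
                       for _ in range(4)])

# Discrete Laplacian: centre minus the neighbourhood average.
laplacian = centre - neighbours.mean(axis=0)

corr = np.corrcoef(laplacian, local_eeg)[0, 1]
print(corr)  # close to 1: the local EEG is recovered
```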

For some BCI typewriters, this might be over the part of the motor and premotor cortex associated with particular fingers (where the corresponding locations are only millimetres apart in the homunculus). For other very common BCI typewriters, they may be focussed over the occipital cortex in order to pick up Visual Evoked Potentials (VEP or SSVEP).

Using appropriate pipelines of such signal processing techniques it is in principle possible to home in on the brain signal and eliminate almost all of the muscle signal, provided there are enough central electrodes; in that case the muscle signal can largely be filtered out of even the peripheral signals that sit directly over muscle. It is important, however, to use difference signals between pairs of nearby electrodes: with a common reference, too many bits of resolution are lost in coping with the huge EMG amplitudes, while close electrodes have similar signals and can focus their 16 or 24 bits of resolution on the differences.

Note that 16-bit resolution is accurate to ± one part in 32767, while 24-bit resolution is accurate to about ± one part in eight million, provided the analogue part of the circuit is up to scratch. This is an important consideration given that the EMG signals at peripheral electrodes can be many thousands of times greater in amplitude than the EEG signals.
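The arithmetic behind those resolution figures, and why EMG eats the available bits under a common reference, can be checked directly. The 2000x EMG-to-EEG amplitude ratio below is an assumed round number for illustration ("many thousands of times" in the text).

```python
import math

# Signed 16-bit and 24-bit ADCs resolve +/- (2**(bits-1) - 1) steps.
steps16 = 2 ** 15 - 1          # 32767 -> "one part in 32767"
steps24 = 2 ** 23 - 1          # 8388607 -> "about one part in eight million"

# Suppose peripheral EMG reaches 2000x the amplitude of the EEG of interest.
# To avoid clipping, the ADC range must cover the EMG, so the EEG spans
# only steps/2000 of the available steps:
emg_over_eeg = 2000
eeg_steps16 = steps16 // emg_over_eeg   # ~16 steps: EEG nearly invisible
eeg_steps24 = steps24 // emg_over_eeg   # ~4194 steps: still usable

bits_lost = math.log2(emg_over_eeg)     # ~11 bits consumed as EMG headroom
print(steps16, steps24, eeg_steps16, eeg_steps24, round(bits_lost))
```

Differencing nearby electrodes cancels most of the shared EMG before digitization (or before the numbers are used), so those ~11 bits of headroom are not wasted.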

Various such pipelines have been explored, and even optimized using information from the Flinders paralysis experiment (Fitzgibbon et al., 2007-16; Janani et al., 2017-18).

With appropriate application of these signal processing pipelines, it is possible to successfully analyse gamma-band EEG, and it is possible to get good results from consumer/hobby-grade EEG equipment so long as there are electrodes in the required locations and the software and interface allow for sufficient temporal resolution. For hobby-grade systems with fixed frames, the electrodes are very often in poor positions, and if there are not multiple electrodes in central positions, little actual EEG can be recovered; for wireless systems using Bluetooth/BLE, performance will tend to be poor due to the low sampling frequency and thus low temporal resolution (Grummett et al., 2014-2015).

The Emotiv EPOC or EPOC+ remains a good compromise system, and is the only fixed-frame system recommended. The 3D-printed frame of the OpenBCI system allows some flexibility, but best results will be achieved by using an electrode cap or flexible band with good electrodes.

Generally 802.11 WiFi is preferred to 802.15 Bluetooth standards as it provides much greater bandwidth.
Of course, wired systems can also be used successfully, but dragging cables can often pull the electrodes out of position and make them lose contact with the scalp.

Further Reading

References

  1. Carabali CA, Willoughby JO, Fitzgibbon SP, Grummett T, Lewis T, DeLosAngeles D, Pope KJ (2015). "EEG source analysis of data from paralysed subjects", 11th International Symposium on Medical Information Processing and Analysis
  2. Caton R (1875). "Electrical currents of the brain". British Medical Journal. 2 (765): 278.
  3. Chen XG, Wang YJ, Nakanishi M, Gao XR, Jung TP, and Gao SK (2015). "High-speed spelling with a noninvasive brain–computer interface", Proceedings of the National Academy of Sciences 112 (44), E6058-67. www.pnas.org/cgi/doi/10.1073/pnas.1508080112 
  4. Cottrell J, Fitzgibbon SP, Lewis TW, Powers DMW (2012). "Investigating a Gaze-Tracking Brain Computer Interface Concept Using Steady State Visually Evoked Potentials", 2012 Spring Congress on Engineering and Technology, pp.1-4
  5. Davatzikos C, Resnick SM (1998). "Sex differences in anatomic measures of interhemispheric connectivity: Correlations with cognition in women but not men." Cerebral Cortex 8 (7): pp.635–40. https://doi.org/10.1093%2Fcercor%2F8.7.635
  6. Fitzgibbon SP, DeLosAngeles D, Lewis TW, Powers DMW, Whitham EM, Willoughby JO, Pope KJ (2015). "Surface Laplacian of scalp electrical signals and independent component analysis resolve EMG contamination of electroencephalogram", International Journal of Psychophysiology 97 (3), pp.277-284
  7. Fitzgibbon SP, Lewis TW, Powers DMW, Whitham EM, Willoughby JO, Pope KJ (2013). "Surface Laplacian of Central Scalp Electrical Signals is Insensitive to Muscle Contamination", IEEE Transactions on Biomedical Engineering 60 (1), pp.4-9
  8. Fitzgibbon SP, Powers DMW, Pope KJ, Clark CR (2007). "Removal of EEG noise and artifact using blind source separation", Journal of Clinical Neurophysiology 24 (3), pp.232–239
  9. Fitzgibbon SP, DeLosAngeles D, Lewis TW, Powers DMW, Grummett TS, Whitham EM, Ward LM, Willoughby JO, Pope KJ (2016). "Automatic determination of EMG-contaminated components and validation of independent component analysis using EEG during pharmacologic paralysis", Clinical Neurophysiology 127 (3), pp. 1781-1793
  10. Fuss IG, Navarro DJ (2013). "Open Parallel Cooperative and Competitive Decision Processes: A Potential Provenance for Quantum Probability Decision Models." Topics in Cognitive Science 5 (4), pp.818–843. https://doi.org/10.1111/tops.12045
  11. Freedman, DZ, van Nieuwenhuizen (1985). "The Hidden Dimensions of Spacetime", Scientific American, 252 (3) pp.74-83. https://www.jstor.org/stable/pdf/24967594
  12. Grummett TS, Leibbrandt RE, Lewis TW, DeLosAngeles D, Powers DMW, Willoughby JO, Pope KJ, Fitzgibbon SP (2015). "Measurement of neural signals from inexpensive, wireless and dry EEG systems", Physiological Measurement 36 (7), p.1469
  13. Grummett TS, Fitzgibbon SP, Lewis TW, DeLosAngeles D, Whitham EM, Pope KJ, Willoughby JO (2014). "Constitutive spectral EEG peaks in the gamma range: suppressed by sleep, reduced by mental activity and resistant to sensory stimulation", Frontiers in Human Neuroscience 8, p.927
  14. Huey EB (1908/1968). The Psychology and Pedagogy of Reading, MIT Press, Cambridge MA.
  15. Janani AS, Grummett TS, Lewis TW, Fitzgibbon SP, Whitham EM, DelosAngeles D, Bakhshayesh H, Willoughby JO, Pope KJ (2017). "Evaluation of a minimum-norm based beamforming technique, sLORETA, for reducing tonic muscle contamination of EEG at sensor level", Journal of Neuroscience Methods 288, pp.17-28
  16. Janani AS, Grummett TS, Lewis TW, Fitzgibbon SP, Whitham EM, DelosAngeles D, Bakhshayesh H, Willoughby JO, Pope KJ (2018). "Improved artefact removal from EEG using Canonical Correlation Analysis and spectral slope", Journal of Neuroscience Methods 298, pp.1-15
  17. Javal LÉ (1907). Physiologie de la lecture et de l’écriture. Annales d’Oculistique, Paris, pp.137-187.
  18. Kaku M (2006). Parallel Worlds: A Journey Through Creation, Higher Dimensions, and the Future of the Cosmos. Anchor.
  19. Kennedy PR, Bakay RA (1998). Restoration of neural output from a paralyzed patient by a direct brain connection. Neuroreport 9, pp.1707-1711
  20. Kuhn TS (1962). The Structure of Scientific Revolutions. The University of Chicago Press.
  21. Kuhn TS (1970). Logic of Discovery or Psychology of Research? In Lakatos, Imre; Musgrave, Alan (eds.). Criticism and the Growth of Knowledge. Cambridge University Press. pp. 1–24.
  22. Kunjan S, Lewis TW, Grummett TS, Powers DMW, Pope KJ, Fitzgibbon SP, Willoughby JO (2016). "Cross subject mental work load classification from electroencephalographic signals with automatic artifact rejection and muscle pruning", Brain Informatics and Health BIH 2016, Omaha
  23. Oxley TJ, Opie NL, John SE, Rind GS, Ronayne SM, Wheeler TL, Judy JW, McDonald AJ, Dornom A, Lovell TJH, Steward C, Garrett DJ, Moffat BA, Lui EH, Yassi N, Campbell BCV, Wong YT, Fox KE, Nurse ES, Bennett IE, Bauquier SH, Liyanage Kishan A, van der Nagel NR, Perucca P, Ahnood A, Gill KP, Yan B, Churilov L, French CR, Desmond PM, Horne MK, Kiers L, Prawer S, Davis SM, Burkitt AN, Mitchell PJ, Grayden DB, May CN, O'Brien J (2016). "Minimally invasive endovascular stent-electrode array for high-fidelity, chronic recordings of cortical neural activity", Nature Biotechnology 34, pp.320-327.
  24. Pope KJ, Fitzgibbon SP, Lewis TW, Whitham EM, Willoughby JO (2009). "Relation of gamma oscillations in scalp recordings to muscular activity", Brain Topography 22 (1), pp.13-17
  25. Popper K (1934/1959). The Logic of Scientific Discovery (2 ed.). Martino Publishing.
  26. Powers DMW, Dixon SE, Clark CR, Weber DL (1996). "Cocktails and brainwaves-experiments with complex and subliminal auditory stimuli." Australian New Zealand Conference on Intelligent Information Systems (ANZIIS 96), pp.68–61.
  27. Powers DMW, Fitzgibbon SP, Clark CR (2007). Brain Computer Interface, Joint HCSNet-HxI Workshop on Human Issues in Interaction and Interactive Interfaces
  28. Powers DMW, Clark CR, Fitzgibbon SP, Pope K (2007). Removal of EEG Noise and Artifact Using Blind Source Separation, Journal of Clinical Neurophysiology 24 (3), pp.232-243.
  29. Roll, W. G., & Williams, B. J. (2010). Quantum theory, neurobiology, and parapsychology. In S. Krippner & H. L. Friedman (Eds.), Mysterious minds: The neurobiology of psychics, mediums, and other extraordinary people. Praeger/ABC-CLIO. Pp.1–33
  30. Sacks O (1985). The Man Who Mistook His Wife for a Hat and Other Clinical Tales. Pan Books.
  31. Whitham EM, Pope K, Fitzgibbon S, Lewis T, Clark C, Loveless S, Willoughby J (2007). Scalp EEG during paralysis: augmented gamma activity can be detected during cognitive processing, Scientific Meeting of the International Brain Research Organisation, Melbourne
  32. Whitham EM, Pope KJ, Fitzgibbon SP, Lewis T, Clark CR, Loveless S, Broberg M, Wallace A, DeLosAngeles D, Lillie P, Hardy A, Fronsko R, Pulbrook A, Willoughby JO (2007). Scalp electrical recording during paralysis: Quantitative evidence that EEG frequencies above 20Hz are contaminated by EMG, Clinical Neurophysiology 118 (8), 1877-1888
  33. Whitham EM, Fitzgibbon SP, Lewis TW, Pope KJ, DeLosAngeles D, Clark CR, Lillie P, Hardy A, Gandevia SC, Willoughby JO (2011). Visual Experiences during Paralysis, Frontiers in Human Neuroscience 5
  34. Whitham EM, Lewis TW, Pope KJ, Fitzgibbon SP, Clark CR, Loveless S, DeLosAngeles D, Wallace AK, Broberg M, Willoughby JO (2008). Thinking activates EMG in scalp electrical recordings, Clinical Neurophysiology 119 (5), 1166-1175
  35. Whitham E, Fitzgibbon S, DeLosAngeles D, Lewis T, Pope K, Clark C, Lillie P, Hardy A, Gandevia S, Willoughby J (2009). Experiences during paralysis: vision and humour, Australian Neuroscience Society Inc. 29th Annual Meeting
  36. Yazdani N, Khazab F, Fitzgibbon SP, Luerssen MH, Powers DMW, Clark CR (2010). Towards a brain-controlled Wheelchair Prototype, 24th BCS International Conference on Human-Computer Interaction

Availability of Time for PsyQ 

Time for PsyQ is available from your favourite bookseller:

 

Awards for Time for PsyQ

Time for PsyQ won the Silver medal for Teen and Young Adult Sci-Fi Action & Adventure in the 2023 Global Book Awards.


Reviews of Time for PsyQ

4.7/5 Amazon
5.0/5 Emerald
4.8/5 Goodreads 
4.0/5 OnlineBookClub
5.0/5 Reedsy


OBC

★ ★ ★ ★ ★

https://forums.onlinebookclub.org/viewtopic.php?f=21&t=395112  Sanu K
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=378174  Tejas Koli
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=386772  Gladys Ratish Kumar
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=401860  Bhagyashree Makde
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=388234  Donna Marie McGuire
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=387954  Devesh Patel
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=377346  Sachin S
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=382193  Munmun Samanta
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=385824  Stormy Shuler
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=404982  Wajida

★ ★ ★ ★ 

https://forums.onlinebookclub.org/viewtopic.php?f=21&t=490460  Merrit Fletcher
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=384028  Surekhna Krishnakumar
https://forums.onlinebookclub.org/viewtopic.php?f=21&t=376706  Gerald Stewart




Saturday, August 26, 2023

What's in a star?

I read a lot of books as well as reviews.

I also write both books and reviews.

Reviews and ratings are the currency by which books are valued in the book industry, but this value is only a reflection of reality if readers take the time to write reviews, or at least give a rating and upvote any reviews that are helpful, fair and capture their own views (rewarding the good reviewers). Of course, it is important that reviews don't give away things that should come as a surprise (fiction) or would give away secrets (non-fiction/how-to).

I try to be guided by the author's or publisher's own description and blurb: what they themselves reveal. But some newer indie authors do give away too much, particularly in descriptions of sequels or multibook box sets (so avoid reading beyond the description of the first book). Recently I wrote a review in which the subject/title was a warning not to read too much of the author's description, but since the dots for reading more come before the reviews on Amazon, most people will read the description before getting to the reviews.

Me? If a description seems to be giving away the plot, I will simply click on the stars/ratings link and have a look at the reviews, again skipping forward quickly if I sense any spoilers coming. Another approach is to start on a review site like Goodreads and look for the highly rated books in your genres of interest. Many reviewers post on both Goodreads and Amazon (and indeed getting to the last page of an Amazon eBook should trigger an opportunity to rate/review on both with one click). Professional or author reviewers who review on other sites or their own blog should post there too (and you shouldn't have to ask).

As reviewers, there are many things we have to balance, and the balance will vary depending on the purpose of the review, and the type of review.

Editorial Reviews, Advance Reader Copies and Galley Proofs

The first kind of review is the editorial review, written before a book is published, by a 'professional' reviewer. This is intended to supply the kind of endorsements and comments that might appear on the book cover or on its online sales/preorder page, and thus is obtained well before publication using a preliminary version of the book (ARC or Proof).

Newspapers and magazines, bloggers and websites are the major sources of such reviews. 'Professional' can mean one of two things: that the person has written a lot of reviews (experienced), or that the reviewer has been independently certified in some way (those associated with a formal publication or reputable company).

Advance Reader Copies (ARCs) are often made available by authors on an ad hoc basis (or through sites like BookFunnel). Especially for traditionally published books, sites like NetGalley and Edelweiss, which are dominated by traditional publishers, make ARCs available to reviewers from the galley proof stage, and vet the reviewers as professional both in the sense of having experience and of having an outlet.

Another major source of early reviews is authors of similar books — publishers like to get quotable endorsements from writers in their stable. I am always wary of these, because often the words are put into the mouths of those authors by the publishers, and that author may not have even read the entire book. I won't write an endorsement or positive review unless I have (though publishers tend to make their publishing decisions on a synopsis and an opening chapter).

There are also websites that allow you to buy such reviews and endorsements. The reputable ones seek to be 'honest'. Some try to show they are honest by being 'balanced', and this is often interpreted to mean that they must include some negatives in a review. Unfortunately, these negatives can be forced or even totally made up ("well, it is self-published so there must be some typos and grammos even if I didn't notice them").

If it is worth mentioning typos and grammos in a review, then it is important to provide examples. Without evidence, such a statement is worthless, even libellous; and for an ARC review there will be a path to give specific feedback to the author or publisher if the reviewer is inclined to be helpful and there aren't too many errors. So it is important to address the actual issues clearly and accurately (I take notes as I read), to be clearly helpful and obscurely diplomatic... Whatever you do, don't attack the author or say anything that can be construed as such! Read it from the author's perspective before hitting that commit button.

Unhelpful and Misleading Reviews vs Helpful Reviewers

One reviewer (from a company deliberately not mentioned in this blog post) said of one of my ARCs/Galleys "The story is marred somewhat by formatting mistakes, grammatical errors, misused words, and other linguistic details, but the heart of the story is strong, and the premise is enchanting from the start." 

The company would not retract that, but did correct the repeated use of a masculine pronoun for my decidedly female heroine (who was one of a group of middle-grade girls, and at one point was asked to take off her bra for a medical scan). The Editor-in-Chief did take the time to scan the book, and pointed out deviations from their formatting expectations (clearly based on the much-deprecated Chicago Manual of Style), ignoring the fact that this was a prepublication ARC/galley/proof provided to them for prepublication editorial review purposes. However, my query explicitly focused on the "grammatical errors, misused words, and other linguistic details", and these remain unidentified: a new read through the book confirmed that this statement was unwarranted (not to mention all the previous input I've had on the book).

So I am left with a statement, in multiple places on the web, that impugns my reputation as a Professor of Computational Neuroscience and Psycholinguistics writing a story that builds on this specific expertise, uses the appropriate terminology, refers to actual biomedical equipment and scientific theories using the appropriate terms, and sometimes even names the brand of the device concerned. Frankly, I am dubious that the reviewer really read the book (missing that the protagonist was a girl), but maybe they did a quick scan like the EiC and saw some words they didn't understand. Needless to say, I am disinclined to trust my prepublication ARCs/Proofs to this company again for editorial review.

On the other hand I, as a reviewer, have to make sure that I don't fall into the same trap. Especially as you can't argue with anonymous reviewers, and authors really have no recourse. Anyway, as an author/reviewer I want to help authors, particularly indie authors, not make problems for them.

If you are dealing with a prepublication ARC/galley/proof, errors should be communicated to the author or publisher so they can be fixed. You are not dealing with the final format of the book, so complaints about format are inappropriate and unprofessional. Personally I dislike the double-spaced formats that are expected in manuscripts (or more correctly typescripts) per the traditional and modern formats specified by most publishers and agents, and some review services. Note that manuscripts are, etymologically, handwritten, and that the wide-margin double-spacing convention in typescripts and thesis submissions was primarily so that proofreaders and examiners could write corrections between the lines, with proofreading marks in the margins. The secondary advantage, for proofreaders, is that it slows you down as a reader, stops you getting too immersed in the story, and focusses you on the minutiae of formatting, spelling and grammar; whereas the final format of the book is designed to guide your eye through the text quickly and make reading easier.

As reviewers, we are not paid proofreaders, and are meant to be appreciating the work as a whole, getting the same big picture that a normal reader will have. Double-spaced PDFs (or ePubs - yes, I got one of those recently) are totally inappropriate.

On the other hand, supplying editorial reviewers in a double-spaced format might avoid the kind of complaints about formatting that I quoted above. But as a reviewer, I avoid these double-spaced copies, and indeed PDFs in general (if permissions allow, I will tend to convert such PDFs or DOCs to single-spaced versions, ideally ePubs, that can be read comfortably on my eReader — if permissions don't allow this, I have on occasion declined to read things too unwieldy to read fluently on my eReader, or been blocked from reading them).

In particular, sometimes books are provided with Digital Rights Management as encrypted ACSM files that can hide different formats, and the app that opens one doesn't necessarily know how to deal with the hidden format, the once-only DRM locking you out forever. Edelweiss is the big culprit here as it does not tell you what formats are available, while NetGalley does and prioritizes ePubs. Also ePubs are sometimes faked by taking page images to make an illegible ePub that doesn't scale to a handheld device; moreover Adobe Digital Editions (ADE) is not suitable for reading on tablets/phones without keyboard/mouse. However, NetGalley also has its own app in which selected and approved or autoapproved ARCs magically appear, and which navigates with simple left/right swipes like the Kindle app.

So ARCs tend to be ePubs or PDFs in galley proof form, in which case they should not be double-spaced, and may or may not approximate the final publication (galley proofs exist primarily so that final proofreads can get rid of the remaining typographical errors, while ARCs are for advance/beta readers and editorial reviewers — but these days the same document usually serves both roles). They may also be physical books, e.g. Proof or Author copies sent direct from Amazon, or similar copies obtained from other PoD publishers like Draft2Digital.

[Also be aware that (Word 365) DOCX reduces embedded images to about 200dpi, irrespective of settings, which can impact the quality of images/cover and thus reviews. The older DOC format doesn't compress and is much safer (just don't make a DOCX and then use that saved version to create a DOC or PDF, as the loss is by then already irretrievable). I have also stayed with the non-X version for posters in PowerPoint for similar reasons (and keep several older versions of Microsoft Office installed on one of my computers to deal with the various other incompatibilities and downgrades of the last 20 years).]

Children's Books: Picture Books through Middle Grade to Young Adult

I am always happy to send either ePubs or paperbacks to reviewers. For children's books, the reviewer may actually mediate/mentor a child's reviews (yes, I'm happy to send copies to parents/teachers who write reviews for their children to look at), and physical books may be necessary or preferred.

At this point it is worth mentioning the Wishing Shelf Awards and Reviews, out of Europe (UK+Sweden), which actually gets groups of school children of the appropriate age groups to review Children's/MG/YA books, and provides good feedback if requested. They also have adult reviewers reviewing the full range of books, so don't let the name put you off.

Incidentally, beta readers and beta testers refer to a second phase of testing by people removed from the creative process — the first, or alpha, stage of reading/testing is done in house. Again, I try to include children in a broad age range around my target audience. It is important to incorporate feedback from alpha and beta readers of the appropriate demographic before sending books out for editorial review. 

A writers' group can be a good source of readers — in fact, my writers group provides, in the first instance, alpha hearers, although later those who are interested will have the opportunity to feed in at the galley/proof/ARC/beta stage. But we are still second-guessing what the children will think.

Children live in the adult world and hear and watch adult conversations, adult news broadcasts and adult-oriented TV shows. And just because a child is in a certain class or of a certain age doesn't mean they are the same as every other child of that class/age. So explore, ask children to read your ARCs — and ask them which of their friends would enjoy it!

"Friends, neighbours and countrymen" can be a good source of parents and hence kids — and yes, "lend me your ears": reading the book out loud, or segments of it, can be useful. This is especially the case when your audience is MG/YA or younger.

Teachers, parents, librarians and publishers have a fair idea of what children will like, but these days some gatekeepers tend to play down to perceived low expectations, or up to a particular agenda, rather than encouraging young readers to explore and grow and develop: raising their reading level, developing their vocabularies, enhancing their cognitive capabilities, and learning social skills.

Publication of Editorial Reviews

Editorial reviews, or their snappy one-liners, glowing phrases and explicit endorsements, often find themselves on the cover of the book or in the description on bookshop websites. Amazon also allows you to put these into a separate section on your Amazon Author Page (which is used on their main North American .com site, but not on others).

The full reviews are generally published in the reviewers' own media outlets, whether website, blog or physical magazine or newspaper.

Many reviewers will also (optionally or automatically) put them up on Goodreads and/or Amazon. Some allow you to opt out, or have it go up only if it gets a certain number of stars. I always allow it to go up, and don't select the 'suppress if bad' option. In fact, every review helps (the number of reviews is important) and a spectrum of reviews reflects different people having different tastes and different expectations. But this is where upvoting is important (or downvoting if allowed) as this affects which reviews are shown first, even if filtered by number of stars. In some cases, commenting is also possible (ideally by readers rather than the author).

Reviewers should acknowledge it if they received an ARC, and specify how (from the author, publisher, NetGalley, Edelweiss, ...), and should then go on to explain that the review was voluntary. I tend to say something like: I received an obligation-free copy of this book from the publisher and am reviewing it voluntarily — my opinions are my own.

Amazon Verified Purchaser vs Kindle Unlimited - Customer/Reader Reviews

While reviews of books you receive for free can, and should, be included by the reviewer amongst the normal reviews on the book pages, Amazon makes special note of 'Verified Purchaser' reviews — that is, people who actually spent money to get the book. This doesn't include ARCs, and unfortunately it also doesn't include borrows on Kindle Unlimited — even though we have paid for the privilege and authors get paid for each page read (typically close to half a cent a page, which is actually about half what they'd get for a 300 page eBook at the minimum price of USD2.99, although actually more like a third of the per-page printing cost for pBooks).

There are websites that encourage authors to review books in a pool, and carefully develop reviewers skills and rate or certify their reviewers. 

A long-established one is OnlineBookClub, although its website is rather clunky. It is strongly moderated, meaning there can be delays due to the moderation process as they sort out discrepancies (which is good for authors and extra work for reviewers). Authors pay different amounts for the editorial reviews according to the level of the professional 'team' reviewer (who gets paid). Authors can't be team reviewers, but once a book is team-reviewed, the Authors Review Authors (ARA) scheme allows authors to gain credits for reviews of other people's books they've completed and then spend these on getting a review from the pool of ARA members.

CES Pro has a similar, but much newer, Community Book Exchange Program where authors can add a book to the program for review and in turn be given a book to review (they try to make it relevant, and you can request another).

In both cases, you obtain the book in the normal way, and if it is available for a dollar (e.g. on an Amazon Countdown) I will buy the book rather than borrow it, since a borrow would take up limited space in my KU library and the review would forgo the coveted 'Verified Purchaser' label. For a 600 page book at the minimum price of $2.99 or a 200 page book at the countdown price of $0.99, the author gets around the same through Kindle Unlimited/Kindle Select as for a sale; for a shorter book, proportionally less.
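As a rough back-of-envelope sketch of the royalty arithmetic above (the per-page KU rate of about $0.0045 and the 70% royalty tier are assumed figures for illustration, not official ones; actual KU rates vary month to month):

```python
# Rough comparison of a full Kindle Unlimited read-through payout
# versus the royalty on an outright sale. All rates are assumptions
# for illustration only.

KU_RATE_PER_PAGE = 0.0045  # assumed payout per page read (~half a cent)

def ku_payout(pages: int) -> float:
    """Payout if every page of the book is read on Kindle Unlimited."""
    return pages * KU_RATE_PER_PAGE

def sale_royalty(price: float, rate: float = 0.70) -> float:
    """Royalty on a sale at the given list price and royalty rate."""
    return price * rate

# 600-page book at the $2.99 minimum price (70% tier):
print(f"KU: ${ku_payout(600):.2f} vs sale: ${sale_royalty(2.99):.2f}")

# 200-page book at a $0.99 Countdown price (Countdown Deals keep 70%):
print(f"KU: ${ku_payout(200):.2f} vs sale: ${sale_royalty(0.99):.2f}")
```

Under these assumed rates the two routes pay the author roughly the same for these page counts and prices, which is why buying the dollar book rather than borrowing it costs the author little while earning the review the 'Verified Purchaser' label.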

Note that, in both OBC's ARA and CESP's CBEP, these are not book swaps where authors review each other's books. That practice is regarded as unethical, and sanctions will generally be applied if it is discovered (e.g. by Amazon). An important principle is that reviewers are at arm's length and reviews are voluntary, but this doesn't mean you can't know an author you're reviewing (e.g. personally, through professional correspondence, or by meeting them at a conference or fan meet).

What if you are reviewing a book in return/credit for reviewing a third-party book from a review pool? This meets Amazon's guidelines: OBC ARA shows you 50 possible books at a time, and allows you to select up to five at a time to look at, while CESP CBEP requests you to review one. And if you don't like the book, and don't feel that you want to waste more time on it — not because it is a bad book, just not to your taste — in both reviewing pools you can opt out and get another book to review.

Feedback and Errata: English Grammar, Spelling and Formatting

Unless a book is really unprofessionally presented, I would normally make no comment in the review about occasional typos, grammos and formatting errors in the book, but if possible will pass them back to the author (through OBC or CESP if they nominated the book). But more generally if reviewing directly on Amazon or Goodreads, this direct author contact is not easy (unless there is contact information for the author in the book). In such cases I might mention the errors and give one or two examples of each kind.

There are, however, some things I find really annoying but have to hold back on, e.g. routine use of "and I" when it is the object of a verb or preposition, and thus accusative, as in constructions like "He gave my brother and me a lift", "He then wanted my brother and me to pay for the petrol", "In the end he got physical and demanded money from my brother and me." Incorrect use of "and I" is okayish in dialogue where you are trying to convey that the speaker is uneducated and trying to obey the grammar rules of the upper crust, but best practice is not to get too carried away conveying dialect. The educational problem here is that, in response to the argot "me and John went to the park", parents/teachers simply say "No, don't say 'me and John', say 'John and I went to the park'" and don't explain the rules, and often don't even know them themselves (there are two types of rule bound up together here: the grammatical nominative/accusative distinction; and the social convention of putting yourself last). This leads to overgeneralization — the rule the children learn is always say 'and I' and never 'me and' or 'and me', and then it may be further generalized, with 'and him' becoming 'and he', etc.

In relation to formatting and spelling, reviewers need to be aware that there are different models, and while US publishers can insist on Webster and the Chicago Manual of Style, there are other countries and other conventions — and for the US, the New York Times Manual of Style is much, much better than the outdated Chicago one that was originally dictated by printers on the basis of what looks good, rather than sound principles of grammar and logic. Similarly, modern British spelling, grammar and punctuation were largely standardized by William Caxton's publishing house in the late 1400s, in the name of consistency — presumably some authors railed against that too when their preferred forms were deprecated. I would never complain about such conventions in a review just because they looked strange or unusual to me. But if it really is a reason not to buy the book, if it really makes the story hard to understand, then breaking conventions is worth mentioning — with explicit explanations of the specific issues you are concerned about.

One bugbear in relation to punctuation is the em-dash — which is meant to be the width of an 'M' (and in modern typesetting conventions should have space around it, ideally narrower spaces than usual), while the en-dash (the width of an 'N', with normal spaces around it) is also accepted/recommended. No spaces means that it combines with the words on either side to look like a single word, and word processors treat that combination as a word. Poorly (or 'artistically') designed fonts can have very long em-dashes; and long dashes, small spaces or no space cause problems for the word processor in trying to break lines typeset as (left and right) justified text. Dashes are used as a weaker form of parenthesis than brackets — often with explanation or examples — but stronger than commas, and less ambiguous when there are commas elsewhere in the sentence. When the closing bracket would be at the end of the sentence, comma and dash parenthesis is still quite appropriate — it is just that there is no explicit closing punctuation needed, as the sentence-terminating punctuation terminates it too (as does nesting enclosure by actual parentheses).

Note that, generally speaking, parenthesis using round or square brackets should be kept to a minimum in fiction, and such incidental information set off with other punctuation, most often commas, if minor, or dashes — for significant chunks of text that include a verb or a comma.

The (thin-spaced) em-dash or (normally-spaced) en-dash can also be used to mark a sharp external interruption (usually followed by end quote, or resuming after a start quote), and should not be confused with the three dot elision mark (a single Unicode character, which Word will tend to substitute for the individual dots). Elision indicates where a speaker has trailed off without finishing a thought or giving their conclusions or ...

Sometimes this is because it is understood, and potentially even completed, by the hearer. Sometimes it may be because they've changed their mind, realized what they were going to say is wrong. Sometimes it is simply an indication that they need time to think... and they will then resume the thought. Like period, the dots should adjoin the word when they mark that part of the word is missing. Like period, it can also adjoin at the end of an utterance (no actual spoken/written words are broken or omitted). But, in contrast with period and apostrophe for just part of a word missing (elided), it is spaced when whole words are missing (elided) — word-level elision is used in formal quotation to indicate where extraneous details of parenthetical comments are omitted (and square brackets with replacement/summary text can be used instead of dots if needed in order to make a proper sentence).

And it is the three dot pause marker that should be used, not the single point, when you draw out a sentence: never... use... fullstops... The individual words are not fully complete utterances, which is what period indicates. Period! Also, in prose, you don't capitalize after anything but sentence-final punctuation (.?!) as (apart from proper nouns/names/titles and 'I') you only capitalize when you are starting a new sentence. One exception to this is when a colon is followed by a numbered or bullet list of full sentences.

I also often see colon/semicolon anomalies (notably semicolons used where colons are needed) in "professionally reviewed" and "traditionally published" books. 

Colon is used to introduce explications, exemplifications or implications of what went before (any of Kipling's servants, often a list: What and Where and When, and How and Why and Who), and using a semicolon rather than colon signals that there is no direct relationship (semicolon is more like a comma but for more complex items, whereas colon is more like a period but for more directly connected items). The dash can sometimes be used instead of colon, and indeed a ':–' usage was formerly common in introducing lists, but is now commonly deprecated.

Semicolon is like a comma, looks like one, and functions like one as a list separator. It also looks like a colon, and will mostly be used in a list of complex phrases or clauses that follow a colon. Colon separates the sentence into two parts, where the first part is usually a clause and the second part is often a list of examples or alternatives using its lower level 'semi' form to separate the examples. Period separates sentences, and where a sentence is a list of examples comma is the lower level derived form used to separate examples.

Semicolon is also used without colon to separate complex phrases or clauses (each usually containing a verb or perhaps commas) in a list, much as comma is used to separate individual words or simple phrases (typically not containing a verb or commas) — really it is a matter of complexity, and ideally the semicolon separated clauses match in some way: e.g. they may be temporally sequential or logically parallel, exemplifying different facets or outcomes of an underlying concept or situation. Modern editors will sometimes call these 'run-on' sentences, but in artistic, literary, poetic, stylistic writing, the semicolon can be preferred for balance and effect. 

They are partly right, in that such 'run-on' sentences could be separated into separate sentences with period. However, when there is a strong connection, e.g. the actions were performed in sequence to achieve a particular outcome, the list of sequential actions may be separated by commas; or a list of parallel actions or alternatives may be separated by semicolons.

There is one final point that needs to be made about lists: 'and' or 'or' are often used at the end of the list to clarify the conjunctive or disjunctive nature of the list. To emphasize the completion of a complex action, 'and then' may be used instead, or some other adverbial like 'and finally'. The 'and' (etc.) may be used bare — and this is typical when the components are simple — or may be used in conjunction with comma or semicolon, and this ', and' "Oxford comma" should be regarded as optional: as merely a guide to how to read the sentence (especially out loud). So inclusion or omission of the Oxford comma is not a punctuation error but a stylistic choice. 

Generally, I will include the comma if I want a pause, and exclude it if I want the reader to read without a significant pause. This also applies to adverbs and adverbials more generally, at the start of a sentence. You really don't want commas around every individual adverb or adverbial phrase, and if you have commas for adverbs, commas for parenthesis, and commas for lists — all in the one sentence — it can simply get too complex. So it is better to omit the optional commas before and/or after conjunctions and adverbials in such situations. As another hint, the longer words/phrases (-ly adverbs or multiword adverbials) tend to be used as introductions to longer units of text — most likely at the start of a paragraph: so 'On the other hand,' 'Alternatively,' and 'Therefore,' are used with a comma at the start of a paragraph; while 'And', 'But', 'So' and 'Then' are generally used without a comma at the start of a sentence or clause; and 'when', 'where', 'while' and 'if' are not immediately followed by a comma, but rather introduce an adverbial clause that is going to be followed or preceded by a comma.

[See Kipling's 'serving-men' poem below for example, in which each quatrain is a single sentence, and the colon could have been used on several occasions but wasn't (because the poet is layering detail rather than providing explication — viz it is not intended as explanation/exemplification). We see illustrated the similarity between the use of colon and dash to introduce examples, where the dash has a more parenthetic feel and a stronger prosodic effect signaling a fading voice, a longer pause and a gathering of thoughts (and indeed the parenthesis is paralleled across the final two quatrains). In the same context we can also see the semicolon used to parallel and contrast ideas rather than using the colon to subordinate them and highlight an explicatory or implicatory relationship: in the middle of the poem, the semicolon is highlighting a tongue-in-cheek contrast between the repeated first person "I", "folk" with different views, and "she" with a different practice that she doesn't even think about (colon might have worked, as might elision, but would invoke different levels of pause, with different prosody); at the start, without the daringly parenthesized and tense-challenging "all I knew" line, a colon would have worked, but semicolons are appropriate for the three element list with its implication that the items have no direct relationship — what they taught and what they're called, is implied to be incidental, although of course they are important clues to the meaning of the poem because they aren't just names, and of course 'she' is the beneficiary of the present tense teaching.]

Unfortunately a lot of modern reviewers and publishers, and grammar checkers and teachers, don't take the time to understand the correct usage and logic of such grammar and punctuation, and even promote incorrect usage and provide incorrect corrections. They also seem to be immune to the prosodic effect and literary intent of careful choice of punctuation. Like Wilde and Kipling, I will sometimes agonize for a morning about the precise punctuation and then change my mind again in the afternoon (the earliest attested such anecdote, published in 1884, is attributed to Oscar Wilde: he took out a comma and put it back in again).

Then there are the exclamation-mark nazis who would eliminate them all with extreme prejudice. If it is an exclamation, it requires an exclamation mark — and this includes anything ordered or commanded, or delivered with sharp intonation.

Incidentally, the serving men turn up in different guises. Kipling uses them as interrogative pronouns, but they may also be used as relative pronouns — although there is some confusion possible here:

  • I talked to the politician [that/who/whom] I'd never met before.
  • I talked to the politician, whom I'd never met before.
The first (adjectival) version identifies which politician I talked to (there were several in the room). It is not necessary to use any of the three choices here, which illustrates that it is a 'weak' usage where an unaccented 'that' would be the best alternative. The 'who/whom' gets mixed up with the second meaning and becomes very sensitive to the prosody used (with 'whom' being grammatically correct, and 'who' being a common corruption of the required accusative to its nominative form).

The second (pronoun) version is parenthetic, simply commenting that I'd never previously met the person.

The case-corrupting 'who' form is problematic here because it suggests I'm going to talk about something the politician did (nominative = subject of a verb), whereas the 'whom' form is appropriate as I am talking about something that happened to the politician (accusative = object of a verb or preposition): I met him. Note that the accusative is used for the object of a verb even when it is the actor in a following apposition (where the verb is expressed by a participle or infinitive), but the nominative is used for the actor with active indicative verb forms:
  • I saw she left on the train. [simple past]
  • I saw her left (behind) by the train. [past passive participle: resulting from a past action]
  • I saw her leave on the train. [simple infinitive: expressing a contemporaneous action]
  • I saw her leaving in the train. [present active participle: expressing durative action]
  • I asked her to leave on the train. [indirect infinitive: expressing optative aim]
Another set of examples using a less regular verb to make the past distinctions clearer:

  • I saw he ate the shark. [simple past]
  • I saw him eaten by the shark. [past passive participle: resulting from a past action]
  • I saw him eat the shark. [simple infinitive: expressing a contemporaneous action]
  • I saw him eating the shark. [present active participle: expressing durative action]
  • I asked him to eat the shark. [indirect infinitive: expressing optative aim/act]

Productive vs Unproductive forms, and new usages

I've also had reviews rejected, apparently, for using -ize rather than -ise verbs ('fixing' these got it approved). This -ize is a productive ending pronounced with the 'z' sound: that is, it is used for modern and newly coined words, while ancient -ice and -ise words have roots in other languages. (Note that in older words a noun/verb distinction is reflected with -ice/-ise (e.g. device/devise, advice/advise), and some word families have lost either the orthographic distinction or the voicing that normally characterizes the verb (practice/practise, use/use, abuse/abuse, etc., extent/extend, intent/intend), although in speech the missing voicing may still leave marking with extra length for the verb and falling intonation (tone) for the noun.) An adjective or noun that doesn't already have its own underlying or cognate verb for how to make/achieve it can form a verb by adding -ize. When you use something to do something (means rather than result), the noun can be used directly as a verb (he headed the ball to the forward who shouldered past the goalie before kneeing it into the goal).

Coined or coerced words, and new usages in general, are not errors, and are indeed absolutely essential in Science and Science Fiction, to label new inventions, even if Word and Amazon etc. flag them.

Traditionally Published vs Self-Published

Another factor that might affect reviews is who published the book under review.

At the ARC/galley stage, prepublication, it is usually possible to communicate with the author or publisher (depending on how you got the ARC), and you can usually choose between a private communication and a formal review if there are issues, or indeed do both. The tricky part comes when there is no such obvious backchannel. But if I've agreed to write a review and have completed the book, then how it was published and priced becomes a factor in how I present the review.

If the book has a big name publisher and a big name author and a big number price, then I will tend not to hold back on scathing comments if I feel my time and/or money have been wasted. And unfortunately a lot of rubbish does come via this route these days, marketed on reputation and churned out without much quality control — sometimes even by ghostwriters. Frankly, these days, I find I am far more likely to be happy with an indie book that I've got through Kindle Unlimited on Amazon than an expensive traditional-route book that has hit all the trendy marketing buttons the traditional publishers demand (even if I got it as an ARC, if they view me as worthy of one — they review the reviewers).

For the struggling new author going the self-published or small/own press route, I have a lot of sympathy. I actually fit in both camps with half my books published by a well known publisher, and half published under a very small imprint. I also see a large part of my role as author and reviewer as being educative — that's why I write these long and hopefully not too tedious blogs as well.

So the same book that got a scathing review as a big big big book might get a much more encouraging and enlightening review when it is the little guy. Of course the number of stars should be the same, but I am enjoying indie books more, and BookSirens documents that I tend to rate indie and small press books higher (noting that I am in the top 1% of reviewers of self-published books by volume).

Writing a helpful review that, in a public space, actually helps indie authors improve their game while not scaring off their readers — that takes real time and effort: think hours rather than minutes, think sleeping on it, think revisiting and listing some of the specific kinds of issues with extracted examples and suggested improvements.

So what about the stars?

Of course, the obvious question is exactly what the stars mean in a review or rating. I think of them like grade points, which in turn have a connection to statistical concepts like standard deviation. 

Standardized scores (like IQ) are actually scaled to have a nominal mean or median (100) and standard deviation (15). About 68% of people are then expected to lie within 1 standard deviation of the mean (85-115) and about 95% within 2 standard deviations (70-130), on the assumption that the distribution of scores (IQ) is relatively normal (bell-shaped).

Marks out of 100 are sometimes standardized with a mean or median of 65 and a standard deviation of 10, so the 55-65-75 Pass/Credit or C/B range corresponds to one standard deviation either way, and grading to the curve tries to achieve ~65% of students in this range (2/3), while 45-55 (Conceded Pass or D) and 75-85 (Distinction or A) can each expect ~15% (1/7), with 2-4% getting High Distinctions or A+ (1 in 25 to 50), and hopefully <5% of those who complete everything missing the mark and actually failing. Of course, this doesn't account for those that don't even make it to or through the exam. In practice, the pass mark is normally 50, pushing a few more (~8%) from D to C (if normally distributed), and supplementary assessment may also push a student from D to C.
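Those band percentages fall straight out of the normal distribution; here is a minimal check using Python's standard library, assuming the mean of 65, standard deviation of 10 and grade boundaries described above:

```python
from statistics import NormalDist

# Marks standardized to mean 65, sd 10, as described above.
marks = NormalDist(mu=65, sigma=10)

# Grade bands on the 100-point scale (boundaries from the text).
bands = [
    ("F  (<45)", float("-inf"), 45),
    ("D  (45-55)", 45, 55),
    ("C  (55-65)", 55, 65),
    ("B  (65-75)", 65, 75),
    ("A  (75-85)", 75, 85),
    ("A+ (85+)", 85, float("inf")),
]

# Share of students expected in each band under the normal curve.
for label, lo, hi in bands:
    share = marks.cdf(hi) - marks.cdf(lo)
    print(f"{label}: {share:.1%}")
```

This yields roughly 68% in the combined C/B band, about 14% each for D and A, and a little over 2% in each tail, matching the rough figures quoted above.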


So what about the stars?

***** A or DN+ — maybe 15-20% — if you give five stars out freely, it ceases to mean much!

**** B or CR — maybe 20-25% — this is where most good but not exceptional books will lie

*** C or P — maybe 35-40% — this is fine, passable, but not distinguishable from the crowd

** D or CP — maybe 20-25% — this is poor, but I'll concede it has some redeeming features

* E or F — maybe ~5% — books should have been through enough wringers to avoid this!!!


This effectively converts the percentage marks associated with the grade points to the five star scale, by dividing by 20 and rounding up. 

Of course there is a tendency to be lazy regarding the *** books. If there is nothing particularly good or particularly bad about a book, I am disinclined to actually write a detailed review — so I may not bother, or may just give it a rating and/or a quick comment. These days I try to give it a rating and may actually look at other reviews and upvote those that I agree with. Reading should be fun, but reviewing is hard work, most especially for those books that lie in the middle with good and bad points.

If a book is getting consistent * and/or ** reviews, then that suggests revision and supplementary assessment is needed! If the reading was hard work, or I did not finish, then (unless there was a genre issue) * or ** is what it will get. If it simply wasn't my kind of book, I might skip the review. But if I can, I will try to express why I found it hard to finish, and identify what kinds of readers I would expect to like and not like the book — and may even rate it ***.

**** is reserved for books I actually liked (in some degree — remember the rounding), while ***** is for those relatively few books that I felt were exceptional, the kind of books I'd like to write — though I'd likely still think there were some things I'd have done differently: it doesn't have to be perfect to get five stars (again, remember the rounding).

Confusingly, some rating systems are out of different numbers of stars, e.g. four. In this case I'd divide the percentage by 25 and round up. Actually, I tend to mark out of 10 (for complex work, 20) with half marks, which then multiplies by 10 (resp. 5) to give a percentage, so rating out of 5 in whole numbers doesn't come easily, although I am used to it now — and there is still that last-minute rounding decision. I tend to rate first, write the review while it percolates, and then adjust at the end if needed (maybe up, if I found positive points to write about).

But I think in terms of fractions or decimals or "would be five stars except", and currently I am rating/ranking things for the Aurealis Awards out of 5 (as requested) but with one decimal place — so effectively with 50 shades of grey.


References and Notes

Some people may quibble about the word 'grammo' for 'grammatical error', or more precisely 'confused word error', like the wrong version of their/there/they're. 

Google's Ngram Viewer shows 'grammo' has been around since the mid-1800s; its usage decreased throughout the 20th century, but since 1996 it has been on the increase again.

Google Ngrams are a very useful tool for understanding the chronological (diachronic) usage of words and phrases (Ngrams).


Here is a 1997 paper about fixing the 'grammo' (it describes a system that corrected such confused words with learned statistics/AI of a kind now ubiquitous in word processors and grammar checkers; the demo system took the form of a macro package for Word, which didn't handle such things correctly natively):

David M W Powers (1997), Learning and Application of Differential Grammars, CoNLL97, ACL.

https://aclanthology.org/W97-1011.pdf

https://scholar.google.com.au/citations?view_op=view_citation&citation_for_view=qTvbbD4AAAAJ:_FxGoFyzp5QC

Google Scholar is another tool that's worth knowing about, to see the impact of scholarly work and track down all an author's publications when writing literature reviews (for non-fiction books and theses) — but that is a whole separate topic. (And yes, as both author and reviewer I regard a lit. review and bibliography/references as essential in non-fiction books: to be taken seriously, non-fiction authors need to demonstrate they are familiar with the existing sources of information out there, and ideally critique them. I've even included footnotes and scientific/technical references/evidence in my fiction, and fictional fables and citations in my non-fiction.)


And here is the full quotation and citation for Kipling's serving men:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Where and When
And How and Why and Who.
I send them over land and sea,
I send them east and west;
But after they have worked for me,
I give them all a rest.
I let them rest from nine till five,
For I am busy then,
As well as breakfast, lunch, and tea,
For they are hungry men.
But different folk have different views;
I know a person small —
She keeps ten million serving-men,
Who get no rest at all!
She sends ’em abroad on her own affairs,
From the second she opens her eyes —
One million Hows, two million Wheres,
And seven million Whys!


Rudyard Kipling (1865-1936)
in The Elephant’s Child (1900)
a Just So Story