Artificial Intelligence is the newest byword of this technology-driven world. From labor-saving robots that “think” to quantum supercomputers that may solve currently intractable problems of encryption and cybersecurity, governments, corporations, NGOs, laboratory and university-based scientists, as well as consumers and workers, all have their own visions and concerns about a burgeoning AI reality that rivals the imagination of sci-fi writers. So what role does faces technology play in this trend?


For example, scanners that read faces and body language, when coupled to drones, satellites and the guidance systems that control their flight, are machines that cooperate to “see”, “think” and “act” in monitoring and influencing human activity.

(Machines may be said to “think” when, like their human counterparts, they use sensors (corresponding to eyes, ears and touch) to recognize patterns, and can not only draw “logical” conclusions from such analyzed data but can also “learn” and change behavior, because their algorithmic “brains” have the capacity to adjust to changing circumstances and experience.)
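For readers curious what that kind of machine “learning” amounts to in practice, here is a minimal illustrative sketch, not drawn from the article and deliberately simplified: a hypothetical program that classifies sensor readings as “bright” or “dark” and nudges its internal rule whenever it guesses wrong.

```python
# Illustrative sketch only (not from the article): a tiny "machine" that
# senses, decides, and "learns" by adjusting its rule when it errs.

def train(samples, lr=0.1, passes=20):
    """Learn a brightness threshold from (reading, label) pairs.

    label: 1 = "bright", 0 = "dark". The threshold is nudged each time
    the current rule guesses wrong; that nudge is the "learning" step.
    """
    threshold = 0.0
    for _ in range(passes):
        for reading, label in samples:
            guess = 1 if reading > threshold else 0
            threshold += lr * (guess - label)  # adjusts only on errors
    return threshold

# Hypothetical sensor data: (reading, correct label)
data = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]
t = train(data)
# After training, the rule "reading > t" separates the two groups.
```

The point of the sketch is only the shape of the loop: sense, conclude, compare with experience, adjust; real systems do the same thing with vastly more data and more elaborate “brains”.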

And now, with increasing attention to the issues of social control and privacy surrounding the spread of mass surveillance worldwide, facial recognition and facial analysis programs that “think”, while offering apparent benefits for law enforcement, employment evaluation and medical diagnosis, may also pose unique problems.

In the context of world communications increasingly connected and complicated by social media and the Internet (themselves growing faster than the means to gauge and control them), observers have questioned society’s ability even to comprehend the ultimate implications of AI, wherein machines can be taught to “read” and “talk” to each other along with their human “supervisors”, as in The Day of the Dolphin, when the human testers realize that these non-human marine mammals are testing them as well.

Science fiction has speculated on interactions between humans and humanoid robots in which human emotion and intuition (which do not necessarily travel the defined, predictable circuits typical of robots) contrast with robots’ purportedly strict “logical” reasoning based on stable, unchanging memory and straightforward circuitry.

(It is also fair to point out that human memory is itself prone to distortion when accessed repeatedly over time, and is subject to random influences. Interestingly, quantum supercomputing also involves changeable memory and random influences, and could be the basis for imagining, or even developing, a true or truer “hybrid” of humans and humanoid robots.)

Earlier sci-fi themes may have touted the supposed superiority of humanoid robots’ machine-like consistency of reasoning, while conceding that mechanical parts can break or malfunction, but they generally failed to account for the inherent evolutionary advantages of human traits such as compassion, empathy and altruism, which favor survival in groups, if not in individuals, as predicted by Darwin himself as well as by the biologist E.O. Wilson.

Some have further speculated that imposing human facial features and expressions onto humanoid robots may be necessary to achieve optimal interaction between humans and their mechanical counterparts.

Indeed, studies have shown that people react more positively to, and feel more comfortable with, humanoid robots that have faces; and among those, the preference is for robots that more accurately mimic human features and expressions while appearing friendly, intelligent and trustworthy. In some instances, people were put off more by expressions of eyes and mouths that seemed incongruent, and therefore phony, than by the totally blank or neutral expressions of other robots.

Curiously, humans seem to be more at ease with humanoid robots that appear less human and more mechanical than with those that are ambiguously human (the so-called “uncanny valley”), much like their reactions to animalistic faces in sci-fi or horror movies. This may reflect a protective instinct favoring one’s own species, but one that can, in the extreme, become the basis of undue bias and racism against members of that species who show variability. This point is brought out in the spy novel Faces Tell All, wherein the characters speculate on how people do or don’t relate to facial features or complexions more or less like their own.

On October 4, 2019, Dr. Philip Wolfson, author of Faces Tell All, was invited to participate in the Rutgers University “Big Ideas Symposium”. Its stated purpose was ultimately to benefit society by presenting and discussing the visionary thinking and planning of social and physical scientists, and by collaborating to help actualize it, enabled by the considerable resources and technology afforded by major centers of higher learning like Rutgers.

One presentation, titled “Minds and Machines”, dealt with the ethical, economic, legal and philosophical concerns and responsibilities of institutions and society as we “push the frontiers of science”. Among the technological advances mentioned were AI and quantum computing, which would require major public and private support for education, training, programs and infrastructure. In a Q&A session, the ethical implications of AI surveillance, a technology dramatized in Faces Tell All, were acknowledged by the presenter, Dr. Peter March of Rutgers.

Dr. Stephanie Bonne presented “The Rutgers Institute for Patient-Centered Outcomes in Health Care”. It was emphasized that technology-aided feedback from patients was critical to producing optimal quality of care and outcomes. The mechanisms included self-reporting through attached electronic devices, the use of “predictive analytics”, as well as “call-ins” enhanced by “Skype” or other audio/visual transmissions. In discussion, the importance of visual feedback that could reflect patient affect and appearance, especially in the case of mental illness, was acknowledged—another role for faces technology.

Also on the medical front, Dr. Sohail Contractor lectured on “Advancing Artificial Intelligence Applications in Medicine”. In discussion, he touted AI diagnostic advances in radiology and ultrasound, and acknowledged a possible role for face and body scanning, as well as for integrated Western and Chinese medicine diagnostic models, as foretold in Faces Tell All.

“The Rutgers University Crime Lab Unit” was presented by Dr. Kimberlee Moran and addressed the role of new technologies in forensic intelligence gathering, usually in homicide or missing-persons cases. She also acknowledged the value of “live” methodologies used to catch criminals, such as polygraphs and body and face reading, as practiced by law enforcement and national security professionals in interviews and interrogations.
The final presentation, “The Rutgers Drone Port”, described advances in drone technology and the need for proper planning, regulation and supervision of drones, given their great potential for transportation and commercial uses as well as for surveillance and the military.

In summary, the technology described in Dr. Philip Wolfson’s innovative AI thriller, Faces Tell All, was relevant to the topics presented at the Rutgers Big Ideas Symposium. Reporting on this event in a blog article is intended to further the worldwide conversation about how humans interact with their machines.

Now available in print on demand and as an ebook.