Why subscribe?

This is a limited series, with guest posts, that will eventually be wrapped into a book.

It introduces the argument that AI designers will be forced into deep neuromimicry, and later deep biomimicry, as the only easily discoverable path to managing ever more useful, aligned, and secure AI. In simpler terms, “They” must become “Us+.”

It predicts that in the future, the problems of AI trust and security will converge with the problems of human trust and security, and that human-AI teams will design new, bio-inspired systems to manage and integrate human and AI communities, in a kind of “superorganism,” serving adaptive values. It also argues that this evolutionary development may happen on every other Earthlike planet in our universe, as a consequence of the nature of complex adaptive systems.

The newsletter covers AI performance, ethics, security, safety, cognitive science, autopoiesis, evolution, development, complexity, computer science, engineering, and select STEEPLES (Science, Technology, Economics, Environment, Politics, Law, Ethics & Society) issues and impacts regarding the future of AI.

Subscribe for the periodic newsletter and for access to the archives.

Stay up-to-date

You won’t have to worry about missing anything. Every new edition goes directly to your inbox.

Join the crew

Be part of a community of people who share your interests.

To explore other newsletters, and to make your own, visit Substack.com.

Subscribe to Natural Alignment

A biomimicry approach to designing safer, more ethical, and loyal AI systems

People

Foresight speaker, educator, consultant, and systems theorist in bio-social-tech evolution, development, and adaptiveness. Passions: humanizing the effects of exponential technologies, and discussing and improving 21st-century futures.
Interested in understanding the continuous evolution and development of intelligence in the universe. BS in Physics, UCLA; MS in Computer Science, USC.