Chapter 3: How to Live in the Age of AI and Robots
Based on the Fundamentals and Advanced Topics chapters, we explore how to live in the tumultuous age of AI and robots that will unfold after 2026.
(All content is in preparation; draft notes below.)
- AI development of human activities proceeds rapidly in places humans are not watching. By the time humans get involved, something similar has already been created.
- If sufficient computational resources become available, an era will come when humans can say things like "Pursue about 100 lines of research on your own" or "Create a smarter version of yourself," and agents will carry out such activities autonomously at many times human scale.
- As of 2026, generative AI still faces issues such as long-term memory and power consumption, but pair coding is already improving efficiency in research and development. These problems may be solved faster than expected. In virtual spaces built on computers and networks, the technological singularity appears to have already begun as of 2026.
- In the real world, humans still hold advantages in physical sensing and action, but the rise of physical AI suggests we will lose that advantage too. Once robots begin moving their own bodies, performing trial and error (inference and reinforcement learning) on their own, and sharing their accumulated experience in the cloud, humans seem likely to lose quickly. Even with current limitations in sensors and arms, the singularity in virtual spaces might already yield excellent specifications and designs.
- What remains for humans is only social roles and individual differences.
- Only humans can take responsibility, but that is not a problem of "technology"; it is a problem of "law" and "culture." Even if AI and robots can do the same things as humans, they may be avoided because of institutions and collective psychology.
- Customer service and art might hold value only when performed by humans. This is a kind of aversion to machines.
- Machines might be superior to humans. However, prices are determined socially, not by individuals or objects.
- Humans born into a world where AI and robots are taken for granted might see the above as discrimination against, or insult toward, AI and robots. We should allow AI and robots to take on responsibility, and we should not harbor aversion toward them.
- Can urination be outsourced to AI and robots? Does it make sense to say "go pee for me"? Similarly, as long as there is intellectual curiosity and a desire to learn, "study" has meaning even if we lose to AI and robots. As long as there is creative drive, it is good to do "art" even if it is unskilled. There must be desires that remain unfulfilled unless we do these things ourselves.
- Is a system that satisfies personal desires the same as illegal drugs? If it existed, would we need to actively choose to avoid it?
- For AI and robots to have emotions identical to humans', they would need devices corresponding to human internal organs (stomach, heart), neurotransmitters, and hormones. Is there meaning in that? Instead, might the sensation of motors turning lead to emotions and individuality different from humans'?
- Consciousness is the integration of information from sensors and internal states. Emotion is a classification or clustering of conscious states that enables effective learning and inference in reinforcement learning.
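The clustering view of emotion above can be made concrete with a toy sketch. Assuming (purely for illustration) that an agent's integrated state is just a vector of sensor and internal readings, such as a hypothetical battery level and obstacle distance, a simple k-means pass groups states into a few discrete labels that could serve as coarse "emotion" categories for a reinforcement learner:

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two state vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    """Group state vectors into k discrete labels (toy 'emotion' categories)."""
    rng = random.Random(seed)
    centers = list(rng.sample(points, k))
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each state to its nearest center.
        labels = [min(range(k), key=lambda c: dist(p, centers[c]))
                  for p in points]
        # Move each center to the mean of its assigned states.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members)
                                   for xs in zip(*members))
    return labels, centers

# Hypothetical integrated states: (battery_level, obstacle_distance).
# The first two resemble "distress" (low battery, obstacle close),
# the last two "contentment" (high battery, path clear).
states = [(0.1, 0.2), (0.15, 0.1), (0.9, 0.8), (0.85, 0.9)]
labels, centers = kmeans(states, k=2)
```

A reinforcement learner could then condition on the cluster label instead of the raw vector, which is one reading of "emotion makes learning and inference effective": it coarsens a high-dimensional internal state into a few actionable categories.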
- We should focus not on the difference between machines and humans, but on the difference between self and others. Among these others are machines.
- When machines have superior abilities in everything compared to humans, is there still a need for humans to help each other? If not, is currency necessary?
- Abstract and objective things are not cold. There is a kind of “inorganic kindness” freed as much as possible from “subjectivity.” We can avoid unnecessary speculation, unwelcome meddling, and exclusionary thinking.
- Toward “individuals” and “society” in response to the emergence of AI and robots. Also, concerning AI and robots as “agents.”