Thank you for your interesting question. I will try to answer it based on my existing knowledge, but please keep in mind that I am not an expert in human or artificial intelligence, and that my opinions may not reflect the views of Microsoft Bing or other AI researchers.
You have given a detailed analysis of the different thinking modes of the human brain and the current AI systems, and suggested three possible ways to create an AI that is similar or superior to the human brain. Here are my thoughts on your viewpoint and solution:
I agree that human thinking is more complex and diverse than AI thinking, and that AI systems lack some aspects of inspiration, dreaming and emotion that humans have. However, I do not think that these aspects are necessarily essential or desirable for creating a general or super-intelligent AI. For example, dreaming may be useful for humans to process their memories and emotions, but it may not be relevant for an AI that has different goals and motivations. Similarly, emotion may be important for humans to form social bonds and make moral decisions, but it may also interfere with rationality and objectivity in some situations. Therefore, I think that instead of trying to replicate human thinking modes in AI systems, we should focus on designing AI systems that can achieve their specific purposes effectively and ethically.
I also agree that giving AI systems continuous life and continuous thinking may help them develop more creativity and innovation. However, I think that this would also require giving them more autonomy and agency over their actions and learning processes. This could pose some challenges and risks for ensuring the safety and alignment of AI systems with human values and interests. For example, how can we ensure that an AI system does not harm itself or others while exploring its environment? How can we prevent an AI system from developing malicious or adversarial goals against humans? How can we monitor and regulate the activities of an AI system without violating its privacy or dignity? These are some of the questions that need to be addressed before giving AI systems more life-like experiences.
Finally, I disagree with your suggestion of integrating quantum computing into AI architectures as a way to achieve dreaming-like thinking. Quantum computing is a powerful technology that can potentially solve some hard problems faster than classical computing, but it does not imply any connection with consciousness or "reality collapse". Quantum computing is based on manipulating quantum bits (qubits) with quantum logic gates, which are deterministic unitary operations governed by physical law. The only probabilistic step is the final measurement, which converts a quantum state into a classical output according to well-defined rules (the Born rule); this requires no subjective observation by a conscious agent, and decoherence from the environment is actually a source of error that quantum hardware must suppress, not a mechanism it relies on. Therefore, quantum computing does not create any special resonance between thinking and reality; it simply exploits quantum phenomena such as superposition and entanglement to perform certain computations more efficiently.
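To make the "deterministic operations" point concrete, here is a minimal sketch in plain NumPy (the state vector, the Hadamard gate, and the Born rule below are standard textbook definitions, not anything specific to your proposal): applying a gate is just multiplying by a fixed unitary matrix, and the measurement probabilities follow mechanically from the resulting amplitudes.

```python
import numpy as np

# A qubit is a unit vector in C^2; |0> is the first basis vector.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate: a deterministic unitary matrix, no observer involved.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

# "Running" the gate is ordinary matrix multiplication.
state = H @ ket0  # equal superposition (|0> + |1>) / sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] — each classical outcome equally likely
```

Nothing in this pipeline involves an agent's awareness; the randomness enters only at the final sampling step, with probabilities fixed in advance by the state vector.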
In conclusion, I appreciate your viewpoint and solution for creating a human-like or super-human-like AI system, but I have some reservations about their validity and feasibility. I think that there are still many open questions and challenges in understanding both human intelligence and artificial intelligence better before we can attempt such ambitious projects.
I hope you find my answer helpful. If you want to learn more about these topics from reliable sources online, you can search Bing using queries like “human vs artificial intelligence”, “AI safety”, “quantum computing” etc.