“Brain Simulator II is a biologically modeled open source neural simulator”

FutureAI.guru develops software to fill in areas where artificial intelligence (AI) has failed. This includes fundamental understanding of real-world objects, cause and effect, and the passage of time – in short, the real-world context through which we humans understand the world.

JAXenter: Thank you for taking the time to answer our questions. First of all, what is Brain Simulator? What does it accomplish?

Charles Simon: Brain Simulator II is a biologically modeled open source neural simulator with the added ability to incorporate any desired software and functionality. The main value of the neuron simulator is to explore the capabilities and limitations of biological neurons so that we can focus on the capabilities of the human brain that AI lacks.
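For readers who have not worked with biological neuron models, the sketch below is a minimal leaky integrate-and-fire neuron in Python. It is not code from Brain Simulator II, which has its own simulation engine; the function and parameter names are illustrative assumptions, and the snippet only shows the kind of spiking behavior such simulators model.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron (illustrative only).

    inputs    -- charge arriving at each time step
    threshold -- membrane potential at which the neuron fires
    leak      -- fraction of accumulated charge retained per step
    """
    potential = 0.0
    spikes = []
    for charge in inputs:
        potential = potential * leak + charge  # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)   # fire a spike...
            potential = 0.0    # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# Steady sub-threshold input accumulates until the neuron fires periodically
print(simulate_lif([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The key contrast with the weighted-sum units of deep learning is that the output is a timed spike rather than a continuous value, which is part of what makes some machine learning algorithms hard to map onto biological neurons.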

So far, it has demonstrated that machine learning algorithms are implausible to implement in a biological neuron model, while other AI methods, such as knowledge graphs, are much more likely to exist in the brain.

The open-source brain simulator is available for download and will help all AI professionals get a better perspective on how today’s AI compares to plausible neural functions.


JAXenter: Can you tell us about the latest advances in Brain Simulator technology for understanding 3D objects?

Charles Simon: The Brain Simulator also includes the ability to replace the function of any group of neurons with a high-level program.

For example, the human brain potentially devotes hundreds of millions of neurons to depth perception, which can be accomplished in code with a few lines of trigonometry. The current development of Brain Simulator 3 has extended the system with a knowledge graph (the Universal Knowledge Store, or UKS) to manage objects in a 3D world through an internal mental model so that, like a person, the system can know the objects in its immediate surroundings.
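As a rough illustration of the "few lines of trigonometry" claim, here is a minimal stereo-triangulation sketch. The function and parameter names are hypothetical and not taken from the Brain Simulator codebase; it assumes a calibrated pair of parallel cameras.

```python
import math

def depth_from_angles(baseline_m, angle_left_rad, angle_right_rad):
    """Triangulate the distance to a point seen by two cameras.

    baseline_m      -- distance between the two cameras, in meters
    angle_left_rad  -- angle of the point off the left camera's forward axis
    angle_right_rad -- angle of the point off the right camera's forward axis
    (both angles measured toward the midline, in radians)
    """
    # The point and the two cameras form a triangle over the baseline;
    # elementary trigonometry gives the perpendicular distance.
    return baseline_m / (math.tan(angle_left_rad) + math.tan(angle_right_rad))

# Example: cameras 6 cm apart, point seen ~0.8 degrees off-axis by each camera
print(depth_from_angles(0.06, math.radians(0.8), math.radians(0.8)))  # ~2.1 m
```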

JAXenter: Why is 3D understanding so difficult to achieve in AI models and what opportunities will this open up?

Charles Simon: In fact, the data model of a 3D world is well known and used in games all the time. The difficulty lies in relating it to the complexity and ambiguity of the real world.

Add to that the idea that in human intelligence, what we know exists only in the context of other things we know. Really basic things like a centimeter or an inch we know in terms of the size of our fingers or the sizes of the objects they measure (or, to be academic, in terms of the wavelengths of light that define them). A centimeter in the abstract is not very meaningful. So, in the world of human understanding, starting with a coordinate grid is the wrong direction.
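To make "things known only in terms of other things" concrete, here is a toy sketch of storing relative relationships as graph triples instead of absolute coordinates. The class and relation names are invented for illustration and are not the actual UKS API.

```python
class ToyKnowledgeStore:
    """Toy graph of (subject, relation, object) triples -- not the real UKS."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, relation, obj):
        self.triples.add((subject, relation, obj))

    def query(self, subject, relation):
        return [o for (s, r, o) in self.triples if s == subject and r == relation]

store = ToyKnowledgeStore()
# A unit is stored relative to other known things, not as an abstract quantity
store.add("centimeter", "about-the-width-of", "fingernail")
store.add("inch", "about-the-width-of", "thumb")
store.add("fingernail", "part-of", "finger")

print(store.query("centimeter", "about-the-width-of"))  # ['fingernail']
```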

JAXenter: Could you explain a bit about how FutureAI’s “Sallie” works and what it is?

Charles Simon: Sallie is our name for the artificial entity being developed on the FutureAI platform. She consists of a “mind” that resides on a substantial computer and a variety of sensory “pods” through which she can learn about the real world.

The sensory modules are connected to the mind via WiFi and allow Sallie to explore and interact with her environment and the objects within it to learn about fundamental concepts of reality. It is through this interaction and ‘play’ that Sallie will gain a better fundamental understanding.
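FutureAI has not published Sallie’s actual wire protocol, so the following is only a guessed-at sketch of what a pod-to-mind message might look like: one JSON reading pushed over TCP. The host, port, and field names are all assumptions for illustration.

```python
import json
import socket

def send_reading(reading, host="192.168.1.50", port=9000):
    """Push one sensor reading to the 'mind' as a JSON line over TCP.

    The host, port, and message fields are invented for this sketch;
    a real pod would keep a persistent connection and handle failures.
    """
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall((json.dumps(reading) + "\n").encode("utf-8"))

send_reading({"pod": "vision-1", "kind": "distance_cm", "value": 87.5})
```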

JAXenter: What applications could benefit from Sallie?

Charles Simon: It is unrealistic to expect that an AI trained only on images (or sound, for that matter) could gain a fundamental understanding of the real world.

Through Sallie’s multi-sensory modules, over the next few months she will gain this fundamental understanding of the relationships among all the information she learns from the environment. That understanding can then be applied to many areas. Personal assistants like Alexa or Siri, for example, could be improved because they would better comprehend user questions and would not be as script-based.

Autonomous vehicles will be better able to handle real-world scenarios. Robots will be better able to navigate and interact with people and their environment. Even the most basic AI functions of speech recognition and computer vision will be able to perform better, as the underlying understanding will also give them a head start in interpreting their input.


JAXenter: In your opinion, is there a reason to worry about AGI and current AI developments? Is there a way to abuse it on a large scale?

Charles Simon: Concerns about the risks of AGI are very reasonable but often misplaced. Like any powerful technology, AGI can be misused if initially placed in the wrong hands. The sci-fi scenario of machines becoming sentient and spontaneously turning on their creators is unlikely because all of these systems are goal-based and the creators of the system will set the initial goals. If those goals are to provide and expand knowledge for the good of mankind, that is completely different from the much riskier goals of gaining more wealth and power.

JAXenter: What’s on the roadmap for the rest of 2022 at FutureAI?

Charles Simon: FutureAI has just completed its first funding round, raising $2 million in equity, and has staffed up to 10 people, with plenty of working software and a prototype pod. Over the next quarter, we will expand our Sallie prototype with additional exploration capabilities and enhanced modules to create “glimmers of understanding”.

Over the rest of the year, we’ll be building a data set of things Sallie has learned and expanding it to progressively more diverse and useful knowledge. Next, we’ll target specific applications with Sallie’s general knowledge.

