The AI Risk project studies future tools with general AI, called tool AIs. Tool AIs modeled on the human neocortex can answer difficult questions, and these tools form the basis for various kinds of intelligent devices. Risks arise both from the tools themselves and from the answers they provide. The project’s goal is first to design neocortex-based tool AIs that pose no existential risk to humanity, and then to describe systems of tool-based devices that provide answers with acceptable non-existential risk to users and other stakeholders.
The project’s first part is described in two published papers: “Tutorial on systems with antifragility to downtime” and “A thousand brains: toward biologically constrained AI.” The second part studies the properties tool AIs need in order to eliminate existential risk, and how to create antifragile, tool-based systems with acceptable non-existential risk. The third part explores other risks users face when relying on tool-based personal assistants.