
The Black and White problem


Introduction

Utilitarianism is an ethical theory whose aim is to quantitatively maximize the good consequences for a population; in other words, "to maximize pleasure and minimize pain". This sometimes requires that individual interests be sacrificed for the greater good.

In order to develop a mathematical formalisation of utilitarianism, one has to consider an abstract universe in which entities have "utility" and "morality", just as a fundamental particle has "position" and "spin".

The Black and White problem is a simple game played by artificial intelligences (some good, some bad) and a god. Each turn, every bad AI selects another AI uniformly at random and kills it. The choices are simultaneous, so two AIs can kill each other in the same turn. The game ends when the remaining population is stable, that is, when no further killing can occur.
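
The following is a minimal Python sketch of one play-through under these rules. The function name simulate_game and its arguments are illustrative, not part of the original problem statement.

import random

def simulate_game(num_good, num_bad, rng=random):
    """Play the killing rounds until the population is stable.

    Each turn every surviving bad AI picks another surviving AI uniformly
    at random; all chosen targets die at once, so two bad AIs can kill
    each other in the same turn.  Returns the surviving (good, bad) counts.
    """
    # Survivors as a list of flags: True = bad AI, False = good AI.
    survivors = [False] * num_good + [True] * num_bad
    while True:
        bad_ids = [i for i, is_bad in enumerate(survivors) if is_bad]
        # Stable: no bad AI left, or a lone bad AI with nobody to target.
        if not bad_ids or len(survivors) == 1:
            break
        # All bad AIs choose targets simultaneously, then the targets die.
        targets = set()
        for shooter in bad_ids:
            choices = [i for i in range(len(survivors)) if i != shooter]
            targets.add(rng.choice(choices))
        survivors = [is_bad for i, is_bad in enumerate(survivors)
                     if i not in targets]
    return survivors.count(False), survivors.count(True)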

Before the game starts, the god randomly kills a number of artificial intelligences. The question that arises is: knowing the total number of AIs and the number of bad AIs, how many should the god kill so that, at the end of the game, the utility of the remaining population is maximized?
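
Here is a Monte Carlo sketch of the god's decision, building on simulate_game from the previous sketch. The utility used here (+1 for each surviving good AI, -1 for each surviving bad AI) is an assumption chosen for illustration; the problem's actual utility is given on the definitions page.

import random

def expected_utility(total, num_bad, kills, trials=2000, rng=random):
    """Estimate the expected post-game utility when the god first kills
    `kills` AIs chosen uniformly at random from the whole population.

    Assumed utility: +1 per surviving good AI, -1 per surviving bad AI.
    Uses simulate_game() from the previous sketch.
    """
    num_good = total - num_bad
    score = 0.0
    for _ in range(trials):
        # The god's random cull: label AIs 0..total-1, the first num_bad are bad.
        victims = rng.sample(range(total), kills)
        bad_killed = sum(1 for v in victims if v < num_bad)
        good_killed = kills - bad_killed
        good_left, bad_left = simulate_game(num_good - good_killed,
                                            num_bad - bad_killed)
        score += good_left - bad_left
    return score / trials

def best_cull(total, num_bad, trials=2000):
    """Kill count (0..total) that maximizes the estimated expected utility."""
    return max(range(total + 1),
               key=lambda k: expected_utility(total, num_bad, k, trials))

# Example: with 20 AIs of which 6 are bad,
# best_cull(20, 6) estimates how many the god should kill up front.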

The problem is Paul Wright's idea and is not yet completely solved. Read further for definitions, results so far, or references and further reading. The problem is being discussed on Alkaline's Tetraspace forum (link).
