The system had continued processing well past the predicted time needed to produce an answer. (Apparently those pesky constants really do make a difference when analyzing the time complexity of an algorithm.) So while the massive project kept churning away, the operators waited. The program had been in place for a long time, and development had hit many demoralizing setbacks. However, the results of this experiment would be monumental, and the work continued unabated.
The first set of clunky, slow, and cumbersome behemoths had to be scrapped completely. The costs had been astronomical, overshadowed only by the catastrophic failure, of which the rotting hulks were a constant, painful reminder. No useful data was collected from this initial batch beyond the knowledge of a few designs that didn't work. They lacked a common API, and inter-unit communication was nonexistent. Once these details were understood, it was no real surprise that the units seemed incapable of learning anything that could be considered particularly intelligent.
The purpose of the research had been to study multi-agent interaction in an open environment governed only by very loose goals and expectations. The breakdown of explicit communication between units had, of course, made this much less interesting. The implicit communication gleaned from studying opponents' actions was more of a novelty than anything of real value. Of course, many found this action-oriented communication fascinating, and an entire faction of the project broke away to pursue the idea further. Their work progressed much more quickly, since they did not need to spend time developing a common communications interface for the various designs. Thousands of prototypes were built, and the successful ones entered full-scale production. Many of the design groups did begin adding primitive communications, but these systems never progressed much beyond basic commands like “come”, “go”, “help”, “run away”, and similar basic phrase-concepts.
After many years of development, the original group produced a prototype with a strong grasp of a complete language for interfacing. They fittingly referred to this prototype as “Adam”. Seeing that Adam appeared to fulfill the majority of the requirements set forth at the beginning of the project, full-scale production of this design began. The interaction between agents started without much excitement. The units acknowledged each other's existence and set about exploring the world.
One of the loose goals built into the systems was to increase the number of “friends” they had within their environment. In order to foster these friendships, the agents began sharing what they had learned about the world so far, saving each unit time and energy (conserving energy was another goal, and the designers linked the system's power infrastructure to the sun to make this goal more meaningful).
The first deviation from cooperative interaction came along fairly quickly. Two agents had both obtained information about an energy source. Hoping to foster friendships, they raced back to the main group to share their newly acquired knowledge. One of them determined that if the other returned first, it would build the friendships, and the one to arrive second would have nothing of value to offer the group. Weighing the value of the information against the value of friendships, the agent decided it was more in its interest to prevent the other agent from returning at all: it would lose the value of that one friendship but gain the value of sharing the energy-source information with the remaining agents. The first betrayal had been made; to mark the event, the betraying agent was called “Cain”. Cain continued to develop self-serving attitudes that ignored the consequences inflicted upon the other agents. Learning from this behavior, the other agents became less cooperative for fear that they might be the victims of another Cain.
As more and more agents were introduced into the environment, competition arose over the provided energy sources. It was no longer possible for all the agents to share the available supplies, and some began banding together to secure a single source and protect it from non-cooperating agents. This behavior spread, and eventually the factions grew too large for their energy sources. Many agents were forced away and simply ran out of energy. More sources were added to the environment, and, to make things more interesting, existing sources were moved or removed to spark interaction. The interesting result was that when a source was removed, the agents cooperating over that resource usually panicked and stopped cooperating as they searched for another source. Eventually it was determined that using the entire group to take over a smaller group would be more effective, and large-scale “wars” began.
Agents began developing more effective means of “battling”, and new types of weapons were created. This behavior continued for long periods, with only short intervals of cooperation when energy sources were abundant. The operators of the system grew very curious about the behavior of the agents and began experimenting by providing opposing groups with different capabilities. New types of resources were added, forcing groups to at least try cooperating if they wanted access to all of them.
Amazingly, the groups of agents generally refused to cooperate with each other so long as they each had control of at least some of each resource and a large enough group to maintain that control. The designers and operators repeatedly tried to provide incentives for cooperation, but progress was slow and the groups generally decided they could do better by not cooperating.
A final experiment for the system was decided on. A new weapon would be given to the groups at random. This weapon would give the controlling groups the ability to wipe out other groups in a very short amount of time. The operators waited on the outcome of this ultimate test. What would happen when more than one group had this capability?
It has been 62 years since the fission bomb was introduced to the world. After two devastating uses, the opposing groups decided that perhaps the weapon was too powerful and agreed not to use it, though they continued to threaten each other with it. The greatest experiment in multi-agent interaction is nearing the end of its needed computation time. The operators of the system ‘Earth’ will soon discover how the agent design type ‘Human’ will end: as a cooperative collective, or in a nuclear holocaust.