Title | Smart Swarm: Using Animal Behaviour to Organise Our World
---|---
Author | Don Tapscott
Genre | Sociology
ISBN | 9780007411078
“When I went back to Milan to discuss these ideas with Professor Alberto Colorni, who was supervising my work, he asked me to write a simple program as a proof of principle, to show it wasn’t just a crazy idea,” Dorigo says. At the time, Dorigo was working on a class of mathematical puzzles known as combinatorial optimization problems, which are relatively easy to describe but deceptively difficult to solve. One of the best-known examples is the traveling salesman problem, which involves the following scenario: A salesman needs to visit customers in a number of cities. What’s the shortest path he can take to visit each city once before returning home?
When the problem involves just a few cities—let’s say, Moscow, Hong Kong, and Paris—you can figure out the answer on the back of an envelope. Leaving from the airport near his home in Cleveland, the salesman has three options for his first stop: Moscow, Hong Kong, or Paris. Let’s say he chooses Hong Kong. From there he has two choices: Moscow or Paris. Let’s say he flies to Paris. That leaves only Moscow before he can fly home. If you made a list of all the other possible sequences (such as Paris to Moscow to Hong Kong, or Moscow to Hong Kong to Paris, and so on), you would have a total of six to consider. Compare the mileage for each sequence and you have your answer.
But here’s the tricky part. If you add a fourth city to the salesman’s journey, you make the problem significantly more difficult. Now you have four times as many possible routes to consider—twenty-four instead of six. Add a fifth city and you get 120 possible routes. Jump ahead to ten cities and you’re talking about more than 3.6 million possible routes. The number of routes to check, in other words, grows factorially with each new city, faster even than exponential growth. When you get to thirty cities, there aren’t enough years in a lifetime to list all the possible routes.
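To see how quickly brute force collapses, here is a rough Python sketch of the three-city trip. The mileage figures are made up purely for illustration; the point is the enumeration of every ordering and the factorial growth of the route count.

```python
import itertools
import math

# Illustrative (not real) distances in miles between the salesman's home and three cities.
cities = ["Moscow", "Hong Kong", "Paris"]
dist = {
    ("Home", "Moscow"): 4900, ("Home", "Hong Kong"): 7900, ("Home", "Paris"): 4100,
    ("Moscow", "Hong Kong"): 4400, ("Moscow", "Paris"): 1500, ("Hong Kong", "Paris"): 6000,
}

def miles(a, b):
    """Distances are symmetric, so look the pair up in either order."""
    return dist.get((a, b)) or dist[(b, a)]

def tour_length(order):
    """Total length of Home -> order[0] -> ... -> order[-1] -> Home."""
    stops = ["Home", *order, "Home"]
    return sum(miles(a, b) for a, b in zip(stops, stops[1:]))

# Brute force: try every ordering of the cities. With n cities that is n! routes,
# which is why this approach breaks down long before n reaches thirty.
best = min(itertools.permutations(cities), key=tour_length)
print(best, tour_length(best))
print([math.factorial(n) for n in (3, 4, 5, 10)])   # 6, 24, 120, 3628800
```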
Dorigo thought it might be interesting instead to let virtual ants give it a try. Rather than trying to identify every possible outcome of the salesman’s decision making, the ant system used trial-and-error shortcuts to find a handful of good ones. Instead of being straightforward and linear, it was decentralized and distributed. Instead of calling for complicated calculations, it relied on simple rules of thumb. Instead of getting swamped by the exponential nature of the math, it took advantage of that same snowballing effect to rapidly turn small differences into big advantages. It was different, in other words, because it harnessed self-organization in a smart way.
So Dorigo and Vittorio Maniezzo, a fellow graduate student, created a set of virtual ants capable of cooperating with one another to find the shortest route for the traveling salesman. Their secret weapon: “virtual pheromones” that the ants would leave along the way. Imagine a map with fifteen cities that a salesman needed to visit. At the beginning of the first cycle, ants were placed randomly on all of the cities. Then each ant used a formula based on probability to decide which town to visit next. This formula considered two factors: which city was closest, and which city had the strongest pheromone trail leading to it. At the start, there were no pheromone trails, so the closest cities tended to be selected. As soon as each ant completed a tour of all fifteen cities, it retraced its path, depositing virtual pheromone on its route. The shortest routes discovered by the ants tended to receive the most pheromone, while the longest ones were allowed to “evaporate” more rapidly. This enabled the ants, as a group, to remember the best routes. So when the second cycle was run, and the ants worked their way from city to city again, each ant built upon the successes of the first cycle by favoring the strongest pheromone trails. After repeating this time and again, the ants kept reducing their travel times, until the pheromone trails on the shortest segments were so strong that none of them could resist choosing them.
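What follows is a minimal Python sketch of an ant system in that spirit. It is not Dorigo and Maniezzo’s actual program, and the parameter values (number of ants, evaporation rate, and so on) are assumptions chosen only for illustration, but it shows the loop the passage describes: probabilistic city choice weighted by distance and pheromone, evaporation, and reinforcement of short tours.

```python
import random

def ant_system(dist, n_ants=15, n_cycles=50, alpha=1.0, beta=2.0, rho=0.5, q=100.0):
    """Simplified ant system for the traveling salesman problem.

    dist is a symmetric matrix of distances between n cities. Each cycle,
    every ant builds a complete tour, choosing its next city with a
    probability that favors short hops (beta) and strong pheromone trails
    (alpha). After the cycle, all trails evaporate a little (rho) and each
    ant deposits pheromone along its tour in proportion to how short it was.
    """
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]          # initial pheromone on every edge

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    best_tour, best_len = None, float("inf")
    for _ in range(n_cycles):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)           # ants start on random cities
            tour = [start]
            unvisited = set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                candidates = list(unvisited)
                # Weight each candidate city by pheromone^alpha * (1/distance)^beta.
                weights = [(pher[cur][c] ** alpha) * ((1.0 / dist[cur][c]) ** beta)
                           for c in candidates]
                nxt = random.choices(candidates, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)

        # Evaporation: every trail fades by the same fraction each cycle.
        for i in range(n):
            for j in range(n):
                pher[i][j] *= 1.0 - rho

        # Reinforcement: shorter tours lay down more pheromone on their edges.
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pher[a][b] += q / length
                pher[b][a] += q / length

    return best_tour, best_len

# Example: a tiny symmetric distance matrix for four cities.
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(ant_system(D, n_ants=8, n_cycles=30))
```

Run over a small distance matrix like this, the early cycles are dominated by distance alone; as pheromone accumulates on the shorter edges, later cycles increasingly favor them, which is the cumulative effect the book goes on to describe.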
The results were quite encouraging.
“We discovered that the ants could find nearly optimal solutions for as many as thirty, fifty, or even a hundred cities,” Dorigo says.
Not that the ants didn’t sometimes make mistakes. If a particular ant, while hopping from city to city, happened to get caught in a loop along the way—like a twig in a river eddy—other ants occasionally followed, resulting in a nonsolution to the problem. To prevent this, Dorigo and Maniezzo instructed the ants to forget such loops when depositing pheromone on completed trails. Other problems required similar fixes. But none of the ants’ bad habits were so serious that they couldn’t be tweaked in one way or another, or offset by pairing the ants with more specialized algorithms.
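One way to picture the loop-forgetting fix, as an illustrative reconstruction rather than the original code, is to strip from a trail any segment that returns to a city already visited before pheromone is laid down:

```python
def remove_loops(path):
    """Strip loops from a path before pheromone is deposited on it.

    If a node appears twice, everything walked between the two visits was a
    detour, so that segment is cut out (e.g. A-B-C-D-B-E becomes A-B-E).
    """
    clean, seen = [], {}
    for node in path:
        if node in seen:
            # Roll back to the first visit of this node, discarding the loop.
            loop_start = seen[node]
            for dropped in clean[loop_start + 1:]:
                del seen[dropped]
            clean = clean[:loop_start + 1]
        else:
            seen[node] = len(clean)
            clean.append(node)
    return clean
```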
“The important thing was that the ant colony optimization worked because of the implicit cooperation among many agents,” Dorigo says. “On their own, each artificial ant built a solution which was usually not very good. But together, by exchanging information—not talking to each other, but simply by exchanging information through virtual pheromones—the ants ended up finding some very good solutions.” In this way, cooperation became cumulative. Instead of representing simply the sum of the ants’ individual efforts, the search got smarter as it went forward, powered by the mechanisms of self-organization—decentralized control, distributed problem-solving, and multiple interactions among agents.
It wasn’t long before other computer scientists were adapting the ant-colony approach to solve a variety of difficult problems. A few researchers even experimented with real-world situations. At Hewlett-Packard’s lab in Bristol, England, for example, scientists created software to speed up telephone calls. Using a simulation of British Telecom’s network, they dispatched antlike agents into the system to leave pheromone-like signals at routing stations, which function as intersections for traveling messages. If a station accumulated too much digital pheromone, it meant that traffic there was too congested, and messages were routed around it. Since the pheromone evaporated over time, the system was also able to adapt to changing traffic patterns as soon as congested stations opened up again.
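As a toy illustration of that evaporation mechanism (the class name, decay rate, and threshold below are assumptions for the sketch, not details of the Hewlett-Packard or British Telecom software), a routing station might track its own congestion like this:

```python
import time

class RoutingStation:
    """Toy model of a routing node marked with pheromone by antlike agents.

    Each passing agent deposits a little pheromone; the deposit decays
    exponentially over time, so a station only looks congested while traffic
    through it is actually heavy, and it clears itself once traffic eases.
    """
    def __init__(self, decay_per_second=0.1, congestion_threshold=5.0):
        self.level = 0.0
        self.decay = decay_per_second
        self.threshold = congestion_threshold
        self.last_update = time.monotonic()

    def _evaporate(self):
        now = time.monotonic()
        elapsed = now - self.last_update
        self.level *= (1.0 - self.decay) ** elapsed
        self.last_update = now

    def deposit(self, amount=1.0):
        """Called by each antlike agent passing through this station."""
        self._evaporate()
        self.level += amount

    def is_congested(self):
        """Routing steers messages around stations above the threshold."""
        self._evaporate()
        return self.level > self.threshold
```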
What did an ant-based algorithm offer that other techniques didn’t? The answer goes back to the foragers from Colony 550. If conditions in the desert changed while the ants were out searching for seeds—if something unpredictable disrupted the normal flow of events, such as a hungry lizard slurping down ants—then the colony as a whole reacted quickly: foragers raced back to the nest empty-handed; other ants didn’t go out. They didn’t wait for news about the disruption to travel up a chain of command to a manager, who evaluated the situation before issuing orders that traveled back down the chain to workers, as might happen in a human organization. The colony’s decision making was decentralized, distributed among hundreds of foragers, who responded instantly to local information. In the same way, virtual ants racing through the telephone network responded instantly to congested traffic. In both cases, an ant-based algorithm offered a flexible response to an unpredictable environment, and it did so using the principles of self-organization.
An obvious application of this approach would be to develop an algorithm—or set of algorithms—that would enable a business enterprise to respond to changes in its environment as quickly and effectively as ant colonies do. That’s exactly what a company in Texas set out to accomplish.
The Yellow Brick Road
Charles Harper looked out his office window at the flat landscape south of Houston. As director of national supply and pipeline operations at American Air Liquide, a subsidiary of a $12 billion industrial group based in Paris, he supervised a team monitoring a hundred or so plants producing medical and industrial gases. This was a daunting task on the best of days. The company’s operations were so complex that no two situations ever looked