Robot, Take the Wheel. Jason Torchinsky


      At one point in the song she tells them about the restaurant at the hotel, and sings this line: “Our pre-digested food is cooked by infra-red!” I think it’s safe to say that what people are likely to find appealing has changed dramatically from what it was in the 1950s.

      1957: RCA Labs and the State of Nebraska’s Experimental Highway

      Well, “highway” is a bit generous, since this was just a four-hundred-foot stretch of road, but you get the idea. A Nebraska state traffic engineer named Leland Hancock was very taken with the idea of automatically controlling cars on the highway to combat driver error and fatigue and to prevent accidents, and he was determined to get others interested. Thanks to a lot of determined letter writing, he was able to get researchers at the RCA Corporation to work with him. Together they arranged to lay coils of wire at the intersection of US Route 77 and Nebraska Highway 2 as the roads were being built.14

      On October 10, 1957, the development team carried out a test, witnessed by eighty-three people. Using a 1957 Chevrolet Bel Air with antenna coils mounted on the bumper, a special meter in the car, and a partially obstructed windshield, a driver was able to drive on the road and follow its course by watching the deviation of the meter’s needle. If the car got too close to a car in front of it, an alarm sounded, ringing a bell and flashing a light until the car slowed down enough to open the distance to an acceptable level.
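      The headway-warning half of the demo is simple enough to sketch as logic. Here’s a minimal illustration in Python; the threshold and names are placeholders of mine, not details from the RCA system:

```python
# Hypothetical sketch of the 1957 demo's follow-distance alarm.
# The threshold is invented for illustration; the text doesn't give one.

SAFE_GAP_FT = 100.0  # assumed minimum acceptable gap to the car ahead

def alarm_active(gap_ft: float) -> bool:
    """Ring the bell and flash the light whenever the gap is too small;
    the alarm stays on until the driver slows enough to reopen the gap."""
    return gap_ft < SAFE_GAP_FT
```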

      While the car wasn’t doing the driving itself (a person still actuated the controls), it did replace the human driver’s need for vision; had the researchers chosen to, the system could have been rigged to actuate the car’s controls directly.

      1960: UK Transport and Road Research Laboratory’s Experimental Four Miles of M4

      In an experiment quite similar to the RCA/State of Nebraska one, the Transport and Road Research Laboratory (TRRL) of Crowthorne, Berkshire, in the United Kingdom buried a four-mile length of cable beneath a stretch of the M4 motorway between Slough and Reading.15 The cable was laid, as in Nebraska, while the road was being built, and experiments were performed on the stretch of road before it was opened to the public.

      [Figure: Stills from Key to the Future, GM’s film about the Firebird II, showing its automatic driving features and the control towers of the Autoway Safety Authority. The film is worth watching because there’s lots of singing involved.16]

      An early 1960s experiment using a Standard Vanguard was quite similar to the Nebraska/RCA one: the car had its windscreen obscured with cardboard, and information about steering was conveyed via an indicator mounted to the dashboard.

      A more sophisticated experiment followed, this time using a Citroën ID19. The Citroën had an advanced hydropneumatic suspension system, and this high-pressure hydraulic system was used to drive actuators for the steering, brakes, and throttle. Using the system and receiving guidance information from the embedded cable, the Citroën was able to drive completely independently, and was tested at speeds up to 80 mph, as well as in ice and snow. It performed remarkably well in all the tests. But while results were promising, it was determined that implementation would be too costly based on (in hindsight, woefully conservative) estimates of future traffic growth; the TRRL was ordered to stop development and research on the project.
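      Cable guidance like this boils down to a simple closed-loop controller: pickup coils straddle the buried cable, the imbalance between their signals measures how far the car has drifted, and a correction is applied to the steering. Here is a minimal proportional-control sketch, with an invented gain (the TRRL’s actual control law isn’t described in the text):

```python
STEERING_GAIN = 0.5  # invented proportional gain, for illustration only

def steering_command(left_coil: float, right_coil: float) -> float:
    """If the right-hand coil picks up the stronger signal, the car has
    drifted left of the buried cable, so steer right (positive command),
    and vice versa. The earlier human-in-the-loop tests showed this same
    error on a meter; the Citroen's hydraulics applied it directly."""
    return STEERING_GAIN * (right_coil - left_coil)
```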

      1961 to 1979: The Stanford Cart

      While not designed to actually carry passengers, and not exactly a car as such, the humble Stanford Cart could be considered the real start of modern autonomous car technology. The Stanford Cart was just what it sounds like: a little wheeled cart, sort of a small table on wheels, developed at Stanford University.

      The cart was originally a test platform for studying how a lunar rover might be remotely controlled from Earth. The Jet Propulsion Laboratory was developing a lunar rover that would be steered by radio signals from Earth, guided by television images sent back from the rover. Grad student James Adams built the cart in 1961 to simulate the rover, taking into account the 2.5-second delay for signals going to and coming back from the moon. He was able to show that, with that delay, the rover would not be reliably controllable at speeds over 0.2 mph, which is, as you can guess, really, really slow.17
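      The arithmetic behind that limit is worth a quick check: during each command’s round trip, the rover covers ground “blind,” before the operator can see the result. A short worked calculation using the figures from the text:

```python
# How far the rover travels during one command round trip, using the
# figures from the text (2.5 s delay, 0.2 mph controllability limit).

ROUND_TRIP_DELAY_S = 2.5
SPEED_MPH = 0.2
FT_PER_S_PER_MPH = 5280 / 3600  # 1 mph is about 1.47 ft/s

blind_travel_ft = SPEED_MPH * FT_PER_S_PER_MPH * ROUND_TRIP_DELAY_S
print(f"Blind travel per round trip: {blind_travel_ft:.2f} ft")  # ~0.73 ft
```

      Even at that crawl, the rover moves most of a foot between a command and its visible result; any faster, and the operator is effectively steering into the past.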

      To overcome this, experiments began that aimed to give the cart its own ability to “see” its environment, detect obstacles, and take steps to avoid them. This was the birth of nearly all the computer vision systems employed by autonomous vehicles (and, really, any robot that uses some manner of camera-based synthetic vision) today.

      By 1964 the cart had been re-outfitted with a low-power television transmitter that broadcast TV signals to a PDP-6 computer18 to process the images. With this setup, which I’m dramatically simplifying here, the cart was able to visually follow a high-contrast white line on the road at about 0.8 mph. This was a big deal, as it represented real computer vision controlling a moving machine, even if it was quite crude.
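      Stripped of its 1964 hardware, the underlying idea is still how a toy line follower works today: threshold the image to isolate the bright line, find the line’s horizontal position, and steer toward it. A modern sketch of that idea (mine, written with NumPy; the cart’s actual pipeline ran on analog TV gear and the PDP-6):

```python
import numpy as np

def line_offset(frame: np.ndarray, threshold: int = 200) -> float:
    """Given a grayscale image, return the white line's offset from the
    image center, scaled to [-1, 1]; negative means the line is to the
    left. Steering can then be made proportional to this offset."""
    bright_cols = np.nonzero((frame > threshold).any(axis=0))[0]
    if bright_cols.size == 0:
        raise ValueError("no line visible in frame")
    center = frame.shape[1] / 2.0
    return (bright_cols.mean() - center) / center

# e.g., steer = GAIN * line_offset(camera_frame)
```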

      Development continued, with new researchers and students re-outfitting the cart as new ideas and technologies became available. In 1977, the cart was upgraded with faster processors, an independently mobile camera, and four-wheel steering; in this configuration it was able to drive around obstacles in a controlled environment.

      Sure, it only moved three feet at a time and then had to pause to figure out what it was looking at, but this was a gigantic leap in robotics. The cart was eventually able to navigate a chair-filled room in about five hours, and while it may be tempting to laugh at that idea now, I can think of plenty of times I’ve not been able to navigate a chair-filled room without running into half the chairs and looking like an idiot.

      1977: Tsukuba Mechanical Engineering Lab, Japan

      Arguably the first fully autonomous, computer-vision-controlled car was shown in 1977 by the Tsukuba Mechanical Engineering Lab, in Japan. The project team, headed by Sadayuki Tsugawa,19 modified a full-size car to follow special white road markings, and the car was able to drive at speeds of nearly 20 mph. While still essentially a follower of specially contrived external visual guides, the fact that this technology was implemented in a full-size car driving at a reasonable speed (compared to, say, the Stanford Cart) and using computer-interpreted visual information made this a significant milestone.

      1980s: Ernst Dickmanns: The Man Who Made Cars See

      If the overall concept of vehicles driven via true computer “vision” can be said to have a father, that father would have a German accent and a hilarious last name: Dickmanns. Ernst Dickmanns was responsible for a particular set of Mercedes-Benz cars and vans that drove via information captured from cameras and interpreted by some very hardworking computers.

      Dickmanns started out working in aerospace, including a stint at NASA, where he researched orbiting spacecraft reentry. By the early 1980s he’d shifted his focus to developing the machine vision needed for autonomous driving.

      Dickmanns’s first real application of his research was the result of a partnership with Mercedes-Benz, which was hoping to have something really exciting to unveil for its centenary in 1986: a self-driving car. To achieve this goal, Dickmanns outfitted a Mercedes-Benz L508D T2 van with a lot of computing hardware, cameras, servos, and actuators to operate the van’s driving controls.

      Since it was 1986, the computing hardware, while state-of-the-art, wasn’t really fast enough to process a full visual field captured by the camera in real time; a full one hundred seconds was required to process a full-frame image from the camera. To get around this,