Saturday, August 16, 2014

Artificial Intelligence (1956)


Since the dawn of computers, people have wondered if they can be made to show intelligence-to think in the way that humans think. Charles Babbage and Ada Lovelace first debated the question when they worked together on the design of the first mechanical computer in 1835.
   By 1950, U.S. mathematician Claude Shannon was busy trying to figure out how computers could play a good game of chess. On the other side of the Atlantic, Alan Turing published his paper "Computing Machinery and Intelligence," which considered the thorny problem of how you could actually tell if a machine was intelligent or not.
   In 1955, John McCarthy, of Dartmouth College, New Hampshire, proposed a conference to study the issue of intelligence research. In his proposal, he used the phrase "artificial intelligence" for the first time, and an entire field of study was born. The 1956 Dartmouth Conference is now known as the defining moment of artificial intelligence (AI) research. The conference set the path for research for years to come, asking questions that remain unanswered to this day. Many of the great minds who attended, such as Harvard's Marvin Minsky, devoted the rest of their careers to a subject that had only just been given a name.
   While there was great optimism at the conference, with many attendees expecting intelligent machines to appear within a decade, artificial intelligence remains elusive. Although there have been notable successes-IBM's Deep Blue computer beating chess champion Garry Kasparov in 1997, for example-they have been in very narrow fields.

Speed Camera (1955)


Rather ironically, it was a Dutch rally driver, Maurits Gatsonides (1911-1998), who invented the speed camera. Gatsonides enjoyed most of his driving successes in the 1950s, and it was during this period that he came up with a device-known as the Gatso-to measure his speed while cornering in a bid to improve his driving, that is, to make him drive faster.
   The camera works by using radar to measure the speed at which a vehicle passes the device, photographing those that break the limit. Two photographs are taken and, should the initial measurement be questioned, the position of the vehicle relative to the white lines painted on the road indicates the average speed at which the vehicle traveled during a set time interval.
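   To see how the backup photographic check works, here is a minimal sketch in Python of the average-speed calculation, assuming purely illustrative values for the spacing of the painted lines and the interval between the two photographs (these numbers are not actual Gatso specifications):

# Minimal sketch of the photo-based speed check. The line spacing and
# photo interval below are assumed, illustrative values only.

LINE_SPACING_M = 1.5      # distance between painted road lines (assumed)
PHOTO_INTERVAL_S = 0.5    # time between the two photographs (assumed)

def average_speed_kmh(lines_crossed: float) -> float:
    """Average speed, given how many line spacings the car covered
    between the two photographs."""
    distance_m = lines_crossed * LINE_SPACING_M
    speed_ms = distance_m / PHOTO_INTERVAL_S
    return speed_ms * 3.6  # convert meters per second to km/h

print(round(average_speed_kmh(8), 1))  # a car covering 8 spacings: 86.4 km/h

   Under these assumed figures, a car that moves eight line spacings between the two photographs has averaged 86.4 kilometers per hour, regardless of what the radar reading said.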
   Fixed speed cameras have been used widely in the United Kingdom, Australia, and France, but only marginally in the United States. Despite being billed as life-saving devices, speed cameras remain unpopular with large portions of society. Some people consider speeding fines an unethical source of revenue for the local law enforcement agencies or the private organizations that operate them. However, a research study conducted in 2006 estimated that "Gatsos" and other traffic enforcement measures have reduced road fatalities by about a third in the United Kingdom.

Podcast (2003)


The first decade of the twenty-first century has seen a proliferation in new communications technology. It is now possible to watch a favorite television show on a laptop, read a newspaper on a cell phone, or listen to a radio broadcast on an MP3 player. The evolution of the podcast is one more important development.
   A podcast is a digital audio or video file, distributed automatically to a subscribed user. That user can then listen to or view the file on a device such as a personal computer, MP3 player, or cell phone. The podcaster first creates a "show"-usually a video or MP3 audio file-and then an RSS (Really Simple Syndication) feed file that points to where the podcast can be found. The receiver uses "aggregator" software to subscribe; this periodically checks the RSS feed to see if content exists or new content has been added, and then downloads that content automatically.
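   In code, an aggregator is surprisingly simple. The following Python sketch uses only the standard library and a placeholder feed address rather than any real show; it checks an RSS feed for episode enclosures and downloads any it has not seen before:

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/show/feed.xml"  # hypothetical RSS feed
seen = set()  # enclosure URLs already downloaded on earlier checks

def check_feed():
    with urllib.request.urlopen(FEED_URL) as response:
        root = ET.fromstring(response.read())
    # Each <item> in the feed describes one episode; its <enclosure>
    # tag carries the URL of the actual MP3 or video file.
    for item in root.iter("item"):
        enclosure = item.find("enclosure")
        if enclosure is None:
            continue
        url = enclosure.get("url")
        if url and url not in seen:
            seen.add(url)
            filename = url.rsplit("/", 1)[-1]
            urllib.request.urlretrieve(url, filename)  # fetch the new episode
            print("Downloaded", filename)

# check_feed()  # an aggregator would run this periodically, e.g. hourly
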
   The term itself was coined by British technology writer Ben Hammersley in 2004, from the words "iPod" and "broadcast." Although former New York Times reporter and radio broadcaster Christopher Lydon (b. 1940) is often cited as creating the first true podcasts, much credit also belongs to background innovators such as Dave Winer (b. 1955), who developed the RSS syndication feed that enabled Lydon's blogs-a series of cutting-edge interviews-to achieve automated widespread distribution.

Voice Over Internet Protocol (VoIP) (1995)


In 1973, researcher Danny Cohen's Network Voice Protocol was first used on ARPANET (Advanced Research Projects Agency Network), where it allowed research sites to talk with each other over the computer network. For many years afterward, however, sending your voice over the Internet was the preserve of researchers, geeks, and early computer gamers.
   But in 1995 a company called VocalTec released a piece of software it called Internet Phone. Designed for Microsoft Windows, it turned the speaker's voice into computer data, compressing it enough to send it in real time over a modem connection to another computer on the Internet.
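   The basic recipe has changed little since then: capture a short slice of speech, compress it, and send it off before the next slice arrives. The Python sketch below illustrates that idea only; zlib stands in for a real voice codec, the destination address is a placeholder, and nothing here is VocalTec's actual implementation:

import socket
import zlib

DEST = ("192.0.2.10", 5004)   # hypothetical receiver address and port
SAMPLE_RATE = 8000            # 8 kHz telephone-quality audio
FRAME_MS = 20                 # send one packet every 20 milliseconds
BYTES_PER_FRAME = SAMPLE_RATE * 2 * FRAME_MS // 1000  # 16-bit samples

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP, as most VoIP uses

def send_voice(pcm_audio: bytes) -> None:
    """Split raw audio into short frames, compress each one, and send it
    as its own datagram so it arrives with minimal delay."""
    for offset in range(0, len(pcm_audio), BYTES_PER_FRAME):
        frame = pcm_audio[offset:offset + BYTES_PER_FRAME]
        sock.sendto(zlib.compress(frame), DEST)

send_voice(bytes(SAMPLE_RATE * 2))  # one second of silence as stand-in input
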
   Many people suddenly became interested in Internet telephony, for one simple reason-it was cheap. In the United States, for example, the local call to connect to the Internet was often free, whereas long-distance calls were costly.
   As Internet speeds improved and other companies started to offer similar services, making telephone calls across the Internet gained a generic name: Voice over Internet Protocol, commonly known as VoIP. VoIP is now extremely popular. Skype, one of the best known VoIP companies, has clocked up more than a hundred billion minutes of calls between its users since 2003 when the service started.

WLAN Standard (1996)


Today's wireless networks owe much to one of the earliest computer networks, the University of Hawaii's ALOHAnet. This radio-based system, created in 1970, had many of the basic principles still in use today. Early wireless networks were expensive, however, and their equipment was bulky. They were used only in places where wired networks were awkward, such as across water or difficult terrain. It was not until the 1980s, with the arrival of cheaper, more portable equipment, that wireless networking began to go mainstream.
   There was a problem, however. By the end of the 1980s several companies were selling wireless networking equipment, but it was all incompatible. What was needed was some joined-up thinking. Step forward the Institute of Electrical and Electronics Engineers (IEEE) and in particular Vic Hayes (b. 1941). Hayes did not invent any new technology, but he took charge of the IEEE's wireless standards committee and fostered cooperation between the manufacturers. In 1996 they released the first standard for wireless local area networks (WLANs); its IEEE designation was "802.11."
   Adopted in 1999 by a group of like-minded industry leaders, who gave it the more catchy name of "Wireless Fidelity," or Wi-Fi, the standard lets us take our laptops around the world, confident that, without wires, we will be able to browse the Web anywhere, from an airport in Australia to a zoo in Zanzibar.

DVD (1995)


After the futuristic-looking compact disc (CD) took the audio market by storm-consigning the humble cassette tape to the back of a billion cupboards-it was only a matter of time before technological wizards set their sights on abolishing the VHS tape.
   Although the technology for LaserDisc already existed, it never really took off in the way that CD technology did, and so the market for a compact digital video disc was still very much open. The first proposals for a high-density CD were put forward in 1993, leading to the creation of two competing formats. Electronics powerhouses Sony and Philips led their collective investors forward with the MMCD format, going head to head with industry giants Toshiba, Matsushita, and Time Warner's effort, the SD. Then, in 1995, a combined effort-known as the DVD-was officially announced and consequently developed by a consortium of ten companies.
   The DVD was capable of storing two hours of high-quality digital video, eight tracks of digital audio, and thirty-two tracks of subtitle information, as well as offering the practical benefits of being lightweight, compact, and free of any need for rewinding. Two-sided DVDs (which can be flipped over like vinyl LPs) doubled the capacity without creating needless bulk.
   Although DVD is often cited as being an acronym for digital video disc or digital versatile disc, the official line on it-as stated in 1999 by the 250 company members in the DVD Forum-is that it is simply a three-letter name. So, in short, DVD stands for DVD.

USB Connection (1995)


Long-term computer users will be all too familiar with the frustration of having to switch off and reboot their machines. Thanks to USB (universal serial bus) connectivity, however, this scenario has become, largely, a distant memory. These days, almost every device you plug into a computer, such as a printer or scanner, comes complete with a USB connector, instead of a card that the machine has to learn to recognize in a lengthy installation process.
   The impetus behind the USB was toward a future where you could connect any device to any computer, using any port-because all the ports and plugs would match. The universal, three-pronged "trident" symbol is used on all plugs and sockets to indicate USB functionality. The reality, of course, is that there are still a few rogue devices that do not conform, but now all PCs are made with USB ports as standard.
   The first USB interconnect (USB 1.0) appeared in the mid-1990s. Today's USB 2.0 and USB 3.0 connections allow data transfer to occur at ten times the speed users expected of older types of connectors.
   The formation of the USB Implementers Forum (USB-IF) by Intel in 1995 marked an important point in computing history. For the computing industry, it was a way to show its commitment to increasing connectivity. The next step will be to do away with physical connections altogether. In 2007 the USB-IF announced significant progress in wireless USB communications. Wireless USB will work like a small-scale Wi-Fi network, so that your printer can be placed, unconnected, anywhere you like in the room.

Laptop/Notebook Computer (1983)



Today's laptop computer has evolved over decades from different types of portable computers, but the Compaq Portable was the most successful early model.
   Alan Kay of the Xerox Corporation proposed the Dynabook concept in 1971. His idea was to create a portable, networked personal computer. However, at the time there was no market for it, so the idea was shelved. In 1981 Adam Osborne of the Osborne Computer Corporation invented the Osborne 1, the first fully portable personal computer. The size of a small suitcase, it weighed about 24 pounds (11 kg).
   The first clamshell design was the GRiD Compass 1101, invented by Bill Moggridge and released in 1982. Gavilan Computer then released what was considered to be the smallest and lightest portable computer to date.
   However, it was Compaq Computer Corporation that stole the market from these rivals in 1983, with the Compaq Portable. Rod Canion, Jim Harris, and Bill Murto founded Compaq in 1982 after leaving Texas Instruments. The idea for the Compaq Portable was supposedly sketched out on a placemat from a Houston pie shop. The computer's BIOS was "reverse-engineered" without access to IBM's source code, creating a new system that operated like IBM's. This was important considering IBM's huge success in the computer market during this time.
   Compaq enjoyed record-breaking revenue in 1983, the year it first released the Portable. Following the model's success in the market, laptops have evolved even further, developing into the smaller and faster models of today.

Mechanical Computer (1835)


An accomplished mathematician and mechanical engineer, Charles Babbage (1791-1871) combined these two disciplines to create a "difference engine" capable of calculating and tabulating polynomial functions without having to use unreliable, hand-calculated tables.
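   The "difference" in the name refers to the method of finite differences, which reduces the tabulation of a polynomial to nothing but repeated addition, exactly the sort of operation a machine of gears and wheels can perform reliably. Here is a short Python sketch of the idea, using an illustrative quadratic rather than anything taken from Babbage's own designs:

# Tabulate p(x) = 2x^2 + 3x + 1 by the method of finite differences.
# After a few seed values, every new entry needs only additions.

def p(x):
    return 2 * x * x + 3 * x + 1

values = [p(0), p(1), p(2)]   # seed values, the only direct calculations
d1 = values[2] - values[1]                              # latest first difference
d2 = (values[2] - values[1]) - (values[1] - values[0])  # second difference (constant)

for _ in range(3, 10):
    d1 += d2                        # next first difference, by addition
    values.append(values[-1] + d1)  # next table entry, by addition

print(values)  # [1, 6, 15, 28, 45, 66, 91, 120, 153, 190]
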
   Despite generous government funding, Babbage sadly never fully finished his difference engine and the project was abandoned in 1834. This did not stop Babbage from thinking about computing, however. In 1835 he released designs for his "analytical engine," a device similar to the difference engine but which, by using programmable punched cards, had many more potential functions than just calculating polynomials. The analytical engine was never built, although Babbage produced thousands of detailed diagrams.
   Using the lessons learned from the analytical engine, Babbage designed a more efficient and smaller difference engine in 1849. Difference Engine No. 2 was not built until 1991, when the London Science Museum completed the machine from the original blueprints and found that it worked perfectly. Babbage's reputation as a man years ahead of his time was then restored to its rightful place.

Friday, August 15, 2014

Supercomputer (1976)


Imagine if you could revolutionize the design of computers and leave the competition standing: U.S. inventor Seymour Cray (1925-1996) made a habit of doing just that. In 1972, with a long history of extending the reach of computer technology already behind him, Cray set up the Cray Research company to concentrate on building a powerful computer. His design for the Cray 1 was the first major commercial success in supercomputing. It was essentially a giant microprocessor capable of completing 133 million floating-point operations per second, with an 8-megabyte main memory. The secret to its immense speed was Cray's own vector register technology and its revolutionary "C" shape, which meant its integrated circuits could be packed together as tightly as possible. It produced an immense amount of heat, and it needed a complex Freon-based cooling system to prevent it from melting.
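   The vector idea is easy to illustrate in modern terms. The NumPy sketch below expresses the same arithmetic first as an element-by-element loop and then as a single whole-array operation; it merely mimics the principle of vector processing on an ordinary PC and says nothing about how Cray's hardware was built:

import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Scalar style: one multiply-add per loop iteration.
scalar_result = [a[i] * 2.0 + b[i] for i in range(len(a))]

# Vector style: the same hundred thousand multiply-adds as one operation.
vector_result = a * 2.0 + b

print(np.allclose(scalar_result, vector_result))  # True
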
   The Cray 2, introduced in 1985, was six to twelve times faster, with ten times the memory. Cray's supercomputers have advanced science and have been used to predict the weather, design airplanes, explore for oil, and even provide computer simulations of nuclear tests.

Touch Screen (1971)


During the rapid rise of the computer in the second half of the twentieth century, people were always searching for better ways to interact with it. Punched cards and paper tape became too cumbersome as computers advanced, and keyboards became the input device of choice.
   In the 1960s, U.S. inventor Douglas Engelbart invented the computer mouse, which represented a milestone in computer interaction. The next big leap forward came in 1971 when Dr. Samuel C. Hurst invented the electronic touch screen interface. While teaching at the University of Kentucky, he was faced with the daunting task of reading a huge amount of data from a strip chart. Realizing that this work would normally take graduate students at least two months to complete, he decided to work on an easier method.
   What he came up with was the Elograph coordinate measuring system. It was an input tablet that could measure where the user was pressing a stylus. Hurst quickly formed the Elographics company (now Elo TouchSystems) to make and sell the device. Working furiously to develop their concept, Hurst and his team took just three years to make a proper transparent version that could sit over a screen. Four years later, in 1977, they came up with what was to become the most popular technology for touch screens today. The five-wire resistive touch screen contains transparent layers that are squeezed together by the pressure of a finger touching them. The point of contact is easily translated into coordinates from the resulting electrical resistance, making this modern touch screen durable and capable of high resolution.
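   The controller's job is then simple arithmetic: turning raw resistance readings into pixel coordinates. The Python sketch below shows one way that mapping might be done, with made-up analog-to-digital values and calibration constants rather than anything from Elo's designs:

SCREEN_W, SCREEN_H = 800, 600   # target display resolution (assumed)

# Calibration: raw 10-bit readings measured at two known corners (made up).
X_MIN_RAW, X_MAX_RAW = 80, 950
Y_MIN_RAW, Y_MAX_RAW = 100, 930

def to_screen(x_raw: int, y_raw: int) -> tuple[int, int]:
    """Map raw resistance readings to pixel coordinates by linear scaling."""
    x = (x_raw - X_MIN_RAW) * SCREEN_W // (X_MAX_RAW - X_MIN_RAW)
    y = (y_raw - Y_MIN_RAW) * SCREEN_H // (Y_MAX_RAW - Y_MIN_RAW)
    # Clamp to the visible screen in case the touch lands on the bezel.
    return min(max(x, 0), SCREEN_W - 1), min(max(y, 0), SCREEN_H - 1)

print(to_screen(515, 515))  # a touch near the middle of the panel: (400, 300)
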

Computer Mouse (1968)


The 1968 Fall Joint Computer Conference in San Francisco in the United States presented a remarkable number of "firsts." Among them was the first video teleconference; the first use of hypertext (the foundation of today's web links); and the first presentation, by the Stanford Research Institute (SRI), of NLS, short for oNLine System, the revolutionary ancestor of modern computer server software. Such dazzling displays likely distracted people from another important first, moved by the hand of SRI researcher Douglas Engelbart (b. 1925): the computer mouse.
   Far from the sleek ergonomic devices of today, the first computer mouse was a wooden box with wheels and a thick electric cord. Engelbart and colleague Bill English (b. 1929) first came up with the idea in 1963 and created the device as a very small piece of a much larger computer project. They were looking for a device that would let computer users point to and select items on a screen. The first prototype had a cord to the front, but this was so cumbersome it was moved to the back, becoming a "tail," which gave rise to the device's name. "It just looked like a mouse with a tail, and so we called it that in the lab," commented Engelbart.
   Neither Engelbart, English, nor SRI ever marketed the mouse. The next lab to work on it, Xerox's Palo Alto Research Center (PARC), gave it some modern touches but failed to bring it to the masses. That job was done by Steve Jobs, founder of Apple, Inc., in the 1980s. Jobs's company polished up the mouse, making it affordable, widely available, and an integral part of the personal computer. Apple may have made the mouse famous, but Engelbart and English were there first.

3D Computer Graphics (1976)


The idea of 3D graphics-just like painting-is making an image that tricks the brain into thinking it is looking at something with three dimensions rather than two. To do this, you must consider the effect of lighting on the object, as well as depth, perspective, texture, and many more qualities, which the computer then has to project on a two-dimensional surface in a realistic way.
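   The projection step at the heart of this can be written in a few lines. The Python sketch below uses a simple pinhole-style perspective divide, with an arbitrary focal length and sample points chosen purely for illustration:

FOCAL_LENGTH = 2.0  # distance from the "camera" to the image plane (assumed)

def project(x: float, y: float, z: float) -> tuple[float, float]:
    """Project a 3D point onto the 2D image plane by dividing by its depth."""
    return (FOCAL_LENGTH * x / z, FOCAL_LENGTH * y / z)

near = project(1.0, 1.0, 4.0)
far = project(1.0, 1.0, 8.0)
print(near)  # (0.5, 0.5)
print(far)   # (0.25, 0.25): twice as far away, so half the size on screen
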
   The advance in computer graphics started in the early 1960s, with the first commercially available graphics terminal, the IBM 2250, hitting the market in 1965. Three years later, Ivan Sutherland (b. 1938) created the first computer-controlled head-mounted display (HMD). The wearer of this helmet was able to see a computer scene in stereoscopic 3D, because separate images were displayed for each eye.
   Sutherland subsequently joined what was then the world's leading research center for computer graphics, at the University of Utah. One of his students was Edwin Catmull (b. 1945), a wannabe animator who conceived the idea of texture mapping, which is based on the fact that the majority of real-life objects have detailed surfaces. He was convinced that you could apply similar patterns to computer-generated objects.
   Using this method, he created an animated version of his left hand. Following the world's first computer animation in movies-in the Canadian short film The Hunger from 1974-an animated face as well as Catmull's hand became the first 3D computer-generated images in movies when they featured in Futureworld in 1976.