Monday, March 1, 2010
History of GPS
When GPS was first being put into service, the US military was concerned about the possibility of enemy forces using the globally available GPS signals to guide their own weapon systems. To avoid this, the main "coarse acquisition" signal (C/A) transmitted on the L1 frequency (1575.42 MHz) was deliberately degraded by offsetting its clock signal by a random amount, equivalent to about 100 meters of distance. This technique, known as "Selective Availability", or SA for short, seriously degraded the usefulness of the GPS signal for non-military users. More accurate guidance was possible for users of dual-frequency GPS receivers that also received the L2 frequency (1227.60 MHz), but the L2 transmission, intended for military use, was encrypted and available only to authorised users with the encryption keys.
This presented a problem for civilian users, who relied on ground-based radio navigation systems such as LORAN, VOR and NDB, which cost millions of dollars each year to maintain. The advent of a global navigation satellite system (GNSS) could provide greatly improved accuracy and performance at a fraction of the cost, but the accuracy available under SA was too poor to make this realistic. The military received multiple requests from the Federal Aviation Administration (FAA), the United States Coast Guard (USCG) and the United States Department of Transportation (DOT) to set SA aside and enable civilian use of GNSS, but remained steadfast in its objection on grounds of security.
Through the early-to-mid 1980s, a number of agencies developed a solution to the SA "problem". Since the SA offset changed only slowly, its effect on positioning was relatively fixed over a wide area – that is, if the offset was "100 meters to the east" at one location, much the same offset applied at nearby locations. This suggested that broadcasting the locally measured offset to GPS receivers in the area could eliminate the effects of SA, bringing measurements close to GPS's theoretical performance of around 15 meters. Another major source of error in a GPS fix, the transmission delay introduced by the ionosphere, could also be measured and included in the broadcast corrections. This offered an improvement to about 5 meters of accuracy, more than enough for most civilian needs.
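The idea can be sketched in a few lines of Python. This is only an illustration of the position-domain correction described above, not any real DGPS message format; the coordinates, offsets and function names are invented for the example, and positions are treated as simple east/north values in meters on a flat local frame.

```python
# Minimal sketch of broadcasting a locally measured offset (as described above).
# All numbers and names are illustrative assumptions, not a real protocol.

def compute_offset(known_position, measured_position):
    """Offset (east, north) in meters between the receiver's fix and the truth."""
    return (measured_position[0] - known_position[0],
            measured_position[1] - known_position[1])

def apply_correction(rover_position, offset):
    """Subtract the broadcast offset from a nearby rover's own fix."""
    return (rover_position[0] - offset[0],
            rover_position[1] - offset[1])

# Reference station: its surveyed location versus what its receiver reports.
offset = compute_offset(known_position=(0.0, 0.0),
                        measured_position=(72.0, -41.0))

# A rover in the same region sees roughly the same error, so applying the
# broadcast offset brings its fix much closer to its true position.
print(apply_correction(rover_position=(1530.0, 862.0), offset=offset))
# -> (1458.0, 903.0)
```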
The US Coast Guard was one of the more aggressive proponents of DGPS, experimenting with the system on an ever-wider basis through the late 1980s and early 1990s. The signals were broadcast on marine longwave frequencies, which could be received on existing radiotelephones and fed into suitably equipped GPS receivers. Almost all major GPS vendors offered units with DGPS inputs, not only for the USCG signals but also for aviation corrections broadcast on the VHF or commercial AM radio bands.
The USCG started sending out "production quality" DGPS signals on a limited basis in 1996 and rapidly expanded the network to cover most US ports of call, as well as the Saint Lawrence Seaway in partnership with the Canadian Coast Guard. Plans were put in place to expand the system across the US, but this would not be easy. The quality of the DGPS corrections generally fell with distance, and the large transmitters capable of covering wide areas tended to cluster near cities. This meant that lower-population areas, notably in the Midwest and Alaska, would have little coverage from ground-based DGPS.
Instead, the FAA (and others) began studying the broadcast of correction signals across an entire hemisphere from communications satellites in geostationary orbit. This led to the Wide Area Augmentation System (WAAS) and similar systems, although these are generally referred to as satellite-based augmentation or "wide-area DGPS" rather than DGPS proper. WAAS offers accuracy similar to the USCG's ground-based DGPS networks, and there has been some argument that the ground networks will be turned off as WAAS becomes fully operational.
By the mid-1990s it was clear that SA was no longer useful in its intended role. DGPS would render it ineffective over the US, precisely where it was considered most needed. Additionally, experience during the Gulf War, when U.S. forces made widespread use of civilian receivers, suggested that SA did the U.S. more harm than good.[citation needed] After many years of pressure, it took an executive order by President Bill Clinton to have SA turned off permanently in 2000.
Nevertheless, by this point DGPS had evolved into a system for providing more accuracy than even a non-SA GPS signal could offer on its own. Several other sources of error share the same characteristics as SA in that they are consistent over large areas and for "reasonable" amounts of time: the ionospheric effects mentioned earlier, errors in the satellites' broadcast ephemeris data, and drift in the satellite clocks. Depending on the amount of data carried in the DGPS correction signal, correcting for these effects can reduce the error significantly, with the best implementations offering accuracies of under 10 cm.
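A back-of-the-envelope sketch makes the point concrete. The magnitudes below are assumed, round illustrative values rather than measured figures; the script simply combines independent error sources by root-sum-square, once for a standalone receiver and once with the DGPS-correctable terms removed, showing that the shared, correctable errors dominate the budget.

```python
# Illustrative GPS error budget combined by root-sum-square (RSS).
# The magnitudes are assumed round values for the sake of the example.
from math import sqrt

errors_m = {
    "ionospheric delay": 4.0,   # correctable by DGPS
    "ephemeris error":   2.0,   # correctable by DGPS
    "satellite clock":   2.0,   # correctable by DGPS
    "multipath":         1.0,   # local effect, not corrected
    "receiver noise":    0.5,   # local effect, not corrected
}
correctable = {"ionospheric delay", "ephemeris error", "satellite clock"}

def rss(values):
    """Root-sum-square of independent error contributions."""
    return sqrt(sum(v * v for v in values))

standalone = rss(errors_m.values())
with_dgps = rss(v for k, v in errors_m.items() if k not in correctable)
print(f"standalone: {standalone:.1f} m, with DGPS: {with_dgps:.1f} m")
```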
In addition to continued deployments of the USCG and FAA sponsored systems, a number of vendors have created commercial DGPS services, selling their signal (or receivers for it) to users who require better accuracy than the nominal 15 meters GPS offers. Almost all commercial GPS units, even hand-held units, now offer DGPS data inputs, and many also support WAAS directly. To some degree, a form of DGPS is now a natural part of most GPS operations.
What is DGPS?
Differential Global Positioning System (DGPS) is an enhancement to the Global Positioning System that uses a network of fixed, ground-based reference stations to broadcast the difference between the positions indicated by the satellite system and their known, surveyed positions. In practice, each station broadcasts the difference between the measured satellite pseudoranges and the actual (internally computed) pseudoranges, and receiver stations may correct their own pseudoranges by the same amount. The correction signal is typically broadcast over a UHF radio modem.
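The per-satellite bookkeeping can be sketched roughly as follows. This is an illustration of the principle only, assuming a simplified local Cartesian frame; the satellite positions, station coordinates and pseudoranges are made-up numbers, and real broadcast formats carry considerably more information (timestamps, rate-of-change terms, integrity data).

```python
# Rough sketch of pseudorange-domain DGPS corrections (principle only).
from math import dist  # Euclidean distance, Python 3.8+

def pseudorange_corrections(station_pos, sat_positions, measured_pr):
    """Per-satellite correction = true geometric range - measured pseudorange."""
    return {sat: dist(station_pos, pos) - measured_pr[sat]
            for sat, pos in sat_positions.items()}

def apply_corrections(rover_pr, corrections):
    """The rover adds each broadcast correction to its own pseudorange."""
    return {sat: pr + corrections[sat] for sat, pr in rover_pr.items()}

# Made-up satellite and station coordinates in a local Cartesian frame (meters).
sat_positions = {"G05": (15_600_000.0, 7_540_000.0, 20_140_000.0),
                 "G12": (18_760_000.0, 2_750_000.0, 18_610_000.0)}
station_pos = (1_100_000.0, 2_200_000.0, 3_300_000.0)

# Pseudoranges measured at the reference station contain a shared error
# (here a made-up +60 m bias standing in for SA, ionosphere, clock, etc.).
station_pr = {s: dist(station_pos, p) + 60.0 for s, p in sat_positions.items()}
corrections = pseudorange_corrections(station_pos, sat_positions, station_pr)

# A nearby rover applies the same corrections to its own measurements
# before solving for its position, removing the shared 60 m bias.
rover_pr = {"G05": 21_500_060.0, "G12": 23_800_060.0}
print(apply_corrections(rover_pr, corrections))
```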
The term can refer both to the generalized technique and to specific implementations of it. It is often used to refer specifically to systems that re-broadcast the corrections from ground-based transmitters of shorter range. For instance, the United States Coast Guard runs one such system in the US and Canada on longwave radio frequencies between 285 kHz and 325 kHz. These frequencies are commonly used for marine radio, and the signals are broadcast near major waterways and harbors.
Australia runs two DGPS systems: one is mainly for marine navigation, broadcasting its signal on the longwave band;[1] the other is used for land surveys and land navigation, and has corrections broadcast on the Commercial FM radio band.
In Australia, two systems for air navigation and precision landing of aircraft, the Ground Based Augmentation System (GBAS) and the Ground-based Regional Augmentation System (GRAS), will eventually replace the Instrument Landing System. Both utilise DGPS techniques and broadcast their corrections via the aviation VHF band.
A similar system that transmits range corrections from orbiting satellites instead of ground-based transmitters is called a Satellite Based Augmentation System. Different versions of this system include the Wide Area Augmentation System, European Geostationary Navigation Overlay Service, Japan's Multi-Functional Satellite Augmentation System, Canada's CDGPS and the commercial VERIPOS, StarFire and OmniSTAR.
Friday, February 26, 2010
History of the Mobile Phone
In 1908, U.S. Patent 887,357 for a wireless telephone was issued to Nathan B. Stubblefield of Murray, Kentucky. He applied this patent to "cave radio" telephones and not directly to cellular telephony as the term is currently understood. Cells for mobile phone base stations were invented in 1947 by Bell Labs engineers at AT&T and further developed by Bell Labs during the 1960s. Radiophones have a long and varied history, going back to Reginald Fessenden's invention and shore-to-ship demonstration of radio telephony, through military use of radio telephony links in the Second World War and civil services in the 1950s; hand-held mobile radio devices have been available since 1973. A patent for the first wireless phone as we know it today was issued as US Patent 3,449,750 to George Sweigert of Euclid, Ohio on June 10, 1969.
In 1945, the zero generation (0G) of mobile telephones was introduced.[citation needed] Like other technologies of the time, it involved a single, powerful base station covering a wide area, and each telephone would effectively monopolize a channel over that whole area while in use.
In 1960, the world's first partly automatic car phone system, Mobile System A (MTA), was launched in Sweden. With MTA, calls could be made and received in the car to and from the public telephone network, and the car phone could be paged. The phone number was dialed using a rotary dial. Calling from the car was fully automatic, while calling to it required an operator: the person who wanted to call a mobile phone had to know which base station currently covered it. The system was developed by Sture Laurén and other engineers at the network operator Televerket. Ericsson provided the switchboard, while Svenska Radioaktiebolaget (SRA), owned by Ericsson and Marconi, provided the telephones and base station equipment. MTA phones consisted of vacuum tubes and relays and weighed 40 kg. In 1962, a more modern version called Mobile System B (MTB) was launched; it was a push-button telephone and used transistors to enhance the telephone's calling capacity and improve its operational reliability. In 1971 the MTD version was launched, opening the system to several different brands of equipment and gaining commercial success.
The concepts of frequency reuse and handoff, as well as a number of other concepts that formed the basis of modern cell phone technology, were described in the 1970s; see for example Fluhr and Nussbaum, Hachenburg et al., and U.S. Patent 4,152,647, issued May 1, 1979 to Charles A. Gladden and Martin H. Parelman, both of Las Vegas, Nevada, and assigned by them to the United States Government.
Martin Cooper, a Motorola researcher and executive, is considered to be the inventor of the first practical mobile phone for hand-held use in a non-vehicle setting. Cooper is the first inventor named on the "Radio telephone system" patent filed on October 17, 1973 with the US Patent Office and later issued as US Patent 3,906,166. Other named contributors on the patent included Cooper's boss, John F. Mitchell, Motorola's chief of portable communication products, who successfully pushed Motorola to develop wireless communication products small enough to use outside the home, office or automobile and who participated in the design of the cellular phone. Using a modern, if somewhat heavy, portable handset, Cooper made the first call on a hand-held mobile phone on April 3, 1973 to a rival, Dr. Joel S. Engel of Bell Labs.
What is a Mobile Phone?
A mobile phone or mobile (also called a cellphone or handphone) is an electronic device used for mobile telecommunications (mobile telephony, text messaging or data transmission) over a cellular network of specialized base stations known as cell sites. Mobile phones differ from cordless telephones, which offer telephone service only within a limited range, e.g. within a home or an office, through a fixed line and a base station owned by the subscriber, and also from satellite phones and radio telephones. As opposed to a radio telephone, a cell phone offers full duplex communication, automates calling to and paging from a public land mobile network (PLMN), and supports handoff (handover) during a phone call when the user moves from one cell (base station coverage area) to another. Most current cell phones connect to a cellular network consisting of switching points and base stations (cell sites) owned by a mobile network operator. In addition to the standard voice function, current mobile phones may support many additional services and accessories, such as SMS for text messaging, email, packet switching for access to the Internet, gaming, Bluetooth, infrared, a camera with video recorder, MMS for sending and receiving photos and video, an MP3 player, radio and GPS.
The International Telecommunication Union estimated that mobile cellular subscriptions worldwide would reach approximately 4.6 billion by the end of 2009. Mobile phones have gained increased importance in the sector of Information and communication technologies for development in the 2000s and have effectively started to reach the bottom of the economic pyramid.
History of the Computer
The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.
The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.
The Renaissance saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine. Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.
In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Prior uses of machine-readable media, such as the punched cards described above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..." To process these punched cards he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing provided an influential formalisation of the concepts of algorithm and computation with the Turing machine. Of his role in the creation of the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, stated: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine".
The inventor of the program-controlled computer was Konrad Zuse, who built the first working machine of this kind in 1941 and, later in 1955, the first computer based on magnetic storage.
George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.
What is a Computer?
A computer is a programmable machine that receives input, stores and manipulates data, and provides output in a useful format.
Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into small pocket devices, and can be powered by a small battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". The embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are however the most numerous.
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore computers ranging from a netbook to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.
Wednesday, February 24, 2010
Modern Telephones
There are very different kinds of devices that use the telephone network to do different things. For example, there are computer modems, which many people use today to send written messages. E-mail and instant messages are written messages sent back and forth over the telephone line using a computer modem.
The fax machine is another use of the telephone network. A fax machine is like a copy machine, except that it sends the document to someone else over the telephone lines.
Videophones are a very different use of the telephone network: instead of talking to someone you cannot see, you get to see who you are talking to, through a camera that sends a visual image across the network to a screen.
The telephone network allows immediate voice, print and visual contact between people separated by thousands of miles. Our world seems a lot smaller after Alexander Graham Bell's invention: instead of taking ten days to reach someone on a different continent, it now takes about ten seconds.