A new Apple patent (number 20110199479) shows that the company plans to expand on its Map + Compass feature on iOS devices with augmented reality.

According to the patent, a user points a handheld communication device to capture and display a real-time video stream. The device detects its geographic position, camera direction, and tilt, and the user sends a search request to a server for nearby points of interest. The device then receives search results based on the search request and on its geographic position, camera direction, and tilt.

The device visually augments the captured video stream with data related to each point of interest. The user then selects a point of interest to visit, and the device augments the video stream with a directional map to the selected point of interest. Jaron Waldman is the inventor.
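The capture-sense-search flow the patent describes is not tied to any particular API, but it can be sketched roughly as follows. All names here, and the flat local search standing in for the server round trip, are hypothetical illustrations, not anything from the filing:

```python
import math
from dataclasses import dataclass

@dataclass
class DeviceState:
    lat: float           # geographic position from GPS
    lon: float
    heading_deg: float   # camera direction from a digital compass
    tilt_deg: float      # tilt from an accelerometer

@dataclass
class PointOfInterest:
    name: str
    lat: float
    lon: float

def search_nearby(state, query, index):
    """Stand-in for the server round trip: return matching POIs, nearest first."""
    def dist(p):
        return math.hypot(p.lat - state.lat, p.lon - state.lon)
    return sorted((p for p in index if query in p.name.lower()), key=dist)

# A user in a city points the device and searches for "museum".
state = DeviceState(lat=48.8606, lon=2.3376, heading_deg=90.0, tilt_deg=80.0)
index = [
    PointOfInterest("Louvre Museum", 48.8606, 2.3376),
    PointOfInterest("Orsay Museum", 48.8600, 2.3266),
    PointOfInterest("City Hall", 48.8566, 2.3522),
]
results = search_nearby(state, "museum", index)
print([p.name for p in results])  # nearest matches first
```

In the patent the search runs on a server that also receives the device's position, direction, and tilt; here everything is collapsed into one local function for illustration.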

Here’s Apple’s background and summary of the invention: “Augmented reality systems supplement reality, in the form of a captured image or video stream, with additional information. In many cases, such systems take advantage of a portable electronic device’s imaging and display capabilities and combine a video feed with data describing objects in the video. In some examples, the data describing the objects in the video can be the result of a search for nearby points of interest.

“For example, a user visiting a foreign city can point a handheld communication device and capture a video stream of a particular view. A user can also enter a search term, such as ‘museums.’

“The system can then augment the captured video stream with search term result information related to nearby museums that are within the view of the video stream. This allows a user to supplement their view of reality with additional information available from search engines.
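Deciding which results are “within the view of the video stream” comes down to comparing each point’s bearing against the camera heading and field of view. A minimal sketch, in which the flat-earth bearing math and the 60-degree field of view are my assumptions rather than anything the patent specifies:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate compass bearing from point 1 to point 2 (flat-earth
    approximation, adequate over a few city blocks)."""
    dx = (lon2 - lon1) * math.cos(math.radians(lat1))  # east component
    dy = lat2 - lat1                                   # north component
    return math.degrees(math.atan2(dx, dy)) % 360

def in_view(poi_bearing, camera_heading, fov_deg=60.0):
    """True if the POI falls inside the camera's horizontal field of view."""
    diff = (poi_bearing - camera_heading + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# Device at the origin facing due east (heading 90 degrees).
print(in_view(bearing_deg(0.0, 0.0, 0.0, 0.01), 90.0))  # True: POI due east
print(in_view(bearing_deg(0.0, 0.0, 0.01, 0.0), 90.0))  # False: POI due north
```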

“However, if a user desires to visit one of the museums, the user must switch applications or, at a minimum, switch out of the augmented reality view to get directions to the museum. Such systems can fail to orient a user with a poor sense of direction and force the user to correlate the directions with objects in reality. Such a transition is not always as easy as it might seem.

“For example, an instruction that directs a user to go north on Main St. assumes that the user can discern which direction is north. Further, in some instances, street signs might be missing or indecipherable, making it difficult for the user to find the directed route.

“Such challenges can be overcome using the present technology. Therefore, a method and system for displaying augmented reality maps are disclosed. By interpreting the data describing the surrounding areas, the device can determine what objects are presently being viewed on the display. The device can further overlay information regarding the presently viewed objects, thus enhancing reality.

“In some embodiments, the device can also display search results overlaid onto the displayed video feed. Search results need not be actually viewable by a user in real life. Instead, search results can also include more-distant objects.

“The user can interact with the display using an input device such as a touch screen. Using the input device, the user can select from among objects represented on the screen, including the search results.
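Selecting “from among objects represented on the screen” is essentially a hit test against the overlaid labels. A minimal sketch; the pixel threshold and label layout are invented for the example:

```python
def select_label(tap_xy, labels, max_px=40):
    """Return the overlaid label nearest the tap, if within max_px pixels.
    `labels` maps a POI name to its (x, y) screen position."""
    tx, ty = tap_xy
    best, best_d = None, max_px
    for name, (x, y) in labels.items():
        d = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
        if d <= best_d:
            best, best_d = name, d
    return best

labels = {"Louvre Museum": (160, 220), "Orsay Museum": (420, 240)}
print(select_label((165, 210), labels))  # Louvre Museum
print(select_label((10, 10), labels))    # None: tap too far from any label
```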

“In one form of interaction, a device can receive an input from the user requesting directions from a present location to a selected search result. Directions can be overlaid onto the presently displayed video feed, thus showing a course and upcoming turns. As the user and associated device progress along a route, the overlaid directions can automatically update to show the updated path.
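Keeping the overlay current as the user moves can be as simple as dropping waypoints once they are reached. A toy version of that update; the reach radius measured in raw degrees is a placeholder, not how a shipping app would measure distance:

```python
import math

def advance_route(route, position, reach_radius=0.0002):
    """Drop leading waypoints the user has already reached, so the overlay
    shows only the remaining path (degrees used as a rough distance unit)."""
    remaining = list(route)
    while remaining and math.hypot(remaining[0][0] - position[0],
                                   remaining[0][1] - position[1]) < reach_radius:
        remaining.pop(0)
    return remaining

route = [(48.8600, 2.3400), (48.8610, 2.3400), (48.8610, 2.3420)]
# The user has walked to (essentially) the first waypoint:
print(advance_route(route, (48.86001, 2.34001)))  # two waypoints remain
```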

“In some embodiments, the display can also include indicator graphics to point the user in a proper direction. For example, if the user is facing south but a route requires the user to progress north, ‘no route’ would be displayed in the display because the user would be looking to the south but the route would be behind him or her. In such instances, an indicator can point the user in the proper direction to find the route.
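The indicator logic reduces to the signed angle between the camera heading and the bearing of the next route segment. A hedged sketch, with the 60-degree view cone assumed:

```python
def route_indicator(camera_heading, route_bearing, fov_deg=60.0):
    """If the next route segment is outside the camera view, return which way
    to turn ('left'/'right'); return None when the route is already in view."""
    diff = (route_bearing - camera_heading + 180) % 360 - 180
    if abs(diff) <= fov_deg / 2:
        return None          # route visible: draw it, no indicator needed
    # Directly behind (diff == -180) falls out as 'left' here; a real UI
    # might show a U-turn cue instead.
    return "right" if diff > 0 else "left"

# Facing south (180 degrees) while the route heads north (0 degrees):
# the route is behind the user, so an indicator is shown instead of a path.
print(route_indicator(180.0, 0.0))  # left
print(route_indicator(0.0, 0.0))    # None: route straight ahead
```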

“In some embodiments, multiple display views can be presented based on the orientation of the device. For example, when the device is held at an angle of 45 degrees to 180 degrees with respect to the ground, the display view can present the augmented reality embodiments described herein. However, when the device is held at an angle of less than 45 degrees, an illustrated or schematic view can be presented. In such embodiments, when the device is held at an angle of less than 45 degrees with respect to the ground, the device is likely pointed at the ground, where few objects of interest are likely to be represented in the displayed video. In such instances, a different map view is more likely to be useful. It should be appreciated that the precise range of tilt can be adjusted according to the actual environment or user preferences.
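That tilt threshold maps naturally onto a two-way mode switch. A minimal sketch; the function name and the hard 45-degree cutoff are illustrative, and the patent itself notes the range can be tuned:

```python
def display_mode(tilt_deg, threshold_deg=45.0):
    """Pick the view from device tilt relative to the ground: held up (at or
    above the threshold) -> augmented live video; held flat -> schematic map."""
    return "augmented_reality" if tilt_deg >= threshold_deg else "schematic_map"

print(display_mode(80.0))  # device raised toward the street scene
print(display_mode(10.0))  # device pointed at the ground
```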

“In practice, a user points a handheld communication device to capture and display a real-time video stream of a view. The handheld communication device detects a geographic position, camera direction, and tilt of the image capture device. The user sends a search request to a server for nearby points of interest. The handheld communication device receives search results based on the search request, geographic position, camera direction, and tilt of the handheld communication device.

“The handheld communication device visually augments the captured video stream with data related to each point of interest. The user then selects a point of interest to visit. The handheld communication device visually augments the captured video stream with a directional map to a selected point of interest in response to the user input.

“A method of augmenting a video stream of a device’s present surroundings with navigational information is disclosed. The user can instruct the device to initiate a live video feed using an onboard camera and display the captured video images on a display. By polling a Global Positioning System (GPS) device, a digital compass, and optionally, an accelerometer, location, camera direction, and orientation information can be determined. By using the location, camera direction, and orientation information, the device can request data describing the surrounding areas and the objects therein. In some embodiments, this data includes map vector data.
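The polling step amounts to combining one reading from each sensor into a single fix. A sketch with hypothetical sensor objects standing in for the real GPS, compass, and accelerometer APIs (nothing here reflects an actual iOS interface):

```python
from dataclasses import dataclass

@dataclass
class Fix:
    lat: float
    lon: float
    heading_deg: float   # from the digital compass
    tilt_deg: float      # from the accelerometer (optional in the patent)

def read_fix(gps, compass, accelerometer=None):
    """Combine one poll of each sensor into the location/direction/orientation
    triple the patent describes. The sensor objects are stand-ins: anything
    with .poll() returning the current reading works."""
    lat, lon = gps.poll()
    heading = compass.poll()
    tilt = accelerometer.poll() if accelerometer else 90.0  # assume held upright
    return Fix(lat, lon, heading, tilt)

class Stub:
    def __init__(self, value): self.value = value
    def poll(self): return self.value

fix = read_fix(Stub((37.33, -122.03)), Stub(270.0), Stub(65.0))
print(fix)
```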

“This can be requested from an onboard memory or a server. The data describing surrounding areas can further be requested in conjunction with a search request. The search request can also include a request for information about nearby places of interest.”
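“Onboard memory or a server” suggests a simple cache-then-network lookup. A minimal sketch of that two-source pattern, with all names and data shapes invented:

```python
def fetch_area_data(key, cache, server_fetch):
    """Try onboard memory first, fall back to a server request, and keep
    the answer locally for the next lookup."""
    if key not in cache:
        cache[key] = server_fetch(key)
    return cache[key]

cache = {}
calls = []
def server_fetch(key):
    calls.append(key)  # record each simulated network round trip
    return {"tile": key, "pois": ["Louvre Museum"]}

fetch_area_data("48.86,2.33", cache, server_fetch)
fetch_area_data("48.86,2.33", cache, server_fetch)
print(len(calls))  # 1: the second request is served from onboard memory
```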

— Dennis Sellers