Microsoft Is Trying To Patent “GRASP SIMULATION OF A VIRTUAL OBJECT” And Other Interesting Inventions

GRASP SIMULATION OF A VIRTUAL OBJECT

The claimed subject matter provides a system and/or a method for simulating grasping of a virtual object. Virtual 3D objects receive simulated user input forces via a 2D input surface adjacent to them. An exemplary method comprises receiving a user input corresponding to a grasping gesture that includes at least two simulated contacts with the virtual object. The grasping gesture is modeled as a simulation of frictional forces on the virtual object. A simulated physical effect on the virtual object by the frictional forces is determined. At least one microprocessor is used to display a visual image of the virtual object moving according to the simulated physical effect.
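The abstract does not disclose the friction model, but the idea of deciding whether a two-contact grasp moves the object can be sketched with a simple Coulomb friction test. Everything below (the coefficient of friction, the `Contact` fields, the function names) is an illustrative assumption, not the patent's actual method:

```python
from dataclasses import dataclass

MU = 0.6  # assumed fingertip/object coefficient of friction

@dataclass
class Contact:
    normal_force: float      # force pressing into the object surface (N)
    tangential_force: float  # force dragging along the surface (N)

def grip_holds(contacts, mu=MU):
    """Coulomb friction test: each simulated contact holds while its
    tangential load stays within mu times its normal load."""
    return all(c.tangential_force <= mu * c.normal_force for c in contacts)

def step_object(position, drag, contacts, mu=MU):
    """Move the virtual object with the grasp only if friction holds;
    otherwise the contacts slip and the object stays put."""
    if grip_holds(contacts, mu):
        return tuple(p + d for p, d in zip(position, drag))
    return position
```

With two firm contacts the object follows the drag; if either contact's tangential load exceeds the friction limit, the grasp slips and the object does not move.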

RECOGNIZING MULTIPLE INPUT POINT GESTURES

The present invention extends to methods, systems, and computer program products for recognizing multiple input point gestures. A recognition module receives an ordered set of points indicating that contacts have been detected in a specified order at multiple different locations on a multi-touch input surface. The recognition module determines the position of subsequently detected locations (e.g., a third detected location) relative to (e.g., to the left or right of) line segments connecting previously detected locations (e.g., connecting the first and second detected locations). The recognition module also detects whether line segments connecting subsequently detected locations (e.g., connecting the third and fourth detected locations) intersect line segments connecting previously detected locations (e.g., connecting the first and second detected locations). The recognition module recognizes an input gesture based on the relative positions and on whether or not the line segments intersect, and then identifies a corresponding input operation (e.g., cut, paste, etc.) to be performed.
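The two geometric primitives the abstract relies on, which side of a segment a point falls on and whether two segments cross, are standard cross-product tests. A minimal sketch (the patent does not publish its implementation):

```python
def side(a, b, p):
    """Sign of the 2D cross product: > 0 if point p lies to the left of
    the directed segment a->b, < 0 if to the right, 0 if collinear."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """Proper-intersection test: the endpoints of each segment must
    straddle the line through the other segment."""
    d1, d2 = side(p3, p4, p1), side(p3, p4, p2)
    d3, d4 = side(p1, p2, p3), side(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0
```

A recognizer in the abstract's style would call `side` with the third contact against the first-second segment, call `segments_intersect` on the first-second and third-fourth segments, and map the resulting pattern of signs and crossings to a gesture.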

EMAIL VIEWS


Email viewing techniques are described. In implementations, a determination is made regarding one or more types of content that are included in an email through examination of metadata that describes the one or more types of content. The determination is made responsive to selection of an email in a user interface for output. A view is then chosen from a plurality of views based on the determination, and the email is output in the user interface using the chosen view.
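The selection step amounts to a lookup from metadata-declared content types to a view. A minimal sketch, where the content-type names and view registry are hypothetical placeholders rather than anything from the patent:

```python
# Hypothetical content-type -> view mapping, for illustration only.
VIEW_FOR_TYPE = {
    "photos": "gallery_view",
    "itinerary": "trip_view",
    "newsletter": "reading_view",
}

def choose_view(metadata):
    """Pick a view based on the email's metadata-declared content types,
    falling back to a plain view when nothing matches."""
    for content_type in metadata.get("content_types", []):
        if content_type in VIEW_FOR_TYPE:
            return VIEW_FOR_TYPE[content_type]
    return "plain_view"
```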

CHANGING POWER MODE BASED ON SENSORS IN A DEVICE

An orientation of a device is detected based on a signal from at least one orientation sensor in the device. In response to the detected orientation, the device is placed in a full power mode.
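The decision described is a simple mapping from an orientation reading to a power mode. As a rough sketch, assuming a normalized accelerometer z-axis value and a made-up threshold (the patent does not specify either):

```python
FACE_UP_THRESHOLD = 0.8  # assumed cutoff on the normalized z-axis reading

def power_mode(z_accel):
    """Map an orientation-sensor reading to a power mode: a device lying
    roughly face up is assumed to be in use, so it gets full power."""
    return "full" if z_accel >= FACE_UP_THRESHOLD else "low"
```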

COLLAPSIBLE TABBED USER INTERFACE

A tab-based collapsible user interface includes selectable user interface tabs, a ribbon area, and an editing surface. When a browse tab is selected, the ribbon area displays information and does not include any user interface controls for performing commands. When the browse tab is selected, a vertical scroll bar is displayed adjacent to the ribbon area and the editing surface. When the vertical scroll bar is used, the ribbon area and the editing surface are both scrolled. When a page tab or a contextual tab is selected, the ribbon area displays user interface controls for performing commands. When a page tab or a contextual tab is selected, a vertical scroll bar is displayed adjacent to the editing surface but not adjacent to the ribbon area. When the vertical scroll bar is used, the contents of the editing surface are scrolled but the ribbon area is not scrolled.
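The scroll rules above reduce to a small decision: which regions the vertical scroll bar moves depends on the selected tab. A sketch with illustrative tab and region names:

```python
def scroll_targets(selected_tab):
    """Return the regions the vertical scroll bar moves, per the rules
    above. Tab and region names are illustrative, not from the patent."""
    if selected_tab == "browse":
        # Browse tab: the scroll bar spans the ribbon area and the
        # editing surface, and both scroll together.
        return ("ribbon", "editing_surface")
    # Page and contextual tabs: the scroll bar sits beside the editing
    # surface only, so the ribbon stays fixed.
    return ("editing_surface",)
```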

THROWING GESTURES FOR MOBILE DEVICES

At least one tilt sensor generates a sensor value. A context information server receives the sensor value and sets at least one context attribute. An application uses the context attribute to determine that a flinging gesture has been made and changes an image on a display in response to the flinging gesture.
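A fling detector of this kind can be sketched as a threshold on the tilt-rate samples exposed through the context attribute. The threshold value and direction labels below are assumptions for illustration:

```python
FLING_THRESHOLD = 2.5  # assumed tilt-rate magnitude that counts as a fling

def detect_fling(tilt_rates):
    """Flag a fling when any sampled tilt rate exceeds the threshold;
    the sign of that sample gives the fling direction."""
    for rate in tilt_rates:
        if abs(rate) > FLING_THRESHOLD:
            return "forward" if rate > 0 else "backward"
    return None
```

The application would then advance or rewind the displayed image according to the returned direction, or do nothing when no fling is detected.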

About the author / Pradeep

Pradeep is a Computer Science & Engineering graduate.