Bungie petitions can be found at:

https://ptabtrials.uspto.gov/prweb/PRWebLDAP2/HcI5xOSeX_yQRYZAnTXXCg%5B%5B*/!STANDARD?UserIdentifier=searchuser#

Paper No. ____
Filed: May 26, 2015
Filed on behalf of: Bungie, Inc.
By: Michael T. Rosato
Matthew A. Argenti
WILSON SONSINI GOODRICH & ROSATI
701 Fifth Avenue, Suite 5100
Seattle, WA 98104-7036
Tel.: 206-883-2529
Fax: 206-883-2699
Email: mrosato@wsgr.com
Email: margenti@wsgr.com
UNITED STATES PATENT AND TRADEMARK OFFICE
_____________________________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
_____________________________
BUNGIE, INC.,
Petitioner,
v.
WORLDS INC.,
Patent Owner.
_____________________________
Patent No. 7,945,856
_____________________________
PETITION FOR INTER PARTES REVIEW OF
U.S. PATENT NO. 7,945,856
Table of Contents
Page
I. INTRODUCTION ................................................................................................. 1
A. Brief Overview of the ’856 Patent ........................................................ 1
B. Brief Overview of the Prosecution History ........................................... 3
C. Brief Overview of the Scope and Content of the Prior Art ................... 4
D. Level of Skill in the Art ......................................................................... 9
II. GROUNDS FOR STANDING ................................................................................. 9
III. MANDATORY NOTICES UNDER 37 C.F.R. § 42.8 ............................................... 9
IV. STATEMENT OF THE PRECISE RELIEF REQUESTED FOR EACH CLAIM
CHALLENGED .................................................................................................. 10
V. CLAIM CONSTRUCTION ................................................................................... 11
i. “avatar” ..................................................................................... 11
ii. “determining” ............................................................................ 11
iii. “client process”/”server process” .............................................. 13
VI. STATEMENT OF NON-REDUNDANCY ............................................................... 14
VII. DETAILED EXPLANATION OF GROUNDS FOR UNPATENTABILITY ................... 14
A. [Ground 1] Claim 1 is Anticipated under 35 U.S.C. § 102 by
Funkhouser .......................................................................................... 14
B. [Ground 2] Claim 1 is Anticipated under 35 U.S.C. § 102 by
Durward ............................................................................................... 26
VIII. CONCLUSION ................................................................................................... 36
IX. PAYMENT OF FEES UNDER 37 C.F.R. §§ 42.15(A) AND 42.103 ....................... 37
X. APPENDIX – LIST OF EXHIBITS ........................................................................ 38
I. Introduction
Pursuant to the provisions of 35 U.S.C. § 311 and § 6 of the Leahy-Smith
America Invents Act (“AIA”), and to 37 C.F.R. Part 42, Bungie, Inc.
(“Petitioner”) hereby requests review of United States Patent No. 7,945,856 to
Leahy et al. (hereinafter “the ’856 patent,” Ex. 1001) that issued on May 17, 2011,
and is currently assigned to Worlds Inc. (“Patent Owner”). This Petition
demonstrates, by a preponderance of the evidence, that there is a reasonable
likelihood that claim 1 of the ’856 patent is unpatentable for failing to distinguish
over prior art. Thus, claim 1 of the ’856 patent should be found unpatentable and
canceled.
A. Brief Overview of the ’856 Patent
The ’856 patent is entitled “System and Method for Enabling Users to
Interact in a Virtual Space.” See also Ex. 1002, ¶¶ 14-26. In a general sense, the
’856 patent is directed to a client-server network system for enabling multiple
users to interact with each other in a virtual world. See, e.g., Ex. 1001 at Abstract,
claim 1. The specification describes a method where a number of users can
interact in a virtual world at the same time. Id. at Abstract. Each user is
represented by an avatar and interacts with a client system that “is networked to a
virtual world server.” Id. at 3:9-10.
A user’s movement and viewing of the virtual world, per the ’856 patent,
includes server-based processing of users’ virtual world positional information, in
addition to client processing techniques similar to previous peer-to-peer systems.
Id. at Abstract, 1:66-2:5. The ’856 patent indicates “each user executes a client
process to view a virtual world from the perspective or point of view of that user.”
Id. at Abstract, 2:35-38, 5:22-30. The ’856 patent states “[i]n order that the view
can be updated to reflect the motion of the remote user’s avatars, motion
information is transmitted to a central server process which provides positional
updates to client computers for neighbors of the user at that client.” Id. at Abstract,
2:36-43, 5:36-54. As the user avatar moves throughout the virtual space, the user’s
client system sends the server updates regarding the user’s “current location, or
changes in its current location, . . . and receives updated position information of the
other clients.” Id. at 3:34-39.
Figure 1 (shown) of the ’856 patent provides a depiction of one user’s field of view
or perspective showing two other users that are represented in a virtual world by
penguin avatars. Id. at 3:28-30, 3:40.
Claim 1 of the ’856 patent recites the following:
1. A method for enabling a first user to interact with second users in a
virtual space, wherein the first user is associated with at first avatar
and a first client process, the first client process being configured for
communication with a server process, and each second user is
associated with a different second avatar and a second client process
configured for communication with the server process, at least one
second client process per second user, the method comprising:
(a) receiving by the first client process from the server process
received positions of selected second avatars; and
(b) determining, from the received positions, a set of the second
avatars that are to be displayed to the first user,
wherein the first client process receives positions of fewer than all of
the second avatars.
Step (a) of claim 1 relates to a central concept of the ’856 patent: server
filtering, by which the server filters information to send to a client so that the client
will receive positional information on a subset of users in a virtual world.
Ex. 1002, ¶ 19. This is reflected in claim 1 as a client receiving from a server
“positions of selected second avatars” wherein the selected second avatars are
“fewer than all of the second avatars.” Id. The client processes information
received from the server to “determin[e], from the received positions, a set of the
other users’ avatars that are to be displayed,” as in step (b) of claim 1. Id. As
discussed in more detail below, both the server-side filtering and client-side
processing claimed by the ’856 patent were described in the prior art and known to
those in the field of virtual reality systems prior to the ’856 patent. Id. at ¶ 26.
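For illustration only, the simplified sketch below expresses this two-step pattern in code. The names, data structures, and proximity-based selection rule are hypothetical and are not drawn from the ’856 patent or the cited references; the sketch merely shows a server sending positions of fewer than all avatars (step (a)) and a client determining, from the received positions, which avatars fall within the first user’s field of view (step (b)).

    # Illustrative sketch only: hypothetical names, not drawn from the '856 patent or the cited art.
    import math
    from dataclasses import dataclass

    @dataclass
    class Avatar:
        user_id: str
        x: float
        y: float

    def server_select_positions(viewer: Avatar, others: list[Avatar], radius: float) -> list[Avatar]:
        """Step (a) analog: the server sends positions of only 'selected' avatars
        (here, those within a radius of the viewer), i.e., fewer than all avatars."""
        return [a for a in others if math.hypot(a.x - viewer.x, a.y - viewer.y) <= radius]

    def client_determine_displayed(viewer: Avatar, facing_deg: float,
                                   received: list[Avatar], fov_deg: float = 90.0) -> list[Avatar]:
        """Step (b) analog: from the received positions, the client determines the
        subset of avatars that fall within the viewer's field of view."""
        displayed = []
        for a in received:
            bearing = math.degrees(math.atan2(a.y - viewer.y, a.x - viewer.x))
            delta = (bearing - facing_deg + 180.0) % 360.0 - 180.0
            if abs(delta) <= fov_deg / 2.0:
                displayed.append(a)
        return displayed

    if __name__ == "__main__":
        viewer = Avatar("A", 0.0, 0.0)
        others = [Avatar("B", 3.0, 1.0), Avatar("C", -40.0, 2.0), Avatar("D", 1.0, -8.0)]
        received = server_select_positions(viewer, others, radius=10.0)   # B and D only
        shown = client_determine_displayed(viewer, facing_deg=0.0, received=received)
        print([a.user_id for a in received], [a.user_id for a in shown])  # ['B', 'D'] ['B']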
B. Brief Overview of the Prosecution History
Application No. 12/353,218 was filed on January 13, 2009 and issued on
May 17, 2011 as the ’856 patent. The ’856 patent is a continuation claiming
priority benefit back to U.S. Provisional Patent Application No. 60/020,296, filed
on November 13, 1995.
Following an initial office action rejecting the claim, and consistent with the
reason for allowance in the parent applications, the sole distinction argued over the
prior art was that the reference relied upon by the Examiner did not disclose “the
first client process receives positions of fewer than all of the second avatars” and
rather “appear[ed] at least to suggest that each terminal receives positions of all of
the other avatars.” See Ex. 1004 at 0214-0215. The Examiner accepted this
distinction and allowed the claim. Ex. 1004 at 0231. While the server-filtering
requirement was thus relied on as a key distinction over the prior art during
prosecution, as explained by Dr. Zyda and discussed in detail below, this
limitation, as well as all other claim elements, was in fact disclosed by multiple
references predating the ’856 patent.
Both references which form the basis for challenge in this Petition were
among over 300 references cited to the Examiner in IDSs. The Examiner indicated
that he considered both references. Ex. 1004 at 0243, 0246 (“ALL REFERENCES
CONSIDERED EXCEPT WHERE LINED THROUGH. /K.N./”). Neither
reference was ever applied against the claims in an office action, either in this
prosecution or any related prosecution, and it is unclear to what extent the
Examiner had an opportunity to substantively consider them. Moreover, the
specific arguments presented in this petition were not before the Examiner during
prosecution, nor did the Examiner have the benefit of Dr. Zyda’s testimony as
submitted here.
C. Brief Overview of the Scope and Content of the Prior Art
This petition is supported by the expert declaration of Dr. Michael Zyda.
Ex. 1002; see also, e.g., Ex. 1002, ¶¶ 1-10. Dr. Zyda is currently the Founding
Director of the University of Southern California GamePipe Laboratory, and a
Professor of Engineering Practice in the USC Department of Computer Science.
Dr. Zyda oversees the USC Games program, one of the top academic programs for
game development in the world. He has over 40 years of experience in the fields
of computer graphics and large-scale networked 3D virtual environments, among
other game-related disciplines. From 1986 to 2000 he was the Director of the
NPSNET Research Group while at the Naval Postgraduate School, developing
large-scale, low-cost networked virtual reality systems for military simulation
exercises. He also is a co-founder and past chair of the Association for Computing
Machinery (ACM) Special Interest Group on Computer Graphics (SIGGRAPH)
Symposium on Interactive 3D Graphics. He is a member of the Academy of
Interactive Arts & Sciences, was given a lifetime appointment as a National
Associate of the National Academies by the Council of the National Academy of
Sciences, and has published over 130 technical books, reports, and papers relating
to modeling, simulation, virtual reality, virtual environments, and computing.
As explained in detail in Dr. Zyda’s declaration and addressed in further
detail below (Section VII), the involved claims would not have been considered
new or nonobvious to a person of ordinary skill in the art at the relevant time.
Ex. 1002, ¶¶ 34-49, 65-110. Specifically, virtual reality systems utilizing
client/server network architecture were known prior to the alleged invention of the
’856 patent, as were the server-side filtering and client processing that are the
focus of the ’856 patent claims. Id.
Beginning with virtual worlds generally, the term “virtual world” was first
coined in the mid-1960s by Ivan Sutherland, a leader in the field of computer
graphics. Ex. 1002, ¶ 34. Some of the earliest such systems were developed in the
1970s, presenting users with simple displays representing a common space where
they could interact. Ex. 1002, ¶ 35; see also Ex. 1012. As computer networks
became more sophisticated throughout the 1980s, the type of networks used to
implement virtual reality systems likewise evolved. Ex. 1002, ¶¶ 36-37; see also
Ex. 1013; Ex. 1014; Ex. 1015; Ex. 1016. By the early to mid-1990s, the client-server
network approach was utilized in many virtual reality systems. Ex. 1002,
¶ 38. In fact, the ’856 patent acknowledges this. Ex. 1001 at 1:52-65.
With regard to the subject matter claimed in the ’856 patent, virtual reality
systems and methods utilizing a client-server network approach, and employing
server filtering and client processing in the manner claimed, were also known.
This is illustrated in the prior art on which the current challenge is based and
includes the references briefly discussed below and in further detail in Section VII.
Thomas Funkhouser’s article “RING: A Client-Server System for Multi-User
Virtual Environments” (hereinafter “Funkhouser,” Ex. 1005) described a
virtual reality system utilizing a client-server architecture, server filtering, and
client processing before the application for the ’856 patent. Funkhouser appears in
a collection of papers entitled PROCEEDINGS OF THE 1995 SYMPOSIUM ON
INTERACTIVE 3D GRAPHICS (“1995 SI3D”). Ex. 1006. The 1995 SI3D symposium
was sponsored by ACM SIGGRAPH, and took place on April 9-12, 1995. Id. at
Title Page. Funkhouser, in particular, was distributed at the symposium and
presented on the morning of April 11, 1995. Id. at 2. Dr. Zyda served as the chair
of this symposium in 1995, among other years, and has personal knowledge that
copies of the proceedings, which included Funkhouser, were distributed to the
approximately 250 attendees at the symposium. Ex. 1002, ¶¶ 40-41. Accordingly,
Funkhouser was published and distributed no later than April 12, 1995, the final
day of the 1995 SI3D symposium, and qualifies as prior art under 35 U.S.C.
§ 102(a). Moreover, Funkhouser itself explicitly confirms that the Proceedings
publication was provided to ACM members pursuant to its Member Plus program,
“which means distribution of the proceedings to more than 4,000 individuals.” Ex.
1006 at 4. Further evidence of Funkhouser’s public availability and distribution is
provided by a copy of the Proceedings publication received by the University of
Illinois Department of Computer Science Library on June 27, 1995. Ex. 1007.
Funkhouser “describes the client-server design, implementation and
experimental results for a system that supports real-time visual interaction between
a large number of users in a shared 3D virtual environment.” Ex. 1005 at 01
(Abstract). Funkhouser also teaches the concept that a virtual reality network’s
resources can be most efficiently utilized by filtering updates at the server level,
explaining that “a server may determine that a particular update message is
relevant only to a small subset of clients and then propagate the message only to
those clients or their servers.” Id. at 03; Ex. 1002, ¶ 44. In Funkhouser, this
“server-based message culling” is based on “precomputed line-of-sight visibility
information,” i.e., if a particular user’s avatar is not in a location within the virtual
space where it is possible to see another user’s avatar, the server will filter out
updates regarding the other user’s position and not send them to the user’s
computer. Ex. 1005 at 04; Ex. 1002, ¶ 44.
Funkhouser also discloses the client processing claimed in the ’856 patent,
as the system it describes employs the common practice of providing a client with
somewhat more information than is strictly necessary to display the virtual world
to the user, whereby the client then narrows this information down to produce a
field-of-view display from the perspective of the user’s avatar. Ex. 1002, ¶ 46.
For example, Funkhouser provides an example where a client received information
regarding 14 remote entities from the server, but determined to display only the
four that fell within the client avatar’s field of view. See Ex. 1005 at 09, Plate II;
Ex. 1002, ¶ 46.
U.S. Patent No. 5,659,691 to Durward, et al. (hereinafter “Durward,”
Ex. 1008), describes a “virtual reality system” where “multiple users located at
different remote physical locations may communicate with the system.” Ex. 1008
at Abstract, 1:46-48. In the Durward virtual reality system, all interaction between
users passed through a “central control unit” server rather than directly from user
to user. See id. at 2:49-55, 6:13-52, Figs. 1, 2 & 7. Like Funkhouser, Durward
discloses the server filtering claimed in the ’856 patent, as it describes a system
where “data communicated to the user typically corresponds to the portion of the
virtual space viewed from the perspective of the virtual being.” Id. at 1:65-67;
Ex. 1002, ¶ 43. In Durward, “visual relevant spaces” are defined for various users,
and those spaces “determine which state changes are communicated to . . . users.”
Ex. 1008 at 4:54-56. Durward teaches that this server-side update filtering helped
“reduce the amount of data communicated between the [central] computer and
each user.” Id. at 2:9-12.
Durward also discloses the client “determining” aspect of the ’856 patent
claim, as even after the server-side filtering was applied each client would receive
information regarding more remote users than would actually be displayed to the
user. For example, the client would receive information regarding elements in the
“field of view of the [user’s] virtual being and areas in close proximity to it”
before then determining which of those elements actually fall in the field of view
in order to “display[] the artificial world from the perspective of the virtual being.”
Id. at 1:28-31, 5:13-27, Fig. 5; Ex. 1002, ¶ 45.
For these reasons, and as explained below and in Dr. Zyda’s declaration, the
method for allowing a plurality of users to interact with a virtual space as recited in
claim 1 was already described in the prior art as of the earliest priority date for the
’856 patent.
D. Level of Skill in the Art
As Dr. Zyda explains, a person of ordinary skill in the relevant field prior to
November 13, 1995 would include someone who had, through education or
practical experience, the equivalent of a bachelor’s degree in computer science or a
related field and at least an additional two years of work experience developing or
implementing networked virtual environments. Ex. 1002, ¶¶ 50-55.
II. Grounds for Standing
Petitioner certifies that, under 37 C.F.R. § 42.104(a), the ’856 patent is
available for inter partes review, and Petitioner is not barred or estopped from
requesting inter partes review of the ’856 patent on the grounds identified.
III. Mandatory Notices under 37 C.F.R. § 42.8
Real Party-in-Interest (37 C.F.R. § 42.8(b)(1)): Bungie, Inc. is the real
party-in-interest.
Related Matters (37 C.F.R. § 42.8(b)(2)): Petitioner is aware of the
following matter in which the ’856 patent has been asserted: Worlds, Inc. v.
Activision Blizzard, Inc. et al., Case No. 1:12-cv-10576-DJC (D. Mass.). Petitioner
is also seeking inter partes review of U.S. Patents Nos. 7,181,690, 7,493,558,
8,082,501, and 8,145,998, each of which claims priority to the same provisional
application as the ’856 patent.
Lead and Back-Up Counsel (37 C.F.R. § 42.8(b)(3))
Lead Counsel: Michael T. Rosato (Reg. No. 52,182)
Back-Up Counsel: Matthew A. Argenti (Reg. No. 61,836)
Service Information – 37 C.F.R. § 42.8(b)(4). Petitioner hereby consents to
electronic service.
Email: mrosato@wsgr.com; margenti@wsgr.com
Post: WILSON SONSINI GOODRICH & ROSATI, 701 Fifth Avenue, Suite 5100,
Seattle, WA 98104-7036
Tel.: 206-883-2529 Fax: 206-883-2699
IV. Statement of the Precise Relief Requested for Each Claim Challenged
Petitioner requests review of claim 1 of the ’856 patent under 35 U.S.C.
§ 311 and AIA § 6. The grounds for relief are expressly limited to a determination
that claim 1 of the ’856 patent be canceled as unpatentable as follows:
Ground 1 (Claim 1): Anticipated under 35 U.S.C. § 102 by Funkhouser
Ground 2 (Claim 1): Anticipated under 35 U.S.C. § 102 by Durward
V. Claim Construction
A claim subject to inter partes review receives the broadest reasonable
construction or interpretation in light of the specification of the patent in which it
appears, because among other reasons, the patent owner has an opportunity to
amend the claims. See 37 C.F.R. § 42.100(b); In re Cuozzo Speed Techs., LLC,
778 F.3d 1271, 1279-82 (Fed. Cir. 2015). A few terms that warrant discussion are
identified and discussed below.
i. “avatar”
Claim 1 of the ’856 patent recites the term “avatar.” The ’856 patent
specification states in the “Summary of the Invention” that “[t]he virtual world
shows avatars representing the other users who are neighbors of the user viewing
the virtual world.” Ex. 1001 at 2:36-38. The usage of the term “avatar” in the
’856 patent is consistent with the way it would be understood by one of ordinary
skill, who would understand it to refer to a graphical representation of a user.
Ex. 1002, ¶ 57; see also Ex. 1010 (Microsoft Press Computer Dictionary (3rd Ed.
1997) (“avatar: 1. In virtual-reality environments such as certain types of Internet
chat rooms, a graphical representation of a user.”)). Accordingly, the term “avatar”
should be construed to mean “a graphical representation of a user.”
ii. “determining”
Claim 1 requires that the claimed “client process” associated with a first user
performs the step of “determining, from the received positions, a set of the second
avatars that are to be displayed,” where the “received positions” refers to the
filtered set of position information regarding other users’ avatars that is received
from the server. Ex. 1002, ¶ 58.
Under the broadest reasonable interpretation, “determining” as recited in the
claims at least includes executing a client process to determine, from user positions
received from the server, other users’ avatar(s) located within a point of view or
perspective (e.g., field of view) of the first user. Id. at ¶ 59. Such a perspective
can then be output for display to the first user. This is consistent with disclosure in
the specification. See, e.g., Ex. 1001, Abstract (“[e]ach user executes a client
process to view a virtual world from the perspective of that user.”); see also id. at
5:28-32 (“Register 114 also provides the current position to rendering engine 120,
to inform rendering engine 120 of the correct view point for rendering. Remote
avatar position table 112 contains the current positions of the ‘in range’ avatars
near A’s avatar.”).
The specification further discusses a “crowd control” function that can be
applied “in some cases.” Id. at 5:31-54. The specification provides an example
where the client determines a set of other users’ avatars to display based on a
maximum number that can be displayed. See id. at 5:37-6:19 (discussing variable
N' maintained by client); see also id. at 5:65-67 (“...user A might have a way to
filter out avatars on other variables in addition to proximity, such as user ID.”);
Ex. 1002, ¶ 60. While not required under the BRI, the “crowd control” function
would also fall within the scope of “determining” per claim 1.
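For illustration only, the following simplified sketch shows one way such a crowd-control style determination could be expressed in code. The names are hypothetical, and the nearest-first cap is merely analogous to the specification’s example of a maximum number N′ of displayed avatars and filtering on other variables such as user ID; it is not taken from the ’856 patent.

    # Illustrative sketch only: hypothetical names; a simplified analog of the
    # "crowd control" example (a maximum number of displayed avatars, with optional
    # filtering on other variables such as user ID).
    import math

    def crowd_control(viewer_pos, received_positions, n_max, ignored_ids=frozenset()):
        """From positions received from the server, keep at most n_max avatars,
        preferring the nearest ones and skipping any explicitly ignored user IDs."""
        candidates = [
            (math.dist(viewer_pos, pos), user_id, pos)
            for user_id, pos in received_positions.items()
            if user_id not in ignored_ids
        ]
        candidates.sort()                      # nearest first
        return {user_id: pos for _, user_id, pos in candidates[:n_max]}

    if __name__ == "__main__":
        received = {"B": (1.0, 2.0), "C": (5.0, 5.0), "D": (0.5, 0.5), "E": (9.0, 1.0)}
        print(crowd_control((0.0, 0.0), received, n_max=2, ignored_ids={"C"}))
        # {'D': (0.5, 0.5), 'B': (1.0, 2.0)}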
The construction proposed for purposes of this review is also consistent with
the position advanced by the Patent Owner in district court litigation involving the
’856 patent. Patent Owner argued that the term “determining, from the received
positions, [a/the] set of the other users’ avatars that are to be displayed” “contains
no technical terms” and should be given its plain and ordinary meaning. Ex. 1009
at 6; see also id. at 11 (arguing that numerous methods may be used as the basis of
the “determining,” including “(among others), proximity, user ID, orientation,
strain on computing resources, local user selection, or any other participant
condition”). Accordingly, the step of determining, as recited in the claims, can
herein reasonably be construed in light of the specification as including executing a
client process to determine, from user positions received from the server, other
users’ avatar(s) located within a point of view or perspective (e.g., field of view) of
the first user.
iii. “client process”/“server process”
Claim 1 of the ’856 patent requires a “client process” and a “server process.”
The specification explains that “each user executes a client process to view a
virtual world from the perspective of that user.” Ex. 1001 at 2:35-36. Additionally,
“[i]n order that the [user’s] view can be updated to reflect the motion of the remote
user’s avatars, motion information is transmitted to a central server process which
provides position updates to client processes for neighbors of the user at the client
process.” Id. at 2:38-43. The specification further discloses that “one or more
computer systems are used to, [sic] implement virtual world server 22.” Id. at
3:61-63; see also Ex. 1002, ¶ 61.
The use of these terms in the ’856 patent is consistent with their common
and ordinary meaning to one of skill in the art. Ex. 1002, ¶ 62. Accordingly, a
person of ordinary skill would understand the term “client process” to refer to a
program being executed by a user’s computer that receives services from a server,
and would understand the term “server process” to refer to a program being
executed by a computer or computers that provide(s) services to a client. Id.
VI. Statement of Non-Redundancy
Each ground raised in this Petition is meaningfully distinct. Ground 1 relies
on Funkhouser, a conference-presented research paper that constitutes a printed
publication qualifying as prior art under 35 U.S.C. § 102(a). Funkhouser was
published approximately seven months before the priority date for the ’856 patent.
Ground 2 relies on different art in Durward, a U.S. Patent qualifying as prior art
under 35 U.S.C. § 102(e). The application for the Durward patent was filed in
September 1993, over two years before the priority date for the ’856 patent. In
addition to their separate and distinct disclosures, Funkhouser and Durward are
non-redundant should Patent Owner attempt to establish an earlier date of
invention for the ’856 patent. For example, should the Patent Owner attempt to
disqualify Funkhouser as prior art (e.g., swear behind), the availability of Durward
would likely render such an attempt moot considering the latter reference predates
the ’856 patent by some two years.
VII. Detailed Explanation Of Grounds For Unpatentability
A. [Ground 1] Claim 1 is Anticipated under 35 U.S.C. § 102 by
Funkhouser
Funkhouser, published no later than April 12, 1995, is qualified as a prior art
printed publication for this proceeding under 35 U.S.C. § 102(a). See supra,
Section I.C. As described in further detail below, the teachings of Funkhouser
disclose each and every feature of claim 1 of the ’856 patent. Ex. 1002, ¶¶ 63-84.
Funkhouser describes a client-server system for multi-user virtual
environments. Ex. 1005 at Title. The system disclosed in Funkhouser “supports
real-time visual interaction between a large number of users in a shared 3D virtual
environment.” Id. at 01. As described in Funkhouser, a “key feature of the system
is that server-based visibility algorithms compute potential visual interactions
between entities representing users in order to reduce the number of messages
required” to maintain the virtual environment across a wide-area network. Id.
Funkhouser discloses that this message reduction is accomplished by sending
avatar update information “only to workstations with entities that can potentially
perceive the change.” Id.; see also Ex. 1002, ¶¶ 65-66.
Funkhouser explains that this server-side filtering is based on computations
regarding which users are potentially visible to each other, based on their location
in the virtual environment. Ex. 1005 at 03 (Fig. 6); see also Ex. 1002, ¶ 66.
Funkhouser further explains that these computations regarding potential visibility
are then used to filter distribution of update messages only to those users to which
the updates are potentially relevant (i.e., potentially visible). Ex. 1005 at 04 (Fig.
7); see also Ex. 1002, ¶ 67.
Funkhouser also discloses the client processing in the manner claimed in the
’856 patent. Upon receiving the server-filtered update information, the client
workstation then “process[es] the update messages” and “simulat[es] behavior for
a small subset of the entities participating in the simulation.” Ex. 1005 at 08. This
processing includes a determination of which user avatars should be “displayed on
the client workstation screen from the point of view of one or more of its entities.”
Id. at 03; see also Ex. 1002, ¶ 68.
Funkhouser’s Figure 6 (annotated version shown here) provides an example
illustrating server-side filtering and client processing as claimed in the ’856 patent:
Ex. 1002, ¶ 69.
As Dr. Zyda explains, Funkhouser illustrates a virtual space (highlighted and
marked with Box #1) including all user entities (i.e., avatars) A, B, C and D
positioned therein. The shaded area Funkhouser shows in “stipple” (marked with
Box #2) includes users A and B (less than all users), and represents the cells within
the virtual space that are potentially visible to A. Client A will only receive
positional updates from the server for users which are within this area (Box #2).
The cross-hatched region (marked with Box #3) represents A’s perspective or field
of view. Thus, after receiving the filtered positional updates from the server for
less than all users as shown by Box #2 (i.e., corresponding to Step (a) of claim 1),
the client responsible for A will determine which, if any, remote users fall within
A’s field of view in order to display the perspective from A’s avatar as shown by
Box #3 (i.e., corresponding to Step (b) of claim 1). Id. at ¶ 69.
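For illustration only, the short sketch below models this server-based message culling in code. The cell assignments and visibility table are hypothetical values chosen to mirror the A/B/C/D example; the code is not taken from Funkhouser or the RING system.

    # Illustrative sketch only: hypothetical data, not taken from Funkhouser or RING.
    # Precomputed cell-to-cell visibility: an update from an entity in a given cell is
    # forwarded only to clients whose entities occupy a potentially visible cell.

    CELL_VISIBILITY = {          # cell -> set of cells potentially visible from it
        1: {1, 2},
        2: {1, 2, 3},
        3: {2, 3},
        4: {4},
    }

    ENTITY_CELL = {"A": 1, "B": 2, "C": 3, "D": 4}   # which cell each entity occupies

    def cull_update(updated_entity: str) -> list[str]:
        """Return the entities (hence clients) that should receive the update message."""
        source_cell = ENTITY_CELL[updated_entity]
        visible_cells = CELL_VISIBILITY[source_cell]
        return [e for e, cell in ENTITY_CELL.items()
                if e != updated_entity and cell in visible_cells]

    if __name__ == "__main__":
        print(cull_update("A"))   # ['B']        -> C and D never receive A's position
        print(cull_update("B"))   # ['A', 'C']   -> D never receives B's position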
The discussion below further illustrates that Funkhouser discloses each and
every element of claim 1 of the ’856 patent. The particular citations listed are
intended to be illustrative, not exhaustive.
Assuming that the claim 1 preamble is limiting, Funkhouser discloses the
preamble elements.
’856 Patent:
1. A method for enabling a first user to interact with second users in a virtual space, wherein the first user is associated with at [sic] first avatar and a first client process, the first client process being configured for communication with a server process, and each second user is associated with a different second avatar and a second client process configured for communication with the server process, at least one second client process per second user, the method comprising:

Funkhouser:
“This paper describes the client-server design, implementation, and experimental results for a system that supports real-time visual interaction between a large number of users in a shared 3D virtual environment.” p. 01 (Abstract).
“Each user is represented in the shared virtual environment by an entity [avatar] rendered on every other user’s workstation, and multi-user interaction is supported by matching user actions to entity updates in the shared virtual environment.” p. 01.
“Every RING entity is managed by exactly one client workstation. Clients execute the programs necessary to generate behavior for their entities.” p. 03.
“In a multi-user visual simulation system, users run an interactive interface program on (usually distinct) workstations connected to each other via a network.” p. 01.
“Communication between clients is managed by servers. Clients do not send messages directly to other clients, but instead send them to servers which forward them to other client and server workstations participating in the same distributed simulation (see Figure 5).” p. 03.
See also Ex. 1002, ¶¶ 70-72.
Funkhouser describes enabling interaction between users in a virtual space,
as recited in claim 1. Funkhouser describes “a system that supports real-time
visual interaction between a large number of users in a shared 3D virtual
environment.” Ex. 1005 at 01, Fig. 1 and corresponding discussion. Funkhouser
describes users as each having an associated avatar—e.g., “each user is represented
in the shared virtual environment by an entity [i.e., avatar] rendered on every other
user’s workstation.” Id.; see also Ex. 1002, ¶ 71. Each user avatar/entity “is
managed by exactly one client workstation.” Ex. 1005 at 03 (emphasis in
original). Dr. Zyda explains that each user and avatar has a client process
associated with it, in the form of software executed on the client workstation. Ex.
1002, ¶ 71; see also Ex. 1005 at 03 (“Clients execute the programs necessary to
generate behavior for their entities.”). Each client process is also in
communication with a server process, as Funkhouser explains that
“[c]ommunication between clients is managed by servers.” Ex. 1005 at 03, Fig. 5
and corresponding discussion; see also Ex. 1002, ¶ 72. Accordingly, Funkhouser
discloses a method for enabling a first user to interact with second users in a virtual
space as specifically recited in the preamble of claim 1.
Funkhouser discloses claim 1.1, or step (a) of claim 1:
’856 Patent:
[1.1] (a) receiving by the first client process from the server process received positions of selected second avatars; and

Funkhouser:
“When an entity changes state, update messages are sent only to workstations with entities that can potentially perceive the change – i.e., ones to which the update is visible.” p. 01. See also, similar discussion at p. 02.
“In the current implementation, RING servers forward update messages in real-time only to other servers and clients managing entities that can possibly ‘see’ the effects of the update. Server-based message culling is implemented using precomputed line-of-sight visibility information. Prior to the multi-user simulation, the shared virtual environment is partitioned into a spatial subdivision of cells whose boundaries are comprised of the static, axis-aligned polygons of the virtual environment. A visibility precomputation is performed in which the set of cells potentially visible to each cell is determined by tracing beams of possible sight-lines through transparent cell boundaries. During the multi-user simulation, servers keep track of which cells contain which entities by exchanging ‘periodic’ update messages when entities cross cell boundaries. Real-time update messages are propagated only to servers and clients containing entities inside some cell visible to the one containing the updated entity.” p. 03.
See also:
“If N entities move through a shared virtual environment simultaneously, each modifying its position and/or orientation M times per second, then M * N updates are generated to a shared database per second.” p. 01.
“Clients sent update messages only for changes in derivatives of entity position and/or orientation (i.e., dead reckoning) while other clients simulated intermediate positions with linear ‘smooth-back.’” p. 05.
“Object-space visibility algorithms are used to compute the region of influence for each state change, and then update messages are sent only to the small subset of workstations to which the update is relevant.” p. 02.
“If entity A is modified: client A sends an update message to server X. Server X propagates that message to server Y, but not to server Z because entities C and D are not inside cells in the cell-to-cell visibility of the cell containing entity A. Server Y forwards the message to Client B which updates the local surrogate for entity A. If entity B is modified...” p. 04. Figs. 4 & 6 and corresponding discussion.
See also Ex. 1002, ¶¶ 73-77.
As illustrated above and further described below, Funkhouser discloses the
claimed step of receiving a position of selected avatars from the server process.
First, Funkhouser discloses that clients receive updates regarding other entities
from the server, including updates regarding the other entities’ “position and/or
orientation.” Ex. 1005 at 01; see also Ex. 1002, ¶ 73. Funkhouser further
discloses that a particular client receives less than all of the positional updates from
the server, because “[w]hen an entity changes state, update messages are sent only
to workstations with entities that can potentially perceive the change – i.e., ones to
which the update is visible.” Ex. 1005 at 01. Upon computing the clients for
which the positional update is relevant (i.e., entities that can possibly see the
change), “update messages are sent only to the small subset of workstations to
which the update is relevant.” Id.; see also Ex. 1002, ¶ 74. Funkhouser refers to
this server-side filtering as “server-based message culling.” Ex. 1005 at 03; see
also Ex. 1002, ¶ 75.
Funkhouser provides an example illustrating its server-based message
culling with respect to the four entities (A, B, C, and D) depicted in Figures 4 and
6. See, e.g., annotated Figure 6 above (e.g., Box #2). The examples provided in
Funkhouser for A and B are illustrative:
If entity A is modified: client A sends an update message to server X.
Server X propagates that message to server Y, but not to server Z
because entities C and D are not inside cells in the cell-to-cell
visibility of the cell containing entity A. Server Y forwards the
message to Client B which updates the local surrogate for entity A.
If entity B is modified: client B sends an update to server Y. Server Y
then propagates that message to servers X and Z, which forward it to
clients A and C. Server Z does not send the message to client D
because the cell containing entity D is not in the cell-to-cell visibility
of the cell containing entity B.
Ex. 1005 at 04; see also Fig. 6; Ex. 1002, ¶¶ 76-77. As this example
illustrates, Funkhouser discloses that when users A and B, represented by their
respective avatars, provide updated positional information (i.e., move), avatar C
receives the positional update for B but not for A, because C is not in a cell in
which A is potentially visible. Ex. 1002, ¶¶ 84-85.
Accordingly, Funkhouser discloses receiving by the first client process from
the server process received positions of selected second avatars, as recited in claim
1.1.
Funkhouser discloses claim 1.2, or step (b) of claim 1:
’856 Patent:
[1.2] (b) determining, from the received positions, a set of the second avatars that are to be displayed to the first user,

Funkhouser:
“A visibility precomputation is performed in which the set of cells potentially visible to each cell is determined by tracing beams of possible sight-lines through transparent cell boundaries. . . . Since an entity’s visibility is conservatively over-estimated by the precomputed visibility of its containing cell, this algorithm allows servers to process update messages quickly using cell visibility ‘look-ups’ rather than more exact real-time entity visibility computations which would be too expensive on currently available workstations.” p. 03.
“If entity A is modified: client A sends an update message to server X. Server X propagates that message to server Y, but not to server Z because entities C and D are not inside cells in the cell-to-cell visibility of the cell containing entity A. Server Y forwards the message to Client B which updates the local surrogate for entity A. If entity B is modified, client B sends an update to server Y. Server Y then propagates that message to servers X and Z, which forward it to clients A and C. Server Z does not send the message to client D because the cell containing entity D is not in the cell-to-cell visibility of the cell containing entity B.” p. 04.
Figs. 4 & 6.
“Client workstations must store, simulate, and process update messages only for the subset of entities visible to one of the client’s local entities.” p. 04.
“Each client workstation must store in memory, process update messages, and simulate behavior for only a small subset of the entities participating in the entire distributed simulation.” p. 08.
“Each client processed only 11.35 messages (4360 bits) per second on average (row 13 of Table 1).” p. 06.
“Every RING entity is managed by exactly one client workstation. Clients execute the programs necessary to generate behavior for their entities. They may map user input to control of particular entities and may include viewing capabilities in which the virtual environment is displayed on the client workstation screen from the point of view of one or more of its entities. In addition to managing their own entities (local entities), clients maintain surrogates for some entities managed by other clients (remote entities). Surrogates contain (often simplified) representations for the entity's geometry and behavior. When a client receives an update message for an entity managed by another client, it updates the geometric and behavioral models for the entity's local surrogate. Between updates, surrogate behavior is simulated by every client.” p. 03.
“[U]sers run an interactive interface program on (usually distinct) workstations connected to each other via a network. The interface program simulates the experience of immersion in a virtual environment by rendering images of the environment as perceived from the user’s simulated viewpoint.” p. 01.
“Clients . . . may include viewing capabilities in which the virtual environment is displayed on the client workstation screen from the point of view of one or more of its entities.” p. 03.
“Client managing this entity must process updates, simulate behavior, and store surrogates for only 14 remote entities (2.7% of all entities) due to message processing in RING servers.” p. 09, Plate II.
See also Ex. 1002, ¶¶ 78-82.
Funkhouser discloses the client determining, from the received positions, a
set of the second avatars that are to be displayed to the first user. This includes
determining other users within the field of view of the first user (see, e.g., Box #3
of annotated Fig. 6). First, Funkhouser discloses that the set of entities for which a
given client receives updated positional information is “conservatively overestimated”
using the concepts of cells and a precomputed visibility determination,
such that it is broader than just those entities that are presently within the client’s
field of view. Ex. 1005 at 03; Ex. 1002, ¶ 78. This conservative overestimation
provides clients with updates for other avatars that are “potentially visible” to the
client, rather than a narrower set of updates that are actually visible based on the
client’s line of sight. Id.
As Dr. Zyda explains, this can be seen in the example discussed above
regarding annotated Fig. 6. Ex. 1002, ¶ 78. Client A receives positional
information from the server of less than all users and determines the avatar of
Client B is within A’s field of view and available for display to A. As another
example, client C receives a positional update when client B moves, even though
client C is facing away from client B and will not actually see the changed
position. See Ex. 1005 at 04; Ex. 1002, ¶ 78. This is further illustrated in Figure 4
of Funkhouser (shown here), in which each client’s cone-shaped field of view is
shown (see, e.g., “User C Visibility”). Ex. 1002, ¶ 78; see also Ex. 1005 at Fig. 4.
Funkhouser further discloses that upon
receiving the position information from the server, the client processes the
information, for example explaining that “[c]lient workstations must store,
simulate, and process update messages.” Ex. 1005 at 04; see also Ex. 1002, ¶ 79.
As Dr. Zyda discusses, this client processing includes identifying which of the
received positions fall within the client’s field of view to determine a set of the
other avatars to display to the user, based on “update[d] geometric and behavioral
models” calculated by the client when updates are received. Ex. 1005 at 03; Ex.
1002, ¶¶ 79-80.
In addition to maintaining and updating “surrogates” of other users at the
client workstation, the “interface program” run by users “on (usually distinct)
workstations simulates the experience of immersion in a virtual environment by
rendering images of the environment as perceived from the user’s simulated
viewpoint.” Ex. 1005 at 01, 03. In doing so, “the virtual environment is displayed
on the client workstation screen from the point of view of one or more of its
entities.” Id. at 03. In other words, as Dr. Zyda explains, after receiving the
filtered positional updates from the server, the client performs its own calculations,
including updating the surrogates of the remote entities, in order to determine
which of the remote entities to display within the client’s field of view. Ex. 1002,
¶ 81. Dr. Zyda further explains that this is further illustrated by Funkhouser’s Plate
II (showing a user’s viewpoint) and the corresponding caption, which describes
that the client received and processed updates for 14 remote entities but, as shown
in the picture, determined a set of four entities to display based on the user’s field
of view. Ex. 1005 at 09 (Plate II); Ex. 1002, ¶ 81.
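For illustration only, the simplified sketch below shows this client-side pattern of storing surrogates for the remote entities forwarded by the server, simulating them between updates, and then determining the subset to display. The names are hypothetical, and the one-dimensional “field of view” test is a toy stand-in for the actual view computation; the code is not taken from Funkhouser.

    # Illustrative sketch only: hypothetical structures loosely modeled on the surrogate
    # handling quoted above; not actual RING code.
    from dataclasses import dataclass

    @dataclass
    class Surrogate:
        x: float
        y: float
        vx: float = 0.0      # last reported velocity, used to extrapolate between updates
        vy: float = 0.0

    class ClientState:
        """Store surrogates only for the remote entities the server forwards,
        extrapolate them between updates, and determine which to display."""
        def __init__(self) -> None:
            self.surrogates: dict[str, Surrogate] = {}

        def on_update(self, entity: str, x: float, y: float, vx: float, vy: float) -> None:
            self.surrogates[entity] = Surrogate(x, y, vx, vy)

        def tick(self, dt: float) -> None:
            for s in self.surrogates.values():      # simulate intermediate positions
                s.x += s.vx * dt
                s.y += s.vy * dt

        def displayed(self, x_min: float, x_max: float) -> list[str]:
            # Toy "field of view": a horizontal band of the virtual space.
            return [e for e, s in self.surrogates.items() if x_min <= s.x <= x_max]

    if __name__ == "__main__":
        client = ClientState()
        client.on_update("B", x=2.0, y=0.0, vx=1.0, vy=0.0)
        client.on_update("C", x=50.0, y=0.0, vx=0.0, vy=0.0)
        client.tick(dt=1.0)                              # B extrapolates to x = 3.0
        print(client.displayed(x_min=0.0, x_max=10.0))   # ['B']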
In addition, Funkhouser teaches client processing in the context of
multiresolution simulation determination. Ex. 1005 at 07. For example,
Funkhouser describes “[b]y allowing clients to choose a behavior model to
simulate for each remote entity dynamically based on its perceptible importance to
local entities, we can further reduce the processing requirements of client
workstations.” Id.
Funkhouser therefore discloses determining, from the received positions, a
set of the second avatars that are to be displayed to the first user, as recited in claim
1.2.
Funkhouser discloses claim 1.3, as discussed below:
’856 Patent:
[1.3] wherein the first client process receives positions of fewer than all of the second avatars.

Funkhouser:
Funkhouser pp. 01, 02, 03, 04, 05, figs. 4 & 6 and corresponding discussion.
See also Ex. 1002, ¶ 83; see also discussion of claim 1, limitation [1.1].
As described above with respect to limitation 1.1, Funkhouser discloses that
the client process receives positions of fewer than all of the other avatars. The
client process receives a position of less than all of the other users’ avatars from
the server process, because, for example, Funkhouser discloses “server-based
message culling” in which “[r]eal-time update messages are propagated only to . . .
clients containing entities inside some cell visible to the one containing the updated
entity.” Ex. 1005 at 03. To the same point, Funkhouser discloses that “update
messages are sent only to workstations with entities that can potentially perceive
the change” (id. at 01) and that “state changes must be propagated only to hosts
containing entities that can possibly perceive the change” (id. at 02). See also Ex.
1002, ¶ 83. Funkhouser therefore discloses each aspect of limitation 1.3.
Accordingly, Funkhouser anticipates claim 1. Ex. 1002, ¶¶ 63-84.
B. [Ground 2] Claim 1 is Anticipated under 35 U.S.C. § 102 by
Durward
Durward, filed on September 23, 1993, is qualified as prior art for this
proceeding under 35 U.S.C. § 102(e). As described in further detail below,
Durward discloses each and every requirement of claim 1 of the ’856 patent.
Ex. 1002, ¶¶ 85-110.
Durward describes a virtual reality network with selective distribution and
updating of data to reduce bandwidth requirements. Ex. 1008 at Title; see also
Ex. 1002, ¶ 87; supra Section I.C. The system disclosed in Durward is “a virtual
reality network wherein multiple users at remote locations may telephone a central
communications center and participate in a virtual reality experience.” Ex. 1008 at
1:7-11. “[G]raphics data per se is not communicated on an interactive basis.
Instead, each user's computer has a copy of the entire virtual space (e.g.,
background, objects and primitives), and the data defining the virtual space
communicated to the users comprises only position, motion, control, and sound
data.” Id. at 4:17-23.
Durward discloses the server filtering as claimed in the ’856 patent.
Ex. 1002, ¶ 88. For example, Durward describes that, “[t]o provide maximum
flexibility and to facilitate communication with the users, each virtual being, and
hence each user, is assigned a visual relevant space which determines which data
defining the virtual space may be perceived by the user.” Ex. 1008 at 4:50-54.
“[V]isual relevant spaces determine which state changes are communicated to . . .
the users.” Id. at 4:54-56. Accordingly, the server of Durward sends position
information regarding less than all of the other users’ avatars to any given client—i.e.,
users in the relevant space, rather than all system users.
Durward also discloses the client processing as claimed in the ’856 patent.
Ex. 1002, ¶ 89. Upon receiving updated data, the client workstation “may update
the images viewed and sounds heard with the new positional and sound data.”
Ex. 1008 at 6:60-62. This update includes a determination of which of the other
user avatars to display in order to display “the portion of the virtual space viewed
from the perspective of the virtual being defined for user 18 together with all other
defined virtual beings and objects within its field of vision.” Id. at 3:50-54.
As explained by Dr. Zyda, Figure 5 (annotated Fig. 5 shown below) of
Durward illustrates these concepts. Ex. 1002, ¶ 90; see also Ex. 1008 at 4:43-5:22.
Server filtering is evident because the area for which virtual being 184 receives
position updates (virtual space 204) is smaller than the entire visual space (169).
Ex. 1002, ¶ 90. Client processing is evident because the virtual being 184’s field of
vision (represented by the dashed lines) is narrower than the area for which it
receives position updates. Id.
The argument and claim charts provided below further illustrate that
Durward discloses each and every element of claim 1 of the ’856 patent.
Assuming without conceding that the claim 1 preamble is limiting,
Durward discloses the preamble elements.
’856 Patent:
1. A method for enabling a first user to interact with second users in a virtual space, wherein the first user is associated with at [sic] first avatar and a first client process, the first client process being configured for communication with a server process, and each second user is associated with a different second avatar and a second client process configured for communication with the server process, at least one second client process per second user, the method comprising:

Durward:
“This invention relates to virtual reality systems and, more particularly, to a virtual reality network wherein multiple users at remote locations may telephone a central communications center and participate in a virtual reality experience.” 1:7-11
“The communications unit also receives data corresponding to the position, orientation, and/or movement of the user relative to a reference point and uses the data to define a virtual being within the virtual space, wherein the position, orientation, and/or movements of the virtual being are correlated to the received data” 1:59-64
“The system defines other virtual beings within the database in response to position, orientation, and/or movement data received from other users” 2:1-3
“Network 10 includes a central control unit 14 for communicating with a plurality of users.” 2:51-52; see also fig. 1
“The virtual being may take the form of another human being, an animal, machine, tool, inanimate object, etc., all of which may be visible or invisible with the virtual space.” 3:30-32
See also Ex. 1002, ¶¶ 91-93.
Durward, as exemplified by the cited portions above, discloses each element
of the preamble. Durward discloses interaction between users in a virtual space, as
specifically recited in claim 1 (e.g., “virtual reality network wherein multiple users
. . . participate in a virtual reality experience.”) Ex. 1008 at 1:7-11. Durward
discloses users “associated with...avatar[s]” as in claim 1, including describing that
the virtual reality system uses data received from each user to “define a virtual
being [or avatar] within the virtual space” for that user. Id. at 1:59-64; see also id.
at 2:1-3. Durward discloses a client-server architecture, such that each user/client
is “associated with...a client process,” and the client processes are in
communication with a server process, as recited in claim 1. See, e.g., id. at 2:51-52
& fig. 1; see also Ex. 1002, ¶¶ 91-93.
Durward discloses claim 1.1, or step (a) of claim 1:
’856 Patent:
[1.1] (a) receiving by the first client process from the server process received positions of selected second avatars; and

Durward:
See title of Durward: “VIRTUAL REALITY NETWORK WITH SELECTIVE DISTRIBUTION AND UPDATING OF DATA TO REDUCE BANDWIDTH REQUIREMENTS”
“The system defines other virtual beings within the database in response to position, orientation, and/or movement data received from other users, and the portions of the virtual space communicated to the other users may correspond to the perspectives of their associated virtual beings. The system periodically updates the database and communicates the updated portions of the virtual space to the users to reflect changes in the position of moving objects within the virtual space.” 2:1-9
“The number of users supported by network 10 is not limited to the two shown. Any number of users, even thousands, may be supported.” 2:63-65
“Each user may talk to and interact with other virtual beings and objects in the virtual space as the user desires, subject to constraints noted below when discussing the concepts of relevant spaces.” 3:54-57
“[V]isual relevant spaces determine which state changes are communicated to (or perceivable by) the users.” 4:54-56
“In the preferred embodiment which communicates only position, control and sound data to the users, those elements outside of a visual relevant space may be visible to the user, but any real-time or program controlled position/motion associated with the element is not processed for that user so that the element appears stationary in a fixed position, or else the element moves in accordance with a fixed script. The visual relevant space may be fixed as shown for virtual being 182. Alternatively, the user’s visual relevant space may be defined by the field of view of the virtual being and areas in close proximity to it (as with virtual being 184), in which case the visual relevant space may move about the virtual space as the perspective or position of the virtual being changes.” 5:5-18
“Virtual being 182 is assigned a visual relevant space 200, and virtual being 184 is assigned a visual relevant space 204. Virtual beings 182 and 184 may view only those elements (objects or states changes) which are disposed within their visual relevant spaces. For example, elements 194 and 196 and/or their motion may be visible to both virtual beings 182 and 184; element 197 and/or its motion may be visible only to virtual being 184; element 183 [which is an avatar] and/or its motion may be visible only to virtual being 182, and element 198 and/or its motion may be visible to neither virtual being 182 nor virtual being 184.” 4:61-5:5 & fig. 5
“At the same time central control unit 14 searches for other users in the same universe. If other users are found, central control unit 14 calculates the proximity of the other users (and their defined virtual objects) based on XYZ coordinate differentials. Once the coordinate differentials are calculated, locations of the other users and their defined virtual objects within and without the relevant and priority spaces may be determined and used to ascertain which position, motion and sound data is transmitted to which user in a step 314. In this manner position, motion and sound data is not transmitted to users that would not be able to use that data at that point in time.” 8:46-58.
See also Ex. 1002, ¶¶ 94-102.
As exemplified by the cited portions above, Durward discloses a client
receiving a position of selected avatars from the server process, as specifically
recited in claim 1. Avatars are associated with position data: “[t]he system defines
other virtual beings within the database in response to position, orientation, and/or
movement data received from other users.” Ex. 1008 at 2:1-3; see also Ex. 1002,
¶ 94. That position data is sent to the other users: “[t]he system periodically
updates the database and communicates the updated portions of the virtual space to
the users to reflect changes in the position of moving objects within the virtual
space.” Ex. 1008 at 2:5-9.
A given client, however, only receives position data for other users’ avatars
within its “visual relevant space.” Ex. 1002, ¶ 95. That is, “visual relevant spaces
determine which state changes are communicated to . . . the users.” Ex. 1008 at
4:54-56. “Each user may talk to and interact with other virtual beings and objects
in the virtual space as the user desires, subject to [the] constraints . . . of relevant
spaces.” Id. at 3:54-57. By using the “locations of the other users and their
defined virtual objects within and without the relevant . . . spaces” to “ascertain
which position, motion and sound data is transmitted to which user[,]” the system
disclosed by Durward ensures that “position, motion and sound data is not
transmitted to users that would not be able to use that data at that point in time.”
Id. at 8:46-58. This server filtering is reflected in the title of Durward as “selective
distribution.” Id. at Title. Thus, Durward’s disclosure regarding visual relevant
spaces and server-based filtering satisfies the claim 1 limitation regarding “selected
second avatars” and corresponding receiving. Ex. 1002, ¶¶ 94-101.
Moreover, as described by Dr. Zyda, Durward Figure 5 (annotated Fig. 5
shown here—see also above) illustrates
aspects of selective distribution (e.g., less
than all user position information). Ex.
1002, ¶ 102. Element 183 is a virtual
being. Ex. 1008 at 4:44-48. Virtual being
184 does not receive position information
regarding virtual being 183, because virtual being 183 is not within virtual being
184’s visual relevant space (labeled 204). Ex. 1002, ¶ 102; see also Ex. 1008 at
4:61-5:5. Therefore, virtual being 184 receives a position of less than all of the
other users’ avatars from the server. Ex. 1002, ¶ 102. Accordingly, Durward
discloses receiving a position of selected avatars from the server process, as recited
in claim 1.1 or step (a) of claim 1. Ex. 1002, ¶¶ 94-102.
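For illustration only, the following Python sketch (not code from Durward, the ’856 patent, or the record; all names such as Avatar, in_relevant_space, and updates_for_user are hypothetical) shows one way a server process could perform the relevant-space filtering described above, transmitting to each user the positions of only those other avatars that fall within that user’s visual relevant space.

# Illustrative sketch only: not code from Durward or the '856 patent; all names
# (Avatar, in_relevant_space, updates_for_user, radius) are hypothetical.
from dataclasses import dataclass

@dataclass
class Avatar:
    user_id: int
    position: tuple  # (x, y, z) coordinates within the virtual space

def in_relevant_space(position, center, radius):
    # Proximity test standing in for Durward's "XYZ coordinate differentials";
    # here the visual relevant space is modeled as a sphere of the given radius.
    dx, dy, dz = (p - c for p, c in zip(position, center))
    return dx * dx + dy * dy + dz * dz <= radius * radius

def updates_for_user(user, all_avatars, radius):
    # The server sends position data only for other users' avatars that fall
    # within this user's visual relevant space ("selective distribution").
    return [a for a in all_avatars
            if a.user_id != user.user_id
            and in_relevant_space(a.position, user.position, radius)]

In such a sketch, avatars outside the relevant space are simply never included in the transmitted list, which parallels the passage quoted above stating that “position, motion and sound data is not transmitted to users that would not be able to use that data at that point in time.”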
Durward discloses claim 1.2, or step (b) of claim 1:
’856 Patent Durward
[1.2] (b) determining, from the received positions, a set of the second avatars that are to be displayed to the first user,
“In the preferred embodiment, head mounted display 46
displays the portion of the virtual space viewed from the
perspective of the virtual being defined for user 18 together
with all other defined virtual beings and objects within its field
of vision.” 3:50-54
“In the preferred embodiment, however, graphics data per se is
not communicated on an interactive basis. Instead, each user's
computer has a copy of the entire virtual space (e.g.,
background, objects and primitives), and the data defining the
virtual space communicated to the users comprises only
position, motion, control, and sound data. After initial
position, motion, control and sound data is communicated to
the users, only changes in the position, motion, control and
sound data is communicated thereafter. This dramatically
reduces bandwidth requirements and allows the system to
operate with many concurrent users without sacrificing real-time
realism.” 4:17-29
“The visual relevant space may be fixed as shown for virtual
being 182. Alternatively, the user's visual relevant space may
be defined by the field of view of the virtual being and areas in
close proximity to it (as with virtual being 184), in which case
the visual relevant space may move about the virtual space as
the perspective or position of the virtual being changes. Visual
relevant spaces need not be contiguous and need not have a
direct spatial relationship to the virtual space.” 5:12-20
“As noted above, in the preferred embodiment, each user has a
copy of the selected virtual space in his or her computer, and
processor 100 periodically sends only the positional and sound
data assigned to points within the user's relevant space or field
of view to the user so that the user's computer may update the
images viewed and sounds heard with the new positional and
sound data.” 6:55-62.
See also Ex. 1002, ¶¶ 103-107.
As exemplified by the cited portions above, Durward discloses a client
determining, from the received positions, a set of the second avatars that are to be
displayed to the first user.
First, Durward discloses that “each user’s computer has a copy of the
entire virtual space[.]” Ex. 1008 at 4:17-29; see also id. at 6:55-62. As discussed
above with respect to claim 1.1, or step (a) of claim 1, Durward discloses that a
client receives, from the server, position information for avatars within the client’s
visual relevant space, which may be broader than the client’s field of view (e.g., as
shown in Fig. 5). For example, Durward describes that “the user's visual relevant
space may be defined by the field of view of the virtual being and areas in close
proximity to it[.]” Id. at 5:12-20 (emphasis added); see also Ex. 1002, ¶¶ 104-105.
Then, after receipt of the position information from the server, the client
determines a set of other users’ avatars to be displayed to the first user, by
identifying which of the received positions of the “other defined virtual beings” are
within the “field of vision” of the user. Ex. 1008 at 3:50-54; see also Ex. 1002,
¶ 106.
Again, as described by Dr. Zyda, this client processing is apparent at least
from Figure 5 (annotated Fig. 5 shown
here—see also above) of Durward.
Ex. 1002, ¶ 107. Virtual being 184’s
visual relevant space 204 follows but is
broader than virtual being 184’s field of
vision (represented by dashed lines). Id.
Accordingly, the client associated with virtual being 184 must determine the subset
of avatars, among the larger set of avatars in its visual relevant space, that are
within its field of vision to display those avatars to the user. Id.
Accordingly, Durward discloses determining, from the received positions, a
set of the second avatars that are to be displayed to the first user, as recited in claim
1.2 or step (b) of claim 1. Ex. 1002, ¶¶ 103-107.
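Again purely for illustration (hypothetical names; not code from Durward or the record), the following Python sketch shows the client-side step argued above: from the positions received for avatars within the visual relevant space, the client determines the subset that falls within the user’s field of vision and is therefore displayed.

# Illustrative sketch only: hypothetical names, not code from Durward or the record.
import math

def within_field_of_view(viewer_pos, viewer_dir, target_pos, half_angle_deg):
    # True if target_pos lies inside a viewing cone centered on viewer_dir
    # (viewer_dir is assumed to be a unit-length direction vector).
    vx, vy, vz = (t - p for t, p in zip(target_pos, viewer_pos))
    mag = math.sqrt(vx * vx + vy * vy + vz * vz)
    if mag == 0.0:
        return True  # the target is at the viewer's own position
    cos_angle = (vx * viewer_dir[0] + vy * viewer_dir[1] + vz * viewer_dir[2]) / mag
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def avatars_to_display(received_positions, viewer_pos, viewer_dir, half_angle_deg=45.0):
    # From the positions received from the server (avatars in the visual relevant
    # space), keep only those within the user's field of vision for display.
    return {uid: pos for uid, pos in received_positions.items()
            if within_field_of_view(viewer_pos, viewer_dir, pos, half_angle_deg)}

Under these assumptions, a client would call avatars_to_display() on the most recently received positions each time it renders a frame, displaying only the returned subset; that selection step corresponds to determining “a set of the second avatars that are to be displayed to the first user.”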
Durward discloses limitation 1.3 of claim 1:
’856 Patent Durward
[1.3] wherein the first client process receives positions of fewer than all of the second avatars.
Title of Durward, 2:1-9, 2:63-65, 3:54-57, 4:54-56, 4:61-5:18,
8:46-58, fig. 5
See Ex. 1002, ¶¶ 108-109; see also discussion of claim 1.1, or
step (a) of claim 1.
As discussed above with respect to claim 1.1, or step (a) of claim 1, Durward
discloses that the client process receives positions of fewer than all of the other
avatars.
For example, as discussed above with respect to claim 1.1, or step (a) of
claim 1, the client process receives a position of less than all of the other users’
avatars from the server process, because Durward discloses that “motion and sound
data is not transmitted to users that would not be able to use that data at that point
in time.” Ex. 1008 at 8:55-58. To the same point, Durward discloses that “each
user has a copy of the selected virtual space in his or her computer, and processor
100 periodically sends only the positional and sound data assigned to points within
the user's relevant space or field of view to the user . . . .” Id. at 6:55-62.
Accordingly, Durward discloses that the client process receives positions of
fewer than all of the other avatars, as required by claim 1.3. Ex. 1002, ¶¶ 108-109.
As Durward discloses each and every requirement of claim 1, Durward anticipates
claim 1. Ex. 1002, ¶ 110.
VIII. Conclusion
For the reasons set forth above, claim 1 of the ’856 patent is unpatentable.
Petitioner therefore requests that an inter partes review of this claim be instituted.
Respectfully submitted,
Dated: May 26, 2015 /Michael T. Rosato/
Michael T. Rosato, Lead Counsel
Reg. No. 52,182
IX. Payment of Fees under 37 C.F.R. §§ 42.15(a) and 42.103
The required fees are submitted herewith. If any additional fees are due at
any time during this proceeding, the Office is authorized to charge such fees to
Deposit Account No. 23-2415.
X. Appendix – List of Exhibits
Exhibit No. Description
1001 U.S. Patent No. 7,945,856 to Leahy et al.
1002 Declaration of Michael Zyda, D.Sc.
1003 Michael Zyda, curriculum vitae
1004 File History of 12/353,218 to Leahy et al.
1005 Thomas A. Funkhouser, RING: A Client-Server System for Multi-User
Virtual Environments, Symposium on Interactive 3D
Graphics, 1995, at 85-92 and 209.
1006 1995 Symposium on Interactive 3D Graphics, The Association for
Computing Machinery, Inc. (1995).
1007 Date Stamped Copy of the Excerpt of the Proceedings of the 1995
Symposium on Interactive 3D Graphics, The Association for
Computing Machinery, Inc. (1995).
1008 U.S. Patent No. 5,659,691 to Durward
1009 Worlds’ Opening Claim Construction Brief, Worlds, Inc. v.
Activision Blizzard, Inc. et al., No. 1:12-cv-10576-DJC (D.
Mass. Apr. 22, 2013).
1010 Excerpt, Microsoft Press Computer Dictionary (3rd Ed. 1997).
1011 Ivan Sutherland Photo Essay – A.M. Turing Award Winner,
http://amturing.acm.org/photo/sutherland_3467412.cfm#photo_
2 (last visited May 25, 2015).
1012 DigiBarn Games: Maze War Retrospective,
http://www.digibarn.com/collections/games/xerox-maze-war/
(last visited May 22, 2015).
1013 U.S. Patent No. 4,521,014 to Sitrick
1014 Douglas B. Smith, et al., An Inexpensive Real-Time Interactive
Three-Dimensional Flight Simulation System (1987).
1015 Michael Zyda et al., NPSNET: A 3D Visual Simulator for Virtual
World Exploration and Experience (1991).
1016 James O. Calvin, et al., STOW Realtime Information Transfer and
Networking System Architecture (1995).
1017 Thomas A. Funkhouser, Adaptive Display Algorithm for
Interactive Frame Rates During Visualization of Complex
Virtual Environments (1993).
1018 Symposium, Computer Graphics Proceedings, The Association for
Computing Machinery, Inc. (1993).
1019 U.S. Patent No. 5,777,621 to Schneider
CERTIFICATE OF SERVICE
Pursuant to 37 C.F.R. §§ 42.6(e) and 42.105(a), this is to certify that I
caused to be served a true and correct copy of the foregoing Petition for Inter
Partes Review of U.S. Patent No. 7,945,856 (and accompanying Exhibits 1001-
1019) by overnight courier (Federal Express or UPS), on this 26th day of May,
2015, on the Patent Owner at the correspondence address of the Patent Owner as
follows:
WORLDS INC.
11 Royal Road
Brookline, MA 02445
Joel R. Leeman
SUNSTEIN KANN MURPHY
& TIMBERS LLP
125 Summer Street
Boston, MA 02110-1618
Irving Rothstein
FEDER KASZOVITZ LLP
845 Third Avenue
New York, NY 10022-6601
Max L. Tribble
Brian D. Melton
Chanler A. Langham
Joseph S. Grinstein
Sandeep Seth
Ryan Caughey
SUSMAN GODFREY L.L.P.
1000 Louisiana, Suite 5100
Houston, TX 77002-5096
Respectfully submitted,
Dated: May 26, 2015 /Michael T. Rosato/
Michael T. Rosato, Lead Counsel
Reg. No. 52,182