Yes of course it is clearly for sale!
IMO MDT could buy, but would pay less than Vance wants, which is still more than you hope for!
If J&J moves, then everything changes because the stakes go up...
Are you looking for short friends to cover you?
The Company has not set a timetable for completion of the process and does not intend to provide any updates on developments unless and until the Company executes a definitive agreement with respect thereto, or the Board otherwise determines that an update is appropriate or required.
While the Company intends to evaluate all options fairly to maximize shareholder value, there can be no assurance that the strategic review process will result in any transaction, or if a transaction is undertaken, as to its terms or timing.
IDE: Investigational Device Exemption
As of December 6, 2022, the Company has withdrawn all forward-looking statements included in its continuous disclosure documents and other documents filed on SEDAR or EDGAR with respect to the cost and timing of the development of its Enos surgical system. The timeline shown above is only for informational purposes as previously disclosed prior to December 6, 2022, detailing the sequence of events that would likely need to be completed for a commercial launch of the Enos surgical system, and does not depict the actual target dates of completion of the indicated milestones. As of December 6, 2022, the Company has limited its work to tasks related to (i) the strategic review process announced on November 30, 2022, (ii) the IDE filing, and (iii) fulfilling certain other contractual development and supply obligations.
Three-armed Enos is a gem!
I think you have more shares than me!
All this pantomime just to rake in shares!
“We are looking forward to meeting with strategic buyers and investors during the conference week and providing a future view into what a three-arm single access robotic-assisted surgery technology could look like. With the time it takes to bring complex products to market in the regulated medical device industry, you must continuously envision the next generation. Our focus on purpose-driven innovation has always been at the forefront of our work. We are excited to share our three-arm design that was independently created by Titan and builds upon technologies developed for our two-arm system and those generated under a previously executed and completed development and license agreement, and addresses evolving customer needs in single-access robotic surgery.”
Very good!
Very interesting, this next generation!
Participation is now official!
The technology being sold is covered by the new patents... understood, Livendi!
Let's hope for some amazement at Medtronic!
https://titanmedicalinc.com/
I like it!
3 arms
As we pack our suitcases for San Francisco, we will tuck in our current technology in single access robotic-assisted surgery! Looking forward to seeing you there!
I like it!
Go TMDI!
A truce for only 7 and a little more; may it be the prelude to peace!
It is not possible that there is no other way than war!
With war, almost everyone loses...
strategic review
They're building the fireworks!
It will be a show!
Come on, dream a little!
pick our rose of love
He who despises, buys!
IMO the longer you wait, the more expensive it will be
Time to get back in?
Kaboom
Bill, don't make the same mistake twice
What's the point of RBOT?
Will Acquire Controlling Interest in Continent, a China-Based Commercial Pharma Company, from the GNI Group in Subsequent Transaction
Announces $7.5 Million Special Dividend and Contingent Value Right (CVR)
On December 26, 2022, Catalyst Biosciences, Inc., a Delaware corporation (“Catalyst”), acquired the F351 Assets (as defined below) from GNI Group Ltd., a company incorporated under the laws of Japan with limited liability (“GNI Japan”), and GNI Hong Kong Limited, a company incorporated under the laws of Hong Kong with limited liability (“GNI Hong Kong” and, together with GNI Japan, the “Sellers”), pursuant to that certain Asset Purchase Agreement, dated December 26, 2022 (the “F351 Agreement”), by and among Catalyst and the Sellers. Pursuant to the F351 Agreement, Catalyst acquired all of the assets and intellectual property rights primarily related to the Sellers’ proprietary Hydronidone compound (collectively, the “F351 Assets”), other than such assets and intellectual property rights located in the People’s Republic of China. The F351 Assets include 15 issued or pending patents and patent applications outside of the People’s Republic of China, with the last acquired issued patent expected to expire in August 2037.
Under the terms of the F351 Agreement and upon the effective time of the transactions contemplated by the F351 Agreement (the “F351 Effective Time”), Catalyst paid the Sellers $35,000,000 in the form of: 6,266,521 shares of Catalyst common stock, par value $0.001 per share (the “Catalyst Common Stock”); and 12,340 shares of Catalyst Series X Convertible Preferred Stock, par value $0.001 per share (the “Catalyst Convertible Preferred Stock” and, collectively with the Catalyst Common Stock issued pursuant to the F351 Agreement, the “Catalyst F351 Securities”).
Each of Catalyst and the Sellers has agreed to customary representations, warranties and covenants in the F351 Agreement, including, among others, covenants relating to (1) Catalyst filing with the U.S. Securities and Exchange Commission (the “SEC”) and causing to become effective a registration statement (the “Registration Statement”) to register (a) the shares of Catalyst Common Stock issued pursuant to the F351 Agreement, and (b) the shares of Catalyst Common Stock reserved for issuance upon conversion of the Catalyst Convertible Preferred Stock, (2) Catalyst using reasonable best efforts to maintain the existing listing of the Catalyst Common Stock on The Nasdaq Stock Market LLC (“Nasdaq”) and Catalyst causing the (a) shares of Catalyst Common Stock issued in connection with the F351 Agreement and (b) the shares of Catalyst Common Stock reserved for issuance upon conversion of the Catalyst Convertible Preferred Stock, to be approved for listing on Nasdaq, and (3) the Sellers assuming and paying, discharging, performing or otherwise satisfying the liabilities and obligations of any kind and nature relating to the Purchased Contracts (as defined in the F351 Agreement).
I called Hassen
Hahahahahahahaha
do not ask me why… I don't know
maybe I was a little out of tune
https://siuj.org/index.php/siuj/article/download/139/73/
But now:
Haptic Feedback YES
AI YES
3 ARMS YES
miniaturized 3D high-definition camera (along with its integrated light source) YES
1 HOLE The Enos RAS System YES
Instinctive movement and triangulation of the end-effectors are achieved with snake-like, multi-articulating instruments and a primary steerable 3D HD endoscope delivered through a single point of entry.
The engineering team has also designed a more compact and significantly lighter patient cart, with a reduced footprint designed to optimize portability, minimize set-up time and facilitate unencumbered assistance of surgical staff at the patient bedside. Additionally, they have improved the surgeon interface at the workstation with a more comfortable handle design, a new 4K monitor and upgraded haptic feedback with image overlays to assist the surgeon with the positioning of the instruments for optimal performance.
Huge market: $42 billion by 2027
Enos gold mine
Cambridge Design Benchmark Hassen
IDE
Merry Christmas to all good, bad and ugly!
Who knows who's good or bad?
I know who's ugly... LOL
BOOM BOOM!
HAND CONTROLLER FOR ROBOTIC SURGERY SYSTEM
DOCUMENT ID: US 20220401162 A1
DATE PUBLISHED: 2022-12-22
INVENTOR: Unsworth; John D. (Hamilton, CA)
APPLICANT (assignee): Titan Medical Inc. (Toronto, CA)
APPLICATION NO: 17/813898
DATE FILED: 2022-07-20
DOMESTIC PRIORITY (CONTINUITY DATA)
US 17/813898 is a continuation of US 16/913809 (filed 2020-06-26, pending), which continues US 16/455192 (filed 2019-06-27, granted as US 10,695,139), which continues US 16/160200 (filed 2018-10-15, granted as US 10,357,319), which continues US 15/490098 (filed 2017-04-18, granted as US 10,130,434), which continues US 15/211295 (filed 2016-07-15, granted as US 9,681,922), which continues US 14/831045 (filed 2015-08-20, granted as US 9,421,068), which continues US 14/302723 (filed 2014-06-12, granted as US 9,149,339), which continues US 12/449779 (filed 2009-08-25, granted as US 8,792,688), itself a continuation of PCT/CA2008/000392 (filed 2008-02-29).
Priority is also claimed from US provisional applications 60/904187 (2007-03-01), 60/921467 (2007-04-03), 60/907723 (2007-04-13), 60/933948 (2007-06-11), 60/937987 (2007-07-02), and 61/001756 (2007-11-05).
US CLASS CURRENT: 1/1
CPC CURRENT
CPCI (inventive): A61B 90/361 (2016-02-01); A61B 34/30 (2016-02-01); A61B 34/25 (2016-02-01); G06F 3/016 (2013-01-01); A61B 34/37 (2016-02-01); H05K 999/99 (2013-01-01); A61B 34/74 (2016-02-01); G06F 3/0308 (2013-01-01); A61B 34/20 (2016-02-01); A61B 34/76 (2016-02-01); A61B 34/10 (2016-02-01); G01D 5/262 (2013-01-01); G06T 11/00 (2013-01-01); A61B 90/06 (2016-02-01); A61B 34/70 (2016-02-01); G06F 3/0325 (2013-01-01)
CPCA (additional): A61B 2017/00703 (2013-01-01); A61B 2017/00207 (2013-01-01); A61B 2034/2051 (2016-02-01); A61B 2034/2055 (2016-02-01); A61B 2090/062 (2016-02-01); A61B 2090/365 (2016-02-01); A61B 90/36 (2016-02-01); A61B 2034/107 (2016-02-01); A61B 2034/2068 (2016-02-01)
Abstract
A robotic control system has a wand that emits multiple narrow beams of light. The beams fall on a light sensor array or, with a camera, on a surface, defining the wand's changing position and attitude, which a computer uses to direct the relative motion of robotic tools or remote processes (such as those controlled by a mouse, but in three dimensions). The system also includes motion compensation means and means for reducing latency.
Background/Summary
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
[0001] Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
BACKGROUND OF THE INVENTION
Field of the Invention
[0002] This invention relates to operator interfaces for controlling robots and remote processes, including pointing devices, such as a mouse. It also relates to methods and systems for controlling remote processes.
Description of the Related Art
[0003] Real-time operator control of robots has been accomplished with electro-mechanical controls such as joysticks and multiple-axis hand grips. These devices suffer from a limited range of motion, being constrained by the geometry of the control device. In other applications, such as surgery, the hand and finger motions used to operate the device do not closely approximate those the operator would use in conducting the operation by hand. This requires the surgeon to use a different repertoire of hand motions for the robot control than he would for conducting the operation by hand. Other devices, such as a glove actuator, while more closely approximating the actual motion of the hand, suffer from a lack of accuracy regarding the motion of the instrument the hand and fingers grasp; and it is the working end of the instrument that is mimicked by the robot's tools that do the work. Other interfaces have been developed that rely on multiple cameras to record the motion of the operator's hands, with or without faux instruments, but these can also suffer from a lack of accuracy.
[0004] These devices suffer from latency, especially when the operator is separated from the worksite by sufficient distances that there is a significant delay in transmission.
[0005] It is an object of some aspects of the invention to address one or more of the above existing concerns. Other concerns may also be addressed in those aspects, or separately in other aspects of the invention, as will be evident from the remainder of this specification.
SUMMARY OF THE INVENTION
[0006] In a first aspect the invention provides a method comprising the steps of actively generating an image pattern on a surface of a first object, detecting the image pattern on the surface of the first object, wherein either the step of actively generating or the step of detecting is performed at a second object spaced away from the first object, and determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern and utilizing reference data for actively generating the image pattern.
[0007] The method may further comprise the step of actively displaying on the first surface an image of a remote process that is controlled in accordance with the determined parameters of the pose of the second object.
[0008] The step of actively generating may comprise the step of projecting a known image pattern to actively generate the image pattern on the surface of the first object, wherein the step of projecting is from either the second object if the step of actively generating is performed at the second object or a first location other than the second object and the first object if the step of detecting is performed at the second object.
[0009] The step of projecting may comprise projecting the image pattern from the second object. The step of detecting may comprise detecting the image pattern at the surface of the first object. The step of projecting may comprise projecting the image pattern from the first location. The step of detecting further comprises detecting the image pattern from a second location other than the first object and the second object.
[0010] The method may further comprise the step of maintaining the first object in a known pose during the steps of projecting and detecting. The method may further comprise the step of maintaining the second object in a known pose during the steps of projecting and detecting.
[0011] The surface of the first object may be substantially planar.
[0012] The method may further comprise the step of detecting movement of the detected pattern, and the step of determining parameters of the pose of the second object comprises determining movement of parameters of the pose of the second object from the detected movement of the detected pattern.
[0013] The method may further comprise the step of detecting linear movement of the second object parallel to the surface by detecting motion against texturing on the surface.
[0014] The step of projecting may further comprise projecting the image pattern such that the image pattern is asymmetrical about an axis of rotation inline with a direction of projection of the image pattern. The step of projecting may further comprise projecting the image pattern such that the size of the image pattern varies continuously with distance from the first object inline with a direction of projection of the image pattern.
[0015] The step of actively generating the image pattern may include actively generating elements of the image pattern over time, and the step of detecting includes detecting elements of the formed image pattern in synchronization with actively generating the image elements.
[0016] The steps of actively generating and detecting may comprise, respectively, actively generating the image pattern on a surface which forms a three dimensional cavity with access for the second object through an opening in the first object, and detecting the image pattern formed on such surface.
[0017] The surface may comprise a plurality of substantially planar sub-surfaces. The step of projecting further comprises projecting the image pattern as a combination of three or more spot beams of known relationship. The step of actively generating may further comprise actively generating the image pattern as a combination of three or more spot beams of known relationship.
[0018] The step of projecting may comprise projecting the image pattern with image pattern elements directed at a plurality of angles about an axis of the second object. The method may further comprise the step of user imparting movement of the second object.
[0019] The step of projecting may further comprise projecting encoded information, other than pose-related information, in an image pattern projected from the second object.
[0020] The step of determining an element of the pose of the second object may further comprise determining a distance from the image pattern on the surface of the first object to a reference point on the second object based upon the size of the detected image pattern.
[0021] In a second aspect the invention provides a method of controlling instruments of a surgical robot in use on a heart, the method comprising the steps of receiving a signal that a heart is about to contract, and initiating movement of the surgical robot instruments so as to accommodate movement of the heart in the vicinity of the instruments during contraction as movement of the heart occurs.
[0022] The step of receiving may further comprise receiving a signal related to an anticipated nature of the contraction, and the step of initiating further comprises utilizing the anticipated nature of the contraction from the signal to control the accommodation. The method may comprise the steps of detecting a contour of movement of a heart by projecting an image pattern onto a surface of the heart in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the heart, and determining movement of the heart based on a transformation of the detected image pattern from reference image pattern data, and moving the surgical robot instruments so as to accommodate the contour of movement of the heart in the vicinity of the instrument, so that operator-intended motions can be carried out from this normalized position.
[0023] In a third aspect the invention provides a method of controlling an instrument of a surgical robot comprising the steps of detecting a contour of movement of a heart by projecting an image pattern onto a surface of the heart in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the heart, and determining movement of the heart based on a transformation of the detected image pattern from reference image pattern data, and moving the surgical robot instruments so as to accommodate the contour of movement of the heart in the vicinity of the instrument, so that operator-intended motions can be carried out from this normalized position.
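As a rough illustration of these heart-accommodation aspects, here is a minimal sketch (all function names, the mean-shift motion proxy, and the anticipatory-offset model are illustrative assumptions, not the patent's prescribed implementation):

```python
import numpy as np

def estimate_surface_shift(ref_spots: np.ndarray, spots: np.ndarray) -> np.ndarray:
    """Estimate the drift of a projected spot pattern on the heart surface.

    ref_spots, spots: (N, 3) arrays of detected spot centers expressed in
    the camera/tool frame. The mean shift of the pattern is used here as a
    simple proxy for local surface motion.
    """
    return spots.mean(axis=0) - ref_spots.mean(axis=0)

def accommodate(instrument_pos: np.ndarray,
                ref_spots: np.ndarray,
                spots: np.ndarray,
                contraction_imminent: bool,
                anticipatory_offset: np.ndarray) -> np.ndarray:
    """Return an updated instrument position that tracks the surface.

    If a contraction signal has been received, an anticipatory offset
    (e.g., learned from previous beats) is applied in addition to the
    measured shift, so the tool starts moving as the heart moves.
    """
    pos = instrument_pos + estimate_surface_shift(ref_spots, spots)
    if contraction_imminent:
        pos = pos + anticipatory_offset
    return pos
```

In practice the transformation recovered from the projected pattern would be richer than a mean shift, but the control flow (track the pattern, offset the tool, lead the contraction on a trigger) is the same idea.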
[0024] In a fourth aspect the invention provides a robot system comprising a robot including and controlling an instrument, controls for an operator to control the robot to operate the instrument, a controller for determining quantified information related to motion of the instrument, and a display for displaying the information from the controller to an operator of the robot during use of the robot.
[0025] In a fifth aspect the invention provides a method of conveying information regarding the latency between motion of a controller and motion of an instrument in a remote process controlled by the controller, the method comprising displaying to an operator of the controller an image of the instrument and at least a portion of the remote process surrounding the instrument in a direction of motion of the instrument, and overlaying on the displayed image, an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
[0026] In a sixth aspect the invention provides a method of conveying information regarding the latency between motion of a controller of a surgical robot and motion of an instrument of the surgical robot controlled by the controller, the method comprising displaying on a display visible to an operator of the controller, an image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and overlaying on the displayed image, an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
[0027] In a seventh aspect the invention provides a method of controlling latency between motion of a controller and motion of the instrument in a remote process controlled by the controller, the method comprising the steps of acquiring an original image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and displaying the original image to an operator of the controller, acquiring an instruction from the controller to move the instrument to an instructed pose relative to the original image, transmitting the instruction and information to identify the original image to the remote process, acquiring an updated image of the remote process, performing pattern recognition at the remote process on the image identified by the transmitted information and the updated image to determine a desired pose of the instrument relative to the updated image that corresponds to the instructed pose on the original image, and moving the instrument to the desired pose.
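A toy sketch of the latency-control loop in this seventh aspect (the patent does not specify the pattern-recognition method; normalized cross-correlation via OpenCV is used here purely as a stand-in, and all names are illustrative):

```python
import numpy as np
import cv2

def reproject_instruction(original: np.ndarray,
                          updated: np.ndarray,
                          target_xy: tuple[int, int],
                          patch: int = 32) -> tuple[int, int]:
    """Map a pose instruction given on a stale frame onto the current frame.

    The operator steered toward target_xy on `original` (a grayscale frame).
    By the time the instruction arrives at the remote process, the scene has
    moved; locate the patch around the instructed point in `updated` and
    return the corrected target, so the instrument is driven toward the
    intended anatomy rather than toward stale pixel coordinates.
    """
    x, y = target_xy
    template = original[y - patch:y + patch, x - patch:x + patch]
    scores = cv2.matchTemplate(updated, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)     # top-left of best match
    return best[0] + patch, best[1] + patch   # back to the patch center
```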
[0028] In an eighth aspect the invention provides a method comprising the steps of actively displaying on a surface of a first object an image of a remote process that is controlled in accordance with parameters of the pose of a second object spaced away from the first object, detecting an image pattern on the surface of the first object, wherein either the image pattern is actively generated from the second object or the image pattern is detected at the second object, determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern and utilizing reference data for the image pattern, and controlling the remote process in accordance with the determined parameters of the pose of the second object.
[0029] In a ninth aspect the invention provides a method comprising the steps of projecting a known image pattern on to a surface of a first object, wherein the step of projecting is from either a second object or a first location other than the second object and the first object, and the first object, second object and first location are at a distance from one another, detecting the image pattern formed on the surface of the first object, wherein if the step of projecting is from the second object then the step of detecting is from either the first object, second object or a second location other than the first and the second object, and if the step of projecting is from the first location then the step of detecting is from the second object, and determining parameters of the pose of the second object utilizing the detected image pattern and reference image pattern data for the known pattern.
[0030] In a tenth aspect the invention provides a method of controlling an instrument of a robot comprising the steps of detecting a contour of movement of an object being worked by the instrument, projecting an image pattern on to a surface of the object in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the object, and determining movement of the object based on a transformation of the detected image pattern from reference image pattern data, and moving the robot instruments so as to accommodate the contour of movement of the object in the vicinity of the instrument, so that operator intended motions can be carried out from this normalized position.
[0031] In an eleventh aspect the invention provides an input interface comprising a pattern generator for actively generating an image pattern on a surface of a first object, a detector for detecting the image pattern on the surface of the first object, wherein the pattern generator or the detector is at a second object spaced away from the first object, and a computer for determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern from the detector and utilizing reference data for actively generating the image pattern.
[0032] In a twelfth aspect the invention provides a system comprising a surgical robot including an instrument controlled by the robot, a computer for receiving a signal that a heart being operated on by the instrument is about to contract, and generating instructions to the robot to initiate movement of the surgical robot instrument so as to accommodate movement of the heart in the vicinity of the instruments during contraction as movement of the heart occurs.
[0033] In a thirteenth aspect the invention provides a robot system comprising a robot including and controlling an instrument, controls for an operator to control the robot to operate the instrument, a controller for determining quantified information related to motion of the instrument, and a display for displaying the information from the controller to an operator of the robot during use of the robot.
[0034] In a fourteenth aspect the invention provides a system for conveying information regarding the latency between motion of a controller and motion of an instrument in a remote process controlled by the controller, the system comprising a computer and a display for displaying to an operator of the controller an image of the instrument and at least a portion of the remote process surrounding the instrument in a direction of motion of the instrument, and for overlaying on the displayed image an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
[0035] In a fifteenth aspect the invention provides a system for conveying information regarding the latency between motion of a controller of a surgical robot and motion of an instrument of the surgical robot controlled by the controller, the system comprising a computer and a display for displaying, on a display visible to an operator of the controller, an image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and for overlaying on the displayed image an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
[0036] In a sixteenth aspect the invention provides a system for controlling latency between motion of a controller and motion of the instrument in a remote process controlled by the controller, the system comprising a camera for acquiring an original image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and a display for displaying the original image to an operator of the controller, acquiring an instruction from the controller to move the instrument to an instructed pose relative to the original image, and transmitting the instruction and information to identify the original image to the remote process, wherein the camera is also for acquiring an updated image of the remote process, a computer for performing pattern recognition at the remote process on the image identified by the transmitted information and the updated image to determine a desired pose of the instrument relative to the updated image that corresponds to the instructed pose on the original image, and instructing the remote process to move the instrument to the desired pose.
[0037] In a seventeenth aspect, the invention provides a computer readable medium storing program instructions executable by one or more processors in one or more computers for causing the computers to implement the method of any one of the method aspects.
[0038] Other aspects of the present invention and detailed additional features of the above aspects will be evident based upon the detailed description, FIGS. and claims herein, including for example systems corresponding to the methods of the above aspects, methods corresponding to the systems of the above aspects, input interfaces, wands, robots, computing systems, alignment systems, software, methods of using the above, and the like.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0039] For a better understanding of the present invention and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the accompanying drawings that show the preferred embodiment of the present invention and in which:
[0040] FIG. 1 is a perspective view of portions of an input interface including a first object, an open sided box having a surface (light sensor array), and a second object (a wand) projecting an image pattern (light beams) to actively generate an image pattern on the surface (spots of light) which is detected by the light sensor in accordance with various example embodiments of aspects of the present invention.
[0041] FIG. 2 is a perspective view of portions of an alternative input interface including a buckyball shaped sensor array in accordance with various example embodiments of aspects of the present invention.
[0042] FIG. 3 is a perspective view of additional portions of an input interface, utilizing, for example, the input interface of FIG. 1, and including transmission means from the sensor array to computer, and a three dimensional viewer including superimposed force feedback information on top of a three dimensional image of a work space in accordance with various example embodiments of aspects of the present invention.
[0043] FIG. 4 and FIG. 5 are perspective views of details of two examples of force feedback information for the input interface of FIG. 3.
[0044] FIG. 6 is a perspective view and block view of various elements of a robotic control system, including the input interface of FIG. 1, in accordance with various embodiments of aspects of the present invention.
[0045] FIG. 6a is an example of an alternative input interface which uses only a single panel to form the sensor array.
[0046] FIG. 6a1 is a perspective view of a further alternative user interface, similar to that illustrated in FIG. 6a, except that the sensor array is comprised of two panels, at an angle relative to each other, known to a computer.
[0047] FIG. 6a2 is a perspective view of another alternative user interface, similar to that illustrated in FIG. 6a, except that the camera is located in a stationary position above the surface.
[0048] FIG. 6b is a block diagram illustrating another further alternate user interface in which a lens is included and which tracks the spots projected onto a surface and transmits the information wirelessly to the controller/encoder and/or the computer.
[0049] FIG. 6c is a cross-sectional, perspective view of an example embodiment of a faux instrument wand which includes a lens.
[0050] FIG. 7 is a cross-sectional, perspective view of an example embodiment of a wand, including rechargeable battery and controller/encoder, various example controls, and light emitter cluster, which houses the light emitters.
[0051] FIG. 8 is a cross-sectional, perspective view of a faux forceps wand.
[0052] FIG. 8a is a cross-sectional, perspective view of an example embodiment of a wand similar to FIG. 7, but instead of multiple fixed emitters, there is one emitter, the beam of which is redirected by a mirror or other beam redirecting device.
[0053] FIG. 8b is a cross-sectional, perspective view of the distal end of the wand of FIG. 8a, illustrating an emitter beam which is redirected by a mirror.
[0054] FIG. 8c is a cross-sectional, perspective view of the distal end of the wand 2b of FIG. 8a, illustrating an emitter beam which is redirected by mirrors.
[0055] FIG. 8d is a perspective view of a surface on which an addressing grid has been overlain. For diagrammatical clarity, only parts of the grid have been illustrated, it being understood that the grid is continuous over the surface.
[0056] FIG. 9 is a cross-sectional, perspective view of an example embodiment of a faux forceps wand which includes a finger slider and sensor and/or haptic feedback device.
[0057] FIG. 10 is a perspective view of a further example embodiment of an operator viewer, with force feedback information as illustrated in detail, and tool icons of available tools and a selected tool.
[0058] FIG. 11 is a cross-sectional, perspective view which illustrates an example of relative movement of wand controls and consequent movement of a tool.
[0059] FIGS. 12, 13 and 14 are cross-sectional, perspective views which illustrate an example of relative movement of wand controls and consequent movement of a tool relative to a bolt.
[0060] FIGS. 15 and 16 are cross-sectional, perspective views which illustrate an example of tools with adjustable extensions, which can retract in order to compensate for a rising and falling surface in accordance with an example embodiment of an aspect of the present invention.
[0061] FIG. 17 is a cross-sectional, perspective view of a camera tool which illustrates the effect of spacing of neighboring projected dots on a surface at two stages of movement. The separations, along with known information (the angles of the beams relative to the tool and the position of a camera tool), provide a computer with a description of the changing position of the surface at each point in time.
[0062] FIG. 18 is a perspective view detail of a distal end of the camera tool of FIG. 17 projecting beams at various predetermined angles, relative to the tool.
[0063] FIG. 19 is a cross-sectional, block diagram of an example passive haptic feedback device in which the flow of an electrorheological or magnetorheological fluid is controlled by an electrical or magnetic field between elements, which can be electrodes or magnetic coils in accordance with an embodiment of an aspect of the present invention.
[0064] FIG. 20 is a cross-sectional, block view of an alternate embodiment of a passive haptic feedback device in which the flow of fluid, such as saline or glycerin is controlled by an electrical valve.
[0065] FIG. 21 is a cross-sectional, perspective view of the operator's view of a worksite as viewed through an example embodiment of a viewer with eyepieces, illustrating superimposed tool cursors of the operator's intended position of tools at the worksite, and the actual position of the tools at the worksite.
[0066] FIG. 22 is a cross-sectional, perspective view of an example wand attached to any body part, tool, or other object, by means of connectors, which have complementary indexing means, to ensure their proper alignment.
[0067] FIG. 23 is a cross-sectional, perspective view of two wands that can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill is positioned so that it can drill through a hidden hole.
[0068] FIG. 24 is a cross-sectional, perspective view of one wand, and sensor array assembly (an example detector) which can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill is positioned so that it can drill through a hidden hole. A sensor array replaces the emitter housing in the sensor array assembly but the assembly is otherwise similar in construction to a wand. The sensor array communicates with a controller/encoder through communicating means and thence wirelessly to a computer.
[0069] FIG. 25 is a cross-sectional, perspective view of two combination wand and sensor array assemblies 47 which have been daisy-chained with two other combination units (not shown), in combination with a sensor array 1.
[0070] FIG. 26 is a graphic illustration of a screen plane (surface of a first object) and device planes with mounted lasers (second object) and related coordinate systems.
[0071] FIG. 27 is a graphic illustration of a linear translation between coordinate systems of FIG. 26.
[0072] FIG. 28 is a graphic illustration of a rotational translation between coordinate systems of FIG. 26.
[0073] FIGS. 29a to 29e are partial, sectional, perspective views of the operating theatre and remote work site, which illustrate methods to reduce or eliminate operational latency of the system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0074] An object's location, sometimes referred to as position, and orientation, sometimes referred to as attitude, will together be called the “pose” of the object, where it is understood that the orientation of a point is arbitrary and that the orientation of a line, a plane or other special geometrical objects may be specified with only two, rather than the usual three, orientation parameters.
[0075] A pose can have many spatial parameters, referred to herein as parameters. As described above, such parameters may include the location and orientation of the object. Parameters may include location information in one, two or three dimensions. Pose location parameters may also be described in terms of vectors, providing a direction and a distance. Pose orientation parameters may be defined in terms of an axis of the object, for example, the skew (rotation about the axis), rotation (rotation of the axis about an intersection of the axis and a line normal to a plane), and tilt (rotation of the axis about an intersection of the axis and a line parallel to the plane). Other pose orientation parameters are sometimes referred to as roll, pitch and yaw.
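For concreteness, a brief sketch of how three orientation parameters such as roll, pitch and yaw can be encoded as a rotation matrix alongside a location vector (the Z-Y-X axis convention here is an assumption chosen for illustration; the patent does not mandate one):

```python
import numpy as np

def rotation_from_rpy(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Rotation matrix from roll/pitch/yaw angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx  # roll about x, then pitch about y, then yaw about z

# A full pose is then the pair (R, t): an orientation plus a location vector.
pose = (rotation_from_rpy(0.1, 0.0, np.pi / 4), np.array([0.0, 0.0, 0.5]))
```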
[0076] It will be evident to those skilled in the art that there are many possible parameters to a pose, and many possible methods of deriving pose information. Some parameters will contain redundant information between parameters of the pose. The principles described herein include all manner of deriving pose information from the geometric configuration of detector and surface described herein, and are not limited to the specific pose parameters described herein.
[0077] Pose parameters may be relative to an object (such as a surface), or to some other reference. Pose parameters may be indirectly derived; for example, a pose relative to a first object may be derived from a pose relative to a second object and a known relationship between the first object and the second object. Pose parameters may be relative in time; for example, a change in the pose of an object resulting from motion over time may itself be a pose element, without determining the original pose element.
[0078] The description provided herein is made with respect to exemplary embodiments. For brevity, some features and functions will be described with respect to some embodiments while other features and functions will be described with respect to other embodiments. All features and functions may be exchanged between embodiments as the context permits, and the use of individual features and functions is not limited by the description to the specific embodiments with which the features and functions are described herein. Similarly, the description of certain features and functions with respect to a given embodiment does not limit that embodiment to requiring each of the specific features and functions described with respect to that embodiment.
[0079] In this description one or more computers are referenced. It is to be understood that such computers comprise some form of processor and memory, which may or may not be integrated in a single integrated circuit. The processor may be provided by multiple CPUs, which may be integrated on a single integrated circuit as is becoming more and more common, or by a single CPU. Dedicated processors may be utilized for specific types of processing, for example processing that is mathematically computationally intensive. The functions of the computer may be performed in a single computer or may be distributed across multiple computers connected directly, through a local area network (LAN), or across a wide area network (WAN) such as the Internet. Distributed computers may be in a single location or in multiple locations. Distributed computers may be located close to the external devices that utilize their output or provide their input, in order to reduce transmission times for large amounts of data. For example, image data may be processed in a computer at the location where such data is produced; rather than transmitting entire image files, smaller amounts of post-processed data may be transmitted to where they are required.
[0080] The processing may be executed in accordance with computer software (computer program instructions) located in the memory to perform the various functions described herein, including for example various calculations and the reception and transmission of input and output of the processor. Such software is stored in memory for use by the processor. Typically the memory that is directly accessible to the processor will be read only memory (ROM) or random access memory (RAM) or some other form of fast access memory. Such software, or portions thereof, may also be stored in longer term memory for transfer to the fast access memory. Longer term storage may include for example a hard disk, CDROM in a CDROM drive, DVD in a DVD drive, or other computer readable medium.
[0081] The content of such software may take many forms while carrying out the features and functions described herein and variants thereof as will be evident to those skilled in the art based on the principles described herein.
[0082] Patterns include, for example, the spots emitted from the emitters described herein. Patterns also include other examples provided herein, such as ellipses and other curves. They may also include asymmetrical patterns such as bar codes. Actively generating a pattern includes, for example, generating a pattern on a computer monitor (called herein a screen) or other display device. Actively generating a pattern may alternatively include projecting the pattern onto a surface. A detector includes, for example, a camera or a sensor array incorporating, for example, CCD devices, and the like. Reference pattern data may include, for example, the location and direction of emitters or other projectors.
[0083] Objects as used herein are physical objects, and the term is to be construed generally unless the context requires otherwise. When projection or detection occurs at an object it is intended to include such projection or detection from objects fixedly connected to the initial object and the projector or detector is considered to be part of the initial object.
[0084] Referring to the FIGS., like items will be referenced with the same reference numerals from FIG. to FIG. and the description of previously introduced items will not be repeated, except to the extent required to understand the principle being discussed. Further, similar, although not identical, items may be referenced with the same initial reference numeral and a distinguishing alphabetic suffix, possibly followed by a numerical suffix.
[0085] In some aspects embodiments described herein provide a solid-state operator interface which accurately reports the movements of the working end of an operator's faux instruments, which are then accurately reproduced at the working end of the robot's tools. In the case of a surgical robot, the operator (surgeon) manipulates instruments similar to those the surgeon would normally use, such as a tubular wand for a scalpel and an instrument similar in shape to forceps. This approach reduces the training required to become adept at using a robotic system, and also avoids the deterioration of skills learned in hands-on operating procedures.
[0086] In some aspects embodiments described herein provide an operator interface that permits an input device, and the hands of the operator, to move in a larger space, which would eliminate or reduce the occasions in which the system requires resetting a center point of operator interface movements.
[0087] In some aspects embodiments described herein provide an interface which allows for fine coordinated movements by input device, and by both hands, such as when the surgeon attaches a donor and recipient vessels with sutures.
[0088] In some aspects embodiments described herein provide an interface that may include haptic feedback.
[0089] In some aspects embodiments described herein provide an interface system that can position the tools at any point in time so that non-operationally created motions are fully compensated for, and a relatively small patch of surface, where the procedure is being carried out, is rendered virtually static from the operator's point of view.
[0090] In some aspects, embodiments described herein provide a method for virtually limiting latency, during the operation. In some other aspects, embodiments described herein provide a method for alerting an operator to the existence and extent of latency during the operation.
[0091] Referring to FIG. 1, an operator's hand 6 controls the motion of the wand 2 within a sensor array 1, comprised of five rectangular segments forming an open-sided box. Narrow light beams 4 emanate from a light-emitting cluster 3 and project spots of light 5 on the light sensors of the sensor array 1.
[0092] Referring to FIG. 2, the box sensor array 1 of FIG. 1 is replaced by a buckyball-shaped sensor array 1a, comprised of hexagonal and pentagonal segments, and an opening 7, which permits the wand 2 to be inserted into the sensor array 1a.
[0093] Referring to FIG. 3, a system includes the sensor array 1 and transmission means 11a that deliver signals from the segments of the sensor array 1 at interface pads 11b to computer 11. A three dimensional viewer 8 includes superimposed force feedback information 10b, 10c, as shown in detail 10a, on top of the three dimensional image of the work space.
[0094] Referring to FIG. 4 and FIG. 5, two examples are shown of the force feedback information 10d, 10e, 10f and 10g, which may be used in substitution for, or in addition to, haptic feedback.
[0095] Referring to FIG. 6, various elements of a robotic control system are shown. FIG. 6 illustrates an example where a body 14 is being operated on through an incision 14a. The robot in this case is fitted with tool controller 15 and example tools: forceps 15b, three dimensional camera 15c and cauterizing scalpel 15d. The robot's principal actuators 15a control the various movements of the tools in response to the positions of the wands 2 including the goose-neck camera guiding wand 13, and commands of the operator.
[0096] Referring to FIG. 6a, an example of a user interface uses a single panel to form the sensor array 1.
[0097] Referring to FIG. 6a1, a user interface is shown that is similar to that illustrated in FIG. 6a, except that the sensor array 1b is comprised of two panels at an angle relative to each other, which is known to the computer 11.
[0098] Referring to FIG. 6a2, an interface is shown that is similar to that illustrated in FIG. 6b, except that the camera 3c is located in a stationary position above the surface 1b, such that it can view spots 5 projected onto the surface and their position on the surface, but at an angle which minimizes or eliminates interference by the wand 2 with the emitted beams 4. The camera 3c is connected to the computer 11 by connecting means 3b1.
[0099] Referring to FIG. 6b, a user interface is shown that includes a lens 3c, which tracks the spots 5 projected onto a surface 1b, which may not contain sensors, and transmits the information wirelessly to the controller/encoder 18 and/or the computer 11.
[0100] Referring to FIG. 6c, a wand 2b includes a lens 3c.
[0101] Referring to FIG. 7, an example, generally cylindrical wand 2 includes rechargeable battery 17 and controller/encoder 18, various example controls 19, 20, 20a, 21 and light emitter cluster 3, which houses light emitters 3a.
[0102] Referring to FIG. 8, a faux forceps wand 2 has finger holes 21, return spring 21a and sensor/haptic feedback controller 21b.
[0103] Referring to FIG. 8a, the wand 2 is similar to FIG. 7, but instead of multiple fixed emitters 3a, there is one emitter 3a, the beam 4 of which is redirected by a mirror 3d or other beam redirecting device. FIG. 8a also illustrates the wand 2 with a camera 3c.
[0104] FIG. 8b is a cross-sectional, perspective view of the distal end of wand 2, illustrating in greater detail emitter 3a and beam 4, part of which is redirected by mirror 3d1, in some embodiments being one of an array of mirrors 3e.
[0105] FIG. 8c is a cross-sectional, perspective view of the distal end of wand 2b, illustrating in greater detail the emitter 3a and beam 4, part of which is redirected by mirrors 3d2 and 3d3. FIG. 8c also illustrates an alternative location for camera 3c, in this case at the distal end of the mirror array 3e.
[0106] Referring to FIG. 8d, a surface 1b has an addressing grid 1c overlain. For diagrammatical clarity, only parts of the grid have been illustrated, it being understood that the grid 1c is continuous over the surface 1b.
[0107] Referring to FIG. 9, faux forceps wand 2 includes a finger slider 19a and sensor and/or haptic feedback device 19c.
[0108] Referring to FIG. 10, an operator viewer 8 has force feedback information 10 as illustrated in detail 10a, and also illustrated in FIG. 3. Tool icons 10h represent available tools. In this example, the operator has selected the forceps icon 26b for the left hand and the wrench tool icon 27b for the right hand. As an example, the selected tool is indicated by the icon being bolded.
[0109] Referring to FIG. 11, an example of relative movement of the wand 2 controls is shown, including the finger hole control 21 and the finger slider control 19a (see FIG. 9), and the consequent movement of the tool 26.
[0110] Referring to FIGS. 12, 13 and 14, an example of relative movement of the wand 2 controls is shown, including the finger hole control 21, the finger slider control 19a and the rotary controller 20, and the consequent movement of tool 27 relative to the bolt 29.
[0111] Referring to FIGS. 15 and 16, example tools 15b, 15c and 15d have adjustable extensions 15b1, 15c1 and 15d1 which can retract 15b2, 15c2 and 15d2 in order to compensate for rising and falling of a surface, for example a heart surface 14d1, 14d2.
[0112] Referring to FIG. 17, camera tool 15c views the effect of the spacing of neighboring projected dots 5 on the surface of the heart 14d1, 14d2, at two stages in the heart's beat. The separations, along with known information (the angles of the beams 4 relative to the tool 15c and the position of the camera tool 15c), provide computer 11 with a description of the changing position of the heart surface at each point in time. The figure also illustrates one example position of cameras, or camera lenses, 3c and 3c2.
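The depth-from-spot-spacing geometry described for FIG. 17 can be made concrete with a simplified sketch (assuming just two beams diverging symmetrically from a common origin onto a locally flat surface normal to the tool axis; the multi-beam, tilted-surface case generalizes this):

```python
import math

def surface_distance(spot_separation: float, half_angle_rad: float) -> float:
    """Distance from the beam origin to a flat surface normal to the tool axis.

    Two beams diverging at +/- half_angle from the axis hit the surface at
    distance d with spot separation s = 2 * d * tan(half_angle), so
    d = s / (2 * tan(half_angle)). Wider spot spacing means the surface has
    moved away; narrower spacing means it has risen toward the tool.
    """
    return spot_separation / (2.0 * math.tan(half_angle_rad))

# Example: spots 6 mm apart with 5-degree beams -> surface about 34 mm away.
print(surface_distance(0.006, math.radians(5.0)))
```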
[0113] Referring to FIG. 18, distal end of the camera tool 15c is shown in detail. The emitter cluster 3 and emitters 3a project beams 4 at various predetermined angles, relative to the tool 15c.
[0114] Referring to FIG. 19, an example passive haptic feedback device has flow of an electrorheological or magnetorheological fluid controlled by an electrical or magnetic field between elements 36, which can be electrodes or magnetic coils. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31.
[0115] Referring to FIG. 20, an example passive haptic feedback device has a flow of fluid, such as saline or glycerin, controlled by an electrical valve 37. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31.
[0116] Referring to FIG. 21, an operator's view of the worksite (a remote process) seen through the viewer 8 and eyepieces 9 has superimposed tool cursors 15d3 and 15b3 that illustrate the operator's intended position of the tools at the worksite. Actual position of the tools 15d2 and 15b2 at the worksite is also shown in the viewer 8 to display to the operator the difference between the two due to temporal latency.
[0117] Referring to FIG. 22, a wand 2b may be attached to any body part, tool 15d2, 15c2, or other object, by means of connectors 42 and 42a, which have complementary indexing means 42c and 42b to ensure their proper alignment. Where an external camera 3c, such as illustrated in FIG. 6a2, or a sensor array 1, as illustrated in FIG. 23, is provided, the wand 2b may not have an integral camera 3c.
[0118] Referring to FIG. 23, two wands 2i and 2ii can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill 44 is positioned so that it can drill through a hidden hole 46.
[0119] Referring to FIG. 24, a wand 2 and sensor array assembly 1d can be aligned in a desired manner, or be placed in a desired orientation or pose with respect to each other or another object. In this example a drill 44 is posed so that it can drill through a hidden hole 46. The sensor array 1 replaces the emitter housing 3 but is otherwise similar in construction to the wand 2. The sensor array 1 communicates with the controller/encoder 18 by communicating means 11a and thence wirelessly to computer 11 (not shown).
[0120] Some general elements of embodiments of some aspects of the present invention will now be discussed.
[0121] One embodiment is a system which accurately records the motions of the working end of an operator's faux instruments, herein referred to as a wand, which can approximate the shape of the devices the operator would use in a manual procedure. These motions are reported to the working end of the tools that the robot applies to the work site.
[0122] Other embodiments simply use the wand as an input device, and its shape may not in any way relate to a particular instrument. For clarity, this disclosure will use a surgical interface to illustrate some convenient features of the invention, but in some embodiments the shape of the wand may not in any way mimic standard tools or instruments. It should also be noted that reference is made to a system controlling robotically controlled tools. It should be understood that some embodiments will control actuators that perform all types of work, such as reaction devices (rocket motors or jet engines, for example) or the position of wing control surfaces, to name a few. The system may also control virtual, computer generated objects that are visually displayed or remain resident within the computer, in which case actuators may not be used at all. Embodiments of this type would include manipulation of models of molecular structures (molecular modeling) and manipulation of protein structures. In such embodiments the wand may be thought of as a computer mouse in three dimensions, for example allowing the operator to view a three dimensional image of a structure and then make alterations to it by moving the wand and making control commands, for example in the space in front of a sensor array. Such an embodiment of the wand and method could be used in architecture, machine design or movie animation. It will be recognized by those skilled in the art that these are examples only of uses of such embodiments and that the embodiments are not limited to these examples.
[0123] In some described embodiments wands 2 incorporate light-emitting elements 3a that collectively cast multiple narrow beams of light, at known angles to each other, onto a sensor array 1 constructed of one or more light detecting panel(s), as illustrated in FIG. 3. The light detecting panel(s) report the location of the incident light, in real-time, to a computer. Knowing the angles at which the emitters 3a project the light beams from the wand 2, the computer can convert the various locations 5 of the incident light beams 4, using triangulation and mathematical methods and algorithms well known to the art, to calculate the position and attitude of the wand 2 relative to the sensor array 1 at each particular time interval. As the wand 2 moves, so do the spots of incident light 5 on the sensor array(s) 1, and so the computer can produce a running calculation of the position and attitude (example parameters of the pose) of the wand 2, from time to time. The computer can convert changes in parameters of the pose into instructions to the robot to assume relative motions. Small changes in the position and attitude of the wand can trace relatively large positional changes where the light falls 5 on the sensor array 1. This can allow for accurate determination of the position and attitude of the wand.
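As a minimal, non-authoritative sketch of this triangulation (added for illustration; numpy and scipy are assumed, the beam directions and pose values are hypothetical, and the rotation convention follows Table 2 below), the wand pose can be recovered by least-squares fitting of the predicted spot locations to those the sensor array reports:

```python
import numpy as np
from scipy.optimize import least_squares

def rot(phi, theta, psi):
    # Rotation matrix built from the a_ik coefficients of Table 2 (below).
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [ct * cp,                -ct * sp,                st],
        [cf * sp + sf * st * cp,  cf * cp - sf * st * sp, -sf * ct],
        [sf * sp - cf * st * cp,  sf * cp + cf * st * sp,  cf * ct]])

def spots(pose, dirs):
    # Intersect each beam 4 with the sensor-array plane z = 0 to get spots 5.
    phi, theta, psi, tx, ty, tz = pose
    d = dirs @ rot(phi, theta, psi).T      # beam directions in array frame
    s = -tz / d[:, 2]                      # ray parameter at the plane
    return np.array([tx, ty]) + s[:, None] * d[:, :2]

# Beams at known angles to each other, in the wand frame (illustrative only).
dirs = np.array([[0., 0., -1.], [.3, 0., -1.], [0., .3, -1.], [-.3, -.3, -1.]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

true_pose = np.array([0.05, -0.08, 0.2, 0.02, 0.03, 0.50])  # attitude + position
observed = spots(true_pose, dirs)          # what the sensor array 1 reports

# Recover the pose from the observed spots, starting from a rough guess.
fit = least_squares(lambda p: (spots(p, dirs) - observed).ravel(),
                    x0=np.zeros(6) + [0, 0, 0, 0, 0, 0.4])
print(np.round(fit.x - true_pose, 6))      # residual pose error, approximately 0
```

With four non-symmetric beams there are eight measured coordinates for six pose unknowns, which is why the redundant emitters discussed below also help to condition the fit.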
[0124] Mathematical calculations that may be used to determine parameters of a pose of the wand and other parameters of pose described herein have been developed, for example, in the field of photogrammetry, which provides a collection of methods for determining the position and orientation of cameras and range sensors in a scene and relating camera positions and range measurements to scene coordinates.
[0125] In general there are four orientation problems:
A) Absolute Orientation Problem
[0126] To solve this problem one can determine, for example, the transformation between two coordinate systems, or the position and orientation of a range sensor in an absolute coordinate system, from the coordinates of calibration points. This can be done by recovery of a rigid body transformation between the two coordinate systems. One application is to determine the relationship between a depth measuring device, such as a range camera or binocular stereo system, and the absolute coordinate system. (A code sketch of this problem is given below, following the discussion of these references.)
[0127] In the case of a range camera, the input is at least a set of four conjugate pairs from one camera and absolute coordinates. In the case of a binocular stereo system, the input is at least three conjugate pairs seen from the left and right cameras.
B) Relative Orientation Problem
[0128] To solve this problem one can determine, for example, the relative position and orientation between two cameras from projections of calibration points in the scene. This is used to calibrate a pair of cameras for obtaining depth measurements with binocular stereo.
[0129] Given n calibration points, there are 12+2n unknowns and 7+3n constraints.
[0130] At least 5 conjugate pairs are needed for a solution.
C) Exterior Orientation Problem
[0131] To solve this problem one can determine, for example, the position and orientation of a camera in an absolute coordinate system from the projections of calibration points in a scene. This problem must be solved in an image analysis application when it is necessary to relate image measurements to the geometry of the scene. It can be applied to a problem of position and orientation of a bundle of rays.
D) Interior Orientation Problem
[0132] To solve this problem one can determine, for example, the internal geometry of a camera, including camera constants, location of the principal point and corrections for lens distortions.
[0133] Some examples of these problems and their solutions are found in Ramesh Jain, Rangachar Kasturi and Brian G. Schunck, Machine Vision, McGraw-Hill, New York, 1995, ISBN 0-07-032018-7. Chapter 12, on calibration, deals in particular with the absolute orientation problem with scale change and binocular stereo, and with camera calibration problems and solutions, which correlate image pixel locations to points in space. The camera calibration problem includes both the exterior and interior orientation problems.
[0134] In addition to calibration problems and solutions, the Jain et al. reference addresses example problems and solutions for extracting the distance or depth of various points in the scene relative to the position of a camera, by direct and indirect methods. As an example, depth information can be obtained directly from a pair of intensity images using two cameras displaced from each other by a known distance and with known focal length. As an alternative example solution, two or more images taken from a moving camera can also be used to compute depth information. In addition to these direct methods, 3D information can also be estimated indirectly from 2D intensity images by what are known as "Shape from X" techniques, where X denotes image cues such as shading, texture, focus or motion. Examples are discussed in Chapter 11 in particular.
[0135] The above Jain et al. reference is hereby incorporated by reference into the detailed description hereof.
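As an added sketch of the absolute orientation problem (A) above, the standard SVD-based (Kabsch-type) closed form, which is one of several solutions surveyed in the cited texts, recovers the rigid transformation between two coordinate systems from conjugate point pairs:

```python
import numpy as np

def absolute_orientation(P, Q):
    """Rigid transform (R, t) with Q[i] ~ R @ P[i] + t, from conjugate pairs.
    P and Q are (n, 3) arrays, n >= 3, points not collinear."""
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, qc - R @ pc

# Self-check with a random rotation and translation (illustrative only).
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A *= np.sign(np.linalg.det(A))                    # force det(A) = +1
P = rng.normal(size=(5, 3))
Q = P @ A.T + np.array([1.0, -2.0, 0.5])
R, t = absolute_orientation(P, Q)
assert np.allclose(R, A) and np.allclose(t, [1.0, -2.0, 0.5])
```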
[0136] As a further example discussion of solutions to mathematical calculations that may be used to determine parameters of a pose of the wand, for the purposes of determining the 3D position of a hand-held device equipped with laser pointers through 2D image analysis of the laser point projections onto a screen, two coordinate systems can be defined as shown in FIG. 26. The centre of a first coordinate system (xS, yS, zS) can be placed in the middle of the plane that coincides with the screen (projection) plane and is considered to be fixed. The lasers installed on the hand-held device can be described by a set of lines in a second coordinate system (xD, yD, zD) whose origin coincides with the intersection of the laser pointers. Additionally, the second coordinate system has freedom of translation and rotation, as shown in FIGS. 27 and 28. Translation and rotation coordinates such as those shown in FIGS. 27 and 28 can also be found in a linear algebra text such as Howard Anton, Elementary Linear Algebra, John Wiley & Sons, 4th edition, ISBN 0-471-09890-6, Section 4.10, pp. 199-220.
[0137] The projection of the laser on the fixed plane is mathematically equivalent to finding the intersection between the plane equation zS = 0 and the line equation describing the laser path. However, the line equations have to be transformed into the original coordinate system. There are many ways to define an arbitrary rotation and translation of one coordinate frame into another; one of them is via the transform matrix elements, as set out in Tables 1 and 2 below and in the worked expression following them.
[0138] Tables 1 and 2 show the coordinate transforms of a point P from one coordinate system to the other as a result of linear translation and rotation.
TABLE 1. Linear translation and rotation

  Translation:        Rotation:
  x = x* + a1         x = a11 x* + a12 y* + a13 z*
  y = y* + a2         y = a21 x* + a22 y* + a23 z*
  z = z* + a3         z = a31 x* + a32 y* + a33 z*

TABLE 2. Rotational transformation: definition of the a_ik coefficients
(rotation angles φ, θ, ψ)

  a11 = cos θ cos ψ
  a12 = -cos θ sin ψ
  a13 = sin θ
  a21 = cos φ sin ψ + sin φ sin θ cos ψ
  a22 = cos φ cos ψ - sin φ sin θ sin ψ
  a23 = -sin φ cos θ
  a31 = sin φ sin ψ - cos φ sin θ cos ψ
  a32 = sin φ cos ψ + cos φ sin θ sin ψ
  a33 = cos φ cos θ
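As a worked illustration of paragraph [0137] (an added sketch using the Table 1 and Table 2 conventions, with t = (a1, a2, a3) the translation, A = (a_ik) the rotation, and d the direction of a laser line in the device frame):

$$\mathbf{p}(s) = \mathbf{t} + s\,A\mathbf{d}, \qquad z_S = 0 \;\Rightarrow\; s^{*} = -\frac{t_z}{(A\mathbf{d})_z}, \qquad (x_S,\, y_S) = \bigl(t_x + s^{*}(A\mathbf{d})_x,\; t_y + s^{*}(A\mathbf{d})_y\bigr),$$

so each laser contributes one projected point per frame, provided $(A\mathbf{d})_z \neq 0$.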
[0139] Table 3 summarizes example laser properties and image-analysis requirements for reconstructing the translation or rotation of the hand-held device from the observed movement of the projection points, as set out above. For the purpose of this discussion, multiple lasers are equivalent to a single laser split into multiple spot beams.
TABLE 3. Detectability of device motion versus number of lasers (✓ = directly detectable)

1 laser:
  Translation x, y: ✓
  Translation z: requires a light source with a large dispersion angle; requires edge detection and area or perimeter calculations.
  Rotation along z: possible with an offset sensor and path reconstruction from a minimum of 3 frames, for large angles.
  Rotation along x or y: not detectable for a narrow laser beam (it would be interpreted as translation); in the case of a dispersed beam, not very sensitive for small rotational angles, and requires edge detection and shape reconstruction.

2 lasers:
  Translation x, y, z: ✓
  Rotation along z: problem with detection of a left-right laser exchange, equivalent to 180° rotation; requires marking of one of the lasers, and still requires path reconstruction via frame history.
  Rotation along x: would be interpreted as translation in the case of horizontal or vertical alignment; for misaligned lasers, not very sensitive, and requires distance calculation and calibration.
  Rotation along y: requires non-parallel laser beams and distance calibration.

3 lasers:
  Translation x, y, z: ✓
  Rotation along z: requires marking of one of the lasers.
  Rotation along x: requires area or perimeter calibration/calculation.
  Rotation along y: possible with non-parallel laser beams.

4 or more lasers: can provide additional information to potentially avoid singularities or ambiguities.
[0140] Additional image frames can be used to change the number of lasers, or spots, used at any one time. The linear translation in the x and y directions can be reconstructed from the center of mass of the projected points. The translation along the z axis can utilize a calibration of the area/perimeter of the triangle formed by the points. Detection of rotation around the z axis can be achieved by marking one of the lasers or by asymmetrical placement of the lasers. Marking a laser may result in faster processing time compared to the second option, which requires additional image processing in order to find the relative position of the triangle. The marking of the laser can be achieved, for example, by having one laser of larger power, which would translate into pixel-intensity saturation at the projection point.
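A hedged sketch of these reconstructions (illustrative Python; the quadratic area-versus-distance model assumes three beams diverging from a common origin, and the calibration values are hypothetical):

```python
import numpy as np

def xy_translation(spots, ref_spots):
    # Lateral translation from the shift of the spots' center of mass.
    return spots.mean(axis=0) - ref_spots.mean(axis=0)

def triangle_area(spots):
    a, b, c = spots
    return 0.5 * abs((b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0])

def z_from_area(spots, area0, z0):
    # For beams diverging from a common origin, spot separation grows
    # linearly with distance, so triangle area grows quadratically;
    # (area0, z0) is a one-time calibration pair.
    return z0 * np.sqrt(triangle_area(spots) / area0)

ref = np.array([[0., 0.], [10., 0.], [0., 10.]])   # calibrated at z0 = 100
cur = np.array([[2., 3.], [17., 3.], [2., 18.]])   # a later frame
print(xy_translation(cur, ref))                    # -> [3.67 4.67] (approx.)
print(z_from_area(cur, triangle_area(ref), 100.0)) # -> 150.0
```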
[0141] With respect to image processing time, it may be preferable to limit the area of the laser projection, for example to a 3 by 3 pixel array. Once the first laser point has been detected, a search algorithm for the rest of the laser points can be limited to a smaller image matrix, based on the definition of allowable movements.
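For example (an added sketch; the window size, threshold and numpy image representation are assumptions), the restricted search could look like:

```python
import numpy as np

def find_spot_near(img, last_pos, half=8, thresh=200):
    """Search a small window around a spot's position in the previous frame,
    rather than the whole image; returns the new centroid or None."""
    r, c = last_pos
    r0, c0 = max(r - half, 0), max(c - half, 0)
    win = img[r0:r + half + 1, c0:c + half + 1]
    ys, xs = np.nonzero(win > thresh)
    if ys.size == 0:
        return None                      # fall back to a full-frame search
    return int(round(ys.mean())) + r0, int(round(xs.mean())) + c0
```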
[0142] Other illustrative examples of mathematical calculations that may be used to determine parameters of a pose of the wand, and other parameters of pose described herein, are included, for example, in B. K. P. Horn, Robot Vision, McGraw-Hill, New York, 1986; the U.S. patent application of Fahraeus, filed Mar. 21, 2001 under application Ser. No. 09/812,902 and published as Pub. No. US2002/0048404 on Apr. 25, 2002, under the title APPARATUS AND METHOD FOR DETERMINING SPATIAL ORIENTATION, which discusses, among other things, determining the spatial relationship between a surface having a predetermined pattern and an apparatus; the U.S. patent of Zhang et al., issued Apr. 4, 2006, under the title APPARATUS AND METHOD FOR DETERMINING ORIENTATION PARAMETERS OF AN ELONGATE OBJECT; Marc Erich Latoschik and Elmar Bomberg, Augmenting a Laser Pointer with a Diffraction Grating for Monoscopic 6DOF Detection, Journal of Virtual Reality and Broadcasting, Volume 4 (2006), no. 14, urn:nbn:de:0009-6-12754, ISSN 1860-2037, http://www.jvrb.org/4.2007/1275; Eric Woods (HIT Lab NZ), Paul Mason (Lincoln University, New Zealand) and Mark Billinghurst (HIT Lab NZ), MagicMouse: an Inexpensive 6-Degree-of-Freedom Mouse, http://citeseer.ist.psu.edu/706368.html; and Kynan Eng, A Miniature, One-Handed 3D Motion Controller, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland, http://www.ini.ethz.ch/˜kynan/publications/Eng-3DController-ForDistribution-2007.pdf. The content of each of the references cited in this paragraph is hereby incorporated by reference into the detailed description hereof.
[0143] Rather than using a sensor array to detect the incident light beams 4 and spots 5, a camera above a passive surface 1b, as illustrated in FIG. 6a2, may similarly detect the position of the incident spots of light 5 on the surface 1b and make the same calculations described above to determine the position and attitude of the wand 2. Alternatively, a camera 3c may be incorporated into the wand 2b to detect where the light falls 5 on the surface 1b, as illustrated in FIG. 6a1.
[0144] With reference to FIG. 6, since the space in front of the sensor array(s) may be different from the space that the robot operates in, the operator may reset or, as is usually the case, center the wands 2 in front of the sensor array 1, to coordinate the wand's position with that of the working end of the robotic arms 15b, 15c and 15d for the next work sequence. Additionally, the travel distances of the wands 2 and of the working ends of the robot arms 15b, 15c, 15d, while related, may differ. For example, where accuracy is critical, the wand 2 may be set to move relatively long distances to effect a relatively short displacement at the working end of the robotic arms 15b, 15c, 15d. Conversely, where accuracy is less important and quicker movements over larger distances are desired, the computer can be instructed to translate short movements of the wand 2 into relatively large distances of travel at the working end of the robotic arms 15b, 15c, 15d. This relationship can be changed by the operator, at any time, by moving a control on the wand 2 or the controls 11e on the console 11d. These methods of computer control are well known to the art, and embodiments of the invention that incorporate such controls are within the ambit of the invention.
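A minimal sketch of such an adjustable motion-scaling relationship (the class, names and scale values are hypothetical):

```python
class MotionScaler:
    """Maps relative wand displacements to tool displacements; the operator
    can change `scale` at any time from the wand or console controls."""
    def __init__(self, scale=0.2):        # 0.2: fine work (5:1 reduction)
        self.scale = scale
        self._last = None
    def step(self, wand_pos):
        if self._last is None:
            self._last = wand_pos
            return (0.0, 0.0, 0.0)
        delta = tuple(self.scale * (a - b)
                      for a, b in zip(wand_pos, self._last))
        self._last = wand_pos
        return delta                      # commanded tool displacement

scaler = MotionScaler(scale=0.2)
scaler.step((0.0, 0.0, 0.0))              # establish the starting position
print(scaler.step((10.0, 0.0, 5.0)))      # -> (2.0, 0.0, 1.0)
```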
[0145] The relative attitude of the sensor array 1 with respect to the attitude of the robot arm work space 14b can also be set, usually at the commencement of the work, although it may be changed during the operation. For example, the vertical axis of the sensor array 1 will usually be set to be the vertical axis of the work space 14b, so that when the wand 2 is raised vertically in front of the sensor array(s) 1, the robot will produce a vertical motion at the working end 15b, 15c, 15d of the robotic arm. This, however, may be changed by the operator varying the settings of the vertical and/or horizontal plane at the console 11d or, in some other embodiments, at the wand 2.
[0146] Similarly, the longitudinal axis of the wand 2 is generally set to be the same as the longitudinal axis of the working end of the robot's arms 15b, 15c, 15d, although this too can be altered by controls at the console and, in some other embodiments, on the wand itself.
[0147] At the start or reset times, the position and attitude of the wand 2 can be translated to be the same as the position of the working end of the robot arms 15b, 15c, 15d; the motions thereafter, until the next reset, can be relative. This allows the operator to change the operator's start or reset position and attitude of the wand to make it more comfortable to execute the next set of procedures, or provide sufficient room for the next set of procedures, in front of the sensor array 1, as referred to above.
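A sketch of this reset ("clutch") behavior, translation only for brevity (the class and names are hypothetical):

```python
import numpy as np

class Clutch:
    """On reset, re-bases the wand at the current tool position; motions
    thereafter are applied relative to that reset pair."""
    def __init__(self):
        self._wand0 = self._tool0 = None
    def reset(self, wand_pos, tool_pos):
        self._wand0 = np.asarray(wand_pos, dtype=float)
        self._tool0 = np.asarray(tool_pos, dtype=float)
    def tool_target(self, wand_pos):
        return self._tool0 + (np.asarray(wand_pos, dtype=float) - self._wand0)

c = Clutch()
c.reset(wand_pos=(0.4, 0.1, 0.3), tool_pos=(0.0, 0.0, 0.05))
print(c.tool_target((0.41, 0.1, 0.3)))    # -> [0.01 0.   0.05]
```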
[0148] The movement of the wands will then control the movement of the tools to which they are assigned by the operator. Finer movements, and movements that require haptic feedback, can be effected by controls on the wands 2b, such as the finger hole control 21, the rotary control 20, 20a and the finger slider control 19b, illustrated in FIG. 6c. Switches on the wand or on the console can turn off the active control of the tools by the movement of the wand(s), while turning on or leaving on the active control of the tools by the controls on the wand, to prevent inadvertent jiggling or wander while critical and/or fine work is being conducted by the operator. On other occasions the operator may wish to manipulate the wand 2 position simultaneously with moving the controllers 20, 20a and 19b, or other controls which a preferred embodiment might include.
[0149] The sensor array 1 may be made of one or more sheets or panels of light sensor arrays, in which each pixel of the sensor array cage 1 can communicate to the computer the fact that light has or has not fallen 5 on that pixel, and identify which light beam 4, and from which wand 2, it originated. When integrated by the computer 11 with other inputs from other locations, this information can identify the location and attitude of the wand 2, by triangulation, mathematical methods and computer algorithms, well known to the art.
[0150] In some embodiments the color of the incident light, and/or the addressable pulse frequency of the light that is detected, identifies which particular light beam and wand has cast the light so incident. For example, in some embodiments a wand may have several light-emitting elements, such as lasers, diode lasers or light-emitting diodes, each having a different wavelength (or color), which can be identified and distinguished by the sensor array 1 (in combination with the computer). In other embodiments, the light emitter 3a is modulated or pulsed to give it a unique pulse address; when its beam 4 is detected by the sensor array 1, the computer identifies the particular light beam 4 and wand 2, and the location and attitude of the same. Other embodiments may take advantage of the relatively unique pattern of beams 4 emitted from each wand 2 to identify the wand 2, and perhaps the particular beam 4 from that wand. Other embodiments can include a combination of these methods, or other similar beam identification methods, well known to the art. It can be desirable to provide additional light emitters 3a to provide redundancy, in the event one or more of the beams does not strike a sensor. For example, in some embodiments an axial reference beam 4 may be directed straight along the longitudinal axis of the wand 2.
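As an added sketch of pulse addressing (the 4-bit addresses are hypothetical, and the frames are assumed to be boolean numpy images sampled in sync with the emitter modulation):

```python
ADDRESSES = {                       # hypothetical 4-frame on/off addresses
    (1, 0, 1, 1): "wand 2, beam A",
    (1, 1, 0, 1): "wand 2, beam B",
    (1, 1, 1, 0): "wand 2b, beam A",
}

def identify_beam(frames, pixel):
    """frames: sequence of boolean images; pixel: (row, col) of a spot 5.
    Reads the spot's on/off pattern across frames as its pulse address."""
    pattern = tuple(int(f[pixel]) for f in frames[:4])
    return ADDRESSES.get(pattern)   # None for an unknown or corrupted address
```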
[0151] One or more of the light beams 4 may be modulated so as to provide information as to the wand 2 identity and its mode of operation. For example, it might convey information as to the desired heat setting and off/on state of the cauterizing scalpel, or the forceps clasping position, as set by the wand's operator. It might also indicate the rotation of a particular tool. These are only examples of the information that may be selected by the operator on the wand controls and then conveyed to the sensor array 1, and hence to the computer, to control the robotic arms. Embodiments can include all other convenient instructions and inputs, and all are included within the ambit of the embodiments described herein. This method of conveying instructions may be handled by a dedicated light emitting element 3a, or be bundled into one or more of the light emitting elements 3a that are used to determine the position and attitude of the wand 2. This method of conveying instructions and status information from the wand may be in addition to the wireless communication means 16, 16a embedded in the wand, or in place of them.
[0152] The pulses of light from the light-emitting elements 3a of the cluster 3 of the wands may be synchronized such that the beams 4 fall 5 on the sensor array 1 at discrete times, so as to avoid conflicting signals in those architectures that do not have direct connections between the sensor elements and drivers, such as active or passive matrix architectures. In other embodiments, redundant beams are sufficient to resolve any signal interference, and software means such as path prediction algorithms can be used to resolve any such conflicts. The beams will in most cases fall on more than one, and often many, pixels in the sensor array, which will improve reliability at the expense of resolution, and may also be used to distinguish between two beams that strike approximately the same pixel group.
[0153] There are many methods of constructing a light sensor array 1, well known to the art, including thin film transistor (TFT) arrays, in which there may be included color filter arrays or layers to determine the color of the incident light and report the location to the computer by direct and discrete connection or, more often, by way of a passive or active connection matrix. These active-matrix (AMTFT) architectures can be used in some embodiments. Recently, polymer TFT sensor arrays are being made which substantially reduce the cost of such sensor arrays. These less expensive arrays will mean that the sensor array(s) 1 can be made much larger. An example of a polymer TFT is described by F. Lemmi, M. Mulato, J. Ho, R. Lau, J. P. Lu, and R. A. Street, Two-Dimensional Amorphous Silicon Color Sensor Array, Xerox PARC, United States, Proceedings of the Materials Research Society. It is understood that any convenient light sensor array may be used, including any future development in light sensor arrays, their architecture and composition, and such an embodiment is within the ambit of the embodiments described herein.
[0154] In some embodiments, the sensor array pixels may be combined with light emitting elements, forming a superimposed sensor array and light emitting array. In these embodiments an image of the working end of the robot arms 15b, 15c, 15d and the work site can be formed on the sensor array 1, and the operator can at the same time view the wand(s) 2 that are initiating the motion of the working end of the robot's arms 15b, 15c, 15d. This embodiment is most effective if the image is generated as a three dimensional image, although this is not required. Methods for creating a three dimensional effect are well known to the art and include synchronous liquid crystal glasses, with alternating left eye/right eye image generation, and single pane three dimensional arrays. It is to be understood that the embodiments described herein include all these methods and future three dimensional image generation methods.
[0155] Other embodiments may use an additional camera aimed at the operator's hands and wands, and append the image to that of the worksite that is viewed in the operator viewer 8. This appended image may be turned on and off by the operator.
[0156] In those preferred embodiments that use a surface 1b and camera 3c, in place of the sensor array 1, as illustrated in FIG. 6c, the wand 2b operates partly as a laser or optical mouse, that is, detecting movement by comparing images, acquired through the lens, of part(s) of the surface 1b. In some preferred embodiments images of spot(s) 5 can be detected by the said lens 3c, noting both their texture or image qualities and their positions relative to other spot(s) 5. Since the relative angles of the projected beams 4 are known, the computer 11 and/or controller/encoder 18 can process this information to determine the three dimensional position of the wand 2b relative to the surface 1b, for example by using both the methods used by optical/laser mice and mathematical methods, including trigonometry, well known to the art. As an example, movement of the wand 2b on planes parallel to the surface 1b can be determined by methods used by optical/laser mice, which are well known to the art; and the height and attitude of the wand in three dimensional space can be determined by the lens 3c detecting the relative positions of the spots 5 projected onto the surface 1b, using the triangulation and mathematical methods described above, which are also well known to the art. More particularly, the position of the wand 2b in three dimensional space can then be computed by integrating these two information streams, to accurately establish both the lateral location of the wand 2b and its height and attitude in space. Thus, not all parameters of the pose are determined utilizing the detected pattern of the spots on the surface; rather, some of the parameters are determined utilizing the texture information (lateral location), while other parameters are determined utilizing the detected pattern of spots (height and attitude).
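A hedged sketch of this split of parameters (function names hypothetical): lateral position integrates the mouse-style texture stream, while height and attitude come from the spot-pattern calculation:

```python
def fuse_pose(prev_xy, mouse_delta, spot_estimate):
    """prev_xy: (x, y) from the last frame; mouse_delta: (dx, dy) from
    texture correlation, as in an optical/laser mouse; spot_estimate:
    (z, roll, pitch, yaw) from the projected-spot triangulation above."""
    x = prev_xy[0] + mouse_delta[0]      # lateral location: texture stream
    y = prev_xy[1] + mouse_delta[1]
    z, roll, pitch, yaw = spot_estimate  # height/attitude: spot pattern
    return (x, y, z, roll, pitch, yaw)

print(fuse_pose((1.0, 2.0), (0.1, -0.2), (50.0, 0.0, 0.1, 0.0)))
```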
[0157] In other embodiments, where there are two or more panels placed at relative angles known to the computer 11, such as those illustrated in FIG. 6a1, the wands 2b may contain camera(s) 3c which are able to detect the position of spots 5 on two or more panels. In these arrangements, where the panels are surfaces 1b, the orientation and position of the wand 2 may be determined, for example, as described above by mathematical methods, including trigonometry. For example, in an embodiment where the panels are arranged at right angles to each other (at 90 degrees), as illustrated in FIG. 6a1, and where the angles at which the light beams 4 trace relative to the longitudinal axis of the wand 2 are known, and where the relative positions of the projected spots 5 which fall on both panels are recorded by the camera(s), the position and orientation of the wand 2 in three dimensional space can be directly determined by mathematical methods, including trigonometry.
[0158] This information, for example, can then be used to control the tools 15b, 15c, and 15d, or to control any process, virtual or real. It can be readily appreciated that the wand 2b, like the wand 2, can be any shape and have any function required, for example having the shape of an optical/laser mouse and pointing and directing processes in a similar manner.
[0159] In this disclosure, references to the wand 2 should be read as including the wand 2b, and vice versa, as the context permits. Similarly, references to the sensor array 1 should be read as including the surface 1b, and vice versa, as the context permits.
[0160] Embodiments of the invention that incorporate a surface 1b, rather than a sensor array(s) 1, pass information from buttons and hand controls on the wand 2b, for example 19a, 20 and 21, wirelessly or by direct connection, as herein described, or by other methods well known to the art. The beams 4 may be encoded to maintain identification of each beam and each spot 5; for example, the light emitting elements 3a may be pulsed at different frequencies and/or have different colors, which the lens 3c may detect from the light reflected from the spots 5. Although a wand 2b may resort exclusively to those methods used by optical/laser mice to determine its position in three dimensional space, without resorting to detecting, computing and integrating the relative positions of the projected spots 5, the accuracy of such a system will be inferior to those that include the latter methods, and the computational overhead will be greater as well. It is to be understood that some embodiments can rely solely on those methods used by optical/laser mice, where accuracy is not as important.
[0161] In some embodiments, the surface 1b may be any suitable surface, including those that contain the textures and marks that are typically used in association with optical/laser mice. The surface 1b may have reflectivity or surface characteristics such that the reflected spots 5 that are detected by the camera 3c are within a known envelope, and thus spots 5 that are off the surface 1b can be rejected in calculating the orientation of the wand 2b, accompanied by a warning signal to the operator.
[0162] The wands 2, 2b may include resting feet that allow them to rest on the surface 1, 1b, such that the beams 4 and spots 5 can be detected by the camera 3c, and such that the system can calibrate itself with a known wand starting orientation and, if placed on a specific footprint, position; or the sensor array 1 or the surface 1b may include an elevated cradle 1e, as illustrated in FIG. 6b, to hold the wand 2b in a fixed position for the calibration routine. The number of light emitting elements, such as lasers or photo-diodes, will depend upon the accuracy and redundancy required.
[0163] The wand 2 may in some applications be stationary, or have an otherwise known position, and measure its position relative to a moving surface or changing contours on a surface. The embodiments of the invention may include such a wand 2, or it may be incorporated into a tool, such as those, 15b, 15c, 15d, illustrated in FIG. 15, FIG. 16 and FIG. 17, and be used to compensate for motions, such as the beating of the heart 14d1, 14d2.
[0164] Feedback of forces acting on the working end of the robotic arms 15b, 15c, 15d may be detected by sensors on the robot arms, by means well known to the art, and this real-time information may be conveyed to the computer, which can regulate the haptic feedback devices to impart approximately the same forces on the operator's fingers and hands and/or resist the movement of the operator's fingers and hands. These haptic feedback devices, which are well known to the art, can, for example, be incorporated into the controls 19, 19a, 20, 21 or other similar controls of the wand 2 or 2b. These haptic feedback devices can be active or passive: they can impart force on the operator's fingers or hands (active), and/or resist the motion of the operator's fingers or hands (passive). Examples of passive haptic feedback devices are illustrated in FIGS. 19 and 20. FIG. 19 illustrates a passive haptic feedback device in which the flow of an electrorheological or magnetorheological fluid is controlled by an electrical or magnetic field. FIG. 20 illustrates a passive haptic feedback device in which the flow of a fluid, such as saline or glycerin, is controlled by an electromechanical valve. Embodiments of this invention may incorporate haptic feedback devices of any design known to the art, and all come within the ambit of the embodiments described herein.
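A minimal control sketch for a passive device of this kind (an added illustration; the force range and the linear force-to-restriction map are assumptions):

```python
def haptic_restriction(tool_force_n, full_scale_n=20.0):
    """Maps force sensed at the tool tip (newtons) to a normalized command
    for a passive haptic device: 0.0 = free piston motion, 1.0 = maximum
    restriction of fluid flow (valve nearly closed, or field at maximum),
    so resistance felt at the wand tracks force at the work site."""
    return min(max(tool_force_n / full_scale_n, 0.0), 1.0)

print(haptic_restriction(5.0))    # -> 0.25: light resistance at the wand
```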
[0165] These haptic feedback devices can, for example, be incorporated into the finger hole 21 sensor/feedback controller 21b. For example, the finger holes 21 of the wand that is a faux forceps, as illustrated in FIG. 9, can be provided with haptic feedback devices which provide pinching feedback forces to the operator's hands and which accurately simulate the forces acting on the working end of the forceps tool 15b on the working end of the robotic arm. The position and motion of the mobile finger hole 21 can be conveyed to the computer wirelessly, by beam modulation, as described above, or by cable.
[0166] Similarly, the same faux forceps illustrated in FIG. 9 can, in some preferred embodiments of the invention, include a haptic feedback device in the finger slider sensor/haptic feedback device 19c, which senses the movement of the finger slider 19a and which can move the forceps tool 15b back and forth in a direction parallel to the longitudinal direction of the said tool 15b. As the operator slides the finger slider from position 19a to 19b, the operator feels the same resistance that the tool 15b senses when it pulls back tissue that it grasps, in response to the pulling back of the said slider 19a.
[0167] The faux forceps illustrated in FIG. 9 can transform its function from forceps to any other tool or instrument that is required. For example, the same faux forceps illustrated in FIG. 9 can act as a controller for a scalpel tool 15d, a wrench 27 (illustrated in FIG. 13), or any other tool or instrument, in which the various controls 19, 19a, 20, 21 of the wand are programmed to have different, but usually analogous, functions for each particular tool. The operator can select a particular tool by pressing a particular footswitch, a switch on the wand 2, or a switch at another location. All available tools, and the selected tool, may be presented as icons on the operator viewer 8, through the three dimensional eyepieces 9, an example of which is illustrated in FIG. 10 as detail 10h. For example, the selected tool might be bolded, as the forceps icon 26b is bolded for the left hand wand 2 in the detail 10h, while the wrench tool icon 27b is bolded for the right hand. Once a tool is selected by the operator, the other various controls 19, 19a, 20, 21 and other controls would be assigned to various analogous functions. The operator might call up on the viewer 8 a summary of which controls on the wand relate to which actions of the tools 15b, 15c, 15d, or other applicable tools or actions. All icons may be switched off by the operator to maximize the viewing area through the eyepieces 9.
[0168] Some embodiments also include means for reducing latency and accommodating the motion of the subject.
[0169] Further details of the embodiments will now be discussed with particular reference to the figures.
[0170] FIG. 1 illustrates an operator's hand 6 controlling the motion of the wand 2 within a sensor array 1 comprised of five rectangular segments forming an open-sided box. FIG. 1 also illustrates the narrow light beams 4 emanating from the light-emitting cluster 3 and projecting spots of light 5 on the light sensors on the inside of the sensor array 1. The light-emitting elements 3a that comprise the light-emitting cluster 3 are usually positioned such that the narrow beams of light 4 that they emit form a unique pattern, so as to aid in identifying the particular wand 2 that is being used. Various embodiments contain various numbers of light-emitting elements, depending upon the accuracy required and whether dedicated information carrying beams are used. Any shape of sensor array 1 can be utilized, and those illustrated in FIG. 1, FIG. 2, FIG. 6a and FIG. 6a1 are only intended to be examples of a large class of sensor array shapes, sizes and arrangements. The density of pixels or discrete sensors comprising the sensor array 1 will vary depending upon the use to which the robot is put.
[0171] FIG. 3 illustrates the three dimensional viewer 8, which includes two eyepieces 9 and feedback information 10 which is superimposed on the image of the work area. As illustrated in FIG. 4 and FIG. 5, the size and orientation of the vectors 10d and 10f, and the numerical force units 10e and 10g, can be computer generated to graphically report the changing forces acting on the working end of the robot's tool that corresponds to the wand that is being manipulated. In some embodiments, these vectors are three dimensional views, such that the vector position will correspond with the forces acting on the three dimensional view of the instruments, viewed through the viewer 8. The viewer 8 may superimpose feedback information on additional wands on top of the three dimensional view of the work area. These superimposed views may of course be resized, repositioned, and turned on and off by the operator. The view of the work area is captured by a three dimensional camera 15c, as illustrated in FIG. 6, which transmits the image information along transmitting means 11c to the computer 11 and viewer 8. The position of the camera, like that of any robot tool, may be controlled by a separate wand 13, such as that illustrated in FIG. 6, or be controlled by a multi-purpose wand, which changes its function and the tool it controls by a mode selecting control, such as switch 20, which is incorporated into the wand 2, as illustrated in FIG. 7. The camera may also be programmed to keep both tools 15b and 15d, or selected tools, in a single view. This automatic mode may be turned on or off by the operator, who may then select a wand controlling mode. The feedback reporting means may be presented in many ways, and that described is meant to be an example of similar feedback reporting means, all of which come within the ambit of the embodiments described herein.
[0172] In some embodiments the viewer 8 is attached to a boom support, so that it may be conveniently placed by the operator. Various preferred embodiments place the controls 11e on the console 11d, which is adjacent to the sensor array 1 and the wands 2, but they may also include foot switches 12, one of which is illustrated in FIG. 6. It can be readily appreciated that the computer 11 may be replaced with two or more computers, dividing its functions. For example, the sensor array 1, wands 2, one computer 11 and the viewer 8 may communicate at a significant distance with a second computer 11 and work site robot controller 15. This connection could be a wideband connection, which would allow the operator to conduct a procedure, such as an operation, from another city or country.
[0173] The wands 2 and 2b illustrated in FIGS. 7, 8, 9 and 12 are meant to be examples only, and other embodiments may have different shapes and controls and still be within the ambit of the embodiments described herein; for example, some embodiments may have a revolver shape. FIG. 7 illustrates the principal components of one embodiment. The wand 2 in FIG. 7 contains a rechargeable battery 17 to supply power to the various functions of the wand 2. The terminals 17a extend beyond the wand and provide contacts so that the wand may recharge when placed in a docking station, which may accommodate the other wands when not in use. Transmission means 17b provides power to the controller/encoder 18 from the battery 17. Controls 19, 20 and 20a are meant to be illustrative of control means, to switch modes of operation, such as from a cauterizing scalpel to a camera or forceps, and/or to vary the heat of the cauterizer or the force applied to the forceps grippers, to name just a few examples. In those cases where the robot arms are snake-like, these controls 19, 20 and 20a, or similar controls, may control the radius and location of turns of one or more of the robot's arms. In FIG. 7, transmission means 19a connects the lever control 19 to the controller/encoder 18; transmission means 20b connects the controllers 20 and 20a to the controller/encoder 18.
[0174] The controller/encoder 18 in some embodiments pulses one or more of the light emitters 3a to pass control information on to the computer, via the sensor array 1, as mentioned above. Transmission means 3b connects the emitters to the controller/encoder 18. The light-emitting array 3 may contain discrete emitters; they may also be lenses or optical fibers that merely channel the light from another common source, for example a single light-emitting diode or laser. Other wireless means may be included in the wand 2, which require an aerial 16a communicating with the aerial 16 in communication with the computer 11, as illustrated in FIG. 6.
[0175] While the wands illustrated are wireless, it should be understood that various embodiments may have wired connections to the computer 11 and/or to a power source, depending upon their use, and these embodiments come within the ambit of the invention. In some embodiments, such as those in which the wand 2 is connected directly to the computer 11, the controller/encoder 18 and all or parts of its function are incorporated into the computer 11.
[0176] FIG. 8 illustrates a faux set of forceps 2, which gives the operator or surgeon the feel of the forceps he may use later in the same procedure, or on another day when the robot is not available or suitable for the operation. FIG. 8 is meant to be illustrative of designing the wand to resemble instruments or tools that would otherwise be used in a manual procedure. This allows the skills learned using these devices to be used when controlling a robot, and reduces dramatically the learning time required to use the robot effectively. While embodiments may include wands of many shapes and configurations, those that resemble in function or appearance the tools or instruments that are normally used are particularly useful in those situations where the operator must carry out similar procedures both manually and by robot.
[0177] FIG. 8 illustrates a faux forceps wand 2 which has two finger holes 21, one of which pivots at the controller/feedback device 21b. The device 21b detects motion of the movable finger hole 21, which is transmitted by transmission means 21d to the controller/encoder 18, which then transmits the motion wirelessly, or directly, to the computer 11, or encodes pulses by modulating the output of the light emitters 3a, the light beams produced transmitting the motion and position of the movable finger hole 21 to the sensor array, and subsequently to the computer 11. FIG. 8 also illustrates an alternative method of detecting and transmitting changes in the position of the various control elements on the wand 2. Emitter(s) 3a may be placed on the movable elements, such as the finger hole 21. The projected light 4 that is incident on the sensor array 1 or surface 1b may then be used by the computer 11 to determine the position of the moving element, such as the finger hole 21 illustrated in FIG. 8, as it moves. This method of detecting and reporting the movement of control elements may be used for any such elements contained in various embodiments of the invention. For diagrammatic simplicity, the connection from the light emitter 3a on the finger hole 21 to the controller/encoder 18 has not been shown.
[0178] The controller/feedback device 21b may also receive instructions, wirelessly or by direct connection, from the computer 11, which directs the magnitude and direction of haptic feedback forces on the pivoting action of the movable finger hole 21. These haptic feedback forces can be passive or active, depending upon the design of the controller/feedback device. In some embodiments, no haptic feedback component is incorporated into the controller/feedback device, and in these embodiments the controller/feedback device 21b merely transmits motion and position data of the movable finger hole 21 to the computer 11, via the sensor array, wirelessly, or directly.
[0179] FIG. 8 also illustrates a notional end 4a for the wand 2 which the operator sets at the console 11d to allow for sufficient room between the ends of the wands 2, when the tools are in close proximity.
[0180] FIG. 8a, and detail drawings 8b and 8c, illustrate a wand 2b similar to that of FIG. 7, but instead of multiple fixed emitters 3a there are one or more emitters 3a, the beam(s) 4 of which are redirected by mirror(s) 3d or another beam redirecting device. In this embodiment, the controller/encoder 18 directs each mirror 3d in the mirror array 3e, housed in a transparent housing 3f and secured to it by rod supports 3g, to redirect part or all of the beam 4 produced by the emitter 3a. As illustrated in FIG. 8b, the controller/encoder 18 and/or the computer 11 selects each mirror 3d1 and varies its angle relative to the mirror array 3e (one at a time or in groups) and, with other mirrors in the array, directs the beam(s) in a programmed sequence, noting the angle of the projected beam relative to the wand 2b and simultaneously comparing this to the point(s) 5 detected on the surface 1b, and by mathematical means, including trigonometric methods, defining for every selected pair, at that point in time, the position of the wand relative to the surface 1b (or the sensor array 1, in those embodiments where a sensor array is used to detect the spot 5). Embodiments include all means of redirecting the beam 4, including solid state electronic mirror arrays, such as those developed by Texas Instruments Corp., and mechanical or other optical redirecting devices well known to the art. The solid state mirror arrays developed by Texas Instruments Corp. may incorporate any number of mirrors, up to many thousands, each of them, or groups of them, controlled by electronic means. This system is one of a larger class known as microelectromechanical systems (MEMS). Because the beam can be programmed to quickly produce multiple pair inputs at various angles, for mathematical comparison, as described above, the controller/encoder 18 and/or computer 11 can calculate the position of the wand 2 in three dimensional space at each point in time. The beam may be directed in various patterns, and the pattern may be adapted so as to maximize the coverage on the sensor array 1 or surface 1b and to minimize or eliminate the occasions in which the beam would fall incident outside the perimeter of either the sensor array 1 or the surface 1b.
[0181] Other embodiments, such as that illustrated in FIG. 8a, may include a motor or motive device rotating a mirror or prism, in place of the mirror array 3e, which redirects the beam 4 and, for example, may project an ellipse (when stationary; open curves when the wand 2 is in motion) or another set of curves on the sensor array 1 or surface 1b. In such a case, at every point in time the controller/encoder 18 and/or computer 11 can calculate the position of the wand 2, as at each point in time the angle of the emitted beam relative to the wand is known and matched with its paired spot 5 projected on the sensor array 1 or surface 1b at that same point in time. Obviously, the rate of rotation must be sufficient so that every motion of the wand 2 is captured by the sensor array 1 or camera 3c. Since the controller/encoder 18 and/or the computer 11 direct the mirrors in the mirror array, and control at every point in time the angle at which each mirror is elevated from the surface of the mirror array 3e, the angle at which the beam 4 is redirected relative to the wand 2 is known, speeding the mathematical calculation described above. As illustrated in FIG. 8c, any number of beams may be actuated at the same time, some being pulsed or panned about, while others may stay on, and may be fixed or set at various angles. For example, FIG. 8c illustrates how mirrors 3d2 and 3d3 may be elevated at different angles, producing divergent beams 4 with a known angle between them. Also, by way of further example, in an embodiment in which the wand 2b incorporates a camera(s), which may be located on various parts of the wand or some other convenient location, some beams may stay on so that the camera 3c can record the surface patterns, which assist in locating the position of the wand 2 in three dimensional space relative to the surface 1b.
[0182] In other embodiments, as illustrated in FIG. 8d, shapes such as circles or ellipses are projected on the sensor array 1 or surface 1b by optical means, such that the changing shapes define the orientation and position of the wand 2b. For example, a single light emitter 3a may include a lens (or lenses), or other optical device, which converts the light beam into a cone, which may project a ring of light, or a field of light having the same outside boundary as the ring type (herein called a filled ring), onto the sensor array 1 or surface 1b. In most embodiments a ring (not filled) is preferred, as the amount of data that requires processing is reduced; however, a filled ring or field may be used in some embodiments. The three dimensional orientation and position of the wand 2, 2b may be calculated by comparing the projected shape and the shape that is detected on the sensor array 1 or surface 1b, by various mathematical means well known to the art, such as projective geometry and trigonometry. For example, a light emitter 3a and dispersing lens which projects a circle onto the sensor array 1 or surface 1b when the longitudinal axis of the wand 2 is normal to the said sensor array 1 or surface 1b may project an ellipse or other conic section when tilted off the normal. The computer can use this change in shape to calculate the orientation and position of the wand 2 with respect to the said sensor array 1 or surface 1b. It can be readily appreciated that the shapes 5c illustrated in FIG. 8d are in fact equivalent to a string of points 5 as illustrated in FIG. 1 and FIG. 6a.
[0183] The advantage is that a single emitter 3a, including a dispersing lens(s), may be used rather than a series of emitters 3a. The other advantage is that there is greater redundancy. On the other hand, a few discrete points of light 5 require far less computation than many points, and where speed of movement is important, a few points of light are preferable. The embodiment illustrated in FIG. 8d may be used with a sensor array 1, in which the projected shape 5c, comprised of spots of light 5, is sensed and reported to the computer 11, or with an arrangement in which a camera 3c, on the wand 2 or remote from it, is used to record the projected shapes 5c. As illustrated in FIG. 8d, where a camera 3c is used for detection, in addition to those means described above for determining the position of the wand 2b, a coded grid 1c may be applied to the surface of surface 1b. The grid may be coded in a similar way to a bar code, such that the position of the shape 5c or points 5 can be viewed by the camera 3c and their absolute position on the surface can be reported by the camera to the computer 11, to calculate the orientation and position of the wand 2b in three dimensional space. As illustrated in FIG. 8d, the bar code grid may be formed from two bar coded patterns superimposed at right angles. Any spot on the surface 1b will then have a unique address, defined by the adjacent group of bars. The thickness of the bars and their relative separation from each other may be arranged to encode locational information, by means well known to the art. Since the computer 11 has the same grid in memory, it can make a simple pattern match, or use another method well known to the art, to determine the location of each point of light that forms the shape 5c, or for that matter any spot 5 on which other embodiments of the invention rely, such as those illustrated in FIG. 6a and FIG. 6a1. At any point on the surface there will be a unique address defined by the two patterns immediately adjacent to the spots 5 and shapes 5c. These patterns form the nearest address to each point at which the spots 5 and shapes 5c are incident. Since the computer has the grid stored in memory, it can then refine the position of each of the incident spots 5 and shapes 5c by noting the displacement of the said spots and shapes from the nearest bars, the exact positions of which are in the computer memory. Some spots 5 and shapes 5c may by happenstance fall on the intersection of two bars, in which event the displacement calculation may not be necessary. It should be appreciated that while reference has been made to a bar code type of indexing system, other encoding schemes may be used in other embodiments and be within the ambit of the embodiments described herein.
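As an illustrative sketch of the grid lookup (the encoding, window length and lookup table are hypothetical stand-ins for whatever scheme the grid 1c actually uses):

```python
# Each run of 4 consecutive bar widths along one axis is assumed unique, so
# the bars nearest a detected spot give a coarse absolute coordinate; the
# spot's measured offset from the last bar then refines it. The same lookup
# would be repeated with a second table for the perpendicular pattern (y).
X_CODE = {(2, 1, 3, 1): 120.0, (1, 3, 2, 2): 136.0}   # widths -> mm along x

def absolute_x(bar_widths, offset_mm):
    base = X_CODE.get(tuple(bar_widths))
    if base is None:
        return None              # pattern not found in the stored grid
    return base + offset_mm      # refine by displacement from the nearest bar

print(absolute_x([2, 1, 3, 1], 0.7))   # -> 120.7
```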
[0184] FIG. 9 illustrates a wand 2 that includes a sliding finger control 19a with an associated controller/feedback device 19c, which functions in a similar manner to the movable finger hole 21, except that the sliding finger control 19a provides a convenient means of conveying linear motion to the robot tools. In the example illustrated in FIGS. 9 and 11, when the sliding finger control 19a is moved to position 19b, a distance of 19d, the controller/feedback device instructs the computer 11 to cause the tool, in this example 26, to move a given distance 19d in a similar linear direction, as assumed by 26a in FIG. 11. As mentioned above, the operator may set the ratio between the motion of the sliding finger control 19a and the consequent motion of the tool; thus these distances may differ, even though they remain relative. Simultaneously, the operator may squeeze the finger hole control 21 to position 21c, a displacement of 21d, to instruct the fingers of tool 26 to close a distance of 21d and assume the configuration of 26a in FIG. 11. As referred to above, haptic feedback may be provided by the controller/feedback device 21b by the means described above.
[0185] FIG. 10 illustrates the operator viewer 8 while the tool 26 is being manipulated, as illustrated in FIGS. 9 and 11. In this example the operator is manipulating a wand 2 in his left hand. The left tool icon display 10h has bolded tool icon 26b, which indicates that the operator has chosen tool 26 to be controlled by his left wand, such as that illustrated in FIG. 9. The right tool icon display 10h has bolded tool icon 27b, which indicates that the operator has chosen tool 27, as illustrated in FIGS. 12, 13 and 14, to be controlled by his right wand 2, such as that illustrated in FIG. 9.
[0186] FIGS. 12, 13 and 14 illustrate that rotary motion at the tools can be controlled from a wand, such as that illustrated in FIGS. 9 and 12. In this example of the invention, the movable finger hole control 21 can be squeezed by the operator, displacing it a distance of 21d to position 21c, which causes the tool 27 to close a distance of 21d, gripping the bolt head 29 and assuming configuration 27a, as illustrated in FIG. 13. Simultaneously, the operator moves the finger slider control 19b a distance of 19d, to assume position 19a, to move the tool forward a distance of 19d toward the bolt head 29, as illustrated in FIG. 13. The operator may then choose to rotate the bolt head by rotating the roller control 20 a distance and direction 20b, to move the tool in direction and distance 20b, to assume position 27c. The controller/feedback device 20c senses the motion and position of the roller control 20, and may impart haptic feedback in a similar manner as described above in relation to the finger hole control 21.
[0187] While the disclosure and examples of the invention above are in the context of a guiding device that is controlled by the operator's hands, and describe the attitude and position of the wand 2, 2b in three dimensional space, it should be understood that the guiding device may be used to describe the relative motion of a surface, where the wand or guiding device is fixed or its position is otherwise known. For example, FIG. 15 and FIG. 16 illustrate the movement of the surface 14d1, 14d2 of the heart as it beats. In this case the components of the wand 2, 2b are incorporated into the distal end of the camera tool 15c, although they may be incorporated into any other tool as well, and come within the ambit of the invention. The emitter cluster 3 and emitters 3a may be seen in greater detail in FIG. 18. It should be noted that this example of the emitter cluster 3, which uses any number of emitters 3a, can be replaced with any of the other types of emitter clusters, including mirror arrays or articulating mirrors and prisms, referred to above. The angles between the beams 4, including T1, T2, and T3, and the angles between the beams 4 and the tool 15c, as illustrated in FIG. 18, are known to the computer 11 in calculating the surface topology 14d1 and 14d2. As illustrated in FIG. 17, the stereo camera 3c and/or 3c2 records the spots 5a and 5b projected on the surface of the heart 14d1, 14d2. It can readily be appreciated that as the heart beats, the surface 14d1 and 14d2 moves up and down, and the spots projected on the surfaces, including 5a and 5b, change their distance from their neighbors 5a and 5b on their respective surfaces. This distance change, along with the angle of each beam, is recorded by the camera or cameras 3c and/or 3c2, and this information is processed by the computer 11, which computes the distance of those parts of the surface from the distal end of the camera tool 15c, using trigonometric and other mathematical methods, well known to the art. It should be noted that this information also provides the distance between the surface and any other tool, such as 15b and 15d, as illustrated in FIG. 15 and FIG. 16, as the relative position of the tools is known from positional sensors incorporated into the said tools. The more spots 5 (in this illustration referred to as 5a and 5b to denote their change in position) that are projected at any given time, the greater will be the definition of the changing topology of the surface and its distance from the distal ends of the tools 15b, 15c and 15d, and any other tools that may be used. Various shapes or patterns, such as grid patterns, may be projected onto the surface of the heart by various optical means, herein described or well known to the art. These shapes or patterns may be considered as strings of spots 5, 5a and 5b.
[0188] As the heart beats, and the distance between the distal ends of the tools and the heart surface 14d1 and 14d2 varies, the computer can instruct the tool arms to vary their length to keep the distance between the surface and the distal end of the tools constant (assuming the operator has not instructed any change in tool position). In the example illustrated in FIG. 15 and FIG. 16, the arms are telescoping; for example, the arm 15c, the camera arm, has a distal shaft which can slide in and out of the main arm 15d. In FIG. 15 the distal shaft 15c1 is relatively extended, so that it is located in an ideal position to view the distal ends of the other tool shafts, 15b1 and 15d1, which are positioned, in this example, immediately above the surface 14d1 of the heart. As the surface of the heart moves up, as illustrated in FIG. 16 and FIG. 17, the movement is detected by the changing lateral separation between neighboring dots, such as dots 5a and 5b, on their respective surfaces. The computer may use this information, applying trigonometric calculations and other mathematical techniques well known to the art, to direct the arms to move up sufficiently to keep the distal ends of the tools, 15b2, 15c2 and 15d2, at the same relative distance to the heart surface 14d2. As can be appreciated, this dynamic adjustment of the tool arm length can effectively compensate for the motion of the beating heart, allowing the operator to control the other tool motions (which overlay the compensating motions) that actually do the work, just as if the heart were stationary. As mentioned above, lateral movements of the heart surface 14d1 and 14d2 can also be compensated for by using texture and pattern recognition methods applied to the surface that is illuminated by the spots 5a, 5b and 5 (in addition to areas not so illuminated). For this purpose, the spots 5 may be made considerably larger to incorporate more textural or pattern information. The vertical and lateral means of characterizing the motions of the heart surface can then be integrated by the computer 11 so that any motion of the heart surface can be fully compensated for, effectively freezing the heart motion to allow for precise manipulation of the tools, for example, to cut and suture the heart tissue. The integration of this information will also provide information on the bending, expansion and contraction of the surface, in addition to (in this example) the changes in elevation of the surface. Fortunately, as the surface that is being worked on by the surgeon is small, this additional characterization (i.e., bending, expansion and contraction) is most often not required. It should be noted that as the camera tool 15c is making compensating motions, the operator's view of the heart surface will remain the same, i.e., the heart will appear to virtually stop, and any more complex movements, i.e., stretching, shrinking and localized motions, may be compensated by software manipulating the image, by means well known to the art. Similarly, rather than the camera tool 15c making compensating motions, the image presented to the operator can, by optical and electronic means, be manipulated to give the same effect. For example, in some embodiments of the invention, the camera lens may be zoomed back as the surface of the heart advances toward it, giving the effect of an approximately stationary surface. The operator may of course choose to override any or some compensating features of the system.
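A minimal sketch of the compensating loop follows, assuming a simple proportional controller; the patent does not specify a control law, and the function name, gain, and example numbers are invented for illustration.

```python
def compensate_extension(current_extension_mm: float, measured_range_mm: float,
                         target_standoff_mm: float, gain: float = 0.8) -> float:
    """One step of a proportional controller for the telescoping shaft 15c1:
    extend when the surface recedes, retract when it rises, so the distal
    end stays near the target standoff. Gain < 1 damps overshoot."""
    error = measured_range_mm - target_standoff_mm
    return current_extension_mm + gain * error

# Example: the surface rises (measured range drops from 18 mm to 15 mm against
# an 18 mm target), so the shaft retracts on successive control steps.
ext = 40.0
for rng in (18.0, 15.0, 15.5):
    ext = compensate_extension(ext, rng, 18.0)
    print(f"extension: {ext:.2f} mm")
```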
The operator may also choose to select the area of the surface of the heart or other body for which motion compensation is required. This may involve selecting a tool, such as the sensor cluster 3, with varying emitter 3a angles, or instructing the computer to compute only those changes within a designated patch, which might be projected on the operator viewer 8. In most cases the area of relevant motion will be small, as the actual surgical work space is usually small. The operator, or the system, may periodically scan the surface to define its curvature, especially at the beginning of a procedure.
[0189] The stereo cameras 3c1 and 3c2 may also provide distance information, using parallax information and trigonometric and standard mathematical methods well known to the art of range finding. Other optical methods of distance determination, such as those used in auto-focusing cameras and medical imaging, and well known to the art, may be used as well, such as Doppler detection and interferometry, and these are within the ambit of the invention. The information acquired by all these methods may be used to supplement or backstop the other distance information, which is acquired by the methods described above and integrated by the computer 11. It should be noted that embodiments that use one or more of these methods are within the ambit of the embodiments described herein.
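For the parallax method, the standard range-finder relation can be written as a short function. The focal length, baseline, and disparity names are generic stereo-vision terms, not taken from the patent, and the numbers are invented.

```python
def stereo_depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Classic stereo range finding, usable as a backstop to the projected-spot
    method: depth Z = f * B / d, where f is the focal length in pixels, B the
    baseline between cameras 3c1 and 3c2, and d the disparity of a spot
    between the two images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Example: f = 800 px, 4 mm baseline, 64 px disparity -> 50 mm depth.
print(stereo_depth_mm(800.0, 4.0, 64.0))
```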
[0190] In some embodiments, the computer 11 may receive information from the electrocardiogram (ECG) 14c, which has sensors 14e on the patient's abdomen and which indicates that an electrical pulse has been detected, which will result in a muscular response of the heart tissue, and hence a change in the shape and the position of the heart surface. The time delay between receiving the electrical triggering pulse and the actual resulting muscular activity of the heart, even though small, allows the system to anticipate the motion and better provide compensating motions of the length and attitude of the robot's tools 15b, 15c, and 15d. The system software can compare the electrical impulses, as detected by the ECG, with the resultant changes in the shape and position of the heart wall, as observed by the methods described above, to model the optimum tool motion required to virtually freeze the heart motion. In combination with the methods of motion compensation described above, the inclusion of the ECG initiating information generally allows for a smoother response of the tools to the motion of the surface to which they are accommodating.
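A rough sketch of how the ECG lead time might be exploited, assuming a learn-and-replay scheme that the patent does not spell out; the class and method names are invented, and a real system would fit a smooth per-patient model rather than a nearest-sample lookup.

```python
import time

class EcgFeedForward:
    """Learn the displacement profile that follows each ECG trigger, then
    replay it as a feed-forward term when the next trigger arrives, ahead
    of what optical tracking alone would report."""

    def __init__(self):
        self.profile = []           # (seconds after trigger, displacement) samples
        self.trigger_time = None

    def on_ecg_trigger(self):
        """Called when the ECG 14c detects the electrical pulse that precedes
        mechanical contraction."""
        self.trigger_time = time.monotonic()

    def record(self, displacement_mm: float):
        """Store the observed surface displacement so later beats can be predicted."""
        if self.trigger_time is not None:
            self.profile.append((time.monotonic() - self.trigger_time,
                                 displacement_mm))

    def predict(self, dt_since_trigger: float) -> float:
        """Return the learned displacement nearest the elapsed time since trigger."""
        if not self.profile:
            return 0.0
        return min(self.profile, key=lambda p: abs(p[0] - dt_since_trigger))[1]
```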
[0191] It can be readily appreciated that the system herein described allows many surgical procedures to be conducted without resort to a heart lung machine or to other heart restraining devices, all of which can have serious side effects.
[0192] It should be readily appreciated that embodiments that compensate for the motion of bodies being manipulated, whether fine grained or coarse grained (as chosen by the operator), inherently reduce the effects of latency between the operator's instructions and the motion of the tools which he guides. This effective reduction or elimination of latency, which increases with distance, means that telesurgery over great distances becomes more practical. The system's software distinguishes between operator-generated motion, such as the lifting of a tissue flap, and non-operational motion, such as the beating of the heart. Generally, the former is much finer grained and the latter larger grained. For example, the software may set the compensating routines to ignore small areas of motion where the procedure is being executed, such as the suturing of a flap, but compensate for grosser motions, such as the beating of the heart, which causes a large surface of the heart to move. The design of this software, and the relative sizes and locations of the bodies to which the compensation routine responds or which it ignores, will depend upon the particular procedure for which the system is being utilized.
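The fine/coarse distinction could be implemented, for example, as a simple area heuristic like the sketch below; the thresholds, names, and three-way outcome are illustrative assumptions rather than the patent's specification.

```python
def classify_motion(region_area_mm2: float, roi_area_mm2: float,
                    gross_threshold_mm2: float = 400.0) -> str:
    """Motion confined to the small working area (e.g. a flap being sutured)
    is treated as operator-generated and left alone; motion spanning a large
    surface (the beating heart) is compensated. Threshold values are
    illustrative only."""
    if region_area_mm2 <= roi_area_mm2:
        return "ignore (operator-generated, fine grained)"
    if region_area_mm2 >= gross_threshold_mm2:
        return "compensate (non-operational, gross motion)"
    return "monitor"

print(classify_motion(25.0, 100.0))    # small patch near the tool
print(classify_motion(900.0, 100.0))   # whole-surface cardiac motion
```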
[0193] FIG. 21 illustrates an embodiment which includes additional means to overcome temporal latency between the operator's instructions and the actual tool movements, any of which may be used separately or in combination with the others. FIG. 21 illustrates the operator's view of the worksite as viewed through the viewer 8 and eyepieces 9, showing the superimposed tool cursors 15d3 and 15b3 which indicate the operator's intended position of the tools at the worksite. These are not ordinary cursors: they show the exact intended position of the working edges of the tools they control. FIG. 21 also illustrates that the operator sees the latest reported actual position of the tools 15d2 and 15b2 at the worksite, the difference between the two being due to temporal latency. The tool cursors 15d3 and 15b3 can be electronically superimposed onto the operator's view to show the intended position, while 15d2 and 15b2 show the most recently reported actual position. In most preferred embodiments the cursors are rendered in 3-D and change perspective to conform to the 3-D view of the worksite; they are simple outlines, so as not to be confused with the images of the actual tools, and may be manually turned on and off, or automatically presented when the system detects that latency has exceeded a preset threshold. The intended tool position cursors 15d3 and 15b3 may also change color or markings to indicate the depth to which they have passed into the tissue, as indicated at 15d4 in FIG. 21. The cursors 15d3 and 15b3 may also change color or markings in response to forces acting on the actual tools 15d2 and 15b2, so as to prevent the operator from exceeding a safe threshold for the particular substrate he is manipulating.
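A small sketch of how such a cursor's styling might be chosen. The depth and force limits below are invented placeholders; the text requires only that the outline's color or markings change with penetration depth (15d4) and with force on the actual tool.

```python
def cursor_style(depth_mm: float, force_n: float,
                 depth_limit_mm: float = 3.0, force_limit_n: float = 1.5) -> dict:
    """Pick the rendering of an intended-position cursor (15d3/15b3): always a
    simple outline, with color escalating as depth or force approach limits.
    All limit values here are invented for illustration."""
    color = "green"
    if depth_mm > depth_limit_mm or force_n > 0.8 * force_limit_n:
        color = "amber"
    if force_n > force_limit_n:
        color = "red"
    return {"outline_only": True, "color": color,
            "depth_marking_mm": round(depth_mm, 1)}

print(cursor_style(depth_mm=2.0, force_n=0.4))
print(cursor_style(depth_mm=4.0, force_n=1.6))
```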
[0194] FIGS. 29a to 29e illustrate an example method of limiting the effects of latency in the transmission of tool instructions and of movement of the body relative to the position of the tools at the remote worksite. Each video image at the worksite, FIG. 29b, is recorded, time coded, and transmitted to the operating theatre, along with the time code for each video frame. The operator at the operating theatre then sees the video frame, FIG. 29a, and causes the tool 15d2 to advance along the incision 14a, which he views as an icon 15d3 in FIG. 29c, the displacement between 15d3 and 15d2 being the measure of latency. The positions of the cursors, that is, the intended tool positions, are transmitted to the remote worksite along with the corresponding frame time-code of the operator's video frame at each time step. In most embodiments of the invention, the time-code is originally encoded onto the video stream at the remote worksite by the remote worksite robot controller 15, which also saves in memory the corresponding video frame(s). As a separate process, and at each time step at the remote worksite, the position of the tools is adjusted to accommodate their intended position relative to the changing position of the body, as described above, which is illustrated as the accommodation of tool position 45 in FIG. 29d, and this becomes the real time image for the comparison to follow. Upon receiving each instruction from the operator, the worksite controller 15 then retrieves from memory the corresponding video frame and notes the intended machine instruction relative to it. It then compares this frame, FIG. 29b, retrieved from memory, with the real time image at the remote worksite, FIG. 29d, and carries out the intended machine instruction embedded in FIG. 29c, resulting in the performance of the intended instruction as illustrated in FIG. 29e. This comparison may be accomplished by pattern recognition methods well known to the art, which note the relative location of such features as protruding veins and arteries and other visible features. In some embodiments, markers suitable for optical marker recognition 40 are placed on, or detachably attached to, the operation surface, such as the heart 14d, to assist in tracking movements of the worksite. While the normalization process, including pattern recognition and the other means noted above, imposes a system overhead on computations, the area that is monitored and the precision of monitoring can be adjusted by the operator. The area immediately adjacent to the present tool position can have, for example, fine grained monitoring and normalization, whereas more peripheral areas can have, for example, coarser grained treatment.
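The time-code scheme might be organized as in the sketch below. This is an assumption-laden outline, not controller 15's actual implementation: it assumes a bounded frame store and a pluggable registration callback standing in for the pattern-recognition or marker-tracking step, and all names are invented.

```python
from collections import OrderedDict

class WorksiteController:
    """Sketch of the time-code scheme of FIGS. 29a-29e: frames are stored by
    time-code; each operator instruction arrives tagged with the time-code of
    the frame it was issued against; the controller registers that old frame
    to the live image and shifts the intended tool position accordingly."""

    def __init__(self, capacity: int = 256):
        self.capacity = capacity
        self.frames = OrderedDict()       # time-code -> stored video frame

    def store_frame(self, timecode: int, frame) -> None:
        """Save each outgoing frame so instructions can be matched to it later."""
        self.frames[timecode] = frame
        if len(self.frames) > self.capacity:
            self.frames.popitem(last=False)   # drop the oldest frame

    def execute(self, timecode: int, intended_xy, live_frame, register):
        """register(old_frame, new_frame) -> (dx, dy) image offset, e.g. via
        phase correlation or tracking of markers 40. Carries out the
        instruction at the position the operator meant, shifted by however
        far the worksite has moved since that frame was captured."""
        old_frame = self.frames[timecode]
        dx, dy = register(old_frame, live_frame)
        return (intended_xy[0] + dx, intended_xy[1] + dy)
```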
[0195] As illustrated in FIG. 21 and FIG. 29c, the operator's intended movement of the tools, as illustrated to him by cursors 15b3 and 15d3, may diverge from the actual tools that he views, 15b2 and 15d2, the difference being the latency between the two. The operator will immediately know the degree to which latency is occurring, and he may choose to slow his movements to allow the actual tools 15b2 and 15d2 to catch up. In some embodiments the system stops in the event a preset latency threshold is exceeded. It is important to note that the operator, when he stops the tool, will know where it will stop at the worksite. For example, in FIG. 21 the operator is making an incision which must stop before it transects artery 38. Even though the tool 15d2 will continue to move forward, it will stop when it meets the intended tool position indicated by cursor 15d3, just short of the artery 38. While this disclosure has described tools resembling a scalpel and forceps and their corresponding cursors, it should be understood that these are merely examples of a large class of embodiments, which include all manner of tools and instruments and their corresponding cursors, and all are within the ambit of this invention.
[0196] FIG. 19 and FIG. 20 illustrate two exemplar passive haptic feedback modules that can be incorporated into the controller/feedback controllers in the wand 2, such as 19c, 20c and 21b. Other haptic feedback devices, well known to the art, whether active or passive, may be incorporated into the controller/feedback controller, and all such systems are within the ambit of the invention.
[0197] FIG. 19 illustrates a typical passive haptic feedback device 30 in which the flow of an electrorheological or magnetorheological fluid is controlled by an electrical or magnetic field between elements 36, which can be electrodes or magnetic coils. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31. The piston is connected to, and transmits motion and forces between, the various control input devices on the wand 2, for example, the movable finger hole 21, the finger slider control 19b and the roller control 20. The total displacement of the piston may, for example, be the same as the displacement 19d of the finger slider control 19b, or may vary depending upon the mechanical linkage connecting the two. The working fluid flows 35 between each side of the piston 31a through a bypass conduit 32, where its flow may be restricted or eased by varying the electrical or magnetic field imposed on the electrorheological or magnetorheological fluid. The controller/encoder modulates the electrical energy transmitted by transmitting means 34a to the electrodes or coils 36. In other passive haptic feedback devices a simple electromechanical valve 37 controls the flow 35 of the working fluid, which may for example be saline or glycerin, as illustrated in FIG. 20. The controller/encoder modulates the electrical energy transmitted to the electromechanical valve 37 by transmitting means 37a, as illustrated in FIG. 20.
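One plausible (and assumed) mapping from sensed tool force to the drive level applied to the electrodes, coils 36, or valve 37 is sketched below; the linear relationship, limits, and names are invented for illustration, since the text specifies only that the field or valve restricts the piston's motion.

```python
def damping_command(tool_force_n: float, max_force_n: float = 5.0,
                    max_field_v: float = 24.0) -> float:
    """Map the force sensed at the actual tool to a drive voltage: the harder
    the tissue resists, the stronger the field, the slower piston 31a moves,
    and the heavier the control feels to the operator."""
    fraction = max(0.0, min(tool_force_n / max_force_n, 1.0))
    return fraction * max_field_v

print(damping_command(1.0))   # light resistance -> 4.8 V
print(damping_command(7.5))   # clamped at the 24 V maximum
```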
[0198] In both the haptic feedback devices 30 illustrated in FIGS. 19 and 20, a motion and position sensor 33, transmits information on the motion and position of the piston 31a by transmission means 34 to the controller/encoder 18. The controller/encoder 18 receives instructions wirelessly 16a, or directly from the computer, and sends motion and position information received from the motion and position sensor 33 to the computer.
[0199] FIG. 22 illustrates a wand 2b which may be attached to any body part, tool, or other object by means of connectors 42 and 42a, which have complementary indexing means 42c and 42b to ensure their proper alignment. By such means, and similar connecting means well known to the art, these wands 2b may be placed on a body part, such as the surface of the heart 14d1, to project the beams 4 to a sensor array 1 or surface 1b (not shown) and thereby establish the orientation and position of the heart as it moves. Similarly, a wand 2b may be connected to any object to determine its position and orientation in space, together with the means hereinbefore described, in cooperation with computer 11.
[0200] FIG. 23 illustrates how multiple wands 2i, 2ii may be used in combination to provide accurate alignment between two or more objects in space. In this example, FIG. 23, one wand 2i is connected to a drill 44. The other wand 2ii is connected to a bone nail 45 with a slotted proximal end, for indexing position, which has a hidden hole 46 that will receive a bolt once a hole is drilled through the bone and the hidden hole 46 in direction 41. Since the position and orientation of the hidden hole 46 relative to the end of the bone nail connected to the wand 2ii is known, the operator can drill a hole along an appropriate path, which is provided by computer 11 calculating the appropriate path and graphically illustrating the proper path with a graphical overlay of the bone shown on viewer 8. The position of the wands 2i and 2ii in space is determined by those means hereinbefore described. While FIG. 23 illustrates a single sensor array 1, it should be understood that any number of sensor arrays or surfaces 1b might be used, so long as their position and orientation are known to the computer 11, and in the case of a surface 1b, the camera 3c, which would be incorporated into the assembly as illustrated in FIG. 22, can identify each screen by means of identifying barcodes or other identifying marks. In FIG. 23, the sensor array 1 is above the operating space. FIG. 23 also illustrates two connectors 42a that are fixed to a calibrating table 43, which is calibrated in position to sensor array 1. This permits the wands 2i and 2ii to be connected to the said connectors 42a on calibrating table 43 to ensure accurate readings when ambient temperature changes might affect the relative angles of the beams 4, or the distance between emitters 3a. The computer 11 can recalibrate the position of the wands 2i and 2ii by noting the pattern of spots 5 that are projected onto the sensor array 1. While the example shown in FIG. 23 illustrates two wands 2i and 2ii, any number of wands may be used for purposes of comparing the position of objects to which they are connected, or changes in the position of those objects over time. For example, one wand might be connected to the end of a leg bone, while another might be attached to a prosthesis, and the two might be brought together in perfect alignment. Another example would be connecting a wand 2i to a probe of known length, and another wand 2ii to a patient's skull, in a predetermined orientation. The wand 2i could then be inserted into the brain of the patient and the exact endpoint of the probe could be determined. The wand 2i could also be attached to the tools 15b1, 15c1 and 15d1, as illustrated in FIG. 15, to ensure perfect positioning of the tools. For example, one tool might have a drill attached, such that the drill illustrated in FIG. 23 is controlled robotically and in coordination with the position of the bone nail 45 of FIG. 23. Due to modern manufacturing processes, the wands 2b illustrated in FIG. 22, the wand 2i illustrated in FIG. 23, and the sensor array assemblies 1d illustrated in FIG. 24 can be made to be very small, and may be placed as an array on objects such as cars, bridges or buildings to measure their stability over time. Others might be connected to the earth to measure seismic or local movements of the soil.
These wands 2b, 2i might also be connected to scanners to allow for the scanning of three dimensional objects, since the wands can provide the information as to the scanner's position in space; the scanning data can then be assembled into a virtual three dimensional output. Since the wands 2b and 2i may be put on any object, the uses for assembling objects are countless.
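The two-wand alignment of FIG. 23 reduces to composing rigid transforms. The sketch below assumes the nail wand's pose has already been solved from its projected spot pattern, and uses invented example numbers; the hole offset, axis, and function name are illustrative only.

```python
import numpy as np

def drill_axis_in_world(T_world_nail: np.ndarray, hole_in_nail: np.ndarray,
                        axis_in_nail: np.ndarray):
    """Given the pose of the bone-nail wand 2ii as a 4x4 homogeneous transform
    (determined from its projected spots 5 on sensor array 1), and the known
    offset and direction of the hidden hole 46 in the nail's own frame,
    return the hole position and drilling direction 41 in world coordinates.
    The computer 11 could then display this path as a graphical overlay on
    viewer 8 and check the drill wand 2i against it."""
    p = T_world_nail @ np.append(hole_in_nail, 1.0)      # transform the point
    d = T_world_nail[:3, :3] @ axis_in_nail              # rotate the direction
    return p[:3], d / np.linalg.norm(d)

# Example with an assumed nail pose 30 mm above the table, rotated 90 degrees
# about z; the hidden hole sits 80 mm along the nail's axis.
T = np.array([[0., -1., 0., 10.],
              [1.,  0., 0., 20.],
              [0.,  0., 1., 30.],
              [0.,  0., 0., 1.]])
pos, axis = drill_axis_in_world(T, np.array([0., 0., 80.]), np.array([0., 1., 0.]))
print(pos, axis)   # -> [ 10.  20. 110.] [-1.  0.  0.]
```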
[0201] While FIG. 23 illustrates a system in which the camera 3c is located in the wand 2, it should be understood that a surface 1b, as illustrated in FIG. 6a1, or a separate camera 3c could be used, as illustrated in FIG. 6a2, all of which can detect the position of the incident spots 5.
[0202] FIG. 24 illustrates a similar arrangement of wands 2i and 2ii to that illustrated in FIG. 23, but the wand 2ii is replaced with a sensor array assembly 1d. The sensor array assembly 1d uses a sensor array 1, which senses the positions 5 of the incident beams 4 and reports their coordinates by connection 11a to the controller/encoder 18, and then wirelessly to the computer 11 (not shown). This system provides the same positional information as the system illustrated in FIG. 23, except that the large sensor array of FIG. 23 has been replaced with a much smaller sensor in FIG. 24, making it more economical for certain purposes.
[0203] Referring to FIG. 25, a cross-sectional, perspective view illustrates two combination wand and sensor array assemblies 47 which have been daisy chained with two other combination units (not shown). Such arrays may also be combined with sensor arrays 1 or surfaces 1b for greater accuracy. Such arrays can be used to detect and report the relative movement of parts of structures, to which they are attached, such as bridges, ships and oil pipelines.
[0204] While embodiments have been described with respect to a system comprised of three tools 15b, 15c, and 15d, it is to be understood that any number of tools and any number of wands 2 may be used in such a system.
[0205] While embodiments have used examples of tools that a robot could manipulate, it is to be understood that any tool, object or body may be moved or directed by the methods and devices described by way of example herein, and all such embodiments are within the ambit of the embodiments herein.
[0206] While embodiments have been described as being used as a surgical robot, it is to be understood that this use is merely used as a convenient example of many uses to which the robot could be employed, all of which come within the ambit of the embodiments described herein.
[0207] While embodiments have been described as being used to manipulate tools, it is to be understood that the methods and devices described by example herein may be used to manipulate virtual, computer generated objects. For example, embodiments may be used for assembling and/or modeling physical processes, such as molecular modeling and fluid dynamics modeling to name just a few.
[0208] It is to be understood that modifications and variations to the embodiments described herein may be resorted to without departing from the spirit and scope of the invention, as those skilled in the art will readily understand. Such modifications and variations are considered to be within the purview and scope of the invention and the appended claims.
Claims
1. (canceled)
2. A hand controller for a robotic surgery system, the hand controller comprising: a body extending linearly between a proximal end and a distal end; a first control pivotally attached to the body, the first control configured to permit a user to control a first function of a surgical instrument; and a second control supported by a side of the body at a location between the proximal end and the distal end, the second control configured to linearly move in a direction parallel to a length of the body to permit a user to control a second function of the surgical instrument, the second function being different from the first function.
3. The hand controller of claim 2, wherein the first control comprises at least one lever.
4. The hand controller of claim 2, wherein the first control comprises an opening configured to receive a finger therethrough.
5. The hand controller of claim 3, wherein the first function of the surgical instrument comprises adjusting a distance between first and second jaws of an end effector of the surgical instrument.
6. The hand controller of claim 5, wherein the distance between the first and second jaws of the end effector of the surgical instrument is adjusted responsive to movement of the at least one lever.
7. The hand controller of claim 2, wherein the second function of the surgical instrument comprises a linear movement of an end effector of the surgical instrument.
8. The hand controller of claim 7, wherein: a linear movement of the second control causes the linear movement of an end effector.
9. The hand controller of claim 2, wherein the second control comprises a slidable control.
10. The hand controller of claim 2, wherein operation of one or both of the first control and the second control causes a haptic feedback to be provided to the user.
11. The hand controller of claim 10, wherein the first function of the surgical instrument comprises adjusting a distance between a first jaw and a second jaw of an end effector of the surgical instrument, and wherein the haptic feedback provided to the user comprises pinching feedback that simulates a force applied by the first jaw and the second jaw.
12. The hand controller of claim 2, further comprising a spring configured to bias the first control.
13. The hand controller of claim 12, wherein the spring is configured to bias the first control to a position in which the first function of the surgical instrument is not activated.
14. The hand controller of claim 13, wherein the position comprises an open position.
15. A hand controller for a robotic surgery system, the hand controller comprising: a body extending linearly between a proximal end and a distal end; a first control pivotally coupled to the body, a finger ring attached to the first control and configured to receive a finger of a user's hand therethrough, the first control configured to cause activation of a first function of a surgical instrument; and a second control supported by a side of the body at a location between the proximal end and the distal end, the second control configured to slide along the side of the body to cause activation of a second function of the surgical instrument, the second function being different from the first function.
16. The hand controller of claim 15, wherein the first control comprises a lever pivotally attached to the body.
17. The hand controller of claim 15, wherein the first function of the surgical instrument comprises adjusting a distance between first and second jaws of an end effector of the surgical instrument.
18. The hand controller of claim 17, further comprising a second finger ring on an opposite side of the body from the first control, the second finger ring configured to receive another finger of the user's hand therethrough.
19. The hand controller of claim 15, wherein operation of one or both of the first control and the second control causes a haptic feedback to be provided to the user.
20. The hand controller of claim 19, wherein the first function of the surgical instrument comprises adjusting a distance between a first jaw and a second jaw of an end effector of the surgical instrument, and wherein the haptic feedback provided to the user comprises pinching feedback that simulates a force applied by the first jaw and the second jaw.
21. A hand controller for a robotic surgery system, the hand controller comprising: a body extending linearly between a proximal end and a distal end; a first control pivotally attached to the body, the first control configured to permit a user to control a first function of a surgical instrument; and a second control supported by a side of the body at a location between the proximal end and the distal end, the second control configured to receive an input in a direction generally parallel to a length of the body to permit a user to control a second function of the surgical instrument, the second function being different from the first function.
HAND CONTROLLER FOR ROBOTIC SURGERY SYSTEM
DOCUMENT ID
US 20220401162 A1
DATE PUBLISHED
2022-12-22
INVENTOR INFORMATION
NAME: Unsworth; John D.
CITY: Hamilton
STATE: N/A
ZIP CODE: N/A
COUNTRY: CA
APPLICANT INFORMATION
NAME: Titan Medical Inc.
CITY: Toronto
STATE: N/A
ZIP CODE: N/A
COUNTRY: CA
AUTHORITY: N/A
TYPE: assignee
APPLICATION NO
17/813898
DATE FILED
2022-07-20
DOMESTIC PRIORITY (CONTINUITY DATA)
parent US continuation 16913809 20200626 PENDING child US 17813898
parent US continuation 16455192 20190627 parent-grant-document US 10695139 child US 16913809
parent US continuation 16160200 20181015 parent-grant-document US 10357319 child US 16455192
parent US continuation 15490098 20170418 parent-grant-document US 10130434 child US 16160200
parent US continuation 15211295 20160715 parent-grant-document US 9681922 child US 15490098
parent US continuation 14831045 20150820 parent-grant-document US 9421068 child US 15211295
parent US continuation 14302723 20140612 parent-grant-document US 9149339 child US 14831045
parent US continuation 12449779 20090825 parent-grant-document US 8792688 WO continuation PCT/CA2008/000392 20080229 child US 14302723
us-provisional-application US 60904187 20070301
us-provisional-application US 60921467 20070403
us-provisional-application US 60907723 20070413
us-provisional-application US 60933948 20070611
us-provisional-application US 60937987 20070702
us-provisional-application US 61001756 20071105
US CLASS CURRENT: 1/1
CPC CURRENT
TYPE   CPC                 DATE
CPCI   A61B 90/361         2016-02-01
CPCI   A61B 34/30          2016-02-01
CPCI   A61B 34/25          2016-02-01
CPCI   G06F 3/016          2013-01-01
CPCI   A61B 34/37          2016-02-01
CPCI   H05K 999/99         2013-01-01
CPCI   A61B 34/74          2016-02-01
CPCI   G06F 3/0308         2013-01-01
CPCI   A61B 34/20          2016-02-01
CPCI   A61B 34/76          2016-02-01
CPCI   A61B 34/10          2016-02-01
CPCI   G01D 5/262          2013-01-01
CPCI   G06T 11/00          2013-01-01
CPCI   A61B 90/06          2016-02-01
CPCI   A61B 34/70          2016-02-01
CPCI   G06F 3/0325         2013-01-01
CPCA   A61B 2017/00703     2013-01-01
CPCA   A61B 2017/00207     2013-01-01
CPCA   A61B 2034/2051      2016-02-01
CPCA   A61B 2034/2055      2016-02-01
CPCA   A61B 2090/062       2016-02-01
CPCA   A61B 2090/365       2016-02-01
CPCA   A61B 90/36          2016-02-01
CPCA   A61B 2034/107       2016-02-01
CPCA   A61B 2034/2068      2016-02-01
Abstract
A robotic control system has a wand which emits multiple narrow beams of light that fall on a light sensor array or, with a camera, on a surface, defining the wand's changing position and attitude, which a computer uses to direct relative motion of robotic tools or remote processes, such as those that are controlled by a mouse, but in three dimensions, together with motion compensation means and means for reducing latency.
Background/Summary
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
[0001] Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
BACKGROUND OF THE INVENTION
Field of the Invention
[0002] This invention relates to operator interfaces for controlling robots and remote processes, including pointing devices, such as a mouse. It also relates to methods and systems for controlling remote processes.
Description of the Related Art
[0003] Real-time operator control of robots has been accomplished with electro-mechanical controls such as joysticks and multiple axis hand grips. These devices suffer from a limited range of motion, due to being constrained by the geometry of the control device. In other applications, such as surgery, the operator's hand and finger motions used to operate the device do not closely approximate those motions he would use in conducting the operation by hand. This requires the surgeon to use a different repertoire of hand motions for the robot control than he would for conducting the operation by hand. Other devices, such as glove actuators, while more closely approximating the actual motion of the hand, suffer from a lack of accuracy regarding the motion of the instrument the hand and fingers grasp, and it is the working end of the instrument that is mimicked by the robot's tools that do the work. Other interfaces have been developed that rely on multiple cameras to record the motion of the operator's hands, with or without faux instruments, but these can also suffer from a lack of accuracy.
[0004] These devices suffer from latency, especially when the operator is separated from the worksite by sufficient distances that there is a significant delay in transmission.
[0005] It is an object of some aspects of the invention to address one or more of the above existing concerns. Other concerns may also be addressed in those aspects, or separately in other aspects of the invention, as will be evident from the remainder of this specification.
SUMMARY OF THE INVENTION
[0006] In a first aspect the invention provides a method comprising the steps of actively generating an image pattern on a surface of a first object, detecting the image pattern on the surface of the first object, wherein either the step of actively generating or the step of detecting is performed at a second object spaced away from the first object, and determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern and utilizing reference data for actively generating the image pattern.
[0007] The method may further comprise the step of actively displaying on the first surface an image of a remote process that is controlled in accordance with the determined parameters of the pose of the second object.
[0008] The step of actively generating may comprise the step of projecting a known image pattern to actively generate the image pattern on the surface of the first object, wherein the step of projecting is from either the second object if the step of actively generating is performed at the second object or a first location other than the second object and the first object if the step of detecting is performed at the second object.
[0009] The step of projecting may comprise projecting the image pattern from the second object. The step of detecting may comprise detecting the image pattern at the surface of the first object. The step of projecting may comprise projecting the image pattern from the first location. The step of detecting further comprises detecting the image pattern from a second location other than the first object and the second object.
[0010] The method may further comprise the step of maintaining the first object in a known pose during the steps of projecting and detecting. The method may further comprise the step of maintaining the second object in a known pose during the steps of projecting and detecting.
[0011] The surface of the first object may be substantially planar.
[0012] The method may further comprise the step of detecting movement of the detected pattern, and the step of determining parameters of the pose of the second object comprises determining movement of parameters of the pose of the second object from the detected movement of the detected pattern.
[0013] The method may further comprise the step of detecting linear movement of the second object parallel to the surface by detecting motion against texturing on the surface.
[0014] The step of projecting may further comprise projecting the image pattern such that the image pattern is asymmetrical about an axis of rotation inline with a direction of projection of the image pattern. The step of projecting may further comprise projecting the image pattern such that the size of the image pattern varies continuously with distance from the first object inline with a direction of projection of the image pattern.
[0015] The step of actively generating the image pattern may include actively generating elements of the image pattern over time, and the step of detecting includes detecting elements of the formed image pattern in synchronization with actively generating the image elements.
[0016] The steps of actively generating and detecting may comprise, respectively, actively generating the image pattern on a surface which forms a three dimensional cavity with access for the second object through an opening in the first object, and detecting the image pattern formed on such surface.
[0017] The surface may comprise a plurality of substantially planar sub-surfaces. The step of projecting further comprises projecting the image pattern as a combination of three or more spot beams of known relationship. The step of actively generating may further comprise actively generating the image pattern as a combination of three or more spot beams of known relationship.
[0018] The step of projecting may comprise projecting the image pattern with image pattern elements directed at a plurality of angles about an axis of the second object. The method may further comprise the step of user imparting movement of the second object.
[0019] The step of projecting may further comprise projecting encoded information, other than pose-related information, in an image pattern projected from the second object.
[0020] The step of determining an element of the pose of the second object may further comprise determining a distance from the image pattern on the surface of the first object to a reference point on the second object based upon the size of the detected image pattern.
[0021] In a second aspect the invention provides a method of controlling instruments of a surgical robot in use on a heart, the method comprising the steps of receiving a signal that a heart is about contract, and initiating movement of the surgical robot instruments so as to accommodate movement of the heart in the vicinity of the instruments during contraction as movement of the heart occurs.
[0022] The step of receiving may further comprise receiving a signal related to an anticipated nature of the contraction, and the step of initiating further comprises utilizing the anticipated nature of the contraction from the signal to control the accommodation. The method may comprise the steps of detecting a contour of movement of a heart by, projecting an image pattern on to a surface of the heart in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the heart, and determining movement of the heart based on a transformation of the detected image pattern from reference image pattern data, and moving the surgical robot instruments so as to accommodate the contour of movement of the heart in the vicinity of the instrument, so that operator intended motions can be carried out from this normalized position.
[0023] In a third aspect the invention provides a method of controlling an instrument of a surgical robot comprising the steps of detecting a contour of movement of a heart by, projecting an image pattern on to a surface of the heart in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the heart, and determining movement of the heart based on a transformation of the detected image pattern from reference image pattern data, and moving the surgical robot instruments so as to accommodate the contour of movement of the heart in the vicinity of the instrument, so that operator intended motions can be carried out from this normalized position.
[0024] In a fourth aspect the invention provides a robot system comprising a robot including and controlling an instrument, controls for an operator to control the robot to operate the instrument, a controller for determining quantified information related to motion of the instrument, and a display for displaying the information from the controller to an operator of the robot during use of the robot.
[0025] In a fifth aspect the invention provides a method of conveying information regarding the latency between motion of a controller and motion of an instrument in a remote process controlled by the controller, the method comprising displaying to an operator of the controller an image of the instrument and at least a portion of the remote process surrounding the instrument in a direction of motion of the instrument, and overlaying on the displayed image, an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
[0026] In a sixth aspect the invention provides a method of conveying information regarding the latency between motion of a controller of a surgical robot and motion of an instrument of the surgical robot controlled by the controller, the method comprising displaying on a display visible to an operator of the controller, an image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and overlaying on the displayed image, an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
[0027] In a seventh aspect the invention provides a method of controlling latency between motion of a controller and motion of the instrument in a remote process controlled by the controller, the method comprising the steps of acquiring an original image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and displaying the original image to an operator of the controller, acquiring an instruction from the controller to move the instrument to an instructed pose relative to the original image, transmitting the instruction and information to identify the original image to the remote process, acquiring an updated image of the remote process, performing pattern recognition at the remote process on the image identified by the transmitted information and the updated image to determine a desired pose of the instrument relative to the updated image that corresponds to the instructed pose on the original image, and moving the instrument to the desired pose.
[0028] In an eighth aspect the invention provides a method comprising the steps of actively displaying on a surface of a first object an image of a remote process that is controlled in accordance with parameters of the pose of a second object spaced away from the first object, detecting an image pattern on the surface of the first object, wherein either the image pattern is actively generated from the second object or the image pattern is detected at the second object, determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern and utilizing reference data for the image pattern, and controlling the remote process in accordance with the determined parameters of the pose of the second object.
[0029] In a ninth aspect the invention provides a method comprising the steps of projecting a known image pattern on to a surface of a first object, wherein the step of projecting is from either a second object or a first location other than the second object and the first object, and the first object, second object and first location are at a distance from one another, detecting the image pattern formed on the surface of the first object, wherein if the step of projecting is from the second object then the step of detecting is from either the first object, second object or a second location other than the first and the second object, and if the step of projecting is from the first location then the step of detecting is from the second object, and determining parameters of the pose of the second object utilizing the detected image pattern and reference image pattern data for the known pattern.
[0030] In a tenth aspect the invention provides a method of controlling an instrument of a robot comprising the steps of detecting a contour of movement of an object being worked by the instrument, projecting an image pattern on to a surface of the object in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the object, and determining movement of the object based on a transformation of the detected image pattern from reference image pattern data, and moving the robot instruments so as to accommodate the contour of movement of the object in the vicinity of the instrument, so that operator intended motions can be carried out from this normalized position.
[0031] In an eleventh aspect the invention provides an input interface comprising a pattern generator for actively generating an image pattern on a surface of a first object, a detector for detecting the image pattern on the surface of the first object, wherein the pattern generator or the detector is at a second object spaced away from the first object, and a computer for determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern from the detector and utilizing reference data for actively generating the image pattern.
[0032] In a twelfth aspect the invention provides a system comprising a surgical robot including an instrument controlled by the robot, a computer for receiving a signal that a heart being operated on by the instrument is about to contract, and generating instructions to the robot to initiate movement of the surgical robot instrument so as to accommodate movement of the heart in the vicinity of the instruments during contraction as movement of the heart occurs.
[0033] In a thirteenth aspect the invention provides a robot system comprising a robot including and controlling an instrument, controls for an operator to control the robot to operate the instrument, a controller for determining quantified information related to motion of the instrument, and a display for displaying the information from the controller to an operator of the robot during use of the robot.
[0034] In a fourteenth aspect the invention provides a system for conveying information regarding the latency between motion of a controller and motion of an instrument in a remote process controlled by the controller, the system comprising a computer and a display for displaying to an operator of the controller an image of the instrument and at least a portion of the remote process surrounding the instrument in a direction of motion of the instrument, and for overlaying on the displayed image an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
[0035] In a fifteenth aspect the invention provides a system for conveying information regarding the latency between motion of a controller of a surgical robot and motion of an instrument of the surgical robot controlled by the controller, the system comprising a computer and a display for displaying, on a display visible to an operator of the controller, an image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and for overlaying on the displayed image an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
[0036] In a sixteenth aspect the invention provides a system for controlling latency between motion of a controller and motion of the instrument in a remote process controlled by the controller, the system comprising a camera for acquiring an original image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and a display for displaying the original image to an operator of the controller, acquiring an instruction from the controller to move the instrument to an instructed pose relative to the original image, and transmitting the instruction and information to identify the original image to the remote process, wherein the camera is also for acquiring an updated image of the remote process, a computer for performing pattern recognition at the remote process on the image identified by the transmitted information and the updated image to determine a desired pose of the instrument relative to the updated image that corresponds to the instructed pose on the original image, and instructing the remote process to move the instrument to the desired pose.
[0037] In a seventeenth aspect, the invention provides a computer readable medium storing program instructions executable by one or more processors in one or more computers for causing the computers to implement the method of any one of the method aspects.
[0038] Other aspects of the present invention and detailed additional features of the above aspects will be evident based upon the detailed description, FIGS. and claims herein, including for example systems corresponding to the methods of the above aspects, methods corresponding to the systems of the above aspects, input interfaces, wands, robots, computing systems, alignment systems, software, methods of using the above, and the like.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0039] For a better understanding of the present invention and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the accompanying drawings that show the preferred embodiment of the present invention and in which:
[0040] FIG. 1 is a perspective view of portions of an input interface including a first object, an open sided box having a surface (light sensor array), and a second object (a wand) projecting an image pattern (light beams) to actively generate an image pattern on the surface (spots of light) which is detected by the light sensor in accordance with various example embodiments of aspects of the present invention.
[0041] FIG. 2 is a perspective view of portions of an alternative input interface including a buckyball shaped sensor array in accordance with various example embodiments of aspects of the present invention.
[0042] FIG. 3 is a perspective view of additional portions of an input interface, utilizing, for example, the input interface of FIG. 1, and including transmission means from the sensor array to computer, and a three dimensional viewer including superimposed force feedback information on top of a three dimensional image of a work space in accordance with various example embodiments of aspects of the present invention.
[0043] FIG. 4 and FIG. 5 are perspective views of details of two examples of force feedback information for the input interface of FIG. 3.
[0044] FIG. 6 is a perspective view and block view of various elements of a robotic control system, including the input interface of FIG. 1, in accordance with various embodiments of aspects of the present invention.
[0045] FIG. 6a is an example of an alternative input interface which uses only a single panel to form the sensor array.
[0046] FIG. 6a1 is a perspective view of a further alternative user interface, similar to that illustrated in FIG. 6a, except that the sensor array is comprised of two panels, at an angle relative to each other, known to a computer.
[0047] FIG. 6a2 is a perspective view of another alternative user interface, similar to that illustrated in FIG. 6a, except that the camera is located in a stationary position above the surface.
[0048] FIG. 6b is a block diagram illustrating another further alternate user interface in which a lens is included and which tracks the spots projected onto a surface and transmits the information wirelessly to the controller/encoder and/or the computer.
[0049] FIG. 6c is a cross-sectional, perspective view of an example embodiment of a faux instrument wand which includes a lens.
[0050] FIG. 7 is a cross-sectional, perspective view of an example embodiment of a wand, including rechargeable battery and controller/encoder, various example controls, and light emitter cluster, which houses the light emitters.
[0051] FIG. 8 is a cross-sectional, perspective view of a faux forceps wand.
[0052] FIG. 8a is a cross-sectional, perspective view of an example embodiment of a wand similar to FIG. 7, but instead of multiple fixed emitters, there is one emitter, the beam of which is redirected by a mirror or other beam redirecting device.
[0053] FIG. 8b is a cross-sectional, perspective view of the distal end of the wand of FIG. 8a, illustrating an emitter beam which is redirected by a mirror.
[0054] FIG. 8c is a cross-sectional, perspective view of the distal end of the wand 2b of FIG. 8a, illustrating an emitter beam which is redirected by mirrors.
[0055] FIG. 8d is a perspective view of a surface on which an addressing grid has been overlain. For diagrammatical clarity, only parts of the grid have been illustrated, it being understood that the grid is continuous over the surface.
[0056] FIG. 9 is a cross-sectional, perspective view of an example embodiment of a faux forceps wand which includes a finger slider and sensor and/or haptic feedback device.
[0057] FIG. 10 is a perspective view of a further example embodiment of an operator viewer, with force feedback information illustrated in detail, and tool icons of available tools and a selected tool.
[0058] FIG. 11 is a cross-sectional, perspective view which illustrates an example of relative movement of wand controls and consequent movement of a tool.
[0059] FIGS. 12, 13 and 14 are cross-sectional, perspective views which illustrate an example of relative movement of wand controls and consequent movement of a tool relative to a bolt.
[0060] FIGS. 15 and 16 are cross-sectional, perspective views which illustrate an example of tools with adjustable extensions, which can retract in order to compensate for a rising and falling surface in accordance with an example embodiment of an aspect of the present invention.
[0061] FIG. 17 is a cross-sectional, perspective view of a camera tool which illustrates the effect of spacing of neighboring projected dots on a surface at two stages of movement. The separations, along with known information (the angles of the beams relative to the tool and the position of the camera tool), provide a computer with a description of the changing position of the surface at each point in time.
[0062] FIG. 18 is a perspective view detail of a distal end of the camera tool of FIG. 17 projecting beams at various predetermined angles, relative to the tool.
[0063] FIG. 19 is a cross-sectional, block diagram of an example passive haptic feedback device in which the flow of an electrorheological or magnetorheological fluid is controlled by an electrical or magnetic field between elements, which can be electrodes or magnetic coils in accordance with an embodiment of an aspect of the present invention.
[0064] FIG. 20 is a cross-sectional, block view of an alternate embodiment of a passive haptic feedback device in which the flow of fluid, such as saline or glycerin is controlled by an electrical valve.
[0065] FIG. 21 is a cross-sectional, perspective view of the operator's view of a worksite as viewed through an example embodiment of a viewer with eyepieces, illustrating superimposed tool cursors of the operator's intended position of tools at the worksite, and the actual position of the tools at the worksite.
[0066] FIG. 22 is a cross-sectional, perspective view of an example wand attached to any body part, tool, or other object, by means of connectors, which have complementary indexing means, to ensure their proper alignment.
[0067] FIG. 23 is a cross-sectional, perspective view of two wands that can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill is positioned so that it can drill through a hidden hole.
[0068] FIG. 24 is a cross-sectional, perspective view of one wand, and sensor array assembly (an example detector) which can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill is positioned so that it can drill through a hidden hole. A sensor array replaces the emitter housing in the sensor array assembly but the assembly is otherwise similar in construction to a wand. The sensor array communicates with a controller/encoder through communicating means and thence wirelessly to a computer.
[0069] FIG. 25 is a cross-sectional, perspective view of two combination wand and sensor array assemblies 47 which have been daisy chained with two other combination units (not shown) and in combination with a sensor array 1.
[0070] FIG. 26 is a graphic illustration of a screen plane (surface of a first object) and device planes with mounted lasers (second object) and related coordinate systems.
[0071] FIG. 27 is a graphic illustration of a linear translation between coordinate systems of FIG. 26.
[0072] FIG. 28 is a graphic illustration of a rotational translation between coordinate systems of FIG. 26.
[0073] FIGS. 29a to 29e are partial, sectional, perspective views of the operating theatre and remote work site, which illustrate methods to reduce or eliminate operational latency of the system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0074] An object location, sometimes referred to as position, and orientation, sometimes referred to as attitude, will together be called the “pose” of the object, where it is understood that the orientation of a point is arbitrary and that the orientation of a line or a plane or other special geometrical objects may be specified with only two, rather than the usual three, orientation parameters.
[0075] A pose can have many spatial parameters, referred to herein as parameters. As described above, such parameters may include the location and orientation of the object. Parameters may include location information in one, two or three dimensions. Pose location parameters may also be described in terms of vectors, providing a direction and a distance. Pose orientation parameters may be defined in terms of an axis of the object, for example, the skew (rotation about the axis), rotation (rotation of the axis about an intersection of the axis and a line normal to a plane), and tilt (rotation of the axis about an intersection of the axis and a line parallel to the plane). Other pose orientation parameters are sometimes referred to as roll, pitch and yaw.
[0076] It will be evident to those skilled in the art that there are many possible parameters to a pose, and many possible methods of deriving pose information. Some parameters will contain redundant information between parameters of the pose. The principles described herein include all manner of deriving pose information from the geometric configuration of detector and surface described herein, and are not limited to the specific pose parameters described herein.
[0077] Pose parameters may be relative to an object (such as a surface), or some other reference. Pose parameters may be indirectly derived, for example a pose relative to a first object may be derived from a pose relative to a second object and a known relationship between the first object and second object. Pose parameters may be relative in time, for example a change in the pose of an object resulting from motion over time may itself be a pose element without determining the original pose element.
[0078] The description provided herein is made with respect to exemplary embodiments. For brevity, some features and functions will be described with respect to some embodiments while other features and functions will be described with respect to other embodiments. All features and functions may be exchanged between embodiments as the context permits, and the use of individual features and functions is not limited by the description to the specific embodiments with which the features and functions are described herein. Similarly, the description of certain features and functions with respect to a given embodiment does not limit that embodiment to requiring each of the specific features and functions described with respect to that embodiment.
[0079] In this description one or more computers are referenced. It is to be understood that such computers comprise some form of processor and memory, which may or may not be integrated in a single integrated circuit. The processor may be provided by multiple CPUs, which may be integrated on a single integrated circuit as is becoming more and more common, or by a single CPU. Dedicated processors may be utilized for specific types of processing, for example, those that are mathematically computationally intensive. The functions of the computer may be performed in a single computer or may be distributed on multiple computers connected directly, through a local area network (LAN) or across a wide area network (WAN) such as the Internet. Distributed computers may be in a single location or in multiple locations. Distributed computers may be located close to the external devices that utilize their output or provide their input, in order to reduce transmission times for large amounts of data; for example, image data may be processed in a computer at the location where such data is produced so that, rather than transmitting entire image files, lesser amounts of post-processed data may be transmitted to where they are required.
[0080] The processing may be executed in accordance with computer software (computer program instructions) located in the memory to perform the various functions described herein, including for example various calculations and the reception and transmission of input and output of the processor. Such software is stored in memory for use by the processor. Typically the memory that is directly accessible to the processor will be read only memory (ROM) or random access memory (RAM) or some other form of fast access memory. Such software, or portions thereof, may also be stored in longer term memory for transfer to the fast access memory. Longer term storage may include for example a hard disk, CDROM in a CDROM drive, DVD in a DVD drive, or other computer readable medium.
[0081] The content of such software may take many forms while carrying out the features and functions described herein and variants thereof as will be evident to those skilled in the art based on the principles described herein.
[0082] Patterns include, for example, the spots emitted from the emitters described herein. Patterns also include other examples provided herein, such as ellipses and other curves. They may also include asymmetrical patterns such as bar codes. Actively generating a pattern includes, for example, generating a pattern on a computer monitor (called herein a screen) or other display device. Actively generating a pattern may alternatively include projecting the pattern onto a surface. A detector includes, for example, a camera or a sensor array incorporating, for example, CCD devices, and the like. Reference pattern data may include, for example, the location and direction of emitters, or other projectors.
[0083] Objects as used herein are physical objects, and the term is to be construed generally unless the context requires otherwise. When projection or detection occurs at an object it is intended to include such projection or detection from objects fixedly connected to the initial object and the projector or detector is considered to be part of the initial object.
[0084] Referring to the FIGS., like items will be referenced with the same reference numerals from FIG. to FIG. and the description of previously introduced items will not be repeated, except to the extent required to understand the principle being discussed. Further, similar, although not identical, items may be referenced with the same initial reference numeral and a distinguishing alphabetic suffix, possibly followed by a numerical suffix.
[0085] In some aspects embodiments described herein provide a solid state operator interface which accurately records the movements of the working end of an operator's faux instruments and reports them to the working end of the robot's tools. In the case of a surgical robot, the operator (surgeon) manipulates instruments similar to those the surgeon would normally use, such as a tubular wand for a scalpel, and an instrument that would be similar in shape to forceps. This approach reduces the training that is required to become adept at using a robotic system, and also avoids the deterioration of skills learned in hands-on operating procedures.
[0086] In some aspects embodiments described herein provide an operator interface that permits an input device, and the hands of the operator, to move in a larger space, which would eliminate or reduce the occasions in which the system requires resetting a center point of operator interface movements.
[0087] In some aspects embodiments described herein provide an interface which allows for fine coordinated movements by input device, and by both hands, such as when the surgeon attaches donor and recipient vessels with sutures.
[0088] In some aspects embodiments described herein provide an interface that may include haptic feedback.
[0089] In some aspects embodiments described herein provide an interface system that can position the tools at any point in time so that non-operationally created motions are fully compensated for, and a relatively small patch of surface, where the procedure is being carried out, is rendered virtually static to the operator's point of view.
[0090] In some aspects, embodiments described herein provide a method for virtually limiting latency, during the operation. In some other aspects, embodiments described herein provide a method for alerting an operator to the existence and extent of latency during the operation.
[0091] Referring to FIG. 1, an operator's hand 6 controls the motion of the wand 2 within a sensor array 1, comprised of five rectangular segments forming an open-sided box. Narrow light beams 4 emanate from a light-emitting cluster 3 and project spots of light 5 on the light sensors of the sensor array 1.
[0092] Referring to FIG. 2, the box sensor array 1 of FIG. 1 is replaced by a buckyball-shaped sensor array 1a, comprised of hexagonal and pentagonal segments, and an opening 7, which permits the wand 2 to be inserted into the sensor array 1a.
[0093] Referring to FIG. 3, a system includes the sensor array 1 and transmission means 11a that deliver signals from the segments of the sensor array 1 at interface pads 11b to computer 11. A three dimensional viewer 8 includes superimposed force feedback information 10b, 10c, as shown in detail 10a, on top of the three dimensional image of the work space.
[0094] Referring to FIG. 4 and FIG. 5, two examples are shown of the force feedback information 10d, 10e, 10f and 10g, which may be used in substitution or in addition to haptic feedback.
[0095] Referring to FIG. 6, various elements of a robotic control system are shown. FIG. 6 illustrates an example where a body 14 is being operated on through an incision 14a. The robot in this case is fitted with tool controller 15 and example tools: forceps 15b, three dimensional camera 15c and cauterizing scalpel 15d. The robot's principal actuators 15a control the various movements of the tools in response to the positions of the wands 2 including the goose-neck camera guiding wand 13, and commands of the operator.
[0096] Referring to FIG. 6a, an example of a user interface uses a single panel to form the sensor array 1.
[0097] Referring to FIG. 6a1, a user interface is shown that is similar to that illustrated in FIG. 6a, except that the sensor array 1b is comprised of two panels at an angle relative to each other, which is known to the computer 11.
[0098] Referring to FIG. 6a2, an interface is shown that is similar to that illustrated in FIG. 6b, except that the camera 3c is located in a stationary position above the surface 1b, such that it can view spots 5 projected onto the surface and their position on the surface, but at an angle which minimizes or eliminates interference by the wand 2 with the emitted beams 4. The camera 3c is connected to the computer 11 by connecting means 3b1.
[0099] Referring to FIG. 6b, a user interface is shown that includes a lens 3c, which tracks the spots 5 projected onto a surface 1b, which may not contain sensors, and transmits the information wirelessly to the controller/encoder 18 and/or the computer 11.
[0100] Referring to FIG. 6c, a wand 2b includes a lens 3c.
[0101] Referring to FIG. 7, an example, generally cylindrical wand 2 includes rechargeable battery 17 and controller/encoder 18, various example controls 19, 20, 20a, 21 and light emitter cluster 3, which houses light emitters 3a.
[0102] Referring to FIG. 8, a faux forceps wand 2 has finger holes 21, return spring 21a and sensor/haptic feedback controller 21b.
[0103] Referring to FIG. 8a, the wand 2 is similar to FIG. 7, but instead of multiple fixed emitters 3a, there is one emitter 3a, the beam 4 of which is redirected by a mirror 3d or other beam redirecting device. FIG. 8a also illustrates the wand 2 with a camera 3c.
[0104] Referring to FIG. 8b, a cross-sectional, perspective view of the distal end of wand 2 is shown, illustrating in greater detail emitter 3a and beam 4, part of which is redirected by mirror 3d1, in some embodiments being one of an array of mirrors 3e.
[0105] Referring to FIG. 8c, a cross-sectional, perspective view of the distal end of wand 2b is shown, illustrating in greater detail the emitter 3a and beam 4, part of which is redirected by mirrors 3d2 and 3d3. FIG. 8c also illustrates an alternative location for camera 3c, in this case being located at the distal end of the mirror array 3e.
[0106] Referring to FIG. 8d, a surface 1b has an addressing grid 1c overlain. For diagrammatical clarity, only parts of the grid have been illustrated, it being understood that the grid 1c is continuous over the surface 1b.
[0107] Referring to FIG. 9, faux forceps wand 2 includes a finger slider 19a and sensor and/or haptic feedback device 19c.
[0108] Referring to FIG. 10, an operator viewer 8 has force feedback information 10 as illustrated in detail 10a, and also illustrated in FIG. 3. Tool icons 10h represent available tools. In this example, the operator has selected the forceps icon 26b for the left hand and the wrench tool icon 27b for the right hand. As an example, the selected tool is indicated by the icon being bolded.
[0109] Referring to FIG. 11, example relative movement of the wand 2 controls is shown, including the finger hole control 21 and the finger slider control 19a (see FIG. 9), and the consequent movement of the tool 26.
[0110] Referring to FIGS. 12, 13 and 14, an example of relative movement of the wand 2 controls is shown, including the finger hole control 21, the finger slider control 19a, and the rotary controller 20, and the consequent movement of tool 27 relative to the bolt 29.
[0111] Referring to FIGS. 15 and 16, example tools 15b, 15c and 15d have adjustable extensions 15b1, 15c1 and 15d1 which can retract 15b2, 15c2 and 15d2 in order to compensate for rising and falling of a surface, for example a heart surface 14d1, 14d2.
[0112] Referring to FIG. 17, camera tool 15c views the effect of the spacing of neighboring projected dots 5 on the surface of the heart 14d1, 14d2, at two stages in the heart's beat. The separations, together with known information (the angles of the beams 4 relative to the tool 15c, and the position of the camera tool 15c), provide computer 11 with a description of the changing position of the heart surface at each point in time. FIG. 17 also illustrates one example position of cameras, or camera lenses 3c and 3c2.
[0113] Referring to FIG. 18, the distal end of the camera tool 15c is shown in detail. The emitter cluster 3 and emitters 3a project beams 4 at various predetermined angles, relative to the tool 15c.
[0114] Referring to FIG. 19, an example passive haptic feedback device has flow of an electrorheological or magnetorheological fluid controlled by an electrical or magnetic field between elements 36, which can be electrodes or magnetic coils. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31.
[0115] Referring to FIG. 20, an example passive haptic feedback device has a flow of fluid, such as saline or glycerin, controlled by an electrical valve 37. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31.
[0116] Referring to FIG. 21, an operator's view of the worksite (a remote process) seen through the viewer 8 and eyepieces 9 has superimposed tool cursors 15d3 and 15b3 that illustrate the operator's intended position of the tools at the worksite. Actual position of the tools 15d2 and 15b2 at the worksite is also shown in the viewer 8 to display to the operator the difference between the two due to temporal latency.
[0117] Referring to FIG. 22, a wand 2b may be attached to any body part, tool 15d2, 15c2, or other object, by means of connectors 42 and 42a, that have complementary indexing means 42c and 42b, to ensure their proper alignment. Where an external camera 3c, such as illustrated in FIG. 6a2, or a sensor array 1, as illustrated in FIG. 23, is provided, then the wand 2b may not have an integral camera 3c.
[0118] Referring to FIG. 23, two wands 2i and 2ii can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill 44 is positioned so that it can drill through a hidden hole 46.
[0119] Referring to FIG. 24, a wand 2 and sensor array assembly 1d can be aligned in a desired manner, or be placed in a desired orientation or pose with respect to each other or another object. In this example a drill 44 is posed so that it can drill through a hidden hole 46. The sensor array 1 replaces the emitter housing 3 but is otherwise similar in construction to the wand 2. The sensor array 1 communicates with the controller/encoder 18 by communicating means 11a and thence wirelessly to computer 11 (not shown).
[0120] Some general elements of embodiments of some aspects of the present invention will now be discussed.
[0121] One embodiment is a system which accurately records the motions of the working end of an operator's faux instruments, herein referred to as a wand, which can approximate the shape of the devices the operator would use in a manual procedure. These motions are reported to the working end of the tools that the robot applies to the work site.
[0122] Other embodiments simply use the wand as an input device and its shape may not in any way relate to a particular instrument. For clarity, this disclosure will use a surgical interface to illuminate some convenient features of the invention, but for some embodiments the shape of the wand may not in any way mimic standard tools or instruments. It should also be noted that reference is made to a system controlling robotically controlled tools. It should be understood that some embodiments will control actuators that perform all types of work, such as controlling reaction devices, for example rocket motors or jet engines, or the position of wing control surfaces, to name a few. The system may control virtual computer generated objects that are visually displayed or remain resident within the computer, and where actuators may not even be used. Embodiments of this type would include manipulation of models of molecular structures (molecular modeling) and manipulation of protein structures. In such embodiments the wand may be thought of as a computer mouse in three dimensions, for example allowing the operator to view a three dimensional image of a structure, and then to make alterations to it, by moving the wand and making control commands, for example in the space in front of a sensor array. Such an embodiment of the wand and method could be used in architecture, machine design or movie animation. It will be recognized by those skilled in the art that these are examples only of uses of such embodiments and the embodiments are not limited to these examples.
[0123] In some described embodiments wands 2 incorporate light-emitting elements 3a that collectively cast multiple narrow beams of light, at known angles to each other, onto a sensor array 1 constructed of one or more light detecting panel(s) as illustrated in FIG. 3. The light detecting panel(s) report the location of the incident light, in real-time, to a computer. Knowing the angles at which the emitters 3a project the light beams from the wand 2, the computer can convert the various locations of incident light 5 of beams 4, using triangulation and mathematical methods and algorithms well known to the art, to calculate the position and attitude of the wand 2 relative to the sensor array 1, at each particular time interval. As the wand 2 moves, so do the spots of incident light 5 on the sensor array(s) 1, and so the computer can produce a running calculation of the position and attitude (example parameters of the pose) of the wand 2, from time to time. The computer can convert changes in parameters of the pose into instructions to the robot to assume relative motions. Small changes in the position and attitude of the wand can trace relatively large positional changes where the light falls 5 on the sensor array 1. This can allow for accurate determination of the position and attitude of the wand.
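As a purely illustrative sketch of the triangulation just described, and not the patent's own implementation, the following Python fragment fits the six pose parameters of a wand to measured spot locations, assuming the sensor panel lies in the plane z = 0 and that the unit direction of each beam in the wand's own frame is known. The names, the rotation convention and the least-squares solver are all assumptions made for illustration.

import numpy as np
from scipy.optimize import least_squares

def project_spots(pose, beam_dirs_wand):
    # Predict where each beam strikes the sensor-array plane z = 0.
    # pose = [x, y, z, roll, pitch, yaw] of the wand relative to the array;
    # beam_dirs_wand: (n, 3) unit beam directions in the wand frame.
    x, y, z, r, p, w = pose
    cr, sr = np.cos(r), np.sin(r)
    cp, sp = np.cos(p), np.sin(p)
    cw, sw = np.cos(w), np.sin(w)
    # Rotation matrix for the z-y-x (yaw-pitch-roll) convention.
    R = np.array([[cw*cp, cw*sp*sr - sw*cr, cw*sp*cr + sw*sr],
                  [sw*cp, sw*sp*sr + cw*cr, sw*sp*cr - cw*sr],
                  [-sp,   cp*sr,            cp*cr]])
    origin = np.array([x, y, z])
    dirs = beam_dirs_wand @ R.T            # beam directions in array frame
    t = -origin[2] / dirs[:, 2]            # ray/plane intersection with z = 0
    return (origin + t[:, None] * dirs)[:, :2]

def estimate_pose(spots_measured, beam_dirs_wand, pose_guess):
    # Least-squares fit of the six pose parameters to the detected spots.
    residual = lambda pose: (project_spots(pose, beam_dirs_wand)
                             - spots_measured).ravel()
    return least_squares(residual, pose_guess).x

With more than the minimum number of beams the fit is over-determined, which is one reason redundant emitters improve robustness.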
[0124] Mathematical calculations that may be used to determine parameters of a pose of the wand and other parameters of pose described herein have been developed, for example, in the field of photogrammetry, which provides a collection of methods for determining the position and orientation of cameras and range sensors in a scene and relating camera positions and range measurements to scene coordinates.
[0125] In general there are four orientation problems:
A) Absolute Orientation Problem
[0126] To solve this problem one can determine, for example, the transformation between two coordinate systems, or the position and orientation of a range sensor in an absolute coordinate system, from the coordinates of calibration points. This can be done by recovery of a rigid body transformation between the two coordinate systems. One application is to determine the relationship between a depth measuring device, such as a range camera or binocular stereo system, and the absolute coordinate system.
[0127] In the case of a range camera, the input is at least a set of four conjugate pairs from one camera and their absolute coordinates. In the case of a binocular stereo system, the input is at least three conjugate pairs seen from the left and right cameras.
B) Relative Orientation Problem
[0128] To solve this problem one can determine, for example, the relative position and orientation between two cameras from projections of calibration points in the scene. This is used to calibrate a pair of cameras for obtaining depth measurements with binocular stereo.
[0129] Given n calibration points, there are 12+2n unknowns and 7+3n constraints.
[0130] At least 5 conjugate pairs are needed for a solution.
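The minimum of five conjugate pairs follows directly from this counting: a solution requires at least as many constraints as unknowns, so

\[
7 + 3n \;\ge\; 12 + 2n \quad\Longrightarrow\quad n \;\ge\; 5 .
\]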
C) Exterior Orientation Problem
[0131] To solve this problem one can determine, for example, the position and orientation of a camera in an absolute coordinate system from the projections of calibration points in a scene. This problem must be solved in image analysis applications where it is necessary to relate image measurements to the geometry of the scene. This can be applied to a problem of the position and orientation of a bundle of rays.
D) Interior Orientation Problem
[0132] To solve this problem one can determine, for example, the internal geometry of a camera, including camera constants, location of the principal point and corrections for lens distortions.
[0133] Some examples of these problems and their solutions are found in Ramesh Jain, Rangachar Kasturi and Brian G. Schunck, Machine Vision, McGraw-Hill, New York, 1995, ISBN 0-07-032018-7. Chapter 12 on calibration deals in particular with the absolute orientation problem with scale change and binocular stereo, and with camera calibration problems and solutions which correlate image pixel locations to points in space. The camera calibration problem includes both the exterior and interior orientation problems.
[0134] In addition to calibration problems and solutions, the Jain et al. reference addresses example problems and solutions for extracting the distance or depth of various points in the scene relative to the position of a camera, by direct and indirect methods. As an example, depth information can be obtained directly from the intensity of a pair of images using two cameras displaced from each other by a known distance and with known focal length. As an alternative example solution, two or more images taken from a moving camera can also be used to compute depth information. In addition to these direct methods, 3D information can also be estimated indirectly from 2D intensity images by what are known as "Shape from X" techniques, where X denotes image cues such as shading, texture, focus or motion. Examples are discussed in Chapter 11 in particular.
[0135] The above Jain et al. reference is hereby incorporated by reference into the detailed description hereof.
[0136] As a further example discussion of solutions to mathematical calculations that may be used to determine parameters of a pose of the wand, for the purposes of determining the 3D position of a hand-held device equipped with laser pointers through a 2D image analysis of laser point projections onto a screen, two sets of coordinate systems can be defined as shown in FIG. 26. The centre of a first coordinate system (xS,yS,zS) can be placed in the middle of the plane that coincides with the screen (projection) plane and is considered to be fixed. The lasers installed on the hand-held device can be described with a set of lines in a second coordinate system (xD,yD,zD) whose origin coincides with the intersection of the laser pointers. Additionally, the second coordinate system can have freedom of translation and rotation as shown in FIGS. 27 and 28. Translation and rotation coordinates such as those shown in FIGS. 27 and 28 can also be found in a linear algebra text such as Howard Anton (John Wiley & Sons, 4th edition, ISBN 0-471-09890-6), Section 4.10, at pp. 199 to 220.
[0137] The projection of the laser on the fixed plane is mathematically equivalent to finding the intersection between the plane equation zS=0 and the line equation describing the laser path. However, the line equations first have to be transformed into the original coordinate system. There are many ways to define an arbitrary rotation and translation of one coordinate frame into another. One of the ways is via the transform matrix elements.
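A minimal Python sketch of this intersection step, assuming a rotation matrix R and translation vector a of the form given in Tables 1 and 2 below; the argument names and the representation of the laser line are illustrative assumptions only.

import numpy as np

def laser_spot_on_screen(R, a, p0, d):
    # R: 3x3 rotation of the device frame into the screen frame;
    # a: translation of the device origin, shape (3,);
    # p0, d: a point on the laser line and its direction, both given in
    # device coordinates. Returns the (xS, yS) intersection with zS = 0.
    p0_s = R @ np.asarray(p0) + np.asarray(a)   # line into screen frame
    d_s = R @ np.asarray(d)
    t = -p0_s[2] / d_s[2]                       # solve zS(t) = 0 along the ray
    spot = p0_s + t * d_s
    return spot[:2]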
[0138] Tables 1 and 2 show the coordinate transforms of a point P from one coordinate system to the other as a result of linear translation and rotation.
TABLE 1. Linear translation:

\[
x = x^* + a_1, \qquad y = y^* + a_2, \qquad z = z^* + a_3 .
\]

TABLE 2. Rotational transformation, x_i = a_i1 x*_1 + a_i2 x*_2 + a_i3 x*_3, and definition of the a_ik coefficients in terms of the rotation angles φ, θ and ψ:

\[
(a_{ik}) =
\begin{pmatrix}
\cos\theta\cos\psi & -\cos\theta\sin\psi & \sin\theta \\
\cos\varphi\sin\psi + \sin\varphi\sin\theta\cos\psi & \cos\varphi\cos\psi - \sin\varphi\sin\theta\sin\psi & -\sin\varphi\cos\theta \\
\sin\varphi\sin\psi - \cos\varphi\sin\theta\cos\psi & \sin\varphi\cos\psi + \cos\varphi\sin\theta\sin\psi & \cos\varphi\cos\theta
\end{pmatrix}
\]
[0139] Table 3 is a summary of example laser properties and image analysis requirements for the reconstruction of the translation or rotation of the hand-held device based on observations of the movement of the projection points as set out above. For the purpose of this discussion, multiple lasers are equivalent to a single laser split into multiple spot beams.
TABLE 3. Number of lasers versus detectable translations and rotations.

One laser. Translation in x and y: detectable. Translation in z: requires a light source with a large dispersion angle; requires edge detection and area or perimeter calculations. Rotation along z: possible with the offset sensor and path reconstruction from a minimum of 3 frames for large angles; however, not very sensitive for small rotational angles. Rotation along x or y: not detectable for a narrow laser beam (it would be interpreted as translation); in the case of a dispersed beam, requires edge detection and shape reconstruction.

Two lasers. Translation in x, y and z: detectable. Rotation along z: problem with detection of left-right laser, equivalent to 180° rotation; requires marking of one of the lasers, and still requires path reconstruction via frame history. Rotation along x: would be interpreted as translation in the case of horizontal or vertical alignments; for misaligned lasers, not very sensitive, and requires distance calculation and calibration. Rotation along y: requires non-parallel laser beams and distance calibration.

Three lasers. Translation in x, y and z: detectable. Rotation along z: requires marking of one of the lasers. Rotation along x: requires area or perimeter calibration/calculation. Rotation along y: detectable with non-parallel laser beams.

Four or more lasers. Can provide additional information to potentially avoid singularities or ambiguities.
[0140] Additional image frames can be used to change the number of lasers, or spots, used at any one time. The linear transposition in the x and y directions can be reconstructed from the center of mass of the spots. The translation along the z axis can utilize a calibration of the area/perimeter of the triangle formed by the spots. Detection of the rotation around the z axis can be achieved by marking one of the lasers or by asymmetrical placement of the lasers; marking a laser may result in faster processing time than the second option, which requires additional image processing in order to find the relative position of the triangle. The marking of the laser can be achieved, for example, by having one laser of larger power, which would translate into pixel intensity saturation at its projection point.
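A hedged Python sketch of these reconstruction steps, under stated assumptions: x-y translation from the centroid of the spots, z from a calibrated triangle area (a square-root model is assumed here, since the projected area of diverging beams grows with the square of distance), and rotation about z from the brightest (marked) laser. The calibration constants and names are hypothetical.

import numpy as np

def reconstruct(spots, intensities, area_ref, z_ref):
    # spots: (3, 2) projected laser points; intensities: per-spot
    # brightness, used to identify the marked (saturated) laser.
    spots = np.asarray(spots, dtype=float)
    centroid = spots.mean(axis=0)                  # x-y translation
    (x1, y1), (x2, y2), (x3, y3) = spots
    # Shoelace formula for the area of the spot triangle.
    area = 0.5 * abs((x2 - x1)*(y3 - y1) - (x3 - x1)*(y2 - y1))
    # Assumed model: area scales with distance squared for diverging
    # beams, calibrated against a reference area at a known distance.
    z = z_ref * np.sqrt(area / area_ref)
    # Rotation about z: bearing of the marked spot about the centroid.
    marked = spots[int(np.argmax(intensities))] - centroid
    theta = np.arctan2(marked[1], marked[0])
    return centroid, z, theta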
[0141] With respect to the image processing time, it may be preferable to limit the area of the laser projection, for example to a 3 by 3 pixel array. Once the first laser point has been detected, a search algorithm for the rest of the laser points could be limited to a smaller image matrix, based on the definition of allowable movements.
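One minimal way to realize such a restricted search, sketched here in Python with an invented window size and image shape; in practice the allowable-movement bound would set the half-width.

def search_window(last_xy, half=8, shape=(480, 640)):
    # Return slices selecting a small sub-image centred on the spot's
    # last known position; later frames are searched only there.
    x, y = last_xy
    x0, x1 = max(0, x - half), min(shape[1], x + half + 1)
    y0, y1 = max(0, y - half), min(shape[0], y + half + 1)
    return (slice(y0, y1), slice(x0, x1))   # use as frame[window]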
[0142] Other illustrative examples of mathematical calculation that may be used to determine parameters of a pose of the wand and other parameters of pose described herein are included for example in B. K. P. Horn. Robot Vision. McGraw-Hill, New York, 1986; U.S. patent application of Fahraeus filed Mar. 21, 2001 under application Ser. No. 09/812,902 and published in Pub. No. US2002/0048404 on Pub. Date: Apr. 25, 2002 under title APPARATUS AND METHOD FOR DETERMINING SPATIAL ORIENTATION which discusses among other things determining the spatial relationship between a surface having a predetermined pattern and an apparatus; in U.S. patent of Zhang et al. issued Apr. 4, 2006 under title APPARATUS AND METHOD FOR DETERMINING ORIENTATION PARAMETERS OF AN ELONGATE OBJECT; Marc Erich Latoschik, Elmar Bomberg, Augmenting a Laser Pointer with a Diffraction Grating for Monoscopic 6DOF Detection, Journal of Virtual Reality and Broadcasting, Volume 4(2006), no. 14, urn:nbn:de:0009-6-12754, ISSN 1860-2037 http://www.jvrb.org/4.2007/1275; Eric Woods (HIT Lab NZ), Paul Mason (Lincoln University, New Zealand), Mark Billinghurst (HIT Lab NZ) MagicMouse: an Inexpensive 6-Degree-of-Freedom Mouse http://citeseer.ist.psu.edu/706368.html; Kynan Eng, A Miniature, One-Handed 3D Motion Controller, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland http://www.ini.ethz.ch/˜kynan/publications/Eng-3DController-ForDistribution-2007.pdf. The content of each of the above references cited above in this paragraph is hereby incorporated by reference into the detailed description hereof.
[0143] Rather than using a sensor array to detect the incident light 5 of beams 4, a camera above a passive surface 1b, as illustrated in FIG. 6a2, may similarly detect the position of the incident spots of light 5 on the surface 1b and make the same calculations described above to determine the position and attitude of the wand 2. Alternatively, a camera 3c may be incorporated into the wand 2b to detect where the light falls 5 on the surface 1b, as illustrated in FIG. 6a1.
[0144] With reference to FIG. 6, since the space in front of the sensor array(s) may be different from the space that the robot operates in, the operator may reset or, as is usually the case, center the wands 2 in front of the sensor array 1, to coordinate the wand's position with that of the working end of the robotic arms 15b, 15c and 15d, for the next work sequence. Additionally, while the motions remain relative, the travel distances of the wands 2 and of the working end of the robot arms 15b, 15c, 15d may differ. For example, where accuracy is critical, the wand 2 may be set to move relatively long distances to effect a relatively short displacement at the working end of the robotic arms 15b, 15c, 15d. Conversely, where accuracy is not important and quicker movements over larger distances are desired, the computer can be instructed to translate short movements of the wand 2 into relatively large distances of travel at the working end of the robotic arms 15b, 15c, 15d. This relationship can be changed by the operator, at any time, by moving a control on the wand 2 or controls 11e on the console 11d. These methods of computer control are well known to the art and embodiments of the invention that incorporate such controls are within the ambit of the invention.
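The scaling relationship reduces to a single multiplier between wand displacement and tool displacement; a sketch in Python, with illustrative names and values only:

def tool_displacement(wand_delta, scale):
    # wand_delta: (dx, dy, dz) wand travel since the last sample;
    # scale < 1 gives fine work, scale > 1 gives fast, coarse travel.
    return tuple(scale * d for d in wand_delta)

# Fine-work example: 5 mm of wand travel maps to 0.5 mm of tool travel.
# tool_displacement((5.0, 0.0, 0.0), 0.1) -> (0.5, 0.0, 0.0)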
[0145] The relative attitude of the sensor array 1 to the attitude of the robot arm work space 14b can also be set, usually at the commencement of the work, although it may be changed during the operation. For example, the vertical line in the sensor array 1 will usually be set to be the vertical line in the work space 14b, so that when the wand 2 is raised up vertically in front of the sensor array(s) 1, the robot will produce a vertical motion at the working end 15b, 15c, 15d of the robotic arm. This however may be changed by the operator varying the settings of the vertical and/or horizontal plane at the console 11d or, in some other embodiments, in the wand 2.
[0146] Similarly, the longitudinal axis of the wand 2 is generally set to be the same as the longitudinal axis of the working end of the robot's arms 15b, 15c, 15d, although this too can be altered by controls at the console and, in some other embodiments, in the wand itself.
[0147] At the start or reset times, the position and attitude of the wand 2 can be translated to be the same as the position of the working end of the robot arms 15b, 15c, 15d; the motions thereafter, until the next reset, can be relative. This allows the operator to change the operator's start or reset position and attitude of the wand to make it more comfortable to execute the next set of procedures, or provide sufficient room for the next set of procedures, in front of the sensor array 1, as referred to above.
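A hedged sketch of this reset-and-relative-motion behaviour, with invented class and method names: at each reset the current wand and tool positions are captured, and the tool thereafter follows only the scaled wand displacement measured from that capture.

import numpy as np

class Clutch:
    def __init__(self):
        self.wand_ref = None
        self.tool_ref = None

    def reset(self, wand_pos, tool_pos):
        # Capture matching reference positions at reset time.
        self.wand_ref = np.asarray(wand_pos, dtype=float)
        self.tool_ref = np.asarray(tool_pos, dtype=float)

    def tool_target(self, wand_pos, scale=1.0):
        # The tool moves relative to its own reference by the scaled wand
        # displacement since the last reset; motions are thus relative.
        delta = np.asarray(wand_pos, dtype=float) - self.wand_ref
        return self.tool_ref + scale * delta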
[0148] The movement of the wands will then control the movement of the tools to which they are assigned by the operator. Finer movements, and movements that require haptic feedback, can be effected by controls on the wands 2b, such as the finger hole control 21, the rotary controls 20, 20a and the finger slider control 19b, illustrated in FIG. 6c. Switches on the wand or on the console can turn off the active control of the tools by the movement of the wand(s), but may turn on or leave on the active control of the tools by the controls on the wand, to prevent inadvertent jiggling or wander while critical and/or fine work is being conducted by the operator. On other occasions the operator may wish to manipulate the wand 2 position simultaneously with moving the controllers 20, 20a and 19b, or other controls which a preferred embodiment might include.
[0149] The sensor array 1 may be made of one or more sheets or panels of light sensor arrays, in which each pixel of the sensor array cage 1 can communicate the fact that light has or has not fallen 5 on that pixel to the computer, and identify which light beam 4, and from which wand 2, it originated. When integrated by the computer 11 with other inputs from other locations, this information can identify the location and attitude of the wand 2, by triangulation, mathematical methods and computer algorithms well known to the art.
[0150] In some embodiments the color of the incident light, and/or the addressable pulse frequency of the light that is detected, identifies which particular light beam and wand has cast the light so incident. For example, in some embodiments a wand may have several light-emitting elements, such as a laser, diode laser or light-emitting diode, each having a different light wavelength (or color), which can be identified and distinguished by the sensor array 1 (in combination with the computer). In other embodiments, the light emitter 3a is modulated or pulsed to give it a unique pulse address which, when its beam 4 is detected by the sensor array 1, allows the computer to identify the particular light beam 4 and wand 2, and the location and attitude of the same. Other embodiments may take advantage of the relatively unique patterns of beams 4 emitted from each wand 2 to identify the wand 2 and perhaps the particular beam 4 from that wand. Other embodiments can include a combination of these methods, or other similar beam identification methods, well known to the art. It can be desirable to provide additional light emitters 3a to provide redundancy, in the event one or more of the beams does not strike a sensor. For example, in some embodiments an axial reference beam 4 may be directed straight along the longitudinal axis of the wand 2.
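As an illustration of the pulse-address idea only (the codes, frame count and names are invented for this sketch), a beam can be identified by sampling a spot's on/off state over several consecutive frames and matching the sequence against a table of known codes:

BEAM_CODES = {
    "wand1_beam1": (1, 0, 1, 1, 0, 1),
    "wand1_beam2": (1, 1, 0, 0, 1, 1),
    "wand2_beam1": (0, 1, 1, 0, 1, 0),
}

def identify_beam(observed):
    # observed: tuple of 0/1 samples of one spot over six frames.
    for name, code in BEAM_CODES.items():
        if tuple(observed) == code:
            return name
    return None   # unknown or corrupted; fall back on redundant beams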
[0151] One or more of the light beams 4 may be modulated so as to provide information as to the wand 2 identity, and its mode of operation. For example, it might convey information as to the desired heat setting and off/on state of the cauterizing scalpel, or the forceps clasping position, as set by the wand's operator. It might also indicate the rotation of a particular tool. These are only examples of the information that may be selected by the operator, on the wand controls, and then conveyed to the sensor array 1, and hence to the computer to control the robotic arms. Embodiments can include all other convenient instructions and inputs, and all are included within the ambit of the embodiments described herein. This method of conveying instructions may be handled by a dedicated light emitting element 3a, or be bundled into one or more of the light emitting elements 3a that are used to determine the position and attitude of the wand 2. This method of conveying instructions and status information from the wand may be in addition to wireless communications 16, 16a means embedded in the wand, or in place of it.
[0152] The pulses of light from the light-emitting elements 3a of cluster 3 of the wands may be synchronized such that the beams 4 fall 5 on the sensor array 1 at discrete times, so as to avoid conflicting signals in those architectures that do not have direct connections between the sensor elements and drivers, such as active or passive matrix. In other embodiments, redundant beams are sufficient to resolve any signal interference, and software means such as path prediction algorithms can be used to resolve any such conflicts. The beams in most cases will fall on more than one, and in most cases many, pixels in the sensor array, which will improve reliability, at the expense of resolution, and may also be used to distinguish between two beams that strike approximately the same pixel group.
[0153] There are many methods of constructing a light sensor array 1, well known to the art, including thin film transistor (TFT) arrays, in which there may be included color filter arrays or layers to determine the color of the incident light, and which report the location to the computer by direct and discrete connection or, more often, by way of a passive or active connection matrix. These active matrix, or AMTFT, architectures can be used in some embodiments. Recently, polymer TFT sensor arrays are being made which substantially reduce the cost of such sensor arrays. These less expensive arrays will mean that the sensor array(s) 1 can be made much larger. An example of a polymer TFT is described by F. Lemmi, M. Mulato, J. Ho, R. Lau, J. P. Lu, and R. A. Street, Two-Dimensional Amorphous Silicon Color Sensor Array, Xerox PARC, United States, Proceedings of the Materials Research Society, 506 Keystone Drive, Warrendale, Pa., 15086-7573, U.S.A. It is understood that any convenient light sensor array may be used, including any future development in light sensor arrays, their architecture and composition, and such an embodiment is within the ambit of the embodiments described herein.
[0154] In some embodiments, the sensor array pixels may be combined with light emitting elements, forming a superimposed sensor array and light emitting array. In these embodiments an image of the working end of the robot arms 15b, 15c, 15d and work site can be formed on the sensor array 1, and the operator can at the same time view the wand(s) 2 that are initiating the motion of the working end of the robot's arms 15b, 15c, 15d. This embodiment is most effective if the image is generated as a three dimensional image, although this is not required. Methods for creating a three dimensional effect are well known to the art and include synchronous liquid crystal glasses and alternating left eye, right eye image generation, and single pane three dimensional arrays. It is to be understood that the embodiments described herein include all these methods and future three dimensional image generation methods.
[0155] Other embodiments may use an additional camera aimed at the operator's hands and wands, and append the image to that of the worksite that is viewed in the operator viewer 8. This appended image may be turned on and off by the operator.
[0156] In those preferred embodiments that use a surface 1b, and camera 3c, in place of the sensor array 1, as illustrated in FIG. 6c, the wand 2b operates partly as a laser or optical mouse, that is, detecting movement by comparing images acquired by the lens of part(s) of the surface 1b. In some preferred embodiments images of spot(s) 5 can be detected by the said lens 3c, noting both their texture or image qualities, and their positions relative to other spot(s) 5. Since the relative angle of the projected beams 4 are known, the computer 11 and/or controller/encoder 18, can process this information to determine the three dimensional position of the wand 2b relative to the surface 1b, for example by using both methods used by optical/laser mice and mathematical methods including trigonometry, well known to the art. As an example, movement of the wand 2b on planes parallel to the surface 1b, can be determined by methods used by optical/laser mice, which are well known to the art; and the height and attitude of the wand in three dimensional space can be determined by the lens 3c detecting the relative position of the spots 5 projected onto the surface 1b, and using triangulation and mathematical methods described above, which are also well known to the art. More particularly, the position of the wand 2b in three dimensional space can then be computed by integrating these two information streams to accurately establish both the lateral location of the wand 2b and its height and attitude in space. Thus, not all parameters of the pose are determined utilizing the detected pattern of the spots on the surface; rather, some of the parameters are determined utilizing the texture information (lateral location), while other parameters are determined utilizing the detected pattern of spots (height and attitude).
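A hedged Python sketch of the two information streams described in this paragraph, with all function and argument names standing in for the well-known methods the text refers to: lateral motion comes from image correlation of successive surface patches (as in an optical mouse), while height comes from the spacing of the projected spots. For the height step the sketch assumes just two beams diverging symmetrically at a known half-angle from a wand held normal to the surface; the full attitude solution uses the triangulation methods described earlier.

import numpy as np

def fuse_pose(prev_patch, patch, spots, beam_half_angle):
    # 1) Lateral displacement: peak of the circular cross-correlation of
    #    successive surface patches (shifts beyond half the patch size
    #    wrap around; ignored here for brevity).
    corr = np.real(np.fft.ifft2(np.fft.fft2(prev_patch) *
                                np.conj(np.fft.fft2(patch))))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # 2) Height: two beams diverging symmetrically at beam_half_angle
    #    from an axis normal to the surface give a spot separation
    #    s = 2 h tan(half_angle), hence h = s / (2 tan(half_angle)).
    s = np.linalg.norm(np.asarray(spots[0]) - np.asarray(spots[1]))
    h = s / (2.0 * np.tan(beam_half_angle))
    return (dx, dy), h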
[0157] In other embodiments, where there are two or more panels that are placed at relative angles known to the computer 11, such as those illustrated in FIG. 6a1, the wands 2b may contain camera(s) 3c which are able to detect the position of spots 5 on two or more panels. In these arrangements, where the panels are surfaces 1b, the orientation and position of the wand 2 may be determined, for example, as described above by mathematical methods, including trigonometry. For example, in an embodiment where the panels are arranged at right angles to each other (at 90 degrees), as illustrated in FIG. 6a1, and where the angles at which the light beams 4 trace relative to the longitudinal axis of the wand 2 are known, and where the relative positions of the projected spots 5 which fall on both panels are recorded by the camera(s), the position and orientation of the wand 2 in three dimensional space can be directly determined by mathematical methods, including trigonometry.
[0158] This information, for example, can then be used to control the tools 15b, 15c, and 15d, or control any process, virtual or real. It can be readily appreciated that the wand 2b, like the wand 2 can be any shape and have any function required, for example having the shape of an optical/laser mouse and pointing and directing processes in a similar manner.
[0159] In this disclosure, references to wand 2 should be read as including wand 2b and vice versa, as the context permits. Similarly, references to sensor array 1 should be read as including surface 1b and vice versa, as the context permits.
[0160] Embodiments of the invention that incorporate a surface 1b, rather than a sensor array(s) 1, pass information from buttons and hand controls on the wand 2b, for example 19a, 20 and 21, wirelessly or by direct connection, as described herein, and by other methods well known to the art. The beams 4 may be encoded to maintain identification of each beam and each spot 5; for example, the light emitting elements 3a may be pulsed at different frequencies and/or have different colors, which the lens 3c may detect from the light reflected from the spots 5. Although a wand 2b may resort exclusively to those methods used by optical/laser mice to determine its position in three dimensional space, without resorting to detecting, computing and integrating the relative positions of the projected spots 5, the accuracy of such a system will be inferior to those that include those latter methods, and the computational overhead will be greater as well. It is to be understood that some embodiments can rely solely on those methods used by optical/laser mice, where accuracy is not as important.
[0161] In some embodiments, the surface 1b may be any suitable surface, including those that contain textures and marks that are typically used in association with optical/laser mice. The surface 1b may have reflectivity or surface characteristics such that the reflected spots 5 that are detected by the camera 3c are within a known envelope, and thus spots 5 that are off the surface 1b can be rejected in calculating the orientation of the wand 2b, accompanied by a warning signal to the operator.
[0162] The wands 2, 2b may include resting feet that allow them to rest on the surface 1, 1b, such that the beams 4 and spots 5 can be detected by the camera 3c, and such that the system can calibrate itself with a known wand starting orientation and, if placed on a specific footprint, a known position; alternatively, the sensor array 1 or the surface 1b may include an elevated cradle 1e, as illustrated in FIG. 6b, to hold the wand 2b in a fixed position for the calibration routine. The number of light emitting elements, such as lasers or photo-diodes, will depend upon the accuracy and redundancy required.
[0163] The wand 2 may in some applications be stationary, or have an otherwise known position, and measure its position relative to a moving surface or changing contours on a surface. Embodiments of the invention may include such a wand 2, or it may be incorporated into a tool, such as the tools 15b, 15c, 15d illustrated in FIG. 15, FIG. 16 and FIG. 17, and be used to compensate for motions, such as the beating of the heart 14d1, 14d2.
[0164] Feedback of forces acting on the working end of the robotic arms 15b, 15c, 15d may be detected by sensors on the robot arms, by means well known to the art, and this real-time information may be conveyed to the computer, which can regulate the haptic feedback devices and impart approximately the same forces on the operator's fingers and hands and/or resist the movement of the operator's fingers and hands. These haptic feedback devices, which are well known to the art, can, for example, be incorporated into the controls 19, 19a, 20, 21 or other similar controls of the wand 2 or 2b. These haptic feedback devices can be active or passive and can impart force on the operator's fingers or hands (active), and/or resist the motion of the operator's fingers or hands (passive). Examples of passive haptic feedback devices are illustrated in FIGS. 19 and 20. FIG. 19 illustrates a passive haptic feedback device in which the flow of an electrorheological or magnetorheological fluid is controlled by an electrical or magnetic field. FIG. 20 illustrates a passive haptic feedback device in which the flow of fluid, such as saline or glycerin, is controlled by an electrical valve. Embodiments of this invention may incorporate haptic feedback devices of any design known to the art, and all come within the ambit of the embodiments described herein.
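The regulation loop from sensed tool force to haptic command can be as simple as a scaled, clamped mapping; this sketch and its gains are illustrative assumptions, not the patent's control law. For the passive devices of FIGS. 19 and 20, the command would modulate the field strength or valve opening that resists the operator's motion.

def haptic_command(tool_force_n, gain=0.5, max_cmd=1.0):
    # Map the force sensed at the tool (newtons) to a normalized 0..1
    # actuator command for the field or valve; clamp to the device range.
    cmd = gain * tool_force_n
    return max(0.0, min(max_cmd, cmd))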
[0165] These haptic feedback devices can, for example, be incorporated into the finger hole 21 sensor/feedback controller 21b. For example, the finger holes 21 of the wand that is a faux forceps, as illustrated in FIG. 9, can be provided with haptic feedback devices which provide pinching feedback forces to the operator's hands and which accurately simulate the forces acting on the working end of the forceps tool 15b on the working end of the robotic arm. The position and motion of the mobile finger hole 21 can be conveyed to the computer wirelessly, by beam modulation, as described above, or by cable.
[0166] Similarly, the same faux forceps illustrated in FIG. 9 can, in some preferred embodiments of the invention, include a haptic feedback device in the finger slider sensor/haptic feedback device 19c, which senses the movement of the finger slider 19a, and which can move the forceps tool 15b back and forth in a direction parallel to the longitudinal direction of the said tool 15b. As the operator slides the finger slider from position 19a to 19b, the operator feels the same resistance that the tool 15b senses when it pulls back tissue that it grasps, in response to the pulling back of the said slider 19a.
[0167] The faux forceps, illustrated in FIG. 9 can transform its function from forceps to any other tool or instrument that is required. For example the same faux forceps, illustrated in FIG. 9 can act as a controller for a scalpel tool 15d, a wrench 27 (illustrated in FIG. 13), or any other tool or instrument, in which the various controls 19, 19a, 20, 21 of the wand are programmed to have different, but usually analogous, functions for each particular tool. The operator can select a particular tool by pressing a particular footswitch, a switch on the wand 2, or other switch location. All tools available and the selected tool may be presented as icons on the operator viewer 8, through the three dimensional eyepieces 9, an example of which is illustrated in FIG. 10 as detail 10h. For example, the selected tool might be bolded as the forceps icon 26b is bolded for the left hand wand 2 in the detail 10h, while the wrench tool icon 27b is bolded, for the right hand. Once selected, by the operator, the other various controls 19, 19a, 20, 21 and other controls, would be assigned to various analogous functions. The operator might call up on the viewer 8 a summary of which controls on the wand relate to what actions of the tools 15b, 15c, 15d, or other applicable tools or actions. All icons may be switched off by the operator to maximize his viewing area through the eyepieces 9.
[0168] Some embodiments also include means for reducing latency and accommodating to the motion of the subject.
[0169] Further details of the embodiments will now be discussed with particular reference to the FIGS.
[0170] FIG. 1 illustrates operator's hand 6 controlling the motion of the wand 2 within a sensor array 1, comprised of five rectangular segments, forming an open-sided box. FIG. 1 also illustrates the narrow light beams 4 emanating from the light-emitting cluster 3, and projecting spots of light 5 on the light sensors on the inside of the sensor array 1. The light-emitting elements 3a, that comprise the light-emitting cluster 3, are usually positioned such that the narrow beams of light 4 that they emit form a unique pattern, so as to aid in identifying the particular wand 2 that is being used. Various embodiments contain various numbers of light-emitting elements, depending upon the accuracy required and whether dedicated information carrying beams are used. Any shape of sensor array 1 can be utilized, and those illustrated in FIG. 1, FIG. 2, FIG. 6a and FIG. 6a1 are only intended to be examples of a large class of sensor array shapes, sizes and arrangements. The density of pixels or discrete sensors comprising the sensor array 1 will vary depending upon the use to which the robot is put.
[0171] FIG. 3 illustrates the three dimensional viewer 8 which includes two eyepieces 9 and feedback information 10 which is superimposed on the image of the work area. As illustrated in FIG. 4 and FIG. 5 the size and orientation of the vectors 10d and 10f, and the numerical force unit 10e and 10g can be computer generated to graphically report the changing forces acting on the working end of the robot's tool that corresponds to the wand that is being manipulated. In some embodiments, these vectors are three dimensional views, such that the vector position will correspond with the forces acting on the three dimensional view of the instruments, viewed through the viewer 8. The viewer 8 may superimpose feedback information on additional wands on top of the three dimensional view of the work area. These superimposed views may of course be resized, repositioned, turned on and off by the operator. The view of the work area is captured by a three dimensional camera 15c, as illustrated in FIG. 6, which transmits the image information along transmitting means 11c to the computer 11 and viewer 8. The position of the camera, like that of any robot tool may be controlled by a separate wand 13, such as that illustrated in FIG. 6, or be controlled by a multi-purpose wand, which changes its function and the tool it controls, by a mode selecting control such as switch 20, which is incorporated into the wand 2, as illustrated in FIG. 7. The camera may also be programmed to keep both tools 15b and 15d in a single view, or selected tools in a single view. This automatic mode may be turned on or off by the operator, who may then select a wand controlling mode. The feedback reporting means may be presented in many ways and that described is meant to be an example of similar feedback reporting means, all of which come within the ambit of the embodiments described herein.
[0172] In some embodiments the viewer 8 is attached to a boom support, so that it may be conveniently placed by the operator. Various preferred embodiments place the controls 11e on the console 11d which is adjacent to the sensor array 1 and the wands 2, but they may also include foot switches 12, one of which is illustrated in FIG. 6. It can be readily appreciated that the computer 11 may be replaced with two or more computers, dividing functions. For example, the sensor array 1, wands 2, one computer 11 and viewer 8 may communicate at a significant distance with the second computer 11 and work site robot controller 15. This connection could be a wideband connection which would allow the operator to conduct a procedure, such as an operation from another city, or country.
[0173] The wands 2 and 2b illustrated in FIGS. 7, 8, 9 and 12 are meant to be examples only, and other embodiments would have different shapes and controls and still be within the ambit of the embodiments described herein; for example, some embodiments may have a revolver shape. FIG. 7 illustrates the principal components of one embodiment. The wand 2 in FIG. 7 contains a rechargeable battery 17 to supply power to the various functions of the wand 2. The terminals 17a extend beyond the wand and provide contacts so that the wand may recharge when placed in a docking station, which may accommodate the other wands when not in use. Transmission means 17b provides power to controller/encoder 18 from battery 17. Controls 19, 20 and 20a are meant to be illustrative of control means, to switch modes of operation, such as from a cauterizing scalpel to a camera or forceps, and/or to vary the heat of the cauterizer or the force applied to the forceps grippers, to name just a few examples. In those cases where the robot arms are snake-like, these controls 19, 20 and 20a, or similar controls, may control the radius of turn, and location of turns, of one or more of the robot's arms. In FIG. 7, transmission means 19a connects the lever control 19 to the controller/encoder 18; transmission means 20b connect the controllers 20 and 20a to the controller/encoder 18.
[0174] The controller/encoder 18 in some embodiments pulses one or more of the light emitters 3a to pass on control information to the computer, via the sensor array 1, as mentioned above. Transmission means 3b connects the emitters to the controller/encoder 18. The light-emitting array 3 may contain discrete emitters; they may also be lenses or optical fibers that merely channel the light from another common source, for example, a single light-emitting diode or laser. Other wireless means may be included in the wand 2, which require an aerial 16a which communicates with aerial 16 in communication with the computer 11, as illustrated in FIG. 6.
[0175] While the wands illustrated are wireless, it should be understood that various embodiments may have wired connections to the computer 11 and/or to a power source, depending upon their use, and these embodiments come within the ambit of the invention. In some embodiments, such as those in which the wand 2 is connected directly to the computer 11, the controller/encoder 18 and all or part of its function may be incorporated into the computer 11.
[0176] FIG. 8 illustrates a faux set of forceps 2, which gives the operator or surgeon the feel of the forceps he may use later in the same procedure, or on another day when the robot is not available or suitable for the operation. FIG. 8 is meant to be illustrative of designing the wand to resemble instruments or tools that would otherwise be used in a manual procedure. This allows the skills learned using these devices to be applied when controlling a robot and dramatically reduces the learning time required to use the robot effectively. While embodiments may include wands of many shapes and configurations, those that resemble, in function or appearance, the tools or instruments that are normally used are particularly useful in situations where the operator must carry out similar procedures both manually and by robot.
[0177] FIG. 8 illustrates a faux forceps wand 2 which has two finger holes 21, one of which pivots at the controller/feedback device 21b, which detects motion of the movable finger hole 21. This motion is transmitted by transmission means 21d to the controller/encoder 18, which then transmits it wirelessly, or directly, to the computer 11, or encodes pulses by modulating the output of the light emitters 3a, the light beam produced transmitting the motion and position of the movable finger hole 21 to the sensor array, and subsequently the computer 11. FIG. 8 also illustrates an alternative method of detecting and transmitting changes in the position of the various control elements on the wand 2. Emitter(s) 3a may be placed on the movable elements, such as the finger hole 21. The projected light 4 that is incident on the sensor array 1 or surface 1b may then be used by the computer 11 to determine the position of the moving element, such as the finger hole 21, as it moves, as illustrated in FIG. 8. This method of detecting and reporting the movement of control elements may be used with any such elements contained in various embodiments of the invention. For diagrammatical simplicity, the connection from the light emitter 3a on the finger hole 21 to the controller/encoder 18 has not been shown.
[0178] The controller/feedback device 21b may also receive instructions, wirelessly or by direct connection, from computer 11, which directs the magnitude and direction of haptic feedback forces on the pivoting action of the movable finger hole 21. These haptic feedback forces can be passive or active, depending upon the design of the controller/feedback device. In some embodiments, no haptic feedback component is incorporated into the controller/feedback device; in these embodiments the controller/feedback device 21b merely transmits motion and position data of the movable finger hole 21 to the computer 11, either via the sensor array, or wirelessly or directly.
[0179] FIG. 8 also illustrates a notional end 4a for the wand 2 which the operator sets at the console 11d to allow for sufficient room between the ends of the wands 2, when the tools are in close proximity.
[0180] FIG. 8a, and detail drawings 8b and 8c, illustrate a wand 2b similar to that of FIG. 7, but instead of multiple fixed emitters 3a there are one or more emitters 3a, the beam(s) 4 of which are redirected by a mirror(s) 3d or other beam redirecting device. In this embodiment, the controller/encoder 18 directs each mirror 3d in the mirror array 3e, housed in a transparent housing 3f and secured to it by rod supports 3g, to redirect part or all of the beam 4 produced by the emitter 3a. As illustrated in FIG. 8b, the controller/encoder 18 and/or the computer 11 selects each mirror 3d1 and varies its angle relative to the mirror array 3e (one at a time or in groups) and, with other mirrors in the array, directs the beam(s) in a programmed sequence, noting the angle of the projected beam relative to the wand 2b and simultaneously comparing this to the point(s) 5 detected on the surface 1b, and by mathematical means, including trigonometric methods, defining from every selected pair, at that point in time, the position of the wand relative to the surface 1b (or sensor array 1 in those embodiments where a sensor array is used to detect the spot 5). Embodiments include all means of redirecting the beam 4, including solid state electronic mirror arrays, such as those developed by Texas Instruments Corp., and mechanical or other optical redirecting devices well known to the art. The solid state mirror arrays developed by Texas Instruments Corp. may incorporate any number of mirrors, up to many thousands, each of them, or groups of them, being controlled by electronic means. These devices belong to a larger class known as microelectromechanical systems (MEMS). Because the beam can be programmed to quickly produce multiple pair inputs at various angles, for mathematical comparison as described above, the controller/encoder 18 and/or computer 11 can calculate the position of the wand 2 in three dimensional space at each point in time. The beam may be directed in various patterns, and the pattern may be adapted so as to maximize the coverage on the sensor array 1 or surface 1b and to minimize or eliminate occasions in which the beam would fall incident outside the perimeter of either the sensor array 1 or the surface 1b.
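The pairing of emission angle with detected spot, described above, reduces to elementary trigonometry. A minimal one-dimensional sketch in Python, assuming two beams with known angles from the surface normal and spot coordinates reported by the sensor array (all names are illustrative):

    import math

    def wand_pose_1d(x1, x2, theta1, theta2):
        # A beam leaving a wand at height h and angle theta (from the
        # surface normal) lands at foot + h*tan(theta), where `foot` is the
        # surface point directly beneath the wand. Two angle/spot pairs
        # therefore determine both h and foot.
        h = (x2 - x1) / (math.tan(theta2) - math.tan(theta1))
        foot = x1 - h * math.tan(theta1)
        return h, foot

    # Example: beams at 10 and 30 degrees whose spots land 50 mm apart
    h, foot = wand_pose_1d(0.0, 50.0, math.radians(10), math.radians(30))

The full three dimensional case adds more beam pairs and solves for orientation as well, but the principle is the same comparison of known emission angles against detected spot positions.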
[0181] Other embodiments, such as that illustrated in FIG. 8a, may include a motor or other motive device rotating a mirror or prism, in place of the mirror array 3e, which redirects the beam 4 and, for example, may project an ellipse (when stationary; open curves when the wand 2 is in motion) or another set of curves on the sensor array 1 or surface 1b. In such a case, at every point in time the controller/encoder 18 and/or computer 11 can calculate the position of the wand 2, as at each point in time the angle of the beam emitted, relative to the wand, is known and matched with its paired spot 5 projected on the sensor array 1 or surface 1b at that same point in time. Obviously, the rate of rotation must be sufficient so that every motion of the wand 2 is captured by the sensor array 1 or camera 3c. Since the controller/encoder 18 and/or the computer 11 direct the mirrors in the mirror array and control, at every point in time, the angle at which each mirror is elevated from the mirror array 3e surface, the angle at which the beam 4 is redirected relative to the wand 2 is known, speeding the mathematical calculation described above. As illustrated in FIG. 8c, any number of beams may be actuated at the same time, some being pulsed or panned about, while others may stay on, and may be fixed or set at various angles. For example, FIG. 8c illustrates how mirrors 3d2 and 3d3 may be elevated at different angles, producing divergent beams 4 with a known angle between them. By way of further example, in an embodiment in which the wand 2b incorporates a camera(s), which may be located on various parts of the wand or some other convenient location, some beams may stay on so that the camera 3c can record the surface patterns, which assist in locating the position of the wand 2 in three dimensional space relative to the surface 1b.
[0182] In other embodiments, as illustrated in FIG. 8d, shapes such as circles or ellipses are projected on the sensor array 1 or surface 1b by optical means, such that the changing shapes define the orientation and position of the wand 2b. For example, a single light emitter 3a may include a lens (or lenses), or other optical device, which converts the light beam into a cone, which may project a ring of light, or a field of light having the same outside boundary as the ring type (herein called a filled ring), onto the sensor array 1 or surface 1b. In most embodiments a ring (not filled) is preferred, as the amount of data that requires processing is reduced; however a filled ring or field may be used for some embodiments. The three dimensional orientation and position of the wand 2, 2b may be calculated by comparing the projected shape and the shape detected on the sensor array 1 or surface 1b, by various mathematical means well known to the art, such as projection geometry and trigonometry. For example, a light emitter 3a and dispersing lens which projects a circle onto the sensor array 1 or surface 1b when the longitudinal axis of the wand 2 is normal to the said sensor array 1 or surface 1b may, for example, project a parabola when tilted off the normal. The computer can use this change in shape to calculate the orientation and position of the wand 2 with respect to the said sensor array 1 or surface 1b. It can be readily appreciated that the shapes 5c illustrated in FIG. 8d are in fact equivalent to a string of points 5 illustrated in FIG. 1 and FIG. 6a.
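The circle-to-conic comparison in paragraph [0182] can be illustrated with the simplest case. Assuming a narrow, effectively parallel ring of light rather than the diverging cone of the text, the ring foreshortens into an ellipse whose axis ratio encodes the tilt (a first-order sketch; the diverging-cone case requires full projective geometry):

    import math

    def tilt_from_axes(minor, major):
        # For a parallel ring, the circle painted on the surface stretches
        # by 1/cos(tilt) along the tilt direction, so minor/major equals
        # cos(tilt), where tilt is measured from the surface normal.
        return math.acos(minor / major)

    # Example: detected axes of 40 mm and 50 mm imply a tilt of ~36.9 degrees
    tilt_deg = math.degrees(tilt_from_axes(40.0, 50.0))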
[0183] The advantage is that a single emitter 3a, including a dispersing lens(es), may be used rather than a series of emitters 3a. The other advantage is that there is greater redundancy. On the other hand, a few discrete points of light 5 require far less computation than many points, and where speed of movement is important, a few points of light are preferable. The embodiment illustrated in FIG. 8d may be used with a sensor array 1b in which the projected shape 5c, comprised of spots of light 5, is sensed and reported to the computer 11, or one in which a camera 3c on the wand 2, or remote from it, is used to record the projected shapes 5c. As illustrated in FIG. 8d, where a camera 3c is used for detection, in addition to those means described above for determining the position of the wand 2b, a coded grid 1c may be applied to the surface 1b. The grid may be coded, in a similar way to a bar code, such that the position of the shape 5c or points 5 can be viewed by the camera 3c and their absolute position on the surface can be reported by the camera to the computer 11, to calculate the orientation and the position of the wand 2b in three dimensional space. As illustrated in FIG. 8d, the bar code grid may be formed from two bar coded patterns superimposed at right angles. Any spot on the surface will then have a unique address, defined by the adjacent group of bars. The thickness of the bars and their relative separation from each other may be arranged to encode locational information, by means well known to the art. Since the computer 11 has the same grid in memory, it can make a simple pattern match, or use another method well known to the art, to determine the location of each point of light that forms the shape 5c, or for that matter any spot 5 on which other embodiments of the invention rely, such as those illustrated in FIG. 6a and FIG. 6a1. At any point on the surface, there will be a unique address defined by the two patterns immediately adjacent to the spots 5 and shapes 5c. These patterns form the nearest address to each point at which the spots 5 and shapes 5c are incident. Since the computer has the grid stored in memory, it can then refine the position of each of the incident spots 5 and shapes 5c by noting their displacement from the nearest bars, the exact position of which is in the computer memory. Some spots 5 and shapes 5c may by happenstance fall on the intersection of two bars, in which event the displacement calculation may not be necessary. It should be appreciated that while reference has been made to a bar code type of indexing system, other encoding schemes may be used in other embodiments and be within the ambit of the embodiments described herein.
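One plausible way to realize the unique-address property of the coded grid is to make each axis a pattern in which every fixed-length window of bars occurs exactly once (a De Bruijn-style code). This encoding is an assumption for illustration; the disclosure only requires that adjacent bars form a unique address:

    def absolute_position(seen_x, seen_y, code_x, code_y, pitch_mm=1.0):
        # code_x and code_y are the stored bar patterns of the two
        # superimposed, orthogonal codes, written as strings. Because each
        # window is assumed unique, the bars seen next to a light spot
        # identify its absolute grid cell; the computer 11 then refines the
        # position within the cell from the spot's offset to the bars.
        ix = code_x.index(seen_x)
        iy = code_y.index(seen_y)
        return ix * pitch_mm, iy * pitch_mm

    # Example with toy 0/1 bar patterns and a window length of three bars:
    x_mm, y_mm = absolute_position("110", "011", "0011101000", "0100111010")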
[0184] FIG. 9 illustrates a wand 2 that includes a sliding finger control 19a with an associated controller/feedback device 19c, which functions in a similar manner to the movable finger hole 21, except that the sliding finger control 19a provides a convenient means of conveying linear motion to the robot tools. In the example illustrated in FIGS. 9 and 11, when the sliding finger control 19a is moved to position 19b, a distance of 19d, the controller/feedback device instructs the computer 11 to cause the tool, in this example 26, to move a given distance 19d in a similar linear direction, as assumed by 26a in FIG. 11. As mentioned above, the operator may set the ratio between the motion of the sliding finger control 19a and the consequent motion of the tool 26, and thus these distances may differ while remaining proportional. Simultaneously, the operator may squeeze the finger hole control 21 to position 21c, a displacement of 21d, to instruct the fingers of tool 26 to close a distance of 21d and assume the configuration of 26a in FIG. 11. As referred to above, haptic feedback may be provided by the controller/feedback device 21b by the means described above.
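The operator-set ratio between control travel and tool travel amounts to a simple scaling step in the control path; the ratio value below is an assumption for illustration:

    def tool_travel(control_travel_mm, ratio=0.2):
        # Motion scaling: at an assumed ratio of 0.2, a 10 mm slide of the
        # finger control 19a commands 2 mm of linear tool motion, trading
        # range for precision while preserving direction.
        return control_travel_mm * ratio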
[0185] FIG. 10 illustrates the operator viewer 8, while the tool 26 is being manipulated, as illustrated in FIGS. 9 and 11. In this example the operator is manipulating wand 2 in his left hand. The left tool icon display 10h has bolded tool icon 26b, which indicates that the operator has chosen tool 26 to be controlled by his wand, such as that illustrated in FIG. 9. The right tool icon display 10h has bolded tool icon 27b, which indicates that the operator has chosen tool 27, as illustrated in FIGS. 12, 13 and 14, to be controlled by his wand 2, such as that illustrated in FIG. 9.
[0186] FIGS. 12, 13, and 14 illustrate that rotary motion at the tools can be controlled from a wand, such as that illustrated in FIGS. 9 and 12. In this example of the invention, the movable finger hole control 21 can be squeezed by the operator, displacing it a distance of 21d to position 21c, which causes the tool 27 to close a distance of 21d, gripping bolt head 29 and assuming configuration 27a, as illustrated in FIG. 13. Simultaneously, the operator moves the finger slider control 19b a distance of 19d, to assume position 19a, to move the tool forward a distance of 19d, toward the bolt head 29, as illustrated in FIG. 13. The operator may then choose to rotate the bolt head by rotating roller control 20 a distance and direction 20b, to move the tool in direction and distance 20b, to assume position 27c. The controller/feedback device 20c senses the motion and position of the roller control 20, and may impart haptic feedback, in a similar manner as described above in relation to the finger hole control 21.
[0187] While the disclosure and examples of the invention above are in the context of a guiding device that is controlled by the operator's hands, and describe the attitude and position of the wand 2, 2b in three dimensional space, it should be understood that the guiding device may also be used to describe the relative motion of a surface, where the wand or guiding device is fixed or its position is otherwise known. For example, FIG. 15 and FIG. 16 illustrate the movement of the surface 14d1, 14d2 of the heart as it beats. In this case the components of the wand 2, 2b are incorporated into the distal end of the camera tool 15c, although they may be incorporated into any other tool as well, and come within the ambit of the invention. The emitter cluster 3 and emitters 3a may be seen in greater detail in FIG. 18. It should be noted that this example of the emitter cluster 3, which uses any number of emitters 3a, can be replaced with any of the other types of emitter clusters, including mirror arrays or articulating mirrors and prisms, referred to above. The angles between the beams 4, including T1, T2, and T3, and the angles between the beams 4 and the tool 15c, as illustrated in FIG. 18, are known to the computer 11 and are used in calculating the surface topology 14d1 and 14d2 as illustrated in FIG. 18. As illustrated in FIG. 17, the stereo cameras 3c1 and/or 3c2 record the spots 5a and 5b projected on the surface of the heart 14d1, 14d2. It can readily be appreciated that as the heart beats, the surface 14d1 and 14d2 moves up and down, and the spots projected on the surfaces, including 5a and 5b, change their distance from their neighbors 5a and 5b on their respective surfaces. This distance change, along with the angle of the beam, is recorded by the camera or cameras 3c1 and/or 3c2, and this information is processed by the computer 11, which computes the distance of those parts of the surface from the distal end of the camera tool 15c, using trigonometric and other mathematical methods well known to the art. It should be noted that this information also provides the distance between the surface and any other tool, such as 15b and 15d, as illustrated in FIG. 15 and FIG. 16, as the relative position of the tools is known by positional sensors incorporated into the said tools. The more spots 5 (in this illustration referred to as 5a and 5b to denote their change in position) that are projected at any given time, the greater will be the definition of the changing topology of the surface and its distance from the distal ends of the tools 15a, 15b and 15c, and any other tools that may be used. Various shapes or patterns, such as grid patterns, may be projected onto the surface of the heart by various optical means herein described or well known to the art. These shapes or patterns may be considered as strings of spots 5, 5a and 5b.
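The spot-based ranging described above is, in essence, structured-light triangulation. A minimal sketch for a single spot, assuming one emitter and one camera a known baseline apart on the tool (the names and the planar simplification are illustrative):

    import math

    def spot_depth(baseline_mm, alpha, beta):
        # The beam leaves the emitter at angle alpha and the camera sees
        # the illuminated spot at angle beta, both measured from the
        # baseline joining them. The law of sines gives the side from the
        # camera to the spot, and its projection gives the perpendicular
        # depth of the surface point.
        gamma = math.pi - alpha - beta          # angle at the surface point
        side_cam = baseline_mm * math.sin(alpha) / math.sin(gamma)
        return side_cam * math.sin(beta)

    # Example: a 10 mm baseline with alpha = beta = 70 degrees -> ~13.7 mm
    depth_mm = spot_depth(10.0, math.radians(70), math.radians(70))

Repeating this for every projected spot, frame after frame, yields the changing surface topology the computer 11 uses.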
[0188] As the heart beats, and the distance between the distal ends of the tools and the heart surface 14d1 and 14d2 varies, the computer can instruct the tool arms to vary their length to keep the distance between the surface and the distal end of the tools constant (assuming the operator has not instructed any change in tool position). In the example illustrated in FIG. 15 and FIG. 16, the arms are telescoping; for example, the arm 15c, the camera arm, has a distal shaft which can slide in and out of the main arm. In FIG. 15 the distal shaft 15c1 is relatively extended, so that it is located in an ideal position to view the distal ends of the other tool shafts, 15b1 and 15d1, which are positioned, in this example, immediately above the surface 14d1 of the heart. As the surface of the heart moves up, as illustrated in FIG. 16 and FIG. 17, the movement is detected by the changing lateral separation between neighboring dots, such as dots 5a and 5b, and their respective neighboring dots on their respective surfaces. The computer may use this information, applying trigonometric calculations and other mathematical techniques well known to the art, to direct the arms to move up sufficiently to keep the distal ends of the tools, 15b2, 15c2 and 15d2, at the same relative distance to the heart surface 14d2. As can be appreciated, this dynamic adjustment of the tool arm length can effectively compensate for the motion of the beating heart, allowing the operator to control the other tool motions (which overlay the compensating motions) and which actually do the work, just as if the heart were stationary. As mentioned above, lateral movements of the heart surface 14d1 and 14d2 can also be compensated for by using texture and pattern recognition methods utilizing the surface that is illuminated by the spots 5a, 5b and 5 (in addition to areas not so illuminated). For this purpose, the spots 5 may be considerably larger, to incorporate more textural or pattern information. The vertical and lateral means of characterizing the motions of the heart surface can then be integrated by the computer 11 and any motion of the heart surface can be fully compensated for, effectively freezing the heart motion to allow for precise manipulation of the tools, for example, to cut and suture the heart tissue. The integration of this information will provide information on the bending, expansion and contraction of the surface, in addition to (in this example) the changes in elevation of the surface. Fortunately, as the surface that is being worked on by the surgeon is small, this additional characterization (i.e. bending, expansion and contraction) is most often not required. It should be noted that as the camera tool 15c is making compensating motions, the operator's view of the heart surface will remain the same, i.e. the heart will appear to virtually stop, and any more complex movements, i.e. stretching and shrinking and localized motions, may be compensated by software manipulating the image, by means well known to the art. Similarly, rather than the camera tool 15c making compensating motions, the image presented to the operator can, by optical and electronic means, be manipulated to give the same effect. For example, in some embodiments of the invention, the camera lens may be zoomed back as the surface of the heart advances toward it, giving the effect of an approximately stationary surface. The operator may of course choose to override any or some compensating features of the system.
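The constant-standoff behavior is a feedback loop around the telescoping arms. A minimal proportional sketch, with the gain an assumed tuning value:

    def adjust_extension(extension_mm, measured_mm, target_mm, gain=0.8):
        # Each control cycle, correct the arm by a fraction of the standoff
        # error: if the heart surface rises (the measured distance
        # shrinks), the arm retracts, and vice versa. Operator-commanded
        # motion is overlaid on top of this compensation, as described.
        return extension_mm + gain * (measured_mm - target_mm)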
The operator may also choose to select the area of the surface of the heart or other body for which motion compensation is required. This may involve selecting a tool, such as the sensor cluster 3, with varying emitter 3a angles, or instructing the computer to compute only those changes within a designated patch, which might be projected on the operator viewer 8. In most cases the area of relevant motion will be small, as the actual surgical work space is usually small. The operator, or the system, may periodically scan the surface to define its curvature, especially at the beginning of a procedure.
[0189] The stereo cameras 3c1 and 3c2 may also provide distance information, using parallax information and trigonometric and standard mathematical methods well known to the art of distance finders. Other optical methods of distance determination, such as are used in auto-focusing cameras and medical imaging, and well known to the art, may be used as well, and be within the ambit of the invention, such as Doppler detection and interferometry. This information, acquired by all these methods, may be used to supplement or backstop the other distance information, which is acquired by the methods described above and integrated by the computer 11. It should be noted that embodiments that use one or more of these methods are within the ambit of the embodiments described herein.
[0190] In some embodiments, the computer 11 may receive information from the electrocardiogram (ECG) 14c, which has sensors 14e on the patient's abdomen and which indicates that an electrical pulse has been detected, which will result in a muscular response of the heart tissue, and hence a change in the shape and the position of the heart surface. The time delay between receiving the electrical triggering pulse and the actual resulting heart muscular activity, even though small, allows the system to anticipate the motion and better provide compensating motions of the length and attitude of the robot's tools 15b, 15c, and 15d. The system software can compare the electrical impulses, as detected by the ECG, with the resultant changes in the shape and position of the heart wall, as observed by the methods described above, to model the optimum tool motion required to virtually freeze the heart motion. In combination with the methods of motion compensation described above, the inclusion of the ECG triggering information generally allows for a smoother response of the tools to the motion of the surface they are accommodating.
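The anticipatory use of the ECG trigger can be sketched as a lookup into a displacement profile learned from previous beats. The electromechanical delay value and all names are assumptions for illustration:

    def anticipated_offset(t_since_trigger_s, profile_mm, dt_s, delay_s=0.05):
        # The detected electrical pulse leads the mechanical contraction by
        # a small delay (assumed ~50 ms here). Indexing a per-beat
        # displacement profile by time since the trigger lets the arms lead
        # the surface motion rather than chase it.
        t = t_since_trigger_s - delay_s
        if t < 0:
            return 0.0
        i = min(int(t / dt_s), len(profile_mm) - 1)
        return profile_mm[i]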
[0191] It can be readily appreciated that the system herein described allows many surgical procedures to be conducted without resort to a heart lung machine or to other heart restraining devices, all of which can have serious side effects.
[0192] It should be readily appreciated that embodiments that compensate for the motion of bodies being manipulated, whether fine grained or coarse grained (as chosen by the operator), inherently reduce the effects of latency between the operator's instructions and the motion of the tools which he guides. This effective reduction or elimination of latency means that telesurgery over great distances, where latency increases with distance, becomes more practical. The system's software distinguishes between operator generated motion, such as the lifting of a tissue flap, and non-operational motion, such as the beating of the heart. Generally, the former is much finer grained and the latter larger grained. For example, the software may set the compensating routines to ignore small areas of motion where the procedure is being executed, such as the suturing of a flap, but compensate for grosser motions, such as the beating of the heart, which cause a large surface of the heart to move. The design of this software, and the relative sizes and locations of the bodies to which the compensation routine responds or which it ignores, will depend upon the particular procedure for which the system is being utilized.
[0193] FIG. 21 illustrates an embodiment which includes additional means to overcome temporal latency between the operator's instructions and the actual tool movements, any of which may be used separately or in combination with the others. FIG. 21 illustrates the operator's view of the worksite, as viewed through the viewer 8 and eyepieces 9, showing the superimposed tool cursors 15d3 and 15b3 which indicate the operator's intended position of the tools at the worksite. These cursors are not ordinary cursors; they show the exact intended position of the working edges of the tools they control. FIG. 21 also illustrates that the operator sees the latest reported actual position of the tools 15d2 and 15b2 at the worksite, the difference between the two being due to temporal latency. The superimposed tool cursors 15d3 and 15b3 can be electronically superimposed onto the operator's view; these show the intended position, while 15d2 and 15b2 show the most recently reported actual position. In most preferred embodiments the cursors are rendered in 3-D and change perspective to conform to the 3-D view of the worksite; they are simple outlines, so as not to be confused with the images of the actual tools, and may be manually turned on and off, or automatically presented when the system detects that latency has exceeded a preset threshold. The intended tool position cursors 15d3 and 15b3 may also change color or markings to indicate the depth to which they have passed into the tissue, as indicated at 15d4 in FIG. 21. The cursors 15d3 and 15b3 may also change color or markings in response to forces acting on the actual tools 15d2 and 15b2, so as to prevent the operator from exceeding a safe threshold for the particular substrate he is manipulating.
[0194] FIGS. 29a to 29e illustrate an example method of limiting the effects of latency in the transmission of tool instructions and of movement of the body relative to the position of the tools at the remote worksite. Each video image at the worksite, FIG. 29b, is recorded, time coded, and transmitted to the operating theatre, along with the time code for each video frame. The operator at the operating theatre then sees the video frame, FIG. 29a, and causes the tool 15d2 to advance along the incision 14a, which he views as an icon 15d3 in FIG. 29c, the displacement between 15d3 and 15d2 being the measure of latency. The positions of the cursors, that is, the intended tool positions, are transmitted to the remote worksite along with the corresponding frame time-code of the operator's video frame at each time step. In most embodiments of the invention, the time-code is originally encoded onto the video stream at the remote work site by the remote worksite robot controller 15, which also saves the corresponding video frame(s) in memory. As a separate process, and at each time step at the remote work site, the position of the tools is adjusted to accommodate their intended position relative to the changing position of the body, as described above, which is illustrated as the accommodation of tool position 45 in FIG. 29d and becomes the real time image for the comparison to follow. Upon receiving each instruction from the operator, the worksite controller 15 retrieves from memory the corresponding video frame and notes the intended machine instruction relative to it. It then compares this frame, FIG. 29b, retrieved from memory, with the real time image at the remote worksite, FIG. 29d, and carries out the intended machine instruction embedded in FIG. 29c, resulting in the performance of the intended instruction as illustrated in FIG. 29e. This comparison may be accomplished by pattern recognition methods well known to the art, which note the relative location of such features as protruding veins and arteries and other visible features. In some embodiments, markers suitable for optical marker recognition 40 are placed on or detachably attached to the operation surface, such as the heart 14d, to assist in tracking movements of the worksite. While the normalization process, including pattern recognition and the other means noted above, imposes a system overhead on computations, the area that is monitored and the precision of monitoring can be adjusted by the operator. The area immediately adjacent to the present tool position can have, for example, fine grained monitoring and normalization, whereas more peripheral areas can have, for example, coarser grained treatment.
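The frame/time-code bookkeeping of FIGS. 29a to 29e can be condensed into a short sketch. The `track` helper stands in for the pattern or marker matching the text describes and, like the data layout, is an assumption:

    def execute_with_reregistration(instr, frame_store, live_frame, track):
        # Retrieve the frame the operator was actually viewing when the
        # instruction was issued (keyed by its time-code), estimate how the
        # worksite has drifted since then by matching features or markers
        # between that frame and the live image, and shift the intended
        # tool target by that drift before acting on it.
        reference = frame_store[instr["timecode"]]
        dx, dy = track(reference, live_frame)   # assumed matching helper
        x, y = instr["target"]
        return (x + dx, y + dy)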
[0195] As illustrated in FIG. 21 and FIG. 29c, the operator's intended movement of the tools, as illustrated to him by cursors 15b3 and 15d3, may diverge from the actual tools that he views, 15b2 and 15d2, the difference being the latency between the two. The operator will immediately know the degree to which latency is occurring, and he may choose to slow his movements to allow the actual tools, 15b2 and 15d2, to catch up. In some embodiments the system stops in the event a preset latency threshold is exceeded. It is important to note that the operator, when he stops a tool, will know where it will stop at the worksite. For example, in FIG. 21 the operator is making an incision which must stop before it transects artery 38. Even though the tool 15d2 will continue to move forward, it will stop when it meets the intended tool position indicated by cursor 15d3, just short of the artery 38. While this disclosure has described cursors resembling a scalpel and forceps and their corresponding cursors, it should be understood that these are merely examples of a large class of embodiments, which include all manner of tools and instruments and their corresponding cursors, and all are within the ambit of this invention.
[0196] FIG. 19 and FIG. 20 illustrate two exemplar passive haptic feedback modules that can be incorporated into the controller/feedback controllers in the wand 2, such as 19c, 20c and 21b. Other haptic feedback devices, well known to the art, whether active or passive, may be incorporated into the controller/feedback controller, and all such systems are within the ambit of the invention.
[0197] FIG. 19 illustrates a typical passive haptic feedback device 30, in which the flow of an electrorheological or magnetorheological fluid is controlled by an electrical or magnetic field between elements 36, which can be electrodes or magnetic coils. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31. The piston is connected to, and transmits motion and forces to and from, the various control input devices on the wand 2, for example, the movable finger hole 21, the finger slider control 19b and the roller control 20. The total displacement of the piston 19d may, for example, be the same as the displacement 19d of the finger slider control 19b, or may vary depending upon the mechanical linkage connecting the two. The working fluid flow 35 passes from one side of the piston 31a to the other through a bypass conduit 32, where its flow may be restricted or eased by varying the electrical or magnetic field imposed on the electrorheological or magnetorheological fluid. The controller/encoder modulates the electrical energy transmitted by transmitting means 34a to the electrodes or coils 36. In other passive haptic feedback devices, a simple electromechanical valve 37 controls the flow 35 of the working fluid, which may for example be saline or glycerin, as illustrated in FIG. 20. The controller/encoder modulates the electrical energy transmitted to the electromechanical valve 37 by transmitting means 37a, as illustrated in FIG. 20.
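Because the device is passive, the controller/encoder can only vary how hard the piston is to move, never push back on its own. A sketch of the drive mapping, with the current limit an assumed hardware value:

    def damper_current(resistance_cmd, i_max_amp=2.0):
        # Clamp the commanded resistance (0 = free, 1 = maximum drag) and
        # scale it to the electrode or coil drive that stiffens the
        # electrorheological or magnetorheological fluid in the bypass
        # conduit 32.
        return max(0.0, min(resistance_cmd, 1.0)) * i_max_amp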
[0198] In both the haptic feedback devices 30 illustrated in FIGS. 19 and 20, a motion and position sensor 33, transmits information on the motion and position of the piston 31a by transmission means 34 to the controller/encoder 18. The controller/encoder 18 receives instructions wirelessly 16a, or directly from the computer, and sends motion and position information received from the motion and position sensor 33 to the computer.
[0199] FIG. 22 illustrates a wand 2b which may be attached to any body part, tool, or other object by means of connectors 42 and 42a, which have complementary indexing means 42c and 42b to ensure their proper alignment. By such means, and similar connecting means well known to the art, these wands 2b may be placed on a body part, such as the surface of the heart 14d1, to project the beams 4 to a sensor array 1 or surface 1b (not shown) and thereby establish the orientation and position of the heart as it moves. Similarly, a wand 2b may be connected to any object to determine its position and orientation in space, together with the means hereinbefore described, in cooperation with computer 11.
[0200] FIG. 23 illustrates how multiple wands 2i, 2ii may be used in combination to provide accurate alignment between two or more objects in space. In the example of FIG. 23, one wand 2i is connected to a drill 44. The other wand 2ii is connected to a bone nail 45 with a slotted proximal end, for indexing position, which has a hidden hole 46 that will receive a bolt once a hole is drilled through the bone and the hidden hole 46 in direction 41. Since the position and orientation of the hidden hole 46 relative to the end of the bone nail connected to the wand 2ii is known, the operator can drill a hole along an appropriate path, which is provided by computer 11 calculating the appropriate path and graphically illustrating it with a graphical overlay of the bone shown on viewer 8. The position of the wands 2i and 2ii in space is determined by those means hereinbefore described. While FIG. 23 illustrates a single sensor array 1, it should be understood that any number of sensor arrays or surfaces 1b might be used, so long as their position and orientation are known to the computer 11, and in the case of a surface 1b, the camera 3c, which would be incorporated into the assembly as illustrated in FIG. 22, can identify each screen by means of identifying barcodes or other identifying marks. In FIG. 23, the sensor array 1 is above the operating space. FIG. 23 also illustrates two connectors 42a that are fixed to a calibrating table 43, which is calibrated in position to sensor array 1. This permits the wands 2i and 2ii to be connected to the said connectors 42a on calibrating table 43 to ensure accurate readings when ambient temperature changes might affect the relative angles of the beams 4, or the distance between emitters 3a. The computer 11 can recalibrate the position of the wands 2i and 2ii by noting the pattern of spots 5 that are projected onto the sensor array 1. While the example shown in FIG. 23 illustrates two wands 2i and 2ii, any number of wands may be used for purposes of comparing the position of objects to which they are connected, or changes in the position of those objects over time. For example, one wand might be connected to the end of a leg bone, while another might be attached to a prosthesis, and the two might be brought together in perfect alignment. Another example would be connecting a wand 2i to a probe of known length, and another wand 2ii to a patient's skull, in a predetermined orientation. The wand 2i could then be inserted into the brain of the patient and the exact endpoint of the probe could be determined. The wand 2i could also be attached to the tools 15b1, 15c1 and 15d1, as illustrated in FIG. 15, to ensure perfect positioning of the tools. For example, one tool might have a drill attached, such that the drill illustrated in FIG. 23 is controlled robotically and in coordination with the position of the bone nail 45 of FIG. 23. Due to modern manufacturing processes, the wands 2b illustrated in FIG. 22, the wand 2i illustrated in FIG. 23, and the sensor array assemblies 1d illustrated in FIG. 24 can be made very small and placed as an array on objects such as cars, bridges or buildings to measure their stability over time. Others might be connected to the earth to measure seismic or local movements of the soil.
These wands 2b, 2i, might also be connected to scanners to allow for the scanning of three dimensional objects, since these wands can provide the information as to the scanner's position in space; the scanning data can be assembled into a virtual three dimensional output. Since the wands 2b and 2i may be put on any object, the uses for assembling objects are countless.
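The drill-guidance example reduces to one rigid-body transform once the nail wand's pose is known. A sketch assuming the pose is recovered as a 4x4 homogeneous matrix and that the hidden hole's offset and axis are known in the nail's own frame (all names illustrative):

    import numpy as np

    def drill_line_world(nail_pose, hole_offset_mm, hole_dir):
        # nail_pose: 4x4 transform of the bone-nail wand, recovered from
        # the sensor array measurements. One rotation and translation map
        # the hole's entry point and axis into world coordinates, ready to
        # overlay as the drilling path on viewer 8.
        M = np.asarray(nail_pose, dtype=float)
        R, t = M[:3, :3], M[:3, 3]
        entry = t + R @ np.asarray(hole_offset_mm, dtype=float)
        axis = R @ np.asarray(hole_dir, dtype=float)
        return entry, axis / np.linalg.norm(axis)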
[0201] While FIG. 23 illustrates a system in which the camera 3c is located in the wand 2, it should be understood that a surface 1b, as illustrated in FIG. 6a1, or a separate camera 3c could be used, as illustrated in FIG. 6a2, all of which can detect the position of the incident spots 5.
[0202] FIG. 24 illustrates a similar arrangement of wands 2i and 2ii to that illustrated in FIG. 23, but with the wand 2ii replaced by sensor array assembly 1d. The sensor array assembly 1d uses a sensor array 1, which senses the positions 5 of the incident beams 4 and reports their coordinates by connection 11a to controller/encoder 18 and then wirelessly to the computer 11 (not shown). This system provides the same positional information as the system illustrated in FIG. 23, except that the large sensor of FIG. 23 has been replaced with a much smaller sensor in FIG. 24, making it more economical for certain purposes.
[0203] Referring to FIG. 25, a cross-sectional, perspective view illustrates two combination wand and sensor array assemblies 47 which have been daisy chained with two other combination units (not shown). Such arrays may also be combined with sensor arrays 1 or surfaces 1b for greater accuracy. Such arrays can be used to detect and report the relative movement of parts of structures, to which they are attached, such as bridges, ships and oil pipelines.
[0204] While embodiments have been described with respect to a system comprised of three tools 15b, 15c, and 15d, it is to be understood that any number of tools and any number of wands 2 may be used in such a system.
[0205] While embodiments have used examples of tools that a robot could manipulate, it is to be understood that any tool, object or body may be moved or directed by the methods and devices described by way of example herein, and all such embodiments are within the ambit of the embodiments herein.
[0206] While embodiments have been described as being used as a surgical robot, it is to be understood that this use is merely used as a convenient example of many uses to which the robot could be employed, all of which come within the ambit of the embodiments described herein.
[0207] While embodiments have been described as being used to manipulate tools, it is to be understood that the methods and devices described by example herein may be used to manipulate virtual, computer generated objects. For example, embodiments may be used for assembling and/or modeling physical processes, such as molecular modeling and fluid dynamics modeling to name just a few.
[0208] It is to be understood that modifications and variations to the embodiments described herein may be resorted to without departing from the spirit and scope of the invention, as those skilled in the art will readily understand. Such modifications and variations are considered to be within the purview and scope of the invention and the appended claims.
Claims
1. (canceled)
2. A hand controller for a robotic surgery system, the hand controller comprising: a body extending linearly between a proximal end and a distal end; a first control pivotally attached to the body, the first control configured to permit a user to control a first function of a surgical instrument; and a second control supported by a side of the body at a location between the proximal end and the distal end, the second control configured to linearly move in a direction parallel to a length of the body to permit a user to control a second function of the surgical instrument, the second function being different from the first function.
3. The hand controller of claim 2, wherein the first control comprises at least one lever.
4. The hand controller of claim 2, wherein the first control comprises an opening configured to receive a finger therethrough.
5. The hand controller of claim 3, wherein the first function of the surgical instrument comprises adjusting a distance between first and second jaws of an end effector of the surgical instrument.
6. The hand controller of claim 5, wherein the distance between the first and second jaws of the end effector of the surgical instrument is adjusted responsive to movement of the at least one lever.
7. The hand controller of claim 2, wherein the second function of the surgical instrument comprises a linear movement of an end effector of the surgical instrument.
8. The hand controller of claim 7, wherein: a linear movement of the second control causes the linear movement of an end effector.
9. The hand controller of claim 2, wherein the second control comprises a slidable control.
10. The hand controller of claim 2, wherein operation of one or both of the first control and the second control causes a haptic feedback to be provided to the user.
11. The hand controller of claim 10, wherein the first function of the surgical instrument comprises adjusting a distance between a first jaw and a second jaw of an end effector of the surgical instrument, and wherein the haptic feedback provided to the user comprises pinching feedback that simulates a force applied by the first jaw and the second jaw.
12. The hand controller of claim 2, further comprising a spring configured to bias the first control.
13. The hand controller of claim 12, wherein the spring is configured to bias the first control to a position in which the first function of the surgical instrument is not activated.
14. The hand controller of claim 13, wherein the position comprises an open position.
15. A hand controller for a robotic surgery system, the hand controller comprising: a body extending linearly between a proximal end and a distal end; a first control pivotally coupled to the body, a finger ring attached to the first control and configured to receive a finger of a user's hand therethrough, the first control configured to cause activation of a first function of a surgical instrument; and a second control supported by a side of the body at a location between the proximal end and the distal end, the second control configured to slide along the side of the body to cause activation of a second function of the surgical instrument, the second function being different from the first function.
16. The hand controller of claim 15, wherein the first control comprises a lever pivotally attached to the body.
17. The hand controller of claim 15, wherein the first function of the surgical instrument comprises adjusting a distance between first and second jaws of an end effector of the surgical instrument.
18. The hand controller of claim 17, further comprising a second finger ring on an opposite side of the body from the first control, the second finger ring configured to receive another finger of the user's hand therethrough.
19. The hand controller of claim 15, wherein operation of one or both of the first control and the second control causes a haptic feedback to be provided to the user.
20. The hand controller of claim 19, wherein the first function of the surgical instrument comprises adjusting a distance between a first jaw and a second jaw of an end effector of the surgical instrument, and wherein the haptic feedback provided to the user comprises pinching feedback that simulates a force applied by the first jaw and the second jaw.
21. A hand controller for a robotic surgery system, the hand controller comprising: a body extending linearly between a proximal end and a distal end; a first control pivotally attached to the body, the first control configured to permit a user to control a first function of a surgical instrument; and a second control supported by a side of the body at a location between the proximal end and the distal end, the second control configured to receive an input in a direction generally parallel to a length of the body to permit a user to control a second function of the surgical instrument, the second function being different from the first function.
BOOM!
METHOD FOR CONTROLLING AN ARTICULATING INSTRUMENT
DOCUMENT ID: US 20220401088 A1
DATE PUBLISHED: 2022-12-22

INVENTOR INFORMATION
Dean; Jesse Thomas (Raleigh, NC, US)
Genova; Perry A. (Chapel Hill, NC, US)

APPLICANT INFORMATION
Titan Medical Inc. (Toronto, CA), type: assignee

APPLICATION NO: 17/807661
DATE FILED: 2022-06-17

DOMESTIC PRIORITY (CONTINUITY DATA)
us-provisional-application US 63212716 20210620

US CLASS CURRENT: 1/1

CPC CURRENT (TYPE, CPC, DATE)
CPCI, A61B 34/37, 2016-02-01
CPCI, A61B 17/00234, 2013-01-01
CPCA, A61B 2034/741, 2016-02-01
CPCA, A61B 2017/00314, 2013-01-01
CPCA, A61B 2090/502, 2016-02-01
CPCA, A61B 2034/302, 2016-02-01
Abstract
A method for controlling an articulating surgical instrument is disclosed. The instrument includes a manipulator and a positioner actuable to position a distal segment within an instrument workspace. The manipulator is attached to the distal segment and includes a distal end configured for mounting an operational tool for performing an operation within the instrument workspace, the manipulator being actuable to manipulate the distal end of the manipulator. The method involves receiving input including position input signals representing a position within an input workspace and orientation input signals representing an orientation within the input workspace, generating position control signals for actuating the positioner to move the distal segment within the instrument workspace to a physical position represented by the position input signals, and generating manipulation control signals based on the orientation input signals for actuating the manipulator to orient the distal end within the instrument workspace.
Background/Summary
TECHNICAL FIELD
[0001] This disclosure relates generally to a surgical instrument apparatus for performing a surgical procedure within a body cavity of a patient.
DESCRIPTION OF RELATED ART
[0002] Robotic surgery systems commonly employ one or more articulating instruments to perform surgical functions within a surgical site in a body cavity of a patient. The articulating instruments may be controlled by a processor circuit that receives inputs from an input device having some means of sensing movements of a surgeon's hands. For example, the input device may include a pair of hand controllers that are grasped in the surgeon's hand and moved to cause corresponding movement of the articulating instruments. It is generally desirable that the movement of the articulating instruments closely mimic the surgeon's hand movements so that performing operations in the surgical site is intuitive.
[0003] In commonly owned PCT patent publication WO2016176755A1 filed on Dec. 15, 2015, a method for controlling a dexterous tool in a robotic control system is disclosed. An input device having a handle capable of translational and rotational movement is used to control a tool positioning device of the dexterous tool, to position and orient an end effector in response to the position and orientation of the handle. In this control method the location of the end effector is thus determined by the position of the handle of the input device. A processor circuit performs inverse kinematic transformations on the position of the end effector to generate actuation signals for the tool positioner to cause the end effector to move to a physical position corresponding to the position and orientation of the handle. The control of the tool positioning device is intuitive in the sense that the position and orientation of the end effector are substantially similar to the position and orientation of the surgeon's hands. However, by taking the end effector as the point of focus for controlling the tool positioner, the location and position of other portions of the tool positioner are not controlled by the surgeon, but rather calculated by the processor circuit to position the end effector. There remains a need for a mode of controlling a tool positioner that provides the surgeon with control over other portions of the tool positioner in addition to control over the position of the end effector.
SUMMARY
[0004] In accordance with one disclosed aspect there is provided a method for controlling an articulating surgical instrument. The instrument includes a manipulator and a positioner, the positioner being actuable to position a distal segment of the positioner within an instrument workspace. The manipulator is attached to the distal segment of the positioner and includes a distal end configured for mounting an operational tool for performing an operation within the instrument workspace, the manipulator being actuable to manipulate the distal end of the manipulator within the instrument workspace. The method involves receiving input signals at a processor circuit, the input signals including position input signals representing a position within an input workspace, and orientation input signals representing an orientation within the input workspace. The method also involves causing the processor circuit to generate position control signals for actuating the positioner to move the distal segment within the instrument workspace to a physical position represented by the position input signals. The method further involves causing the processor circuit to generate manipulation control signals based on the orientation input signals for actuating the manipulator to orient the distal end within the instrument workspace.
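The disclosed split of duties can be summarized in a few lines: translational input drives the positioner, rotational input drives the manipulator. The two inverse-kinematics solvers below are assumed interfaces for illustration, not an implementation from the disclosure:

    def control_step(position_in, orientation_in, positioner_ik, manipulator_ik):
        # Position input signals place the positioner's distal segment;
        # orientation input signals orient the manipulator's distal end.
        # Each solver returns actuation commands (e.g. control-wire
        # tensions) for its own chain.
        positioner_cmds = positioner_ik(position_in)
        manipulator_cmds = manipulator_ik(orientation_in)
        return positioner_cmds, manipulator_cmds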
[0005] Receiving input signals may involve receiving input signals from an autonomous controller processor circuit operably configured to autonomously generate the position input signals and the orientation input signals.
[0006] Receiving input signals may involve generating the input signals in response to movements of an operator's hand.
[0007] Generating the input signals in response to movements of an operator's hand may involve at least one of receiving movement signals from a sensor disposed to monitor free movements of an operator's hand within an input region, receiving movement signals from a movement sensor grasped or attached to the operator's hand, the movement sensor being responsive to free movements of the operator's hand, or receiving movement signals from a virtual reality headset worn by the operator.
[0008] The method may involve causing the processor circuit to process the movement signals to generate the position input signals and orientation input signals.
[0009] Causing the processor circuit to process the movement signals may involve filtering the free movements of the operator's hand to extract movements of the operator's digits with respect to one of the palm of the operator's hand or the wrist of the operator's hand.
[0010] Generating the input signals may involve generating the position input signals in response to translations of the operator's hand in three translational degrees of freedom within the input workspace, and generating the orientation input signals in response to rotations of the operator's hand in at least two rotational degrees of freedom within the input workspace.
[0011] Generating the position input signals may involve kinematically transmitting translational movements of the operator's hand in three translational degrees of freedom via a first kinematic structure to a plurality of encoders operable to produce translational movement signals, and processing the translational movement signals to generate the position input signals.
[0012] Generating the orientation input signals may involve kinematically transmitting orientation movements of the operator's hand in the at least two rotational degrees of freedom via a second kinematic structure to a plurality of encoders operable to produce orientation signals, and processing the orientation signals to generate the orientation input signals.
[0013] The positioner may include a first plurality of segments extending between a bulkhead segment and the distal segment, the first plurality of segments being selectively actuable by transmitting actuation forces via a first plurality of control wires extending through the first plurality of segments, and actuating the positioner may involve generating the position control signals to selectively actuate the first plurality of control wires to cause respective movements of the first plurality of segments to position the distal segment at the physical position.
[0014] The manipulator may include a second plurality of segments extending between the distal segment of the positioner and the distal end of the manipulator, the second plurality of segments being moveable in response to transmitting actuation forces delivered via a second plurality of control wires extending through the first plurality of segments and through the second plurality of segments, and actuating the manipulator may involve generating the manipulator control signals to selectively actuate the second plurality of control wires to cause respective movements of the second plurality of segments to orient the distal end within the instrument workspace.
[0015] The first plurality of segments may include a plurality of adjacently stacked vertebra extending between the bulkhead segment and the distal segment and the control wires may be coupled to the distal segment and selectively actuating the first plurality of control wires may involve actuating the first plurality of control wires to cause the distal segment to move to position the distal segment at the physical position, and each vertebra may be coupled to move in at least one of a pitch axis and a yaw axis with respect to adjacent vertebra and the actuation forces delivered via the first plurality of control wires may cause the respective vertebra to be angled with respect to each other to bend the instrument in a continuous arc.
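Segments that bend "in a continuous arc" are commonly modeled with constant-curvature kinematics. A planar sketch of the tip position under that model (an illustrative convention, not taken from the disclosure):

    import math

    def arc_tip(arc_length_mm, bend_rad):
        # For a total bend of theta over arc length L, the segment lies on
        # a circle of radius r = L/theta; the tip sits at
        # (r*(1 - cos(theta)), r*sin(theta)) in the bending plane.
        if abs(bend_rad) < 1e-9:
            return (0.0, arc_length_mm)   # straight segment
        r = arc_length_mm / bend_rad
        return (r * (1.0 - math.cos(bend_rad)), r * math.sin(bend_rad))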
[0016] The first plurality of segments may include a plurality of elongate segments coupled together by respective joints.
[0017] The positioner may be coupled to a rigid shaft and wherein the position input signals may include an insertion depth position within the input workspace, and actuating the positioner to move the distal segment within the instrument workspace may involve causing the rigid shaft to be advanced or retracted in response to the insertion depth position.
[0018] The rigid shaft may be received in a bore of an insertion tube and the instrument workspace may extend outwardly from an end of the insertion tube.
[0019] Causing the rigid shaft to be retracted may involve causing the rigid shaft to be retracted such that the distal segment of the positioner is disposed proximate the end of the insertion tube and the manipulator remains extending outwardly from the end of the insertion tube and remains capable of movement with respect to the distal segment.
[0020] The method may involve generating a positioner operational envelope defining boundaries to movement of the distal segment of the positioner within a portion of the instrument workspace, and generating an alert signal when the position input signals represent a position in the instrument workspace that lies on or outside the positioner operational envelope.
[0021] Receiving input signals may involve causing an input device to generate the input signals in response to movements of an operator's hand and may further involve delivering haptic feedback via the input device to the operator's hand in response to the alert signal.
[0022] Generating the positioner operational envelope may involve generating the positioner operational envelope in the instrument workspace and mapping the positioner operational envelope from the instrument workspace to the input workspace, and generating the alert signal may involve generating the alert signal when the position input signals represent a position in the input workspace that lies on or is outside the positioner operational envelope mapped to the input workspace.
[0023] The processor circuit may be operably configured to generate display signals for displaying a view of the instrument workspace and generating the display signals may involve generating display signals including an overlay image corresponding to the positioner operational envelope.
[0024] The positioner operational envelope may include an insertion/retraction region that represents a region of the instrument workspace within which the positioner should be constrained in a straightened condition for insertion or retraction of the instrument into or out of the instrument workspace.
[0025] The positioner operational envelope may further include a free movement region that represents a region of the instrument workspace within which the positioner is able to move after the positioner is disposed outside of the insertion/retraction region of the positioner operational envelope.
[0026] The articulating instrument may include one of a set of articulating instruments of differing instrument types used in a surgical procedure and the positioner operational envelope may include at least one of a pre-defined positioner operational envelope for all differing instrument types, a pre-defined positioner operational envelope selected based on the instrument type, a positioner operational envelope based at least in part on surgical data associated with a target operational site, a positioner operational envelope based at least in part on surgical data provided by a scan of the target operational site, or a positioner operational envelope based at least in part on operator input.
[0027] The method may involve generating a manipulator operational envelope defining boundaries to movement of the distal end of the manipulator within a portion of the instrument workspace, and generating an alert signal when the orientation input signals represent an orientation in the instrument workspace that would cause the distal end of the manipulator or the operational tool to lie on or outside the manipulator operational envelope.
[0028] The articulating instrument may include one of a set of articulating instruments of differing instrument types used in a surgical procedure and the manipulator operational envelope may include at least one of a pre-defined manipulator operational envelope for all differing instrument types, a pre-defined manipulator operational envelope selected based on the instrument type, a manipulator operational envelope based at least in part on surgical data associated with a target operational site, a manipulator operational envelope based at least in part on surgical data provided by a scan of the target operational site, or a manipulator operational envelope based at least in part on operator input.
[0029] The method may involve, in response to receiving position input signals that are associated with a retraction of the distal segment from the instrument workspace, causing the processor circuit to determine whether the position input signal represents a physical position of the distal segment associated with the positioner being in a bent condition, and causing the processor circuit to generate modified position control signals to cause the positioner to be straightened while being retracted into the insertion/retraction region of the instrument workspace.
[0030] The method may involve causing the processor circuit to determine whether the received position input signals represent a physical position of the distal segment that lies on or outside the positioner operational envelope and causing the processor circuit to generate the position control signals may involve causing the processor circuit to generate modified position control signals that constrain movements of the distal segment to be within the positioner operational envelope.
[0031] The method may involve, in response to receiving a positioning mode change signal, causing the processor circuit to combine the position input signals and the orientation input signals to generate a desired position and orientation of the distal end of the manipulator, and perform an inverse kinematic computation on the desired position and orientation of the distal end of the manipulator to determine a position for the distal segment of the positioner and actuation parameters for the positioner and the manipulator associated with the desired position and orientation of the distal end of the manipulator.
[0032] Other aspects and features will become apparent to those ordinarily skilled in the art upon review of the following description of specific disclosed embodiments in conjunction with the accompanying figures.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0033] In drawings which illustrate disclosed embodiments,
[0034] FIG. 1 is a perspective view of a robotic surgery system in accordance with one disclosed embodiment;
[0035] FIG. 2A is a front perspective view of a drive unit of the system shown in FIG. 1;
[0036] FIG. 2B is a rear perspective view of the drive unit of the system shown in FIG. 1;
[0037] FIG. 3A is a perspective view of a portion of an insertion tube of the system shown in FIG. 1;
[0038] FIG. 3B is a perspective view of the insertion tube of FIG. 3A including a pair of inserted instruments;
[0039] FIG. 3C is a perspective view of an articulated portion of one of the instruments shown in FIG. 3B;
[0040] FIG. 4 is a block diagram of processor circuit elements of the robotic surgery system shown in FIG. 1;
[0041] FIG. 5 is a perspective view of a portion of an input device of the system shown in FIG. 1;
[0042] FIG. 6 is a process flowchart depicting blocks of code for directing the processor circuit of FIG. 4 to receive input signals from the input device of FIG. 5;
[0043] FIG. 7A is a rear perspective view of the articulated portions of the instruments shown in FIG. 3B;
[0044] FIG. 7B is a side perspective view of one of the instruments shown in FIG. 3B;
[0045] FIG. 8 is a perspective view of an alternative input device used to generate position input signals and orientation input signals for use in the system shown in FIG. 1;
[0046] FIG. 9 is a perspective view of an alternative embodiment of an instrument;
[0047] FIG. 10 is a front perspective view of the insertion tube of FIG. 3A including an inserted instrument;
[0048] FIG. 11A is an elevational view of the insertion tube and inserted instrument in a first condition;
[0049] FIG. 11B is an elevational view of the insertion tube and inserted instrument in a second condition;
[0050] FIG. 11C is an elevational view of the insertion tube and inserted instrument in a third condition;
[0051] FIG. 11D is an elevational view of the insertion tube and instrument in a bent condition;
[0052] FIG. 12 is a process flowchart including blocks of code for directing the processor circuit of FIG. 4 to generate a positioner operational envelope;
[0053] FIG. 13A is a perspective view of the articulated portion of the instrument shown in FIG. 3C in a pose in accordance with disclosed embodiments; and
[0054] FIG. 13B is a perspective view of the articulated portion of the instrument shown in FIG. 3C in a pose in accordance with a prior art embodiment.
DETAILED DESCRIPTION
[0055] Referring to FIG. 1, a robotic surgery system in accordance with one disclosed embodiment is shown generally at 100. The system 100 includes a workstation 102 and an instrument cart 104. The instrument cart 104 includes a drive unit 106 to which an insertion tube 108 and an instrument 110 are mounted. The workstation 102 includes an input device 112 that receives operator input and produces input signals. The input device 112 may also be capable of generating haptic feedback to the operator. The input device 112 may be implemented using a haptic interface available from Force Dimension, of Switzerland, for example. However, in other embodiments the input device may be implemented using other input devices, including but not limited to a non-contact hand tracking device or other motion sensing device.
[0056] In the embodiment shown, the workstation 102 further includes a workstation processor circuit 114 in communication with the input device 112 for receiving the input signals and generating control signals for controlling the robotic surgery system, which are transmitted to the instrument cart 104 via an interface cable 116. In this embodiment, the input device 112 includes right and left hand controllers 122 and 124, which are grasped by the operator's hands and moved to cause the input device 112 to produce the input signals. The workstation 102 also includes a footswitch 126 for generating an enablement signal. The workstation 102 may also include other footswitches 128 that provide an additional input to the system as described below. The workstation 102 also includes a display 120 in communication with the workstation processor circuit 114. The display 120 may be configured for displaying images of a surgical site and/or portions of the instrument 110 in the surgical site. In the embodiment shown, the workstation 102 further includes a secondary display 132 for displaying status information related to the system 100. The instrument cart 104 includes an instrument processor circuit 118 that receives the input signals from the workstation processor circuit 114 and produces control signals for causing movement of the instrument 110 during a surgical procedure.
[0057] The drive unit 106 is shown in isolation in FIGS. 2A and 2B. Referring to FIG. 2A, the insertion tube 108 includes a drive interface 200 that detachably mounts to a corresponding drive interface 202 on the drive unit 106. The insertion tube 108 includes a camera 204 at a distal end of the insertion tube, which is inserted into a body cavity of a patient to capture body cavity image data representing an interior view of the body cavity for display on the display 120 of the workstation 102. Referring to FIG. 2B, in this embodiment the insertion tube 108 includes a pair of adjacent bores extending through the insertion tube for receiving a right hand side instrument 110a and a left hand side instrument 110b. The instruments 110a and 110b each include a respective operational tool 210 and 212 at a distal end. The operational tools 210 and 212 may be one of a variety of different operational tools, such as a probe, dissector, hook, or cauterizing tool. As an example, the operational tools 210 and 212 may be configured as an end effector having opposing jaws that provide an actuated function such as a scissor for cutting tissue or forceps for gripping tissue. In other embodiments one of the instruments 110a or 110b may include an operational tool 210 or 212 in the form of a distally located camera that provides imaging functions in addition to or in place of the camera 204. One of the instruments 110a or 110b may include an operational tool in the form of an illuminator configured to provide illumination for generation of images by the camera 204.
[0058] The drive unit 106 includes a mounting interface 214 for mounting the instrument 110a and a mounting interface 216 for mounting the instrument 110b. The drive unit 106 is configured to cause the respective mounting interfaces 214 and 216 to advance or retract in a direction aligned with a Z-axis shown at 218.
[0059] A portion of the insertion tube 108 is shown in FIG. 3A and includes two adjacently located bores 300 and 302 extending through the insertion tube 108 for receiving the respective surgical instruments 110a and 110b. The insertion tube 108 may be inserted through an incision into a body cavity of a patient to provide access to an instrument workspace 324 within a surgical site. The insertion tube 108 also includes a third bore 304 for receiving the camera 204. In other embodiments the insertion tube 108 may include additional bores for accommodating further instruments.
[0060] The camera 204 is configured as a stereoscopic camera having a pair of spaced apart imagers 306 and 308 for producing stereoscopic views representing an interior view of the body cavity. The camera 204 also includes an integrated illuminator 310 for illuminating the body cavity for capturing images. The integrated illuminator 310 may be implemented using an illumination source such as a light emitting diode, or an illumination source may be remotely located and may deliver the illumination through an optical fiber running through the insertion tube 108.
[0061] The camera 204 is mounted on an articulated arm 328 and coupled to a flexible shaft 330, which is shown truncated in FIG. 3B. The flexible shaft 330 will typically include a connector end that is connected to a camera port in the drive unit 106. Drive forces delivered by the drive unit 106 via the drive interface 202 to the drive interface 200 of the insertion tube 108 cause the articulated arm 328 to move the camera 204 from the longitudinally extended insertion state shown in FIG. 3A to the deployed state shown in FIG. 3B. In the deployed state, the camera 204 is able to generate images of the surgical site and instrument workspace 324 without obstructing movements of the instruments 110a and 110b. The images of the surgical site may be displayed on the display 120 of the system 100 shown in FIG. 1.
[0062] The instruments 110a and 110b are shown inserted through the respective bores 300 and 302 of the insertion tube 108 (in FIG. 3B the bore 302 is not visible). The right hand side instrument 110a includes a rigid shaft portion 312 and an articulated portion 314 that extends outwardly from the bore 300. In this embodiment the operational tool 210 of the instrument 110a is an end effector 316. The instrument 110a includes an actuator 318, which includes a plurality of actuator slides 320 disposed in an actuator housing 322. The actuator housing 322 is located at a proximal end of the instrument 110a that couples to the mounting interface 214 on the drive unit 106 for moving the articulated portion 314 and actuating the end effector 316. The rigid shaft portion 312 may be advanced or retracted by the mounting interface 214 to change the insertion depth of the articulated portion 314 within the instrument workspace 324. The instrument workspace 324 extends outwardly from an end 348 of the insertion tube 108.
[0063] The actuator 318 of the instrument 110a may be generally configured as disclosed in commonly owned PCT patent publication WO2016/090459 entitled “ACTUATOR AND DRIVE FOR MANIPULATING A TOOL” filed on Feb. 18, 2015 and incorporated herein by reference in its entirety. The interface of the drive unit 106 may have a track system (not shown) coupled to the actuator 318 for longitudinally advancing and retracting the instrument 110a to cause the rigid shaft portion 312 to move within the bore 300. The longitudinal positioning of the instrument 110a places the end effector 316 at a desired longitudinal offset with respect to the insertion tube 108 for accessing a surgical site within the body cavity of the patient.
[0064] The instrument 110b is shown in FIG. 3B in side-by-side relation and identically configured to the instrument 110a, and includes an articulated portion 326 extending into the surgical site. In some embodiments, the instrument 110b may have a different operational tool than the instrument 110a.
[0065] The articulated portion 314 of the instrument 110a is shown actuated and in enlarged detail in FIG. 3C. Referring to FIG. 3C, the articulated portion 314 of the instrument includes a manipulator 332 and a positioner 334. The positioner 334 is actuable to position a distal segment 346 of the positioner within an instrument workspace 324. The manipulator 332 is attached to the distal segment 346 of the positioner 334 and includes a distal end 352. The distal end 352 is configured for mounting the end effector 316 for performing an operation within the instrument workspace 324. In this embodiment, the end effector 316 includes a pair of jaws 354 for grasping tissue. The manipulator 332 is actuable to manipulate the distal end 352 of the manipulator within the instrument workspace 324.
[0066] The positioner 334 includes a first plurality of segments 336 extending between a bulkhead segment 344 and the distal segment 346. In the embodiment shown, the first plurality of segments 336 includes the bulkhead segment 344 and a set of fifteen stacked vertebra segments 340 extending between the bulkhead segment and an intermediate segment 338. A further set of fifteen stacked vertebra segments 342 extends between the intermediate segment 338 and the distal segment 346. Three adjacent vertebra 340 are shown in enlarged detail in an insert 362. The adjacent vertebra 340 are coupled to move about either a pitch axis 372 or a yaw axis 374 with respect to the adjacent vertebra, which facilitates bending of the positioner 334 in a continuous arc.
[0067] The manipulator 332 includes a second plurality of segments 350 extending between the distal segment 346 of the positioner 334 and the distal end 352 of the manipulator. Each manipulator segment 350 may be similarly configured to the segments shown in the insert 362.
[0068] A plurality of control wires 356 are shown in FIG. 3C extending into the articulated portion 314 of the instrument 110a. Although not visible in FIG. 3B, the plurality of control wires 356 extend back along the rigid shaft portion 312 and each control wire is coupled to one of the plurality of actuator slides 320 within the actuator housing 322. The control wires 356 may each be implemented as a single flexible wire, in one embodiment a nitinol wire, which is capable of supporting about 200 N in tension or compression without permanent deformation and of experiencing up to about 4% strain. Nitinol is an alloy of nickel and titanium having shape memory and superelasticity, and its ability to support both tension and compression allows the control wires to be selectively pushed or pulled with similar forces without permanent deformation.
[0069] The plurality of control wires 356 include four positioner control wires 358 extending through the insertion tube 108 and through a plurality of openings 366 in the first plurality of segments 336, and connected to the distal segment 346. A first pair of the positioner control wires 358 are routed through a first pair of the openings 366 that are diametrically opposed, as shown in the insert 362. The remaining pair of positioner control wires 358 are routed through a second pair of the openings 366 that are orthogonally located with respect to the first pair of openings. The articulated portion 314 of the instrument 110a may be configured as described in further detail in commonly owned PCT patent publication WO2014/201538 entitled “ARTICULATED TOOL POSITIONER AND SYSTEM EMPLOYING SAME” filed on Dec. 20, 2013 and incorporated herein by reference in its entirety. The described positioner provides for dexterous movement of the end effector 316 through a plurality of articulated segments. As described in more detail in PCT patent publication WO2014/201538, the control wires may be configured to work in pairs connected to diametrically opposed portions of the distal segment 346. When a pushing force is delivered to one of the pair of control wires, a pulling force is delivered to the other of the pair. In other embodiments, the actuation may be provided by a single wire that is pulled or pushed to cause the desired movement of the distal segment 346. Selectively actuating the positioner control wires 358 causes the respective stacked vertebra 340 and 342 to move to position the distal segment at the physical position represented by the position input signals, while the respective vertebra are angled with respect to each other to bend the instrument in a continuous arc.
[0070] The plurality of control wires 356 also include four manipulator control wires 360 extending through the insertion tube 108, through a plurality of openings 368 in the segments 336, through a plurality of openings in the second plurality of segments 350, and connected to the distal end 352 of the manipulator 332. As in the case of the positioner control wires 358, the four manipulator control wires 360 are routed through openings 368 indicated in the insert 362. When the four manipulator control wires 360 are actuated by the respective actuator slides 320, an actuation force is applied to the distal end 352. The manipulator 332 is thus actuable via the plurality of actuator slides 320 to manipulate the distal end 352 and the end effector 316 within the instrument workspace 324.
[0071] In the embodiment shown in FIG. 3C, the segments 336 in the positioner 334 each include four openings 370 that are additional to the four openings required for the four positioner control wires 358 and the four openings required for the four manipulator control wires 360. These four openings 370 are respectively located between the openings 366 and the openings 368, as shown in the insert 362. The additional openings 370 accommodate four structural wires (not shown) that are connected at the bulkhead segment 344, run through the openings 370 in the first plurality of segments 336, and are connected to the distal segment 346. The structural wires each have the same length and function as a parallelogram in two dimensions, tending to keep the distal segment 346 in the same orientation as the bulkhead segment 344. When pushing and pulling on the pairs of positioner control wires 358, the equal length structural wires constrain the set of segments 340 to bend in one direction while the set of segments 342 bends in an opposite direction, thus causing the positioner 334 to take up an “S” shape as shown in FIG. 3C. The pushing and pulling on the pairs of positioner control wires 358 thus cause the distal segment 346 to move laterally and/or vertically while remaining substantially aligned with the bulkhead segment 344 and the Z_0 axis. The structural wires need not necessarily be secured within the bulkhead segment 344. In some embodiments the structural wires may be secured within the rigid shaft portion 312 or actuator housing 322.
[0072] A block diagram of processor circuit elements of the robotic surgery system 100 is shown in FIG. 4. Referring to FIG. 4 the workstation processor circuit 114 includes a microprocessor 400, a workstation memory 402, a USB interface 404, an input/output 406, and a motion control interface 408, all of which are in communication with the microprocessor 400. In this embodiment the input device 112 communicates using a USB protocol and the USB interface 404 receives input signals produced by the input device in response to movements of the hand controllers 122 and 124. The input/output 406 includes an input for receiving the enablement signal from the footswitches 126 and 128 and an output for producing display signals for driving the display 120 (shown in FIG. 1). The motion control interface 408 generates control signals 410 based on the input signals received from the input device 112.
[0073] The instrument processor circuit 118 includes a microprocessor 450, a memory 452, a communications interface 454, and a drive control interface 456, all of which are in communication with the microprocessor. The microprocessor 450 receives the control signals 410 at the communications interface 454 via the interface cable 116 (FIG. 1). The microprocessor 450 processes the control signals 410 and causes the drive control interface 456 to produce drive signals 458 for moving the instruments 110a and 110b. The drive signals are received by the drive unit 106, which generates the necessary actuation forces for moving the plurality of actuator slides 320 to position the positioner 334 and the manipulator 332 within the instrument workspace 324.
[0074] The workstation processor circuit 114 thus acts as a controller subsystem for receiving user input, while the instrument processor circuit 118 acts as a responder subsystem in responding to the control signals 410 based on user input by driving the instruments 110a and 110b. While the embodiment shown includes both the workstation processor circuit 114 and the instrument processor circuit 118, in other embodiments a single processor circuit may be used to perform both controller and responder functions.
[0075] A portion of the input device 112 that includes the right hand controller 122 is shown in greater detail in FIG. 5. For simplicity, only the right hand controller 122 of the input device 112 will be further described, it being understood that the left hand controller 124 operates in the same way. The input device 112 is supported on a base 500 and includes arms 502, 504, and 506 that provide a mounting for the hand controller 122, which may be grasped by the operator and moved within an input workspace. The input device reference frame has an x_r–z_r plane parallel to the base 500 and a y_r axis perpendicular to the base. The z_r axis is parallel to the base 500 and is coincident with an axis 516 passing centrally through the hand controller 122. The x_r, y_r, z_r reference frame defines an input workspace 522.
[0076] The input device 112 includes a plurality of encoders (not shown) that generate signals in response to movements of the arms 502, 504, and 506, which act as a first kinematic structure for kinematically transmitting translational movements of the operator's hand via the hand controller 122 in three translational degrees of freedom to the encoders. The input device 112 is thus operable to produce translational movement signals based on the encoder signals to generate the position input signals. The arms 502-506 permit positioning and rotation about orthogonal axes x_i, y_i, and z_i of a Cartesian reference frame. The Cartesian reference frame has an origin at a point on a body of the hand controller 122, and the location of the origin defines the hand controller position 508. In this embodiment, the hand controller 122 is mounted on a gimbal mount 510. The arms 502-506 confine movements of the hand controller 122, and hence the hand controller position 508, to within a generally hemispherical input workspace.
[0077] The input device 112 includes a second kinematic structure that kinematically transmits orientation movements of the operator's hand in at least two rotational degrees of freedom to encoders (not shown) that sense the rotational orientation of the hand controller 122 about the x_i and y_i axes. In this embodiment the input device 112 is also responsive to rotations of the hand controller 122 about the z_i axis, for a third degree of freedom. Rotational hand movements of the operator are thus also encoded to produce signals representing the orientation of the hand controller 122 in the input workspace 522 relative to the input device Cartesian reference frame x_r, y_r, z_r. The encoder orientation signals are processed by the input device 112 to generate the orientation input signals.
[0078] In one embodiment the input device 112 may also be configured to generate haptic forces for providing haptic feedback to the hand controller 122 through the arms 502-506 and gimbal mount 510. For example, haptic forces may be initiated in response to alert signals that are generated by the workstation processor circuit 114 or instrument processor circuit 118 and communicated to the input device 112. The hand controller 122 also includes an end effector actuator 520 that may be opened and closed to actuate movement of an end effector as described in more detail later herein.
[0079] Referring to FIG. 6, a flowchart depicting blocks of code for directing the workstation processor circuit 114 to receive input signals from the input device 112 for controlling the articulated portion 314 of the instrument 110a is shown at 600. The blocks generally represent codes that may be read from the workstation memory 402 for directing the microprocessor 400 to perform control functions. The actual code to implement each block may be written in any suitable programming language, such as C, C++, C#, Java, and/or assembly code, for example.
[0080] The process 600 is initiated at block 602, which directs the microprocessor 400 to receive position input signals generated by the input device 112 in response to movements of the hand controller 122. For the input device 112 shown in FIG. 5, the input workspace 522 is represented by the input device reference frame x_r, y_r, z_r. The position input signals are generated by the input device 112 in response to translations of the hand controller 122 in three translational degrees of freedom within the input workspace by the operator. In this embodiment, the position and orientation signals are transmitted as input signals via the USB connection 518 to the USB interface 404 of the workstation processor circuit 114. The received position input signal represents a current position of the hand controller 122 within the reference frame x_r, y_r, z_r, and may be represented by a position vector given by:
[00001] $\vec{P}_{MCURR} = \begin{Bmatrix} x_i \\ y_i \\ z_i \end{Bmatrix}$, Eqn 1
where x_i, y_i, and z_i represent coordinates of the hand controller position 508 (i.e. the origin of the coordinate system x_i, y_i, z_i) relative to the input device reference frame x_r, y_r, z_r.
[0081] Block 604 then directs the microprocessor 400 to generate position control signals based on the received position input signals and to transmit the position control signals to the instrument processor circuit 118 via the motion control interface 408.
[0082] The process then continues at block 606, which is implemented on the instrument processor circuit 118. Block 606 directs the microprocessor 450 to receive the position control signals. Block 608 then directs the microprocessor 450 to generate drive signals. The drive control interface 456 of the instrument processor circuit 118 produces the necessary drive signals to cause the drive unit 106 to actuate the applicable plurality of actuator slides 320 on the instrument 110a for actuating the positioner 334 to move the distal segment 346 to a physical position represented by the position input signals.
[0083] Block 610 then directs the microprocessor 400 of the workstation processor circuit 114 to receive the orientation input signals from the input device 112 at the USB interface 404. The orientation of the hand controller 122 within the input workspace 522 is given by a rotation matrix:
[00002] $R_{MCURR} = \begin{bmatrix} x_{i,x} & y_{i,x} & z_{i,x} \\ x_{i,y} & y_{i,y} & z_{i,y} \\ x_{i,z} & y_{i,z} & z_{i,z} \end{bmatrix}$, Eqn 2
where the columns of the matrix represent the axes of the hand controller reference frame x_i, y_i, z_i expressed in the input device reference frame x_r, y_r, z_r (e.g. x_{i,x} is the x_r component of the unit vector along x_i). The matrix R_MCURR thus defines the current rotational orientation of the hand controller 122 within the input workspace 522 relative to the x_r, y_r, and z_r reference frame. The current hand controller position vector $\vec{P}_{MCURR}$ and current handle rotation matrix R_MCURR are received as current hand controller position signals and current hand controller orientation signals via the USB connection 518 at the USB interface 404 of the workstation processor circuit 114.
[0084] Block 612 then directs the microprocessor 400 to process the orientation input signals and to generate orientation control signals for actuating the manipulator 332 to orient the distal end 352 within the instrument workspace 324. The orientation control signals are transmitted via the motion control interface 408 to the instrument processor circuit 118.
[0085] The process then continues at block 614, which directs the microprocessor 450 of the instrument processor circuit 118 to receive the orientation control signals. Block 616 then directs the microprocessor 450 to generate drive signals to cause the drive unit 106 to actuate the applicable plurality of actuator slides 320 on the instrument 110a for actuating the manipulator 332 to orient the distal end 352 within the instrument workspace 324.
[0086] The articulated portion 314 of the instrument 110a and an articulated portion of the instrument 110b are shown from the rear in FIG. 7A. The articulated portion 314 of the instrument 110a is also shown from a side perspective in FIG. 7B. Referring to FIG. 7A, a fixed reference frame 700 (X_0, Y_0, Z_0) is established for the articulated portion 314 of the instrument 110a. In this embodiment the fixed reference frame 700 has an origin located at a center of the insertion tube 108. In FIG. 7A, the Z_0 axis is aligned with a longitudinal axis of the insertion tube 108 and the Y_0 axis is directed into the plane of the page.
[0087] The bulkhead segment 344 has a reference frame 702 (X_1–Z_1) that has an origin located at a center of the bulkhead segment 344. As noted above, the drive unit 106 may cause the actuator housing 322 to be advanced or retracted in the Z_0 axis direction. An insertion distance q_ins represents the depth of insertion of the bulkhead segment 344 into the instrument workspace 324 and corresponds to a distance between the origin of the X_0–Z_0 reference frame 700 and the origin of the X_1–Z_1 reference frame 702. The intermediate segment 338 has a reference frame 704 (X_2–Z_2) that has an origin located at the center of the intermediate segment. The distal segment 346 of the positioner 334 has a reference frame 706 (X_3–Z_3) that has an origin located at a center of the distal segment 346.
[0088] The position control signal received at block 606 of the process 600 provides the desired location of the reference frame X_3–Z_3 within the instrument workspace 324 with respect to the fixed reference frame X_0–Z_0. Generating the necessary drive signals at block 608 to cause the distal segment 346 to be positioned at X_3–Z_3 involves working backwards from the location X_3–Z_3 to determine configuration variables (q_ins and the angles θ_p and δ_p) for the positioner 334, and then determining the respective actuations for the positioner control wires 358 that correspond to the configuration variables. The angles θ_p and δ_p are depicted in FIG. 7B. When the positioner 334 is actuated it will bend to lie in a plane 708 that is at an angle δ_p relative to the X_1–Z_1 plane 710. When δ_p = 0, the positioner 334 lies on the plane 710. The angle δ_p becomes positive as the positioner rotates away from the Y_1 axis. The angle θ_p represents the degree of bending of the positioner 334 within the plane 708. When the positioner 334 lies in the plane 708 along a line 712 aligned with the Z_0 axis, the angle θ_p = 90°. The angle θ_p decreases as the positioner bends outwardly.
[0089] A vector $\bar{p}_{3/0}$ from the origin of the fixed reference frame 700 (X_0, Y_0, Z_0) to the end of the positioner 334 (reference frame 706) has x, y, and z components written in the fixed reference frame as:
[00003] $\bar{p}_{3/0} \cdot \bar{i} = \dfrac{-L_1 \cos\delta_p\,(\sin\theta_p - 1)}{\tfrac{\pi}{2} - \theta_p}$, Eqn 3a
$\bar{p}_{3/0} \cdot \bar{j} = \dfrac{L_1 \sin\delta_p\,(\sin\theta_p - 1)}{\tfrac{\pi}{2} - \theta_p}$, Eqn 3b
$\bar{p}_{3/0} \cdot \bar{k} = q_{ins} + \dfrac{L_1 \cos\theta_p}{\tfrac{\pi}{2} - \theta_p}$, Eqn 3c
where: [0090] $\bar{i}$, $\bar{j}$, $\bar{k}$ are unit vectors in the x, y, and z directions, respectively; and [0091] $L_1$ is the length of the positioner 334 (shown in FIG. 7A for the instrument 110b).
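By way of illustration only, the forward relationship of Eqns 3a-3c may be sketched in a few lines of Python. The function and parameter names below are assumptions made for this sketch and do not appear in the disclosure; the special case at θ_p = π/2, where the positioner is straight, is handled as the limit of the equations.

```python
import numpy as np

def positioner_tip_position(theta_p, delta_p, q_ins, L1):
    """Evaluate Eqns 3a-3c: position of the distal segment frame (X_3-Z_3)
    relative to the fixed frame (X_0, Y_0, Z_0). Angles are in radians;
    q_ins and L1 are in consistent length units."""
    a = np.pi / 2.0 - theta_p
    if abs(a) < 1e-9:
        # Straight positioner: the 0/0 limit of Eqns 3a-3c as theta_p -> pi/2.
        return np.array([0.0, 0.0, q_ins + L1])
    x = -L1 * np.cos(delta_p) * (np.sin(theta_p) - 1.0) / a  # Eqn 3a
    y = L1 * np.sin(delta_p) * (np.sin(theta_p) - 1.0) / a   # Eqn 3b
    z = q_ins + L1 * np.cos(theta_p) / a                     # Eqn 3c
    return np.array([x, y, z])
```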
[0092] To find the angles δ_p and θ_p of the proximal segment (s-segment), equations (3a) and (3b) must be solved for δ_p and θ_p. δ_p can be found directly by taking the ratio of (3b) and (3a):
$\delta_p = \operatorname{atan2}\!\left(-\bar{p}_{3/0} \cdot \hat{j},\ \bar{p}_{3/0} \cdot \hat{i}\right)$, Eqn 4
[0093] To find θ_p, (3a) or (3b) must be solved for θ_p. A closed-form solution does not exist, so the solution must be found numerically. The Newton-Raphson method has been used successfully in simulation. This method is appropriate and tends to converge very quickly because in the range 0 ≤ θ_p ≤ π the function (5) has a large radius of curvature. The Newton-Raphson method may be implemented using equations (3a) and (3b), which can be rearranged to be written in the form f(θ_p) = 0:
[00004] $f(\theta_p) = \dfrac{-L_1}{\tfrac{\pi}{2} - \theta_p}\cos\delta_p\,(\sin\theta_p - 1) - \bar{p}_{3/0}\cdot\hat{i} + \dfrac{L_1}{\tfrac{\pi}{2} - \theta_p}\sin\delta_p\,(\sin\theta_p - 1) - \bar{p}_{3/0}\cdot\hat{j} = 0$, Eqn 5
[0094] Note that the inclusion of both equations (3a) and (3b) is necessary, since either one individually is stationary in θ_p for specific values of δ_p. For example, for $\delta_p = \tfrac{\pi}{2}$, equation (3a) is equal to zero for all values of θ_p. The derivative of Eqn 5 with respect to θ_p is:
[00006] $f'(\theta_p) = \dfrac{L_1}{\tfrac{\pi}{2} - \theta_p}\left[\dfrac{-1}{\tfrac{\pi}{2} - \theta_p}\cos\delta_p\,(\sin\theta_p - 1) - \cos\delta_p\cos\theta_p + \dfrac{1}{\tfrac{\pi}{2} - \theta_p}\sin\delta_p\,(\sin\theta_p - 1) + \sin\delta_p\cos\theta_p\right]$, Eqn 6
[0095] Following the Newton-Raphson scheme, successive iterations can be made for improved estimates of θ_p to satisfy (5) using the following relationship:
[00007] $\theta_p^{\,n+1} = \theta_p^{\,n} - \dfrac{f(\theta_p^{\,n})}{f'(\theta_p^{\,n})}$, Eqn 7
[0096] The choice of an initial θ_p can determine the number of total iterations required in order to reach a desired amount of error. It is possible that the required number of iterations can be minimized by setting the initial θ_p equal to the θ_p value from the previous kinematics frame. This, however, can increase software complexity and creates special cases that must be handled, such as the case with $\theta_p = \tfrac{\pi}{2}$. In order to find q_ins, equation (3c) can be rearranged as follows:
[00009] $q_{ins} = \bar{p}_{3/0}\cdot\bar{k} - \dfrac{L_1\cos\theta_p}{\tfrac{\pi}{2} - \theta_p}$, Eqn 8
[0097] The respective actuations for the positioner control wires 358 p_1, p_2, p_3, and p_4 may be calculated from the calculated angles θ_p and δ_p:
[00010] $p_1 = r_b\left(\theta_p - \tfrac{\pi}{2}\right)\cos(\delta_p)$, Eqn 9a
$p_2 = r_b\left(\theta_p - \tfrac{\pi}{2}\right)\cos(\delta_p - \beta)$, Eqn 9b
$p_3 = r_b\left(\theta_p - \tfrac{\pi}{2}\right)\cos(\delta_p - 2\beta)$, Eqn 9c
$p_4 = r_b\left(\theta_p - \tfrac{\pi}{2}\right)\cos(\delta_p - 3\beta)$, Eqn 9d
where r_b is the radius of the segments 336 and β is the angle between the openings in the segments 336 (in this case β = 90°). The wires p_1 and p_3 correspond to the first pair of control wires and the wires p_2 and p_4 correspond to the second pair of control wires. The above equations may thus be evaluated by the microprocessor 450 of the instrument processor circuit 118 to determine actuations p_1, p_2, p_3, and p_4 for causing the positioner 334 to position the distal segment 346 at a physical position in the instrument workspace 324 corresponding to the position control signals received from the workstation processor circuit 114 at block 606.
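By way of illustration only, paragraphs [0092] to [0097] may be collected into the following Python sketch: δ_p from Eqn 4, θ_p by Newton-Raphson iteration of Eqns 5 to 7, q_ins from Eqn 8, and the wire actuations from Eqns 9a-9d. The function names, initial guess, iteration limit, and tolerance are assumptions made for this sketch; the special case θ_p = π/2 noted in paragraph [0096] is sidestepped here by starting the iteration away from π/2.

```python
import numpy as np

def solve_positioner_ik(p, L1, theta0=np.pi / 2.0 - 0.1, iters=20, tol=1e-9):
    """Invert Eqns 3a-3c for (delta_p, theta_p, q_ins), given a target
    position p = [x, y, z] of the distal segment 346 in the fixed frame."""
    x, y, z = p
    delta_p = np.arctan2(-y, x)                     # Eqn 4
    cd, sd = np.cos(delta_p), np.sin(delta_p)
    theta = theta0
    for _ in range(iters):
        a = np.pi / 2.0 - theta
        s1 = np.sin(theta) - 1.0
        f = (-L1 / a) * cd * s1 - x + (L1 / a) * sd * s1 - y          # Eqn 5
        fp = (L1 / a) * ((-1.0 / a) * cd * s1 - cd * np.cos(theta)
                         + (1.0 / a) * sd * s1 + sd * np.cos(theta))  # Eqn 6
        step = f / fp
        theta -= step                               # Eqn 7 (Newton-Raphson)
        if abs(step) < tol:
            break
    q_ins = z - L1 * np.cos(theta) / (np.pi / 2.0 - theta)            # Eqn 8
    return delta_p, theta, q_ins

def positioner_wire_actuations(theta_p, delta_p, r_b, beta=np.pi / 2.0):
    """Eqns 9a-9d: displacements p_1..p_4 for the positioner control wires 358."""
    amp = r_b * (theta_p - np.pi / 2.0)
    return [amp * np.cos(delta_p - n * beta) for n in range(4)]
```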
[0098] Still referring to FIG. 7A, the distal segment 346 is thus positioned in the instrument workspace 324 based on the position control signals. The manipulator 332 extends into the instrument workspace 324 from the distal segment 346 and is capable of movement in at least two degrees of freedom with respect to the distal segment. The distal end 352 of the manipulator 332 has a reference frame 720 (X_4–Z_4) that has an origin located at a center of the distal end. The movements of the distal end 352 of the manipulator 332 may be characterized as occurring within a bend plane 722, which is disposed at an angle δ_m to a plane 724 aligned with the X_3–Z_3 axes of the reference frame 706 at the distal segment 346. Within the bend plane 722, the manipulator 332 is disposed at an angle θ_m with respect to the Z_3 axis.
[0099] A vector $\bar{p}_{4/3}$ from the origin of the distal segment 346 reference frame 706 to the distal end 352 of the manipulator 332 (reference frame 720) has x, y, and z components written in the fixed reference frame 700 as:
[00011] $\bar{p}_{4/3}\cdot\bar{i} = \dfrac{-L_2\cos\delta_m\,(\sin\theta_m - 1)}{\tfrac{\pi}{2} - \theta_m}$, Eqn 10a
$\bar{p}_{4/3}\cdot\bar{j} = \dfrac{L_2\sin\delta_m\,(\sin\theta_m - 1)}{\tfrac{\pi}{2} - \theta_m}$, Eqn 10b
$\bar{p}_{4/3}\cdot\bar{k} = \dfrac{L_2\cos\theta_m}{\tfrac{\pi}{2} - \theta_m}$, Eqn 10c
where: [0100] $\bar{i}$, $\bar{j}$, $\bar{k}$ are unit vectors in the x, y, and z directions, respectively; and [0101] $L_2$ is a length of the manipulator 332, shown in FIG. 7A for the instrument 110b.
[0102] The rotation matrix between the end of the positioner 334 (frame X_3–Z_3) and the distal end 352 (frame X_4–Z_4) may be computed as a series of simple rotations: a rotation of δ_m about the Z-axis ($R(\delta_m)$), followed by a rotation of $\tfrac{\pi}{2} - \theta_m$ about the new Y-axis ($R(\tfrac{\pi}{2} - \theta_m)$), followed by a rotation of $-\delta_m$ about the new Z-axis ($R(-\delta_m)$):
[00014] $R_{4/3} = R(\delta_m)\,R\!\left(\tfrac{\pi}{2} - \theta_m\right)R(-\delta_m)$, Eqn 11
$R_{4/3} = \begin{bmatrix} \cos\delta_m & -\sin\delta_m & 0 \\ \sin\delta_m & \cos\delta_m & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\!\left(\tfrac{\pi}{2} - \theta_m\right) & 0 & \sin\!\left(\tfrac{\pi}{2} - \theta_m\right) \\ 0 & 1 & 0 \\ -\sin\!\left(\tfrac{\pi}{2} - \theta_m\right) & 0 & \cos\!\left(\tfrac{\pi}{2} - \theta_m\right) \end{bmatrix} \begin{bmatrix} \cos(-\delta_m) & -\sin(-\delta_m) & 0 \\ \sin(-\delta_m) & \cos(-\delta_m) & 0 \\ 0 & 0 & 1 \end{bmatrix}$
$R_{4/3} = \begin{bmatrix} \sin^2\delta_m + \sin\theta_m\cos^2\delta_m & \cos\delta_m\sin\delta_m\sin\theta_m - \cos\delta_m\sin\delta_m & \cos\delta_m\cos\theta_m \\ \cos\delta_m\sin\delta_m\sin\theta_m - \cos\delta_m\sin\delta_m & \cos^2\delta_m + \sin\theta_m\sin^2\delta_m & \sin\delta_m\cos\theta_m \\ -\cos\delta_m\cos\theta_m & -\sin\delta_m\cos\theta_m & \sin\theta_m \end{bmatrix}$
$R_{4/3}$ may be written in short form as:
[00015] $R_m = R_{4/3} = \begin{bmatrix} R_{m\,1,1} & R_{m\,1,2} & R_{m\,1,3} \\ R_{m\,2,1} & R_{m\,2,2} & R_{m\,2,3} \\ R_{m\,3,1} & R_{m\,3,2} & R_{m\,3,3} \end{bmatrix}$, Eqn 12
θ_m can be found by solving the following equation:
[00016] $\theta_m = \tfrac{\pi}{2} - \operatorname{atan2}\!\left(\sqrt{R_{m\,1,3}^2 + R_{m\,2,3}^2},\ R_{m\,3,3}\right)$, Eqn 13
δ_m can be found by taking the ratio:
[00017] $\dfrac{R_{m\,2,3}}{R_{m\,1,3}} = \dfrac{\sin\delta_m\cos\theta_m}{\cos\delta_m\cos\theta_m} = \dfrac{\sin\delta_m}{\cos\delta_m} = \tan\delta_m$, Eqn 14
And then solving for δ_m:
$\delta_m = \operatorname{atan2}\!\left(R_{m\,2,3},\ R_{m\,1,3}\right)$, Eqn 15
[0103] The respective actuations for the four manipulator control wires 360 m_1, m_2, m_3, and m_4 may be calculated from the calculated angles θ_m and δ_m:
[00018] $m_1 = r_b\left(\theta_m - \tfrac{\pi}{2}\right)\cos\!\left(\delta_m - \tfrac{\pi}{4}\right)$, Eqn 16a
$m_2 = r_b\left(\theta_m - \tfrac{\pi}{2}\right)\cos\!\left(\delta_m - \tfrac{\pi}{4} - \beta\right)$, Eqn 16b
$m_3 = r_b\left(\theta_m - \tfrac{\pi}{2}\right)\cos\!\left(\delta_m - \tfrac{\pi}{4} - 2\beta\right)$, Eqn 16c
$m_4 = r_b\left(\theta_m - \tfrac{\pi}{2}\right)\cos\!\left(\delta_m - \tfrac{\pi}{4} - 3\beta\right)$, Eqn 16d
where r_b is the radius of the segments 336 and β is the angle between the openings in the segments 336 (in this case β = 90°). The wires m_1 and m_3 correspond to the first pair of control wires and the wires m_2 and m_4 correspond to the second pair of control wires. The above equations may thus be evaluated by the microprocessor 450 of the instrument processor circuit 118 to determine actuations m_1, m_2, m_3, and m_4 for causing the manipulator 332 to orient the distal end 352 at an orientation in the instrument workspace 324 corresponding to the orientation control signals received from the workstation processor circuit 114 at block 614.
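For illustration, the manipulator computations of paragraphs [0102] and [0103] may be sketched as follows, continuing the assumptions of the earlier sketches. The composition of Eqn 11 and the angle extraction of Eqns 13 and 15 invert each other for 0 ≤ θ_m < π/2; the straight case θ_m = π/2, where R_m1,3 = R_m2,3 = 0 and δ_m is undefined, is flagged in a comment because the disclosure does not specify its handling.

```python
import numpy as np

def rotation_4_3(theta_m, delta_m):
    """Eqn 11: R_4/3 = Rz(delta_m) @ Ry(pi/2 - theta_m) @ Rz(-delta_m)."""
    def Rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    def Ry(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return Rz(delta_m) @ Ry(np.pi / 2.0 - theta_m) @ Rz(-delta_m)

def manipulator_angles(R_m):
    """Eqns 13 and 15: recover (theta_m, delta_m) from R_4/3.
    When theta_m = pi/2 (straight), R_m[0,2] = R_m[1,2] = 0 and delta_m
    is undefined; atan2(0, 0) below would then be arbitrary."""
    theta_m = np.pi / 2.0 - np.arctan2(np.hypot(R_m[0, 2], R_m[1, 2]), R_m[2, 2])
    delta_m = np.arctan2(R_m[1, 2], R_m[0, 2])
    return theta_m, delta_m

def manipulator_wire_actuations(theta_m, delta_m, r_b, beta=np.pi / 2.0):
    """Eqns 16a-16d: the pi/4 phase reflects the 45 degree offset of the
    manipulator wire openings 368 from the positioner wire openings 366."""
    amp = r_b * (theta_m - np.pi / 2.0)
    return [amp * np.cos(delta_m - np.pi / 4.0 - n * beta) for n in range(4)]
```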
[0104] In this embodiment the end effector 316 connected to the distal end 352 of the manipulator 332 is also configured for an additional roll degree of freedom α about the Z_4 axis in the instrument workspace 324. The roll movement may be actuated in response to a rotation of the hand controller 122 (FIG. 5) about the z_r axis in the input workspace 522.
[0105] The input device 112 shown in FIG. 1 and FIG. 5 monitors hand movements of the operator via movements of the hand controller 122 grasped by the operator's hand. In another embodiment shown in FIG. 8, an alternative input device may be used to generate the position input signals and orientation input signals. In this embodiment, a motion controller device 800 includes a sensor 802 that monitors free movements of the operator's hands 804 within an input region 806. The workstation processor circuit 114 or another processor circuit within the motion controller device 800 processes signals produced by the sensor 802 to generate position input signals and orientation input signals based on free movements of the operator's hands 804. The input signals may be transmitted via a wired connection to the USB interface 404 or other suitable interface on the workstation processor circuit 114. In some embodiments, the movement signals may be filtered to extract movements of the operator's digits with respect to either the palm of the operator's hand or the wrist of the operator's hand for generating the input signals. As an example, the motion controller device 800 may be implemented using one of several hand tracking and haptics devices available from Ultraleap of California, United States.
[0106] In other embodiments the input signals may be received from other input devices, such as a virtual reality headset worn by the operator. Alternatively, the system 100 may be implemented as an autonomous surgical system that performs surgical operations under the control of an autonomous controller processor circuit. For example, surgical operations performed under the instruction of a surgeon may involve a repetitive step such as performing a suture movement. An autonomous processor circuit may be operably configured to autonomously generate the position input signals and the orientation input signals to perform the suture movement rather than receiving signals from a surgeon via the input device 112.
[0107] The instrument 110a in the embodiment described above includes articulated segments in the form of vertebra 340, 342, and 350 that provide a smoothly bendable positioner 334. Referring to FIG. 9, in another embodiment an instrument 900 includes a positioner 902 that includes elongated segments 904, 906, and 908. The segments 904, 906, and 908 of the positioner 902 are articulated at discrete joints 910, 912, and 914. The articulated segments 904-908 may include control wires (not shown) that run through the linkages and actuate the instrument 900 to cause bending at the discrete joints 910-914.
[0108] The instrument 900 also includes a manipulable wrist 916, which in this embodiment includes articulated segments as generally described above that provide a smoothly bendable linkage for positioning an end effector 918 in an instrument workspace 920. A second instrument 922 is similarly configured. The above described movement and instrument control embodiments may be implemented for the instruments 900 and 922 with minor implementation differences.
[0109] As disclosed above in connection with FIGS. 2A and 2B, the drive unit 106 is configured to cause the respective mounting interfaces 214 and 216 to advance or retract in a direction aligned with the Z-axis shown at 218, causing the positioner 334 and manipulator 332 either to enter the instrument workspace 324 or to be withdrawn from it. Referring to FIG. 10, the insertion tube 108 is shown with a single instrument 110a inserted within the bore 302. The instrument 110b and camera 204 have been omitted in FIG. 10. The rigid shaft portion 312 has been advanced or retracted such that the bulkhead segment 344 of the positioner 334 is just clearing the bore 302. If in FIG. 10 the rigid shaft portion 312 is currently being advanced, the positioner 334 is ready for actuation to move the distal segment 346 to a desired physical position within the instrument workspace 324, since the bulkhead segment 344 is clear of the bore 302. However, if the rigid shaft portion 312 is currently being retracted, the positioner 334 would need to have been actuated to assume the straightened condition shown in FIG. 10, which represents a first constraint on motion of the positioner 334. In the straightened condition the positioner 334 may be retracted into the bore 302 of the insertion tube 108 without causing a collision between the positioner segments and the bore 302. Such a collision would work against the actuation of the plurality of actuator slides 320 and may cause wear or damage to the instrument 110a.
[0110] If the bulkhead segment 344 is being further advanced out of the bore 302 into the instrument workspace 324, the positioner 334 is able to move the distal segment 346 within movement limitations due to the length of the positioner and the possible bend angles through which the positioner is capable of bending. These physical movement limitations associated with the instrument 110a place a second constraint on the motion of the positioner 334.
[0111] The first and second constraints are illustrated in FIG. 10 as a positioner operational envelope 1000 (shown in broken lines). The positioner operational envelope 1000 includes an insertion/retraction region 1002 associated with the first constraint. The insertion/retraction region 1002 represents a region of the instrument workspace within which the positioner 334 should be in a straightened condition for insertion or retraction of the instrument 110a into or out of the instrument workspace 324. The positioner operational envelope 1000 further includes a free movement region 1004 associated with the second constraint. The free movement region 1004 represents a region of the instrument workspace 324 within which the positioner 334 is able to move after the distal segment 346 of the positioner 334 is no longer disposed within the insertion/retraction region 1002 of the positioner operational envelope 1000.
[0112] The operation of the positioner operational envelope 1000 is further described with reference to FIGS. 11A-11D, in which the instrument 110a is shown in various conditions. Referring to FIG. 11A, in this condition only the manipulator 332 and distal segment 346 are clear of the bore 302 of the insertion tube 108. While the distal segment 346 is already outside of the bore 302, a remainder of the positioner 334 is still constrained within the bore and cannot be moved. The distal segment 346 thus remains within the positioner insertion/retraction region 1002 of the positioner operational envelope 1000, which indicates that the positioner is currently constrained.
[0113] A manipulator operational envelope 1100 may also be generated to define boundaries to movement of the distal end 352 of the manipulator 332 within the instrument workspace 324. In FIG. 11A, the manipulator 332 would be capable of movement within the manipulator operational envelope 1100 with respect to the distal segment 346.
[0114] Referring to FIG. 11B, the instrument 110a is shown in a condition where the manipulator 332, the distal segment 346, the intermediate segment 338, and a portion of the positioner 334 are all clear of the bore 302 of the insertion tube 108. However, the distal segment 346 remains just within the insertion/retraction region 1002 of the positioner operational envelope 1000, indicating that the positioner 334 remains constrained by the bore 302 of the insertion tube 108. The manipulator is still free to move within the manipulator operational envelope 1100, which in FIG. 11B has been advanced further into the instrument workspace 324.
[0115] Referring to FIG. 11C, the instrument 110a is shown in a condition where the distal segment 346 is well clear of the insertion/retraction region 1002 of the positioner operational envelope 1000. In this condition, the distal segment 346 is only constrained by the free movement region 1004 of the positioner operational envelope 1000. The positioner 334 is able to move within the respective free movement region 1004 of the positioner operational envelope 1000 and the manipulator 332 is able to move within the manipulator operational envelope 1100.
[0116] Referring to FIG. 11D, the instrument 110a as shown in FIG. 11C is now shown in a bent condition. Since the distal segment 346 is well clear of the insertion/retraction region 1002 of the positioner operational envelope 1000, the intermediate segment 338 and adjacent segments 340 and 342 may now move outside of the insertion/retraction region 1002 of the positioner operational envelope 1000. The distal segment 346 of the positioner 334 remains within the free movement region 1004 of the positioner operational envelope 1000. The intermediate segment 338 and adjacent segments 340 and 342 need not remain within the free movement region 1004 of the positioner operational envelope 1000. The positioner operational envelope 1000 thus defines boundaries to movement of the distal segment 346 of the positioner 334 within a portion of the instrument workspace 324.
[0117] Referring to FIG. 12, a process embodiment that includes blocks of codes for causing the workstation processor circuit 114 to implement the positioner operational envelope 1000 while generating the position control signal is shown generally at 1200. The process 1200 replaces block 604 in the process 600. The process 1200 begins at block 1202, which directs the microprocessor 400 of the workstation processor circuit 114 to generate the insertion/retraction region 1002 of the positioner operational envelope 1000. In this embodiment the insertion/retraction region 1002 is a cylindrical region within the instrument workspace 324 and may be generated as a three-dimensional cylindrical region in the fixed reference frame 700 (X_0, Y_0, Z_0) for the instrument 110a shown in FIG. 7A. In other embodiments the instrument 110 may be one of a set of articulating instruments of differing instrument types used in a surgical procedure and the positioner operational envelope 1000 may be a pre-defined positioner operational envelope for all differing instrument types. Alternatively, the positioner operational envelope 1000 may be a pre-defined positioner operational envelope selected based on the instrument type. In other embodiments the positioner operational envelope 1000 may be established based at least in part on surgical data associated with a target operational site or based at least in part on surgical data provided by a scan of the target operational site. In some embodiments the positioner operational envelope 1000 may be based at least in part on operator input.
[0118] Block 1204 then directs the microprocessor 400 to generate the free movement region 1004 of the positioner operational envelope 1000. In this embodiment the free movement region 1004 is a frustoconical region extending outwardly from the distal end of the insertion/retraction region 1002. The free movement region 1004 may also be generated as a three-dimensional conical region in the fixed reference frame 700 (X.sub.0, Y.sub.0, Z.sub.0).
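As a non-limiting sketch of blocks 1202 and 1204, the two regions might be parameterized and tested as below, with the end 348 of the insertion tube taken as z = 0 on the Z_0 axis. All dimensions and parameter names are assumptions made for this sketch; the disclosure does not fix them.

```python
import numpy as np

def in_insertion_retraction_region(p, radius, z_end):
    """Region 1002: a cylinder about the Z_0 axis from the end of the
    insertion tube (z = 0, assumed) out to z_end, within which the
    positioner 334 must remain straight."""
    x, y, z = p
    return np.hypot(x, y) <= radius and 0.0 <= z <= z_end

def in_free_movement_region(p, radius, z_end, half_angle, z_max):
    """Region 1004: a frustoconical volume whose small end matches the
    cylinder radius at z_end and widens with depth at half_angle."""
    x, y, z = p
    if not (z_end <= z <= z_max):
        return False
    return np.hypot(x, y) <= radius + (z - z_end) * np.tan(half_angle)
```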
[0119] In one embodiment the positioner operational envelope 1000 may be generated in the instrument workspace 324 and mapped from the instrument workspace into the input workspace 522. The generating of the alert signal at block 1208 may thus involve generating the alert signal when the position input signals received at the input device 112 represent a position in the input workspace that lies on or is outside the positioner operational envelope mapped to the input workspace.
[0120] The process 1200 then continues at block 1206, which directs the microprocessor 400 to determine whether the position input signals (received at block 602 of the process 600 in FIG. 6) represent a location in the instrument workspace 324 that is within the cylindrical region associated with the insertion/retraction region 1002. If the input signals represent a location that is within the cylindrical region, the microprocessor 400 is directed to block 1208. Block 1208 directs the microprocessor 400 to generate an alert signal to alert the operator that the position input signals represent a position in the instrument workspace 324 that lies on or outside the positioner operational envelope 1000.
[0121] In one embodiment, the input device 112 may deliver haptic feedback to the operator's hand in response to the operator alert signal. The alert signals may thus be in the form of a haptic feedback signal transmitted back to the input device 112 via the USB interface 404. The input device 112, on receiving the haptic feedback signal, causes a haptic force or vibration to be generated via the hand controller 122 that alerts the operator to the alert condition. Alternatively or additionally, the microprocessor 400 may generate display signals that cause an alert to be displayed on the display 120 as an overlay image shown at 130 in FIG. 1.
[0122] Block 1210 then directs the microprocessor to generate modified position control signals based on the input signals received and the insertion/retraction region 1002 of the positioner operational envelope 1000. For example, if the distal segment 346 is still within the insertion/retraction region 1002 of the positioner operational envelope 1000, the prior position control signals may be maintained so as to prevent actuation of the positioner 334 that would cause bending.
[0123] As another example, if the instrument 110a is currently being withdrawn from the instrument workspace 324, the position control signals may be generated to cause a bent positioner 334 to be straightened. Referring back to FIG. 11D, block 1210 may direct the microprocessor 400 to determine that the position input signals received from the input device 112 are associated with a retraction of the distal segment 346 from the instrument workspace 324. In this case, block 1210 directs the microprocessor 400 to determine whether the received position input signal represents a physical position of the distal segment 346 associated with the positioner 334 being in a bent condition. If the positioner 334 is in a bent condition, block 1210 directs the microprocessor 400 to generate modified position control signals to cause the positioner 334 to be straightened while being retracted into the insertion/retraction region 1002.
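A hedged sketch of this retraction handling at block 1210, continuing the illustrative PositionerEnvelope above; the state record (with assumed x, y, z and bend_angle fields) and the choice of a purely axial setpoint are assumptions for illustration:

def modified_position_control(envelope, state, target):
    # state: current distal segment pose; target: commanded position (x, y, z).
    retracting = target[2] < state.z  # moving back toward the insertion tube along Z0
    if retracting and envelope.in_insertion_region(*target) and state.bend_angle > 0.0:
        # Straighten while retracting: keep only the axial component of the command
        # so the positioner is drawn straight back into the insertion/retraction region.
        return (0.0, 0.0, target[2])
    return target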
[0124] Similarly, block 1210 may direct the microprocessor 400 to determine whether the received position input signal represents a physical position of the distal segment 346 that lies on or outside the positioner operational envelope 1000. If this is the case, block 1210 directs the microprocessor 400 to generate modified position control signals that constrain movements of the distal segment 346 to be within the positioner operational envelope 1000.
[0125] Block 1210 replaces block 604 in the process 600 and the modified control signals are transmitted to the instrument processor circuit 118 via the motion control interface 408. The instrument processor circuit 118 thus receives a modified position control signal and reacts accordingly when actuating the positioner 334.
[0126] If, at block 1206, the position input signals represent a location that is not within the cylindrical region associated with the insertion/retraction region 1002, the microprocessor 400 is directed to block 1214. Block 1214 directs the microprocessor 400 to determine whether the position input signals represent a location in the instrument workspace 324 that is outside the conical region associated with the free movement region 1004. If the location is outside the free movement region 1004, block 1214 directs the microprocessor 400 to block 1208, which generates an alert signal generally as described above. Block 1210 then generates modified position control signals that maintain the physical position of the distal segment 346 inside the free movement region 1004 of the positioner operational envelope 1000. As described above, the alert signal may initiate haptic feedback, a displayed alert, or an audible alert. Block 1210 then directs the microprocessor 400 to transmit the modified control signals to the instrument processor circuit 118 via the motion control interface 408. The instrument processor circuit 118 responds by maintaining the current position of the distal segment 346 of the positioner 334.
[0127] If at block 1214 the location based on the position input signals remains within the free movement region 1004, block 1214 directs the microprocessor 400 to block 1216. Block 1216 directs the microprocessor 400 to generate position control signals based on the received position input signals and directs the microprocessor to transmit the position control signals to the instrument processor circuit 118 via the motion control interface 408. The instrument processor circuit 118 responds by actuating the positioner 334 to move the distal segment 346 to the position represented by the position input signals.
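Pulling the branches of blocks 1206, 1208, 1210, 1214 and 1216 together, a compact behavioral sketch follows; alert() stands in for the haptic/display/audible alert path, and none of these helpers are the patent's actual interfaces:

def process_1200_step(envelope, state, target, alert):
    if envelope.in_insertion_region(*target):            # block 1206
        alert("commanded position is in the insertion/retraction region")  # block 1208
        return modified_position_control(envelope, state, target)          # block 1210
    if not envelope.in_free_movement_region(*target):    # block 1214
        alert("commanded position lies outside the operational envelope")  # block 1208
        return (state.x, state.y, state.z)               # block 1210: hold current position
    return target                                        # block 1216: pass the input through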
[0128] The process 1200 thus causes position input signals that are not within the positioner operational envelope 1000 to be handled differently: the operator is alerted to the undesirable movement and the movement is prevented from being effected. In instances where operator input received at the hand controller 122 of the input device 112 could result in damage to the instrument (such as an attempt to retract a bent instrument), the instrument processor circuit 118 intercedes to force the straightening of the positioner 334 before it is retracted into the bore 302 of the insertion tube 108.
[0129] In some embodiments the instrument processor circuit 118 may be configured to generate display signals for displaying a view of the instrument workspace on the display 120 shown in FIG. 1. The generated display signals may include display signals that cause an overlay image representing the positioner operational envelope 1000 to be displayed. The positioner operational envelope 1000 may be displayed as a colored or shaded region or an outline of the region, for example.
[0130] The process 1200 described above thus causes generation of an alert when the distal segment 346 of the positioner 334 would be positioned outside the positioner operational envelope 1000 based on current input signals being received at the input device 112. The manipulator operational envelope 1100 may be treated in the same manner and an alert signal generated by the microprocessor 400 when the orientation input signals represent an orientation in the instrument workspace 324 that would cause the distal end 352 of the manipulator 332 or the end effector 316 to lie on or outside the manipulator operational envelope 1100.
[0131] The instrument 110a may be one of a set of articulating instruments of differing instrument types used in a surgical procedure and the manipulator operational envelope 1100 may be a pre-defined manipulator operational envelope for all differing instrument types. Alternatively the manipulator operational envelope 1100 may be a pre-defined manipulator operational envelope selected based on the instrument type. In other embodiments the manipulator operational envelope 1100 may be established based at least in part on surgical data associated with a target operational site or based at least in part on surgical data provided by a scan of the target operational site. In some embodiments the manipulator operational envelope 1100 may be based at least in part on operator input.
[0132] FIGS. 13A and 13B show different poses of the positioner 334 and manipulator 332 of the instrument 110a that highlight the differences between the control mode described in the background section and the control mode disclosed in the above embodiments. In both of these Figures, the position of the hand controller 122 is assumed to be aligned with the z.sub.1-axis in input workspace 522 (FIG. 5) and the orientation of the hand controller is rotated to the left about the y.sub.1-axis.
[0133] Referring to FIG. 13A, the distal segment 346 of the positioner 334 in instrument workspace 324 is located at a position Z.sub.3-X.sub.3, which due to the hand controller 122 being aligned with the z.sub.1-axis in input workspace 522 is located on the Z.sub.0 axis in instrument workspace 324. The position of the distal segment 346 thus corresponds to the position of the hand controller 122 in input workspace 522. The manipulator 332 is bent to the left based on the orientation of the hand controller 122 to place the distal end 352 and end effector 316 at orientations X.sub.4-Z.sub.4 and X.sub.5-Z.sub.5 respectively. FIG. 13A is thus representative of the instrument 110a being controlled in accordance with the disclosed embodiments herein.
[0134] Referring to FIG. 13B, the end effector 316 is located at a position Z.sub.5-X.sub.5, which corresponds to the position of the hand controller 122 being aligned with the z.sub.1-axis in input workspace 522. The end effector 316 is also oriented based on the orientation of the hand controller 122 to place the distal end 352 and end effector 316 at orientations X.sub.4-Z.sub.4 and X.sub.5-Z.sub.5 respectively. Thus, while the orientation of the end effector 316 is the same in both FIG. 13A and FIG. 13B, the actual position of the end effector X.sub.5-Z.sub.5 differs. In FIG. 13A, the end effector X.sub.5-Z.sub.5 is bent off the Z.sub.0-axis in response to the orientation of the hand controller 122.
[0135] Additionally in FIG. 13B, the distal segment 346 of the positioner 334 in instrument workspace 324 is located at a position Z.sub.3-X.sub.3, which is calculated by the instrument processor circuit 118 to place the end effector 316 at the position in instrument workspace 324 corresponding to the position of the hand controller 122 in input workspace 522. The distal segment 346 is thus located to the right of the Z.sub.0-axis. FIG. 13B is thus representative of the instrument 110a being controlled in accordance with the control mode described in the background section.
[0136] In one embodiment, the instrument processor circuit 118 may be responsive to a positioning mode change signal to change the control mode between the FIG. 13A and FIG. 13B control modes. For example, when the positioning mode change signal is received while in the mode depicted in FIG. 13A, the instrument processor circuit 118 may be configured to discontinue control based on the separate position control signals and the manipulation control signals. The processor circuit 118 would thus combine the position input signals and the orientation input signals received from the input device 112 to generate a desired position and orientation of the distal end 352 of the manipulator or the end effector 316. Additionally, the instrument processor circuit 118 may also be configured to perform an inverse kinematic computation on the desired position and orientation of the distal end 352 of the manipulator to determine a position for the distal segment 346 of the positioner 334. The instrument processor circuit 118 would further determine actuation parameters for the positioner 334 and the manipulator 332 associated with the desired position and orientation of the distal end 352 of the manipulator.
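A minimal planar sketch (assumptions throughout) of the FIG. 13B-style computation: given a desired end-effector position and orientation in the X0-Z0 plane, and treating the manipulator as a rigid link of length L, back out where the positioner's distal segment must sit. The real system performs a full inverse kinematic computation over all joints; this shows only the geometric idea.

import math

def distal_segment_for(end_effector_xz, theta, manipulator_length):
    # theta: manipulator orientation about the y-axis, 0 = aligned with Z0.
    ex, ez = end_effector_xz
    # Step back along the manipulator direction from the end effector.
    sx = ex - manipulator_length * math.sin(theta)
    sz = ez - manipulator_length * math.cos(theta)
    return (sx, sz)

# With theta rotated left (negative), the distal segment lands to the right of
# the Z0 axis, as described above for FIG. 13B:
# distal_segment_for((0.0, 100.0), math.radians(-30), 40.0) -> (20.0, 65.36...)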
[0137] Each of the control modes of FIG. 13A and FIG. 13B has its own advantages. The control mode in accordance with the embodiments disclosed herein has the advantage of controlling the location of the distal segment 346 of the positioner 334. The separate orientation control of the manipulator 332 may be better suited for performing some operations. The control mode of FIG. 13B may be advantageous when the system 100 is operated autonomously by the processor circuit 118 without input via the input device 112.
[0138] While specific embodiments have been described and illustrated, such embodiments should be considered illustrative only and not as limiting the disclosed embodiments as construed in accordance with the accompanying claims.
Claims
1. A method for controlling an articulating surgical instrument, the instrument comprising a manipulator and a positioner, the positioner being actuable to position a distal segment of the positioner within an instrument workspace, the manipulator attached to the distal segment of the positioner and including a distal end configured for mounting an operational tool for performing an operation within the instrument workspace, the manipulator being actuable to manipulate the distal end of the manipulator within the instrument workspace, the method comprising: receiving input signals at a processor circuit, the input signals including: position input signals representing a position within an input workspace; and orientation input signals representing an orientation within the input workspace; causing the processor circuit to generate position control signals for actuating the positioner to move the distal segment within the instrument workspace to a physical position represented by the position input signals; and causing the processor circuit to generate manipulation control signals based on the orientation input signals for actuating the manipulator to orient the distal end within the instrument workspace.
2. The method of claim 1 wherein receiving input signals comprises receiving input signals from an autonomous controller processor circuit operably configured to autonomously generate the position input signals and the orientation input signals.
3. The method of claim 1 wherein receiving input signals comprises generating the input signals in response to movements of an operator's hand.
4. The method of claim 3 wherein generating the input signals in response to movements of an operator's hand comprises at least one of: receiving movement signals from a sensor disposed to monitor free movements of an operator's hand within an input region; receiving movement signals from a movement sensor grasped or attached to the operator's hand, the movement sensor being responsive to free movements of the operator's hand; or receiving movement signals from a virtual reality headset worn by an operator.
5. The method of claim 4 further comprising causing the processor circuit to process the movement signals to generate the position input signals and orientation input signals.
6. The method of claim 5 wherein causing the processor circuit to process the movement signals comprises filtering the free movements of the operator's hand to extract movements of an operator's digits with respect to one of a palm of the operator's hand or a wrist of the operator's hand.
7. The method of claim 3 wherein generating the input signals comprises: generating the position input signals in response to translations of the operator's hand in three translational degrees of freedom within the input workspace; and generating the orientation input signals in response to rotations of the operator's hand in at least two rotational degrees of freedom within the input workspace.
8. The method of claim 7 wherein generating the position input signals comprises: kinematically transmitting translational movements of the operator's hand in three translational degrees of freedom via a first kinematic structure to a plurality of encoders operable to produce translational movement signals; and processing the translational movement signals to generate the position input signals.
9. The method of claim 7 wherein generating the orientation input signals comprises: kinematically transmitting orientation movements of the operator's hand in the at least two rotational degrees of freedom via a second kinematic structure to a plurality of encoders operable to produce orientation signals; and processing the orientation signals to generate the orientation input signals.
10. The method of claim 1 wherein the positioner comprises a first plurality of segments extending between a bulkhead segment and the distal segment, the first plurality of segments being selectively actuable by transmitting actuation forces via a first plurality of control wires extending through the first plurality of segments, and wherein actuating the positioner comprises generating the position control signals to selectively actuate the first plurality of control wires to cause respective movements of the first plurality of segments to position the distal segment at the physical position.
11. The method of claim 10 wherein the manipulator comprises a second plurality of segments extending between the distal segment of the positioner and the distal end of the manipulator, the second plurality of segments being moveable in response to transmitting actuation forces delivered via a second plurality of control wires extending through the first plurality of segments and through the second plurality of segments, and wherein actuating the manipulator comprises generating manipulator control signals to selectively actuate the second plurality of control wires to cause respective movements of the second plurality of segments to orient the distal end within the instrument workspace.
12. The method of claim 10 wherein the first plurality of segments comprise a plurality of adjacently stacked vertebra extending between the bulkhead segment and the distal segment and wherein the control wires are coupled to the distal segment and wherein selectively actuating the first plurality of control wires comprises actuating the first plurality of control wires to cause the distal segment to move to position the distal segment at the physical position, and wherein each vertebra is coupled to move in at least one of a pitch axis and a yaw axis with respect to adjacent vertebra and wherein the actuation forces delivered via the first plurality of control wires cause the respective vertebra to be angled with respect to each other to bend the instrument in a continuous arc.
13. The method of claim 12 wherein the first plurality of segments comprise a plurality of elongate segments coupled together by respective joints.
14. The method of claim 1 wherein the positioner is coupled to a rigid shaft and wherein: the position input signals comprise an insertion depth position within the input workspace; and actuating the positioner to move the distal segment within the instrument workspace comprises causing the rigid shaft to be advanced or retracted in response to the insertion depth position.
15. The method of claim 14 wherein the rigid shaft is received in a bore of an insertion tube and wherein the instrument workspace extends outwardly from an end of the insertion tube.
16. The method of claim 15 wherein causing the rigid shaft to be retracted comprises causing the rigid shaft to be retracted such that the distal segment of the positioner is disposed proximate the end of the insertion tube and the manipulator remains extending outwardly from the end of the insertion tube and remains capable of movement with respect to the distal segment.
17. The method of claim 1 further comprising: generating a positioner operational envelope defining boundaries to movement of the distal segment of the positioner within a portion of the instrument workspace; and generating an alert signal when the position input signals represent a position in the instrument workspace that lies on or outside the positioner operational envelope.
18. The method of claim 17 wherein receiving input signals comprises causing an input device to generate the input signals in response to movements of an operator's hand and further comprising delivering haptic feedback via the input device to the operator's hand in response to the alert signal.
19. The method of claim 17 wherein generating the positioner operational envelope comprises generating the positioner operational envelope in the instrument workspace and mapping the positioner operational envelope from the instrument workspace to the input workspace and wherein generating the alert signal comprises generating the alert signal when the position input signals represent a position in the input workspace that lies on or is outside the positioner operational envelope mapped to the input workspace.
20. The method of claim 17 wherein the processor circuit is operably configured to generate display signals for displaying a view of the instrument workspace and wherein generating the display signals comprises generating display signals including an overlay image corresponding to the positioner operational envelope.
21. The method of claim 17 wherein the positioner operational envelope comprises an insertion/retraction region that represents a region of the instrument workspace within which the positioner should be constrained in a straightened condition for insertion or retraction of the instrument into or out of the instrument workspace.
22. The method of claim 21 wherein the positioner operational envelope further comprises a free movement region that represents a region of the instrument workspace within which the positioner is able to move after the positioner is disposed outside of the insertion/retraction region of the positioner operational envelope.
23. The method of claim 17 wherein the articulating surgical instrument comprises one of a set of articulating instruments of differing instrument types used in a surgical procedure and wherein the positioner operational envelope comprises at least one of: a pre-defined positioner operational envelope for all differing instrument types; a pre-defined positioner operational envelope selected based on the instrument type; a positioner operational envelope based at least in part on surgical data associated with a target operational site; a positioner operational envelope based at least in part on surgical data provided by a scan of the target operational site; or a positioner operational envelope based at least in part on operator input.
24. The method of claim 17 further comprising: generating a manipulator operational envelope defining boundaries to movement of the distal end of the manipulator within a portion of the instrument workspace; and generating an alert signal when the orientation input signals represent an orientation in the instrument workspace that would cause the distal end of the manipulator or the operational tool to lie on or outside the manipulator operational envelope.
25. The method of claim 24 wherein the articulating surgical instrument comprises one of a set of articulating instruments of differing instrument types used in a surgical procedure and wherein the manipulator operational envelope comprises at least one of: a pre-defined manipulator operational envelope for all differing instrument types; a pre-defined manipulator operational envelope selected based on the instrument type; a manipulator operational envelope based at least in part on surgical data associated with a target operational site; a manipulator operational envelope based at least in part on surgical data provided by a scan of the target operational site; or a manipulator operational envelope based at least in part on operator input.
26. The method of claim 21 further comprising, in response to receiving position input signals that are associated with a retraction of the distal segment from the instrument workspace: causing the processor circuit to determine whether the position input signals represent a physical position of the distal segment associated with the positioner being in a bent condition; and causing the processor circuit to generate modified position control signals to cause the positioner to be straightened while being retracted into the insertion/retraction region of the instrument workspace.
27. The method of claim 17 further comprising causing the processor circuit to determine whether the position input signals represent a physical position of the distal segment that lies on or outside the positioner operational envelope and wherein causing the processor circuit to generate the position control signals comprises causing the processor circuit to generate modified position control signals that constrain movements of the distal segment to be within the positioner operational envelope.
28. The method of claim 1 further comprising in response to receiving a positioning mode change signal, causing the processor circuit to: combine the position input signals and the orientation input signals to generate a desired position and orientation of the distal end of the manipulator; and perform an inverse kinematic computation on the desired position and orientation of the distal end of the manipulator to determine a position for the distal segment of the positioner and actuation parameters for the positioner and the manipulator associated with the desired position and orientation of the distal end of the manipulator.
BOOM BOOM!
Brake Assembly For Robotic Surgery System
DOCUMENT ID
US 11529986 B2
DATE PUBLISHED
2022-12-20
INVENTOR INFORMATION
NAME | CITY | STATE | ZIP CODE | COUNTRY
Basco de Rosa Payne; Angelica | Copley | OH | N/A | US
Pratt; Spencer Scott | Cary | NC | N/A | US
Shipley; Abraham Allen | Apex | NC | N/A | US
Pflaumer; Hans Christian | Apex | NC | N/A | US
APPLICANT INFORMATION
NAME
Titan Medical Inc.
CITY
Toronto
STATE
N/A
ZIP CODE
N/A
COUNTRY
CA
AUTHORITY
N/A
TYPE
assignee
ASSIGNEE INFORMATION
NAME
Titan Medical Inc.
CITY
Toronto
STATE
N/A
ZIP CODE
N/A
COUNTRY
CA
TYPE CODE
03
APPLICATION NO
17/301269
DATE FILED
2021-03-30
DOMESTIC PRIORITY (CONTINUITY DATA)
continuation parent-doc US 16419743 20190522 US 10988158 child-doc US 17301269
US CLASS CURRENT:
1/1
CPC CURRENT
TYPE | CPC | DATE
CPCI | B62B 3/00 | 2013-01-01
CPCI | B62B 5/0433 | 2013-01-01
CPCI | B62B 5/04 | 2013-01-01
CPCI | B62B 5/0461 | 2013-01-01
CPCI | B60B 33/0092 | 2013-01-01
CPCI | F16D 65/28 | 2013-01-01
CPCA | B62B 2005/0471 | 2013-01-01
CPCA | F16D 2121/04 | 2013-01-01
CPCA | B62B 2301/044 | 2013-01-01
CPCA | B62B 2202/00 | 2013-01-01
CPCA | F16D 2125/46 | 2013-01-01
CPCA | F16D 2125/64 | 2013-01-01
CPCA | F16D 2121/14 | 2013-01-01
Abstract
A robotic surgery cart has a pair of rear wheel assemblies and a pair of front wheel assemblies. A brake assembly for the robotic surgery cart includes a gearbox interposed between and connected to the pair of rear wheel assemblies by rotatable shafts. Elongate actuators extend between and interconnect the rotatable shafts and brake mechanisms for the front wheel assemblies. A pedal lever is rotatably coupled to the gearbox and can rotate clockwise by pressing one portion of the pedal lever and can rotate counterclockwise by pressing another portion of the pedal lever. Rotation of the pedal lever causes the gearbox to rotate the rotatable shafts to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of the elongate actuators to actuate the brake mechanisms of the front wheel assemblies, such that the wheels of the front and rear wheel assemblies brake substantially simultaneously.
Background/Summary
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
(1) Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
BACKGROUND
Field
(2) The present disclosure generally relates to robotic surgical systems, and more particularly to a brake assembly for a robotic surgical system.
Description of the Related Art
(3) Robotic surgery systems generally include an operator interface that receives operator input from a surgeon and causes corresponding movements of surgical tools within a body cavity of a patient to perform a surgical procedure. The operator interface can be on a workstation that the surgeon interfaces with to perform a surgical procedure using the surgical tools. The surgical tools can be on a cart separate from the workstation. The cart can be mobile, allowing hospital staff to move the cart into an operating room prior to the surgical procedure, and to remove it from the operating room once the surgical procedure has been completed.
SUMMARY
(4) In accordance with one aspect of the disclosure, a brake assembly is provided on a cart of a robotic surgery system. The brake system is actuatable by a user to lock and unlock all the wheels of the cart substantially simultaneously.
(5) In accordance with another aspect of the disclosure, a brake assembly for a robotic surgery cart is provided. The brake assembly comprises a pair of rear wheel assemblies, each having a brake mechanism actuatable to selectively brake a wheel of each of the rear wheel assemblies, and a pair of front wheel assemblies, each having a disc brake assembly actuatable to selectively brake a rotor operatively coupled to a wheel of each of the front wheel assemblies. The brake assembly also comprises a gearbox interposed between the pair of rear wheel assemblies, a pair of rotatable shafts extending along a first axis and interconnecting the gearbox with the pair of rear wheel assemblies, and a pair of elongate actuators interconnecting the pair of rotatable shafts and the disc brake assemblies of the front wheel assemblies. The brake assembly also comprises a pedal lever rotatably coupled to the gearbox and configured to rotate about a second axis that is generally perpendicular to the first axis, the pedal lever configured to rotate clockwise by pressing on one portion of the pedal lever and to rotate counterclockwise by pressing on another portion of the pedal lever. Rotation of the pedal lever about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of the pair of elongate actuators to actuate the disc brake assemblies to substantially lock the rotors of the front wheel assemblies, such that the wheels of the front and rear wheel assemblies brake substantially simultaneously.
(6) In accordance with another aspect of the disclosure, a brake assembly for a robotic surgery cart is provided. The brake assembly comprises a pair of rear wheel assemblies, each having a brake mechanism actuatable to selectively brake a wheel of the rear wheel assembly, at least one front wheel assembly, a rotor operatively coupled to a wheel of the at least one front wheel assembly, a disc brake assembly actuatable to selectively brake the rotor, and a gearbox interposed between the pair of rear wheels. The brake assembly also comprises a pair of rotatable shafts extending along a first axis and interconnecting the gearbox with the pair of rear wheels, and at least one elongate actuator interconnecting at least one of the pair of rotatable shafts and the disc brake assembly. The brake assembly also comprises a pedal lever rotatably coupled to the gearbox, the pedal lever configured to rotate in a first direction by pressing on one portion of the pedal lever and to rotate in a second direction by pressing on another portion of the pedal lever. Rotation of the pedal lever causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of the at least one elongate actuator to actuate the disc brake assembly to substantially lock the rotor, such that the rear wheel assemblies and the at least one front wheel assembly brake substantially simultaneously.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) FIG. 1 illustrates a robotic surgery system.
(2) FIG. 2A is a front perspective view of a baseplate assembly of a cart of the robotic surgery system of FIG. 1 along plane 2-2.
(3) FIG. 2B is a bottom view of the baseplate assembly of FIG. 2A.
(4) FIG. 3 is a front perspective view of another baseplate assembly for the cart of the robotic surgery system of FIG. 1.
(5) FIG. 4 is a rear perspective view of the baseplate assembly in FIG. 3.
(6) FIG. 5 is an exploded view of the baseplate assembly of FIG. 3.
(7) FIG. 6 is a bottom view of the baseplate assembly of FIG. 3.
(8) FIG. 7A is a perspective rear view of the brake assembly of the baseplate assembly of FIG. 3 with the baseplate removed.
(9) FIG. 7B is a partial rear view of the brake assembly of the baseplate assembly of FIG. 3.
(10) FIG. 8 is a partial perspective bottom view of the brake assembly attached to the baseplate of the baseplate assembly.
(11) FIG. 9 is an exploded view of a front wheel of the cart, showing a portion of the brake assembly of the base plate assembly.
DETAILED DESCRIPTION
(12) Overview of Robotic Surgery System
(13) FIG. 1 illustrates a robotic surgery system 100. The robotic surgery system 100 includes a workstation 102 and an instrument station or a patient cart 104. The patient cart 104 includes at least one tool mountable on a moveable instrument mount, central unit or drive unit 106 that houses an instrument drive (not shown) for manipulating the tool. The tool may include an insertion device 108 that can support at least one surgical instrument (hereinafter to be interchangeably used with an “instrument” or “surgical tool”) and a camera (not shown) that images a surgical site. The workstation 102 may also include a tool such as an instrument clutch (that may optionally be implemented by a foot pedal described below). The insertion device 108 can optionally support two or more instruments (not shown). The camera may optionally include a primary camera and at least one secondary camera. The primary camera and the secondary camera may provide different viewing angles, perform different functions and/or produce different images. At least one of the primary camera and the secondary camera may optionally be a two-dimensional (2D) or a three-dimensional (3D) camera. FIG. 1 is merely an example of a robotic surgery system, and certain elements may be removed, other elements added, two or more elements combined, or one element can be separated into multiple elements depending on the specification and requirements of the robotic surgery system.
(14) The workstation 102 includes an input device for use by a user (for example, a surgeon; hereinafter to be interchangeably used with an “operator”) for controlling the instrument via the instrument drive to perform surgical operations on a patient. The input device may optionally be implemented using a haptic interface device available from Force Dimension, of Switzerland, for example. The input device optionally includes a right input device 132 and a left input device 112 for controlling respective right and left instruments (not shown). The right input device 132 includes a right hand controller 122 (hereinafter to be interchangeably used with a “hand grip” or “handpiece”) and the left input device 112 includes a left hand controller 124. The right and left hand controllers 122 and 124 may optionally be mechanically or electrically coupled to the respective input devices 132 and 112. Alternatively, the right and left hand controllers 122 and 124 may be wirelessly coupled to the respective input devices 132 and 112 or may be wirelessly coupled directly to the workstation 102. In some cases, when there are two instruments at the instrument station 104, the right and left hand controllers 122 and 124 may respectively control the two instruments. In some cases, when there are more than two instruments, the right and left hand controllers 122 and 124 may be used to select two of the multiple instruments that an operator wishes to use. In some cases, when there is only one instrument, one of the right and left hand controllers 122 and 124 may be used to select the single instrument.
(15) The input devices 132 and 112 may generate input signals representing positions of the hand controllers 122 and 124 within an input device workspace (not shown). In some cases where the input devices 132 and 112 are coupled directly and wirelessly to the workstation, they would include the necessary sensors to allow wireless control, such as an accelerometer, a gyroscope and/or a magnetometer. In other cases, a wireless connection of the input devices 132 and 112 to the workstation 102 may be accomplished by the use of camera systems alone or in combination with the described sensors. The sensors described above for wireless functionality may also be placed in each handpiece to be used in conjunction with the input devices 132 and 112 to independently verify the input device data. The workstation 102 also includes a workstation processor circuit 114, which is in communication with the input devices 132 and 112 for receiving the input signals.
(16) The workstation 102 also includes a display 120 in communication with the workstation processor circuit 114 for displaying real time images and/or other graphical depictions of a surgical site produced by the camera associated with the instrument. The workstation 102 may optionally include right and left graphical depictions (not shown) displayed on the display 120 respectively for the right and left side instruments (not shown). The graphical depictions may optionally be displayed at a peripheral region of the display 120 to prevent obscuring a live view of the surgical workspace also displayed on the display. The display 120 may further be operable to provide other visual feedback and/or instructions to the user. A second auxiliary display 123 may be utilized to display auxiliary surgical information to the user (surgeon), displaying, for example, patient medical charts and pre-operation images. In some cases, the auxiliary display 123 may be a touch display and may also be configured to display graphics representing additional inputs for controlling the workstation 102 and/or the patient cart 104. The workstation 102 further includes a footswitch or foot pedal 126, which is actuatable by the user to provide input signals to the workstation processor circuit 114. In one case, the signal provided to the workstation processor circuit 114 may inhibit movement of the instrument while the footswitch 126 is depressed.
(17) The patient cart 104 includes an instrument processor circuit 118 for controlling the central unit 106, insertion device 108, one or more instruments and/or one or more cameras. The instrument processor circuit 118 is in communication with the workstation processor circuit 114 via an interface cable 116 for transmitting signals between the workstation processor circuit 114 and the instrument processor circuit 118. In some cases, communication between the workstation processor circuit 114 and the instrument processor circuit 118 may be wireless or via a computer network, and the workstation 102 may even be located remotely from the instrument station 104. Input signals are generated by the right and left input devices 132 and 112 in response to movement of the hand controllers 122 and 124 by the user within the input device workspace and the instrument is spatially positioned in a surgical workspace in response to the input signals.
(18) Additional details of the robotic surgery system 100 are described in U.S. patent application Ser. No. 16/174,646 filed on Oct. 30, 2018, the entirety of which is hereby incorporated by reference and should be considered a part of this specification.
(19) Braking Assembly
(20) FIGS. 2A-9 illustrate a braking assembly 200, 200' for a mobile cart, such as an instrument station or the patient cart 104 of the robotic surgical system 100.
(21) FIGS. 2A-2B show a braking assembly 200 of the patient cart 104, that at least partially defines the bottom portion of the patient cart 104 taken along line 2-2 in FIG. 1. The braking assembly 200 can have a baseplate 210, a pair of rear wheel assemblies 220 (left rear wheel assembly 220A and right rear wheel assembly 220B), a pair of front wheel assemblies 230 (left front wheel assembly 230A and right front wheel assembly 230B), a gearbox assembly 250 and a pedal assembly 260. The rear wheel assemblies 220, front wheel assemblies 230 and gearbox assembly 250 can be attached to the baseplate 210 (e.g. to an underside of the base plate 210). Optionally, the pair of rear wheel assemblies 220 are casters 220A, 220B (not shown). Optionally, the pair of front wheel assemblies 230 are casters 230A, 230B. The pedal assembly 260 can have a depressible pedal 262 movably coupled to the gearbox assembly 250, and actuatable by an operator (e.g., by pressing on the pedal with their foot) to actuate the gearbox assembly 250 to lock and unlock the pair of rear and front wheel assemblies 220, 230 substantially simultaneously, as further discussed below. In one implementation, the pedal assembly 260 engages the gearbox assembly 250 in a push-push manner, such that pushing the pedal 262 down causes the gearbox assembly 250 to lock the pair of rear and front wheel assemblies 220, 230 (e.g., inhibit or prevent them from rotating), and pushing the pedal 262 down a second time causes the gearbox assembly 250 to unlock the pair of rear and front wheel assemblies 220, 230 (e.g., allowing them to rotate).
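The push-push behavior described above amounts to a two-state toggle. A purely behavioral sketch follows (the actual mechanism in FIGS. 2A-2B is mechanical, not firmware; the class and method names are illustrative assumptions):

class PushPushBrake:
    def __init__(self):
        self.locked = False

    def press_pedal(self):
        # First press locks all four wheel assemblies; the next press unlocks them.
        self.locked = not self.locked
        return "locked" if self.locked else "unlocked"

brake = PushPushBrake()
assert brake.press_pedal() == "locked"
assert brake.press_pedal() == "unlocked"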
(22) FIGS. 3-9 show a braking assembly 200' similar to the braking assembly 200 described above in connection with FIGS. 2A-2B. Thus, reference numerals used to designate the various components of the braking assembly 200' are identical to those used for identifying the corresponding components of the braking assembly 200 in FIGS. 2A-2B, except that a “'” is added to the numerical identifier. Therefore, the structure and description for the various components of the braking assembly 200 in FIGS. 2A-2B are understood to also apply to the corresponding components of the braking assembly 200' in FIGS. 3-9, except as described below.
(23) The braking assembly 200' differs from the braking assembly 200 in that at least a portion of the gearbox assembly 250' is mounted at least in part to a top surface 211A' of the baseplate 210'. Additionally, the pair of front wheel assemblies 230' are not casters. Further, the pedal assembly 260' is rotatably coupled to the gearbox assembly 250' (see FIG. 7B). The pedal assembly 260' includes a first pedal 262' and a second pedal 264', where actuation of one of the pedals 262', 264' locks the pair of front and rear wheel assemblies 220', 230' substantially simultaneously to lock the braking assembly 200' and patient cart 104 in place, and actuation of the other of the pedals 262', 264' unlocks the pair of front and rear wheel assemblies 220', 230', allowing the braking assembly 200' and patient cart 104 to be moved. In one implementation, the rear wheel assemblies 220A', 220B' can be casters, such as casters from Tente International GmbH.
(24) With reference to FIGS. 4-6, the gearbox assembly 250' can optionally have a front mounting plate 252', a top mounting plate 254', where at least a portion of the front and top mounting plates 252', 254' attach to the baseplate 210', and a gearbox 256'. The gearbox 256' can be a right angle gearbox with an input shaft (not shown) that couples to the pedal assembly 260' and two output shafts (not shown) that extend generally perpendicular to the input shaft. The gearbox 256' can have one or more gears (e.g., bevel gears) that translate rotation of the input shaft about an axis Y (via rotation of the pedal assembly 260') into rotation of the output shafts (not shown) of the gearbox 256' about an axis X that is perpendicular to the axis Y. Optionally, the two output shafts rotate in the same direction. The top mounting plate 254' can attach to a top surface 211A' of the base plate 210'. The gearbox 256' can optionally mount to one or both of a bottom surface 211B' of the base plate 210' and the top mounting plate 254'. The rear wheel assemblies 220A', 220B' can attach to the bottom surface 211B' of the base plate 210' via mounting plates 222A', 222B'. The front wheel assemblies 230A', 230B' can attach to the bottom surface 211B' of the base plate 210' via brackets 232A', 233A', 232B', 233B'.
(25) FIG. 6 shows a bottom view of the braking assembly 200', FIG. 7A shows the components of the braking assembly 200' with the baseplate 210' removed, and FIG. 8 shows a partially assembled view of the braking assembly 200'. With reference to FIGS. 6-8, the output shafts of the gearbox 256' engage with rotary detents 288A', 288B' on both sides of the gearbox 256', which selectively engage spring plungers 289A', 289B' as they rotate, as discussed further below. The rotary detents 288A', 288B' couple with shaft couplings 286A', 286B', which in turn couple with shaft portions 285A', 285B' that extend to shaft ends 280A', 280B'. The shaft ends 280A', 280B' can optionally extend into housings 224A', 224B' of the rear wheel assemblies 220A', 220B'. The shaft ends 280A', 280B' optionally have a portion (e.g., a keyed portion) that engages a brake mechanism in the rear wheel assemblies 220A', 220B'. Rotation of the pedal assembly 260' in one direction causes the shaft portions 285A', 285B', and therefore the shaft ends 280A', 280B', to rotate in a first direction (via the gearbox 256'), causing the brake mechanism in the rear wheel assemblies 220A', 220B' to engage at least a portion of the wheels 221A', 221B' of the rear wheel assemblies 220A', 220B'. In another implementation, the rotary detents 288A', 288B' and spring plungers 289A', 289B' are excluded.
(26) With continued reference to FIGS. 6-8, the shaft portions 285A', 285B' can extend through brackets 283A', 283B' that are attached to the bottom surface 211B' of the baseplate 210'. Optionally, the brackets 283A', 283B' can each include a bushing through which the shaft portions 285A', 285B' extend. The brackets 283A', 283B' can support the shaft portions 285A', 285B' on the braking assembly 200'. A pair of levers 284A', 284B' can be mounted (e.g., via press-fit connection, rigidly mounted) on the shaft portions 285A', 285B', respectively. The levers 284A', 284B' rotate with the shaft portions 285A', 285B'.
(27) One or more elongate actuators operatively interconnect the pair of rear wheel assemblies 220' and the pair of front wheel assemblies 230'. In FIGS. 6-8, a pair of elongate actuators 270A', 270B' operatively interconnect the pair of rear wheel assemblies 220' and the pair of front wheel assemblies 230'. The pair of actuators 270A', 270B' can have linkages 272A', 272B' at one end thereof (e.g., removably attached to that end) that couple to the levers 284A', 284B'. The pair of actuators 270A', 270B' can have linkages 274A', 274B' at an opposite end thereof (e.g., removably attached to the opposite end) that couple to a disc brake assembly 237A', 237B' of the front wheel assemblies 230A', 230B', as further described below.
(28) In one implementation, the pair of actuators 270A', 270B' are a pair of gas springs. In one implementation, the elongate actuators 270A', 270B' are gas springs for medical applications provided by Industrial Gas Springs, Inc. The elongate actuators 270A', 270B' optionally include shaft portions 271A', 271B' that attach to the levers 284A', 284B' via the linkages 272A', 272B', cylinder portions 278A', 278B' attached to the shaft portions 271A', 271B', and piston rod portions 276A', 276B' that travel within the cylinder portions 278A', 278B' and that couple to the front wheel assemblies 230A', 230B' via the linkages 274A', 274B'. In other implementations, the one or more actuators are a pair of rods (e.g., substantially rigid rods) that extend (continuously) from the linkages 272A', 272B' to the linkages 274A', 274B'. In other implementations, the one or more actuators are a pair of compression springs that extend from the linkages 272A', 272B' to the linkages 274A', 274B'. In other implementations, the one or more actuators are a pair of extension springs that extend from the linkages 272A', 272B' to the linkages 274A', 274B'.
(29) The braking assembly 200' can include one or more support rails attached to the baseplate 210'. As shown in FIGS. 6 and 7A, the braking assembly 200' can optionally have one or more (e.g., a pair of) longitudinal rails 216A', 216B' attached to the bottom surface 211B' of the baseplate 210'. One or more (e.g., a pair of) transverse rails 212', 214' can attach to one or both of the longitudinal rail(s) 216A', 216B' and the bottom surface 211B' of the baseplate 210'. The transverse rail(s) 212', 214' optionally have slots 212A', 212B', 214A', 214B' through which at least a portion of the elongate actuator(s) 270A', 270B' extend. The slots 212A', 212B', 214A', 214B' can aid in guiding the movement of the elongate actuator(s) 270A', 270B' as further discussed below.
(30) FIG. 9 shows an exploded view of the left front wheel assembly 230A'. The right front wheel assembly 230B' can have the same components and arrangements shown in FIG. 9 and described below, except that “B” would replace “A” in the numerical identifiers.
(31) The front wheel assembly 230A', 230B' includes a wheel 231A', 231B' mounted between the outer bracket 232A', 232B' and inner bracket 233A', 233B' with an axle 234A', 234B' that extends through the wheel 231A', 231B'. The axle 234A', 234B' can have a recess or slot that couples with a key member 235A', 235B'. The key member 235A', 235B' can engage a key slot 241A', 241B' in a central opening 240A', 240B' of the wheel 231A', 231B' so that the wheel 231A', 231B' and axle 234A', 234B' rotate as one unit (e.g., the wheel 231A', 231B' does not rotate independently of the axle 234A', 234B'). An end of the axle 234A', 234B' can fixedly couple with a rotor 236A', 236B' so that the axle and rotor rotate as one unit (e.g., the axle 234A', 234B' does not rotate independently of the rotor 236A', 236B').
(32) A disc brake assembly 237A', 237B' can be disposed about at least a portion of the rotor 236A', 236B' and selectively actuatable to engage the rotor 236A', 236B' to brake (e.g., inhibit or prevent the rotation of) the rotor 236A', 236B'. In one implementation, the disc brake assembly 237A', 237B' can apply a force (e.g., a clamp force) of up to about 900 lbf on the rotor 236A', 236B'. The disc brake assembly 237A', 237B' can have a lever 238A', 238B' that can couple with the linkage 274A', 274B' of the elongate actuator 270A', 270B'. The disc brake assembly 237A', 237B' can optionally couple to the bracket 233A', 233B' (e.g., via a spacer 239A', 239B' and fasteners 246A', 246B' and 247A', 247B', which can be screws).
(33) Optionally, a locking ring 242A', 242B' can be coupled to an end of the axle 234A', 234B' to inhibit (e.g., prevent) the axle 234A', 234B' from sliding out of the wheel 231A', 231B'. Optionally, a set screw 248A', 248B' can be inserted in an opening 249A', 249B' of the wheel 231A', 231B' to aid in retaining the axle 234A', 234B' fixedly coupled to the wheel 231A', 231B'. Optionally, bearings 243A', 243B' and 244A', 244B' can be coupled to the axle 234A', 234B' and disposed in the brackets 232A', 232B' and 233A', 233B' to facilitate rotation of the axle 234A', 234B' within the brackets 232A', 232B' and 233A', 233B'. In one implementation, the front wheels 231A', 231B' are similar to ones supplied by TREW Industrial Wheels, Inc. In one implementation, the disc brake assemblies 237A', 237B' can be mechanical brakes, such as model 1100m provided by Hayes Performance Systems.
(34) In operation, the pedal assembly 260' can rotate about axis Y by at least α degrees. In one implementation, α is between 0 degrees and 90 degrees, such as about 60 degrees. Optionally, rotation of the pedal assembly 260' in one direction (e.g., a counterclockwise direction by pressing on pedal 262') causes the wheel assemblies 220A', 220B', 230A', 230B' to lock to inhibit (e.g., prevent) motion of the baseplate 210', and rotation of the pedal assembly 260' in an opposite direction (e.g., a clockwise direction by pressing on pedal 264') causes the wheel assemblies 220A', 220B', 230A', 230B' to unlock and allow motion of the baseplate 210'. In another implementation, rotation of the pedal assembly 260' away from a neutral or level position (e.g., rotation clockwise or counterclockwise away from a neutral position) where the pedals 262', 264' are generally at the same orientation relative to the baseplate 210' causes the wheel assemblies 220A', 220B', 230A', 230B' to lock to inhibit (e.g., prevent) motion of the baseplate 210', and rotation of the pedal assembly 260' to the neutral or level position causes the wheel assemblies 220A', 220B', 230A', 230B' to unlock and allow motion of the baseplate 210'.
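A small behavioral sketch of the two pedal conventions just described; the sign convention and deadband threshold are assumptions for illustration only, not values from the patent:

def brake_state_two_pedal(angle_deg):
    # Convention 1: counterclockwise (negative angle, pedal 262') locks;
    # clockwise (positive angle, pedal 264') unlocks.
    return "locked" if angle_deg < 0 else "unlocked"

def brake_state_neutral(angle_deg, deadband_deg=5.0):
    # Convention 2: any rotation away from the neutral/level position locks;
    # returning to neutral unlocks.
    return "locked" if abs(angle_deg) > deadband_deg else "unlocked"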
(35) In operation, when the pedal assembly 260' is rotated to lock the rear and front pairs of wheel assemblies 220', 230', the gearbox 256' translates rotation of the pedal assembly 260' into rotation (e.g., simultaneous rotation) of the shaft portions 285A', 285B' and shaft ends 280A', 280B'. Rotation of the shaft ends 280A', 280B' causes the brake mechanisms in the rear wheel assemblies 220A', 220B' to engage to inhibit (e.g., prevent) rotation of the rear wheel assemblies 220A', 220B'. Rotation of the shaft portions 285A', 285B' causes rotation (e.g., simultaneous rotation) of the levers 284A', 284B', which push (via the linkages 272A', 272B') the elongate actuators 270A', 270B' axially toward a front end F of the baseplate 210'. This results in the elongate actuators 270A', 270B' axially moving toward the front end F of the baseplate 210' so that the linkages 274A', 274B' push the levers 238A', 238B', causing the disc brake assemblies 237A', 237B' to engage the rotors 236A', 236B' to inhibit (e.g., prevent) rotation of the rotors 236A', 236B'. As the front wheels 231A', 231B', axles 234A', 234B' and rotors 236A', 236B' are coupled so that they rotate as one (e.g., they do not rotate independently of each other), the braking of the rotors 236A', 236B' with the disc brake assemblies 237A', 237B' also causes (e.g., simultaneously causes) the wheels 231A', 231B' to lock. Advantageously, the rear wheel assemblies 220A', 220B' and the front wheel assemblies 230A', 230B' lock substantially simultaneously upon rotation of the pedal assembly 260' to the locking orientation. Therefore, all the wheel assemblies 220A', 220B', 230A', 230B' of the braking assembly 200' (and therefore of a mobile cart, such as the patient cart 104) can be locked with a single actuation of the pedal assembly 260', making the locking and unlocking of the mobile cart (e.g., patient cart 104) simple and efficient.
(36) As the shaft portions 285A', 285B' rotate to cause the wheel assemblies 220A', 220B', 230A', 230B' to lock, the rotary detents 288A', 288B' are rotated so that they engage the spring plungers 289A', 289B'. The spring plungers 289A', 289B' can resiliently hold the position of the rotary detents 288A', 288B' by exerting a force on the rotary detents 288A', 288B', and therefore on the shaft portions 285A', 285B', to counteract any force (e.g., due to rotational inertia in the gearbox 256') that might inadvertently backdrive the gearbox 256' and/or pedal assembly 260' and cause the wheel assemblies 220A', 220B', 230A', 230B' to unlock once the operator has actuated the pedal assembly 260' to lock them.
(37) In another implementation, the pair of elongate actuators 270A', 270B' can instead be replaced by a single elongate actuator. Additionally, the axles 234A', 234B' of the front wheel assemblies 230A', 230B' can instead be replaced by a single axle that extends through and is coupled (e.g., via a key and key slot arrangement as shown in FIG. 9) to both wheels 231A', 231B'. The two disc brake assemblies 237A', 237B' can instead be replaced by a single disc brake assembly mounted to the bottom surface 211B' of the baseplate 210'. The two rotors 236A', 236B' can instead be replaced by a single rotor that is fixedly coupled to the axle (e.g., via a key and key-slot arrangement). The single elongate actuator would extend between and couple to a lever attached to the shaft portions 285A', 285B' and to a lever of the disc brake assembly. The single elongate actuator would operate in the same manner described above for the elongate actuators 270A', 270B' to lock the front wheel assemblies 230A', 230B' substantially simultaneously with the locking of the rear wheel assemblies 220A', 220B'.
(38) Advantageously, the braking assembly 200, 200' allows all wheel assemblies 220A', 220B', 230A', 230B' to be locked and unlocked substantially simultaneously via actuation of the pedal assembly 260' (e.g., a single pedal assembly) by the operator. The braking of the wheel assemblies 220A', 220B', 230A', 230B' advantageously inhibits (e.g., prevents) motion of the mobile cart in which it is incorporated, such as the patient cart 104, along a surface having an incline of up to about 10 degrees.
(39) Additional Embodiments
(40) In embodiments of the present invention, a brake assembly for a robotic surgery cart may be in accordance with any of the following clauses:
(41) Clause 1. A brake assembly for a robotic surgery cart, comprising: a pair of rear wheel assemblies, each having a brake mechanism actuatable to selectively brake a wheel of each of the rear wheel assemblies; a pair of front wheel assemblies, each having a disc brake assembly actuatable to selectively brake a rotor operatively coupled to a wheel of each of the front wheel assemblies; a gearbox interposed between the pair of rear wheel assemblies; a pair of rotatable shafts extending along a first axis and interconnecting the gearbox with the pair of rear wheel assemblies; a pair of elongate actuators interconnecting the pair of rotatable shafts and the disc brake assemblies of the front wheel assemblies; and a pedal lever rotatably coupled to the gearbox and configured to rotate about a second axis that is generally perpendicular to the first axis, the pedal lever having a pair of pedals disposed on opposite sides of the second axis, allowing the pedal lever to rotate clockwise by pressing on one of the pair of pedals and to rotate counterclockwise by pressing on the other of the pair of pedals, wherein rotation of the pedal lever about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of the pair of elongate actuators to actuate the disc brake assemblies to substantially lock the rotors of the front wheel assemblies, such that the wheels of the front and rear wheel assemblies brake substantially simultaneously.
(42) Clause 2. The brake assembly of clause 1, wherein the pair of rear wheel assemblies are casters.
(43) Clause 3. The brake assembly of any preceding clause, wherein the pair of elongate actuators are gas springs.
(44) Clause 4. The brake assembly of any preceding clause, wherein for each of the front wheel assemblies, the rotor is coupled to the wheel by an axle that is rotatably fixed relative to the wheel and the rotor.
(45) Clause 5. The brake assembly of any preceding clause, wherein each of the pair of elongate actuators couples to one of the pair of rotatable shafts via a linkage movably coupled to a lever that is rotatably fixed to the rotatable shaft.
(46) Clause 6. The brake assembly of any preceding clause, wherein each of the pair of elongate actuators couples to one of the disc brake assemblies via a linkage coupled to a movable lever of the disc brake assembly, wherein the lever is movable by the elongate actuator between a first position where the disc brake assembly does not inhibit rotation of the rotor and a second position where the disc brake assembly applies a braking force on the rotor.
(47) Clause 7. The brake assembly of any preceding clause, further comprising a pair of rotary detents disposed on opposite sides of the gearbox, the rotary detents configured to engage a spring assembly to exert a force on the rotatable shafts to inhibit their rotation unless the pedal lever is actuated.
(48) Clause 8. The brake assembly of any preceding clause, wherein the pedal lever is configured to rotate over a range of approximately 60 degrees.
(49) Clause 9. The brake assembly of any preceding clause, further comprising a baseplate configured to support the pair of rear wheel assemblies, the pair of front wheel assemblies, and the gearbox.
(50) Clause 10. A brake assembly for a robotic surgery cart, comprising: a pair of rear wheel assemblies, each having a brake mechanism actuatable to selectively brake a wheel of the rear wheel assembly; at least one front wheel assembly; a rotor operatively coupled to a wheel of the at least one front wheel assembly; a disc brake assembly actuatable to selectively brake the rotor; a gearbox interposed between the pair of rear wheels; a pair of rotatable shafts extending along a first axis and interconnecting the gearbox with the pair of rear wheels; at least one elongate actuator interconnecting at least one of the pair of rotatable shafts and the disc brake assembly; and a pedal lever rotatably coupled to the gearbox and configured to rotate about a second axis that is generally perpendicular to the first axis, the pedal lever configured to rotate in a first direction by pressing on one portion of the pedal lever and to rotate in a second direction by pressing on another portion of the pedal lever, wherein rotation of the pedal lever about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of the at least one elongate actuator to actuate the disc brake assembly to substantially lock the rotor, such that the rear wheel assemblies and the at least one front wheel assembly brake substantially simultaneously.
(51) Clause 11. The brake assembly of clause 10, wherein the pair of rear wheel assemblies are casters.
(52) Clause 12. The brake assembly of any of clauses 10-11, wherein the at least one elongate actuator is a pair of elongate actuators that extend between and interconnect the pair of rotatable shafts and a pair of disc brake assemblies.
(53) Clause 13. The brake assembly of any of clauses 10-12, wherein the at least one elongate actuator is a gas spring.
(54) Clause 14. The brake assembly of any of clauses 10-13, wherein the rotor is coupled to the wheel by an axle that is rotatably fixed relative to the wheel and the rotor.
(55) Clause 15. The brake assembly of any of clauses 10-14, wherein the at least one elongate actuator couples to one of the pair of rotatable shafts via a linkage movably coupled to a lever that is rotatably fixed to the rotatable shaft.
(56) Clause 16. The brake assembly of any of clauses 10-15, wherein the at least one elongate actuator couples to the disc brake assembly via a linkage coupled to a movable lever of the disc brake assembly, wherein the lever is movable by the elongate actuator between a first position where the disc brake assembly does not inhibit rotation of the rotor and a second position where the disc brake assembly applies a braking force on the rotor.
(57) Clause 17. The brake assembly of any of clauses 10-16, further comprising a pair of rotary detents disposed on opposite sides of the gearbox, the rotary detents configured to engage a spring assembly to exert a force on the rotatable shafts to inhibit their rotation unless the pedal lever is actuated.
(58) Clause 18. The brake assembly of any of clauses 10-17, wherein the pedal lever is configured to rotate over a range of approximately 60 degrees.
(59) Other Variations
(60) Those skilled in the art will appreciate that, in some embodiments, additional components and/or steps can be utilized, and disclosed components and/or steps can be combined or omitted. For example, although some embodiments are described in connection with a robotic surgery system, the disclosure is not so limited. Systems, devices, and methods described herein can be applicable to medical procedures in general, among other uses. As another example, certain components can be illustrated and/or described as being circular or cylindrical. In some implementations, the components can additionally or alternatively include non-circular portions, such as portions having straight lines. As yet another example, any of the actuators described herein can include one or more motors, such as electrical motors. As yet another example, in addition to or instead of controlling tilt and/or pan of a camera, roll (or spin) can be controlled. For example, one or more actuators can be provided for controlling the spin.
(61) The foregoing description details certain embodiments of the systems, devices, and methods disclosed herein. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the systems, devices, and methods can be practiced in many ways. The use of particular terminology when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being redefined herein to be restricted to including any specific characteristics of the features or aspects of the technology with which that terminology is associated.
(62) It will be appreciated by those skilled in the art that various modifications and changes can be made without departing from the scope of the described technology. Such modifications and changes are intended to fall within the scope of the embodiments. It will also be appreciated by those of skill in the art that parts included in one embodiment are interchangeable with other embodiments; one or more parts from a depicted embodiment can be included with other depicted embodiments in any combination. For example, any of the various components described herein and/or depicted in the figures can be combined, interchanged, or excluded from other embodiments.
(63) With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations can be expressly set forth herein for sake of clarity.
(64) Directional terms used herein (for example, top, bottom, side, up, down, inward, outward, etc.) are generally used with reference to the orientation or perspective shown in the figures and are not intended to be limiting. For example, positioning described herein as “above” can refer to positioning below or to one side. Thus, features described as being “above” may instead be positioned below, to one side, or the like.
(65) It will be understood by those within the art that, in general, terms used herein are generally intended as “open” terms (for example, the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims can contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (for example, “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (for example, the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations).
(66) The term “comprising” as used herein is synonymous with “including,” “containing,” or “characterized by,” and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps.
(67) Conditional language, such as “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.
(68) Language of degree used herein, such as the terms “approximately,” “about,” “generally,” and “substantially,” represents a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function and/or achieves a desired result. For example, the terms “approximately,” “about,” “generally,” and “substantially” may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and/or within less than 0.01% of the stated amount.
(69) It will be further understood by those within the art that any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, can be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” Further, the term “each,” as used herein, in addition to having its ordinary meaning, can mean any subset of a set of elements to which the term “each” is applied.
(70) Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require the presence of at least one of X, at least one of Y, and at least one of Z.
(71) The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality may be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the invention.
(72) The various illustrative blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
(73) The steps of a method or algorithm and functions described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a tangible, non-transitory computer-readable medium. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
(74) The above description discloses embodiments of systems, apparatuses, devices, methods, and materials of the present disclosure. This disclosure is susceptible to modifications in the components, parts, elements, steps, and materials, as well as alterations in the fabrication methods and equipment. Such modifications will become apparent to those skilled in the art from a consideration of this disclosure or practice of the disclosure. Consequently, it is not intended that the disclosure be limited to the specific embodiments disclosed herein, but that it cover all modifications and alternatives coming within the scope and spirit of the subject matter embodied in the following claims.
Claims
1. A brake assembly for a robotic surgery cart, comprising: a pair of rear wheel assemblies, each having a brake mechanism actuatable to selectively brake a wheel of each of the rear wheel assemblies; a pair of front wheel assemblies, each having a brake assembly actuatable to selectively brake a wheel of each of the front wheel assemblies; a gearbox interposed between the pair of rear wheel assemblies; a pair of rotatable shafts extending along a first axis and interconnecting the gearbox with the pair of rear wheel assemblies; a pair of actuators interconnecting the pair of rotatable shafts and the brake assemblies of the front wheel assemblies; and a pedal lever rotatably coupled to the gearbox and configured to rotate about a second axis different than the first axis, the pedal lever configured to rotate in a first direction about the second axis and to rotate in an opposite second direction about the second axis, wherein rotation of the pedal lever in one of the first and second directions about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of at least a portion of the pair of actuators to actuate the brake assemblies to substantially lock the front wheel assemblies, such that the wheels of the front and rear wheel assemblies brake substantially simultaneously.
2. The brake assembly of claim 1, wherein the pair of rear wheel assemblies are casters.
3. The brake assembly of claim 1, wherein the pair of actuators are gas springs.
4. The brake assembly of claim 1, wherein rotation of the pedal lever in the other of the first and second directions about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially unlock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of at least a portion of the pair of actuators to deactivate the brake assemblies to substantially unlock the front wheel assemblies.
5. The brake assembly of claim 1, wherein rotation of the pedal lever away from a neutral position in either of the first and second directions about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of at least a portion of the pair of actuators to actuate the brake assemblies to substantially lock the front wheel assemblies, such that the wheels of the front and rear wheel assemblies brake substantially simultaneously.
6. The brake assembly of claim 5, wherein positioning of the pedal lever in the neutral position about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially unlock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of at least a portion of the pair of actuators to deactivate the brake assemblies to substantially unlock the front wheel assemblies.
7. The brake assembly of claim 1, wherein each of the pair of actuators couples to one of the pair of rotatable shafts via a linkage movably coupled to a lever that is rotatably fixed to the rotatable shaft.
8. The brake assembly of claim 1, wherein each of the pair of actuators couples to one of the brake assemblies via a linkage coupled to a movable lever of the brake assembly, wherein the lever is movable by the actuator between a first position where the brake assembly does not inhibit rotation of the front wheel and a second position where the brake assembly applies a braking force on the front wheel.
9. The brake assembly of claim 1, further comprising a pair of rotary detents disposed on opposite sides of the gearbox, the rotary detents configured to engage a spring assembly to exert a force on the rotatable shafts to inhibit their rotation unless the pedal lever is actuated.
10. The brake assembly of claim 1, wherein the pedal lever is configured to rotate over a range of approximately 60 degrees.
11. The brake assembly of claim 1, further comprising a baseplate configured to support the pair of rear wheel assemblies, the pair of front wheel assemblies, and the gearbox.
12. A brake assembly for a robotic surgery cart, comprising: a pair of rear wheel assemblies, each having a brake mechanism actuatable to selectively brake a wheel of each of the rear wheel assemblies; at least one front wheel assembly; a brake assembly actuatable to selectively brake the at least one front wheel assembly; a gearbox interposed between the pair of rear wheels; a pair of rotatable shafts extending along a first axis and interconnecting the gearbox with the pair of rear wheels; at least one actuator interconnecting at least one of the pair of rotatable shafts and the brake assembly; and a pedal lever rotatably coupled to the gearbox and configured to rotate about a second axis different than the first axis, the pedal lever configured to rotate in a first direction about the second axis and to rotate in a second direction opposite to the first direction about the second axis, wherein rotation of the pedal lever in one of the first direction and the second direction about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of the at least one actuator to actuate the brake assembly to substantially lock the front wheel assembly, such that the rear wheel assemblies and the at least one front wheel assembly brake substantially simultaneously.
13. The brake assembly of claim 12, wherein the pair of rear wheel assemblies are casters.
14. The brake assembly of claim 12, wherein the at least one actuator is a pair of elongate actuators that extend between and interconnect the pair of rotatable shafts and a pair of brake assemblies.
15. The brake assembly of claim 12, wherein the at least one actuator is a gas spring.
16. The brake assembly of claim 12, wherein rotation of the pedal lever in the other of the first and second directions about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially unlock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of at least a portion of the pair of actuators to deactivate the brake assemblies to substantially unlock the front wheel assemblies.
17. The brake assembly of claim 12, wherein rotation of the pedal lever away from a neutral position in either of the first and second directions about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially lock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of at least a portion of the pair of actuators to actuate the brake assemblies to substantially lock the front wheel assemblies, such that the wheels of the front and rear wheel assemblies brake substantially simultaneously.
18. The brake assembly of claim 17, wherein positioning of the pedal lever in the neutral position about the second axis causes the gearbox to rotate the pair of rotatable shafts about the first axis to substantially unlock the pair of rear wheel assemblies, and substantially simultaneously causes a translation of at least a portion of the pair of actuators to deactivate the brake assemblies to substantially unlock the front wheel assemblies.
19. The brake assembly of claim 12, wherein the at least one actuator couples to one of the pair of rotatable shafts via a linkage movably coupled to a lever that is rotatably fixed to the rotatable shaft.
20. The brake assembly of claim 12, wherein the at least one actuator couples to the brake assembly via a linkage coupled to a movable lever of the brake assembly, wherein the lever is movable by the actuator between a first position where the brake assembly does not inhibit rotation of the front wheel and a second position where the brake assembly applies a braking force on the front wheel.
21. The brake assembly of claim 12, further comprising a pair of rotary detents disposed on opposite sides of the gearbox, the rotary detents configured to engage a spring assembly to exert a force on the rotatable shafts to inhibit their rotation unless the pedal lever is actuated.
22. The brake assembly of claim 12, wherein the pedal lever is configured to rotate over a range of approximately 60 degrees.
BOOM
Service Life Management For An Instrument Of A Robotic Surgery System
DOCUMENT ID: US 11529207 B2
DATE PUBLISHED: 2022-12-20
INVENTOR INFORMATION
NAME | CITY | STATE | ZIP CODE | COUNTRY
Pflaumer; Hans Christian | Raleigh | NC | N/A | US
Laakso; Aki Hannu Einari | Raleigh | NC | N/A | US
Genova; Perry A. | Chapel Hill | NC | N/A | US
APPLICANT INFORMATION
NAME | CITY | STATE | ZIP CODE | COUNTRY | AUTHORITY | TYPE
Titan Medical Inc. | Toronto | N/A | N/A | CA | N/A | assignee
ASSIGNEE INFORMATION
NAME | CITY | STATE | ZIP CODE | COUNTRY | TYPE CODE
Titan Medical Inc. | Toronto | N/A | N/A | CA | 03
APPLICATION NO: 17/152033
DATE FILED: 2021-01-19
US CLASS CURRENT: 1/1
CPC CURRENT
TYPE | CPC | DATE
CPCI | A61B 34/74 | 2016-02-01
CPCI | A61B 34/76 | 2016-02-01
Abstract
A robotic surgery system is disclosed that can include an instrument including an operational tool coupled to a positioner and an input device configured to generate input signals in response to manipulation by an operator representing a desired spatial positioning of the tool within a tool workspace including extents corresponding to physical movement limitations for the positioner. A processor can be configured to receive the input signals and process the signals to determine the desired spatial positioning. The processor can be configured to initiate a movement management function in response to a determination that the desired spatial positioning would result in a movement of the positioner associated with a potential service life reduction for the instrument. The processor can be configured to generate drive signals for movement of the positioner in response to a determination that the desired spatial positioning is not associated with a potential reduction in service life.
Background/Summary
BACKGROUND
1. Field
(1) This disclosure relates generally to a surgical instrument apparatus for performing a surgical procedure within a body cavity of a patient.
2. Description of Related Art
(2) Surgical instruments used in laparoscopic and/or robotic surgery generally have a service life that is pre-determined based on testing or estimated based on material and structural properties of the instrument. The service life may be expressed as a total number of uses or a total usage time. Alternatively, the service life may be based on actual usage parameters such as the number of movements or discrete operations, for example. Use of the instrument beyond the pre-determined service life is considered to be associated with decreased performance and/or increased risk of failure of the instrument.
SUMMARY
(3) In accordance with one disclosed aspect there is provided a robotic surgery system. The system may include an input device configured to generate input signals in response to manipulation by an operator, the input signals representing a desired spatial positioning of a tool of an instrument within a tool workspace, the tool workspace including extents corresponding to physical movement limitations associated with a positioner of the instrument to which the tool is coupled. The system may include a processor configured to receive the input signals from the input device and process the input signals to determine the desired spatial positioning of the tool within the tool workspace. The processor may be configured to, in response to a determination that the desired spatial positioning would result in a movement of the positioner associated with a potential service life reduction for the instrument, initiate a movement management function. The processor may be configured to, in response to a determination that the desired spatial positioning would not result in the movement of the positioner associated with the potential service life reduction for the instrument, generate drive signals for movement of the positioner to cause the tool to be positioned at a position corresponding to the desired spatial positioning in the tool workspace.
(4) The processor may be configured to make the determination that the desired spatial positioning would result in the movement of the positioner associated with the potential service life reduction by determining that the desired spatial positioning associated with the input signals lies outside a pre-determined safe region of the tool workspace.
(5) The processor may be configured to initiate the movement management function by temporarily permitting the operator to extend the pre-determined safe region to permit the tool to move outside the pre-determined safe region.
(6) The input device may be configured to deliver a haptic feedback to an operator of the input device, and the processor may be configured to generate an alert by causing the input device to deliver the haptic feedback.
(7) The processor may be configured to initiate the movement management function by causing an alert to be generated to indicate to the operator that the desired movement is associated with the potential service life reduction, and generating drive signals to inhibit movement of the positioner to cause the tool to remain positioned at a current position in the tool workspace.
(8) The processor may be configured to initiate the movement management function by causing an alert to be generated to indicate to the operator that the desired spatial positioning is associated with the potential service life reduction, and in response to receiving an override input from the operator, generate drive signals for movement of the positioner to cause the tool to be positioned at the position in the tool workspace, and update a service life parameter associated with the instrument based on an expected reduction in service life caused by the movement.
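To make the branch in paragraphs (3) through (8) concrete, here is a minimal Python sketch of the movement management flow. The patent publishes no code, so every name here (in_safe_region, handle_desired_pose, the 40 mm safe radius) is a hypothetical stand-in, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Instrument:
    uses_remaining: int  # hypothetical service life parameter (see paragraph (9))

SAFE_RADIUS_MM = 40.0  # illustrative extent of the pre-determined safe region

def in_safe_region(pose):
    x, y, z = pose
    return (x * x + y * y + z * z) ** 0.5 <= SAFE_RADIUS_MM

def handle_desired_pose(pose, instrument, override_granted):
    """Sketch of the movement management branch described in the summary."""
    if in_safe_region(pose):
        return "drive"  # generate drive signals for the positioner as usual
    # the pose is associated with a potential service life reduction
    print("alert: requested pose may reduce instrument service life")
    if override_granted:
        instrument.uses_remaining -= 1  # charge the expected life reduction
        return "drive-with-override"
    return "hold"  # inhibit movement; the tool stays at its current position

inst = Instrument(uses_remaining=10)
print(handle_desired_pose((10.0, 5.0, 0.0), inst, False))  # drive
print(handle_desired_pose((50.0, 0.0, 0.0), inst, True))   # drive-with-override
print(inst.uses_remaining)                                 # 9
```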
(9) The service life parameter may include a pre-determined number of uses for the instrument, the number of uses being decremented each time the instrument is used in a surgical procedure, and the processor may be configured to decrement the number of uses based on the expected reduction in service life caused by the movement.
(10) The service life parameter may include a pre-determined usage time and the processor may be configured to decrement the usage time based on the expected reduction in service life caused by the movement.
(11) The service life parameter may include a pre-determined number of movements of the positioner that are associated with the potential service life reduction, and the processor may be configured to decrement the number of movements each time the override input is received from the operator.
(12) The processor may be configured to discontinue generating drive signals for movements of the positioner that are associated with the potential service life reduction responsive to expiry of an override period.
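Paragraphs (9) through (12) name three service life parameters (a use count, a usage time, and a budget of life-reducing movements) plus a time-boxed override. One way to sketch that bookkeeping in Python, with field names that are assumptions rather than the patent's:

```python
import time

class ServiceLife:
    """Hypothetical bookkeeping for the parameters in paragraphs (9)-(12)."""
    def __init__(self, uses, usage_seconds, risky_moves, override_window_s):
        self.uses = uses                    # pre-determined number of uses
        self.usage_seconds = usage_seconds  # pre-determined usage time
        self.risky_moves = risky_moves      # movements associated with life reduction
        self.override_window_s = override_window_s
        self.override_expiry = None

    def grant_override(self):
        self.override_expiry = time.monotonic() + self.override_window_s

    def override_active(self):
        # per paragraph (12), risky drive signals stop once the override expires
        return (self.override_expiry is not None
                and time.monotonic() < self.override_expiry)

    def charge_risky_move(self, use_cost=1, time_cost_s=0.0):
        self.uses -= use_cost
        self.usage_seconds -= time_cost_s
        self.risky_moves -= 1  # decremented on each override, per paragraph (11)

life = ServiceLife(uses=10, usage_seconds=3600.0, risky_moves=5, override_window_s=30.0)
life.grant_override()
if life.override_active():
    life.charge_risky_move()
```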
(13) The system may include a display configured to display an image of the tool workspace to the operator and the processor may be configured to cause the alert to be generated by causing displaying of an alert icon on the display.
(14) The processor may be configured to cause displaying an interactive alert icon on the display, the interactive alert icon being configured to generate the override input when activated by the operator.
(15) The input device may be configured to deliver a haptic feedback to an operator of the input device and the processor may be configured to cause the input device to deliver the haptic feedback.
(16) The service life parameter may be stored in a memory associated with the instrument, and the processor may be configured to update the service life parameter by writing a new service life parameter to the memory.
(17) The memory may include a memory located on the instrument, and the system may include an instrument interface configured to place the processor in data communication with the memory responsive to the instrument being loaded into the system.
(18) Access for reading and writing to the memory may be protected by a security function to prevent unauthorized changes to the service life parameter.
(19) The memory may include a memory of the processor and the service life parameter may include an identifier that associates the service life parameter with the instrument.
(20) The positioner may include a plurality of articulated linkages, and a plurality of control wires that are pushed or pulled to cause movement of the articulated linkages to position the tool within the tool workspace, and the determination that the desired spatial positioning would result in the movement of the positioner associated with the potential service life reduction may be based on an estimated strain in the control wires associated with the movement.
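Paragraph (20) bases the service-life determination on an estimated strain in the control wires. The patent gives no strain model, so the sketch below assumes a toy linear model, purely for illustration; both constants are invented.

```python
# Hypothetical linear model: wire strain grows with the commanded bend angle.
WIRE_STRAIN_LIMIT = 0.02    # assumed allowable strain (2%)
STRAIN_PER_RADIAN = 0.015   # assumed sensitivity of strain to positioner bending

def movement_reduces_service_life(bend_angle_rad: float) -> bool:
    """True when the estimated control wire strain exceeds the assumed limit."""
    estimated_strain = STRAIN_PER_RADIAN * abs(bend_angle_rad)
    return estimated_strain > WIRE_STRAIN_LIMIT

print(movement_reduces_service_life(0.8))  # False: within the assumed limit
print(movement_reduces_service_life(1.6))  # True: triggers movement management
```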
(21) The tool may include an end effector positioned at a distal end of the tool and the end effector may include a pair of opposing elements, the opposing elements being actuated to close by an end effector actuation signal received from the input device, and the processor may be configured to determine an end effector drive signal for causing the opposing elements to close with a desired force in proportion to the end effector actuation signal, and in response to a determination that the desired force would result in the potential service life reduction for the instrument, initiate an actuation management function, and in response to a determination that the desired force would not result in the potential service life reduction for the instrument, generate the end effector drive signal to cause the end effector to close with the desired force.
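Paragraph (21) applies the same idea to grip force: the drive signal closes the jaws with a force proportional to the actuation signal unless that force would shorten service life. A hedged sketch follows; the gain and limit are invented numbers, not values from the patent.

```python
MAX_SAFE_GRIP_N = 8.0  # assumed grip force above which service life suffers

def end_effector_drive(actuation_signal: float, gain_n: float = 10.0):
    """Return a drive force, or None when the actuation management
    function should run instead (per paragraph (21))."""
    desired_force = gain_n * actuation_signal  # force proportional to the signal
    if desired_force > MAX_SAFE_GRIP_N:
        return None  # initiate the actuation management function
    return desired_force  # generate the end effector drive signal

print(end_effector_drive(0.5))  # 5.0 N: within the assumed safe range
print(end_effector_drive(0.9))  # None: 9.0 N would exceed the assumed limit
```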
(22) There is provided a method of operating a robotic surgery system of any of the preceding paragraphs and/or disclosed below.
(23) In accordance with another disclosed aspect there is provided a method for operating a robotic surgery system, the robotic surgery system including a processor and an input device. The method may be implemented by the processor. The method may involve receiving input signals in response to manipulation of the input device by an operator, the input signals representing a desired spatial positioning of a tool of an instrument within a tool workspace, the tool workspace including extents corresponding to physical movement limitations associated with a positioner of the instrument to which the tool is coupled. The method may involve processing the input signals to determine the desired spatial positioning of the tool within the tool workspace. The method may involve, in response to a determination that the desired spatial positioning would result in a movement of the positioner associated with a potential service life reduction for the instrument, initiating a movement management function. The method may involve, in response to a determination that the desired spatial positioning would not result in a movement of the positioner associated with the potential service life reduction, generating drive signals for movement of the positioner to cause the tool to be positioned at a position corresponding to the desired spatial positioning in the tool workspace.
(24) Initiating the movement management function may involve generating an alert to indicate to the operator that the desired spatial positioning is associated with the potential service life reduction, and in response to receiving an override input from the operator, generating drive signals for movement of the positioner to cause the tool to be positioned at the position in the tool workspace, and updating a service life parameter associated with the instrument based on an expected reduction in service life caused by the movement.
(25) Other aspects and features will become apparent to those ordinarily skilled in the art upon review of the following description of specific disclosed embodiments in conjunction with the accompanying figures.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) In drawings which illustrate disclosed embodiments,
(2) FIG. 1 is a perspective view of a robotic surgery system in accordance with one disclosed embodiment;
(3) FIG. 2A is a front perspective view of a drive unit of the system shown in FIG. 1;
(4) FIG. 2B is a rear perspective view of the drive unit of the system shown in FIG. 1;
(5) FIG. 3A is a perspective view of a portion of an insertion tube associated with the drive unit shown in FIGS. 2A and 2B;
(6) FIG. 3B is a perspective view of the insertion tube with a pair of instruments inserted;
(7) FIG. 3C is a perspective view of a portion of the insertion tube with the instruments shown in a deployed state;
(8) FIG. 4 is a block diagram of processor circuit elements of the system shown in FIG. 1;
(9) FIG. 5 is a right hand controller portion of an input device of the system shown in FIG. 1;
(10) FIG. 6 is a detailed perspective view of the right side instrument shown in FIG. 3B;
(11) FIG. 7 is a flowchart of a movement process implemented by the workstation processor circuit shown in FIG. 4;
(12) FIG. 8 is a perspective view of the right side instrument of FIG. 3B shown in a bent pose;
(13) FIG. 9 is a rear perspective view of the right side instrument of FIG. 3B shown in a bent pose along with the left side instrument in a straight pose;
(14) FIG. 10 is a flowchart of a process implemented by the workstation processor circuit of FIG. 4 for making a determination as to whether a desired spatial positioning of the end effector is associated with a service life reduction;
(15) FIG. 11A is a flowchart of a movement management process implemented by the workstation processor circuit shown in FIG. 4;
(16) FIG. 11B is a screenshot of an example alert displayed during the process of FIG. 11A; and
(17) FIG. 12 is a perspective view of an instrument including an alternative positioner.
DETAILED DESCRIPTION
(18) Referring to FIG. 1, a robotic surgery system in accordance with one disclosed embodiment is shown generally at 100. The system 100 includes a workstation 102 and an instrument cart 104. The instrument cart 104 includes a drive unit 106 to which an insertion tube 108 and an instrument 110 are mounted. The workstation 102 includes an input device 112 that receives operator input and produces input signals. The input device 112 may also be capable of generating haptic feedback to the operator. The input device 112 may be implemented using a haptic interface available from Force Dimension, of Switzerland, for example.
(19) In the embodiment shown, the workstation 102 further includes a workstation processor circuit 114 in communication with the input device 112 for receiving the input signals and generating drive signals for controlling the robotic surgery system, which are transmitted to the instrument cart 104 via an interface cable 116. The input device 112 includes right and left hand controllers 122 and 124, which are grasped by the operator's hands and moved to cause the input device 112 to produce the input signals. The workstation 102 also includes a footswitch 126 for generating an enablement signal. The workstation 102 may also include other footswitches 128 that provide an additional input to the system as described below. The workstation 102 also includes a display 120 in communication with the workstation processor circuit 114.
(20) The display 120 may be configured for displaying images of the surgical workspace and portions of the instruments 110 that are within the surgical workspace. In the embodiment shown, the workstation 102 further includes a secondary display 132 for displaying status information related to the system 100. The instrument cart 104 includes an instrument processor circuit 118 that receives the input signals from the workstation processor circuit 114 and produces drive signals operable to drive the instrument 110 during a surgical procedure.
(21) The drive unit 106 is shown in isolation in FIGS. 2A and 2B. Referring to FIG. 2A, the insertion tube 108 includes a drive interface 200 that detachably mounts to a corresponding drive interface 202 on the drive unit 106. The insertion tube 108 includes a camera 204 at a distal end of the insertion tube, which is inserted into a body cavity of a patient to capture body cavity image data representing an interior view of the body cavity for display on the display 120 of the workstation 102. Referring to FIG. 2B, in this embodiment the insertion tube 108 includes a pair of adjacent bores extending through the insertion tube for receiving a right hand side instrument 110a and a left hand side instrument 110b. The instruments 110a and 110b each include a respective operational tool 210 and 212 at a distal end. The operational tools 210 and 212 may be one of a variety of different operational tools, such as a probe, dissector, hook, or cauterizing tool. As an example, the operational tools 210 and 212 may be configured as an end effector having opposing jaws that provide an actuated function such as a scissor for cutting tissue or forceps for gripping tissue. In other embodiments one of the instruments 110a or 110b may include an operational tool 210 or 212 in the form of a distally located camera that provides imaging functions in addition to or in place of the camera 204. One of the instruments 110a or 110b may include an operational tool in the form of an illuminator configured to provide illumination for generation of images by the camera 204.
(22) A portion of the insertion tube 108 is shown in FIG. 3A and includes two adjacently located bores 300 and 302 extending through the insertion tube 108 for receiving the respective surgical instruments 110a and 110b. The insertion tube 108 also includes a third bore 304 for receiving the camera 204. In alternative embodiments, the camera 204 may be fixedly mounted to a distal portion of the insertion tube 108. The camera 204 is configured as a stereoscopic camera having a pair of spaced apart imagers 306 and 308 for producing stereoscopic views representing an interior view of the body cavity. The camera 204 also includes an integrated illuminator 310 for illuminating the body cavity for capturing images. The integrated illuminator 310 may be implemented using an illumination source such as a light emitting diode or an illumination source may be remotely located and may deliver the illumination through an optical fiber running through the insertion tube 108.
(23) Referring to FIG. 3B, the instruments 110a and 110b are shown inserted through the respective bores 300 and 302 of the insertion tube 108 (in FIG. 3B the bore 302 is not visible and the drive unit 106 has been omitted for sake of illustration). The right hand side instrument 110a includes a rigid shaft portion 312 and a positioner portion 314 that extends outwardly from the bore 300. In this embodiment the instrument 110a includes an end effector 316 that acts as the operational tool 210. The positioner 314 may include an articulated tool positioner as described in detail in commonly owned PCT patent publication WO2014/201538 entitled “ARTICULATED TOOL POSITIONER AND SYSTEM EMPLOYING SAME” filed on Dec. 20, 2013 and incorporated herein by reference in its entirety. The positioner described in PCT patent publication WO2014/201538 provides for dexterous movement of the end effector 316 through a plurality of articulated segments.
(24) In this embodiment, the instrument 110a includes an actuator 318 including a plurality of actuator slides 320 disposed in a housing 322. The housing 322 is located at a proximal end of the instrument 110a that couples to a corresponding interface (not shown) on the drive unit 106 for moving the positioner 314 and actuating the end effector 316. The actuator 318 of the instrument 110a may be generally configured as disclosed in commonly owned PCT patent publication WO2016/090459 entitled “ACTUATOR AND DRIVE FOR MANIPULATING A TOOL” filed on Feb. 18, 2015 and incorporated herein by reference in its entirety. The interface of the drive unit 106 may have a track system (not shown) coupled to the actuator 318 for longitudinally advancing and retracting the instrument 110a to cause the rigid shaft portion 312 to move within the bore 300. The longitudinal positioning of the instrument 110a places the end effector 316 at a desired longitudinal offset with respect to the insertion tube 108 for accessing a surgical workspace within the body cavity of the patient.
(25) The instrument 110a also includes a plurality of electrical contact pins 324 disposed on a forward facing portion 326 of the actuator housing 322. The pins 324 are in communication with an instrument usage monitor board 328, which is shown in cutaway view located within the actuator housing 322. The pins 324 are disposed to engage and electrically connect to similar pins (not shown) disposed on the drive unit 106 for placing the monitor board 328 into communication with the instrument processor circuit 118. As an example, the pins 324 may be implemented using sprung pogo connector pins. The instrument 110b is shown in FIG. 3B in side-by-side relation to, and identically configured to, the instrument 110a. In some embodiments, the instrument 110b may have a different operational tool 212 than the instrument 110a.
(26) The camera 204 is mounted on an articulated arm 330 moveable in response to drive forces delivered by the drive interface 202 of the drive unit 106 to the drive interface 200 of the insertion tube 108. Drive forces delivered by the drive unit 106 cause the camera 204 to move from the longitudinally extended insertion state shown in FIGS. 3A and 3B to a deployed state as shown in FIG. 3C.
(27) Drive forces are imparted on the plurality of actuator slides 320 of the actuator 318 by the drive unit 106, which causes the positioner 314 of the instrument 110 to perform dexterous movement to position the end effector 316 for performing various surgical tasks. FIG. 3C also shows the left instrument 110b along with its associated positioner 332 and end effector 334. In the deployed position shown in FIG. 3C, the camera 204 is able to generate images of the body cavity without obstructing movements of the positioners 314 and 332.
(28) A block diagram of the processor circuit elements of the system 100 is shown in FIG. 4. Referring to FIG. 4, the workstation processor circuit 114 includes a microprocessor 400. The workstation processor circuit 114 also includes a workstation memory 402, a USB interface 404, an input/output 406 and a motion control interface 408, all of which are in communication with the microprocessor 400. The input/output 406 includes an input for receiving the enablement signal from the footswitches 126 and 128 and an output for producing display signals for driving the display 120. In this embodiment the input device 112 communicates using a USB protocol and the USB interface 404 receives input signals produced by the input device in response to movements of the hand controllers 122 and 124. The workstation memory 402 includes a current buffer 420 and a previous buffer 440, each including a plurality of stores for storing values associated with the control signals, as described later herein.
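The current buffer 420 and previous buffer 440 each hold position vectors and rotation matrices for the poses tracked later in this description. As a rough data-structure sketch of one such buffer (field names are hypothetical; numpy is assumed):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PoseBuffer:
    """Sketch of the stores in one buffer: 3-vectors and 3x3 rotation matrices."""
    p_hand: np.ndarray = field(default_factory=lambda: np.zeros(3))  # P_MCURR store
    r_hand: np.ndarray = field(default_factory=lambda: np.eye(3))    # R_MCURR store
    p_ee: np.ndarray = field(default_factory=lambda: np.zeros(3))    # P_EENEW store
    r_ee: np.ndarray = field(default_factory=lambda: np.eye(3))      # R_EENEW store

current_buffer, previous_buffer = PoseBuffer(), PoseBuffer()
```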
(29) The instrument processor circuit 118 includes a microprocessor 450, a memory 452, a communications interface 454, and a drive control interface 456, all of which are in communication with the microprocessor.
(30) The microprocessor 450 receives the control signals at the communications interface 454 based on the input signals received at the workstation processor circuit 114. The microprocessor 450 processes the control signals and causes the drive control interface 456 to produce drive signals for moving the instruments 110a and 110b.
(31) The workstation processor circuit 114 thus acts as a controller subsystem for receiving user input, while the instrument processor circuit 118 acts as a responder subsystem in responding to the user input and driving the instruments 110a and 110b. While the embodiment shown includes the workstation processor circuit 114 and the instrument processor circuit 118, in other embodiments a single processor circuit may be used to perform both controller and responder functions.
(32) In the embodiment shown, the instrument processor circuit 118 further includes an instrument data interface 458 having signal lines 460 that connect via the pins 324 on the instrument actuator 318 to the monitor board 328. In one embodiment the instrument data interface 458 may be implemented as a universal asynchronous receiver-transmitter (UART) or an I2C (Inter-Integrated Circuit) interface. Alternatively the interface 458 may be implemented using an interface such as Synchronous Serial Interface (SSI), Serial Peripheral Interface Bus (SPI), EtherCAT (Ethernet for Control Automation Technology), or a Controller Area Network (CAN bus), for example. The monitor board 328 includes an interface 462 and a memory 464. The memory 464 may be a persistent memory such as a NOR or NAND flash memory or other type of persistent memory. The interface 462 on the monitor board 328 facilitates writing data received via the instrument data interface 458 to the memory 464 or reading out data from the memory 464. In some embodiments the interface 462 may implement security protocols to prevent unauthorized access to the memory 464.
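Paragraph (32) describes a persistent memory on the monitor board 328 whose reads and writes may sit behind a security protocol. The exchange below is a toy illustration of an authenticated service life record; the key handling, record format, and HMAC choice are all assumptions, not details from the patent.

```python
import hashlib, hmac, json

SECRET_KEY = b"device-provisioned-key"  # stand-in for the security function

def pack_record(record: dict) -> bytes:
    """Serialize a service life record with an authentication tag before writing."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def unpack_record(blob: bytes) -> dict:
    """Verify the tag on a record read back from the instrument memory."""
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("unauthorized or corrupted service life record")
    return json.loads(payload)

blob = pack_record({"instrument_id": "110a", "uses_remaining": 9})
print(unpack_record(blob)["uses_remaining"])  # 9
```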
(33) A portion of the input device 112 that includes the right hand controller 122 is shown in greater detail in FIG. 5. For simplicity, only the right hand controller 122 of the input device 112 will be further described, it being understood that the left hand controller 124 operates in the same way. The input device 112 is supported on a base 500 and includes arms 502, 504, and 506 that provide a mounting for the hand controller 122, which may be grasped by the operator and moved within an input device workspace. The arms 502-506 permit positioning and rotation about orthogonal axes x.sub.1, y.sub.1 and z.sub.1 of a Cartesian reference frame defining the input workspace. The Cartesian reference frame has an origin at a point on a body of the hand controller 122 and the location of the origin defines the hand controller position 508 (i.e. at the origin). In this embodiment, the hand controller 122 is mounted on a gimbal mount 510. The arms 502-506 confine movements of the hand controller 122 and hence the hand controller position 508 to within a generally hemispherical input device workspace. In one embodiment the input device 112 may also be configured to generate haptic forces for providing haptic feedback to the hand controller 122 through the arms 502-506 and gimbal mount 510. The hand controller 122 also includes an end effector actuator 520 that may be opened and closed to actuate movement of an end effector as described in more detail later herein.
(34) The input device 112 includes sensors (not shown) that sense the position of each of the arms 502-506 and the rotation of the hand controller 122 about each of the x.sub.1, y.sub.1 and z.sub.1 axes, and produces signals representing the position of the hand controller in the input device workspace and the rotational orientation of the hand controller relative to an input device Cartesian reference frame x.sub.r, y.sub.r, z.sub.r. In this embodiment, the position and orientation signals are transmitted as input signals via the USB connection 518 to the USB interface 404 of the workstation processor circuit 114.
(35) In this embodiment, the gimbal mount 510 has a pin 512 extending downwardly from the mount and the base 500 includes a calibration opening 514 for receiving the pin. When the pin 512 is received in the opening 514 the hand controller 122 is located in a calibration position that is defined relative to the input device Cartesian reference frame x.sub.r, y.sub.r, z.sub.r. The input device reference frame has an x.sub.r-z.sub.r plane parallel to the base 500 and a y.sub.r axis perpendicular to the base. The z.sub.r axis is parallel to the base 500 and is coincident with an axis 516 passing centrally through the hand controller 122.
(36) The input device 112 produces current hand controller position signals and current hand controller orientation signals that represent the current position and orientation of the hand controller 122. The signals may be represented by a current hand controller position vector and a current hand controller rotation matrix. The current hand controller position vector is given by:
(37) $\vec{P}_{MCURR} = \begin{Bmatrix} x_1 \\ y_1 \\ z_1 \end{Bmatrix},$
(38) where x.sub.1, y.sub.1, and z.sub.1 represent coordinates of the hand controller position 508 (i.e. the origin of the coordinate system x.sub.1, y.sub.1, z.sub.1) relative to the input device reference frame x.sub.r, y.sub.r, z.sub.r. The current hand controller rotation matrix is given by:
(39) $R_{MCURR} = \begin{bmatrix} \hat{x}_1 \cdot \hat{x}_r & \hat{y}_1 \cdot \hat{x}_r & \hat{z}_1 \cdot \hat{x}_r \\ \hat{x}_1 \cdot \hat{y}_r & \hat{y}_1 \cdot \hat{y}_r & \hat{z}_1 \cdot \hat{y}_r \\ \hat{x}_1 \cdot \hat{z}_r & \hat{y}_1 \cdot \hat{z}_r & \hat{z}_1 \cdot \hat{z}_r \end{bmatrix},$
(40) where the columns of the matrix represent the axes of the hand controller reference frame x.sub.1, y.sub.1, z.sub.1 relative to the input device reference frame x.sub.r, y.sub.r, z.sub.r. The matrix R.sub.MCURR thus defines the current rotational orientation of the hand controller 122 relative to the x.sub.r, y.sub.r and z.sub.r fixed controller reference frame. The current hand controller position vector {right arrow over (P)}.sub.MCURR and current hand controller rotation matrix R.sub.MCURR are transmitted as current hand controller position and current hand controller orientation signals via the USB connection 518 to the USB interface 404 of the workstation processor circuit 114. The workstation processor circuit 114 stores the three values representing the current hand controller position vector {right arrow over (P)}.sub.MCURR in a store 422 and the nine values representing the current hand controller rotation matrix R.sub.MCURR in a store 424 of the current buffer 420 of workstation memory 402.
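In other words, the columns of R.sub.MCURR are the hand controller's axes written in the input device frame. The short numpy sketch below (illustrative only; none of these names come from the patent) builds such a matrix from three axis vectors and checks that it is a proper rotation:

```python
import numpy as np

def rotation_from_axes(x1, y1, z1):
    """Columns are the hand controller axes expressed in the input device frame."""
    r = np.column_stack([x1, y1, z1])
    assert np.allclose(r.T @ r, np.eye(3), atol=1e-9)  # orthonormal columns
    assert np.isclose(np.linalg.det(r), 1.0)           # right-handed frame
    return r

# Example: hand controller rotated 30 degrees about the y_r axis.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
r_mcurr = rotation_from_axes(np.array([c, 0.0, -s]),
                             np.array([0.0, 1.0, 0.0]),
                             np.array([s, 0.0, c]))
print(r_mcurr)
```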
(41) The right side instrument 110a is shown in greater detail in FIG. 6. Referring to FIG. 6, the positioner 314 of the instrument 110a operates within a surgical workspace 600. The positioner 314 of the instrument 110a is configured to position the end effector 316 within a tool workspace 602 indicated by the broken lines in FIG. 6. The surgical workspace 600 will generally be larger than the tool workspace 602 since the tool may be longitudinally advanced or retracted to access different portions of the surgical workspace. The instrument cart 104 may also be repositioned to facilitate access to different portions of the surgical workspace 600. The microprocessor 400 of the workstation processor circuit 114 processes the input signals based on a current mapping between the input device workspace for the input device 112 and the surgical workspace 600 and causes the motion control interface 408 to transmit control signals, which are conveyed to the instrument processor circuit 118 via the interface cable 116. The mapping may include a scale factor that scales movements in input device workspace to produce scaled movements in surgical workspace 600. For example, a 100 mm translation in input device workspace may be scaled by a scale factor of 0.5 to produce a 50 mm movement in surgical workspace 600 for fine movement.
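The scale factor in this mapping is a one-line transformation; a minimal sketch of the 0.5 fine-movement example above (the function name is hypothetical):

```python
SCALE = 0.5  # fine-movement scale factor from the example above

def map_translation(delta_input_mm):
    """Scale a hand controller translation into the surgical workspace 600."""
    return [SCALE * d for d in delta_input_mm]

print(map_translation([100.0, 0.0, 0.0]))  # [50.0, 0.0, 0.0], as in the text
```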
(42) The positioner 314 positions the end effector 316 within the tool workspace 602 by activating various drivers in the drive unit 106 in response to the drive signals produced by the drive control interface 456 of the instrument processor circuit 118. The drivers in the drive unit 106 are coupled to deliver actuation forces to the plurality of actuator slides 320 of the actuator 318. The drive signals are produced by the drive control interface 456 in response to the control signals received at the communications interface 454 from the workstation processor circuit 114 and are based on the current hand controller position vector {right arrow over (P)}.sub.MCURR and current hand controller rotation matrix R.sub.MCURR stored in the stores 422 and 424 of the current buffer 420 in the workstation memory 402.
(43) In this embodiment the positioner 314 of the instrument 110a includes a plurality of identical “vertebrae” 604 as described in commonly owned PCT patent application PCT/CA2013/001076 entitled “ARTICULATED TOOL POSITIONER AND SYSTEM EMPLOYING SAME” filed on Dec. 20, 2013, which is incorporated herein by reference in its entirety. The vertebrae 604 are operable to move with respect to each other when control wires passing through the vertebrae are extended or retracted to cause movements of the positioner 314. The control wires are coupled to the actuator slides 320, which when moved by the drive unit 106 position the end effector 316 within the surgical workspace 600. The position and orientation of the end effector 316 is defined relative to a fixed responder reference frame having axes x.sub.v, y.sub.v, and z.sub.v, which intersect at a point referred to as the fixed responder reference position 608. The fixed responder reference position 608 lies on a longitudinal axis 610 of the instrument 110a and is contained in a plane perpendicular to the longitudinal axis and containing a distal edge of the insertion tube 606. In one embodiment the fixed responder reference frame acts as a body cavity frame of reference.
(44) In the embodiment shown, the end effector 316 includes opposing gripper jaws 614 that are positioned and oriented within an end effector workspace. A tip of the gripper jaws 614 may be designated as an end effector position 612 defined as the origin of an end effector Cartesian reference frame x.sub.2, y.sub.2, z.sub.2. The end effector position 612 is defined relative to the responder reference position 608 and the end effector may be positioned and orientated relative to the fixed responder reference frame x.sub.v, y.sub.v, z.sub.v for causing movement of the positioner 314 and/or the end effector 316.
(45) The current hand controller position signal {right arrow over (P)}.sub.MCURR and current hand controller orientation signal R.sub.MCURR cause movement of the end effector 316 of the instrument 110a to new end effector positions and desired new end effector orientations represented by a new end effector position vector {right arrow over (P)}.sub.EENEW:
(46) \vec{P}_{EENEW} = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix},
(47) where x.sub.2, y.sub.2, and z.sub.2 represent coordinates of the end effector position 612 within the end effector workspace relative to the x.sub.v, y.sub.v, z.sub.v fixed responder reference frame. The new end effector orientation is represented by a 3×3 end effector rotation matrix R.sub.EENEW:
(48) R_{EENEW} = \begin{bmatrix} x_{2x} & y_{2x} & z_{2x} \\ x_{2y} & y_{2y} & z_{2y} \\ x_{2z} & y_{2z} & z_{2z} \end{bmatrix},
(49) where the columns of the R.sub.EENEW matrix represent the axes of the end effector reference frame x.sub.2, y.sub.2, and z.sub.2 written in the fixed responder reference frame x.sub.v, y.sub.v, and z.sub.v. The rotation matrix R.sub.EENEW thus defines a new orientation of the end effector 316 in the end effector workspace, relative to the x.sub.v, y.sub.v, and z.sub.v fixed responder reference frame. Values for the vector {right arrow over (P)}.sub.EENEW and rotation matrix R.sub.EENEW are calculated as described later herein and stored in stores 430 and 432 of the current buffer 420 of the workstation memory 402 respectively.
(50) When the system 100 initially starts up, the workstation processor circuit 114 sets a controller base position vector {right arrow over (P)}.sub.MBASE equal to the current hand controller vector {right arrow over (P)}.sub.MCURR and causes a definable controller base rotation matrix R.sub.MBASE to define an orientation that is the same as the current orientation defined by the hand controller rotation matrix R.sub.MCURR associated with the current hand controller rotation. At startup, the following operations are therefore performed:
{right arrow over (P)}.sub.MBASE={right arrow over (P)}.sub.MCURR, and
R.sub.MBASE=R.sub.MCURR.
(51) For the example of the instrument 110a, the hand controller 122 reference frame represented by the axes x.sub.1, y.sub.1, and z.sub.1 shown in FIG. 5 and the definable controller base reference frame represented by the axes x.sub.mb, y.sub.mb, and z.sub.mb (also shown in FIG. 5) thus coincide at startup of the system 100. Referring back to FIG. 4, the workstation processor circuit 114 stores the values representing the definable controller base position vector {right arrow over (P)}.sub.MBASE and the definable controller base rotation matrix R.sub.MBASE in the stores 426 and 428 of the current buffer 420 of the workstation memory 402.
(52) At startup of the system 100 there would be no previously stored values for the new end effector position vector {right arrow over (P)}.sub.EENEW and the new end effector rotation matrix R.sub.EENEW, and in one embodiment these values are set to home configuration values. A home configuration may be defined that produces a generally straight positioner 314 for the instrument 110a as shown in FIG. 6, and the values of {right arrow over (P)}.sub.EENEW and R.sub.EENEW for the home configuration may be preconfigured at initialization. On startup of the system 100 the workstation processor circuit 114 also causes a definable end effector base position vector {right arrow over (P)}.sub.EEBASE and a definable end effector base rotation matrix R.sub.EEBASE to be set to the home configuration values of {right arrow over (P)}.sub.EENEW and R.sub.EENEW. Additionally, values for {right arrow over (P)}.sub.EEPREV and R.sub.EEPREV stored in the stores 446 and 448 of the previous buffer 440 (shown in FIG. 4) of the workstation memory 402 are also set to the home configuration values of {right arrow over (P)}.sub.EENEW and R.sub.EENEW. In other embodiments, the home configuration may define configuration variables that produce bent poses, or a combination of straight and bent poses, for the positioning device. At startup, the following operations are therefore performed:
{right arrow over (P)}.sub.EEBASE={right arrow over (P)}.sub.EENEW={right arrow over (P)}.sub.EEPREV, and
R.sub.EEBASE=R.sub.EENEW=R.sub.EEPREV.
(53) The end effector reference frame represented by the axes x.sub.2, y.sub.2, and z.sub.2 shown in FIG. 6 and the definable responder base reference frame represented by the axes x.sub.sb, y.sub.sb, and z.sub.sb thus coincide at startup of the system 100. Referring back to FIG. 4, the workstation processor circuit 114 stores the values x.sub.sb, y.sub.sb, and z.sub.sb representing the definable responder base position vector {right arrow over (P)}.sub.EEBASE in store 434 and stores the values representing the definable responder base rotation matrix R.sub.EEBASE in a store 436 of the current buffer 420 of the workstation memory 402.
(54) The tool workspace 602 lies within the surgical workspace 600, and in this embodiment is represented by an elliptic paraboloid surface in the reference frame x.sub.v, y.sub.v, z.sub.v, which is given by:
(55) \frac{x^2}{a^2} + \frac{y^2}{b^2} = \frac{z}{c}.
(56) For the instrument 110a, since the positioner 314 is capable of symmetrical movements in any direction, the parameters a and b are equal. In other embodiments the instrument 110 may be configured to provide non-symmetrical movements in different directions and thus the parameters a and b may differ. The parameter c offsets the paraboloid with respect to the fixed responder reference position 608 to a position 618 defined by the axes x.sub.s, y.sub.s, z.sub.s, since physical limitations due to the vertebra 604 would prohibit movement close to the reference position 608. In other embodiments the tool workspace 602 may be defined by a surface other than the elliptic paraboloid shown in FIG. 6 or by a look-up table of coordinates that may be interpolated to define a continuous tool workspace 602.
(57) In FIG. 6, a second elliptic paraboloid surface 616 is shown lying within the tool workspace 602 and represents a pre-determined safe region within the tool workspace 602. In one embodiment, movements within the safe region 616 are considered to not cause a potential service life reduction for the instrument 110a. Movements beyond the safe region, but still within the tool workspace 602, are associated with an increased mechanical stress being placed on the components of the instrument 110a. In this embodiment a safe region 616 is defined having the same shape as the tool workspace 602. However, in other embodiments, the safe region 616 may have a different surface shape.
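To make the containment test concrete, the following sketch checks whether a point lies inside an elliptic paraboloid region. This is an illustration only; the patent does not fully specify sign conventions or how the offset c is applied, so the frame handling here is an assumption:

    def inside_paraboloid(p, a, b, c):
        """Return True if point p = (x, y, z), expressed relative to the
        offset workspace origin (position 618), lies on or inside the
        elliptic paraboloid x^2/a^2 + y^2/b^2 = z/c (assumed interior:
        points at or above the surface for z >= 0)."""
        x, y, z = p
        return z >= 0.0 and (x / a) ** 2 + (y / b) ** 2 <= z / c

    # With a == b the cross-section is circular, matching the symmetrical
    # movement capability described for the positioner 314.
    print(inside_paraboloid((5.0, 5.0, 40.0), a=30.0, b=30.0, c=60.0))  # True

The safe region 616 could be represented the same way with smaller parameters, so the same test would distinguish safe positions from stress-inducing ones.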
(58) Referring to FIG. 7, a flowchart depicting blocks of code for directing the workstation processor circuit 114 to execute a process for moving the instrument 110a is shown generally at 700. The blocks generally represent codes that direct the microprocessor 400 to perform various functions. The actual code to implement each block may be written in any suitable program language, such as C, C++, C#, Java, OpenGL, and/or assembly code, for example.
(59) The movement process 700 begins at block 702, which directs the microprocessor 400 to determine whether the enablement signal generated by the footswitch 126 is in an active state. If at block 702 it is determined that the footswitch 126 is currently released, the enablement signal will be in the active state and the microprocessor is directed to block 704, which directs the microprocessor 400 to read new values for {right arrow over (P)}.sub.MCURR and R.sub.MCURR from the current buffer 420 of the workstation memory 402, which represent the current hand controller position vector and current hand controller rotation matrix. Block 706 then directs the microprocessor 400 to calculate new end effector position signals {right arrow over (P)}.sub.EENEW and new end effector orientation signals R.sub.EENEW representing a desired end effector position 612 and desired end effector orientation, relative to the fixed responder reference position 608 and the responder base orientation (shown in FIG. 6). Block 706 also directs the microprocessor 400 to store values representing the new end effector position vector {right arrow over (P)}.sub.EENEW in the store 430 and to store values representing the desired end effector orientation matrix R.sub.EENEW in the store 432 of the current buffer 420 of the workstation memory 402.
(60) The new end effector position signals {right arrow over (P)}.sub.EENEW and new end effector orientation signals R.sub.EENEW are calculated according to the following relations:
\vec{P}_{EENEW} = A\,(\vec{P}_{MCURR} - \vec{P}_{MBASE}) + \vec{P}_{EEBASE}   (Eqn 1a)
R_{EENEW} = R_{EEBASE}\, R_{MBASE}^{-1}\, R_{MCURR}   (Eqn 1b)
(61) where: {right arrow over (P)}.sub.EENEW is the new end effector position vector that represents the new desired position of the end effector 316 in the end effector workspace, and is defined relative to the responder base reference position; A is a scalar value representing a scaling factor in translational motion between the hand controller 122 (controller) and the instrument 110a (responder); {right arrow over (P)}.sub.MCURR is the current representation of the hand controller position vector stored in the store 422 of the current buffer 420, the hand controller position vector being defined relative to the fixed controller reference frame x.sub.r, y.sub.r, and z.sub.r; {right arrow over (P)}.sub.MBASE is the last-saved position vector {right arrow over (P)}.sub.MCURR for the hand controller 122, saved at the last transition of the enablement signal from the inactive state to the active state, on system initialization, or by operation of a control interface by an operator; {right arrow over (P)}.sub.EEBASE is the last-saved position vector {right arrow over (P)}.sub.EENEW for the end effector 316, saved at the last transition of the enablement signal from the inactive state to the active state or on system initialization; R.sub.EENEW is the new end effector orientation matrix representing the desired new orientation of the end effector 316, and is defined relative to the fixed responder reference position 608; R.sub.EEBASE is the last-saved rotation matrix R.sub.EENEW of the end effector 316, saved at the last transition of the enablement signal from the inactive state to the active state;
(62) R.sub.MBASE.sup.-1 is the inverse of rotation matrix R.sub.MBASE, which is the last-saved rotation matrix R.sub.MCURR of the hand controller 122 saved at the last transition of the enablement signal from the inactive state to the active state; and
(63) R.sub.MCURR is the currently acquired rotation matrix representing the orientation of the hand controller 122 relative to the fixed controller reference frame x.sub.r, y.sub.r, and z.sub.r.
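A compact sketch of Eqn 1a and Eqn 1b (using numpy; the function name and the value of A are illustrative assumptions) shows how the current controller pose and the saved base poses combine into the new end effector pose:

    import numpy as np

    def new_end_effector_pose(p_mcurr, R_mcurr, p_mbase, R_mbase,
                              p_eebase, R_eebase, A=0.5):
        """Apply Eqn 1a and Eqn 1b to map the current hand controller pose,
        relative to its saved base pose, onto the end effector base pose."""
        p_eenew = A * (p_mcurr - p_mbase) + p_eebase   # Eqn 1a
        # For a proper rotation matrix the inverse equals the transpose.
        R_eenew = R_eebase @ R_mbase.T @ R_mcurr       # Eqn 1b
        return p_eenew, R_eenew

Because only the difference from the base pose is mapped, re-saving the base poses (the base setting step described below) relocates the input device workspace without moving the instrument.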
(64) Block 708 then directs the microprocessor 400 to determine whether the enablement signal has transitioned to the inactive state. If the enablement signal has transitioned to the inactive state, the microprocessor 400 is directed to block 710. Block 710 directs the microprocessor 400 to cause the motion control interface 408 to transmit control signals based on the previously calculated values of {right arrow over (P)}.sub.EEPREV and R.sub.EEPREV in the respective stores 446 and 448 of the previous buffer 440 of the workstation memory 402. The control signals transmitted by the motion control interface 408 are thus derived from the last saved values of {right arrow over (P)}.sub.EENEW and R.sub.EENEW. The instrument processor circuit 118 receives the control signals and produces drive signals at the drive control interface 456 that inhibit further movement of the positioner 314 of the instrument 110a.
(65) If the enablement signal has not transitioned to the inactive state at block 708, the microprocessor 400 is directed to block 712. Block 712 directs the microprocessor 400 to determine whether the desired spatial positioning of the positioner 314 of the instrument 110a would result in a movement of the positioner associated with a potential service life reduction for the instrument 110a. If at block 712, the spatial positioning of the positioner 314 is determined not to be associated with a potential service life reduction, then the microprocessor 400 is directed to block 714. Block 714 directs the microprocessor 400 to cause the motion control interface 408 to transmit control signals based on the newly calculated values for {right arrow over (P)}.sub.EENEW and R.sub.EENEW. When the control signals are received at the communications interface 454 of the instrument processor circuit 118, the microprocessor 450 causes drive signals to be produced that cause the end effector 316 to assume a position and orientation in the tool workspace determined by the current position and current orientation of the hand controller 122.
(66) The process then continues at block 716, which directs the microprocessor 400 to copy the current position vector {right arrow over (P)}.sub.MCURR and the current rotation matrix R.sub.MCURR stored in stores 422 and 424 of the current buffer 420 into stores 442 ({right arrow over (P)}.sub.MPREV) and 444 (R.sub.MPREV) of the previous buffer 440 of the workstation memory 402. Block 716 also directs the microprocessor 400 to copy the newly calculated end effector position vector {right arrow over (P)}.sub.EENEW and the newly calculated end effector rotation matrix R.sub.EENEW into stores 446 and 448 of the previous buffer 440. By storing these newly calculated values as the previously calculated end effector position vector {right arrow over (P)}.sub.EEPREV and previously calculated end effector rotation matrix R.sub.EEPREV, a subsequent new end effector position vector {right arrow over (P)}.sub.EENEW and new end effector rotation matrix R.sub.EENEW can be calculated from the next received hand controller position vector {right arrow over (P)}.sub.MCURR and next received hand controller rotation matrix R.sub.MCURR provided by the hand controller 122. Block 716 then directs the microprocessor 400 back to block 702, and the process is repeated.
(67) If at block 712, the microprocessor 400 determines that the desired spatial positioning of the positioner 314 of the instrument 110a would result in a movement of the positioner associated with a potential service life reduction for the instrument 110a, the microprocessor is directed to block 718. Block 718 directs the microprocessor 400 to initiate a movement management function. The movement management function may include various steps, such as the generation of an alert and/or receiving an operator override and generating a corresponding override signal. Various process embodiments of the movement management function are described in more detail below.
(68) When the movement management function block 718 has been initiated, the microprocessor 400 is directed to block 720, which directs the microprocessor 400 to determine whether an override signal has been enabled or asserted at block 718. If the microprocessor 400 determines that an operator override was received at block 720, the microprocessor 400 is directed to block 714, and the motion control signals based on {right arrow over (P)}.sub.EENEW and rotation matrix R.sub.EENEW are transmitted as described above and the movements of the positioner 314 are permitted to proceed outside the safe region 616 of the tool workspace 602. If the microprocessor 400 determines that an override is not in effect at block 720, the microprocessor 400 is directed to block 710, and the motion control signals based on {right arrow over (P)}.sub.EEPREV and rotation matrix R.sub.EEPREV are transmitted as described above and the end effector 316 is constrained within the safe region 616 of the tool workspace 602. In this case, the drive signals inhibit movement of the positioner 314 beyond the safe region 616 and cause the end effector 316 to remain positioned at a current position in the tool workspace 602. Further movements that would result in the end effector 316 remaining within the safe region 616 would however be permitted.
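The gating performed at blocks 708 through 720 can be summarized in a few lines. This is a sketch only; the real system transmits poses over the motion control interface rather than returning them from a function:

    def select_control_pose(enable_active, within_safe_region, override_active,
                            new_pose, prev_pose):
        """Decision logic of blocks 708-720: transmit the newly calculated
        pose only while the enablement signal is active and the motion is
        either inside the safe region or explicitly overridden; otherwise
        hold the previously transmitted pose."""
        if not enable_active:
            return prev_pose   # block 710: inhibit further movement
        if within_safe_region or override_active:
            return new_pose    # block 714: permit the movement
        return prev_pose       # constrain the end effector to the safe region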
(69) If at block 702, it is determined that the footswitch 126 is currently depressed, the enablement signal will be in the inactive state and the microprocessor is directed to block 720, initiating a base setting process. The base setting process is associated with blocks 720 and 722 and is executed asynchronously whenever the enablement signal produced by the footswitch 126 transitions from the active state to the inactive state. During the base setting process, the drive signals are maintained at the values that were in effect at the time the enablement signal transitioned to inactive at block 708. At block 720 the microprocessor 400 is directed to determine whether the enablement signal has transitioned back to the active state. While the enablement signal remains inactive (i.e. while the footswitch 126 is depressed) the control signals transmitted by the motion control interface 408 are based only on the previously calculated end effector position and orientation signals {right arrow over (P)}.sub.EEPREV and R.sub.EEPREV that were in effect before the enablement signal transitioned to inactive. If at block 720 the enablement signal remains in the inactive state, the microprocessor 400 is directed to repeat block 720 and the process is thus effectively suspended while the enablement signal remains in the inactive state. While the footswitch 126 is depressed, the surgeon may thus move the hand controller 122 to a new location to relocate the input device workspace relative to the surgical workspace 600.
(70) When at block 720 the enablement signal transitions from the inactive state to the active state, the microprocessor 400 is directed to block 722. Block 722 directs the microprocessor 400 to set new base positions and orientations for the hand controller 122 and end effector 316 respectively. Block 722 directs the microprocessor 400 to cause current values of the current hand controller position vector {right arrow over (P)}.sub.MCURR and the hand controller rotation matrix R.sub.MCURR to be written to stores 426 and 428 of the current buffer 420 of the workstation memory 402 as new values for the controller base position vector {right arrow over (P)}.sub.MBASE and controller base rotation matrix R.sub.MBASE. Block 722 also directs the microprocessor 400 to cause current values for the end effector position signal {right arrow over (P)}.sub.EENEW and the end effector orientation signal R.sub.EENEW to be stored in stores 434 and 436 of the current buffer 420 as the definable end effector base position vector {right arrow over (P)}.sub.EEBASE and definable responder base rotation matrix R.sub.EEBASE. Following execution of block 722, the microprocessor 400 is directed back to block 704 of the process 700, which directs the microprocessor to permit further movement of the positioner 314 of the instrument 110a. The control signals transmitted by the motion control interface 408 thus cause the instrument processor circuit 118 to produce drive signals at the drive control interface 456 that cause further movement of the instrument 110a.
(71) The base setting process implemented at blocks 720 and 722 thus allows the positioner 314 of the instrument 110a to be immobilized by depressing the footswitch 126 while the hand controller 122 of the input device 112 is moved to a new location. When the footswitch 126 is released, control of the positioner 314 of the instrument 110a resumes at the new position of the hand controller 122. The hand controller 122 may thus be repositioned as desired while the positioner 314 remains immobile, allowing a greater workspace to be accessed by the operator and preventing unintended movements that could injure the patient.
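A minimal sketch of the base-setting step (the state dictionary is an assumption standing in for stores 426, 428, 434, and 436 of the current buffer) re-anchors both base poses when the footswitch is released:

    def on_footswitch_released(p_mcurr, R_mcurr, p_eenew, R_eenew, state):
        """Block 722: when the enablement signal transitions back to the
        active state, re-anchor the controller and end effector base poses
        so that motion resumes from the hand controller's new location."""
        state["p_mbase"], state["R_mbase"] = p_mcurr.copy(), R_mcurr.copy()
        state["p_eebase"], state["R_eebase"] = p_eenew.copy(), R_eenew.copy()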
(72) The end effector position vector {right arrow over (P)}.sub.EENEW or {right arrow over (P)}.sub.EEPREV and end effector orientation matrix R.sub.EENEW or R.sub.EEPREV produced at block 706 provide a desired location of the end effector tip 612 (shown in FIG. 6) with respect to the fixed reference position 608. In the processor embodiment shown in FIG. 4, the microprocessor 400 of the workstation processor circuit 114 causes the motion control interface 408 to transmit motion control signals that define a pose required by the positioner 314 to position and orient the end effector 316 in the desired end effector position and orientation. The motion control signals are thus generated based on a kinematic configuration of the positioner 314 and end effector 316 to place the end effector position 612 at a desired position and orientation.
(73) Generation of motion control signals (block 408, FIG. 4) by the instrument processor circuit 118 is described with further reference to FIG. 8 and FIG. 9. The right side instrument 110a is shown in FIG. 8 in a bent pose from a side perspective and from a rear perspective in FIG. 9. The left side instrument 110b is shown in FIG. 9 in a straight pose corresponding to the home configuration described above. Referring to FIG. 8 and FIG. 9, the positioner 314 of the instrument 110a has a first articulated segment referred to as an s-segment 800 and a second articulated segment referred to as a distal segment 802. The segments each include the plurality of vertebra 604. The s-segment 800 begins at a distance from the insertion tube 606, referred to as the insertion distance q.sub.ins, which is a distance between the fixed responder reference position 608 defined at the origin of the responder fixed base reference frame x.sub.v, y.sub.v, and z.sub.v and a first position 804 at an origin of a first position reference frame x.sub.3, y.sub.3, and z.sub.3. The insertion distance q.sub.ins represents an unbendable portion of the positioner 314 that extends out of the end of the insertion tube 606. In the embodiment shown, the insertion distance q.sub.ins may be about 10-20 mm, while in other embodiments the insertion distance may be longer or shorter, varying from 0-100 mm, for example.
(74) The s-segment 800 extends from the first position 804 to a third position 806 defined as an origin of a third reference frame having axes x.sub.5, y.sub.5, and z.sub.5 and is capable of assuming a smooth s-shape when control wires (not shown) inside the s-segment 800 are pushed and pulled by actuating the plurality of actuator slides 320 of the actuator 318 (FIG. 3B). The s-segment 800 has a mid-point at a second position 808, defined as the origin of a second position reference frame having axes x.sub.4, y.sub.4, and z.sub.4. The s-segment 800 has a length L.sub.1, best shown in FIG. 9 for the left side tool positioner 332 of the instrument 110b. In the embodiment shown, the length L.sub.1 may be about 65 mm. The distal segment 802 extends from the third position 806 to a fourth position 810 defined as an origin of a fourth reference frame having axes x.sub.6, y.sub.6, and z.sub.6. The distal segment 802 has a length L.sub.2, best shown in FIG. 9 for the left side tool positioner 332. In the embodiment shown, the length L.sub.2 may be about 30 mm.
(75) Each end effector 316 and 334 also has an end effector length, which in the embodiment shown is a gripper length L.sub.3 extending from the fourth position 810 to the end effector tip position 612 defined as the origin of the axes x.sub.2, y.sub.2, and z.sub.2. The gripper length L.sub.3 is best shown in FIG. 9 again for the left side tool positioner 332 and in one embodiment may be about 40 mm. The responder reference position 608, first position 804, second position 808, third position 806, fourth position 810, and the end effector position 612 may collectively be referred to as tool reference positions.
(76) As described in PCT/CA2013/001076, by pushing and pulling on control wires inside the positioners 314 and 332, the s-segments 800 of the positioners may be bent into various degrees of an s-shape, from the straight condition shown in FIG. 6 to a partial or full s-shape for the right side instrument 110a shown in FIG. 8 and FIG. 9. The s-segment 800 is sectional in that it has a first section 812 and a second section 814 on opposite sides of the second position 808. Referring to FIG. 8, the first and second sections 812 and 814 lie in a first bend plane containing the first position 804, second position 808, and third position 806. The first bend plane is at an angle δ.sub.prox to the x.sub.v-z.sub.v plane of the fixed responder reference frame x.sub.v, y.sub.v, and z.sub.v. The first section 812 and second section 814 are bent in the first bend plane through opposite but equal angles θ.sub.prox such that no matter the angle θ.sub.prox or the bend plane angle δ.sub.prox, the z.sub.5 axis of the third position 806 is always parallel to and aligned in the same direction as the z.sub.v axis of the fixed responder reference position 608. Thus, by pushing and pulling on the control wires within the positioner 314, the third position 806 can be placed at any of a number of discrete positions in space within a cylindrical volume about the first position 804. This cylindrical volume may be referred to as the s-segment workspace.
(77) In addition, the distal segment 802 lies in a second bend plane containing the third position 806 and the fourth position 810. The second bend plane is at an angle δ.sub.dist to the x.sub.v-z.sub.v plane of the fixed responder reference frame x.sub.v, y.sub.v, and z.sub.v. The distal segment 802 is bent in the second bend plane at an angle θ.sub.dist. Thus, by pushing and pulling the control wires within the positioner 314, the fourth position 810 can be placed within another volume in space about the third position 806. This volume may be referred to as the distal workspace. The combination of the s-segment workspace and the distal workspace defines the tool workspace 602 and represents the total possible movement of the instrument 110a as effected by the positioner 314. The left side instrument 110b may be similarly positioned by the positioner 332.
(78) The distance between the fourth position 810 and the end effector position 612 is the distance between the movable portion of the distal segment 802 and the tip of the gripper 614 of the end effector 316 in the embodiment shown, i.e. the gripper length L.sub.3 shown in FIG. 9. Generally, a portion of the gripper between the fourth position 810 and the end effector position 612 will be unbendable.
(79) In the embodiment shown, the end effector 316 includes moveable gripper jaws 614 that are rotatable about the z.sub.2 axis in the x.sub.2-y.sub.2 plane of the end effector reference frame, the angle of rotation being represented by an angle γ relative to the positive x.sub.2 axis. Finally, the gripper jaws 614 may be at any of varying degrees of openness from fully closed to fully open (as limited by a hinge joint of the jaws). The varying degrees of openness may be defined as “G”. In summary therefore, the motion control signals are generated based on a kinematic configuration of the positioner 314 and end effector 316 as defined by the following configuration variables: q.sub.ins represents a distance from the responder reference position 608 defined by axes x.sub.v, y.sub.v, and z.sub.v to the first position 804 defined by axes x.sub.3, y.sub.3 and z.sub.3 where the s-segment 800 of the positioner 314 begins; δ.sub.prox represents a first bend plane in which the s-segment 800 is bent relative to the x.sub.v-y.sub.v plane of the fixed responder reference frame; θ.sub.prox represents an angle at which the first and second sections 812 and 814 of the s-segment 800 are bent in the first bend plane; δ.sub.dist represents a second bend plane in which the distal segment 802 is bent relative to the x.sub.v-y.sub.v plane of the fixed responder reference frame; θ.sub.dist represents an angle through which the distal segment 802 is bent in the second bend plane; γ represents a rotation of the end effector 316 about the axis z.sub.2; and G represents a degree of openness of the gripper jaws 614 of the end effector 316, a value calculated in direct proportion to a signal produced by an actuator (not shown) on the hand controller 122 indicative of an amount of pressure the operator exerts by squeezing the actuator to close the gripper jaws 614.
(80) To calculate the configuration variables, it will first be recalled that the end effector rotation matrix R.sub.EENEW is a 3×3 matrix:
(81) R_{EENEW} = \begin{bmatrix} x_{2x} & y_{2x} & z_{2x} \\ x_{2y} & y_{2y} & z_{2y} \\ x_{2z} & y_{2z} & z_{2z} \end{bmatrix},
(82) where the last column of R.sub.EENEW is the z-axis of the end effector reference frame written relative to the fixed responder reference frame x.sub.v, y.sub.v, and z.sub.v. The values θ.sub.dist, δ.sub.dist, and γ associated with the distal segment 802 may be calculated according to the relations:
(83) \theta_{dist} = \frac{\pi}{2} - \operatorname{atan2}\!\left(\sqrt{z_{2x}^2 + z_{2y}^2},\; z_{2z}\right)   (Eqn 2)
\delta_{dist} = -\operatorname{atan2}(z_{2y},\, z_{2x})   (Eqn 3)
\text{if } |\delta_{dist}| > \pi/2{:}\quad \gamma = \operatorname{atan2}(-y_{2z},\, x_{2z}) - \delta_{dist} + \pi   (Eqn 4a)
\text{else:}\quad \gamma = \operatorname{atan2}(y_{2z},\, -x_{2z}) - \delta_{dist}   (Eqn 4b)
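The distal-segment relations Eqn 2 through Eqn 4 translate almost directly into code. This is a sketch; the column-indexing convention for R_EENEW is an assumption based on paragraph (49):

    import numpy as np

    def distal_angles(R_eenew):
        """Eqns 2-4: recover the distal bend angle (theta_dist), bend plane
        angle (delta_dist), and end effector roll (gamma) from the columns
        of the end effector rotation matrix."""
        x2, y2, z2 = R_eenew[:, 0], R_eenew[:, 1], R_eenew[:, 2]
        theta_dist = np.pi / 2 - np.arctan2(np.hypot(z2[0], z2[1]), z2[2])  # Eqn 2
        delta_dist = -np.arctan2(z2[1], z2[0])                              # Eqn 3
        if abs(delta_dist) > np.pi / 2:
            gamma = np.arctan2(-y2[2], x2[2]) - delta_dist + np.pi          # Eqn 4a
        else:
            gamma = np.arctan2(y2[2], -x2[2]) - delta_dist                  # Eqn 4b
        return theta_dist, delta_dist, gamma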
(84) The third position 806 may then be written in terms of a vector p.sub.3/v from the fixed responder reference position 608 to the third position. Similarly, a vector p.sub.4/3 may be defined from the third position 806 to the fourth position 810 and a vector p.sub.5/4 may be defined from the fourth position 810 to the end effector position 612. These values can then be used to compute the location of the third position 806 relative to the fixed responder reference position 608 by subtracting the vectors p.sub.4/3 and p.sub.5/4 from the end effector position vector {right arrow over (P)}.sub.EENEW:
(85) \bar{p}_{3/v} = \vec{P}_{EENEW} - \bar{p}_{4/3} - \bar{p}_{5/4},   (Eqn 5)
where:
\bar{p}_{4/3}\cdot\bar{i} = \frac{-L_2 \cos\delta_{dist}\,(\sin\theta_{dist} - 1)}{\pi/2 - \theta_{dist}}   (Eqn 6a)
\bar{p}_{4/3}\cdot\bar{j} = \frac{L_2 \sin\delta_{dist}\,(\sin\theta_{dist} - 1)}{\pi/2 - \theta_{dist}}   (Eqn 6b)
\bar{p}_{4/3}\cdot\bar{k} = \frac{L_2 \cos\theta_{dist}}{\pi/2 - \theta_{dist}}   (Eqn 6c)
\bar{p}_{5/4}\cdot\bar{i} = L_3 \cos\delta_{dist} \cos\theta_{dist}   (Eqn 7a)
\bar{p}_{5/4}\cdot\bar{j} = -L_3 \sin\delta_{dist} \cos\theta_{dist}   (Eqn 7b)
\bar{p}_{5/4}\cdot\bar{k} = L_3 \sin\theta_{dist},   (Eqn 7c)
(86) where i is a unit vector in the x direction, j is a unit vector in the y direction, and k is a unit vector in the z direction.
(87) The vector p.sub.3/v from the fixed responder reference position 608 to the third position 806 may then be used to find the configuration variables δ.sub.prox and θ.sub.prox for the s-segment 800. The angle δ.sub.prox is calculated by solving the following two equations for δ.sub.prox:
(88) \bar{p}_{3/v}\cdot\bar{i} = \frac{-L_1 \cos\delta_{prox}\,(\sin\theta_{prox} - 1)}{\pi/2 - \theta_{prox}}   (Eqn 8a)
\bar{p}_{3/v}\cdot\bar{j} = \frac{L_1 \sin\delta_{prox}\,(\sin\theta_{prox} - 1)}{\pi/2 - \theta_{prox}}.   (Eqn 8b)
(89) Taking a ratio of Eqn 8b and Eqn 8a yields:
\delta_{prox} = \operatorname{atan2}\!\left(-\bar{p}_{3/v}\cdot\bar{j},\; \bar{p}_{3/v}\cdot\bar{i}\right),   (Eqn 9)
(90) where i and j are unit vectors in the x and y directions respectively. A closed form solution cannot be found for θ.sub.prox, and accordingly θ.sub.prox must be found by numerically solving either of equations Eqn 8a or Eqn 8b. For example, a Newton-Raphson method may be employed, which iteratively produces successively better approximations to the root of a real-valued function. The Newton-Raphson method can be implemented using the following equations:
(91) f(\theta_{prox}) = \frac{L_1}{\pi/2 - \theta_{prox}}\,\cos\delta_{prox}\,(1 - \sin\theta_{prox}) - \bar{p}_{3/v}\cdot\bar{i} = 0,   (Eqn 10)
(92) where i is the unit vector in the x direction. The equation Eqn 10 is Eqn 8a rewritten in the form f(θ.sub.prox)=0. The Newton-Raphson method tends to converge very quickly because in the range 0<θ.sub.prox<π, the function has a large radius of curvature and has no local stationary points. Following the Newton-Raphson method, successive improved estimates of θ.sub.prox can be made iteratively to satisfy equation Eqn 10 using the following relationship:
(93) \theta_{n+1} = \theta_n - \frac{f(\theta_n)}{f'(\theta_n)}   (Eqn 11)
(94) Finally, upon determination of θ.sub.prox, the following equation can be used to find q.sub.ins:
(95) q_{ins} = -\bar{p}_{3/v}\cdot\bar{k} - \frac{L_1 \cos\theta_{prox}}{\pi/2 - \theta_{prox}},   (Eqn 12)
(96) where k is the unit vector in the z direction and p.sub.3/v·k is the dot product of the vector p.sub.3/v and the unit vector k.
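Putting Eqns 5 through 12 together, the s-segment configuration variables can be solved as sketched below. Several assumptions are made for brevity: a finite-difference derivative stands in for f' in Eqn 11, a fixed initial guess is used, and only Eqn 8a is solved, which is ill-conditioned when cos(δ.sub.prox) is near zero; a production implementation would handle those cases:

    import numpy as np

    def s_segment_ik(p_eenew, theta_dist, delta_dist, L1, L2, L3,
                     iters=20, tol=1e-9):
        """Eqns 5-12: solve the s-segment configuration variables
        (delta_prox, theta_prox, q_ins) for a desired end effector position.
        Note: theta_dist == pi/2 (straight pose) is a removable singularity
        of Eqns 6 and 12 and would need special handling."""
        k = np.pi / 2 - theta_dist
        # Eqn 6: vector from the third position to the fourth position.
        p43 = np.array([-L2 * np.cos(delta_dist) * (np.sin(theta_dist) - 1) / k,
                        L2 * np.sin(delta_dist) * (np.sin(theta_dist) - 1) / k,
                        L2 * np.cos(theta_dist) / k])
        # Eqn 7: vector from the fourth position to the end effector tip.
        p54 = np.array([L3 * np.cos(delta_dist) * np.cos(theta_dist),
                        -L3 * np.sin(delta_dist) * np.cos(theta_dist),
                        L3 * np.sin(theta_dist)])
        p3v = p_eenew - p43 - p54                      # Eqn 5
        delta_prox = np.arctan2(-p3v[1], p3v[0])       # Eqn 9
        # Eqn 10: residual whose root is theta_prox (Eqn 8a rearranged).
        def f(th):
            return (L1 / (np.pi / 2 - th)) * np.cos(delta_prox) \
                   * (1 - np.sin(th)) - p3v[0]
        theta_prox = 0.1                               # initial guess
        for _ in range(iters):                         # Eqn 11: Newton-Raphson
            h = 1e-7
            fprime = (f(theta_prox + h) - f(theta_prox - h)) / (2 * h)
            step = f(theta_prox) / fprime
            theta_prox -= step
            if abs(step) < tol:
                break
        # Eqn 12: insertion distance.
        q_ins = -p3v[2] - L1 * np.cos(theta_prox) / (np.pi / 2 - theta_prox)
        return delta_prox, theta_prox, q_ins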
(97) The above configuration variables are calculated for the end effector position and orientation signals {right arrow over (P)}.sub.EENEW and R.sub.EENEW at block 706 or {right arrow over (P)}.sub.EEPREV and R.sub.EEPREV at block 710 of the process 700. The configuration variables generally define a pose of the positioner 314 required to position the end effector 316 at the desired location and orientation in the end effector workspace. Configuration variables are produced for each end effector 316 and 334 of the respective right and left side instruments 110a and 110b. Two sets of configuration variables, referred to as right and left configuration variables respectively, are thus produced and transmitted by the motion control interface 408 to the instrument processor circuit 118 and used by the microprocessor 450 to generate drive control signals for spatially positioning the positioner 314 and end effector 316 of the instrument 110a in the surgical workspace 600.
(98) The values of the vector {right arrow over (P)}.sub.EENEW and rotation matrix R.sub.EENEW calculated as described above and stored in stores 430 and 432 of the current buffer 420 of the workstation memory 402 thus define the location (x, y, z) of the end effector 316 of the instrument 110a within the surgical workspace 600 relative to the fixed responder reference frame x.sub.v, y.sub.v, z.sub.v (shown in FIG. 6).
(99) Referring to FIG. 10, a process for implementing block 712 of the movement process 700 (FIG. 7) is shown generally at 1000. The process 1000 causes the microprocessor 400 to make the determination as to whether the desired spatial positioning of the end effector 316 by the tool positioner 314 is associated with a service life reduction. Block 1002 directs the microprocessor 400 to read the values of the vector {right arrow over (P)}.sub.MCURR and rotation matrix R.sub.MCURR from the stores 422 and 424 of the current buffer 420. Block 1004 then directs the microprocessor 400 to generate a notional line extending from the reference position 618 x.sub.s, y.sub.s, z.sub.s (in FIG. 6) to the position defined by the values of {right arrow over (P)}.sub.MCURR and R.sub.MCURR. Block 1006 then directs the microprocessor 400 to determine whether the generated line intersects the safe region surface 616. If at block 1006, the line intersects the surface 616, then the end effector position 612 corresponding to {right arrow over (P)}.sub.MCURR and R.sub.MCURR would be outside the safe region 616 and would thus potentially result in a reduction in service life for the tool positioner 314 of the instrument 110. Block 1006 then directs the microprocessor 400 to block 718 of the process 700 for initiation of the movement management function.
(100) If at block 1006, the notional line does not intersect the surface 616, then the end effector position 612 corresponding to {right arrow over (P)}.sub.MCURR and R.sub.MCURR would be within the safe region 616 and would thus not result in a reduction in service life for the tool positioner 314 of the instrument 110. Block 1006 then directs the microprocessor 400 to block 714 of the process 700 and motion control signals are transmitted to the instrument processor circuit 118 to facilitate movement of the end effector 316.
(101) In the process 1000 the vector {right arrow over (P)}.sub.MCURR and rotation matrix R.sub.MCURR represent desired positions for the end effector 316 of the instrument 110a. However, physical movement of the tool positioner 314 only occurs after the workstation processor circuit 114 writes these values to the vector {right arrow over (P)}.sub.EENEW and rotation matrix R.sub.EENEW stored in stores 430 and 432 of the current buffer 420 and then transmits these values via the interface cable 116 to the instrument processor circuit 118.
(102) Referring to FIG. 11A, an embodiment of a movement management process for implementing block 718 of the movement process 700 is shown generally at 1100. The movement management process 1100 runs in parallel with the movement process 700 and begins at block 1102. Block 1102 directs the microprocessor 400 to generate an alert that the movement to the desired spatial position is associated with a potential service life reduction. An example of a displayed alert is shown in FIG. 11B at 1130. The alert 1130 includes a message indicating that the requested movement is associated with a potential service life reduction and provides a “Cancel” button 1132, an “Override” button 1134, and an “Information” button 1136 for receiving an operator selection. The alert 1130 may be displayed on the display 120 and/or on the secondary display 132. Block 1102 then directs the microprocessor 400 to block 1104, which directs the microprocessor 400 to determine whether the “Information” button 1136 has been activated by the operator. If the “Information” button 1136 has been selected, the process continues at block 1106, which directs the microprocessor 400 to display additional information. For example, a pop-up box (not shown) may be displayed on the display 120 or secondary display 132 that includes information such as the current remaining service life for the instrument, background information on the reasons for the movement causing a service life reduction, and information on the override process. The information display may include a cancel button for returning to the displayed alert 1130 once the operator has reviewed the information presented. Block 1106 then directs the microprocessor 400 back to block 1104.
(103) If at block 1104, the “Information” button 1136 was not activated, the microprocessor 400 is directed to block 1108. Block 1108 directs the microprocessor 400 to determine whether the “Cancel” button 1132 has been activated by the operator. If the “Cancel” button 1132 was activated, the microprocessor 400 is directed to block 1110, which directs the microprocessor 400 to discontinue display of the alert 1130. The movement process 700 continues to run as before and if the operator still provides input via the input device 112 that represents a desired end effector position outside the safe region 616, block 712 will again direct the microprocessor 400 to block 718, the process 1100 will be re-initiated, and the alert 1130 will be displayed again.
(104) If at block 1108, the “Cancel” button 1132 was not activated, the microprocessor 400 is directed to block 1112, which directs the microprocessor 400 to determine whether the “Override” button 1134 has been activated by the operator. If the “Override” button 1134 has not been activated, the microprocessor 400 is directed back to block 1104, which causes blocks 1104, 1108, and 1112 to be repeated until the operator makes a selection of one of the buttons 1132, 1134, or 1136. If at block 1112, the “Override” button 1134 has been activated, the microprocessor 400 is directed to block 1114. Block 1114 directs the microprocessor 400 to update a service life parameter for the instrument 110a. In the embodiment shown in FIG. 4, the service life parameter of the instrument 110a is stored in the memory 464 on the monitor board 328. The memory 464 of the monitor board 328 may be accessed by the instrument processor circuit 118 via the instrument interface 458 and interface 462 on the monitor board. In one embodiment the microprocessor 400 may send an update command via the interface cable 116 that causes the microprocessor 450 of the instrument processor circuit 118 to initiate the necessary update to the service life parameter.
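The button-polling loop at blocks 1104 through 1114 is a simple dispatch; a schematic sketch (the event source is an assumption standing in for the real display interface) might read:

    def handle_alert(next_button):
        """Blocks 1104-1114: poll operator selections on the alert 1130
        until Cancel, Override, or Information resolves the alert.
        next_button is a callable returning "info", "cancel", or "override"."""
        while True:
            button = next_button()
            if button == "info":        # block 1106: show additional detail
                print("remaining service life, reasons, override process...")
            elif button == "cancel":    # block 1110: dismiss the alert
                return "cancelled"
            elif button == "override":  # block 1114 onward: grant override
                return "override"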
(105) In some embodiments, the instrument 110a when newly manufactured may have a pre-determined number of uses loaded into the memory 464 on the monitor board 328. As an example, an instrument may be designed to be reused a number of times (for example 20 times). During each use, the mechanical structures of the instrument 110a will be subjected to some stresses and eventually components of the instrument may become strained or worn. Additionally, following each use the instrument 110a must be cleaned and sterilized, which may involve autoclaving or other processes that cause additional stress and/or deterioration of the materials and components of the instrument. The determination that a desired spatial positioning would result in a movement of the positioner 314 associated with a potential service life reduction may be based on an estimated strain in the control wires associated with the movement. Positions within the tool workspace 602 that are associated with increased strain in the control wires may be mapped to generate the safe region 616 as shown in FIG. 6.
(106) In this embodiment, the service life parameters are stored in the memory 464 rather than the workstation processor circuit 114 or instrument processor circuit 118. This avoids circumvention of the service life restrictions by simply using the instrument with another system 100. The interface 462 may also implement security functions for controlling access for reading and writing to the memory 464. The security functions may be implemented to prevent unauthorized access to the memory 464 for changing the remaining service life of the instrument 110a. As an example, the interface 462 may implement a cryptography system that uses pairs of cryptographic keys to prevent access to the memory 464 by a host not having a corresponding cryptographic key. In other embodiments, although less desirable, the service life parameter may be stored in the workstation memory 402 or the memory 452 of the instrument processor circuit 118.
(107) Use of the instrument 110a outside the safe region 616 shown in FIG. 6 results in additional strain in the control wires and may cause additional wear of the vertebra 604. The updating of the service life parameter accounts for this additional strain by reducing the number of uses remaining for the instrument. For example, a single override may be associated with a reduction of one or more of the 20 uses set out in the example above.
(108) Once the service life parameter has been updated at block 1114, the microprocessor 400 is directed to block 1116. Block 1116 directs the microprocessor 400 to enable or assert the override signal for use at block 720 of the movement process 700, as described above. Block 1116 also directs the microprocessor 400 to start a countdown timer T.sub.o. The countdown timer provides a pre-determined override period during which the operator is able to provide inputs to the input device 112 that cause the end effector to be positioned outside of the safe region 616. For example, the timer T.sub.o may be set for 30 or 60 seconds. The microprocessor 400 is then directed to block 1118, which directs the microprocessor to determine whether the countdown timer T.sub.o has expired. If the timer has not yet expired, block 1118 is repeated. If at block 1118, the timer T.sub.o has expired, the microprocessor 400 is directed to block 1120. Block 1120 directs the microprocessor 400 to disable the override signal. As such, the microprocessor 400 will discontinue transmitting drive signals at block 714 of the movement process 700 for movements of the positioner 314 that are associated with the potential service life reduction on expiry of the override period. Block 1120 then directs the microprocessor 400 to block 1122, where the movement management process 1100 ends. A further determination at block 712 as to whether the desired spatial positioning of the end effector 316 is outside the safe region 616 may again trigger the movement management process 1100.
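Blocks 1114 through 1120 amount to a decrement-and-timer pattern. A minimal sketch follows; the class name, storage model, and default values are illustrative assumptions, and in the system described the parameter actually lives in the memory 464 on the monitor board 328:

    import time

    class OverrideManager:
        """Sketch of blocks 1114-1120: decrement the service life parameter
        and open a timed override window."""

        def __init__(self, remaining_uses=20):
            self.remaining_uses = remaining_uses  # held in instrument memory
            self.expires_at = None                # end of the override window

        def grant_override(self, cost=1, period_s=30.0):
            self.remaining_uses -= cost                     # block 1114
            self.expires_at = time.monotonic() + period_s   # block 1116: start T_o

        def override_active(self):
            # Blocks 1118/1120: the override signal is disabled once the
            # countdown period expires.
            return (self.expires_at is not None
                    and time.monotonic() < self.expires_at)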
(109) In an alternative embodiment, the service life parameter may correspond to a pre-determined usage time for the instrument 110a. In this case the microprocessor 400 may be configured to decrement a usage time parameter stored in the memory 464 of the monitor board 328 based on an expected reduction in service lifetime caused by the movement. Various other alternatives for implementing the service life parameter may include a parameter that includes a pre-determined number of movements of the positioner 314 of the instrument 110a associated with a potential service life reduction. For example, it may be pre-determined that the instrument 110a can safely move outside the safe region 616 a certain number of times, and the microprocessor 400 may be configured to decrement a remaining number of these movements stored in the memory 464 of the monitor board 328 each time the override input is received from the operator.
(110) Referring back to FIG. 1, in the embodiment shown an image of the surgical workspace including anatomical features and the instruments 110 is displayed on the display 120. In one embodiment the movement management block 718, when initiated, causes an alert icon 130 to be displayed overlaying a portion of an image of the left hand side tool. In other embodiments the alert may take the form of coloring a portion of the screen, such as the screen border, red, or causing the screen to flash.
(111) Alternatively, the workstation 102 may include an audible warning device that is capable of generating an alert tone. The alert tone may be combined with a display of the alert 1130 in FIG. 11B on the secondary display 132.
(112) As disclosed above, the input device 112 may be configured to generate haptic forces for providing feedback to the operator via the hand controllers 122 and 124. In one embodiment the alert may involve the movement management block 718 directing the microprocessor 400 to generate a haptic feedback signal that is communicated to the input device 112 via the USB connection 518 to cause generation of haptic forces. As an example, when the hand controller 122 generates input signals that would result in the end effector of the right side instrument 110a moving outside of the safe region 616, a perceptible force may be generated on the hand controller 122 that provides the alert to the operator grasping the hand controller.
(113) The instrument 110a in the embodiment described above includes articulated linkages in the form of the vertebra 604 that provide the smoothly bendable articulated segments 800 and 802 shown in FIG. 9. Referring to FIG. 12, in another embodiment an instrument 1200 includes linkages 1202 and 1204 and a wrist 1206 that are articulated at discrete joints 1208 and 1210. The articulated linkages 1202 and 1204 include control wires (not shown) that run through the linkages and activate the instrument 1200 to cause bending at the discrete joints 1208 and 1210. In this embodiment the wrist 1206 includes articulated segments as generally described above that provide a smoothly bendable linkage for positioning an end effector 1212 in a surgical workspace. A second instrument 1214 is similarly configured. The above described embodiments may be implemented for the instruments 1200 and 1214.
(114) While the above embodiments have been described in terms of a positioning function, the process may be implemented for mechanical functions other than positioning. For example, referring back to FIG. 8, the gripper jaws 614 of the end effector 316 may be actuated to open and close by one of the actuator slides 320. The applicable actuator slide 320 thus provides an actuation force by tensioning control wires extending along the positioner 314 and coupled to one or both of the gripper jaws 614 of the end effector 316 at the distal end of the positioner. The microprocessor 400 of the workstation processor circuit 114 may be configured to generate end effector drive signals for causing the opposing gripper jaw elements to close with a desired force in proportion to an end effector actuation signal. The actuation signal is generated by the input device 112 in response to a force imparted by the operator on the end effector actuator 520 of the input device 112 shown in FIG. 5. The end effector actuator 520 may provide a force sensitive input that generates end effector input signals in response to a force exerted by the operator on the actuator. The microprocessor 400 may be configured to make a determination that the desired force would result in a potential service life reduction for the actuation of the gripper jaws 614. As described above, the microprocessor 400 may initiate an actuation management function. Similarly, if the microprocessor 400 determines that the desired force would not result in a potential service life reduction for the instrument, end effector drive signals may be generated to cause the end effector to close with the desired force.
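The same gating pattern applies to grip force. A short sketch follows; the gain and force limit are illustrative assumptions, not values from the disclosure:

    def gripper_drive(squeeze_force_n, gain=1.0, safe_limit_n=15.0):
        """Command a jaw closing force proportional to the operator's squeeze
        on the end effector actuator, flagging forces above an assumed safe
        limit for the actuation management function."""
        desired_n = gain * squeeze_force_n
        needs_management = desired_n > safe_limit_n  # potential service life hit
        return desired_n, needs_management

    # A gentle squeeze passes through; an excessive one triggers management.
    print(gripper_drive(5.0))   # (5.0, False)
    print(gripper_drive(20.0))  # (20.0, True)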
(115) There is provided a non-transitory computer readable medium storing instructions, which when executed by at least one processor, cause the at least one processor to perform any of the methods as generally shown or described herein and equivalents thereof.
(116) While specific embodiments have been described and illustrated, such embodiments should be considered illustrative only and not as limiting the disclosed embodiments as construed in accordance with the accompanying claims.
Claims
1. A robotic surgery system comprising: an input device configured to generate input signals in response to manipulation by an operator, the input signals representing a desired spatial positioning of a tool of an instrument within a tool workspace, the tool workspace including extents corresponding to physical movement limitations associated with a positioner of the instrument to which the tool is coupled; and a processor configured to: receive the input signals from the input device; process the input signals to determine the desired spatial positioning of the tool within the tool workspace; in response to a determination that the desired spatial positioning would result in a movement of the positioner associated with a potential service life reduction for the instrument, initiate a movement management function; and in response to a determination that the desired spatial positioning would not result in the movement of the positioner associated with the potential service life reduction for the instrument, generate drive signals for movement of the positioner to cause the tool to be positioned at a position corresponding to the desired spatial positioning in the tool workspace.
2. The system of claim 1 wherein the processor is configured to make the determination that the desired spatial positioning would result in the movement of the positioner associated with the potential service life reduction by determining that the desired spatial positioning associated with the input signals lies outside a pre-determined safe region of the tool workspace.
3. The system of claim 2, wherein the processor is configured to initiate the movement management function by temporarily permitting the operator to extend the pre-determined safe region to permit the tool to move outside the pre-determined safe region.
4. The system of claim 1 wherein the processor is configured to initiate the movement management function by: causing an alert to be generated to indicate to the operator that the movement is associated with the potential service life reduction; and generating drive signals to inhibit movement of the positioner to cause the tool to remain positioned at a current position in tool workspace.
5. The system of claim 4 wherein the input device is configured to deliver a haptic feedback to an operator of the input device, and wherein the processor is configured to generate the alert by causing the input device to deliver the haptic feedback.
6. The system of claim 1 wherein the processor is configured to initiate the movement management function by: causing an alert to be generated to indicate to the operator that the desired spatial positioning is associated with the potential service life reduction; and in response to receiving an override input from the operator: generate drive signals for movement of the positioner to cause the tool to be positioned at the position in the tool workspace; and update a service life parameter associated with the instrument based on an expected reduction in service life caused by the movement.
7. The system of claim 6 wherein the service life parameter comprises a pre-determined number of uses for the instrument, the number of uses being decremented each time the instrument is used in a surgical procedure, and wherein the processor is configured to decrement the number of uses based on the expected reduction in service life caused by the movement.
8. The system of claim 6 wherein the service life parameter comprises a pre-determined usage time, and wherein the processor is configured to decrement the usage time based on the expected reduction in service life caused by the movement.
9. The system of claim 6 wherein the service life parameter comprises a pre-determined number of movements of the positioner that are associated with the potential service life reduction, and wherein the processor is configured to decrement the number of movements each time the override input is received from the operator.
10. The system of claim 6 wherein the processor is configured to discontinue generating drive signals for movements of the positioner that are associated with the potential service life reduction responsive to expiry of an override period.
11. The system of claim 6 further comprising a display configured to display an image of the tool workspace to the operator, and wherein the processor is configured to cause the alert to be generated by causing displaying of an alert icon on the display.
12. The system of claim 11 wherein the processor is configured to cause displaying an interactive alert icon on the display, the interactive alert icon being configured to generate the override input when activated by the operator.
13. The system of claim 6 wherein the input device is configured to deliver a haptic feedback to an operator of the input device, and wherein the processor is configured to generate the alert by causing the input device to deliver the haptic feedback.
14. The system of claim 1 wherein a service life parameter is configured to be stored in a memory associated with the instrument, and wherein the processor is configured to update the service life parameter by writing a new service life parameter to the memory.
15. The system of claim 14 wherein the memory comprises a memory located on the instrument, and wherein the system comprises an instrument interface configured to place the processor in data communication with the memory responsive to the instrument being loaded into the system.
16. The system of claim 15 wherein access for reading and writing to the memory is protected by a security function to prevent unauthorized changes to the service life parameter.
17. The system of claim 14 wherein the memory comprises a memory of the processor, and wherein the service life parameter includes an identifier that associates the service life parameter with the instrument.
18. The system of claim 1 wherein the positioner comprises: a plurality of articulated linkages; and a plurality of control wires that are pushed or pulled to cause movement of the articulated linkages to position the tool within the tool workspace; and wherein the determination that the desired spatial positioning would result in the movement of the positioner associated with the potential service life reduction is based on an estimated strain in the control wires associated with the movement.
19. The system of claim 1 wherein the tool comprises an end effector positioned at a distal end of the tool, and wherein the end effector comprises a pair of opposing elements, the opposing elements being actuated to close by an end effector actuation signal received from the input device, and wherein the processor is configured to: determine an end effector drive signal for causing the opposing elements to close with a desired force in proportion to the end effector actuation signal; in response to a determination that the desired force would result in the potential service life reduction for the instrument, initiate an actuation management function; and in response to a determination that the desired force would not result in the potential service life reduction for the instrument, generate the end effector drive signal to cause the end effector to close with the desired force.
20. A method for operating a robotic surgery system, the robotic surgery system including a processor and an input device, the method comprising, by the processor: receiving input signals in response to manipulation of the input device by an operator, the input signals representing a desired spatial positioning of a tool of an instrument within a tool workspace, the tool workspace including extents corresponding to physical movement limitations associated with a positioner of the instrument to which the tool is coupled; processing the input signals to determine the desired spatial positioning of the tool within the tool workspace; in response to a determination that the desired spatial positioning would result in a movement of the positioner associated with a potential service life reduction for the instrument, initiating a movement management function; and in response to a determination that the desired spatial positioning would not result in a movement of the positioner associated with the potential service life reduction for the instrument, generating drive signals for movement of the positioner to cause the tool to be positioned at a position corresponding to the desired spatial positioning in the tool workspace.
21. The method of claim 20 wherein initiating the movement management function comprises: generating an alert to indicate to the operator that the desired spatial positioning is associated with the potential service life reduction; and in response to receiving an override input from the operator: generating drive signals for movement of the positioner to cause the tool to be positioned at the position in the tool workspace; and updating a service life parameter associated with the instrument based on an expected reduction in service life caused by the movement.
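For anyone skimming the claims above: claims 6-10 and 20-21 describe a simple control loop: estimate whether a commanded move would wear the instrument, warn the operator, and only drive the positioner (and decrement a service-life budget) after an explicit override. Here is a minimal Python sketch of that flow; all names, the strain threshold, and the override window are my own assumptions for illustration, not Titan's actual implementation:

```python
# Hypothetical sketch of the movement-management flow in claims 6-11, 18, 20-21.
from dataclasses import dataclass
from typing import Callable

STRAIN_LIMIT = 0.8        # assumed normalized control-wire strain limit (claim 18)
OVERRIDE_PERIOD_S = 30.0  # assumed override validity window (claim 10)


@dataclass
class ServiceLife:
    """Service-life parameters associated with the instrument (claims 7-9)."""
    uses_remaining: int            # claim 7: decremented per surgical use
    usage_time_remaining_s: float  # claim 8: decremented by usage time
    risky_moves_remaining: int     # claim 9: decremented per overridden movement


def handle_positioning_request(
    desired_pose,
    life: ServiceLife,
    estimate_strain: Callable,   # returns estimated control-wire strain (claim 18)
    drive_positioner: Callable,  # generates drive signals for the positioner
    show_alert_icon: Callable,   # claim 11: alert icon on the display
    await_override: Callable,    # blocks until operator overrides or timeout
) -> bool:
    """One pass of the claim-20 method: check, alert, override, drive, update."""
    if estimate_strain(desired_pose) <= STRAIN_LIMIT:
        drive_positioner(desired_pose)  # no service-life impact: drive directly
        return True

    # Potential service-life reduction: initiate the movement management function.
    show_alert_icon()
    if not await_override(timeout_s=OVERRIDE_PERIOD_S):
        return False  # claim 10: override period expired, discontinue drive signals

    drive_positioner(desired_pose)   # claim 6: drive despite the warning...
    life.risky_moves_remaining -= 1  # ...and charge it against the budget (claim 9)
    return True
```

A real console would wire show_alert_icon to the GUI and await_override to the interactive icon of claim 12; the haptic alert of claim 13 would slot in the same way.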
good opportunity to enter or cover...
pick the rose
my order got filled!
I'm out for now... kinda sorry
Enos goldmine
https://www.massdevice.com/titan-medical-patent-expand-surgical-robot-ip/
Huge market
https://www.businesswire.com/news/home/20221213005834/en/42-Billion-Worldwide-Medical-Robotic-Systems-Industry-to-2027---Featuring-Accuray-DENSO-Intuitive-Surgical-and-Medtronic-Among-Others---ResearchAndMarkets.com
Cellectis Announces Positive Preliminary Clinical Data for UCART22 in ALL and UCART123 in AML
* UCART22: ANTI-TUMOR ACTIVITY OBSERVED IN 60% (N=3) OF PATIENTS AT DL3 USING FCA LYMPHODEPLETION
* AMELI-01 STUDY (EVALUATING UCART123) NOW ENROLLING PATIENTS IN A TWO-DOSE REGIMEN ARM AT DL2
* UCART123: 25% (N=2) OF PATIENTS AT DL2 IN FCA ARM ACHIEVED MEANINGFUL RESPONSE
* NEXT DATA SET IS EXPECTED TO BE RELEASED IN 2023
* UCART123: ONE PATIENT EXPERIENCED A DURABLE MINIMAL RESIDUAL DISEASE (MRD)-NEGATIVE COMPLETE RESPONSE THAT CONTINUES BEYOND 12 MONTHS
dry hands still sell
be like reeds!
that's all we can do!
but if the reeds are bundled together, it becomes difficult to bend them!
For Magnus…
it may seem that Paul sold, but you know he then bought back!
We'll see Enos soon... we shouldn't have to wait much longer now; ten days go by quickly!
I guess it's starting to feel uncomfortable being short right now
but they don't sell what they have!
https://www.ldmicro.com/profile/tmdi/insiders
I'm pleased, then…
but if they bought, are we sure it would have a positive effect?
$103.9 million - $75.7 million = $28 million; $28 million + $18 million = $46 million
Since our inception, we have not recorded any income tax benefits for the net losses we have incurred in each year or for our research and development tax credits generated, as we believe, based upon the weight of available evidence, that it is more likely than not that all of our net operating loss, or NOL, carryforwards and tax credits will not be realized. As of December 31, 2021, we had U.S. federal and state net operating loss carryforwards of $534.2 million and $534.8 million, respectively, which may be available to offset future taxable income. The federal NOLs include $37.2 million, which expire at various dates through 2037, and $497.0 million, which carryforward indefinitely. The state NOLs expire at various dates through 2041. As of December 31, 2021, we also had U.S. federal and state research and development tax credit carryforwards of $22.7 million and $15.6 million, respectively, which may be available to offset future tax liabilities and begin to expire in 2034 and 2026, respectively. We have recorded a full valuation allowance against our net deferred tax assets at each balance sheet date.
the equipment, the proceeds of which could finance the current costs...
“Along with recent patent application filings, including some that have recently published, this patent demonstrates the company’s focus on being an innovation leader in single-access RAS. The technology covered in this patent could assist with early-stage artificial intelligence that works to limit movement of instruments, including movements based on patient anatomy or more general keep-out zones, as well as enhancements in single-access RAS beyond the Enos system, including next generation single-access RAS technologies and systems. We believe the breadth and depth of our patent portfolio provides us with options for monetization or other strategic opportunities.”
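The "keep-out zone" idea in that quote is easy to picture: before the controller drives the tool, it checks the commanded tip position against forbidden regions of the patient anatomy. A toy sketch assuming a simple spherical zone; the real method is surely more sophisticated, and everything here (names, geometry, numbers) is hypothetical:

```python
# Illustrative keep-out-zone check of the kind described in the quote above;
# a spherical zone is an assumption, not the patented method.
import numpy as np

def violates_keep_out(tool_tip: np.ndarray, zone_center: np.ndarray,
                      zone_radius: float) -> bool:
    """True if the commanded tool tip would enter a spherical keep-out zone."""
    return float(np.linalg.norm(tool_tip - zone_center)) < zone_radius

# Example: block a commanded move that would enter a 5 mm zone around a vessel.
if violates_keep_out(np.array([0.010, 0.0, 0.020]),
                     np.array([0.012, 0.0, 0.021]), 0.005):
    print("movement limited: inside keep-out zone")
```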
I like it!
On December 6, 2022, Rubius Therapeutics, Inc. (the “Company”), entered into a Purchase and Sale Agreement (the “Agreement”) with DIV Acquisition V, LLC (the “Buyer”) for the sale of certain real property located at 100 Technology Way, Smithfield, Rhode Island 02917 (Map 49, Lot 219) and 30 Hanton City Road, Smithfield, Rhode Island 02917 (Map 49, Lot 78), together with the buildings and improvements, including the Company's manufacturing facility, and certain fixtures and personal property located in or on the real property, for an aggregate purchase price of $18,500,000, subject to adjustment. The transaction, which is subject to customary closing conditions, is expected to close on December 21, 2022.
Market cap: 15 million
https://www.ft.com/content/4b6f0fab-66ef-4e33-adec-cfc345589dc7
inflation will soon come down!