If they've given them a loan, the conditions must be there to do it!
Or not?
I think we'll soon know how things stand!!!
$1? Why not?
Market Cap: $3.79M
Shares: 1.91M
$16.4M in cash and cash equivalents at December 31, 2022; no debt
Estimated that these cash resources will fund company operations into the first quarter of 2024
catalyst at the end of March
https://investors.invivotherapeutics.com/static-files/b31472d0-67d0-4eda-8359-04ba5f7bba2e
Essentially, I think they will take the money out of the market once the new entity has a valuation!
CBIO will have the rights excluding the profitable China market!
So the listing should reflect the potential value of the new entity!
Until then, we dance...
Then the market will tell us whether this thing holds up!
... we can draw a merciful veil over how the market works...
So, assuming there is a market maneuverer who has given the green light to the operation (otherwise, what would be the point of doing it?), I expect the new entity to receive a proper valuation!
Could it currently be a bear trap for retail investors?
watch out
Baltic Exchange Dry Index rebound
https://tradingeconomics.com/commodity/baltic
But GLBS, nothing.
Never a joy.
I guess it will move when no one expects it!
I don't sell.
Fuck you shorts
I added at $0.294
fuck the shorts!
mmmm
The company that has revenue of $23 million is the one that will merge with CBIO.
I'm curious how much cash remains in cbio before the merger!
I think there might be 5 to 10 mil in cash
only $6 million!
I was hoping for better
GNI Group reported its Consolidated Financial Results for its Fiscal Year 2022 on February 15, 2023 showing continued revenue and profit growth from pirfenidone sales in China for the treatment of idiopathic pulmonary fibrosis, which includes revenue of approximately $102 million and net profit of approximately $23 million.
https://www.daiwair.co.jp/td_download.cgi?c=2160&i=2522199
Sales 2022: 120M
Net income 2022: 8.06M
Net Debt 2022: -
Capitalization: 371M
Jefferies Adjusts GNI's Price Target to 1,300 Yen From 2,100 Yen, Keeps at Buy
02/27/2023 | 05:18am EST
Now 1078
“Continent is profitable with a robust fibrosis pipeline in various stages of development, including a Phase 3 study of F351 in HBV associated fibrosis and a Phase 2 study poised to initiate in NASH fibrosis.”
The question is: how much is 2.5% of that worth?
the merger must be voted on!
I wouldn't go short!
So..
The Company's product candidates consist of the coagulation-related assets marzeptacog alfa (activated) (MarzAA), dalcinonacog alfa (DalcA), and CB 2679d-GT.
This agreement will bring to GC Biopharma 3 programs, including "Marzeptacog alfa (MarzAA)".
If the merger is approved, the loot goes to whoever holds the CVRs; otherwise, to everyone...
keep going short LOLOLOL
Market cap just $15 mil...
Perhaps I'll add.
Thank you for further potential cash distributions
How much is this deal worth?
I think $1 is within reach!
Merger getting closer
Dancing in the coming days!
IMO it's just the beginning...
because there is the risk that someone interested will come forward at the last minute!
rather than nothing...
Better than nothing!
Damn them, they ate up everything!
It makes no sense.
We cannot predict with certainty the amount of distributions, if any, to our stockholders. However, based on the information currently available to us and if our stockholders approve the Dissolution, we estimate that the aggregate amount of cash that will be available for distribution to our stockholders in the Dissolution will be in the range between approximately $1.2 million and $4.0 million and the total amount distributed to stockholders will be in the range between approximately $0.01 and $0.04 per share of common stock. These amounts may be paid in one or more distributions. You may receive substantially less than the amount that we currently estimate that you may receive, or you may receive no distribution at all. Such distributions, if any, will not occur until after the Certificate of Dissolution is filed, and we cannot predict the timing or amount of any such distributions, as uncertainties as to the ultimate amount of our liabilities, the operating costs and amounts to be reserved for claims, obligations and provisions during the liquidation and winding-up process, and the related timing to complete such transactions make it impossible to predict with certainty the actual net cash amount, if any, that will ultimately be available for distribution to stockholders or the timing of any such distributions. Accordingly you will not know the exact amount of any distribution you may receive as a result of the Plan of Dissolution when you vote on the proposal to approve the Plan of Dissolution.
FUNDS Total (only for Top 20) 23.07% 20,851,795
INSTITUTIONS Total (only for Top 20) 74.41% 67,242,732
They loaded them up above $1!
https://ih.advfn.com/stock-market/NASDAQ/rubius-therapeutics-RUBY/stock-news/87802292/rubius-therapeutics-ruby-gets-a-buy-rating-from-h
not even a year ago! Fool!
Have they waited until now for this?
but it's not worth shit!
Maybe one day, like the vaccine, it will come in handy, but only for those who take it!
Right now it's worth nothing, even if something seems to have worked in terms of curing cancer!
Even if they have found a more sustainable formula!
Nothing!
Rubius has recently generated new non-human primate data with the next generation cell conjugation RED PLATFORM, demonstrating longer circulation time than observed with the first generation platform and pronounced pharmacodynamic effects as shown by increased levels of interferon gamma, a cytokine critical to both innate and adaptive immunity. In parallel, the Company has advanced a next generation Red Cell Therapeutic, RTX-250, an antigen-specific therapy that is designed to activate dendritic cells.
With the first generation Red Cell Therapeutics, Rubius demonstrated that engineered red blood cells could be manufactured at scale, safely administered and activate a patient’s immune system, resulting in clinical benefit in certain cancer patients, including evidence of tumor shrinkage and prolonged stable disease in PD-(L)-1 refractory solid tumors.
So why didn't institutions sell?
IMO because there will be 30-40 million to distribute!
Let's hope they settle for free patents and leave us the cash!
I don't think we will be waiting long. Will they do a final, total accounting? Or will we see the 2022 close soon? The CEO's contract expires soon. There is no BK here! Who knows what will become of the patents?
This gambling addiction is enough for me. But what does it matter what I do?
How much would J&J or MDT spend to make Enos?
What they did in TMDI is peanuts in comparison!
Too many bet on defeat! It's getting hard to fund useful things like ENOS!
Better to do NFTs or go short, you make more... but then things change!
BOOM!
ROBOTIC SURGERY SYSTEM
Document ID: US 20230055386 A1
Date published: 2023-02-23

Inventors:
- Laakso, Aki Hannu Einari (Raleigh, NC, US)
- Pflaumer, Hans Christian (Apex, NC, US)
- Davies, James (Cambridge, GB)
- Cooper, Adrian Edward (Cambridgeshire, GB)
- Brittain, Thomas Henry (Bury St Edmunds, GB)
- Monteiro, Andre Fraser (Cambridgeshire, GB)
- Medeiros, Chace Francis (Raleigh, NC, US)

Applicant: Titan Medical Inc., Toronto, CA (assignee)
Application No.: 17/820392, filed 2022-08-17
Domestic priority (continuity data): US provisional application 63/260,481, filed 2021-08-20
US class (current): 1/1
CPC (current): A61B 50/13 (CPCI, 2016-02-01); A61B 34/30 (CPCI, 2016-02-01); A61B 2034/306 (CPCA, 2016-02-01); A61B 2090/066 (CPCA, 2016-02-01); A61B 2034/301 (CPCA, 2016-02-01); A61B 2017/00398 (CPCA, 2013-01-01)
Abstract
A robotic surgical system is provided with a central drive unit that supports and operates one or more robotic tools and a robotic arm and boom assembly that movably supports the control unit assembly in space. The robotic arm and boom assembly selectively allows movement of the control unit assembly along a plane, as well as in pitch and yaw, upon actuation of one or more actuators of the robotic arm and boom assembly to allow movement of the central drive unit.
Background/Summary
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
[0001] Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
BACKGROUND
Field
[0002] The present disclosure generally relates to robotic surgical systems, and more particularly to mechanisms for moving an arm assembly and control unit assembly of a robotic surgical system.
Description of the Related Art
[0003] Robotic surgery systems generally include an operator interface that receives operator input from a surgeon and causes corresponding movements of surgical tools within a body cavity of a patient to perform a surgical procedure. The operator interface can be on a workstation that the surgeon interfaces with to perform a surgical procedure using the surgical tools. The surgical tools can be on a cart separate from the workstation. The cart can be mobile, allowing hospital staff to move the cart into an operating room prior to the surgical procedure, and to remove it from the operating room once the surgical procedure has been completed.
SUMMARY
[0004] In accordance with one aspect of the disclosure, a robotic surgical system is provided with a central drive unit that supports and operates one or more robotic tools and a robotic arm and boom assembly that movably supports the control unit assembly in space. The robotic arm and boom assembly selectively allows movement of the control unit assembly along a plane, as well as in pitch and yaw, upon actuation of one or more actuators of the robotic arm and boom assembly to allow movement of the central drive unit.
[0005] In accordance with another aspect of the disclosure, a robotic surgical system has an arm assembly constrained to move in a horizontal direction and one or more joints for adjusting one or more of a lateral position, a pitch and a yaw of a central drive unit to thereby adjust a respective pitch and yaw of the surgical instruments and endoscope removably coupled to the central drive unit based on operator input.
[0006] In accordance with another aspect of the disclosure, a robotic surgical system is operable to adjust a location of a remote center of motion along an axis of an insertion tube to maintain the remote center of motion at an incision location of a patient while allowing a length of the insertion tube that extends inside the patient to be adjusted to thereby adjust an insertion depth of the surgical instruments and endoscope in a surgical space within the patient based on operator input.
[0007] In accordance with another aspect of the disclosure, a robotic surgical system is provided. The system includes a cart extending vertically above a base. The system also includes an arm assembly movably coupled to the cart, the arm assembly selectively movable relative to the cart to vary a height of the arm assembly relative to the base. The arm assembly is constrained to move horizontally and includes a plurality of transverse arms extending perpendicular to the cart. Each transverse arm is pivotally coupled to another of the transverse arms via a joint and configured to pivot about a vertical axis through the joint. A vertical arm is pivotally coupled to a last of the transverse arms about a yaw joint and extending downwardly therefrom. A central drive unit is pivotally coupled to the vertical arm about a pitch joint, the central drive unit comprising one or more robotic surgical instruments and an endoscope removably coupled thereto. The surgical instruments and endoscope are configured to extend through a single insertion tube configured to be inserted through an incision location in a patient. Each of the joint, yaw joint and pitch joint is robotically controlled to adjust one or more of a lateral position, a pitch and a yaw of the central drive unit to thereby adjust a respective pitch and yaw of the surgical instruments and endoscope based on operator input, and to adjust a location of a remote center of motion along an axis of the insertion tube to maintain the remote center of motion at the incision location while allowing a length of the insertion tube that extends inside the patient to be adjusted to thereby adjust an insertion depth of the surgical instruments and endoscope in a surgical space within the patient based on operator input.
[0008] In accordance with another aspect of the disclosure, a robotic surgical system is provided. The system includes a cart extending vertically above a base. The system also includes an arm assembly movably coupled to the cart, the arm assembly selectively movable relative to the cart to vary a height of the arm assembly relative to the base. The arm assembly is constrained to move horizontally. The arm assembly includes a first transverse arm vertically movable relative to the cart and at least partially extending along a first plane. A second transverse arm is pivotally coupled to the first transverse arm about a first joint and extending along a second plane parallel to the first plane, the second transverse arm configured to pivot about a vertical axis through the first joint. A third transverse arm is pivotally coupled to the second transverse arm about a second joint and extending along a third plane parallel to the first and second planes, the third transverse arm configured to pivot about a vertical axis through the second joint. A vertical arm is pivotally coupled to the third transverse arm about a fourth joint and extending downwardly therefrom. A central drive unit is pivotally coupled to the vertical arm about a fifth joint. The central drive unit comprises one or more robotic surgical instruments and an endoscope removably coupled thereto. The surgical instruments and endoscope are configured to extend through a single insertion tube configured to be inserted through an incision location in a patient. Each of the joints is robotically controlled to adjust one or more of a lateral position, a pitch and a yaw of the central drive unit to thereby adjust a respective pitch and yaw of the surgical instruments and endoscope based on operator input, and to adjust a location of a remote center of motion along an axis of the insertion tube to maintain the remote center of motion at the incision location while allowing a length of the insertion tube that extends inside the patient to be adjusted to thereby adjust an insertion depth of the surgical instruments and endoscope in a surgical space within the patient based on operator input.
[0009] In accordance with another aspect of the disclosure, a robotic surgical system is provided. The system includes a cart extending vertically above a base. An arm assembly is movably coupled to the cart, the arm assembly selectively movable relative to the cart via a boom arm that connects the arm assembly to the cart to vary a height of the arm assembly relative to the base. The arm assembly is pivotally coupled to the boom arm via a first joint and configured to pivot about a vertical axis through the first joint. The arm assembly includes a plurality of transverse arm sections extending perpendicular to the cart. Each transverse arm section is telescopically coupled to another of the transverse arm sections and operable by one or more actuators to linearly extend relative thereto between an extended position and a retracted position. A vertical arm is pivotally coupled to a last of the transverse arm sections about a second joint and extends downwardly therefrom. A central drive unit is pivotally coupled to the vertical arm about a third joint. The central drive unit comprises one or more robotic surgical instruments and an endoscope removably coupled thereto, the surgical instruments and endoscope configured to extend through a single insertion tube configured to be inserted through an incision location in a patient. Each of the joints and actuators is robotically controlled to adjust one or more of a lateral position, a pitch and a yaw of the central drive unit to thereby adjust a respective pitch and yaw of the surgical instruments and endoscope based on operator input, and to adjust a location of a remote center of motion along an axis of the insertion tube to maintain the remote center of motion at the incision location while allowing a length of the insertion tube that extends inside the patient to be adjusted to thereby adjust an insertion depth of the surgical instruments and endoscope in a surgical space within the patient based on operator input.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] FIG. 1 is a perspective view of a robotic surgical system in an example deployed configuration.
[0011] FIG. 2 is a perspective view of the robotic surgical system of FIG. 1 in a stowed configuration.
[0012] FIG. 3 is a front view of the robotic surgical system of FIG. 1 in the stowed configuration.
[0013] FIG. 4 is a side view of the robotic surgical system of FIG. 1 in the stowed configuration.
[0014] FIG. 5 is a side view of the robotic surgical system of FIG. 1 in an example deployed configuration.
[0015] FIG. 6 is a front view of the robotic surgical system of FIG. 1 in an example deployed configuration.
[0016] FIG. 7 is a side view of the robotic surgical system of FIG. 1 in an example deployed configuration.
[0017] FIG. 8 is a top view of the robotic surgical system of FIG. 1 in an example deployed configuration.
[0018] FIG. 9 is a top view of the robotic surgical system of FIG. 1 in an example deployed configuration.
[0019] FIG. 10 is a top view of the robotic surgical system of FIG. 1 in an example deployed configuration.
[0020] FIG. 11 is a perspective partial view of the robotic surgical system of FIG. 1 in an example deployed configuration.
[0021] FIGS. 12A-12B are partial cross-sectional views of a portion of the robotic surgical system of FIG. 1.
[0022] FIG. 13 shows perspective views of variations of the robotic surgical system of FIG. 1.
[0023] FIG. 14 shows side views of the variations in FIG. 13 of the robotic surgical system of FIG. 1.
[0024] FIG. 15 is a perspective view of a robotic surgical system in an example deployed configuration.
[0025] FIG. 16 is a perspective view of the robotic surgical system of FIG. 15 in a stowed configuration.
[0026] FIG. 17 is a front view of the robotic surgical system of FIG. 15 in the stowed configuration.
[0027] FIG. 18 is a side view of the robotic surgical system of FIG. 15 in the stowed configuration.
[0028] FIG. 19 is a side view of the robotic surgical system of FIG. 15 in an example deployed configuration.
[0029] FIG. 20 is a front view of the robotic surgical system of FIG. 15 in an example deployed configuration.
[0030] FIG. 21 is a side view of a portion of the robotic surgical system of FIG. 15 depicting the adjustability of the insertion tube depth.
[0031] FIG. 22 is a top view of the robotic surgical system of FIG. 15 in an example deployed configuration.
[0032] FIG. 23 is a perspective partial view of the robotic surgical system of FIG. 15 in an example deployed configuration.
[0033] FIG. 24 shows a side view of a variation of the robotic surgical system of FIG. 15.
DETAILED DESCRIPTION
[0034] SCARA Robotic Surgical System
[0035] FIGS. 1-12B show a robotic surgical system 100. The system 100 includes a cart 10 (e.g., patient cart) with a base 12. The base 12 has one or more wheels 14 that allow the cart 10 (e.g., tower) to move along a surface (e.g., floor of the operating room). The wheels 14 can be selectively locked and unlocked (e.g., via a foot pedal 13, shown in FIG. 4). The system also includes an arm assembly 20 that is movably coupled to the cart 10, as further described below. The arm assembly 20 is a robotic (e.g., electromechanically controlled) arm assembly. A central drive unit (CDU) 50 is movably coupled to the arm assembly 20. One or more robotic surgical tools (not shown), such as surgical instruments and an endoscope, can be removably coupled to the central drive unit 50, which can extend at least a portion of the robotic surgical tools through a cannula or insertion tube 65, attached to a support arm 60 mechanically coupled (e.g., fixed) to the central drive unit 50, into a surgical space in a patient and operate end effectors of the robotic surgical tools to perform a surgical procedure within the surgical space.
[0036] In the illustrated implementation, the arm assembly 20 is a selective compliance assembly robot arm (SCARA) assembly. The arm assembly 20 has a first arm 22 movably attached to the cart 10 (e.g., to a side of the cart 10). In the illustrated embodiment, a proximal portion 21 of the first arm 22 can move (e.g., vertically) relative to the base 12 to vary a vertical position of the first arm 22 and thereby the arm assembly 20. An opposite end of the first arm 22 is movably (e.g., rotatably, pivotally) coupled to a second arm 24 via a joint 23 (e.g., first rotatable joint). An opposite end of the second arm 24 is movably (e.g., rotatably, pivotally) coupled to a third arm 26 via a joint 25 (e.g., second rotatable joint). An opposite end of the third arm 26 is movably (e.g., rotatably, pivotally) coupled to a fourth arm 28 via a joint 27 (e.g., third rotatable joint or yaw joint). An opposite end of the fourth arm 28 can be movably (e.g., rotatably, pivotally) coupled to the central drive unit (CDU) 50 via a joint 29 (e.g., fourth rotatable joint or pitch joint).
[0037] With continued reference to FIGS. 1-12B, the cart 10 can include one or more electric motors, drive units, transmissions and/or encoders to effect a vertical motion of the proximal portion 21 (and thereby the first arm 22 and arm assembly 20) relative to the base 12. In one implementation, the base 12 can have a 1:1 wheel aspect ratio that inhibits (e.g., prevents) tip-over of the cart 10 through the full range of motion of the arm assembly 20. Drive units for effecting vertical and/or horizontal motion can include a harmonic drive, a planetary gearbox, belt drives, cable and pulley drives, and worm and wheel drives.
[0038] Advantageously, the arm assembly 20 is horizontally constrained. Apart from the vertical adjustment of the arm assembly 20 and the pitch joint (joint 29), the degrees of freedom are naturally balancing. For example, the second and third arms 24, 26 extend linearly (e.g., horizontally or transverse to an axis of the cart 10) and move only horizontally (e.g., relative to each other and relative to the first arm 22). The first arm 22 has at least a portion that extends horizontally and can move vertically via the proximal portion 21, as discussed above. Each of the joints 23, 25, 27 and 29 can be robotic (e.g., electromechanically controlled), as discussed further below. The joint 27 (e.g., third rotatable joint) effects a yaw movement of the central drive unit 50, and the joint 29 (e.g., fourth rotatable joint) effects a pitch movement of the central drive unit 50. The arm assembly 20 can be operated to control the horizontal (e.g., proximal-distal) translation of the central drive unit 50. In the illustrated embodiment, the surgical system 100 has five degrees of freedom provided by the vertical motion of the first arm 22 and the four joints 23, 25, 27, 29 (e.g., excluding any degrees of freedom in the robotic surgical tools that are coupled to the central drive unit 50).
[0039] FIGS. 2-4 show the robotic surgical system 100 in a stowed configuration. Advantageously, the arm assembly 20 allows the robotic surgical system 100 to be moved into a compact assembly (e.g., for storage and/or movement between operating rooms). As best shown in FIG. 4, the first, second and third arms 22, 24, 26 can be arranged over each other (e.g., so that the arms 22, 24, 26 are aligned over each other), thereby achieving a compact arrangement. The second arm 24 has a shorter length than the third arm 26, allowing the second arm 24 to fit under the third arm 26 when in the compact position (e.g., where the third arm 26 is aligned with, or disposed over, the second arm 24 along the length of the second arm 24).
[0040] With reference to FIG. 5, the robotic surgical system 100 is shown in one orientation that provides for a range of motion from vertical of ±30° pitch and ±30° yaw (e.g., about the remote center of motion or RCM) in one implementation. In another implementation, the robotic surgical system 100 provides for a range of motion from vertical of ±90° pitch and ±180° yaw (e.g., about the remote center of motion or RCM). With reference to FIG. 6, the robotic surgical system 100 is shown in another orientation that provides for a range of motion from horizontal of between -10° and +90° pitch and of 120° of yaw (e.g., about the remote center of motion or RCM), as further described below. With reference to FIG. 7, the robotic surgical system 100 is shown in another orientation that provides for a range of motion from horizontal of between -10° and +90° pitch and of 120° of yaw (e.g., about the remote center of motion or RCM), as further described below. With reference to FIG. 8, the robotic surgical system 100 is shown in another orientation that provides for a range of motion from horizontal of between -10° and +90° pitch and of 60° of yaw (e.g., about the remote center of motion or RCM), as further described below. With reference to FIG. 9, the robotic surgical system 100 is shown in another orientation that provides for a range of motion of ±30° yaw (or a total of 60° of yaw) (e.g., about the remote center of motion or RCM), where the base 12 of the cart 10 is approximately 500 mm from the operating table. With reference to FIG. 10, the robotic surgical system 100 is shown in another orientation that provides for a range of motion of between -60° and +90° yaw (or a total of 120° of yaw), such as relative to an axis of the operating table (e.g., about the remote center of motion or RCM), where the base 12 of the cart 10 is approximately 50 mm from the operating table.
[0041] FIG. 11 shows a portion of the robotic surgical system 100 with the arm assembly 20 fully extended. The arm assembly 20 can move vertically relative to the cart 10 along a distance H1. In one implementation, the distance H1 can be between 800 mm and 900 mm, such as about 850 mm, and can allow for pitch rotation of between -10° and +90° relative to vertical. The first arm 22 can have a horizontal length L1, the second arm 24 can have a horizontal length L2, the third arm 26 can have a horizontal length L3, and the fourth arm 28 can have a length (e.g., vertical length or height) H2. The length L1 can be between 400 mm and 500 mm, such as about 480 mm. The length L2 can be between 300 mm and 400 mm, such as about 370 mm. The length L3 can be between 500 mm and 600 mm, such as about 560 mm. The length H2 can be between 300 mm and 400 mm, such as about 380 mm. The joint 23 (e.g., first rotatable joint) can in one implementation have a range of rotation R1 that is 360° (i.e., can fully rotate about the axis of the joint 23). The joint 25 (e.g., second rotatable joint) can in one implementation have a range of rotation R2 that is 360° (i.e., can fully rotate about the axis of the joint 25). The joint 27 (e.g., third rotatable joint) can in one implementation have a range of rotation R3 that is approximately 300° (yaw motion over a range of about 300°, such as with an internal hard stop in the joint 27). The joint 29 (e.g., fourth rotatable joint) can in one implementation have a range of rotation R4, or pitch motion over a range, of between -10° and about +90° (e.g., relative to horizontal, such as with an internal hard stop in the joint 29).
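To make the kinematic chain concrete, the following is a minimal forward-kinematics sketch in Python. It is not from the patent: the link lengths are the approximate values quoted in [0041], and the joint conventions (vertical rotation axes at joints 23, 25 and 27, with joint 29 acting as a pure pitch joint) are assumptions drawn from the description.

```python
import math

# Approximate link lengths from [0041] (mm); joint conventions are assumptions.
L1, L2, L3 = 480.0, 370.0, 560.0  # horizontal arms 22, 24, 26
H2 = 380.0                        # vertical drop of the fourth arm 28

def cdu_pose(z_lift, q1, q2, q3, pitch):
    """Pose of the central drive unit 50 for the 5-DOF SCARA chain.

    z_lift     -- vertical travel of the proximal portion 21 (mm)
    q1, q2, q3 -- rotations at joints 23, 25, 27 (rad, about vertical axes)
    pitch      -- rotation at the pitch joint 29 (rad)
    """
    # Arm 22 keeps a fixed orientation, so joint 23 sits at (L1, 0) in the plane.
    x = L1 + L2 * math.cos(q1) + L3 * math.cos(q1 + q2)
    y = L2 * math.sin(q1) + L3 * math.sin(q1 + q2)
    z = z_lift - H2                # arm 28 hangs the CDU below the arm plane
    yaw = q1 + q2 + q3             # joint 27 sets the final yaw of the CDU
    return (x, y, z), (yaw, pitch)

# With all joints at zero the arm is fully extended:
# horizontal reach = L1 + L2 + L3 = 1410 mm from the cart.
pos, (yaw, pitch) = cdu_pose(z_lift=850.0, q1=0.0, q2=0.0, q3=0.0, pitch=0.0)
```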
[0042] FIGS. 12A-12B show an example robotic drive system 200. Though FIG. 12A shows the drive system 200 in the joint 29 (e.g., fourth rotatable joint, or pitch joint), one of skill in the art will recognize that the drive system can be used in one or more (e.g., all) of the joints 23, 25, 27, 29 to control proximal-distal translation, pitch and/or yaw of the central drive unit 50. The robotic drive system 200 can include an electric motor assembly 210. In one implementation, the electric motor assembly 210 can include a brushless direct current (DC) motor for high efficiency and long service life and a harmonic drive for compactness, high torque capability, repeatability and minimal or no backlash. The system 200 can also include an encoder 212 to provide a known motor position, an encoder 220 to provide a known drive output position, and an encoder 230 to provide a known central drive unit 50 position when driven manually. The robotic drive system 200 can also include a brake (or clutch) 240 with a brake surface 250 that is engaged during drive motion or when in a static position or powered off (e.g., due to permanent magnets), which can then be released for manual operation. The brake 240 has a side 260 that is attached to a hub of the central drive unit 50 and is free to rotate whenever the drive system 200 is powered. The brake 240 also has a side 270 attached to the drive output. The robotic drive system 200 also includes a safety brake 280 that engages when an emergency stop is actuated or when the robotic drive system 200 is powered off. The robotic drive system 200 can operate in manual mode and robotic mode. In manual mode, the geartrain and motor have to be decoupled from the joint rotation, for example with a clutch. However, this can increase the mass of the joint.
[0043] In another implementation, the robotic drive system 200 can have, in one or more (e.g., all) of the joints 23, 25, 27, 29, a torque sensor T instead of the brake (or clutch) 240 so that the drive system is a force-follow system that accommodates a robotic mode of operation and a manual mode of operation without having to decouple the drive system during manual mode; rather, the drive system works with the operator (e.g., pushing on one or more of the first, second, third and fourth arms 22, 24, 26, 28 or one or more joints 23, 25, 27, 29). Additionally, the force-follow implementation advantageously allows active counterbalancing of the pitch joint (e.g., joint 29) at any configuration of the central drive unit 50. The drive system 200 with the force-follow implementation can advantageously be approximately 40% lighter (e.g., have 40% of the mass) relative to the drive system 200 without force-follow (e.g., including a brake or clutch).
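The patent does not give the force-follow control law, but the idea maps naturally onto a simple admittance loop: subtract the modeled gravity torque from the torque-sensor reading and command a joint velocity proportional to what remains, so the arm yields to the operator's push without decoupling the geartrain. A minimal sketch, with purely illustrative gains and names:

```python
def force_follow_step(tau_measured, tau_gravity, q, dt,
                      admittance=0.5, deadband=0.3):
    """One tick of an illustrative force-follow (admittance) loop for one joint.

    tau_measured -- torque sensor T reading (N*m)
    tau_gravity  -- model-based gravity torque to compensate (N*m)
    q            -- current joint angle (rad); dt -- control period (s)
    """
    tau_operator = tau_measured - tau_gravity  # torque applied by the operator
    if abs(tau_operator) < deadband:           # ignore noise and light touches
        return q                               # hold position (active counterbalance)
    q_dot = admittance * tau_operator          # yield in proportion to the push
    return q + q_dot * dt                      # new commanded joint angle
```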
[0044] FIGS. 13-14 show example robotic surgical systems 100A-100D that are variations of the robotic surgical system 100. The features of the robotic surgical systems 100A-100D are similar to features of the robotic surgical system 100 in FIGS. 1-12B. Thus, reference numerals used to designate the various components of the robotic surgical systems 100A-100D are identical to those used for identifying the corresponding components of the robotic surgical system 100 in FIGS. 1-12B, except that an “A”, “B”, “C”, or “D” has been added to the numerical identifier. Therefore, the structure and description for the various features of the robotic surgical system 100 in FIGS. 1-12B are understood to also apply to the corresponding features of the robotic surgical systems 100A-100D in FIGS. 13-14, except as described below.
[0045] The robotic surgical system 100A differs from the robotic surgical system 100 in that the proximal portion 21A of the first arm 22A is attached to a boom arm (not shown) that extends (e.g., telescopically) from the top of the cart 10A, and the fourth arm 28A is longer than the fourth arm 28. Also, the cart 10A is smaller (e.g., shorter) than the cart 10.
[0046] The robotic surgical system 100B differs from the robotic surgical system 100 in that the proximal portion 21B of the first arm 22B is attached to a boom arm B that extends (e.g., telescopically) from the top of the cart 10B, and the first arm 22B can be a “dogleg” arm (e.g., having two horizontal sections spaced vertically apart from each other). Also, the cart 10B is smaller (e.g., shorter) than the cart 10.
[0047] The robotic surgical system 100C differs from the robotic surgical system 100 in that the proximal portion 21C of the first arm 22C is attached to a boom arm B that extends (e.g., telescopically, not telescopically) from the top of the cart 10C, and the second arm 24C extends below the first arm 22C (e.g., inverse SCARA arrangement). Also, the cart 10C is smaller (e.g., shorter) than the cart 10.
[0048] The robotic surgical system 100D differs from the robotic surgical system 100 in that the cart 10D is taller than the cart 10, and the fourth arm 28D is shorter than the fourth arm 28 while still being able to reach the operating table. In one example, the cart 10D can have the same height as the cart 10, but has a taller stowed position (e.g., than shown in FIGS. 2-4) because the cart 10D statically houses all the components that effect up/down motion of the arm assembly 20D. The fourth arms 28A-28D of the robotic surgical systems 100A-100D have different lengths to compensate for the different heights of their respective carts 10A-10D and ranges of travel of the arm assemblies 20A-20D relative to the carts 10A-10D, and still achieve the nominal remote center of motion (RCM) at the same height.
Prismatic Robotic Surgical System
[0049] FIGS. 15-23 show a robotic surgical system 100'. The robotic surgical system 100' includes a cart 10' (e.g., patient cart) with a base 12'. The base 12' has one or more wheels 14' that allow the cart 10' (e.g., tower) to move along a surface (e.g., floor of the operating room). The wheels 14' can be selectively locked and unlocked (e.g., via a foot pedal 13', shown in FIG. 16). The system also includes an arm assembly 20' that is movably coupled to the cart 10', as further described below. The arm assembly 20' is a robotic (e.g., electromechanically controlled) arm assembly. A central drive unit (CDU) 50' is movably coupled to the arm assembly 20'. One or more robotic surgical tools (not shown) can be removably coupled to the central drive unit 50', which can extend at least a portion of the robotic surgical tools through a cannula or insertion tube 65', attached to a support arm 60' mechanically coupled (e.g., fixed) to the central drive unit 50', into a surgical space in a patient and operate end effectors of the robotic surgical tools to perform a surgical procedure within the surgical space.
[0050] In the illustrated implementation, the arm assembly 20' is a prismatic boom arm (e.g., a multi-stage prismatic boom arm) assembly. The arm assembly 20' has a first arm section 22' movably (e.g., rotatably, pivotally) coupled to a boom arm (e.g., pillar) B' via a joint 23' (e.g., first rotatable joint). The boom arm B' is movably attached to the cart 10' (e.g., to a top side of the cart 10') so that the boom arm B' can move relative to the cart 10' to adjust a height of the arm assembly 20' relative to the base 12' to vary a vertical position (e.g., in the Z direction) of the arm assembly 20'. The arm assembly 20' can include a second arm section 24' movably coupled to the first arm section 22' in a telescoping manner (e.g., at least a portion of the second arm section 24' extends from within the first arm section 22'). The arm assembly 20' can include a third arm section 26' movably coupled to the second arm section 24' in a telescoping manner (e.g., at least a portion of the third arm section 26' extends from within the second arm section 24'). A fourth arm section 28' can be movably coupled to the third arm section 26', at the end thereof opposite the second arm section 24', via a joint 27' (e.g., second rotatable joint or yaw joint). The central drive unit 50' can be movably (e.g., rotatably, pivotally) coupled to the fourth arm section 28' via a joint 29' (e.g., third rotatable joint or pitch joint). The first, second and third arm sections 22', 24', 26' can generally extend along the same axis, and can extend generally perpendicular to an axis of the boom arm B' (e.g., extend generally horizontal or parallel to a support surface on which the base 12' sits). In another implementation, the arm assembly 20' can have two telescoping arm sections (instead of three) between the proximal portion 21' and the fourth arm section 28'. In still another implementation, the arm assembly 20' can have more than three telescoping arm sections (e.g., four, five). Advantageously, the fourth arm section 28' or support arm for the central drive unit 50' is shaped to allow the robotic surgical system 100' to stow in a compact position, as shown in FIG. 18. The shape of a support arm portion 28A' and a second support arm portion 28B' (see FIGS. 19-20) of the fourth arm section 28' can have a similar profile to an inner facing portion I' of the arm assembly 20', and facilitate the compact stowing profile (see FIG. 18). Additionally, said similar shapes between the fourth arm section 28' and the inner facing portion I' of the arm assembly 20' inhibit (e.g., prevent) collision between the fourth arm section 28' or support arm for the central drive unit 50' and the arm assembly 20' during operation of the robotic surgical system 100'.
[0051] The arm assembly 20' includes electric motors, brakes and/or encoders for rotational motion of the arm assembly 20' (e.g., first arm section 22') relative to the boom arm B' and for proximal-distal translation of the first, second and third arm sections 22', 24', 26'. In one implementation, the arm assembly 20' can include a linear drive. For example, the linear drive can be a 1.5 Nm stepper motor that powers a timing belt drive. In other implementations the linear drive can have other configurations. In one implementation, one or more (e.g., all) of the joints 23', 27', 29' (e.g., first rotatable joint, second rotatable joint or yaw joint, and third rotatable joint or pitch joint) can include a robotic drive system, such as the robotic drive system 200 described above in connection with FIGS. 12A-12B, for maintaining or adjusting the pitch and yaw of the central drive unit 50'. Drive units for effecting pitch and/or yaw motion can include a harmonic drive, a planetary gearbox, belt drives, cable and pulley drives and worm and wheel drives. Drive units for effecting the horizontal (proximal-distal) motion of the arm assembly 20' can include belt drives, rack and pinions, lead screws, cables and pulleys and linear actuators.
[0052] With continued reference to FIGS. 15-23, the cart 10' can include one or more electric motors, drive units, transmissions, brakes and/or encoders to effect a vertical motion of the boom arm B' relative to the base 12'. In one implementation, the base 12' can have a 1:1 wheel aspect ratio that inhibits (e.g., prevents) tip-over of the cart 10' through the full range of motion of the arm assembly 20'.
[0053] Advantageously, the arm assembly 20' is horizontally constrained. Apart from the vertical adjustment of the arm assembly 20' and the pitch joint (joint 29'), the degrees of freedom are naturally balancing. For example, first, second and third arm sections 22', 24', 26' extend linearly (e.g., horizontally or transverse to an axis of the cart 10') and move only horizontally (e.g., relative to each other). The first, second and third arm sections 22', 24', and 26' can move relative to each other between an extended configuration (shown in FIG. 15) and a collapsed or retracted or stowed configuration (shown, for example, in FIG. 18), as further described below. Each of the joints 23', 27', and 29' can be robotic (e.g., electromechanically controlled), as discussed further below. The joint 27' (e.g., second rotatable joint) effects a yaw movement of the central drive unit 50', and the joint 29' (e.g., third rotatable joint) effects a pitch movement of the central drive unit 50'. The arm assembly 20' can be operated to control the horizontal (e.g., proximal-distal) translation of the central drive unit 50'. In the illustrated embodiment, the surgical system 100' has five degrees of freedom provided by the vertical motion of the boom arm B', the horizontal motion of the first, second and third arm sections 22', 24', 26' and the three joints 23', 27', 29' (e.g., excluding any degrees of freedom in the robotic surgical tools that are coupled to the central drive unit 50').
[0054] FIGS. 16-18 show the robotic surgical system 100' in a stowed configuration. Advantageously, the arm assembly 20' allows the robotic surgical system 100' to be moved into a compact assembly (e.g., for storage and/or movement between operating rooms). As best shown in FIG. 18, each of the second and third arm sections 24', 26' can be at least partially retracted (e.g., mostly retracted, fully retracted) within the first arm section 22', thereby achieving a compact arrangement (e.g., horizontally compact arrangement). Further, the boom arm B' can be lowered relative to the cart (or tower) 10' (e.g., to its lowest vertical position), to reduce the vertical height (in the Z direction) of the arm assembly 20' relative to the base 12', thereby achieving a compact arrangement (e.g., vertically compact arrangement). At least a portion of the fourth arm section 28' can have an L-shaped profile between the joint 27' and joint 29', allowing the central drive unit 50' to be oriented vertically (as shown in FIGS. 16-18) and proximate a side surface of the arm assembly 20' (e.g., side surface of the third arm section 26').
[0055] With reference to FIG. 19, the robotic surgical system 100' is shown in one orientation that provides for a range of motion from horizontal of -20° to +200° pitch (e.g., about the remote center of motion or RCM). With reference to FIG. 20, the robotic surgical system 100' is shown in another orientation that provides for a range of motion from horizontal of between -20° and +200° pitch (e.g., about the remote center of motion or RCM), as further described below. With reference to FIG. 21, the robotic surgical system 100' is operable to adjust an insertion tube depth (e.g., an amount the insertion tube 65' is inserted into the patient). In one example, the robotic surgical system 100' is operable to adjust the insertion tube depth within a range of -50 mm to +150 mm relative to a nominal insertion depth position, that is, between a position up to 50 mm retracted away from the remote center of motion (e.g., at the incision location) and a position up to 150 mm further inserted into the patient (e.g., relative to the remote center of motion or incision point). In one implementation, the adjustment in the insertion depth of the insertion tube 65' is provided via operation of the arm assembly 20', such as adjustment in the length of the first arm section 22', second arm section 24' and/or third arm section 26'. With reference to FIG. 22, the robotic surgical system 100' is shown in another orientation that provides for a range of motion of between -160° and +160° of yaw (e.g., about the remote center of motion or RCM). That is, the robotic surgical system 100' can have a range of motion of -20° to +200° of pitch and -160° to +160° of yaw. The robotic surgical system 100' advantageously can be placed at varying distances from the operating table. In one implementation the base 12' of the cart 10' can be approximately 600 mm from the operating table. In another implementation, the base 12' of the cart 10' can be approximately 50 mm from the operating table.
[0056] In one implementation, the robotic surgical system 100' can have a torque sensor T proximate the joint 27' (e.g., disposed between the third arm section 26' and the fourth arm section or support arm 28') so that the robotic surgical system 100' is a force-follow system to accommodate a robotic mode of operation and a manual mode of operation without having to decouple the drive system during manual mode; rather, the drive system works with the operator (e.g., pushing on one or more of the first, second, third and fourth arm sections 22', 24', 26', 28' or actuating one or more of the joints 23', 27', 29'). Additionally, the force-follow implementation advantageously allows active counterbalancing of the pitch joint (e.g., joint 29') at any configuration of the central drive unit 50'. The robotic surgical system 100' with the force-follow implementation can advantageously be approximately 40% lighter (e.g., have 40% of the mass) relative to the robotic surgical system 100' without force-follow (e.g., including a brake or clutch).
[0057] FIG. 23 shows a portion of the robotic surgical system 100' with the arm assembly 20' fully extended. The arm assembly 20' can move vertically relative to the cart 10' along a distance H1' (e.g., via movement of the boom arm B' relative to the cart 10'). In one implementation, the distance H1' can be between 800 mm and 1200 mm, such as about 1100 mm. The first arm section 22' can have a horizontal length L1', the second arm section 24' can have a horizontal length L2', and the third arm section 26' can have a horizontal length L3'. The length L1' can be between 700 mm and 800 mm, such as about 750 mm. The length L2' can be between 700 mm and 800 mm, such as about 750 mm. The length L3' can be between 600 mm and 700 mm, such as about 650 mm. In one implementation, the horizontal travel distance L4' of the second arm section 24' and third arm section 26' relative to the end of the first arm section 22' is between about 800 mm and about 1000 mm, such as about 840 mm. The joint 23' (e.g., first rotatable joint) can in one implementation have a range of rotation R1' that is 120° (e.g., +60° to -60° to horizontal, such as with an internal hard stop in the joint 23'). The joint 27' (e.g., second rotatable joint) can in one implementation have a range of rotation R3', or yaw motion over a range, of 320° (i.e., can rotate between -160° and +160° about the axis of the joint 27'). The joint 29' (e.g., third rotatable joint) can in one implementation have a range of rotation R4', or pitch motion over a range, of between -20° and about +200° (e.g., relative to horizontal, such as with an internal hard stop in the joint 29').
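For comparison with the SCARA sketch above, a corresponding forward-kinematics sketch for the prismatic variant, again illustrative only: the section length and travel are the values quoted in [0057], while the vertical drop of the support arm 28' is an assumed placeholder, since the patent does not quote it.

```python
import math

def prismatic_cdu_pose(z_boom, q1, extension, yaw, pitch,
                       base_len=750.0, travel=840.0, drop=400.0):
    """Pose of the central drive unit 50' for the prismatic boom arm (mm, rad).

    z_boom    -- height of the boom arm B' above the base 12'
    q1        -- rotation at joint 23' (about the boom axis, +-60 deg range)
    extension -- telescoping extension of sections 24'/26' (0..travel, per L4')
    yaw, pitch -- rotations at joints 27' and 29'
    drop      -- assumed vertical offset of the support arm 28' (not quoted)
    """
    # Telescoping stages only lengthen the arm; rotation happens at joint 23'.
    reach = base_len + max(0.0, min(travel, extension))
    x = reach * math.cos(q1)
    y = reach * math.sin(q1)
    z = z_boom - drop
    return (x, y, z), (q1 + yaw, pitch)  # world yaw combines joints 23' and 27'
```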
[0058] FIG. 24 shows an example robotic surgical system 100A' that is a variation of the robotic surgical system 100'. The features of the robotic surgical system 100A' are similar to features of the robotic surgical system 100' in FIGS. 15-23. Thus, reference numerals used to designate the various components of the robotic surgical system 100A' are identical to those used for identifying the corresponding components of the robotic surgical system 100' in FIGS. 15-23, except that an “A” has been added to the numerical identifier. Therefore, the structure and description for the various features of the robotic surgical system 100' in FIGS. 15-23 are understood to also apply to the corresponding features of the robotic surgical systems 100A' in FIG. 24, except as described below.
[0059] The robotic surgical system 100A' differs from the robotic surgical system 100' in that the arm assembly 20A' includes two stages: a first arm portion 22A' and a second arm portion 24A', with the arm portion 28A' movably (e.g., rotatably, pivotally) coupled to the end of the second arm portion 24A' via a joint 27A'. The central drive unit 50A' is movably (e.g., rotatably, pivotally) coupled to another end of the arm portion 28A'. Additionally, the joint 23A' (first rotatable joint) is between the boom arm B' and an intermediate portion of the first arm section 22A' (i.e., not at the proximal portion 21A' of the first arm section 22A'). Advantageously, the robotic surgical system 100A' has fewer mechanical drives and drive transmissions (e.g., due to there being one less arm portion or stage in the prismatic arm assembly 20A'). Additionally, a counterbalance can optionally be incorporated with (e.g., attached to, housed within) the proximal portion 21A' that overhangs relative to the joint 23A'.
Method of Operation
[0060] With respect to the robotic surgical system 100, 100A-100D, 100', 100A', the remote center of motion (RCM) can be sufficient to reach sites in the human body to perform at least the following procedures: Hysterectomy, Cholecystectomy, Colectomy, Splenectomy, Partial Nephrectomy, Prostatectomy, Tongue base surgery. The remote center of motion (RCM) can be set at the beginning of the surgical procedure. The RCM can be a software implemented RCM, where the RCM can be moved along the axis of the insertion tube. For example, during the procedure the surgeon may prefer to have the surgical instruments deeper in the surgical space. To achieve this, the RCM can be modified via software, allowing the central drive unit 50, 50', 50A' to be moved closer to the patient (e.g., by the arm assembly 20, 20A-20D, 20', 20A') to move the insertion tube further into the surgical space but maintaining the pivot point of the insertion tube at the incision point.
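The patent describes the software-implemented RCM behavior but not an algorithm. As a rough sketch under stated assumptions (the names are illustrative, and the -50 mm/+150 mm window is the one quoted in [0055] for the prismatic variant), the central drive unit is translated along the insertion tube axis so the tube pivots about the fixed incision point while the inserted length changes:

```python
import numpy as np

def cdu_target_for_depth(p_incision, tube_axis, cdu_offset, depth_change,
                         depth_min=-50.0, depth_max=150.0):
    """Illustrative software-RCM insertion depth adjustment (units: mm).

    p_incision   -- fixed RCM / incision point (3-vector)
    tube_axis    -- unit vector along the insertion tube, pointing into the patient
    cdu_offset   -- current distance from the CDU to the incision along the axis
    depth_change -- requested change in insertion depth (+ = deeper)
    """
    # Clamp to an assumed -50 mm / +150 mm window about the nominal depth.
    depth_change = float(np.clip(depth_change, depth_min, depth_max))
    # Moving the CDU toward the incision inserts the tube deeper by the same
    # amount; the tube axis, and hence the pivot at the incision, is unchanged.
    new_offset = cdu_offset - depth_change
    return p_incision - tube_axis * new_offset

# Example: insert 20 mm deeper along a vertical tube docked at the origin.
target = cdu_target_for_depth(np.zeros(3), np.array([0.0, 0.0, -1.0]),
                              cdu_offset=200.0, depth_change=20.0)
```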
[0061] The robotic surgical system 100, 100A-100D, 100', 100A' can operate in static, manual and robotic modes. In manual mode, the system 100, 100A-100D, 100', 100A' is active during setup and after the procedure. Under manual guidance, the arm assembly 20, 20A-20D, 20', 20A' is positioned by the operator to dock to the insertion tube. Gravity compensation is provided for non-horizontal degrees of freedom (e.g., vertical motion, pitch movement). The manual mode also allows the operator to return the system 100, 100A-100D, 100', 100A' into a compact stowed state.
[0062] In static mode, the arm assembly 20, 20A-20D, 20', 20A' is static when the surgeon is actively driving one or more of the surgical instruments and/or endoscope. All degrees of freedom of the arm assembly 20, 20A-20D, 20', 20A' are locked to provide a stable position for the surgical instruments and endoscope via the central drive unit 50, 50A-50D, 50', 50A'. In a no power state (e.g., due to loss of power), the system 100, 100A-100D, 100', 100A' is static and in a locked state. The surgical instruments, endoscope and insertion tube can be manually retracted to allow the cart 10, 10A-10D, 10', 10A' to be moved away from the operating table.
[0063] In robotic mode, under surgeon control, the yaw and/or pitch angle of the central drive unit 50, 50A-50D, 50', 50A' can be adjusted robotically (e.g., to adjust the pitch and/or yaw of the surgical instruments and endoscope). The RCM remains fixed so there is no translation about the mid-abdominal wall. The RCM can be a software implemented RCM, where the RCM can be moved along the axis of the insertion tube, as described above.
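The three operating modes described in [0061]-[0063] could be captured in a small supervisor like the following sketch. The patent names the modes and their locking behavior, but this API and its transitions are assumptions:

```python
from enum import Enum, auto

class Mode(Enum):
    STATIC = auto()   # all arm DOF locked; surgeon drives instruments/endoscope
    MANUAL = auto()   # operator positions the arm by hand, gravity compensated
    ROBOTIC = auto()  # pitch/yaw adjusted under surgeon control about the RCM

class ArmSupervisor:
    """Illustrative mode supervisor for the arm assembly (names assumed)."""

    def __init__(self):
        # Power-on and no-power behavior: the system is static and locked.
        self.mode = Mode.STATIC

    def set_mode(self, new_mode, powered=True):
        if not powered:
            self.mode = Mode.STATIC  # loss of power: safety brakes engage
        else:
            self.mode = new_mode

    def joints_locked(self):
        # Brakes hold every degree of freedom while the arm is static.
        return self.mode is Mode.STATIC
```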
[0064] While certain embodiments of the inventions have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the systems and methods described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure. Accordingly, the scope of the present inventions is defined only by reference to the appended claims.
[0065] Features, materials, characteristics, or groups described in conjunction with a particular aspect, embodiment, or example are to be understood to be applicable to any other aspect, embodiment or example described in this section or elsewhere in this specification unless incompatible therewith. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The protection is not restricted to the details of any foregoing embodiments. The protection extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.
[0066] Furthermore, certain features that are described in this disclosure in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations, one or more features from a claimed combination can, in some cases, be excised from the combination, and the combination may be claimed as a subcombination or variation of a subcombination.
[0067] Moreover, while operations may be depicted in the drawings or described in the specification in a particular order, such operations need not be performed in the particular order shown or in sequential order, or that all operations be performed, to achieve desirable results. Other operations that are not depicted or described can be incorporated in the example methods and processes. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the described operations. Further, the operations may be rearranged or reordered in other implementations. Those skilled in the art will appreciate that in some embodiments, the actual steps taken in the processes illustrated and/or disclosed may differ from those shown in the figures. Depending on the embodiment, certain of the steps described above may be removed, others may be added. Furthermore, the features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure. Also, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described components and systems can generally be integrated together in a single product or packaged into multiple products.
[0068] For purposes of this disclosure, certain aspects, advantages, and novel features are described herein. Not necessarily all such advantages may be achieved in accordance with any particular embodiment. Thus, for example, those skilled in the art will recognize that the disclosure may be embodied or carried out in a manner that achieves one advantage or a group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.
[0069] Conditional language, such as “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.
[0070] Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require the presence of at least one of X, at least one of Y, and at least one of Z.
[0071] Language of degree used herein, such as the terms “approximately,” “about,” “generally,” and “substantially” as used herein represent a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. For example, the terms “approximately”, “about”, “generally,” and “substantially” may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and within less than 0.01% of the stated amount. As another example, in certain embodiments, the terms “generally parallel” and “substantially parallel” refer to a value, amount, or characteristic that departs from exactly parallel by less than or equal to 15 degrees, 10 degrees, 5 degrees, 3 degrees, 1 degree, or 0.1 degree.
[0072] The scope of the present disclosure is not intended to be limited by the specific disclosures of preferred embodiments in this section or elsewhere in this specification, and may be defined by claims as presented in this section or elsewhere in this specification or as presented in the future. The language of the claims is to be interpreted broadly based on the language employed in the claims and not limited to the examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive.
[0073] Of course, the foregoing description is that of certain features, aspects and advantages of the present invention, to which various changes and modifications can be made without departing from the spirit and scope of the present invention. Moreover, the devices described herein need not feature all of the objects, advantages, features and aspects discussed above. Thus, for example, those of skill in the art will recognize that the invention can be embodied or carried out in a manner that achieves or optimizes one advantage or a group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein. In addition, while a number of variations of the invention have been shown and described in detail, other modifications and methods of use, which are within the scope of this invention, will be readily apparent to those of skill in the art based upon this disclosure. It is contemplated that various combinations or subcombinations of these specific features and aspects of embodiments may be made and still fall within the scope of the invention. Accordingly, it should be understood that various features and aspects of the disclosed embodiments can be combined with or substituted for one another in order to form varying modes of the discussed devices.
Claims
1. A robotic surgical system, comprising: a cart extending vertically above a base; an arm assembly movably coupled to the cart, the arm assembly being selectively movable relative to the cart via a boom arm that connects the arm assembly to the cart to vary a height of the arm assembly relative to the base, the arm assembly pivotally coupled to the boom arm via a first joint and configured to pivot about a vertical axis through the first joint, the arm assembly comprising a plurality of transverse arm sections extending perpendicular to the boom arm, each transverse arm section telescopically coupled to another of the transverse arm sections and operable by one or more actuators to linearly extend relative thereto between an extended position and a retracted position, a support arm pivotally coupled to a last of the transverse arm sections about a second joint and extending downwardly therefrom; and a central drive unit pivotally coupled to the support arm about a third joint, the central drive unit comprising one or more robotic surgical instruments and an endoscope removably coupled thereto, the surgical instruments and endoscope configured to extend through a single insertion tube configured to be inserted through an incision location in a patient, wherein each of the joints and actuators is robotically controlled to adjust one or more of a lateral position, a pitch and a yaw of the central drive unit to thereby adjust a respective pitch and yaw of the surgical instruments and endoscope based on operator input, and to allow an insertion depth of the insertion tube when inserted in the patient to be adjustable to thereby adjust an insertion depth of the surgical instruments and endoscope in a surgical space within the patient based on operator input while maintaining a location of a remote center of motion at an incision location on the patient.
2. The robotic surgical system of claim 1, wherein the plurality of transverse arm sections include: a first transverse arm section pivotally coupled to the boom arm and configured to rotate relative to the boom arm along a first plane, a second transverse arm section telescopically coupled to the first transverse arm section and configured to move linearly relative to the first transverse arm section between a retracted position and an extended position relative to the first transverse arm section, and a third transverse arm section telescopically coupled to the second transverse arm section and configured to move linearly relative to the second transverse arm section and the first transverse arm section between a retracted position and an extended position relative to the first and second transverse arm sections, the support arm being pivotally coupled to the third transverse arm section.
3. The robotic surgical system of claim 1, wherein each of the joints includes an electric motor operable to effect a pivoting motion of the joint about its axis to effect a motion to adjust one or more of the lateral position, the pitch and the yaw of the central drive unit to thereby adjust a respective pitch and yaw of the surgical instruments and endoscope based on operator input.
4. The robotic surgical system of claim 3, further comprising a brake or clutch operatively coupled to the electric motor and selectively operable to decouple the electric motor from the joint to allow for manual operation of the arm assembly and selectively operable to lock a position of the joint during operation of the surgical instruments or endoscope or when the robotic surgical system experiences loss of power.
5. The robotic surgical system of claim 3, further comprising a torque sensor, the electric motor configured to operate in a force-follow mode based on input from the torque sensor that senses an operator force on the joint.
6. The robotic surgical system of claim 5, wherein the torque sensor is disposed between the last of the transverse arm sections and the support arm.
7. The robotic surgical system of claim 2, wherein the arm assembly is configured to achieve a compact stowed configuration where the second transverse arm section is retracted relative to the first transverse arm section and the third transverse arm section is retracted relative to the second transverse arm section.
8. The robotic surgical system of claim 1, wherein the boom arm is a pillar that extends from a top end of the cart and is operable to vary a height of the arm assembly above the base by axially moving the pillar relative to the cart.
9. The robotic surgical system of claim 1, wherein the support arm has a shape corresponding to a shape of an inner facing portion of the arm assembly.
10. The robotic surgical system of claim 9, wherein the support arm has a support arm portion that extends at an angle corresponding to an angle of the inner facing portion of the arm assembly.
11. A robotic surgical system, comprising: a cart extending vertically above a base; an arm assembly movably coupled to the cart, the arm assembly being selectively movable relative to the cart to vary a height of the arm assembly relative to the base, the arm assembly constrained to move horizontally and comprising a plurality of transverse arms extending perpendicular to the cart, each transverse arm pivotally coupled to another of the transverse arms via a joint and configured to pivot about a vertical axis through the joint, a support arm pivotally coupled to a last of the transverse arms about a yaw joint and extending downwardly therefrom; and a central drive unit pivotally coupled to the support arm about a pitch joint, the central drive unit comprising one or more robotic surgical instruments and an endoscope removably coupled thereto, the surgical instruments and endoscope configured to extend through a single insertion tube configured to be inserted through an incision location in a patient, wherein each of the joint, yaw joint and pitch joint is robotically controlled to adjust one or more of a lateral position, a pitch and a yaw of the central drive unit to thereby adjust a respective pitch and yaw of the surgical instruments and endoscope based on operator input, and to allow an insertion depth of the insertion tube when inserted in the patient to be adjustable to thereby adjust an insertion depth of the surgical instruments and endoscope in a surgical space within the patient based on operator input while maintaining a location of a remote center of motion at an incision location on the patient.
12. The robotic surgical system of claim 11, wherein the plurality of transverse arms include: a first transverse arm vertically movable relative to the cart and at least partially extending along a first plane, a second transverse arm pivotally coupled to the first transverse arm about a first joint and extending along a second plane parallel to the first plane, the second transverse arm configured to pivot about a vertical axis through the first joint, and a third transverse arm pivotally coupled to the second transverse arm about a second joint and extending along a third plane parallel to the first and second planes, the third transverse arm configured to pivot about a vertical axis through the second joint, the support arm pivotally coupled to the third transverse arm.
13. The robotic surgical system of claim 11, wherein each of the joints includes an electric motor operable to effect a pivoting motion of the joint about its axis to effect a motion to adjust one or more of the lateral position, the pitch and the yaw of the central drive unit to thereby adjust a respective pitch and yaw of the surgical instruments and endoscope based on operator input.
14. The robotic surgical system of claim 13, further comprising a brake or clutch operatively coupled to the electric motor and selectively operable to decouple the electric motor from the joint to allow for manual operation of the arm assembly and selectively operable to lock a position of the joint during operation of the surgical instruments or endoscope or when the robotic surgical system experiences loss of power.
15. The robotic surgical system of claim 13, further comprising a torque sensor, the electric motor configured to operate in a force-follow mode based on input from the torque sensor that senses an operator force on the joint.
16. The robotic surgical system of claim 11, wherein the arm assembly is configured to achieve a compact stowed configuration where one of the transverse arms overlaps with another of the transverse arms.
17. The robotic surgical system of claim 11, wherein the arm assembly is movably coupled to a side of the cart.
18. The robotic surgical system of claim 11, wherein the arm assembly is movably coupled to a top of the cart.
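Claims 5 and 15 above describe a force-follow mode in which an electric motor responds to operator torque sensed at a joint. Behavior like this is commonly implemented as admittance control, where sensed torque is mapped to a commanded joint velocity so the arm yields to the operator's hand. A minimal sketch under that assumption; the dead-band, gain, and velocity limit are illustrative values, not Titan's implementation:

```python
# Hypothetical sketch of a "force-follow" joint mode as admittance control:
# operator torque measured by the joint's torque sensor is mapped to a
# commanded joint velocity, so the arm moves in the direction it is pushed.
# All constants are illustrative assumptions.

DEADBAND_NM = 0.5   # ignore sensor noise below this torque (N*m)
ADMITTANCE = 0.8    # commanded rad/s per N*m of operator torque
MAX_VEL = 0.6       # rad/s, safety clamp on the commanded velocity

def force_follow_velocity(sensed_torque_nm: float) -> float:
    """Map a sensed operator torque to a commanded joint velocity."""
    if abs(sensed_torque_nm) < DEADBAND_NM:
        return 0.0                        # hold still when untouched
    vel = ADMITTANCE * sensed_torque_nm   # yield in the push direction
    return max(-MAX_VEL, min(MAX_VEL, vel))

print(force_follow_velocity(2.0))  # a 2 N*m push -> 0.6 (clamped from 1.6)
```

The dead-band keeps sensor noise from drifting the arm, and the clamp bounds how fast the arm can yield during manual repositioning.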
an entry at 0.25 does not happen often!
the risk is as high as the potential market is large!
Maybe you could take it even lower, who knows?
According to surgeons, single-port is the future!
Here they have created and patented a very clever toy that has everything it is supposed to have!
After the due contempt has been shown, perhaps some interested hand will come knocking!
Come on Vance do something magical!
BOOM!
Imaging Apparatus Having Configurable Stereoscopic Perspective
DOCUMENT ID: US 11586106 B2
DATE PUBLISHED: 2023-02-21
INVENTORS: Jones; Evan Rittenhouse (Levittown, NY, US); Blain; Maxime (Rosedale, NY, US); Smith; Christopher Dean (Shirley, NY, US)
APPLICANT: Titan Medical Inc. (Toronto, CA), type: assignee
ASSIGNEE: TITAN MEDICAL INC. (Toronto, CA), type code 03
APPLICATION NO: 16/235246
DATE FILED: 2018-12-28
US CLASS CURRENT: 1/1
CPC CURRENT (TYPE, CPC, DATE):
CPCI H04N 13/296, 2018-05-01
CPCI H04N 13/239, 2018-05-01
CPCI A61B 1/00193, 2013-01-01
CPCI A61B 34/30, 2016-02-01
CPCI A61B 1/0019, 2013-01-01
CPCI A61B 1/00096, 2013-01-01
CPCI G03B 35/04, 2013-01-01
CPCA G02B 26/0816, 2013-01-01
CPCA G02B 5/06, 2013-01-01
CPCA H04N 2013/0081, 2013-01-01
CPCA G02B 26/0875, 2013-01-01
CPCA A61B 2034/302, 2016-02-01
CPCA H04N 2213/001, 2013-01-01
CPCA G02B 3/12, 2013-01-01
CPCA G02B 26/0883, 2013-01-01
Abstract
In some embodiments, a stereoscopic imaging apparatus includes a tubular housing having a bore extending longitudinally through the housing. First and second image sensors are disposed proximate a distal end of the bore, each including a plurality of light sensitive elements on a face and mounted facing laterally outward. The apparatus further includes a first beam steering element associated with the first image sensor and a second beam steering element associated with the second image sensor. The beam steering elements receive light from first and second perspective viewpoints and direct the received light onto the faces of the image sensors, forming first and second images. Either the first and second beam steering elements or the first and second image sensors are moveable to cause a change in a spacing between or an orientation of the perspective viewpoints, to cause sufficient disparity between the first and second images to provide image data including three-dimensional information.
Background/Summary
BACKGROUND
1. Field
(1) This disclosure relates generally to stereoscopic imaging and more particularly to a stereoscopic imaging apparatus wherein a spacing between or an orientation of the stereoscopic viewpoints may be changed to cause sufficient disparity between images for generating three-dimensional (3D) information.
2. Description of Related Art
(2) Stereoscopic imaging generally involves capturing a pair of images from spaced apart perspective viewpoints and processing the images to generate a three-dimensional (3D) view or 3D information based on a disparity between the images. Small format image sensors may be used to generate stereoscopic images while being sufficiently small to fit within a small diameter tubular housing. However, when the spacing between image sensors is constrained by the size of the housing, the disparity between images may be insufficient, particularly when viewing objects that are close to the image sensors. The lack of disparity results in some views providing an inadequate 3D viewing effect. The extraction of 3D information may also be limited by the lack of disparity between stereo images.
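The limitation described above is easy to quantify: for parallel viewpoints, the image disparity of a point at depth Z is approximately d = f·B/Z, where f is the focal length and B is the baseline between viewpoints. A quick sketch with illustrative numbers (not taken from the patent) showing how a baseline constrained by a narrow housing shrinks disparity:

```python
# Disparity for parallel stereo viewpoints: d = f * B / Z
# (f: focal length, B: baseline between viewpoints, Z: object depth).
# All numbers are illustrative assumptions, not values from the patent.

def disparity_mm(f_mm: float, baseline_mm: float, depth_mm: float) -> float:
    """Approximate image-plane disparity of a point at the given depth."""
    return f_mm * baseline_mm / depth_mm

f = 2.0  # assumed focal length of a miniature endoscopic lens, mm
for baseline in (3.0, 8.0):  # housing-limited vs. widened viewpoint spacing
    d = disparity_mm(f, baseline, depth_mm=80.0)  # object 80 mm away
    print(f"baseline {baseline} mm -> disparity {d:.3f} mm")
```

Disparity scales linearly with the baseline, which is exactly why making the viewpoint spacing configurable recovers the 3D effect.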
SUMMARY
(3) In accordance with some embodiments there is provided a stereoscopic imaging apparatus. The apparatus includes a tubular housing configured for insertion into a confined space, the tubular housing having a bore extending longitudinally through the housing. The apparatus also includes first and second image sensors disposed proximate a distal end of the bore, each image sensor including a plurality of light sensitive elements on a face of the image sensor and being mounted facing laterally outward with respect to a longitudinal axis extending through the bore. The apparatus further includes a first beam steering element associated with the first image sensor and a second beam steering element associated with the second image sensor, the beam steering elements being operably configured to receive light from respective first and second perspective viewpoints extending longitudinally outward into an object field and direct the received light onto the faces of the respective first and second image sensors for forming respective first and second images. Either the first and second beam steering elements or the first and second image sensors are moveable to cause a change in at least one of a spacing between and an orientation of the perspective viewpoints with respect to a longitudinal axis of the bore to cause sufficient disparity between the first and second images to provide image data including three-dimensional information.
(4) Each of the first and second beam steering elements may include a plurality of beam steering elements disposed in different locations with respect to the longitudinal axis and the first and second image sensors may be moveable to cause the first and second images to be selectively received by one of the plurality of beam steering elements.
(5) The first and second image sensors may be mounted back-to-back on a moveable carrier.
(6) The moveable carrier may include a circuit substrate.
(7) The moveable carrier may be constrained for longitudinal motion within the bore and may further include an actuator disposed within the bore and operably configured to cause longitudinal movement of the carrier.
(8) The actuator may include one of a piezoelectric actuator, a rotary piezoelectric motor, and a control wire.
(9) The plurality of beam steering elements disposed in different locations may include longitudinally spaced apart prisms at a periphery of the housing, each prism being operably configured to receive light from a different perspective viewpoint.
(10) Each of the first and second beam steering elements may include a moveable reflective element operably configured to be pivoted to receive light from different perspective viewpoints.
(11) The moveable reflective elements are operably configured to be disposed along an outer periphery of the housing while the apparatus is being inserted into the confined space and are deployable after insertion to receive light from the respective first and second perspective viewpoints.
(12) Each of the first and second beam steering elements may include a deformable optical element operably configured to deform to receive light from different perspective viewpoints.
(13) The deformable optical element may include at least one of a liquid lens and a liquid prism.
(14) The apparatus may include an actuator operably configured to cause movement of imaging lenses associated with each of the first and second image sensors in a direction aligned with the longitudinal axis to cause a change in orientation of the perspective viewpoints with respect to a longitudinal axis.
(15) The tubular housing may be attached to a distal end of an elongate sheath having a passage extending through the sheath for carrying signals to and from the image sensors.
(16) At least a portion of the sheath may include a manipulator operably configured to cause the sheath to be bent for positioning the tubular housing within the confined space.
(17) The confined space may include a body cavity of a patient undergoing a medical or surgical procedure.
(18) The stereoscopic imaging apparatus may be used in a robotic surgery system.
(19) The tubular housing may have a generally circular cross section.
(20) The bore of the tubular housing may have a diameter of about 10 millimeters.
(21) The apparatus may include a controller in communication with the apparatus and operably configured to cause movement of either the first and second beam steering elements or the first and second image sensors in response to making a determination that an object field being captured by the apparatus may have insufficient disparity between the first and second images to provide image data including three-dimensional information.
(22) Other aspects and features will become apparent to those ordinarily skilled in the art upon review of the following description of specific disclosed embodiments in conjunction with the accompanying figures.
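Paragraph (21) above sketches a controller that reconfigures the optics when the current object field cannot yield enough disparity. A toy version of that decision loop; the threshold, the measurement callback, and the actuator callback are hypothetical names, not an API from the patent:

```python
# Toy version of the controller behavior in paragraph (21): measure the
# disparity available in the current stereo pair and, if it is too small
# for 3D extraction, step to a wider viewpoint spacing.
# The threshold and spacings are illustrative assumptions.

MIN_DISPARITY_PX = 4.0    # assumed minimum disparity for usable 3D info
SPACINGS_MM = [3.0, 8.0]  # selectable spacings (e.g. two prism positions)

def ensure_sufficient_disparity(measure_median_disparity, set_viewpoint_spacing):
    """Walk the available spacings until the measured disparity is adequate."""
    for spacing in SPACINGS_MM:
        set_viewpoint_spacing(spacing)
        if measure_median_disparity() >= MIN_DISPARITY_PX:
            return spacing
    return SPACINGS_MM[-1]  # fall back to the widest available spacing

# Stub usage: the narrow spacing yields 2 px of disparity, the wide one 5 px.
readings = iter([2.0, 5.0])
print(ensure_sufficient_disparity(lambda: next(readings), lambda mm: None))
```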
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) In drawings which illustrate disclosed embodiments,
(2) FIG. 1 is a perspective view of a stereoscopic imaging apparatus;
(3) FIG. 2 is a perspective view of an imaging assembly of the stereoscopic imaging apparatus shown in FIG. 1;
(4) FIG. 3A is a schematic plan view of an optical configuration of the imaging apparatus shown in FIG. 2;
(5) FIG. 3B is a schematic plan view of a further optical configuration of the imaging apparatus shown in FIG. 2 and FIG. 3A;
(6) FIG. 4 is a perspective view of a stereoscopic imaging apparatus in accordance with another embodiment;
(7) FIG. 5A is a schematic plan view of an optical configuration of the imaging apparatus shown in FIG. 4;
(8) FIG. 5B is a schematic plan view of a further optical configuration of the imaging apparatus shown in FIG. 4 and FIG. 5A;
(9) FIG. 6 is a schematic plan view of an optical configuration for implementing some embodiments;
(10) FIG. 7A is a schematic plan view of another optical configuration of the imaging apparatus shown in FIG. 4 in accordance with another embodiment; and
(11) FIG. 7B is a schematic plan view of the optical configuration of the imaging apparatus shown in FIG. 7A.
DETAILED DESCRIPTION
(12) Referring to FIG. 1, a stereoscopic imaging apparatus in accordance with a first embodiment is shown generally at 100. The apparatus 100 includes a tubular housing 102 configured for insertion into a confined space. The tubular housing 102 has a bore 104 extending longitudinally through the housing that accommodates imaging components (shown in FIG. 2). In the embodiment shown, the tubular housing 102 has a generally circular cross section, which in one embodiment may have a diameter of about 10 millimeters.
(13) In the embodiment shown, the tubular housing is attached to a distal end of an elongate sheath 106 having a passage 108 extending through the sheath for carrying signals to and from the imaging components within the tubular housing 102. A portion of the sheath 106 includes a manipulator 110, which is configured to cause the sheath to be bent to position the tubular housing within the confined space for capturing images. In one embodiment, the manipulator may include a plurality of vertebrae actuated to bend by a plurality of control links or cables 112 for disposing the apparatus 100 at various positions with respect to a longitudinal axis 120 of the bore 104. The passage 108 also accommodates various signal cables 114 for carrying image data to a host system controller 122 and for transmitting control and command signals to the apparatus 100. The host system controller 122 is in communication with a display 124 for displaying the images, which may be viewed through a stereoscopic viewing device (not shown) to provide separate left and right stereoscopic images to a user's left and right eyes.
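For the vertebra-and-cable manipulator 110, a common first-order model for tendon-driven segments (the constant-curvature assumption) relates cable pull to bend angle as roughly theta = delta_l / r, where r is the cable's offset from the segment's neutral axis. A sketch with assumed dimensions, not taken from the patent:

```python
import math

# Constant-curvature approximation for a tendon-driven (vertebra + cable)
# bending segment: shortening a cable that runs a distance r off the
# neutral axis by delta_l bends the segment by about theta = delta_l / r.
# The dimensions below are assumptions for illustration only.

def bend_angle_deg(cable_pull_mm: float, cable_offset_mm: float) -> float:
    """Approximate bend angle produced by a given cable shortening."""
    return math.degrees(cable_pull_mm / cable_offset_mm)

# Example: pulling a cable 4 mm when it runs 4.5 mm off-axis bends the
# manipulator by roughly 51 degrees.
print(f"{bend_angle_deg(4.0, 4.5):.0f} deg")
```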
(14) The apparatus 100 includes a first beam steering element 116 laterally disposed on the tubular housing 102 of the apparatus 100 proximate a distal end 118. A second beam steering element (not visible in FIG. 1) is similarly laterally disposed on the opposite side of the tubular housing 102. The beam steering element 116 in FIG. 1 is shown schematically as a demarcated portion of the tubular housing 102 but may take on various forms, such as described in more detail below.
(15) In one embodiment, the confined space within which the apparatus 100 may be employed may be a body cavity of a patient undergoing a medical or surgical procedure. For example, the apparatus 100 may be used for imaging during a laparoscopic surgery procedure or may be part of a robotic surgery system for performing robotic surgery.
(16) Referring to FIG. 2, the apparatus 100 includes an imaging assembly shown generally at 200. The imaging assembly 200 includes a first image sensor 202 and a second image sensor 204 (of which only a portion is visible in FIG. 2). The first and second image sensors 202 and 204 are substantially identical and are disposed proximate the distal end 118 of the bore 104, mounted facing laterally outward with respect to the longitudinal axis 120 of the bore. Each of the image sensors 202 and 204 includes a plurality of light sensitive elements 206 on a face 208 of the image sensor.
(17) The imaging assembly 200 also includes a first beam steering element 210 associated with the first image sensor 202 and a second beam steering element 212 associated with the second image sensor 204. The beam steering element 210 is operably configured to receive light from a first perspective viewpoint in an object field 218, which is directed through an imaging lens 214 onto the face 208 of the image sensor 202 for forming a first image. The beam steering element 212 is operably configured to receive light from a second perspective viewpoint in the object field 218, which is directed through an imaging lens 216 onto the face of the image sensor 204 for forming a second image.
(18) In this embodiment, the first beam steering element 210 includes two prisms 220 and 222 longitudinally spaced apart at a periphery of the imaging assembly 200. Similarly, the second beam steering element 212 includes two prisms 224 and 226 longitudinally spaced apart on an opposite side of the imaging assembly 200. The first and second image sensors 202 and 204 are moveable along the longitudinal axis 120 to cause the first and second images to be selectively received by either the prisms 220, 224 or the prisms 222, 226. In the embodiment shown in FIG. 2, the first and second image sensors 202 and 204 are mounted back-to-back on a moveable carrier 228, which in the embodiment shown comprises respective circuit substrates 234 and 236 on which the imaging sensors are mounted. In the embodiment shown in FIG. 1 the imaging lenses 214 and 216 are each mounted in a lens tube (shown in FIG. 3) which is coupled to the respective first and second image sensors 202 and 204 and thus move with the sensors and the moveable carrier 228.
(19) The moveable carrier 228 is received within a channel 230 in a frame 232 (shown partially cut-away in FIG. 2 to reveal underlying elements). The frame 232 is received within and fixed relative to the bore 104 of the tubular housing 102. The moveable carrier 228 is constrained for longitudinal movement within the channel 230 of the frame 232 in a direction aligned with the longitudinal axis 120 of the bore. The imaging assembly 200 further includes an actuator 238 which is coupled to the moveable carrier 228 to cause the longitudinal movement of the carrier when actuated by a control signal provided by the host system controller 122. In some embodiments the actuator 238 may be a piezoelectric actuator, a rotary piezoelectric motor, or a control wire, for example.
(20) In the imaging assembly 200 shown in FIG. 2, the moveable carrier 228 is disposed such that the first and second image sensors 202 and 204 receive images via the prisms 220 and 224 respectively. The optical configuration corresponding to FIG. 2 is shown in plan view in FIG. 3A, in which the first and second image sensors 202 and 204 have perspective viewpoints 300 and 302 within the object field 218. The perspective viewpoints 300 and 302 are separated by a distance D.sub.1 and in this embodiment where the prisms 220 and 224 have a 45° prism angle, the perspective viewpoints are also substantially parallel.
(21) Referring to FIG. 3B, when the moveable carrier 228 is moved by the actuator 238 to align the sensors 202 and 204 with the prisms 222 and 226, the first and second image sensors have respective perspective viewpoints 300' and 302' within the object field 218. The perspective viewpoints 300' and 302' are separated by a distance D.sub.2 and due to the 45° prism angle of the prisms 222 and 226, are also substantially parallel. The increased separation between the perspective viewpoints from D.sub.1 to D.sub.2 increases the disparity between the first and second images received at the respective first and second image sensors 202 and 204. The increased image disparity may provide for more effective display and extraction of 3D information. Under some imaging conditions the smaller disparity D.sub.1 as shown in FIG. 3A may be insufficient to provide a view having appreciable 3D depth.
(22) Referring to FIG. 4, a stereoscopic imaging apparatus in accordance with another embodiment is shown generally at 400. The apparatus 400 includes a tubular housing 402, shown partially cut away in FIG. 4 to reveal imaging components. The apparatus 400 includes first and second image sensors 404 and 406 disposed back-to-back and proximate a distal end 408 of a bore 410. The back-to-back mounting has an advantage of providing options for packaging the optical components within the tubular housing 402 in that the image sensors 404 and 406 may be located proximate a widest portion of the bore 410. In systems where image sensors are disposed side-by-side at a distal end of a tubular housing and facing the object field 218, the maximum size of sensor that can be accommodated would have a width of less than half of the diameter of the tubular housing. For a 10 millimeter diameter housing, the maximum diagonal size of image sensor would be about 6 millimeters (or ¼ inch). The configuration of imaging assembly 200 shown in FIG. 2 would permit the sensors to be increased in size to close to the full 10 millimeters (or just less than ½ inch). While image sensors as small as 3.62 millimeters (1/7 inch) are now available, a larger image sensor may provide improved light capture, imaging performance, reduced image signal noise, and also increased image resolution.
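The packaging arithmetic in this paragraph checks out under an assumed 4:3 sensor aspect ratio: side-by-side sensors in a 10 mm bore each get less than 5 mm of width, which caps the diagonal near 6 mm, while a back-to-back sensor face can span most of the bore. A quick verification:

```python
import math

# Verify the packaging arithmetic from paragraph (22), assuming a 4:3
# sensor: side-by-side sensors in the bore each get < half the diameter
# of width; a back-to-back sensor face can approach the full diameter.

BORE_MM = 10.0

def diagonal_for_width(width_mm: float, aspect=(4, 3)) -> float:
    """Sensor diagonal for a given width; for 4:3 this is width * 5/4."""
    w, h = aspect
    return width_mm * math.hypot(w, h) / w

print(f"side-by-side max diagonal ~ {diagonal_for_width(BORE_MM / 2):.2f} mm")
print(f"back-to-back max diagonal ~ {BORE_MM:.1f} mm")
# -> about 6.25 mm vs. 10 mm, matching the ~6 mm and ~10 mm in the text.
```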
(23) The image sensors 404 and 406 each include a plurality of light sensitive elements 412 on a face 414 of the image sensors. The image sensors 404 and 406 are mounted on a carrier 418 facing laterally outward with respect to a longitudinal axis 416 extending through the bore 410. In this embodiment the carrier 418 is made up of circuit substrates 420 and 422 on which the sensors 404 and 406 are mounted, and the carrier 418 and image sensors 404 and 406 are immobilized within the bore 410 of the tubular housing 402.
(24) The apparatus 400 also includes a first beam steering element 424 associated with the first image sensor 404 and a second beam steering element 426 associated with the second image sensor 406. The first beam steering element 424 in this embodiment is implemented using a reflective element or mirror 428 mounted on a moveable support 430 hinged to the tubular housing 402 and operable to pivot outwardly as indicated by the arrow 432. Similarly, the second beam steering element 426 includes a mirror 434 mounted on a moveable support 436 that is hinged to the tubular housing 402 and operable to pivot outwardly. In this embodiment the first beam steering element 424 includes a miniature actuator 438 coupled to the moveable support 430 to cause the movement 432 for deploying the mirror. The second beam steering element 426 also includes an actuator (not visible in FIG. 4) for actuating movement of the moveable support 436. While the apparatus 400 is being inserted into a confined space, the beam steering elements 424 and 426 may be maintained in an un-deployed disposition lying along an outer periphery of the housing 402. Once the apparatus 400 is inserted, the beam steering elements 424 and 426 may be deployed to receive light from an object field 440. The mirrors 428 and 434 each receive light from different perspective viewpoints within the object field 440. The received light is directed by the respective mirrors 428 and 434 through lenses 442 and 444 toward the sensors 404 and 406 for forming left and right images on the sensors.
(25) Referring to FIG. 5A, the apparatus 400 is shown in a first deployed operating condition where the mirrors 428 and 434 are pivoted outwardly to an angle a.sub.1 of about 35° with respect to the longitudinal axis 416. Under these conditions the image sensors 404 and 406 receive light from respective first and second perspective viewpoints 500 and 502 that are angled inwardly (or toed in) toward the longitudinal axis 416 and converge at a convergence plane 504. Images captured of objects located at the convergence plane 504 will not have any disparity and will appear to be located at a screen plane when viewed on the display 124 using a 3D viewing device. Objects closer to the apparatus 400 than the convergence plane 504 will exhibit negative parallax and will appear to be located forward of the screen plane, while objects behind the convergence plane 504 will have positive parallax and appear to be located rearward of the screen plane.
(26) In FIG. 5B, the mirrors 428 and 434 are pivoted outwardly to an angle a.sub.2 of about 40° such that the image sensors 404 and 406 receive light from respective first and second perspective viewpoints 500' and 502' that are less inwardly angled with respect to the longitudinal axis 416. This has the effect of moving the associated convergence plane 504' for the perspective viewpoints 500' and 502' outwardly with respect to the apparatus 400.
(27) One advantage of the configuration shown in FIGS. 5A and 5B is that the convergence plane 504 may be located at a desired depth in the object field 440 to facilitate generation of 3D information at the desired depth. Some 3D information may also be generated for objects located away from the convergence plane 504, but the 3D effects are enhanced and the resulting view may result in increased eyestrain for the user.
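The mirror-angle geometry of FIGS. 5A and 5B can be made concrete. A mirror at angle a to the longitudinal axis redirects the lateral-facing sensor's line of sight by 2a, so each viewpoint is toed in by (90° - 2a); two viewpoints spaced D apart then converge at roughly Z = (D/2)/tan(90° - 2a). A sketch assuming an 8 mm viewpoint spacing (the spacing is an assumed value):

```python
import math

# Convergence depth vs. mirror angle: a mirror at angle a (degrees) to
# the longitudinal axis toes the viewpoint in by (90 - 2a) degrees, and
# two viewpoints spaced D apart converge at Z ~ (D/2) / tan(toe_in).
# The 8 mm spacing is an assumption for illustration.

def convergence_depth_mm(mirror_angle_deg: float, spacing_mm: float) -> float:
    toe_in = math.radians(90.0 - 2.0 * mirror_angle_deg)
    return (spacing_mm / 2.0) / math.tan(toe_in)

for angle in (35.0, 40.0):  # the two mirror angles from FIGS. 5A and 5B
    print(f"mirror at {angle} deg -> convergence ~ "
          f"{convergence_depth_mm(angle, 8.0):.0f} mm out")
# 35 deg converges ~11 mm out; 40 deg pushes the plane to ~23 mm,
# consistent with the text: a larger mirror angle moves the plane outward.
```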
(28) Referring back to FIG. 2, FIG. 3A and FIG. 3B, in the embodiment shown the prisms 220 and 224 may be configured with a prism angle less than 45° to cause the perspective viewpoints 300 and 302 to be angled inwardly, generally as shown in FIG. 5. The prisms 222 and 226 may be configured with a prism angle less than 45° to cause the perspective viewpoints 300' and 302' to be angled inwardly. Other embodiments may be configured to maintain the parallel perspective viewpoints 300' and 302', while the perspective viewpoints 300 and 302 are toed in. Parallel perspective viewpoints effectively locate the convergence plane at infinity such that the screen plane is at infinity and all objects are displayed having positive parallax.
(29) In an embodiment configured as shown in FIG. 3A and FIG. 3B, the prism angle once selected remains fixed. Referring to FIG. 6, in some embodiments the imaging assembly 200 may further include an actuator 600 and the imaging lenses 214 and 216 may be moveable in a direction aligned with the longitudinal axis 416 in response to movement of the actuator. Displacement of the imaging lenses 214 and 216 with respect to an optical centerline 602 of the first and second image sensors 202 and 204 causes the perspective viewpoints 300″ and 302″ to be toed in to a degree permitted by the optical design of the imaging lenses.
(30) In some embodiments one or more conventional optical elements of the imaging assembly 200 or apparatus 400 may be replaced with a deformable optical element. For example, the prisms 220-226 may be implemented as liquid prisms that are capable of changing beam steering characteristics in response to a control signal received from the host system controller 122. Similarly, one or more of the imaging lenses 214, 216, 442, or 444 may include a deformable optical element such as a liquid lens. The deformable optical element facilitates some adjustment of the perspective viewpoint orientation and/or separation by changing optical properties of the deformable element.
(31) In some embodiments the host system controller 122 may be configured to make a determination whether the object field 218 or object field 440 being captured by the imaging assembly 200 or imaging apparatus 400 is capable of providing sufficient disparity between the first and second images for successful extraction of 3D information. The host system controller 122 may be further configured to cause movement of the applicable beam steering elements, imaging lenses, or deformable optics when insufficient disparity is found in the images currently being captured.
(32) In some embodiments the mirrors 428 and 434 (shown in FIGS. 4 and 5) may be replaced by mirrors 700 and 702 as shown in FIG. 7A. Each of the mirrors 700 and 702 in FIG. 7A has a first reflective surface 704 and a second reflective surface 706. The first reflective surface 704 is disposed at an angle a.sub.1, which in the example shown is 45°, resulting in perspective viewpoints 708 and 710 within the object field 440 generally as described in connection with FIG. 3A. The perspective viewpoints 708 and 710 are separated by a distance D.sub.1 and, in this embodiment where the first reflective surface 704 is at a 45° angle to the longitudinal axis 416, are also substantially parallel.
(33) Referring to FIG. 7B, when the mirrors 700 and 702 are pivoted further outwardly with respect to the longitudinal axis 416, the first and second image sensors 404 and 406 will have respective perspective viewpoints 708' and 710' within the object field 440. The perspective viewpoints 708' and 710' are separated by a distance D.sub.2 and due to the further 45° angle of the second reflective surface 706, are also substantially parallel. In this embodiment the mirrors 700 and 702 may also be actuated to angles other than 45°, thus facilitating toeing in the perspective viewpoints while also providing a selectable spacing between the perspective viewpoints.
(34) The embodiments set forth above provide for selectively changing orientation and/or the spacing between perspective viewpoints for producing stereoscopic views of an object field. The back-to-back orientation of the lateral facing image sensors also facilitates the accommodation of the imaging components within a small bore housing suitable for insertion into confined spaces. The provision of beam steering elements that are located peripherally on the housing increases the spacing between perspective viewpoints over a side-by-side image sensor configuration.
(35) While specific embodiments have been described and illustrated, such embodiments should be considered illustrative only and not as limiting the disclosed embodiments as construed in accordance with the accompanying claims.
Claims
1. A stereoscopic imaging apparatus comprising: a tubular housing configured to be inserted into a confined space, the tubular housing including a bore extending longitudinally through the tubular housing; first and second image sensors disposed proximate a distal end of the bore, each image sensor including a plurality of light sensitive elements on a face of the image sensor and being mounted facing laterally outward with respect to a longitudinal axis extending through the bore; a first beam steering element associated with the first image sensor and a second beam steering element associated with the second image sensor, the first and second beam steering elements configured to receive light from respective first and second perspective viewpoints extending longitudinally outward into an object field and direct the received light onto the faces of the respective first and second image sensors to form respective first and second images; and a first movable support mounted via a first hinge to the tubular housing and fixedly supporting the first beam steering element thereon, and a second movable support mounted via a second hinge to the tubular housing and fixedly supporting the second beam steering element thereon, wherein the first and second beam steering elements are pivotably moveable outside the tubular housing to cause a change in an orientation of the perspective viewpoints with respect to the longitudinal axis of the bore to cause sufficient disparity between the first and second images to provide image data including three-dimensional (3D) information.
2. The apparatus of claim 1, wherein each of the first and second beam steering elements comprises a moveable reflective element configured to be pivoted to receive light from different perspective viewpoints.
3. The apparatus of claim 2, wherein the moveable reflective elements are configured to be disposed along an outer periphery of the tubular housing while the apparatus is being inserted into the confined space and are deployable after insertion to receive light from the respective first and second perspective viewpoints.
4. The apparatus of claim 1 wherein each of the first and second beam steering elements comprises a deformable optical element configured to deform to receive light from different perspective viewpoints.
5. The apparatus of claim 4 wherein the deformable optical element comprises at least one of a liquid lens or a liquid prism.
6. The apparatus of claim 1 further comprising an actuator configured to cause movement of imaging lenses associated with each of the first and second image sensors in a direction aligned with the longitudinal axis to cause a change in orientation of the perspective viewpoints with respect to a longitudinal axis.
7. The apparatus of claim 1, wherein the tubular housing is attached to a distal end of an elongate sheath having a passage extending through the elongate sheath to carry signals to and from the first and second image sensors.
8. The apparatus of claim 7, wherein at least a portion of the elongate sheath comprises a manipulator configured to cause the elongate sheath to be bent to position the tubular housing within the confined space.
9. The apparatus of claim 1, wherein the confined space comprises a body cavity of a patient undergoing a medical or surgical procedure.
10. The apparatus of claim 9, wherein the stereoscopic imaging apparatus is used in a robotic surgery system.
11. The apparatus of claim 1, wherein the tubular housing includes a generally circular cross section.
12. The apparatus of claim 1, wherein the bore of the tubular housing has a diameter of about 10 millimeters.
13. The apparatus of claim 1, further comprising a controller in communication with the apparatus and configured to cause movement of either the first and second beam steering elements or the first and second image sensors based on a determination of whether or not the object field being captured by the apparatus provides sufficient disparity between the first and second images to extract three-dimensional (3D) information.
14. A stereoscopic imaging apparatus comprising: first and second image sensors configured to be disposed proximate a distal end of a bore of a tubular housing, each image sensor including a plurality of light sensitive elements on a face of the image sensor and being mounted facing laterally outward with respect to a longitudinal axis extending through the bore of the tubular housing; a first beam steering element associated with the first image sensor and a second beam steering element associated with the second image sensor, the first and second beam steering elements configured to receive light from respective first and second perspective viewpoints extending longitudinally outward into an object field and direct the received light onto the faces of the respective first and second image sensors to form respective first and second images; and a first movable support mounted via a first hinge to the tubular housing and fixedly supporting the first beam steering element thereon, and a second movable support mounted via a second hinge to the tubular housing and fixedly supporting the second beam steering element thereon, wherein the first and second beam steering elements are pivotably moveable outside the tubular housing to cause a change in an orientation of the perspective viewpoints with respect to the longitudinal axis of the bore of the tubular housing to cause sufficient disparity between the first and second images to provide image data including three-dimensional (3D) information.
7x3=21
Crescent moon
it would be curious if today is the day
Boston firm acquires Rubius Therapeutics facility
https://pbn.com/boston-firm-acquires-rubius-therapeutics-facility/
everyone would like that I guess!
No institutional funding; let's see if they can sell off the whole shack!
there are two curious things
Camera Positioning Method And Apparatus For Capturing Images During A Medical Procedure
DOCUMENT ID: US 11576562 B2
DATE PUBLISHED: 2023-02-14
INVENTORS: Andrews; Richard (North Attleboro, MA, US); Faria; Leonard M. (Swansea, MA, US)
APPLICANT: Titan Medical Inc. (Toronto, CA), type: assignee
ASSIGNEE: TITAN MEDICAL INC. (Toronto, CA), type code 03
APPLICATION NO: 16/085152
DATE FILED: 2017-04-04
DOMESTIC PRIORITY (CONTINUITY DATA): US provisional application 62/319,426, filed 2016-04-07
US CLASS CURRENT: 1/1
CPC CURRENT (TYPE, CPC, DATE):
CPCI A61B 1/01, 2013-01-01
CPCI A61B 1/06, 2013-01-01
CPCI A61B 1/00096, 2013-01-01
CPCI A61B 1/008, 2013-01-01
CPCI A61B 1/05, 2013-01-01
CPCI A61B 1/0661, 2013-01-01
Abstract
A method and apparatus for positioning a camera to capture images inside a body cavity of a patient during a medical procedure is disclosed. The apparatus includes an insertion tube, a plurality of connected linkages extending from a distal end of the insertion tube, each linkage having a threaded actuator received on a threaded end of a drive shaft extending between the threaded actuator and a proximal end of the insertion tube. The apparatus also includes a camera disposed at a distal end of the plurality of connected linkages. Each connected linkage has at least one associated movement actuated by movement of the threaded actuator in response to rotation of the drive shaft, the associated movements of the connected linkages together operable to facilitate positioning of the camera within the body cavity of the patient.
Background/Summary
CROSS-REFERENCE TO RELATED APPLICATION
(1) This application is a U.S. National Phase Application under 35 U.S.C. 371 of International Application No. PCT/CA2017/000078 filed on Apr. 4, 2017, and published as WO 2017/173524 A1 on Oct. 12, 2017, which claims priority to U.S. Provisional Application No. 62/319,426, filed on Apr. 7, 2016. The entire disclosures of all of the above applications are incorporated herein by reference.
BACKGROUND
1. Field
(1) This disclosure relates generally to positioning a camera for imaging and more particularly to positioning a camera inside a body cavity of a patient for capturing images during a medical procedure.
2. Description of Related Art
(2) Miniaturized cameras are used during investigative medical procedures and surgical procedures such as laparoscopic surgery and computer assisted robotic surgery to produce images of a site of the procedure within a body cavity of the patient. The camera generally includes an illumination source for illuminating the site of the procedure.
SUMMARY
(3) In accordance with one disclosed aspect there is provided an apparatus for positioning a camera to capture images inside a body cavity of a patient during a medical procedure. The apparatus includes an insertion tube, a plurality of connected linkages extending from a distal end of the insertion tube, each linkage having a threaded actuator received on a threaded end of a drive shaft extending between the threaded actuator and a proximal end of the insertion tube. The apparatus also includes a camera disposed at a distal end of the plurality of connected linkages. Each connected linkage has at least one associated movement actuated by movement of the threaded actuator in response to rotation of the drive shaft, the associated movements of the connected linkages together operable to facilitate positioning of the camera within the body cavity of the patient.
(4) Each drive shaft may include a drive coupler at the proximal end of the drive shaft, the drive coupler operable to receive a drive torque for causing rotation of the drive shaft.
(5) The drive couplers may be housed within a drive interface operably configured to removably couple to a driver unit, the driver unit being operable to provide the respective drive torques.
(6) Each drive coupler may include a rotational coupler for transmitting torque to each drive shaft, the rotational coupler being operably configured to receive the proximal end of the drive shaft and to transmit the drive torque to the drive shaft while accommodating linear movement of the proximal end due to resulting movements of the camera.
(7) The rotational coupler may include a tubular body for receiving the proximal end of the drive shaft, the tubular body having a slotted portion that engages a pin extending through the proximal end of the drive shaft for coupling to the tubular body.
(8) Each rotational coupler may include a moveable detent coupled to the proximal end of the drive shaft and operable to resiliently engage a fixed detent in the drive interface corresponding to a startup position for each of the proximal ends of the respective drive shafts, the startup positions of the drive shafts defining an insertion position of the camera.
(9) The interface may be removably received on the drive unit, and when received, the moveable and fixed detents may be disengaged to permit movement of the camera away from the insertion position. Prior to removal of the interface, the drive unit is operably configured to place the camera in the insertion position, causing the moveable and fixed detents to be aligned. When removed, the moveable and fixed detents are engaged to retain the rotational couplers in the startup position.
(10) In the insertion position the camera may be positioned generally in line with a longitudinal axis extending outwardly from the insertion tube.
(11) The plurality of connected linkages may include at least a panning linkage for producing side-to-side motion of the camera, an elevating linkage for moving the camera away from the longitudinal axis, and a tilt linkage for tilting the camera forward and backward with respect to the longitudinal axis.
(12) The panning linkage may be connected to the distal end of the insertion tube, the elevating linkage is connected to the panning linkage and the tilt linkage is connected to the elevating linkage, and the camera may be attached to the tilt linkage.
(13) At least one of the drive shafts may include a compliant portion facilitating bending of the shaft in response to movements of the camera while continuing to permit rotation of the at least one drive shaft.
(14) Each linkage may include a revolute joint constrained to permit motion in a single degree of freedom corresponding to the associated movement of the linkage and the threaded actuator may be coupled to the linkage to cause motion about the revolute joint.
(15) In accordance with another disclosed aspect there is provided a method for positioning a camera to capture images inside a body cavity of a patient during a medical procedure, the camera being disposed at a distal end of a plurality of connected linkages extending from a distal end of an insertion tube, each linkage having a threaded actuator received on a threaded end of a drive shaft extending between the threaded actuator and a proximal end of the insertion tube. The method involves selectively causing rotation of the respective drive shafts to cause movement of the respective threaded actuators, the movement of the respective threaded actuators causing associated movements of the connected linkages to position the camera within the body cavity of the patient.
(16) Selectively causing rotation of the respective drive shafts may involve causing the respective drive shafts to position the camera in an insertion position prior to removal from the body cavity of a patient.
(17) Causing the respective drive shafts to position the camera in an insertion position may involve causing the camera to be positioned generally in line with a longitudinal axis of the insertion tube.
(18) In accordance with another disclosed aspect there is provided an apparatus for positioning a camera to capture images inside a body cavity of a patient during a medical procedure. The apparatus includes an articulated arm including a plurality of connected moveable linkages, a camera disposed at a distal end of the plurality of connected linkages, the camera including a camera housing enclosing image capture optics, an image sensor, and image capture electronic circuitry operable to produce image data representing images captured by the image sensor, and data transmission electrical circuitry operable to generate and transmit data signals encoding the image data to a host system, the data transmission electrical circuitry being housed within one of the moveable linkages and coupled to the image capture electronic circuitry via a flexible interconnect.
(19) Other aspects and features will become apparent to those ordinarily skilled in the art upon review of the following description of specific disclosed embodiments in conjunction with the accompanying figures.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) In drawings which illustrate disclosed embodiments,
(2) FIG. 1 is a perspective view of a robotic surgical apparatus;
(3) FIG. 2 is a perspective view of a drive unit and camera of the robotic surgical apparatus shown in FIG. 1;
(4) FIG. 3 is a perspective view of an insertion tube, linkages, and the camera shown in FIG. 2;
(5) FIG. 4 is a further enlarged perspective view of the linkages and camera shown in FIG. 3;
(6) FIG. 5 is a rear perspective view of the linkages and camera in a deployed state;
(7) FIG. 6 is a rear perspective view of a drive interface shown in FIG. 3; and
(8) FIG. 7 is a front perspective view of the linkages and camera in a deployed state.
DETAILED DESCRIPTION
(9) Referring to FIG. 1, a robotic surgical apparatus is shown generally at 100. The surgical apparatus 100 includes a cart 102 that supports an articulated boom 104 that carries a drive unit 106 having a camera 108 mounted on the drive unit. The cart 102 may be wheeled up to a patient (not shown) and the articulated boom 104 deployed to maneuver the drive unit 106 and camera 108 into a location for accessing a body cavity of the patient and positioning the camera to capture images inside the body cavity during a medical procedure. The surgical apparatus 100 may be controlled by a workstation console (not shown) connected to the surgical apparatus via a cable 110 that carries signals for controlling the drive unit 106 and camera 108.
(10) Referring to FIG. 2, the drive unit 106 and camera 108 are shown in front view. The camera 108 is mounted at a distal end of a plurality of connected linkages 120 extending from a distal end 122 of an insertion tube 124. The insertion tube 124 extends outwardly from a drive interface 126 that is removably received on the drive unit 106.
(11) The camera 108, insertion tube 124, and drive interface 126 are shown in greater detail in FIG. 3. Referring to FIG. 3, in the embodiment shown the plurality of connected linkages 120 include a panning linkage 130, an elevating linkage 132, and a tilt linkage 134. The panning linkage 130 is connected by a revolute joint 136 to the distal end 122 of the insertion tube 124, which constrains the panning linkage to side-to-side motion in the direction indicated by the arrow 138. The elevating linkage 132 is connected to the panning linkage 130 by a revolute joint 140, which constrains the linkage to movement away from a longitudinal axis 142 in the direction indicated by the arrow 144. The tilt linkage 134 is connected to the elevating linkage 132 by a revolute joint 148, which constrains the linkage to movement for tilting the camera 108 forward and backward with respect to the longitudinal axis 142 in the direction indicated by the arrow 150.
(12) In the embodiment shown the panning linkage 130 is thus connected to the distal end 122 of the insertion tube 124, the elevating linkage 132 is connected to the panning linkage 130 and the tilt linkage 134 is connected to the elevating linkage 132. The camera 108 is disposed at a distal end of the plurality of connected linkages 120, in this case connected to the tilt linkage 134. In other embodiments the plurality of connected linkages 120 may be otherwise arranged and one or more of the linkages may be omitted.
(13) The connected linkages 120 are shown in enlarged detail in FIG. 4 with a distal cap 152 (shown in FIG. 3) on the insertion tube 124 removed. Referring to FIG. 4, the panning linkage 130 has a threaded actuator 180 received on a threaded end 182 of a drive shaft 184. The elevating linkage 132 has a threaded actuator 188 received on a threaded end 190 of a drive shaft 192. The tilt linkage 134 has a threaded actuator 194 received on a threaded end 196 of a drive shaft 198. Each of the drive shafts 184, 192 and 198 extends between the respective threaded actuators 180, 188, and 194 and a proximal end 186 (shown in FIG. 3) of the insertion tube 124. The drive shafts 184, 192 and 198 are routed through respective bores 170, 172, and 174 extending through the insertion tube 124 (only shown in part in FIG. 4). The bores 170, 172, and 174 are sized and configured such that each drive shaft 184, 192 and 198 is freely rotatable within the bores as indicated by the arrows shown in FIG. 4.
(14) Each connected linkage 120 thus has at least one associated movement actuated by movement of the respective threaded actuators 180, 188, and 194 in response to rotation of the respective drive shafts 184, 192 and 198. The associated movements of the connected linkages 120 are together operable to facilitate positioning of the camera 108 within the body cavity of the patient. For example, rotation of the shaft 184 causes the threaded actuator 180 to move either forwardly or rearwardly in a direction aligned with the longitudinal axis 142 causing the panning linkage 130 to pan about the revolute joint 136 moving the camera 108 from side to side. In the embodiment shown, each of the linkages 120 thus includes a revolute joint (136, 140, 148) constrained to permit motion in a single degree of freedom corresponding to the associated movement of the linkage and a threaded actuator (180, 188, and 194) coupled to the linkage to cause motion about the revolute joint.
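Paragraph (14) describes a lead-screw drive: shaft rotation advances the threaded actuator linearly (turns multiplied by thread pitch), and that travel swings the linkage about its revolute joint. A minimal sketch of the conversion; the thread pitch and lever arm are assumed values, not from the patent:

```python
import math

# Lead-screw actuation sketch: rotating a drive shaft advances its
# threaded actuator by (turns * thread pitch); acting on a lever arm
# about the revolute joint, that linear travel swings the linkage.
# Pitch and lever-arm values are illustrative assumptions.

THREAD_PITCH_MM = 0.5  # linear advance per shaft revolution
LEVER_ARM_MM = 6.0     # actuator attachment offset from the joint axis

def joint_angle_deg(shaft_turns: float) -> float:
    """Approximate linkage angle produced by a number of shaft turns."""
    travel = shaft_turns * THREAD_PITCH_MM
    return math.degrees(math.atan2(travel, LEVER_ARM_MM))

# Example: 10 turns -> 5 mm of travel -> the linkage swings ~40 degrees.
print(f"{joint_angle_deg(10):.0f} deg")
```

The appeal of this arrangement is resolution: with a fine pitch, a whole turn of the drive shaft moves the actuator only a fraction of a millimeter, so small camera adjustments are easy to command from an ordinary rotary drive.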
(15) Referring to FIG. 5, the camera 108 is shown in rear view in a deployed state with the drive shafts 184, 192 and 198 omitted for clarity. The threaded actuator 180 terminates in a ball and socket joint 200 on the rear of the panning linkage 130, which facilitates pivoting at the joint during movement. Similarly, the threaded actuator 188 terminates in a ball and socket joint 202 on a strut 204 of the elevating linkage 132. A proximal end of the threaded actuator 188 is received in a hinged block 206, and rotation of the drive shaft 192 causes the elevating linkage 132 to raise or lower with respect to the longitudinal axis 142. Finally, the threaded actuator 194 is mounted in a first swivel block 208 on the elevating linkage 132 and has a distal end that is clamped to a second swivel block on the tilt linkage 134. Rotation of the drive shaft 198 causes the camera 108 to tilt up or down about the revolute joint 148.
(16) When the drive shafts 184, 192 and 198 are rotated to cause the camera 108 to be deployed, the linkages 120 are displaced from the longitudinal axis 142, causing portions of the drive shafts 192 and 198 running through the panning linkage 130 and elevating linkage 132 to be bent through an angle. The drive shafts 192 and 198 thus have at least a compliant portion within the linkages to facilitate bending of the shafts in response to movements of the camera 108. The compliant portion permits the drive shafts 192 and 198 to be bent through the necessary angle while continuing to permit rotation of the drive shafts for actuating the respective linkages. In some embodiments the drive shafts may be fabricated entirely from a compliant material, while in other embodiments the drive shafts may have some rigid portions and some compliant portions. In one embodiment at least a portion of the drive shafts may be fabricated from a hollow stainless steel tube.
(17) Referring back to FIG. 3, the camera 108 and plurality of connected linkages 120 are generally aligned along the longitudinal axis 142 extending outwardly from the insertion tube, which may define an insertion position for inserting the camera 108, linkages 120 and insertion tube 124 into the body cavity of the patient. Once inserted, the drive shafts 184, 192 and 198 may be rotated to deploy the camera 108 as shown in FIG. 5. Referring to FIGS. 3 and 4, in the embodiment shown the insertion tube 124 includes at least one bore 154 for receiving an instrument for performing surgical operations within the body cavity of the patient. The instrument may be a dexterous surgical instrument such as described in commonly owned PCT Patent Application PCT/CA2013/001076 entitled ARTICULATED TOOL POSITIONER AND SYSTEM EMPLOYING SAME and PCT Patent Application PCT/CA2015/000098 entitled ACTUATOR AND DRIVE FOR MANIPULATING A TOOL, both of which are incorporated herein by reference in their entirety.
(18) Referring back to FIG. 3, the drive interface 126 includes a housing 158 having a front cover 160 and a rear cover 162. Referring to FIG. 6, the drive interface 126 is shown with the front cover 160 omitted and the rear cover 162 removed to reveal the drive components. The drive shafts 184, 192 and 198 are routed back through the respective bores 170, 172, and 174 in the insertion tube 124, are bent upwardly within the housing 158, and have proximal ends 260, 262, and 264 that terminate in respective drive couplers 266, 268, and 270. The drive couplers 266, 268, and 270 are identical, so only the drive coupler 270 will be further described herein. The drive coupler 270 includes a bevel gear assembly 272 that receives a drive torque from the drive unit 106 (shown in FIG. 2) at a drive hub 274 when the drive interface 126 is engaged on the drive unit. The bevel gear assembly 272 rotates in the direction indicated by the arrow, and the rotating motion is coupled through the gears via a shaft 276 to a rotational coupler 278. The rotational coupler 278 is generally operable to receive the proximal end 264 of the drive shaft 198 and to transmit the drive torque to the drive shaft while accommodating linear movement of the proximal end due to resulting movements of the camera 108. When the plurality of connected linkages 120 move, the drive shafts 184, 192 and 198 extend or retract with the motion, which must be accommodated. In the embodiment shown, the rotational coupler 278 has a tubular body 280 for receiving the proximal end 264 of the drive shaft 198. The tubular body 280 has a slotted portion 282 that engages a pin 284 extending through the proximal end of the drive shaft for coupling to the tubular body. The pin 284 couples the rotational torque to the proximal end 264 of the drive shaft 198 while permitting the proximal end and pin to slide within the slotted portion 282 of the tubular body 280, thus accommodating extension or retraction of the drive shaft.
(19) In the embodiment shown the drive coupler 270 also includes a moveable detent mechanism 290, which is coupled to move with the proximal end 264 of the drive shaft 198. The moveable detent 290 has a pin 292 operable to resiliently engage a rear side of a fixed detent plate 294 on the rear cover 162. The fixed detent plate 294 has an opening 296 sized to accommodate a head of the pin 292, the opening being positioned to define a startup position for the proximal end 264 of the drive shaft 198 that places the camera 108 in the insertion position aligned with the longitudinal axis 142, as shown in FIG. 3. In one embodiment, the drive interface 126 is removably received on the drive unit 106 and when received, the pin 292 on the moveable detent mechanism 290 is disengaged to permit movement of the camera 108 away from the insertion position. Prior to removal of the interface 126 from the drive unit 106, the drive unit is operably configured to return the camera 108 to the insertion position causing the pin 292 and the opening 296 on the fixed detent plate 294 to be aligned but not yet engaged. When the drive interface 126 is removed from the drive unit 106, the pin 292 and the opening 296 engage and retain the rotational coupler 278 in the startup position. The drive couplers 266 and 268 have similar moveable and fixed detent mechanisms that operate in the same way. Advantageously, the detent mechanism locks the drive interface 126 in the insertion position when not received on the drive unit 106 preventing movement of the drive hub 274 and other drive hubs which would at least partially deploy the camera 108. The plurality of connected linkages 120 and camera 108 thus remain in the insertion position while being cleaned and sterilized, and when re-used will be in a known orientation.
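The engage/disengage sequence in paragraph (19) amounts to a small interlock state machine. Below is a toy model of that sequence (Python; the class and method names are invented for illustration and do not come from the patent):

```python
class DetentInterlock:
    """Toy model of the moveable/fixed detent sequence in paragraph (19)."""

    def __init__(self):
        self.on_drive_unit = False
        self.in_insertion_position = True  # pin 292 engaged in opening 296

    def attach_to_drive_unit(self):
        # Receiving the interface on the drive unit disengages the pin,
        # freeing the camera to move away from the insertion position.
        self.on_drive_unit = True

    def move_camera(self):
        if not self.on_drive_unit:
            raise RuntimeError("detent engaged: camera locked in insertion position")
        self.in_insertion_position = False

    def request_removal(self):
        # The drive unit first returns the camera to the insertion position so
        # that pin and opening are aligned; detaching then re-engages the pin,
        # locking the drive hubs for cleaning and sterilization.
        self.in_insertion_position = True
        self.on_drive_unit = False

if __name__ == "__main__":
    interlock = DetentInterlock()
    interlock.attach_to_drive_unit()
    interlock.move_camera()       # allowed while on the drive unit
    interlock.request_removal()   # camera parked and locked again
```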
(20) The camera 108 shown in the above embodiments will generally be miniaturized to improve access to the body cavity of the patient and to reduce the size of the incision needed to provide access for the camera in surgical procedures. In some embodiments the camera may include one or more high definition image sensors (not shown), where a pair of image sensors is capable of producing stereoscopic 3D views within the body cavity. The image sensors include sensor electronic circuitry that generates image data representing the captured images. The captured image data must be transmitted back to the drive unit 106, which requires additional data transmission circuitry. The image capture electronic circuitry and data transmission electrical circuitry may generate significant heat within the housing of the camera 108. Referring to FIG. 7, in one embodiment the camera 108 houses the image sensors and image capture electronic circuitry. A data transmission printed circuit board 300 carries the data transmission electrical circuitry and is housed within the elevating linkage 132. The image capture electronic circuitry and data transmission electrical circuitry may be coupled via a flexible interconnect (not shown) that permits the camera 108 to be tilted by the tilt linkage 134. Advantageously, this separation of electrical circuitry places a significant source of heat in the linkage, away from the housing of the camera 108, thus spreading the heat load over a larger area.
(21) In accordance with another disclosed aspect there is provided an apparatus for positioning a camera to capture images inside a body cavity of a patient during a medical procedure. The apparatus includes an articulated arm that includes a plurality of connected moveable linkages, a camera disposed at a distal end of the plurality of connected linkages, the camera including a camera housing enclosing image capture optics, an image sensor, and image capture electronic circuitry operable to produce image data representing images captured by the image sensor, and data transmission electrical circuitry operable to generate and transmit data signals encoding the image data to a host system, the data transmission electrical circuitry being housed within one of the moveable linkages and coupled to the image capture electronic circuitry via a flexible interconnect.
(22) While specific embodiments have been described and illustrated, such embodiments should be considered illustrative of the invention only and not as limiting the invention as construed in accordance with the accompanying claims.
Claims
1. An apparatus for positioning a camera to capture a plurality of images inside a body cavity of a patient during a medical procedure, the apparatus comprising:
an insertion tube;
a plurality of drive shafts extending proximally through the insertion tube, the plurality of drive shafts including:
a first drive shaft configured to actuate a panning movement of the camera;
a second drive shaft configured to actuate an elevating movement of the camera; and
a third drive shaft configured to actuate a tilting movement of the camera;
a plurality of connected linkages extending from a distal end of the insertion tube, the plurality of connected linkages including:
a first linkage connected to the distal end of the insertion tube;
a second linkage connected to the first linkage; and
a third linkage connected to the second linkage,
each of the plurality of connected linkages including a threaded actuator received on a threaded end of a respective drive shaft of the plurality of drive shafts, wherein the respective drive shaft extends between the threaded actuator and a proximal end of the insertion tube, and wherein at least one of the threaded actuators is located distal of at least one of the plurality of connected linkages, wherein the threaded actuators include:
a first threaded actuator received on a threaded end of the first drive shaft such that rotation of the first drive shaft actuates the first threaded actuator to actuate the first linkage of the plurality of linkages to actuate the panning movement of the camera;
a second threaded actuator received on a threaded end of the second drive shaft such that rotation of the second drive shaft actuates the second threaded actuator to actuate the second linkage of the plurality of linkages to actuate the elevating movement of the camera, wherein the second threaded actuator is located distal of the first threaded actuator; and
a third threaded actuator received on a threaded end of the third drive shaft such that rotation of the third drive shaft actuates the third threaded actuator to actuate the third linkage of the plurality of linkages to actuate the tilting movement of the camera, wherein the third threaded actuator is located distal of the second threaded actuator;
a camera disposed at a distal end of the third linkage of the plurality of connected linkages, wherein the camera includes image sensors and image capture electronic circuitry housed within the third linkage; and
a data transmission printed circuit board housed within the second linkage of the plurality of connected linkages, wherein the data transmission printed circuit board is separated from the image sensors and the image capture electronic circuitry of the camera;
wherein each linkage of the plurality of connected linkages is configured to move in response to being actuated by movement of the respective threaded actuator in response to rotation of the respective drive shaft, the associated movements of each linkage of the plurality of connected linkages together being operable to facilitate positioning of the camera within the body cavity of the patient.
2. The apparatus of claim 1 wherein each of the plurality of drive shafts includes a drive coupler at a proximal end of the drive shaft, the drive coupler operable to receive a drive torque for causing rotation of the drive shaft.
3. The apparatus of claim 2 wherein the drive couplers of the plurality of drive shafts are housed within a drive interface operably configured to removably couple to a driver unit, the driver unit being operable to provide the respective drive torques.
4. The apparatus of claim 3 wherein each drive coupler of the plurality of drive shafts comprises a rotational coupler configured to transmit torque to each drive shaft of the plurality of drive shafts, the rotational coupler being operably configured to receive the proximal end of the drive shaft and to transmit the drive torque to the drive shaft while accommodating linear movement of the proximal end of the drive shaft due to resulting movements of the camera.
5. The apparatus of claim 4 wherein at least one rotational coupler comprises a tubular body for receiving the proximal end of the drive shaft, the tubular body including a slotted portion that engages a pin extending through the proximal end of the drive shaft and configured to couple it to the tubular body.
6. The apparatus of claim 4 wherein each rotational coupler of the plurality of drive shafts includes a moveable detent coupled to the proximal end of the drive shaft and operable to resiliently engage a fixed detent in the drive interface corresponding to a startup position for each of the proximal ends of the respective drive shafts, the startup positions of the drive shafts defining an insertion position of the camera.
7. The apparatus of claim 6 wherein the drive interface is configured to be removably received on a drive unit, and wherein: when received, the moveable and fixed detents are disengaged to permit movement of the camera away from the insertion position; prior to removal of the drive interface, the drive unit is operably configured to place the camera in the insertion position causing the moveable and fixed detents to be aligned; and when removed, the moveable and fixed detents are engaged to retain the rotational couplers in the startup position.
8. The apparatus of claim 6 wherein in the insertion position the camera is positioned generally in line with a longitudinal axis extending outwardly from the insertion tube.
9. The apparatus of claim 1 wherein: the first linkage of the plurality of linkages is a panning linkage configured to produce side-to-side motion of the camera; the second linkage of the plurality of linkages is an elevating linkage configured to move the camera away from a longitudinal axis extending outwardly from the insertion tube; and the third linkage of the plurality of linkages is a tilt linkage configured to tilt the camera forward and backward with respect to the longitudinal axis.
10. The apparatus of claim 1 wherein a drive shaft comprises a compliant portion configured to facilitate bending of the drive shaft in response to movements of the camera while continuing to permit rotation of the drive shaft.
11. The apparatus of claim 1 wherein at least one linkage of the plurality of connected linkages includes a revolute joint constrained to permit motion in a single degree of freedom corresponding to the associated movement of the connected linkage, and wherein the threaded actuator is coupled to the connected linkage to cause motion about the revolute joint.
12. A method for positioning a camera to capture images inside a body cavity of a patient during a medical procedure, the camera being disposed at a distal end of a plurality of connected linkages extending from a distal end of an insertion tube, each of the plurality of connected linkages including a threaded actuator received on a threaded end of a respective drive shaft extending between the threaded actuator and a proximal end of the insertion tube, the method comprising:
providing an apparatus for positioning the camera, the apparatus including:
an insertion tube;
a plurality of drive shafts extending proximally through the insertion tube, the plurality of drive shafts including:
a first drive shaft configured to actuate a panning movement of the camera;
a second drive shaft configured to actuate an elevating movement of the camera; and
a third drive shaft configured to actuate a tilting movement of the camera;
a plurality of connected linkages extending from a distal end of the insertion tube, the plurality of connected linkages including:
a first linkage connected to the distal end of the insertion tube;
a second linkage connected to the first linkage; and
a third linkage connected to the second linkage,
each linkage of the plurality of connected linkages including a threaded actuator received on a threaded end of a respective drive shaft of the plurality of drive shafts, wherein the respective drive shaft extends between the threaded actuator and a proximal end of the insertion tube, and wherein at least one of the threaded actuators is located distal of at least one of the plurality of connected linkages, wherein the threaded actuators include:
a first threaded actuator received on a threaded end of the first drive shaft such that rotation of the first drive shaft actuates the first threaded actuator to actuate the first linkage of the plurality of linkages to actuate the panning movement of the camera;
a second threaded actuator received on a threaded end of the second drive shaft such that rotation of the second drive shaft actuates the second threaded actuator to actuate the second linkage of the plurality of linkages to actuate the elevating movement of the camera, wherein the second threaded actuator is located distal of the first threaded actuator; and
a third threaded actuator received on a threaded end of the third drive shaft such that rotation of the third drive shaft actuates the third threaded actuator to actuate the third linkage of the plurality of linkages to actuate the tilting movement of the camera, wherein the third threaded actuator is located distal of the second threaded actuator;
the camera disposed at a distal end of the third linkage of the plurality of connected linkages, wherein the camera includes image sensors and image capture electronic circuitry housed within the third linkage; and
a data transmission printed circuit board housed within the second linkage of the plurality of connected linkages, wherein the data transmission printed circuit board is separated from the image sensors and the image capture electronic circuitry of the camera; and
selectively causing rotation of at least one of the respective drive shafts to cause movement of the respective threaded actuators, the movement of the respective threaded actuators causing associated movement of the at least one of the plurality of connected linkages to position the camera within the body cavity of the patient.
13. The method of claim 12 wherein selectively causing rotation of at least one of the respective drive shafts comprises causing the respective drive shafts to position the camera in an insertion position prior to removal from the body cavity of a patient.
14. The method of claim 13 wherein causing at least one of the respective drive shafts to position the camera in the insertion position comprises causing the camera to be positioned generally in line with a longitudinal axis of the insertion tube.
15. The method of claim 12, further comprising: spreading a heat load of the camera between: image sensors and image capture circuitry of the camera, supported at the distal end of the third linkage of the plurality of linkages; and the data transmission circuit board disposed in the second linkage of the plurality of linkages.
16. An apparatus for positioning a camera to capture a plurality of images inside a body cavity of a patient during a medical procedure, the apparatus comprising:
an insertion tube;
a plurality of connected linkages extending from a distal end of the insertion tube, at least some of the plurality of connected linkages including a threaded actuator received on a threaded end of a respective drive shaft of a plurality of drive shafts, wherein the respective drive shaft extends between the threaded actuator and a proximal end of the insertion tube, and wherein at least one of the threaded actuators is located distal of at least one of the plurality of connected linkages;
a camera disposed at a distal end of a distal-most linkage of the plurality of connected linkages, wherein the camera includes image sensors and image capture electronic circuitry housed within the distal-most linkage; and
a data transmission printed circuit board housed within a linkage of the plurality of connected linkages which is proximal of the distal-most linkage, wherein the data transmission printed circuit board is separated from the image sensors and the image capture electronic circuitry of the camera;
wherein at least some of the plurality of connected linkages are configured to move in response to being actuated by movement of the threaded actuator in response to rotation of the respective drive shaft, the associated movements of the at least some of the plurality of connected linkages together being operable to facilitate positioning of the camera within the body cavity of the patient.
17. The apparatus of claim 16 wherein each of the plurality of drive shafts includes a drive coupler at a proximal end of the drive shaft, the drive coupler operable to receive a drive torque for causing rotation of the drive shaft.
18. The apparatus of claim 17 wherein the plurality of connected linkages comprise at least: a panning linkage configured to produce side-to-side motion of the camera; an elevating linkage configured to move the camera away from a longitudinal axis extending outwardly from the insertion tube; and a tilt linkage configured to tilt the camera forward and backward with respect to the longitudinal axis.
19. The apparatus of claim 18 wherein the panning linkage is connected to the distal end of the insertion tube, the elevating linkage is connected to the panning linkage and the tilt linkage is connected to the elevating linkage, and wherein the camera is connected to the tilt linkage.
20. The apparatus of claim 16 wherein at least some of the plurality of connected linkages comprise a revolute joint constrained to permit motion in a single degree of freedom corresponding to the associated movement of the connected linkage, and wherein the threaded actuator is coupled to the connected linkage to cause motion about the revolute joint.
I thought you posted them
Hand Controller For Robotic Surgery System
DOCUMENT ID: US 11576736 B2
DATE PUBLISHED: 2023-02-14
INVENTOR INFORMATION
NAME: Unsworth; John D.
CITY: Hamilton
STATE: N/A
ZIP CODE: N/A
COUNTRY: CA
APPLICANT INFORMATION
NAME: Titan Medical Inc.
CITY: Toronto
STATE: N/A
ZIP CODE: N/A
COUNTRY: CA
AUTHORITY: N/A
TYPE: assignee
ASSIGNEE INFORMATION
NAME: TITAN MEDICAL INC.
CITY: Toronto
STATE: N/A
ZIP CODE: N/A
COUNTRY: CA
TYPE CODE: 03
APPLICATION NO: 16/913809
DATE FILED: 2020-06-26
DOMESTIC PRIORITY (CONTINUITY DATA)
US 16/913,809 is a continuation of US 16/455,192 (filed 2019-06-27; now US 10,695,139)
US 16/455,192 is a continuation of US 16/160,200 (filed 2018-10-15; now US 10,357,319, issued 2019-07-23)
US 16/160,200 is a continuation of US 15/490,098 (filed 2017-04-18; now US 10,130,434, issued 2018-11-20)
US 15/490,098 is a continuation of US 15/211,295 (filed 2016-07-15; now US 9,681,922, issued 2017-06-20)
US 15/211,295 is a continuation of US 14/831,045 (filed 2015-08-20; now US 9,421,068, issued 2016-08-23)
US 14/831,045 is a continuation of US 14/302,723 (filed 2014-06-12; now US 9,149,339, issued 2015-10-06)
US 14/302,723 is a continuation of US 12/449,779 (now US 8,792,688, issued 2014-07-29), the US national stage of PCT/CA2008/000392 (filed 2008-02-29)
US provisional applications: 60/604,187 (filed 2007-03-01); 60/921,467 (filed 2007-04-03); 60/907,723 (filed 2007-04-13); 60/933,948 (filed 2007-06-11); 60/937,987 (filed 2007-07-02); 61/001,756 (filed 2007-11-05)
US CLASS CURRENT: 1/1
CPC CURRENT
CPCI: A61B 34/74 (2016-02-01)
CPCI: A61B 34/20 (2016-02-01)
CPCI: H05K 999/99 (2013-01-01)
CPCI: G06F 3/0308 (2013-01-01)
CPCI: G01D 5/262 (2013-01-01)
CPCI: G06T 11/00 (2013-01-01)
CPCI: A61B 90/06 (2016-02-01)
CPCI: A61B 34/25 (2016-02-01)
CPCI: A61B 34/70 (2016-02-01)
CPCI: G06F 3/0325 (2013-01-01)
CPCI: A61B 34/37 (2016-02-01)
CPCI: G06F 3/016 (2013-01-01)
CPCI: A61B 90/361 (2016-02-01)
CPCI: A61B 34/10 (2016-02-01)
CPCI: A61B 34/76 (2016-02-01)
CPCI: A61B 34/30 (2016-02-01)
CPCA: A61B 90/36 (2016-02-01)
CPCA: A61B 2034/107 (2016-02-01)
CPCA: A61B 2034/2068 (2016-02-01)
CPCA: A61B 2034/2051 (2016-02-01)
CPCA: A61B 2090/365 (2016-02-01)
CPCA: A61B 2034/2055 (2016-02-01)
CPCA: A61B 2090/062 (2016-02-01)
CPCA: A61B 2017/00703 (2013-01-01)
CPCA: A61B 2017/00207 (2013-01-01)
Abstract
A robotic control system has a wand that emits multiple narrow beams of light, which fall on a light sensor array or, with a camera, on a surface, defining the wand's changing position and attitude. A computer uses this information to direct the relative motion of robotic tools or remote processes, such as those that are controlled by a mouse, but in three dimensions, with motion compensation means and means for reducing latency.
Background/Summary
CROSS-REFERENCE TO RELATED APPLICATIONS
(1) This application is a continuation of U.S. patent application Ser. No. 16/455,192, filed Jun. 27, 2019, which is a continuation of U.S. patent application Ser. No. 16/160,200, filed Oct. 15, 2018, which is a continuation of U.S. patent application Ser. No. 15/490,098, filed Apr. 18, 2017, which is a continuation of U.S. patent application Ser. No. 15/211,295, filed Jul. 15, 2016, which is a continuation of U.S. patent application Ser. No. 14/831,045, filed Aug. 20, 2015, which is a continuation of U.S. patent application Ser. No. 14/302,723, filed Jun. 12, 2014, which is a continuation of U.S. patent application Ser. No. 12/449,779, filed Aug. 25, 2009, which is a national stage entry under 35 U.S.C. § 371 of International Patent Application No. PCT/CA2008/000392, filed Feb. 29, 2008, which claims the benefit of the filing dates of U.S. Patent Application No. 60/904,187, filed 1 Mar. 2007, under the title LIGHT SENSOR ARRAY FORMING CAGE AROUND OPERATOR MANIPULATED WAND USED FOR CONTROL OF ROBOT OR REMOTE PROCESSES; U.S. Patent Application No. 60/921,467, filed 3 Apr. 2007, under the title OPERATOR MANIPULATED WAND WHICH CASTS BEAM ONTO LIGHT SENSOR ARRAY FOR CONTROL OF ROBOT OR REMOTE PROCESSES, WITH HAPTIC FEEDBACK; U.S. Patent Application No. 60/907,723, filed 13 Apr. 2007, under the title OPERATOR MANIPULATED WAND WHICH CASTS BEAM ONTO LIGHT SENSOR ARRAY OR SURFACE FOR CONTROL OF ROBOT OR REMOTE PROCESSES, WITH OR WITHOUT HAPTIC FEEDBACK; U.S. Patent Application No. 60/933,948, filed 11 Jun. 2007, under the title OPERATOR MANIPULATED WAND WHICH CASTS BEAM(S) ONTO LIGHT SENSOR ARRAY OR SURFACE FOR CONTROL OF ROBOT OR REMOTE PROCESSES IN THREE DIMENSIONS, WITH HAPTIC FEEDBACK AND MOTION COMPENSATION; U.S. Patent Application No. 60/937,987, filed 2 Jul. 2007, under the title OPERATOR MANIPULATED WAND WHICH CASTS BEAM(S) OR SHAPES ONTO LIGHT SENSOR ARRAY OR SURFACE FOR CONTROL OF ROBOT OR REMOTE PROCESSES IN THREE DIMENSIONS, WITH HAPTIC FEEDBACK AND MOTION COMPENSATION; and U.S. Patent Application No. 61/001,756, filed 5 Nov. 2007, under the title OPERATOR MANIPULATED WAND WHICH CASTS BEAM(S) OR SHAPES ONTO LIGHT SENSOR ARRAY OR SURFACE FOR CONTROL OF ROBOT OR REMOTE PROCESSES IN THREE DIMENSIONS, WITH HAPTIC FEEDBACK, MOTION AND LATENCY COMPENSATION. The content of these patent applications is hereby expressly incorporated by reference into the detailed description hereof.
FIELD OF INVENTION
(1) This invention relates to operator interfaces for controlling robots and remote processes, including pointing devices, such as a mouse. It also relates to methods and systems for controlling remote processes.
BACKGROUND OF THE INVENTION
(2) Real-time operator control of robots has been accomplished with electro-mechanical controls such as joysticks and multiple axis hand grips. These devices suffer from a limited range of motion due to being constrained by the geometry of the control device. In other applications, such as surgery, the operator's hand and finger motions used to operate the device do not closely approximate those motions he would use in conducting the operation by hand. This requires the surgeon to use a different repertoire of hand motions for the robot control than he would for conducting the operation by hand. Other devices, such as a glove actuator, while more closely approximating the actual motion of the hand, suffer from a lack of accuracy regarding the motion of the instrument the hand and fingers grasp; and it is the working end of the instrument, which is mimicked by the robot's tools, that does the work. Other interfaces have been developed that rely on multiple cameras to record the motion of the operator's hands with or without faux instruments, but these can also suffer from a lack of accuracy.
(3) These devices also suffer from mechanical wear and tear, which compromises accuracy and requires maintenance.
(4) These devices suffer from latency, especially when the operator is separated from the worksite by sufficient distances that there is a significant delay in transmission.
(5) It is an object of some aspects of the invention to address one or more of the above existing concerns.
(6) Other concerns may also be addressed in those aspects, or separately in other aspects of the invention, as will be evident from the remainder of this specification.
SUMMARY OF THE INVENTION
(7) In a first aspect the invention provides a method comprising the steps of actively generating an image pattern on a surface of a first object, detecting the image pattern on the surface of the first object, wherein either the step of actively generating or the step of detecting is performed at a second object spaced away from the first object, and determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern and utilizing reference data for actively generating the image pattern.
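A hedged numerical sketch of this first aspect (Python with NumPy and SciPy, both assumed available; all geometry values are invented): given the beam origins and directions in the wand frame, which play the role of the "reference data for actively generating the image pattern", and the detected spot locations on a planar surface at z = 0, a nonlinear least-squares solve recovers the wand pose that reproduces the detections.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Reference data: beam origins and unit directions in the wand frame
# (hypothetical; a real wand would be calibrated).
ORIGINS = np.zeros((3, 3))
DIRS = np.array([[0.0, 0.1, -1.0], [0.1, -0.1, -1.0], [-0.1, -0.1, -1.0]])
DIRS /= np.linalg.norm(DIRS, axis=1, keepdims=True)

def project_spots(pose):
    """Intersect each beam with the plane z = 0 for pose = (rx, ry, rz, tx, ty, tz)."""
    rot = Rotation.from_euler("xyz", pose[:3]).as_matrix()
    o = ORIGINS @ rot.T + pose[3:]       # beam origins in world coordinates
    d = DIRS @ rot.T                     # beam directions in world coordinates
    t = -o[:, 2] / d[:, 2]               # ray parameter where each beam hits z = 0
    return (o + t[:, None] * d)[:, :2]   # (x, y) spot locations on the surface

def solve_pose(detected_xy, guess):
    """Find the pose whose projected spots best match the detected spots."""
    res = least_squares(lambda p: (project_spots(p) - detected_xy).ravel(), guess)
    return res.x

if __name__ == "__main__":
    true_pose = np.array([0.05, -0.02, 0.3, 10.0, 5.0, 100.0])
    spots = project_spots(true_pose)                      # simulated detections
    print(solve_pose(spots, np.array([0, 0, 0, 0, 0, 80.0])))
```

With only three spots the solution can be ambiguous, which is one motivation for the asymmetrical multi-beam patterns described in paragraphs (15) and (18); additional beams simply add rows to the residual.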
(8) The method may further comprise the step of actively displaying on the first surface an image of a remote process that is controlled in accordance with the determined parameters of the pose of the second object.
(9) The step of actively generating may comprise the step of projecting a known image pattern to actively generate the image pattern on the surface of the first object, wherein the step of projecting is from either the second object if the step of actively generating is performed at the second object or a first location other than the second object and the first object if the step of detecting is performed at the second object.
(10) The step of projecting may comprise projecting the image pattern from the second object. The step of detecting may comprise detecting the image pattern at the surface of the first object. The step of projecting may comprise projecting the image pattern from the first location. The step of detecting may further comprise detecting the image pattern from a second location other than the first object and the second object.
(11) The method may further comprise the step of maintaining the first object in a known pose during the steps of projecting and detecting. The method may further comprise the step of maintaining the second object in a known pose during the steps of projecting and detecting.
(12) The surface of the first object may be substantially planar.
(13) The method may further comprise the step of detecting movement of the detected pattern, and the step of determining parameters of the pose of the second object comprises determining movement of parameters of the pose of the second object from the detected movement of the detected pattern.
(14) The method may further comprise the step of detecting linear movement of the second object parallel to the surface by detecting motion against texturing on the surface.
(15) The step of projecting may further comprise projecting the image pattern such that the image pattern is asymmetrical about an axis of rotation in line with a direction of projection of the image pattern. The step of projecting may further comprise projecting the image pattern such that the size of the image pattern varies continuously with distance from the first object in line with a direction of projection of the image pattern.
(16) The step of actively generating the image pattern may include actively generating elements of the image pattern over time, and the step of detecting includes detecting elements of the formed image pattern in synchronization with actively generating the image elements.
(17) The steps of actively generating and detecting may comprise, respectively, actively generating the image pattern on a surface which forms a three dimensional cavity with access for the second object through an opening in the first object, and detecting the image pattern formed on such surface.
(18) The surface may comprise a plurality of substantially planar sub-surfaces. The step of projecting may further comprise projecting the image pattern as a combination of three or more spot beams of known relationship. The step of actively generating may further comprise actively generating the image pattern as a combination of three or more spot beams of known relationship.
(19) The step of projecting may comprise projecting the image pattern with image pattern elements directed at a plurality of angles about an axis of the second object. The method may further comprise the step of a user imparting movement to the second object.
(20) The step of projecting may further comprise projecting encoded information, other than pose-related information, in an image pattern projected from the second object.
(21) The step of determining an element of the pose of the second object may further comprise determining a distance from the image pattern on the surface of the first object to a reference point on the second object based upon the size of the detected image pattern.
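Paragraph (21) is simple triangulation: if a beam pair leaves the wand's reference point with a known divergence, the spot separation on the surface scales linearly with range and can be inverted. A sketch of that relation follows (Python; the divergence angle is a made-up calibration value):

```python
import math

HALF_ANGLE_RAD = math.radians(5.0)  # hypothetical half-angle between the beam pair

def separation_at(distance_mm: float) -> float:
    """Spot separation on the surface for two beams diverging from one point."""
    return 2.0 * distance_mm * math.tan(HALF_ANGLE_RAD)

def distance_from(separation_mm: float) -> float:
    """Invert the relation: range of the wand's reference point from spot spacing."""
    return separation_mm / (2.0 * math.tan(HALF_ANGLE_RAD))

if __name__ == "__main__":
    print(distance_from(separation_at(150.0)))  # round-trips to 150.0
```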
(22) In a second aspect the invention provides a method of controlling instruments of a surgical robot in use on a heart, the method comprising the steps of receiving a signal that a heart is about to contract, and initiating movement of the surgical robot instruments so as to accommodate movement of the heart in the vicinity of the instruments during contraction as movement of the heart occurs.
(23) The step of receiving may further comprise receiving a signal related to an anticipated nature of the contraction, and the step of initiating further comprises utilizing the anticipated nature of the contraction from the signal to control the accommodation. The method may comprise the steps of detecting a contour of movement of a heart by projecting an image pattern onto a surface of the heart in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the heart, and determining movement of the heart based on a transformation of the detected image pattern from reference image pattern data, and moving the surgical robot instruments so as to accommodate the contour of movement of the heart in the vicinity of the instrument, so that operator-intended motions can be carried out from this normalized position.
(24) In a third aspect the invention provides a method of controlling an instrument of a surgical robot comprising the steps of detecting a contour of movement of a heart by projecting an image pattern onto a surface of the heart in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the heart, and determining movement of the heart based on a transformation of the detected image pattern from reference image pattern data, and moving the surgical robot instruments so as to accommodate the contour of movement of the heart in the vicinity of the instrument, so that operator-intended motions can be carried out from this normalized position.
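The essence of the second and third aspects is feed-forward compensation: the surface displacement estimated from the projected pattern is superimposed on the operator's intended pose, so the operator effectively works on a motionless ("normalized") heart. A minimal sketch of one control cycle (Python; all names are illustrative, not the patent's):

```python
import numpy as np

def normalize_instrument_motion(surface_offset, operator_target):
    """One cycle of feed-forward motion compensation.

    surface_offset: current displacement of the heart surface near the
        instrument, estimated from the detected image pattern.
    operator_target: instrument tip position requested by the operator,
        expressed relative to the resting (normalized) surface.
    Returns the commanded tip position in world coordinates.
    """
    return operator_target + surface_offset

if __name__ == "__main__":
    beat = np.array([0.0, 0.0, 3.0])     # surface lifted 3 mm by a contraction
    target = np.array([1.0, 2.0, 0.0])   # operator's intended tip position
    print(normalize_instrument_motion(beat, target))  # -> [1. 2. 3.]
```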
(25) In a fourth aspect the invention provides a robot system comprising a robot including and controlling an instrument, controls for an operator to control the robot to operate the instrument, a controller for determining quantified information related to motion of the instrument, and a display for displaying the information from the controller to an operator of the robot during use of the robot.
(26) In a fifth aspect the invention provides a method of conveying information regarding the latency between motion of a controller and motion of an instrument in a remote process controlled by the controller, the method comprising displaying to an operator of the controller an image of the instrument and at least a portion of the remote process surrounding the instrument in a direction of motion of the instrument, and overlaying on the displayed image, an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
(27) In a sixth aspect the invention provides a method of conveying information regarding the latency between motion of a controller of a surgical robot and motion of an instrument of the surgical robot controlled by the controller, the method comprising displaying on a display visible to an operator of the controller, an image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and overlaying on the displayed image, an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument, and the requested pose of the instrument.
(28) In a seventh aspect the invention provides a method of controlling latency between motion of a controller and motion of the instrument in a remote process controlled by the controller, the method comprising the steps of acquiring an original image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and displaying the original image to an operator of the controller, acquiring an instruction from the controller to move the instrument to an instructed pose relative to the original image, transmitting the instruction and information to identify the original image to the remote process, acquiring an updated image of the remote process, performing pattern recognition at the remote process on the image identified by the transmitted information and the updated image to determine a desired pose of the instrument relative to the updated image that corresponds to the instructed pose on the original image, and moving the instrument to the desired pose.
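A toy version of this seventh-aspect pipeline: the command is tagged with the ID of the image the operator was looking at; at the remote site the stored image is registered against the current one, and the instructed target is shifted accordingly before the instrument moves. The sketch below (Python, NumPy only) assumes the scene moved by a pure 2D translation, and uses FFT phase correlation as a stand-in for the patent's unspecified "pattern recognition".

```python
import numpy as np

def phase_shift(old_img, new_img):
    """(dy, dx) translation that maps old_img onto new_img, via phase correlation."""
    f = np.conj(np.fft.fft2(old_img)) * np.fft.fft2(new_img)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    wrap = shift > np.array(corr.shape) / 2   # shifts past half the image are negative
    shift[wrap] -= np.array(corr.shape, dtype=float)[wrap]
    return shift

def corrected_target(stored_images, image_id, instructed_yx, current_img):
    """Re-express an instructed (y, x) target, given against an older image,
    in the coordinates of the scene as it looks now."""
    return np.asarray(instructed_yx, float) + phase_shift(stored_images[image_id], current_img)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    old = rng.random((64, 64))
    new = np.roll(old, (3, 5), axis=(0, 1))          # scene drifted during latency
    print(corrected_target({7: old}, 7, (20, 20), new))  # -> about [23. 25.]
```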
(29) In an eighth aspect the invention provides a method comprising the steps of actively displaying on a surface of a first object an image of a remote process that is controlled in accordance with parameters of the pose of a second object spaced away from the first object, detecting an image pattern on the surface of the first object, wherein either the image pattern is actively generated from the second object or the image pattern is detected at the second object, determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern and utilizing reference data for the image pattern, and controlling the remote process in accordance with the determined parameters of the pose of the second object.
(30) In a ninth aspect the invention provides a method comprising the steps of projecting a known image pattern on to a surface of a first object, wherein the step of projecting is from either a second object or a first location other than the second object and the first object, and the first object, second object and first location are at a distance from one another, detecting the image pattern formed on the surface of the first object, wherein if the step of projecting is from the second object then the step of detecting is from either the first object, second object or a second location other than the first and the second object, and if the step of projecting is from the first location then the step of detecting is from the second object, and determining parameters of the pose of the second object utilizing the detected image pattern and reference image pattern data for the known pattern.
(31) In a tenth aspect the invention provides a method of controlling an instrument of a robot comprising the steps of detecting a contour of movement of an object being worked by the instrument by projecting an image pattern onto a surface of the object in the vicinity of the instrument, repeatedly detecting the image pattern formed on the surface of the object, and determining movement of the object based on a transformation of the detected image pattern from reference image pattern data, and moving the robot instruments so as to accommodate the contour of movement of the object in the vicinity of the instrument, so that operator-intended motions can be carried out from this normalized position.
(32) In an eleventh aspect the invention provides an input interface comprising a pattern generator for actively generating an image pattern on a surface of a first object, a detector for detecting the image pattern on the surface of the first object, wherein the pattern generator or the detector is at a second object spaced away from the first object, and a computer for determining parameters of the relative poses of the second object and the surface utilizing the detected image pattern from the detector and utilizing reference data for actively generating the image pattern.
(33) In a twelfth aspect the invention provides a system comprising a surgical robot including an instrument controlled by the robot, a computer for receiving a signal that a heart being operated on by the instrument is about to contract, and generating instructions to the robot to initiate movement of the surgical robot instrument so as to accommodate movement of the heart in the vicinity of the instruments during contraction as movement of the heart occurs.
(34) In a thirteenth aspect the invention provides a robot system comprising a robot including and controlling an instrument, controls for an operator to control the robot to operate the instrument, a controller for determining quantified information related to motion of the instrument, and a display for displaying the information from the controller to an operator of the robot during use of the robot.
(35) In a fourteenth aspect the invention provides a system for conveying information regarding the latency between motion of a controller and motion of an instrument in a remote process controlled by the controller, the system comprising a computer and a display for displaying to an operator of the controller an image of the instrument and at least a portion of the remote process surrounding the instrument in a direction of motion of the instrument, and for overlaying on the displayed image an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument and the requested pose of the instrument.
(36) In a fifteenth aspect the invention provides a system for conveying information regarding the latency between motion of a controller of a surgical robot and motion of an instrument of the surgical robot controlled by the controller, the system comprising a computer and a display, visible to an operator of the controller, for displaying an image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument, and for overlaying on the displayed image an image of the instrument in a pose requested by motion of the controller, such that the operator can see an image of the actual pose of the instrument and the requested pose of the instrument.
(37) In a sixteenth aspect the invention provides a system for controlling latency between motion of a controller and motion of the instrument in a remote process controlled by the controller, the system comprising: a camera for acquiring an original image of the instrument and at least a portion of a surgical field surrounding the instrument in a direction of motion of the instrument; a display for displaying the original image to an operator of the controller; means for acquiring an instruction from the controller to move the instrument to an instructed pose relative to the original image and transmitting the instruction and information identifying the original image to the remote process, wherein the camera is also for acquiring an updated image of the remote process; and a computer for performing pattern recognition at the remote process on the image identified by the transmitted information and the updated image to determine a desired pose of the instrument relative to the updated image that corresponds to the instructed pose on the original image, and for instructing the remote process to move the instrument to the desired pose.
(38) In a seventeenth aspect, the invention provides a computer readable medium storing program instructions executable by one or more processors in one or more computers for causing the computers to implement the method of any one of the method aspects.
(39) Other aspects of the present invention and detailed additional features of the above aspects will be evident based upon the detailed description, FIGS. and claims herein, including for example systems corresponding to the methods of the above aspects, methods corresponding to the systems of the above aspects, input interfaces, wands, robots, computing systems, alignment systems, software, methods of using the above, and the like.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) For a better understanding of the present invention and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the accompanying drawings that show the preferred embodiment of the present invention and in which:
(2) FIG. 1 is a perspective view of portions of an input interface including a first object, an open sided box having a surface (light sensor array), and a second object (a wand) projecting an image pattern (light beams) to actively generate an image pattern on the surface (spots of light) which is detected by the light sensor in accordance with various example embodiments of aspects of the present invention.
(3) FIG. 2 is a perspective view of portions of an alternative input interface including a buckyball shaped sensor array in accordance with various example embodiments of aspects of the present invention.
(4) FIG. 3 is a perspective view of additional portions of an input interface, utilizing, for example, the input interface of FIG. 1, and including transmission means from the sensor array to computer, and a three dimensional viewer including superimposed force feedback information on top of a three dimensional image of a work space in accordance with various example embodiments of aspects of the present invention.
(5) FIG. 4 and FIG. 5 are perspective views of details of two examples of force feedback information for the input interface of FIG. 3.
(6) FIG. 6 is a perspective view and block view of various elements of a robotic control system, including the input interface of FIG. 1, in accordance with various embodiments of aspects of the present invention.
(7) FIG. 6a is an example of an alternative input interface which uses only a single panel to form the sensor array.
(8) FIG. 6a1 is a perspective view of a further alternative user interface, similar to that illustrated in FIG. 6a, except that the sensor array is comprised of two panels, at an angle relative to each other, known to a computer.
(9) FIG. 6a2 is a perspective view of another alternative user interface, similar to that illustrated in FIG. 6a, except that the camera is located in a stationary position above the surface.
(10) FIG. 6b is a block diagram illustrating another further alternate user interface in which a lens is included and which tracks the spots projected onto a surface and transmits the information wirelessly to the controller/encoder and/or the computer.
(11) FIG. 6c is a cross-sectional, perspective view of an example embodiment of a faux instrument wand which includes a lens.
(12) FIG. 7 is a cross-sectional, perspective view of an example embodiment of a wand, including rechargeable battery and controller/encoder, various example controls, and light emitter cluster, which houses the light emitters.
(13) FIG. 8 is a cross-sectional, perspective view of a faux forceps wand.
(14) FIG. 8a is a cross-sectional, perspective view of an example embodiment of a wand similar to FIG. 7, but instead of multiple fixed emitters, there is one emitter, the beam of which is redirected by a mirror or other beam redirecting device.
(15) FIG. 8b is a cross-sectional, perspective view of the distal end of the wand of FIG. 8a, illustrating an emitter beam which is redirected by a mirror.
(16) FIG. 8c is a cross-sectional, perspective view of the distal end of the wand of FIG. 8a, illustrating an emitter beam which is redirected by mirrors.
(17) FIG. 8d is a perspective view of a surface on which an addressing grid has been overlain. For diagrammatical clarity, only parts of the grid have been illustrated, it being understood that the grid is continuous over the surface.
(18) FIG. 9 is a cross-sectional, perspective view of an example embodiment of a faux forceps wand which includes a finger slider and sensor and/or haptic feedback device.
(19) FIG. 10 is a perspective view of a further example embodiment of an operator viewer, with force feedback information as illustrated in detail, and tool icons of available tools and a selected tool.
(20) FIG. 11 is a cross-sectional, perspective view which illustrates an example of relative movement of wand controls and consequent movement of a tool.
(21) FIGS. 12, 13 and 14 are cross-sectional, perspective views which illustrate an example of relative movement of wand controls and consequent movement of a tool relative to a bolt.
(22) FIGS. 15 and 16 are cross-sectional, perspective views which illustrate an example of tools with adjustable extensions, which can retract in order to compensate for a rising and falling surface in accordance with an example embodiment of an aspect of the present invention.
(23) FIG. 17 is a cross-sectional, perspective view of a camera tool which illustrates the effect of spacing of neighboring projected dots on a surface at two stages of movement. The separations, along with known information (the angles of the beams relative to the tool and the position of the camera tool), provide a computer with a description of the changing position of the surface at each point in time.
(24) FIG. 18 is a perspective view of a distal end of the camera tool of FIG. 17 projecting beams at various predetermined angles, relative to the tool.
(25) FIG. 19 is a cross-sectional, block diagram of an example passive haptic feedback device in which the flow of an electro-rheological or magneto-rheological fluid is controlled by an electrical or magnetic field between elements, which can be electrodes or magnetic coils in accordance with an embodiment of an aspect of the present invention.
(26) FIG. 20 is a cross-sectional, block view of an alternate embodiment of a passive haptic feedback device in which the flow of fluid, such as saline or glycerin is controlled by an electrical valve.
(27) FIG. 21 is a cross-sectional, perspective view of the operator's view of a worksite as viewed through an example embodiment of a viewer with eyepieces, illustrating superimposed tool cursors of the operator's intended position of tools at the worksite, and the actual position of the tools at the worksite.
(28) FIG. 22 is a cross-sectional, perspective view of an example wand attached to any body part, tool, or other object, by means of connectors, which have complementary indexing means, to ensure their proper alignment.
(29) FIG. 23 is a cross-sectional, perspective view of two wands that can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill is positioned so that it can drill through a hidden hole.
(30) FIG. 24 is a cross-sectional, perspective view of one wand, and sensor array assembly (an example detector) which can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill is positioned so that it can drill through a hidden hole. A sensor array replaces the emitter housing in the sensor array assembly but the assembly is otherwise similar in construction to a wand. The sensor array communicates with a controller/encoder through communicating means and thence wirelessly to a computer.
(31) FIG. 25 is a cross-sectional, perspective view of two combination wand and sensor array assemblies 47 which have been daisy chained with two other combination units (not shown) and in combination with a sensor array 1.
(32) FIG. 26 is a graphic illustration of a screen plane (surface of a first object) and device planes with mounted lasers (second object) and related coordinate systems.
(33) FIG. 27 is a graphic illustration of a linear translation between coordinate systems of FIG. 26.
(34) FIG. 28 is a graphic illustration of a rotational translation between coordinate systems of FIG. 26.
(35) FIGS. 29a to 29e are partial, sectional, perspective views of the operating theatre and remote work site, which illustrate methods to reduce or eliminate operational latency of the system.
DETAILED DESCRIPTION
(36) An object's location, sometimes referred to as its position, and its orientation, sometimes referred to as its attitude, will together be called the “pose” of the object, where it is understood that the orientation of a point is arbitrary and that the orientation of a line, a plane or other special geometrical objects may be specified with only two, rather than the usual three, orientation parameters.
(37) A pose can have many spatial parameters, referred to herein as parameters. As described above, such parameters may include the location and orientation of the object. Parameters may include location information in one, two or three dimensions. Pose location parameters may also be described in terms of vectors, providing a direction and a distance. Pose orientation parameters may be defined in terms of an axis of the object, for example, the skew (rotation about the axis), rotation (rotation of the axis about an intersection of the axis and a line normal to a plane), and tilt (rotation of the axis about an intersection of the axis and a line parallel to the plane). Other pose orientation parameters are sometimes referred to as roll, pitch and yaw.
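For concreteness, the usual way such orientation parameters compose into a rotation is shown below (Python with NumPy). This uses a generic roll-pitch-yaw convention; the patent does not prescribe any particular convention, so this is an illustration only.

```python
import numpy as np

def rpy_matrix(roll, pitch, yaw):
    """Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    return rz @ ry @ rx

if __name__ == "__main__":
    # A full pose is then this 3x3 rotation plus a 3-vector of location.
    print(rpy_matrix(0.1, 0.2, 0.3))
```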
(38) It will be evident to those skilled in the art that there are many possible parameters to a pose, and many possible methods of deriving pose information. Some parameters will contain redundant information between parameters of the pose. The principles described herein include all manner of deriving pose information from the geometric configuration of detector and surface described herein, and are not limited to the specific pose parameters described herein.
(39) Pose parameters may be relative to an object (such as a surface), or some other reference. Pose parameters may be indirectly derived; for example, a pose relative to a first object may be derived from a pose relative to a second object and a known relationship between the first object and the second object. Pose parameters may be relative in time; for example, a change in the pose of an object resulting from motion over time may itself be a pose element, without determining the original pose element.
(40) The description provided herein is made with respect to exemplary embodiments. For brevity, some features and functions will be described with respect to some embodiments while other features and functions will be described with respect to other embodiments. All features and functions may be exchanged between embodiments as the context permits, and the use of individual features and functions is not limited by the description to the specific embodiments with which the features and functions are described herein. Similarly, the description of certain features and functions with respect to a given embodiment does not limit that embodiment to requiring each of the specific features and functions described with respect to that embodiment.
(41) In this description one or more computers are referenced. It is to be understood that such computers comprise some form of processor and memory, which may or may not be integrated in a single integrated circuit. The processor may be provided by multiple CPUs, which may be integrated on a single integrated circuit as is becoming more and more common, or by a single CPU. Dedicated processors may be utilized for specific types of processing, for example, those that are mathematically computationally intensive. The functions of the computer may be performed in a single computer or may be distributed on multiple computers connected directly, through a local area network (LAN) or across a wide area network (WAN) such as the Internet. Distributed computers may be in a single location or in multiple locations. Distributed computers may be located close to external devices that utilize their output or provide their input in order to reduce transmission times for large amounts of data; for example, image data may be processed in a computer at the location where such data is produced so that, rather than transmitting entire image files, lesser amounts of post-processed data may be transmitted to where they are required.
(42) The processing may be executed in accordance with computer software (computer program instructions) located in the memory to perform the various functions described herein, including for example various calculations and the reception and transmission of inputs and outputs of the processor. Such software is stored in memory for use by the processor. Typically, the memory that is directly accessible to the processor will be read only memory (ROM) or random access memory (RAM) or some other form of fast access memory. Such software, or portions thereof, may also be stored in longer term memory for transfer to the fast access memory. Longer term storage may include for example a hard disk, CD-ROM in a CD-ROM drive, DVD in a DVD drive, or other computer readable medium.
(43) The content of such software may take many forms while carrying out the features and functions described herein and variants thereof as will be evident to those skilled in the art based on the principles described herein.
(44) Patterns include for example the spots emitted from the emitters described herein. Patterns also include other examples provided herein such as ellipses and other curves, and may further include asymmetrical patterns such as bar codes. Actively generating a pattern includes for example generating a pattern on a computer monitor (herein called a screen) or other display device. Actively generating a pattern may alternatively include projecting the pattern onto a surface. A detector includes for example a camera or a sensor array incorporating for example CCD devices, and the like. Reference pattern data may include for example the location and direction of emitters, or other projectors.
(45) Objects as used herein are physical objects, and the term is to be construed generally unless the context requires otherwise. When projection or detection is described as occurring at an object, it is intended to include projection or detection from objects fixedly connected to the initial object, and such a projector or detector is considered to be part of the initial object.
(46) Referring to the FIGS., like items will be referenced with the same reference numerals from FIG. to FIG., and the description of previously introduced items will not be repeated, except to the extent required to understand the principle being discussed. Further, similar, although not identical, items may be referenced with the same initial reference numeral and a distinguishing alphabetic suffix, possibly followed by a numerical suffix.
(47) In some aspects embodiments described herein provide a solid state operator interface which accurately records the movements of the working end of an operator's faux instruments, which are then accurately reported to the working end of the robot's tools. In the case of a surgical robot, the operator (surgeon) manipulates instruments similar to those the surgeon would normally use, such as a tubular wand for a scalpel, and an instrument similar in shape to forceps. This approach reduces the training that is required to become adept at using a robotic system, and also avoids the deterioration of skills learned in hands-on operating procedures.
(48) In some aspects embodiments described herein provide an operator interface that permits an input device, and the hands of the operator, to move in a larger space, which would eliminate or reduce the occasions in which the system requires resetting a center point of operator interface movements.
(49) In some aspects embodiments described herein provide an interface which allows for fine coordinated movements by the input device, and by both hands, such as when the surgeon attaches donor and recipient vessels with sutures.
(50) In some aspects embodiments described herein provide an interface that may include haptic feedback.
(51) In some aspects embodiments described herein provide an interface system that can position the tools at any point in time so that non-operationally created motions are fully compensated for, and a relatively small patch of surface, where the procedure is being carried out, is rendered virtually static to the operator's point of view.
(52) In some aspects, embodiments described herein provide a method for virtually limiting latency, during the operation. In some other aspects, embodiments described herein provide a method for alerting an operator to the existence and extent of latency during the operation.
(53) Referring to FIG. 1, an operator's hand 6 controls the motion of the wand 2 within a sensor array 1, comprised of five rectangular segments forming an open-sided box. Narrow light beams 4 emanate from a light-emitting cluster 3 and project spots of light 5 on the light sensors of the sensor array 1.
(54) Referring to FIG. 2, the box sensor array 1 of FIG. 1 is replaced by a buckyball-shaped sensor array 1a, comprised of hexagonal and pentagonal segments, and an opening 7, which permits the wand 2 to be inserted into the sensor array 1a.
(55) Referring to FIG. 3, a system includes the sensor array 1 and transmission means 11a that deliver signals from the segments of the sensor array 1 at interface pads 11b to computer 11. A three dimensional viewer 8 includes superimposed force feedback information 10b, 10c, as shown in detail 10a, on top of the three dimensional image of the work space.
(56) Referring to FIG. 4 and FIG. 5, two examples are shown of the force feedback information 10d, 10e, 10f and 10g, which may be used in substitution for, or in addition to, haptic feedback.
(57) Referring to FIG. 6, various elements of a robotic control system are shown. FIG. 6 illustrates an example where a body 14 is being operated on through an incision 14a. The robot in this case is fitted with a tool controller 15 and example tools: forceps 15b, three dimensional camera 15c and cauterizing scalpel 15d. The robot's principal actuators 15a control the various movements of the tools in response to the positions of the wands 2 including the goose-neck camera guiding wand 13, and commands of the operator.
(58) Referring to FIG. 6a, an example of a user interface using a single panel to form the sensor array 1 is shown.
(59) Referring to FIG. 6a1, a user interface is shown that is similar to that illustrated in FIG. 6a, except that the sensor array 1b is comprised of two panels at an angle relative to each other, which is known to the computer 11.
(60) Referring to FIG. 6a2, an interface is shown that is similar to that illustrated in FIG. 6a, except that the camera 3c is located in a stationary position above the surface 1b, such that it can view the spots of light 5 projected onto the surface and their position on the surface, but at an angle which minimizes or eliminates interference by the wand 2 with the emitted beams 4. The camera 3c is connected to the computer 11 by connecting means 3b1.
(61) Referring to FIG. 6b, a user interface is shown in which a lens 3c is included and which tracks the spots of light 5 projected onto a surface 1b, which may not contain sensors, and transmits the information wirelessly to the controller/encoder 18 and/or the computer 11.
(62) Referring to FIG. 6c, a faux forceps wand 2b is shown that includes a lens 3c.
(63) Referring to FIG. 7, a generally cylindrical wand 2 is shown that includes a rechargeable battery 17 and a controller/encoder 18, various example controls 19, 20, 20a and a light emitter cluster 3, which houses light emitters 3a.
(64) Referring to FIG. 8, the faux forceps wand 2b is shown that has finger holes 21, return spring 21a and sensor/haptic feedback controller 21b.
(65) Referring to FIG. 8a, the wand 2 is shown similar to FIG. 7, but instead of multiple fixed emitters 3a, there is one emitter 3a, the beam 4 of which is redirected by a mirror 3d or other beam redirecting device. FIG. 8a also illustrates the wand 2 with a camera 3c.
(66) Referring to FIG. 8b, shown is a cross-sectional, perspective view of the distal end of wand 2, illustrating in greater detail emitter 3a and beam 4, part of which is redirected by mirror 3d1, in some embodiments being one of an array of mirrors 3e.
(67) Referring to FIG. 8c, shown is a cross-sectional, perspective view of the distal end of wand 2, illustrating in greater detail the emitter 3a, beam 4, part of which is redirected by mirrors 3d2 and 3d3. FIG. 8c also illustrates an alternative location for camera 3c, in this case being located at the distal end of the mirror array 3e.
(68) Referring to FIG. 8d, a surface 1b has an addressing grid 1c overlain. For diagrammatical clarity, only parts of the grid have been illustrated, it being understood that the grid 1c is continuous over the surface 1b.
(69) Referring to FIG. 9, the faux forceps wand 2b includes a finger slider control 19a and sensor and/or haptic feedback device 19c.
(70) Referring to FIG. 10, an operator viewer 8 has force feedback information 10 as illustrated in detail 10a, and also illustrated in FIG. 3. Tool icons 10h represent available tools. In this example, the operator has selected a forceps icon 26b for the left hand and a wrench tool icon 27b for the right hand. As an example, the respective selected tool is indicated by the icon being bolded.
(71) Referring to FIG. 11, example relative movement of the wand 2b controls is shown, including the finger hole control 21 and the finger slider control 19a (see FIG. 9), and the consequent movement of a tool 26.
(72) Referring to FIGS. 12, 13 and 14, examples of relative movement of the wand 2b controls are shown, including a finger hole control 21, the finger slider control 19a, a rotary control 20 and the consequent movement of a tool 27 relative to a bolt 29.
(73) Referring to FIGS. 15 and 16, example tools 15b, 15c and 15d have respective adjustable extensions 15b1, 15c1 and 15d1 which can retract 15b2, 15c2 and 15d2 in order to compensate for rising and falling of a surface, for example a heart surface 14d1, 14d2.
(74) Referring to FIG. 17, camera tool 15c views the effect of the spacing of neighboring projected dots/spots of lights 5 on the surface of the heart 14d1, 14d2, at two stages in the heart's beat. The separations, along with known information: the angles of the beams 4, relative to the tool 15c and the position of the camera tool 15c, provide computer 11 with a description of the changing position of the heart surface at each point in time. It also illustrates one example position of cameras, or camera lenses 3c1 and 3c2.
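The geometry behind this spot-spacing measurement can be sketched as follows, assuming the simplest case of two beams diverging symmetrically from the camera tool at a known half-angle (the function name and the example figures are illustrative only, not taken from the disclosure):

    import math

    def surface_distance(spot_separation: float, half_angle_rad: float) -> float:
        # Two beams diverging at +/- half_angle from the tool axis strike a
        # surface at a separation that grows linearly with range:
        #   separation = 2 * distance * tan(half_angle)
        return spot_separation / (2.0 * math.tan(half_angle_rad))

    # Example: spots 6 mm apart with a 3 degree half-angle -> roughly 57 mm range.
    print(surface_distance(6.0, math.radians(3.0)))

Tracking this distance frame by frame yields the changing height of the surface, from which the tool extensions can be driven.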
(75) Referring to FIG. 18, the distal end of the camera tool 15c is shown in detail. The emitter cluster 3 and emitters 3a project beams 4 at various predetermined angles, relative to the tool 15c.
(76) Referring to FIG. 19, an example passive haptic feedback device has flow of an electrorheological or magnetorheological fluid controlled by an electrical or magnetic field between elements 36, which can be electrodes or magnetic coils. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31.
(77) Referring to FIG. 20, the example passive haptic feedback device has a flow of fluid, such as saline or glycerin, controlled by an electrical valve 37. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31.
(78) Referring to FIG. 21, an operator's view of the worksite (a remote process) seen through the viewer 8 and eyepieces 9 has superimposed tool cursors 15d3 and 15b3 that illustrate the operator's intended position of the tools at the worksite. Respective actual positions of the tools 15d2 and 15b2 at the worksite are also shown in the viewer 8 to display to the operator the difference between the two due to temporal latency.
(79) Referring to FIG. 22, the wand 2b may be attached to any body part, tool 15d2, 15c2, or other object, by means of connectors 42 and 42a, which have complementary indexing means 42c and 42b to ensure their proper alignment. Where an external camera 3c, such as illustrated in FIG. 6a2, or a sensor array 1, as illustrated in FIG. 23, is provided, the wand 2b may then not have an integral camera 3c.
(80) Referring to FIG. 23, two wands 2i and 2ii (similar to wand 2b shown in FIG. 22) can be aligned in a desired manner, or be placed in a desired orientation or position with respect to each other or another object. In this example a drill 44 is positioned so that it can drill through a hidden hole 46.
(81) Referring to FIG. 24, a sensor array assembly 1d replaces wand 2ii, and the wand 2i and sensor array assembly 1d can be aligned in a desired manner, or be placed in a desired orientation or pose with respect to each other or another object. In this example the drill 44 is posed so that it can drill through the hidden hole 46. In the sensor array assembly 1d, the sensor array 1 replaces the emitter housing 3, but the assembly is otherwise similar in construction to the wand 2. The sensor array 1 communicates with the controller/encoder 18 by communicating means 11a and thence wirelessly to computer 11 (not shown).
(82) Some general elements of embodiments of some aspects of the present invention will now be discussed.
(83) One embodiment is a system which accurately records the motions of the working end of an operator's faux instruments, herein referred to as a wand, which can approximate the shape of the devices the operator would use in a manual procedure. These motions are reported to the working end of the tools that the robot applies to the work site.
(84) Other embodiments simply use the wand as an input device, and its shape may not in any way relate to a particular instrument. For clarity, this disclosure will use a surgical interface to illustrate some convenient features of the invention, but for some embodiments the shape of the wand may not in any way mimic standard tools or instruments. It should also be noted that reference is made to a system controlling robotically controlled tools. It should be understood that some embodiments will control actuators that perform all types of work, such as reaction devices (rocket motors or jet engines, for example) or the position of wing control surfaces, to name a few. The system may control virtual computer generated objects that are visually displayed or remain resident within the computer, and where actuators may not even be used. Embodiments of this type would include manipulation of models of molecular structures (molecular modeling) and manipulation of protein structures. In such embodiments the wand may be thought of as a computer mouse in three dimensions, for example allowing the operator to view a three dimensional image of a structure, and then to make alterations to it, by moving the wand and making control commands, for example in the space in front of a sensor array. Such an embodiment of the wand and method could be used in architecture, machine design or movie animation. It will be recognized by those skilled in the art that these are examples only of uses of such embodiments and the embodiments are not limited to these examples.
(85) In some described embodiments wands 2 incorporate light-emitting elements 3a that collectively cast multiple narrow beams of light, at known angles to each other, onto a sensor array 1 constructed of one or more light detecting panel(s) as illustrated in FIG. 3. The light detecting panel(s) report the location of the incident light, in real-time, to a computer. Knowing the angles at which the emitters 3a project the light beams from the wand 2, the computer can convert the various locations of the incident spots of light 5 of the light beams 4, using triangulation and mathematical methods and algorithms well known to the art, to calculate the position and attitude of the wand 2 relative to the sensor array 1, at each particular time interval. As the wand 2 moves, so do the spots of incident light 5 on the sensor array(s) 1, and so the computer can produce a running calculation of the position and attitude (example parameters of the pose) of the wand 2, from time to time. The computer can convert changes in parameters of the pose into instructions to the robot to assume relative motions. Small changes in the position and attitude of the wand can trace relatively large positional changes where the spots of light 5 fall on the sensor array 1. This can allow for accurate determination of the position and attitude of the wand.
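One minimal way to sketch this running pose calculation, assuming four beams with known directions in the wand's own frame, a flat sensor array lying in the plane z = 0, and the availability of NumPy and SciPy (the beam angles, names and the generic least-squares solver are illustrative assumptions, not the claimed method, which may use closed-form photogrammetric solutions), is:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    # Assumed beam directions in the wand's own frame: four unit vectors
    # tilted 5 degrees away from the wand axis (taken as -z, toward the array).
    _t = np.tan(np.radians(5.0))
    BEAM_DIRS = np.array([[_t, 0, -1], [-_t, 0, -1], [0, _t, -1], [0, -_t, -1]], float)
    BEAM_DIRS /= np.linalg.norm(BEAM_DIRS, axis=1, keepdims=True)

    def predicted_spots(pose):
        # pose = (x, y, z, roll, pitch, yaw); the array is the plane z = 0.
        origin = np.asarray(pose[:3], dtype=float)
        R = Rotation.from_euler("xyz", pose[3:]).as_matrix()
        dirs = (R @ BEAM_DIRS.T).T            # beam directions in array frame
        t = -origin[2] / dirs[:, 2]           # ray parameter where z reaches 0
        return origin[:2] + t[:, None] * dirs[:, :2]

    def estimate_pose(observed_xy, initial_guess):
        # Fit the six pose parameters to the detected spot locations.
        def residual(pose):
            return (predicted_spots(pose) - observed_xy).ravel()
        return least_squares(residual, initial_guess).x

    # Round trip: synthesize spots from a known pose, then recover the pose.
    true_pose = np.array([0.02, -0.01, 0.30, 0.05, -0.03, 0.10])
    spots = predicted_spots(true_pose)
    print(estimate_pose(spots, np.array([0, 0, 0.25, 0, 0, 0])))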
(86) Mathematical calculations that may be used to determine parameters of a pose of the wand and other parameters of pose described herein have been developed, for example, in the field of photogrammetry, which provides a collection of methods for determining the position and orientation of cameras and range sensors in a scene and relating camera positions and range measurements to scene coordinates.
(87) In general there are four orientation problems:
(88) A) Absolute Orientation Problem
(89) To solve this problem one can determine, for example, the transformation between two coordinate systems, or the position and orientation of a range sensor in an absolute coordinate system, from the coordinates of calibration points. This can be done by recovery of a rigid body transformation between two coordinate systems. One application is to determine the relationship between a depth measuring device, such as a range camera or binocular stereo system, and the absolute coordinate system.
(90) In the case of a range camera, the input is at least a set of four conjugate pairs from one camera and absolute coordinates. In the case of a binocular stereo system, the input is at least three conjugate pairs seen from the left and right cameras.
(91) B) Relative Orientation Problem
(92) To solve this problem one can determine, for example, the relative position and orientation between two cameras from projections of calibration points in the scene. This is used to calibrate a pair of cameras for obtaining depth measurements with binocular stereo.
(93) Given n calibration points, there are 12+2n unknowns and 7+3n constraints.
(94) At least 5 conjugate pairs are needed for a solution, since the constraints must at least match the unknowns: 7 + 3n ≥ 12 + 2n requires n ≥ 5.
(95) C) Exterior Orientation Problem
(96) To solve this problem one can determine, for example, the position and orientation of a camera in an absolute coordinate system from the projections of calibration points in a scene. This problem must be solved in an image analysis application when it is necessary to relate image measurements to the geometry of the scene. This can be applied to a problem of position and orientation of a bundle of rays.
(97) D) Interior Orientation Problem
(98) To solve this problem one can determine, for example, the internal geometry of a camera, including camera constants, location of the principal point and corrections for lens distortions.
(99) Some examples of these problems and their solutions are found in Ramesh Jain, Rangachar Kasturi and Brian G. Schunck, Machine Vision, McGraw-Hill, New York, 1995, ISBN 0-07-032018-7. Chapter 12 on calibration deals in particular with the absolute orientation problem with scale change and binocular stereo, and with camera calibration problems and solutions which correlate image pixel locations to points in space. The camera calibration problem includes both the exterior and interior orientation problems.
(100) In addition to calibration problems and solutions, the Jain et al. reference addresses an example problem and solution for extracting the distance or depth of various points in the scene relative to the position of a camera by direct and indirect methods. As an example, depth information can be obtained directly from a pair of intensity images taken by two cameras displaced from each other by a known distance and having a known focal length. As an alternative example solution, two or more images taken from a moving camera can also be used to compute depth information. In addition to those direct methods, 3D information can also be estimated indirectly from 2D intensity images by what is known as a "Shape from X" technique, where X denotes an image cue such as shading, texture, focus or motion. Examples are discussed in Chapter 11 in particular.
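The direct two-camera case reduces to a single relation. As a minimal sketch (variable names assumed; units must simply be consistent): with baseline B, focal length f and horizontal image positions of the same scene point in the two views, the depth is Z = f·B divided by the disparity.

    def stereo_depth(x_left: float, x_right: float,
                     focal_length: float, baseline: float) -> float:
        # Depth from binocular disparity: Z = f * B / d, with d = x_left - x_right.
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("disparity must be positive for a point in front of both cameras")
        return focal_length * baseline / disparity

    # Example: f = 8 mm, B = 60 mm, disparity = 0.4 mm -> Z = 1200 mm.
    print(stereo_depth(1.0, 0.6, 8.0, 60.0))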
(101) The above Jain et al. reference is hereby incorporated by reference into the detailed description hereof.
(102) As a further example of solutions to the mathematical calculations that may be used to determine parameters of a pose of the wand, consider determining the 3D position of a hand-held device equipped with laser pointers through 2D image analysis of the laser point projections onto a screen. Two coordinate systems can be defined as shown in FIG. 26. The centre of a first coordinate system (x_S, y_S, z_S) can be placed in the middle of the plane that coincides with the screen (projection) plane and is considered to be fixed. The lasers installed on the hand-held device can be described with a set of lines in a second coordinate system (x_D, y_D, z_D) whose origin coincides with the intersection of the laser pointers. Additionally, the second coordinate system has freedom of translation and rotation as shown in FIGS. 27 and 28. Translation and rotation coordinates such as those shown in FIGS. 27 and 28 can also be found in a linear algebra text such as that of Howard Anton (John Wiley & Sons, 4th edition, ISBN 0-471-09890-6), Section 4.10, at pp. 199 to 220.
(103) The projection of the laser on the fixed plane is mathematically equivalent to finding the intersection between the plane equation z_S = 0 and the line equation describing the laser path. However, the line equations first have to be transformed into the screen coordinate system. There are many ways to define an arbitrary rotation and translation of one coordinate frame into another. One of the ways is via the transform matrix elements.
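A minimal sketch of this intersection, assuming the rotation is already available as a 3 x 3 matrix R and the translation as a vector a (NumPy assumed; function and variable names are illustrative):

    import numpy as np

    def project_to_screen(p0_dev, d_dev, R, a):
        # Transform the ray's origin and direction from the device frame
        # into the screen frame, then intersect with the plane z_S = 0.
        p0 = R @ np.asarray(p0_dev, dtype=float) + np.asarray(a, dtype=float)
        d = R @ np.asarray(d_dev, dtype=float)
        if abs(d[2]) < 1e-12:
            return None            # ray parallel to the screen plane
        t = -p0[2] / d[2]          # solve p0_z + t * d_z = 0
        if t < 0:
            return None            # intersection behind the emitter
        return p0 + t * d          # point on the screen (z component is 0)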
(104) Tables 1 and 2 show the coordinate transforms of the point P from one coordinate system to the other as a result of linear translation and rotation.
(105) TABLE 1: Linear translation

x = x* + a_1
y = y* + a_2
z = z* + a_3
(106) TABLE 2: Rotational transformation and definition of the a_ik coefficients

x = a_11 x* + a_12 y* + a_13 z*
y = a_21 x* + a_22 y* + a_23 z*
z = a_31 x* + a_32 y* + a_33 z*

where, for rotation angles φ, θ and ψ about the x, y and z axes respectively:

a_11 = cos θ cos ψ
a_12 = cos θ sin ψ
a_13 = −sin θ
a_21 = sin φ sin θ cos ψ − cos φ sin ψ
a_22 = sin φ sin θ sin ψ + cos φ cos ψ
a_23 = sin φ cos θ
a_31 = cos φ sin θ cos ψ + sin φ sin ψ
a_32 = cos φ sin θ sin ψ − sin φ cos ψ
a_33 = cos φ cos θ
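A minimal sketch that builds these a_ik coefficients and applies the combined rotation of Table 2 followed by the linear translation of Table 1 (NumPy assumed; the sign convention follows the table as reconstructed above):

    import numpy as np

    def transform_point(p_star, a_vec, phi, theta, psi):
        # Rotation coefficients a_ik of Table 2 for angles about x, y, z.
        cf, sf = np.cos(phi), np.sin(phi)
        ct, st = np.cos(theta), np.sin(theta)
        cp, sp = np.cos(psi), np.sin(psi)
        A = np.array([
            [ct * cp,                ct * sp,                -st     ],
            [sf * st * cp - cf * sp, sf * st * sp + cf * cp,  sf * ct],
            [cf * st * cp + sf * sp, cf * st * sp - sf * cp,  cf * ct],
        ])
        # Rotation (Table 2) followed by the linear translation (Table 1).
        return A @ np.asarray(p_star, dtype=float) + np.asarray(a_vec, dtype=float)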
(107) Table 3 summarizes example laser properties and image analysis requirements for the reconstruction of the translation or rotation of the hand held device based on the observations of movement of the projection point as set out above. For the purpose of this discussion, multiple lasers are equivalent to a single laser split into multiple spot beams.
(108) TABLE 3: Number of lasers versus reconstructable translations and rotations

1 laser: Translation in x and y is detectable. Translation along z requires a light source with a large dispersion angle, and requires edge detection and area or perimeter calculations. Rotation about z is possible with the offset sensor and path reconstruction from a minimum of 3 frames for large angles, but is not very sensitive for small rotational angles. Rotation about x or y is not detectable for the narrow laser beam and would be interpreted as translation; in the case of a dispersed beam, it requires edge detection and shape reconstruction.

2 lasers: Translation in x, y and z is detectable. Rotation about z presents a problem with detection of a left-right swap of the lasers, which is equivalent to a 180° rotation; it requires marking of one of the lasers and still requires path reconstruction via frame history. Rotation about x would be interpreted as translation in the case of horizontal or vertical alignments; for misaligned lasers it is not very sensitive and requires distance calculation and calibration. Rotation about y requires non-parallel laser beams and distance calibration.

3 lasers: Translation in x, y and z is detectable. Rotation about z requires marking of one of the lasers. Rotation about x requires area or perimeter calibration/calculation. Rotation about y is detectable with non-parallel laser beams.

4 or more lasers: Can provide additional information to potentially avoid singularities or ambiguities.
(109) Additional image frames can be used to change the number of lasers, or spots, used at any one time. The linear transposition in the x and y directions can be reconstructed from the center of mass of the spots. The translation along the z axis can utilize a calibration of the area/perimeter of the triangle formed by three spots. Detection of the rotation around the z axis can be achieved by marking one of the lasers or by asymmetrical placement of the lasers. Marking one of the lasers may result in faster processing time than the second option, which requires additional image processing to find the relative position of the triangle. The marking of the laser can be achieved, for example, by giving one laser a larger power, which would translate into pixel intensity saturation at its projection point.
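A minimal sketch of these reconstruction steps for a three-spot pattern (NumPy assumed; the square-root calibration model for z is an assumed example of how triangle area could be calibrated to distance, since for diverging beams the triangle's linear size grows with range):

    import numpy as np

    def xy_translation(spots):
        # Linear transposition in x and y from the centre of mass of the spots.
        return np.mean(np.asarray(spots, dtype=float), axis=0)

    def triangle_area(spots):
        a, b, c = np.asarray(spots, dtype=float)
        # Shoelace formula for the area of the spot triangle.
        return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

    def z_from_area(area, k_calibration):
        # Assumed calibration: area grows as distance squared for diverging
        # beams, so distance is proportional to sqrt(area).
        return k_calibration * np.sqrt(area)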
(110) With respect to the image processing time, it may be preferable to limit the area of the laser projection, for example to a 3 by 3 pixel array. Once the first laser point has been detected, a search algorithm for the rest of the laser points could be limited to a smaller image matrix, based on the definition of allowable movements.
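A minimal sketch of such a windowed search, assuming a grayscale frame as a NumPy array and an intensity threshold that distinguishes a laser spot from the background (both assumptions for illustration):

    import numpy as np

    def find_spot_windowed(frame, last_rc, max_move, threshold=200):
        # Search only a (2*max_move + 1)-sized window around the spot's
        # last known position, clipped to the frame boundaries.
        r0, c0 = last_rc
        r_lo, r_hi = max(r0 - max_move, 0), min(r0 + max_move + 1, frame.shape[0])
        c_lo, c_hi = max(c0 - max_move, 0), min(c0 + max_move + 1, frame.shape[1])
        window = frame[r_lo:r_hi, c_lo:c_hi]
        r, c = np.unravel_index(np.argmax(window), window.shape)
        if window[r, c] < threshold:
            return None            # spot left the window; fall back to a full scan
        return (r + r_lo, c + c_lo)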
(111) Other illustrative examples of mathematical calculation that may be used to determine parameters of a pose of the wand and other parameters of pose described herein are included for example in B. K. P. Horn. Robot Vision. McGraw-Hill, New York, 1986; U.S. patent application of Fahraeus filed Mar. 21, 2001 under application Ser. No. 09/812,902 and published in Pub. No. US2002/0048404 on Pub. Date: Apr. 25, 2002 under title APPARATUS AND METHOD FOR DETERMINING SPATIAL ORIENTATION which discusses among other things determining the spatial relationship between a surface having a predetermined pattern and an apparatus; in U.S. patent of Zhang et al. issued Apr. 4, 2006 under title APPARATUS AND METHOD FOR DETERMINING ORIENTATION PARAMETERS OF AN ELONGATE OBJECT; Marc Erich Latoschik, Elmar Bomberg, Augmenting a Laser Pointer with a Diffraction Grating for Monoscopic 6DOF Detection, Journal of Virtual Reality and Broadcasting, Volume 4(2006), no. 14, urn:nbn:de:0009-6-12754, ISSN 1860-2037 http://www.jvrb.org/4.2007/1275; Eric Woods (HIT Lab NZ), Paul Mason (Lincoln University, New Zealand), Mark Billinghurst (HIT Lab NZ) MagicMouse: an Inexpensive 6-Degree-of-Freedom Mouse http://citeseer.ist.psu.edu/706368.html; Kynan Eng, A Miniature, One-Handed 3D Motion Controller, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland http://www.ini.ethz.ch/˜kynan/publications/Eng-3DController-ForDistribution-2007.pdf. The content of each of the above references cited above in this paragraph is hereby incorporated by reference into the detailed description hereof.
(112) Rather than using a sensor array to detect the incident spots of light 5 of the beams 4, a camera above a passive surface 1b, as illustrated in FIG. 6a2, may similarly detect the position of the incident spots of light 5 on the surface 1b and make the same calculations described above to determine the position and attitude of the wand 2. Alternatively, a camera 3c may be incorporated into the wand 2 to detect where the spots of light fall on the surface 1b, as illustrated in FIG. 6a1.
(113) With reference to FIG. 6, since the space in front of the sensor array(s) may be different from the space that the robot operates in, the operator may reset or, as is usually the case, center the wands 2 in front of the sensor array 1, to coordinate the wand's position with that of the working end of the robotic arms 15b, 15c and 15d, for the next work sequence. Additionally, the travel distances of the wands 2 and of the working ends of the robot arms 15b, 15c, 15d, while proportional to each other, may differ. For example, where accuracy is critical, the wand 2 may be set to move relatively long distances to effect a relatively short displacement at the working end of the robotic arms 15b, 15c, 15d. Conversely, where accuracy is not important and quicker movements over larger distances are desired, the computer can be instructed to translate short movements of the wand 2 into relatively large distances of travel at the working end of the robotic arms 15b, 15c, 15d. This relationship can be changed by the operator, at any time, by moving a control on the wand 2 or controls 11e on a console 11d. These methods of computer control are well known to the art and embodiments of the invention that incorporate such controls are within the ambit of the invention.
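A minimal sketch of this operator-adjustable motion scaling (function and parameter names assumed):

    def tool_delta(wand_delta, scale):
        # Multiply each wand displacement component by the current scale
        # factor before applying it to the tool: scale < 1 gives fine,
        # precise work; scale > 1 gives quick, coarse travel.
        return tuple(scale * d for d in wand_delta)

    # Fine work: 10 mm of wand travel produces 1 mm of tool travel.
    print(tool_delta((10.0, 0.0, 0.0), 0.1))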
(114) The relative attitude of the sensor array 1 to the attitude of the robot arm work space 14b can also be set, which is usually at the commencement of the work, although it may be changed during the operation. For example, the vertical line in the sensor array 1 will usually be set to be the vertical line in the work space 14b, so that when the wand 2 is raised up vertically in front of the sensor array(s) 1, the robot will produce a vertical motion at the working end 15b, 15c, 15d of the robotic arm. This however may be changed by the operator varying the settings of the vertical and/or horizontal plane at the console 11d or in some other embodiments in the wand 2.
(115) Similarly the longitudinal axis of the wand 2 is generally set as the same as the longitudinal axis of the working end of the robot's arms 15b, 15c, 15d, although this too can be altered by controls at the console and in some other embodiments in the wand itself.
(116) At the start or reset times, the position and attitude of the wand 2 can be translated to be the same as the position of the working end of the robot arms 15b, 15c, 15d; the motions thereafter, until the next reset, can be relative. This allows the operator to change the operator's start or reset position and attitude of the wand to make it more comfortable to execute the next set of procedures, or provide sufficient room for the next set of procedures, in front of the sensor array 1, as referred to above.
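A minimal sketch of this reset behaviour (often called clutching in teleoperation; the class and method names are assumed): at each reset the current wand pose is captured as a new origin, and only motion relative to that origin, optionally scaled, is forwarded to the tool.

    class Clutch:
        # Captures wand/tool origins at reset; thereafter reports tool
        # targets from wand motion relative to the captured origin.
        def __init__(self, wand_pose, tool_pose):
            self.reset(wand_pose, tool_pose)

        def reset(self, wand_pose, tool_pose):
            self.wand_origin = list(wand_pose)
            self.tool_origin = list(tool_pose)

        def tool_target(self, wand_pose, scale=1.0):
            delta = [scale * (w - o) for w, o in zip(wand_pose, self.wand_origin)]
            return [t + d for t, d in zip(self.tool_origin, delta)]

    # The operator re-centers the wand without moving the tool, then resumes:
    clutch = Clutch(wand_pose=[0.0, 0.0, 0.0], tool_pose=[5.0, 5.0, 5.0])
    print(clutch.tool_target([1.0, 0.0, 0.0], scale=0.5))   # -> [5.5, 5.0, 5.0]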
(117) The movement of the wands will then control the movement of the tools to which they are assigned by the operator. Finer movements, and movements that require haptic feedback, can be effected by controls on the wands 2b, such as the finger hole control 21, the rotary control 20, 20a and the finger slider control 19b, illustrated in FIG. 6c. Switches on the wand or on the console can turn off the active control of the tools by the movement of the wand(s), but may turn on or leave on the active control of the tools by the controls on the wand, to prevent inadvertent jiggling or wander while critical and/or fine work is being conducted by the operator. On other occasions the operator may wish to manipulate the wand 2 position simultaneously with moving the controls 20, 20a and 19b, or other controls that a preferred embodiment might include.
(118) The sensor array 1 may be made of one or more sheets or panels of light sensor arrays, in which each pixel of the sensor array 1 can communicate to the computer the fact that the spot of light 5 has or has not fallen on that pixel, and identify which light beam 4, and from which wand 2, it originated. When integrated by the computer 11 with other inputs from other locations, this information can identify the location and attitude of the wand 2, by triangulation, mathematical methods and computer algorithms well known to the art.
(119) In some embodiments the color of the incident light, and/or the addressable pulse frequency of the light that is detected, identifies which particular light beam and wand has cast the light so incident. For example, in some embodiments a wand may have several light-emitting elements, such as lasers, diode lasers or light-emitting diodes, each having a different light wave length (or color), which can be identified and distinguished by the sensor array 1 (in combination with the computer). In other embodiments, the light emitter 3a is modulated or pulsed to give it a unique pulse address; when its beam 4 is detected by the sensor array 1, the sensor array, with the computer, identifies the particular light beam 4, wand 2, and the location and attitude of the same. Other embodiments may take advantage of the relatively unique patterns of beams 4 emitted from each wand 2 to identify the wand 2 and perhaps the particular beam 4 from that wand. Other embodiments can include a combination of these methods, or other similar beam identification methods, well known to the art. It can be desirable to provide additional light emitters 3a to provide redundancy, in the event one or more of the beams does not strike a sensor. For example, in some embodiments an axial reference beam 4 may be directed straight along the longitudinal axis of the wand 2.
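A minimal sketch of pulse-address identification, assuming the sensor samples each spot location once per frame and that each emitter repeats a short binary blink pattern (the addresses and names below are invented for illustration; a real system would also handle clock drift and partial reads):

    # Assumed example addresses: blink pattern -> (wand, beam) identity.
    BEAM_ADDRESSES = {
        (1, 0, 1, 1): ("wand 2", "beam A"),
        (1, 1, 0, 1): ("wand 2", "beam B"),
        (1, 0, 0, 1): ("wand 2b", "beam A"),
    }

    def identify_beam(samples):
        # samples: per-frame on/off observations at one spot location.
        return BEAM_ADDRESSES.get(tuple(samples))

    print(identify_beam([1, 0, 1, 1]))   # -> ('wand 2', 'beam A')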
(120) One or more of the light beams 4 may be modulated so as to provide information as to the wand 2 identity, and its mode of operation. For example, it might convey information as to the desired heat setting and off/on state of the cauterizing scalpel, or the forceps clasping position, as set by the wand's operator. It might also indicate the rotation of a particular tool. These are only examples of the information that may be selected by the operator, on the wand controls, and then conveyed to the sensor array 1, and hence to the computer to control the robotic arms. Embodiments can include all other convenient instructions and inputs, and all are included within the ambit of the embodiments described herein. This method of conveying instructions may be handled by a dedicated light emitting element 3a, or be bundled into one or more of the light emitting elements 3a that are used to determine the position and attitude of the wand 2. This method of conveying instructions and status information from the wand may be in addition to wireless communications 16, 16a means embedded in the wand, or in place of it.
(121) The pulses of light from the light-emitting elements 3a of cluster 3 of the wands may be synchronized such that the spots of light 5 of the beams 4 fall on the sensor array 1 at discrete times, so as to avoid conflicting signals in those architectures that do not have direct connections between the sensor elements and drivers, such as active or passive matrix architectures. In other embodiments, redundant beams are sufficient to resolve any signal interference, and software means such as path prediction algorithms can be used to resolve any such conflicts. The beams will in most cases fall on more than one, and often many, pixels in the sensor array, which will improve reliability at the expense of resolution, and may also be used to distinguish between two beams that strike approximately the same pixel group.
(122) There are many methods of constructing a light sensor array 1, well known to the art, including thin film transistor (TFT) arrays, which may include color filter arrays or layers to determine the color of the incident light and report the location to the computer by direct and discrete connection or, more often, by way of a passive or active connection matrix. These active matrix TFT (AMTFT) architectures can be used in some embodiments. Recently, polymer TFT sensor arrays are being made which substantially reduce the cost of such sensor arrays. These less expensive arrays will mean that the sensor array(s) 1 can be made much larger. An example of a polymer TFT is described by F. Lemmi, M. Mulato, J. Ho, R. Lau, J. P. Lu, and R. A. Street, Two-Dimensional Amorphous Silicon Color Sensor Array, Xerox PARC, United States, Proceedings of the Materials Research Society, 506 Keystone Drive, Warrendale, Pa., 15086-7573, U.S.A. It is understood that any convenient light sensor array may be used, including any future development in light sensor arrays, their architecture and composition, and such an embodiment is within the ambit of the embodiments described herein.
(123) In some embodiments, the sensor array pixels may be combined with light emitting elements, forming a superimposed sensor array and a light emitting array. In these embodiments an image of the working end of the robot arms 15b, 15c, 15d and work sight can be formed on the sensor array 1, and the operator can at the same time view the wand(s) 2 that are initiating the motion of the working end of the robot's arms 15b, 15c, 15d. This embodiment is most effective if the image is generated as a three dimensional image, although this is not required. Methods for creating a three dimensional effect are well known to the art and include synchronous liquid crystal glasses and alternating left eye, right eye, image generation and single pane three dimensional arrays. It is to be understood that the embodiments described herein includes all these methods and future three dimensional image generation methods.
(124) Other embodiments may use an additional camera aimed at the operator's hands and wands, and append the image to that of the worksite that is viewed in the operator viewer 8. This appended image may be turned on and off by the operator.
(125) In those preferred embodiments that use a surface 1b and camera 3c in place of the sensor array 1, as illustrated in FIG. 6c, the wand 2b operates partly as a laser or optical mouse, that is, detecting movement by comparing images acquired by the lens of part(s) of the surface 1b. In some preferred embodiments images of spot(s) 5 can be detected by the said lens 3c, noting both their texture or image qualities, and their positions relative to other spot(s) 5. Since the relative angles of the projected beams 4 are known, the computer 11 and/or controller/encoder 18 can process this information to determine the three dimensional position of the wand 2b relative to the surface 1b, for example by using both the methods used by optical/laser mice and mathematical methods including trigonometry, well known to the art. As an example, movement of the wand 2b on planes parallel to the surface 1b can be determined by methods used by optical/laser mice, which are well known to the art; and the height and attitude of the wand in three dimensional space can be determined by the lens 3c detecting the relative positions of the spots 5 projected onto the surface 1b, and using triangulation and the mathematical methods described above, which are also well known to the art. More particularly, the position of the wand 2b in three dimensional space can then be computed by integrating these two information streams to accurately establish both the lateral location of the wand 2b and its height and attitude in space. Thus, not all parameters of the pose are determined utilizing the detected pattern of the spots on the surface; rather, some of the parameters are determined utilizing the texture information (lateral location), while other parameters are determined utilizing the detected pattern of spots (height and attitude).
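A minimal sketch of integrating the two information streams just described (the names and the exact division of parameters are assumptions for illustration): lateral motion comes from optical-mouse-style image correlation, while height and attitude come from the geometry of the projected spots.

    def fuse_pose(flow_dx, flow_dy, spot_pose, last_pose):
        # last_pose and spot_pose are (x, y, z, roll, pitch, yaw) tuples.
        x, y, _, _, _, _ = last_pose
        _, _, z_s, roll_s, pitch_s, yaw_s = spot_pose
        return (x + flow_dx,          # lateral location: optical-flow stream
                y + flow_dy,
                z_s,                  # height and attitude: spot-geometry stream
                roll_s, pitch_s, yaw_s)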
(126) In other embodiments, where there are two or more panels that are placed at relative angles known to the computer 11, such as those illustrated in FIG. 6a1, the wands 2 may contain camera(s) 3c which are able to detect the position of spots 5 on two or more panels. In these arrangements, where the panels are surfaces 1b, the orientation and position of the wand 2 may be determined, for example, as described above by mathematical methods, including trigonometry. For example, in an embodiment where the panels are arranged at right angles to each other (at 90 degrees), as illustrated in FIG. 6a1, and where the angles at which the light beams 4 trace relative to the longitudinal axis of the wand 2 are known, and where the relative positions of the projected spots 5 which fall on both panels are recorded by the camera(s), the position and orientation of the wand 2 in three dimensional space can be directly determined by mathematical methods, including trigonometry.
(127) This information, for example, can then be used to control the tools 15b, 15c, and 15d, or control any process, virtual or real. It can be readily appreciated that the wand 2b, like the wand 2 can be any shape and have any function required, for example having the shape of an optical/laser mouse and pointing and directing processes in a similar manner.
(128) In this disclosure, references to wand 2, should be read as including wand 2b and vice versa, as the context permits. Similarly references to sensor array 1 should be read as including surface 1 and vice versa, as the context permits.
(129) Embodiments of the invention that incorporate a surface 1b, rather than a sensor array(s) 1, pass information from buttons and hand controls, for example 19a, 20 and 21, on the wand 2b wirelessly or by direct connection, as herein described, and by other methods well known to the art. The beams 4 may be encoded for maintaining identification of each beam and each spot 5; for example, the light emitting elements 3a may be pulsed at different frequencies and/or have different colors, which the lens 3c may detect from the light reflected from the spots 5. Although a wand 2b may resort exclusively to those methods used by optical/laser mice to determine its position in three dimensional space, without detecting, computing and integrating the relative positions of projected spots 5, the accuracy of such a system will be inferior to those that include those latter methods, and the computational overhead will be greater as well. It is to be understood that some embodiments can rely solely on those methods used by optical/laser mice, where accuracy is not as important.
(130) In some embodiments, the surface 1b may be any suitable surface, including those that contain textures and marks that are typically used in association with optical/laser mice. The surface 1b may have reflectivity or surface characteristics such that the reflected spots 5 that are detected by the camera 3c are within a known envelope, and thus spots 5 that are off the surface 1b can be rejected in calculating the orientation of the wand 2b, accompanied by a warning signal to the operator.
(131) The wands 2, 2b may include resting feet that allow them to rest on the surface 1, 1b, such that the beams 4 and spots 5 can be detected by the camera 3c, and such that the system can calibrate itself with a known wand starting orientation, and if placed on a specific footprint, position; or sensor array 1 or the surface 1b may include an elevated cradle 1e, as illustrated on FIG. 6b to hold the wand 2b in a fixed position for the calibration routine. The number of light emitting elements, such as lasers or photo-diodes, will depend upon the accuracy and redundancy required.
(132) The wand 2 may in some applications be stationary, or have an otherwise known position, and measure its position relative to a moving surface or changing contours on a surface. Embodiments of the invention may include such a wand 2, or it may be incorporated into a tool, such as those, 15b, 15c, 15d, illustrated in FIG. 15, FIG. 16 and FIG. 17, and be used to compensate for motions, such as the beating of the heart 14d1, 14d2.
(133) Feedback of forces acting on the working end of the robotic arms 15b, 15c, 15d may be detected by sensors on the robot arms, by means well known to the art, and this real-time information may be conveyed to the computer, which can regulate the haptic feedback devices and impart approximately the same forces on the operator's fingers and hands and/or resist the movement of the operator's fingers and hands. These haptic feedback devices, which are well known to the art, can, for example, be incorporated into the controls 19, 19a, 20, 21, 25 or other similar controls of the wand 2 or 2b. These haptic feedback devices can be active or passive and can impart force on the operator's fingers or hands (active), and/or resist the motion of the operator's fingers or hands (passive). Examples of passive haptic feedback devices are illustrated in FIGS. 19 and 20. FIG. 19 illustrates a passive haptic feedback device in which the flow of an electro-rheological or magneto-rheological fluid is controlled by an electrical or magnetic field. FIG. 20 illustrates a passive haptic feedback device in which the flow of a fluid, such as saline or glycerin, is controlled by an electromechanical valve. Embodiments of this invention may incorporate haptic feedback devices of any design known to the art, and all come within the ambit of the embodiments described herein.
(134) These haptic feedback devices can for example be incorporated into the finger hole 21 sensor/feedback controller 21b. For example, the finger holes 21 of the wand that is a faux forceps, as illustrated in FIG. 9, can be provided with haptic feedback devices which provide pinching feedback forces to the operator's hands and which accurately simulate the forces acting on the working end of the forceps tool 15b on the working end of the robotic arm. The position and motion of the mobile finger hole 21 can be conveyed to the computer wirelessly, by beam modulation as described above, or by cable.
(135) Similarly, the same faux forceps illustrated in FIG. 9 can, in some preferred embodiments of the invention, include a haptic feedback device in the finger slider sensor/haptic feedback device 19c, which senses the movement of the finger slider 19a, and which can move the forceps tool 15b back and forth in a direction parallel to the longitudinal direction of the said tool 15b. As the operator slides the finger slider from the 19a position to 19b, the operator feels the same resistance that the tool 15b senses when it pulls back tissue that it grasps, in response to the pulling back of the said slider 19a.
(136) The faux forceps, illustrated in FIG. 9 can transform its function from forceps to any other tool or instrument that is required. For example the same faux forceps, illustrated in FIG. 9 can act as a controller for a scalpel tool 15d, a wrench 27 (illustrated in FIG. 13), or any other tool or instrument, in which the various controls 19, 19a, 20, 21 of the wand are programmed to have different, but usually analogous, functions for each particular tool. The operator can select a particular tool by pressing a particular footswitch, a switch on the wand 2, or other switch location. All tools available and the selected tool may be presented as icons on the operator viewer 8, through the three dimensional eyepieces 9, an example of which is illustrated in FIG. 10 as detailed at 10h. For example, the selected tool might be bolded as the forceps icon 26b is bolded for the left hand wand 2 in the detail 10h, while the wrench tool icon 27b is bolded, for the right hand. Once selected, by the operator, the other various controls 19, 19a, 20, 21 and other controls, would be assigned to various analogous functions. The operator might call up on the viewer 8 a summary of which controls on the wand relate to what actions of the tools 15b, 15c, 15d, or other applicable tools or actions. All icons may be switched off by the operator to maximize his viewing area through the eyepieces 9.
(137) Some embodiments also include means for reducing latency and accommodating the motion of the subject.
(138) Further details of the embodiments will now be discussed with particular reference to the FIGS.
(139) FIG. 1 illustrates the operator's hand 6 controlling the motion of the wand 2 within the sensor array 1, comprised of five rectangular segments, forming an open-sided box. FIG. 1 also illustrates the narrow light beams 4 emanating from the light-emitting cluster 3, and projecting spots of light 5 on the light sensors on the inside of the sensor array 1. The light-emitting elements 3a, that comprise the light-emitting cluster 3, are usually positioned such that the narrow beams of light 4 that they emit form a unique pattern, so as to aid in identifying the particular wand 2 that is being used. Various embodiments contain various numbers of light-emitting elements, depending upon the accuracy required and whether dedicated information carrying beams are used. Any shape of sensor array 1 can be utilized, and those illustrated in FIG. 1, FIG. 2, FIG. 6a and FIG. 6a1 are only intended to be examples of a large class of sensor array shapes, sizes and arrangements. The density of pixels or discrete sensors comprising the sensor array 1 will vary depending upon the use to which the robot is put.
(140) FIG. 3 illustrates the three dimensional viewer 8 which includes two eyepieces 9 and feedback information 10 which is superimposed on the image of the work area. As illustrated in FIG. 4 and FIG. 5, the size and orientation of the vectors 10d and 10f, and the numerical force units 10e and 10g, can be computer generated to graphically report the changing forces acting on the working end of the robot's tool that corresponds to the wand that is being manipulated. In some embodiments, these vectors are three dimensional views, such that the vector position will correspond with the forces acting on the three dimensional view of the instruments, viewed through the viewer 8. The viewer 8 may superimpose feedback information on additional wands on top of the three dimensional view of the work area. These superimposed views may of course be resized, repositioned, and turned on and off by the operator. The view of the work area is captured by a three dimensional camera 15c, as illustrated in FIG. 6, which transmits the image information along transmitting means 11c to the computer 11 and viewer 8. The position of the camera, like that of any robot tool, may be controlled by a separate wand 13, such as that illustrated in FIG. 6, or be controlled by a multi-purpose wand, which changes its function and the tool it controls by a mode selecting control, such as the rotary control 20, which is incorporated into the wand 2, as illustrated in FIG. 7. The camera may also be programmed to keep both tools 15b and 15d in a single view, or selected tools in a single view. This automatic mode may be turned on or off by the operator, who may then select a wand controlling mode. The feedback reporting means may be presented in many ways, and that described is meant to be an example of similar feedback reporting means, all of which come within the ambit of the embodiments described herein.
(141) In some embodiments the viewer 8 is attached to a boom support, so that it may be conveniently placed by the operator. Various preferred embodiments place the controls 11e on the console 11d, which is adjacent to the sensor array 1 and the wands 2, but they may also include foot switches 12, one of which is illustrated in FIG. 6. It can be readily appreciated that the computer 11 may be replaced with two or more computers, dividing functions. For example, the sensor array 1, wands 2, one computer 11 and viewer 8 may communicate at a significant distance with a second computer 11' (not shown) and work site robot controller 15. This connection could be a wideband connection, which would allow the operator to conduct a procedure, such as an operation, from another city or country.
(142) The wands 2 and 2b illustrated in FIGS. 7, 8, 9 and 12 are only meant to be examples and other embodiments would have different shapes and controls and still be within the ambit of the embodiments described herein. For example, some embodiments may have a revolver shape. FIG. 7 illustrates the principal components of one embodiment. The wand 2 in FIG. 7 contains a rechargeable battery 17 to supply power to the various functions of the wand 2. The terminals 17a extend beyond the wand and provide contacts so that the wand may recharge when placed in a docking station which may accommodate the other wands, when not in use. Transmission means 17b provides power to controller/encoder 18 from battery 17. Controls 19, 20 and 20a are meant to be illustrative of control means, to switch modes of operation, such as from a cauterizing scalpel to a camera or forceps; and/or to vary the heat of the cauterizer or the force applied to the forceps grippers, to name just a few examples. In those cases where the robot arms are snake-like, these controls 19, 20 and 20a or similar controls, may control the radius of turn, and location of turns, of one or more of the robot's arms. In FIG. 7 transmission means 19a connects a lever control 19 to the controller/encoder 18; transmission means 20b connect the rotary controls 20 and 20a to the controller/encoder 18.
(143) The controller/encoder 18 in some embodiments pulses one or more of the light emitters 3a to pass on control information to the computer, via the sensor array 1, as mentioned above. Transmission means 3b connects the emitters to the controller/encoder 18. The light-emitting array 3 may contain discrete emitters; they may also be lenses or optical fibers that merely channel the light from another common source, for example, a single light-emitting diode or laser. Other wireless means may be included in the wand 2, which require an aerial 16a which communicates with an aerial 16 in communication with the computer 11, as illustrated in FIG. 6.
(144) While the wands illustrated are wireless, it should be understood that various other embodiments may have wired connections to the computer 11 and/or to a power source, depending upon their use, and these embodiments come within the ambit of the invention. In some embodiments, such as those in which the wand 2 is connected directly to the computer 11, the controller/encoder 18 and all or parts of its function are incorporated into the computer 11.
(145) FIG. 8 illustrates a faux set of forceps 2b, which give the operator or surgeon the feel of the forceps he may use later in the same procedure or another day when the robot is not available or suitable for the operation. FIG. 8 is meant to be illustrative of designing the wand to resemble instruments or tools that would be otherwise used in a manual procedure. This allows the skills learned using these devices to be used when controlling a robot and reduces dramatically the learning time required to use the robot effectively. While embodiments may include wands of many shapes, and configurations, those that resemble in function or appearance the tools or instruments that are normally used, are particularly useful to those situations where the operator must carry out similar procedures both manually and by robot.
(146) FIG. 8 illustrates a faux forceps wand 2b which has two finger holes 21, one of which pivots at the controller/feedback device 21b. The controller/feedback device 21b detects motion of the movable finger hole 21, which is transmitted by transmission means 21d to the controller/encoder 18. The controller/encoder 18 then transmits the motion wirelessly, or directly, to the computer 11, or encodes pulses by modulating the output of the light emitters 3a, the light beams produced transmitting the motion and position of the movable finger hole 21 to the sensor array, and subsequently to the computer 11. FIG. 8 also illustrates an alternative method of detecting and transmitting changes in the position of the various control elements on the wand 2b. Emitter(s) 3a may be placed on the movable elements, such as the finger hole 21. The projected light 4 that is incident on the sensor array 1 or surface 1 may then be used by the computer 11 to determine the position of the moving element, such as the finger hole 21 illustrated in FIG. 8, as it moves. This method of detecting and reporting the movement of control elements may be used for any such elements contained in various embodiments of the invention. For diagrammatical simplicity, the connection from the light emitter 3a on the finger hole 21 to the controller/encoder 18 has not been shown.
(147) The controller/feedback device 21b may also receive instructions, wirelessly or by direct connection, from computer 11, which directs the magnitude and direction of haptic feedback forces on the pivoting action of the movable finger hole 21. These haptic feedback forces can be passive or active, depending upon the design of the controller/feedback device. In some embodiments, no haptic feedback component is incorporated into the controller/feedback device; in these embodiments the controller/feedback device 21b merely transmits motion and position data of the movable finger hole 21 to the computer 11, either via the sensor array or wirelessly or directly.
(148) FIG. 8 also illustrates a notional end 4a for the wand 2b which the operator sets at the console 11d to allow for sufficient room between the ends of the wands 2b, when the tools are in close proximity.
(149) FIG. 8a, and detail drawings 8b and 8c, illustrate a wand 2 similar to FIG. 7, but instead of multiple fixed emitters 3a, there are one or more emitters 3a, the beam(s) 4 of which are redirected by a mirror(s) 3d or other beam-redirecting device. In this embodiment, the controller/encoder 18 directs each mirror 3d in the mirror array 3e, housed in a transparent housing 3f and secured to it by rod supports 3g, to redirect part or all of the beam 4 produced by the emitter 3a. As illustrated in FIG. 8b, the controller/encoder 18 and/or the computer 11 selects each mirror 3d1 and varies its angle relative to the mirror array 3e (one at a time or in groups) and, with other mirrors in the array, directs the beam(s) in a programmed sequence, noting the angle of the projected beam relative to the wand 2 and simultaneously comparing this to the point(s) 5 detected on the surface 1b, and by mathematical means, including trigonometric methods, defining from every selected pair, at that point in time, the position of the wand 2 relative to the surface 1b (or sensor array 1 in those embodiments where a sensor array is used to detect the spot 5). Embodiments include all means of redirecting the beam 4, including solid state electronic mirror arrays, such as those developed by Texas Instruments Corp., and mechanical or other optical redirecting devices well known to the art. The solid state mirror arrays developed by Texas Instruments Corp. may incorporate any number of mirrors, even thousands of them, each of them or groups of them being controlled by electronic means. This system belongs to a larger class known as microelectromechanical systems (MEMS). Because the beam can be programmed to quickly produce multiple pair inputs at various angles, for mathematical comparison, as described above, the controller/encoder 18 and/or computer 11 can calculate the position of the wand 2 in three dimensional space at each point in time. The beam may be directed in various patterns, and the pattern may be adapted so as to maximize the coverage on the sensor array 1 or surface 1b and minimize or eliminate the occasions in which the beam would fall incident outside the perimeter of either the sensor array 1 or the surface 1b.
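The trigonometric idea in (149) can be illustrated with a deliberately simplified 2-D case. Assuming the wand axis is normal to the surface and two beams leave it at known angles on opposite sides of the axis, the separation of their spots fixes the wand's height; the function below is a sketch under those assumptions, not the full 3-D pose solver the paragraph describes.

```python
# A simplified 2-D illustration of the angle-pair comparison in (149):
# two beams leave the wand at known angles from its axis and strike the
# surface at detected spots; comparing the known angles with the spot
# separation yields the wand's height above the surface.
import math

def wand_height(spot_separation: float, angle1: float, angle2: float) -> float:
    """Height of the wand above the surface, given two beams diverging
    at angle1 and angle2 (radians) on opposite sides of the wand axis,
    with the axis assumed normal to the surface."""
    return spot_separation / (math.tan(angle1) + math.tan(angle2))

# Beams at 10 and 15 degrees whose spots land 8.8 mm apart:
h = wand_height(8.8, math.radians(10), math.radians(15))
print(f"wand is {h:.1f} mm above the surface")   # about 19.8 mm
```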
(150) Other embodiments, such as that illustrated in FIG. 8a, may include a motor-driven rotating mirror or prism, in place of the mirror array 3e, which redirects the beam 4 and, for example, may project an ellipse (when stationary; open curves when the wand 2 is in motion) or other set of curves on the sensor array 1 or surface 1b. In such a case, at every point in time the controller/encoder 18 and/or computer 11 can calculate the position of the wand 2, as at each point in time the angle of the beam emitted, relative to the wand, is known and matched with its paired spot 5 projected on the sensor array 1 or surface 1b at that same point in time. Obviously, the rate of rotation must be sufficient so that every motion of the wand 2 is captured by the sensor array 1, or camera 3c. Since the controller/encoder 18 and/or the computer 11 direct the mirrors in the mirror array and control the angle at which each mirror is elevated from the mirror array 3e surface at every point in time, the angle at which the beam 4 is redirected, relative to the wand 2, is known, speeding the mathematical calculation described above. As illustrated in FIG. 8c, any number of beams may be actuated at the same time, some being pulsed or panned about, while others may stay on, and may be fixed or set at various angles. For example, FIG. 8c illustrates how mirrors 3d2 and 3d3 may be elevated at different angles, producing divergent beams 4 with a known angle between them. Also, by way of further example, in an embodiment in which the wands 2 incorporate a camera(s), which may be located on various parts of the wand or some other convenient location, some beams may stay on so that the camera 3c can record the surface patterns, which assist in locating the position of the wand 2, in three dimensional space, relative to the surface 1b.
(151) In other embodiments, as illustrated in FIG. 8d, shapes such as circles or ellipses are projected on the sensor array 1 or surface 1b by optical means, such that the changing shapes define the orientation and position of the wand 2b. For example, a single light emitter 3a may include a lens, or other optical device, which converts the light beam into a cone, which may project a ring of light, or a field of light having the same outside boundary as the ring type (herein called a filled ring), onto the sensor array 1 or surface 1b. In most embodiments a ring (not filled) is preferred, as the amount of data that requires processing is reduced; however, a filled ring or field may be used in some embodiments. The three dimensional orientation and position of the wand 2, 2b may be calculated by comparing the projected shape and the shape detected on the sensor array 1 or surface 1b, by various mathematical means well known to the art, such as projective geometry and trigonometry. For example, a light emitter 3a and dispersing lens which projects a circle onto the sensor array 1 or surface 1b when the longitudinal axis of the wand 2 is normal to the said sensor array 1 or surface 1b may project an ellipse, or another conic section, when tilted off the normal. The computer can use this change in shape to calculate the orientation and position of the wand 2 with respect to the said sensor array 1 or surface 1b. It can be readily appreciated that the shapes 5c, illustrated in FIG. 8d, are in fact equivalent to a string of points 5 illustrated in FIG. 1 and FIG. 6a. The advantage is that a single emitter 3a, including a dispersing lens(es), may be used rather than a series of emitters 3a. The other advantage is that there is greater redundancy. On the other hand, a few discrete points of light 5 require far less computation than many points, and where speed of movement is important, a few points of light are preferable. The embodiment illustrated in FIG. 8d may be used with a sensor array 1 in which the projected shape 5c, comprised of spots of light 5, is sensed and reported to the computer 11, or with one in which a camera 3c on the wand 2, or remote from it, is used to record the projected shapes 5c. As illustrated in FIG. 8d, where a camera 3c is used for detection, in addition to those means described above for determining the position of the wand 2b, a coded grid 1c may be applied to the surface of surface 1b. The grid may be coded, in a similar way to a bar code, such that the position of the shape 5c or points 5 can be viewed by the camera 3c and their absolute position on the surface can be reported by the camera to the computer 11, to calculate the orientation and the position of the wand 2b in three dimensional space. As illustrated in FIG. 8d, the bar code grid may be formed from two bar coded patterns, superimposed at right angles. Any spot on the surface 1b will then have a unique address, defined by the adjacent group of bars. The thickness of the bars and their relative separation from each other may be arranged to encode locational information, by means well known in the art. Since the computer 11 has the same grid in memory, it can make a simple pattern match, or use another method well known in the art, to determine the location of each point of light that forms the shape 5c, or for that matter any spot 5 which other embodiments of the invention rely on, such as those illustrated in FIG. 6a and FIG. 6a1.
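One first-order way to read the ring-to-ellipse idea: if the cone projects a circle at normal incidence, tilting the wand foreshortens the ring into an ellipse whose minor-to-major axis ratio is roughly the cosine of the tilt angle. The sketch below assumes that simplified model; a full solution would use projective geometry as the paragraph notes.

```python
# Sketch of recovering wand tilt from the projected ring in (151).
# Assumption: at normal incidence the cone projects a circle, and a
# tilt of phi foreshortens it into an ellipse with minor/major axis
# ratio approximately cos(phi). First-order model only.
import math

def tilt_from_ellipse(major_axis: float, minor_axis: float) -> float:
    """Tilt angle (radians) of the wand axis away from the surface normal."""
    ratio = min(minor_axis / major_axis, 1.0)   # guard against noise > 1
    return math.acos(ratio)

phi = tilt_from_ellipse(major_axis=12.0, minor_axis=10.4)
print(f"tilt is about {math.degrees(phi):.1f} degrees")   # roughly 30
```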
At any point on the surface, there will be a unique address defined by the two patterns immediately adjacent to the spots 5 and shapes 5c. These patterns will form the nearest address to each point at which the spots 5 and shapes 5c are incident. Since the computer has stored in memory the grid, it can then refine the position of each of the incident spots 5 and shape 5c, by noting the displacement of the said spots and shapes from the nearest bars, the exact position of which is in the computer memory. Some spots 5 and shapes 5c may by happenstance fall on the intersection of two bars, in which event the displacement calculation may not be necessary. It should be appreciated that while reference has been made to a bar code type of indexing system, other encoding schemes may be used in other embodiments and be within the ambit of the embodiments described herein.
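As a toy illustration of the coded-grid lookup just described, suppose the decoded bar indices give a coarse absolute address and the measured offset of the spot from those bars refines it. The bar pitch and names below are hypothetical.

```python
# Sketch of the coded-grid address-plus-offset refinement: the nearest
# decoded bars give a coarse absolute position, and the spot's measured
# displacement from them refines it. Bar pitch is an assumption.

BAR_PITCH_MM = 5.0   # hypothetical spacing between coded bars

def absolute_position(nearest_bar_x: int, nearest_bar_y: int,
                      offset_x_mm: float, offset_y_mm: float) -> tuple[float, float]:
    """Absolute surface coordinates of a detected spot 5.

    nearest_bar_x/y: indices of the nearest vertical/horizontal bars,
    decoded from the bar-code pattern seen by the camera 3c.
    offset_x/y_mm: measured displacement of the spot from those bars.
    """
    x = nearest_bar_x * BAR_PITCH_MM + offset_x_mm
    y = nearest_bar_y * BAR_PITCH_MM + offset_y_mm
    return (x, y)

# A spot just past bar column 14, row 7:
print(absolute_position(14, 7, offset_x_mm=1.2, offset_y_mm=-0.8))  # (71.2, 34.2)
```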
(152) FIG. 9 illustrates a wand 2b that includes a sliding finger control 19a with an associated controller/feedback device 19c, which functions in a similar manner to the movable finger hole 21, except that the sliding finger control 19a provides a convenient means of conveying linear motion to the robot tools. In the example illustrated in FIGS. 9 and 11, when the sliding finger control 19a is moved to position 19b, a distance of 19d, the controller/feedback device instructs the computer 11 to cause the tool, in this example 26, to move a given distance 19d in a similar linear direction, as assumed by 26a in FIG. 11. As mentioned above, the operator may set the ratio between the motion of the sliding finger control 19a and the consequent motion of the tool, thus these distances may be different, even though proportional. Simultaneously, the operator may squeeze the finger hole control 21, to position 21c, a displacement of 21d, to instruct the fingers of tool 26 to close a distance of 21d to assume the configuration of 26a in FIG. 11. As referred to above, haptic feedback may be provided by the controller/feedback device 21b by means described above.
(153) FIG. 10 illustrates the operator viewer 8, while the tool 26 is being manipulated, as illustrated in FIGS. 9 and 11. In this example the operator is manipulating wand 2/2b in his left hand. The left tool icon display 10h has bolded tool icon 26b, which indicates that the operator has chosen tool 26 to be controlled by his wand, such as that illustrated in FIG. 9. The right tool icon display 10h has bolded tool icon 27b, which indicates that the operator has chosen tool 27, as illustrated in FIGS. 12, 13 and 14, to be controlled by his wand 2, such as that illustrated in FIG. 9.
(154) FIGS. 12, 13, and 14 illustrate that rotary motion at the tools can be controlled from a wand, such as that illustrated in FIGS. 9 and 12. In this example of the invention, the movable finger hole control 21 can be squeezed by the operator, displacing it a distance of 21d to position 21c, which causes the tool 27 to close a distance of 21d, gripping bolt head 29 and assuming configuration 27a, as illustrated in FIG. 13. Simultaneously, the operator moves the finger slider control 19b a distance of 19d, to assume position 19a, to move the tool forward a distance of 19d, toward the bolt head 29, as illustrated in FIG. 13. The operator may then choose to rotate the bolt head by rotating roller control 20 a distance and direction 20b, to move the tool in direction and distance 20b, to assume position 27c. The controller/feedback device 20c senses the motion and position of the roller control 20, and may impart haptic feedback in a similar manner as described above in relation to the finger hole control 21.
(155) While the disclosure and examples of the invention above are in the context of a guiding device that is controlled by the operator's hands, and describe the attitude and position of the wand 2, 2b in three dimensional space, it should be understood that the guiding device may be used to describe the relative motion of a surface, where the wand or guiding device is fixed, or its position is otherwise known. For example, FIG. 15 and FIG. 16 illustrate the movement of the surface 14d1, 14d2 of the heart as it beats. In this case the components of the wand 2, 2b are incorporated into the distal end of the camera tool 15c, although they may be incorporated into any other tool as well, and come within the ambit of the invention. The emitter cluster 3 and emitters 3a may be seen in greater detail in FIG. 18. It should be noted that this example of the emitter cluster 3, which uses any number of emitters 3a, can be replaced with any of the other types of emitter clusters, including mirror arrays or articulating mirrors and prisms, referred to above. The angles between the beams 4, including θ1, θ2, and θ3, and the angles between the beams 4 and the tool 15c, as illustrated in FIG. 18, are known to the computer 11 for use in calculating the surface topology 14d1 and 14d2 as illustrated in FIG. 18. As illustrated in FIG. 17, the stereo cameras 3c1 and/or 3c2 record the spots 5a and 5b projected on the surface of the heart 14d1, 14d2. It can readily be appreciated that as the heart beats, the surface 14d1 and 14d2 moves up and down, and the spots projected on the surfaces, including 5a and 5b, change their distance from their neighbors 5a and 5b on their respective surfaces. This distance change, along with the angle of the beam, is recorded by the camera or cameras 3c1 and/or 3c2, and this information is processed by the computer 11, which computes the distance of those parts of the surface from the distal end of the camera tool 15c, using trigonometric and other mathematical methods well known to the art. It should be noted that this information also provides the distance between the surface and any other tool, such as 15b and 15d, as illustrated in FIG. 15 and FIG. 16, as the relative position of the tools is known by positional sensors incorporated into the said tools. The more spots 5 (in this illustration referred to as 5a and 5b to denote their change in position) that are projected at any given time, the greater will be the definition of the changing topology of the surface and its distance from the distal end of the tools 15a, 15b and 15c, and any other tools that may be used. Various shapes or patterns, such as grid patterns, may be projected onto the surface of the heart by various optical means herein described, or well known to the art. These shapes or patterns may be considered as strings of spots 5, 5a and 5b.
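For a single beam at a known angle to the camera axis, the structured-light depth idea in (155) reduces to dividing the spot's lateral shift by the tangent of that angle. A sketch under that simplified geometry:

```python
# Sketch of the depth calculation in (155): a beam 4 leaves the tool at
# a known angle to the camera axis, so a lateral shift of its spot on
# the heart surface corresponds to a change in surface height. The
# single-beam geometry here is an assumption for clarity.
import math

def depth_change(spot_shift_mm: float, beam_angle_rad: float) -> float:
    """Change in surface-to-tool distance implied by a lateral spot shift.

    spot_shift_mm: how far the spot 5a moved, already converted to
    millimetres on the surface.
    beam_angle_rad: angle between the beam and the camera/tool axis.
    """
    return spot_shift_mm / math.tan(beam_angle_rad)

dz = depth_change(spot_shift_mm=0.9, beam_angle_rad=math.radians(20))
print(f"surface moved {dz:.2f} mm toward the tool")   # about 2.47 mm
```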
(156) As the heart beats, and the distance between the distal ends of the tools and the heart surface 14d1 and 14d2 varies, the computer can instruct the tool arms to vary their length to keep the distance between the surface and the distal end of the tools constant (assuming the operator has not instructed any change in tool position). In the example illustrated in FIG. 15 and FIG. 16, the arms are telescoping; for example, the arm 15c, the camera arm, has a distal shaft which can slide in and out of the main arm 15d. In FIG. 15 the distal shaft 15c1 is relatively extended, so that it is located in an ideal position to view the distal ends of the other tool shafts, 15b1 and 15d1, which are positioned, in this example, immediately above the surface 14d1 of the heart. As the surface of the heart moves up, as illustrated in FIG. 16 and FIG. 17, the movement is detected by the changing lateral separation between neighboring dots, such as dots 5a and 5b, on their respective surfaces. The computer may use this information, using trigonometric calculations and other mathematical techniques well known to the art, to direct the arms to move up sufficiently so as to keep the distal ends of the tools, 15b2, 15c2 and 15d2, at the same relative distance to the heart surface 14d2. As can be appreciated, this dynamic adjustment of the tool arm length can effectively compensate for the motion of the beating heart, allowing the operator to control other tool motions (which overlay the compensating motions) and which actually do the work, just as if the heart were stationary. As mentioned above, lateral movements of the heart surface 14d1 and 14d2 can also be compensated for by using texture and pattern recognition methods utilizing the surface that is illuminated by the spots 5a, 5b and 5 (in addition to areas not so illuminated). For this purpose, the spots 5 may be considerably larger, to incorporate more textural or pattern information. The vertical and lateral means of characterizing the motions of the heart surface can then be integrated by the computer 11 and any motion of the heart surface can be fully compensated for, effectively freezing the heart motion, to allow for precise manipulation of the tools, for example, to cut and suture the heart tissue. The integration of this information will provide information on the bending, expansion and contraction of the surface, in addition to (in this example) the changes in elevation of the surface. Fortunately, as the surface that is being worked on by the surgeon is small, this additional characterization (i.e. bending, expansion and contraction) is most often not required. It should be noted that as the camera tool 15c is making compensating motions, the operator's view of the heart surface will remain the same, i.e. the heart will appear to virtually stop, and any more complex movements, i.e. stretching and shrinking and localized motions, may be compensated by software manipulating the image, by means well known to the art. Similarly, rather than the camera tool 15c making compensating motions, the image presented to the operator can, by optical and electronic means, be manipulated to give the same effect. For example, in some embodiments of the invention, the camera lens may be zoomed back as the surface of the heart advances toward it, giving the effect of an approximately stationary surface. The operator may of course choose to override any or some compensating features of the system.
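The distance-holding behaviour in (156) can be pictured as a simple proportional control loop. The gain, set-point, and treatment of operator offsets below are assumptions for illustration, not the patent's control law.

```python
# Sketch of the distance-holding loop in (156): each cycle, measure the
# current tool-to-surface distance and extend or retract the telescoping
# arm to restore the set-point, while operator commands are overlaid on
# top. Gains and names are hypothetical.

SETPOINT_MM = 10.0   # desired gap between tool tip and heart surface
GAIN = 0.8           # proportional gain; a real system would be tuned

def compensation_step(measured_distance_mm: float,
                      operator_offset_mm: float = 0.0) -> float:
    """Arm-length correction (mm) for one control cycle.

    measured_distance_mm: distance from the spot-based depth computation.
    operator_offset_mm: intentional motion commanded by the operator,
    which the compensator must not fight.
    """
    error = measured_distance_mm - (SETPOINT_MM + operator_offset_mm)
    return GAIN * error   # positive -> extend the arm toward the surface

# Heart surface rose 2.5 mm (gap shrank to 7.5 mm): retract by about 2 mm.
print(compensation_step(7.5))   # -2.0
```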
The operator may also choose to select the area of the surface of the heart or other body for which motion compensation is required. This may involve selecting a tool, such as the sensor cluster 3, with varying emitter 3a angles, or instructing the computer to compute only those changes within a designated patch, which might be projected on the operator viewer 8. In most cases the area of relevant motion will be small, as the actual surgical work space is usually small. The operator, or the system, may periodically scan the surface to define its curvature, especially at the beginning of a procedure.
(157) The stereo cameras 3c1 and 3c2 may also provide distance information, using parallax information and trigonometric and standard mathematical methods well known in the art of distance finders. Other optical methods of distance determination, such as those used in auto-focusing cameras and medical imaging, and well known to the art, may be used as well, such as Doppler detection and interferometry, and be within the ambit of the invention. This information, acquired by all these methods, may be used to supplement or backstop the other distance information, which is acquired by the methods described above and integrated by the computer 11. It should be noted that embodiments that use one or more of these methods are within the ambit of the embodiments described herein.
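The stereo rangefinding mentioned in (157) follows the standard relation Z = f * B / d for focal length f, baseline B, and disparity d. A minimal sketch with illustrative numbers:

```python
# Sketch of the stereo-parallax distance estimate in (157): with two
# cameras 3c1 and 3c2 separated by a known baseline, the disparity of a
# spot between the two images gives its distance by the standard
# rangefinder relation. All numbers are illustrative.

def stereo_depth(focal_length_mm: float, baseline_mm: float,
                 disparity_mm: float) -> float:
    """Distance to a spot from its disparity between the stereo images."""
    return focal_length_mm * baseline_mm / disparity_mm

# f = 4 mm lens, 6 mm baseline, 0.3 mm disparity on the sensors:
print(f"{stereo_depth(4.0, 6.0, 0.3):.0f} mm")   # 80 mm
```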
(158) In some embodiments, the computer 11 may receive information from the electrocardiogram (ECG) 14c, which has sensors 14e on the patient's abdomen and which indicates that an electrical pulse has been detected, which will result in a muscular response of the heart tissue, and hence a change in the shape and the position of the heart surface. The time delay between receiving the electrical triggering pulse and the actual resulting heart muscular activity, even though small, allows the system to anticipate the motion and better provide compensating motions of the length and attitude of the robot's tools 15b, 15c, and 15d. The system software can compare the electrical impulses, as detected by the ECG, with the resultant changes in the shape and position of the heart wall, as observed by the methods described above, to model the optimum tool motion that is required to virtually freeze the heart motion. In combination with the methods of motion compensation described above, the inclusion of the ECG initiating information generally allows for a smoother response of the tools to the motion of the surface they are accommodating.
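The ECG feed-forward in (158) amounts to learning the electromechanical delay and scheduling compensation that far after each detected pulse. The running-average update below is one plausible sketch; the smoothing factor and class interface are invented.

```python
# Sketch of the ECG feed-forward idea in (158): measure the lag between
# each detected R-peak and the resulting wall motion, keep a running
# average, and schedule the compensating arm motion that far ahead of
# the next beat. Smoothing factor and initial guess are assumptions.

class EcgAnticipator:
    def __init__(self, smoothing: float = 0.2):
        self.mean_lag_s = 0.05   # initial guess for electromechanical delay
        self.smoothing = smoothing

    def observe(self, r_peak_time_s: float, motion_onset_time_s: float) -> None:
        """Update the learned delay from one beat's ECG and observed motion."""
        lag = motion_onset_time_s - r_peak_time_s
        self.mean_lag_s += self.smoothing * (lag - self.mean_lag_s)

    def schedule_compensation(self, r_peak_time_s: float) -> float:
        """Time at which the tools should begin their compensating motion."""
        return r_peak_time_s + self.mean_lag_s

ant = EcgAnticipator()
ant.observe(r_peak_time_s=12.00, motion_onset_time_s=12.06)
print(f"start compensating at t = {ant.schedule_compensation(12.80):.3f} s")
```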
(159) It can be readily appreciated that the system herein described allows many surgical procedures to be conducted without resort to a heart lung machine or to other heart restraining devices, all of which can have serious side effects.
(160) It should be readily appreciated that embodiments that compensate for the motion of bodies being manipulated, whether fine grained or coarse grained (as chosen by the operator), inherently reduce the effects of latency between the operator's instructions and the motion of the tools which he guides. This effective reduction or elimination of latency means that telesurgery over great distances, where latency increases with distance, becomes more practical. The system's software distinguishes between operator generated motion, such as the lifting of a tissue flap, and non-operational motion, such as the beating of the heart. Generally, the former is much finer grained and the latter larger grained. For example, the software may set the compensating routines to ignore small areas of motion where the procedure is being executed, such as the suturing of a flap, but compensate for grosser motions, such as the beating of the heart, which causes a large surface of the heart to move. The design of this software, and the relative sizes of the body to which the compensation routine responds or ignores, and their location, will depend upon the particular procedure for which the system is being utilized.
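One crude way to realize the grain-size distinction in (160) is an area threshold: motion confined to a small patch is presumed operator-generated and ignored, while motion spanning a large area is compensated. The threshold below is hypothetical and would be procedure-specific.

```python
# Sketch of the grain-size test in (160): motion confined to a small
# patch (e.g. the flap being sutured) is treated as operator-generated
# and ignored by the compensator, while motion spanning a large area is
# treated as physiological and compensated.

AREA_THRESHOLD_MM2 = 400.0   # hypothetical cut-off between fine and gross

def should_compensate(moving_area_mm2: float) -> bool:
    """True if the detected motion is gross (e.g. the heartbeat) and the
    compensating routines should respond to it."""
    return moving_area_mm2 > AREA_THRESHOLD_MM2

print(should_compensate(50.0))     # False: local suturing motion, ignore
print(should_compensate(2500.0))   # True: whole-surface heartbeat, compensate
```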
(161) FIG. 21 illustrates an embodiment which includes additional means to overcome temporal latency between the operator's instructions and the actual tool movements, any of which may be used separately or in combination with the others. FIG. 21 illustrates the operator's view of the worksite as viewed through the viewer 8 and eyepieces 9, showing the superimposed tool cursors 15d3 and 15b3 which indicate the operator's intended position of the tools at the worksite. These are not ordinary cursors: they show the exact intended position of the working edges of the tools they control. FIG. 21 also illustrates that the operator also sees the latest reported actual position of the tools 15d2 and 15b2 at the worksite, the difference between the two being due to temporal latency. The superimposed tool cursors 15d3 and 15b3 can be electronically superimposed onto the operator's view; these show the intended position, while 15d2 and 15b2 show the most recently reported actual position. In most preferred embodiments the cursors are rendered in 3-D and change perspective to conform to the 3-D view of the worksite; they are simple outlines, so as not to be confused with the images of the actual tools, and may be manually turned on and off, or automatically presented when the system detects that latency has exceeded a preset threshold. The intended tool position cursors 15d3 and 15b3 may also change color or markings to indicate the depth to which they have passed into the tissue, as indicated at 15d4 in FIG. 21. The cursors 15d3 and 15b3 may also change color or markings in response to forces acting on the actual tools 15d2 and 15b2, so as to prevent the operator from exceeding a safe threshold for the particular substrate he is manipulating.
(162) FIGS. 29a to 29e illustrate an example method of limiting the effects of latency in the transmission of tool instructions and of movement of the body relative to the position of the tools at the remote worksite. Each video image at the worksite, FIG. 29b, is recorded, time coded, and transmitted to the operating theatre, along with the time code for each video frame. The operator at the operating theatre then sees the video frame, FIG. 29a, and causes the tool 15d2 to advance along the incision 14a, which he views as an icon 15d3 in FIG. 29c, the displacement between 15d3 and 15d2 being the measure of latency. The positions of the cursors, that is, the intended tool positions, are transmitted to the remote worksite along with the corresponding frame time-code of the operator's video frame at each time step. In most embodiments of the invention, the time-code is originally encoded onto the video stream at the remote worksite by the remote worksite robot controller 15, which also saves in memory the corresponding video frame(s). As a separate process, and at each time step at the remote worksite, the positions of the tools are adjusted to accommodate to their intended position relative to the changing position of the body, as described above, which is illustrated as the accommodation of tool position 45 in FIG. 29d, and this becomes the real time image for the comparison to follow. Upon receiving each instruction from the operator, the worksite controller 15 then retrieves from memory the corresponding video frame and notes the intended machine instruction relative to it. It then compares this frame, FIG. 29b, retrieved from memory, with the real time image at the remote worksite, FIG. 29d, and carries out the intended machine instruction embedded in FIG. 29c, resulting in the performance of the intended instruction as illustrated in FIG. 29e. This comparison may be accomplished by pattern recognition methods well known to the art which note the relative location of such features as protruding veins and arteries and other visible features. In some embodiments, markers suitable for optical marker recognition 40 are placed on or detachably attached to the operation surface, such as the heart 14d, to assist in tracking movements of the worksite. While the normalization process, including pattern recognition and other means noted above, imposes a system overhead on computations, the area that is monitored and the precision of monitoring can be adjusted by the operator. The area immediately adjacent to the present tool position can have, for example, fine grained monitoring and normalization, whereas more peripheral areas can have, for example, coarser grained treatment.
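The frame/time-code bookkeeping in (162) might look like the following sketch: the worksite controller stamps and stores outgoing frames, then re-registers each incoming instruction against the live image. The class, its memory bound, and the stubbed registration step are all assumptions.

```python
# Sketch of the time-code matching in (162): the worksite controller 15
# stamps each outgoing video frame, keeps it in memory, and when an
# operator instruction arrives tagged with the frame it was issued
# against, retrieves that frame to re-register the instruction onto the
# current (moved) scene. Pattern registration itself is stubbed out.

from collections import OrderedDict

class WorksiteController:
    def __init__(self, history: int = 300):
        self.frames: OrderedDict[int, bytes] = OrderedDict()
        self.history = history

    def stamp_and_store(self, time_code: int, frame: bytes) -> None:
        self.frames[time_code] = frame
        while len(self.frames) > self.history:        # bound memory use
            self.frames.popitem(last=False)

    def apply_instruction(self, time_code: int, instruction: dict,
                          live_frame: bytes) -> dict:
        """Re-register an operator instruction onto the live image."""
        reference = self.frames.get(time_code)
        if reference is None:
            raise KeyError("reference frame aged out; reject instruction")
        dx, dy = register_frames(reference, live_frame)   # pattern matching
        x, y = instruction["target"]
        return {**instruction, "target": (x + dx, y + dy)}

def register_frames(reference: bytes, live: bytes) -> tuple[float, float]:
    """Placeholder for the pattern-recognition step; returns the (dx, dy)
    displacement of tracked features between the two frames."""
    return (0.0, 0.0)
```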
(163) As illustrated in FIG. 21 and FIG. 29c, the operator's intended movement of the tools, as illustrated to him by cursors 15b3 and 15d3, may diverge from the actual tools that he views, 15b2 and 15d2, the difference being the latency between the two. The operator will immediately know the degree to which latency is occurring, and he may choose to slow his movements to allow the actual tools, 15b2 and 15d2, to catch up. In some embodiments the system stops in the event a preset latency threshold is exceeded. It is important to note that the operator, when he stops the tool, will know where it will stop at the worksite. For example, in FIG. 21 the operator is making an incision which must stop before it transects artery 38. Even though the tool 15d2 will continue to move forward, it will stop when it meets the intended tool position indicated by cursor 15d3, just short of the artery 38. While this disclosure has described tools resembling a scalpel and forceps and their corresponding cursors, it should be understood that these are merely examples of a large class of embodiments, which include all manner of tools and instruments and their corresponding cursors, and all are within the ambit of this invention.
(164) FIG. 19 and FIG. 20 illustrate two exemplar passive haptic feedback modules that can be incorporated into the controller/feedback controllers in the wand 2, such as 19c, 20c and 21b. Other haptic feedback devices, well known to the art, whether active or passive, may be incorporated into the controller/feedback controller, and all such systems are within the ambit of the invention.
(165) FIG. 19 illustrates a typical passive haptic feedback device 30 in which the flow of an electro-rheological or magneto-rheological fluid is controlled by an electrical or magnetic field between elements 36, which can be electrodes or magnetic coils. The control of the flow of this fluid affects the speed with which piston 31a can move back and forth through the cylinder 31. The piston transmits motion and forces to and from the various control input devices on the wand 2, for example, the movable finger hole 21, the finger slider control 19b and the roller control 20. The total displacement of the piston 19d may, for example, be the same as the displacement 19d of the finger slider control 19b, or may vary depending upon the mechanical linkage connecting the two. The working fluid moves 35 between each side of the piston 31a through a bypass conduit 32, where its flow may be restricted or eased by varying the electrical or magnetic field imposed on the electro-rheological or magneto-rheological fluid. The controller/encoder modulates the electrical energy transmitted by transmitting means 34a to the electrodes or coils 36. In other passive haptic feedback devices a simple electromechanical valve 37 controls the flow 35 of the working fluid, which may for example be saline or glycerin, as illustrated in FIG. 20. The controller/encoder modulates the electrical energy transmitted to the electromechanical valve 37, which is transmitted by transmitting means 37a, as illustrated in FIG. 20.
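The passive haptic control in (165) reduces to mapping a desired resisting force to a field strength across the fluid. The linear map and limits below are invented for illustration.

```python
# Sketch of the passive haptic control in (165): the controller/encoder
# varies the field across the magneto-rheological fluid to set how hard
# the piston 31a resists motion. The linear force-to-voltage map and
# its limits are assumptions.

MAX_FIELD_V = 24.0       # hypothetical drive limit for the coils 36
FORCE_PER_VOLT = 0.5     # hypothetical newtons of resistance per volt

def field_for_resistance(desired_force_n: float) -> float:
    """Drive voltage for the electrodes/coils 36 that makes the piston
    resist with roughly desired_force_n newtons."""
    volts = desired_force_n / FORCE_PER_VOLT
    return max(0.0, min(volts, MAX_FIELD_V))   # clamp to the drive range

# Computer 11 asks for 8 N of resistance as the jaws meet tissue:
print(field_for_resistance(8.0))   # 16.0 volts
```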
(166) In both the haptic feedback devices 30 illustrated in FIGS. 19 and 20, a motion and position sensor 33, transmits information on the motion and position of the piston 31a by transmission means 34 to the controller/encoder 18. The controller/encoder 18 receives instructions wirelessly 16a, or directly from the computer, and sends motion and position information received from the motion and position sensor 33 to the computer.
(167) FIG. 22 illustrates a wand 2b which may be attached to any body part, tool, or other object by means of connectors 42 and 42a, which have complementary indexing means 42c and 42b to ensure their proper alignment. By such means, and similar connecting means well known to the art, these wands 2b may be placed on a body part, such as the surface of the heart 14d1, to project the beams 4 to a sensor array 1 or surface 1b (not shown) and thereby establish the orientation and position of the heart as it moves. Similarly, a wand 2b may be connected to any object to determine its position and orientation in space, together with the means hereinbefore described, in cooperation with computer 11.
(168) FIG. 23 illustrates how multiple wands 2i, 2ii may be used in combination to provide accurate alignment between two or more objects in space. In this example, FIG. 23, one wand 2i is connected to a drill 44. The other wand 2ii is connected to a bone nail 45 with a slotted proximal end, for indexing position, which has a hidden hole 46 that will receive a bolt once a hole is drilled through the bone and the hidden hole 46 in direction 41. Since the position and orientation of the hidden hole 46 relative to the end of the bone nail connected to the wand 2ii is known, the operator can drill a hole along an appropriate path, which is provided by computer 11 calculating the appropriate path and graphically illustrating it with a graphical overlay of the bone shown on viewer 8. The position of the wands 2i and 2ii in space is determined by those means hereinbefore described. While FIG. 23 illustrates a single sensor array 1, it should be understood that any number of sensor arrays or surfaces 1b might be used, so long as their position and orientation are known to the computer 11, and in the case of surface 1b, the camera 3c, which would be incorporated into the assembly as illustrated in FIG. 22, can identify each screen by means of identifying barcodes or other identifying marks. In FIG. 23, the sensor array 1 is above the operating space. FIG. 23 also illustrates two connectors 42a that are fixed to a calibrating table 43, which is calibrated in position to sensor array 1. This permits the wands 2i and 2ii to be connected to the said connectors 42a on calibrating table 43 to ensure accurate readings when ambient temperature changes might affect the relative angles of the beams 4, or the distance between emitters 3a. The computer 11 can recalibrate the position of the wands 2i and 2ii by noting the pattern of spots 5 that are projected onto the sensor array 1. While the example shown in FIG. 23 illustrates two wands 2i and 2ii, any number of wands may be used for purposes of comparing the position of objects to which they are connected, or changes in the position of those objects over time. For example, one wand might be connected to the end of a leg bone, while another might be attached to a prosthesis, and the two might be brought together in perfect alignment. Another example would be connecting a wand 2i to a probe of known length, and another wand 2ii to a patient's skull, in a predetermined orientation. The wand 2i could then be inserted into the brain of the patient and the exact endpoint of the probe could be determined. The wand 2i could also be attached to the tools 15b1, 15c1 and 15d1, as illustrated in FIG. 15, to ensure perfect positioning of the tools. For example, one tool might have a drill attached, such that the drill illustrated in FIG. 23 is controlled robotically and in coordination with the position of the bone nail 45 of FIG. 23. Due to modern manufacturing processes, the wand 2b illustrated in FIG. 22, the wand 2i illustrated in FIG. 23, and the sensor array assemblies 1d illustrated in FIG. 24 can be made to be very small and placed as an array on objects such as cars, bridges or buildings to measure their stability over time. Others might be connected to the earth to measure seismic or local movements of the soil.
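The alignment computation in (168) boils down to transforming the hidden hole's known offset along the nail into the coordinate frame reported by wand 2ii. A 2-D sketch conveys the idea; a real implementation would use full 3-D rotations.

```python
# Sketch of the alignment computation in (168): the pose of wand 2ii
# gives the bone nail's position and orientation, and the hidden hole's
# manufactured offset along the nail is transformed into that frame so
# computer 11 can display the drill path. 2-D simplification only.
import math

def hidden_hole_position(nail_tip_xy: tuple[float, float],
                         nail_angle_rad: float,
                         hole_offset_mm: float) -> tuple[float, float]:
    """World position of the hidden hole 46, given the nail pose reported
    by wand 2ii and the known offset of the hole from the nail tip."""
    x0, y0 = nail_tip_xy
    return (x0 + hole_offset_mm * math.cos(nail_angle_rad),
            y0 + hole_offset_mm * math.sin(nail_angle_rad))

# Nail tip at (120, 40) mm, angled 5 degrees, hole 85 mm along the nail:
print(hidden_hole_position((120.0, 40.0), math.radians(5), 85.0))
```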
These wands 2b, 2i might also be connected to scanners to allow for the scanning of three dimensional objects: since these wands can provide the information as to the scanner's position in space, the scanning data can be assembled into a virtual three dimensional output. Since the wands 2b and 2i may be put on any object, the uses for assembling objects are countless.
(169) While FIG. 23 illustrates a system in which the camera 3c is located in the wand 2, it should be understood that a surface 1b, as illustrated in FIG. 6a1, or a separate camera 3c could be used, as illustrated in FIG. 6a2, all of which can detect the position of the incident spots 5.
(170) FIG. 24 illustrates a similar arrangement of wands 2i and 2ii as illustrated in FIG. 23, but the wand 2ii is replaced with sensor array assembly 1d. The sensor array assembly 1d uses a sensor array 1, which senses the positions 5 of the incident beams 4 and reports their coordinates by connection 11a to controller/encoder 18 and then wirelessly to the computer 11 (not shown). This system provides the same positional information as the system illustrated in FIG. 23, except that the large sensor array in FIG. 23 has been replaced with a much smaller one in FIG. 24, making it more economical for certain purposes.
(171) Referring to FIG. 25, a cross-sectional, perspective view illustrates two combination wand and sensor array assemblies 47 which have been daisy chained with two other combination units (not shown). Such arrays may also be combined with sensor arrays 1 or surfaces 1b for greater accuracy. Such arrays can be used to detect and report the relative movement of parts of structures, to which they are attached, such as bridges, ships and oil pipelines.
(172) While embodiments have been described with respect to a system comprised of three tools 15b, 15c, and 15d, it is to be understood that any number of tools and any number of wands 2 may be used in such a system.
(173) While embodiments have used examples of tools that a robot could manipulate, it is to be understood that any tool, object or body may be moved or directed by the methods and devices described by way of example herein, and all such embodiments are within the ambit of the embodiments herein.
(174) While embodiments have been described as being used as a surgical robot, it is to be understood that this use is merely used as a convenient example of many uses to which the robot could be employed, all of which come within the ambit of the embodiments described herein.
(175) While embodiments have been described as being used to manipulate tools, it is to be understood that the methods and devices described by example herein may be used to manipulate virtual, computer generated objects. For example, embodiments may be used for assembling and/or modeling physical processes, such as molecular modeling and fluid dynamics modeling to name just a few.
(176) It is to be understood that modifications and variations to the embodiments described herein may be resorted to without departing from the spirit and scope of the invention, as those skilled in the art will readily understand. Such modifications and variations are considered to be within the purview and scope of the invention and appended claims.
Claims
1. A hand controller for a robotic surgery system, the hand controller comprising: a body shaped to be grasped by a user's hand and extending between a proximal end and a distal end; a first control pivotally attached to the body, the first control configured to permit a user to control a first function of a surgical instrument; and a second control supported by a side of the body at a location between the proximal end and the distal end, the second control configured to move in a direction parallel to a length of the body to permit a user to control, via a linear input, a second function of the surgical instrument, the second function being different from the first function.
2. The hand controller of claim 1, wherein the first control comprises at least one lever.
3. The hand controller of claim 2, wherein the first function of the surgical instrument comprises adjusting a distance between first and second jaws of an end effector of the surgical instrument.
4. The hand controller of claim 3, wherein the distance between the first and second jaws of the end effector of the surgical instrument is adjusted responsive to movement of the at least one lever.
5. The hand controller of claim 1, wherein the first control comprises at least one finger opening.
6. The hand controller of claim 1, wherein the second function of the surgical instrument comprises a linear movement of an end effector of the surgical instrument.
7. The hand controller of claim 6, wherein: a linear movement of the second control causes the linear movement of an end effector.
8. The hand controller of claim 1, wherein the second control comprises a finger slider.
9. The hand controller of claim 1, wherein control of at least one of the first or second function causes provision of a feedback to the user.
10. The hand controller of claim 9, wherein the feedback comprises haptic feedback.
11. The hand controller of claim 10, wherein the first function of the surgical instrument comprises adjusting a distance between first and second jaws of an end effector of the surgical instrument, and wherein the haptic feedback comprises pinching feedback.
12. The hand controller of claim 1, further comprising a spring configured to bias the first control.
13. The hand controller of claim 12, wherein the spring is configured to bias the first control to a position in which the first function of the surgical instrument is not activated.
14. The hand controller of claim 13, wherein the position comprises an open position.
15. A hand controller for a robotic surgery system, the hand controller comprising: a body shaped to be grasped by a user's hand and extending between a proximal end and a distal end; a first control supported by the body, the first control configured to cause activation of a first function of a surgical instrument; and a second control supported by a side of the body at a location between the proximal end and the distal end, the second control configured to move along the side of the body to cause, via a user's linear input, activation of a second function of the surgical instrument, the second function being different from the first function.
16. The hand controller of claim 15, wherein the first control comprises a lever pivotally attached to the body.
17. The hand controller of claim 15, wherein the first function of the surgical instrument comprises adjusting a distance between first and second jaws of an end effector of the surgical instrument.
18. The hand controller of claim 15, wherein the second function of the surgical instrument comprises a linear movement of an end effector of the surgical instrument.
19. The hand controller of claim 18, wherein: a linear movement of the second control causes the linear movement of an end effector.
20. A method of operating a robotic surgery system comprising a hand controller, the method comprising: activating a first function of a surgical instrument by turning about a pivot point a first control of the hand controller, the first control pivotally coupled to a body of the hand controller that extends between a proximal end and a distal end; and activating a second function of the surgical instrument by linearly moving a second control of the hand controller along a side of the body, the second control being supported on the side of the body at a location between the proximal end and the distal end, the second function being different from the first function.
Noubar B. Afeyan,
Mmmmm Moderna
If only!
A permanent J-code, J1449, has been issued for ROLVEDON by the U.S. Centers for Medicare & Medicaid Services (CMS), effective as of April 1, 2023.
“This is an important milestone in the ROLVEDON launch. A permanent J-code will enable a more efficient and predictable reimbursement in the outpatient setting. The combination of a permanent J-code on April 1, 2023 and ROLVEDON’S inclusion in the National Comprehensive Cancer Network® Supportive Care Guidelines (NCCN Guidelines) announced on December 6, 2022 are key elements in establishing brand awareness and building customer confidence in our novel product,” said Tom Riga, President and Chief Executive Officer of Spectrum Pharmaceuticals.
J-codes are permanent reimbursement codes used by commercial insurance plans, Medicare, Medicare Advantage, and other government payers for Medicare Part B drugs like ROLVEDON that are administered by a physician. Claims submission and documentation are simplified with a permanent J-code, facilitating and streamlining the billing and reimbursement process.
why did they turn the lights back on at the site?
Why was the CEO's term extended until March 3?
MMMMMMMMMMMMMMMMMMMMMMerger plan?
https://www.rubiustx.com/about-us/
$103.9 million cash on September 30th
- $75.7 million debt repayment
= $28 million
+ $18 million real estate asset sale
= $46 million
- $6 million (or less) for leasing
= $40 million
+ $5/6 million equipment sale
= $45/46 million
Since our inception, we have not recorded any income tax benefits for the net losses we have incurred in each year or for our research and development tax credits generated, as we believe, based upon the weight of available evidence, that it is more likely than not that all of our net operating loss, or NOL, carryforwards and tax credits will not be realized. As of December 31, 2021, we had U.S. federal and state net operating loss carryforwards of $534.2 million and $534.8 million, respectively, which may be available to offset future taxable income. The federal NOLs include $37.2 million, which expire at various dates through 2037, and $497.0 million, which carryforward indefinitely. The state NOLs expire at various dates through 2041. As of December 31, 2021, we also had U.S. federal and state research and development tax credit carryforwards of $22.7 million and $15.6 million, respectively, which may be available to offset future tax liabilities and begin to expire in 2034 and 2026, respectively. We have recorded a full valuation allowance against our net deferred tax assets at each balance sheet date. -
we need to deduct the salaries, which I don't think are more than $10 million
plus perhaps the patented platform could have some value
I don't think the cash is less than 30 million now
why does anyone sell?
BOOM!
Apparatus And Method For Controlling An End-effector Assembly
DOCUMENT ID: US 11571820 B2
DATE PUBLISHED: 2023-02-07
INVENTORS: Butt; Eric (Orange, CT, US); Ransden; Jeff (Fairfield, CT, US); Shvartsberg; Alexander (Oakville, CA, US); Rayman; Reiza (Toronto, CA)
APPLICANT: Titan Medical Inc. (Toronto, CA), type: assignee
ASSIGNEE: Titan Medical Inc. (Toronto, CA), type code: 03
APPLICATION NO: 16/676311
DATE FILED: 2019-11-06
DOMESTIC PRIORITY (CONTINUITY DATA): continuation of US 14262221 (filed 2014-04-25, now US 10471607), itself a continuation of PCT/CA2011/001225 (filed 2011-11-04, pending)
US CLASS CURRENT: 1/1
CPC CURRENT (CPCI): A61B 34/70 (2016-02-01); B25J 18/00 (2013-01-01)
CPC CURRENT (CPCA): Y10T 74/20329 (2015-01-15); A61B 2017/2908 (2013-01-01); Y10T 74/20311 (2015-01-15); Y10S 901/23 (2013-01-01); A61B 2017/00309 (2013-01-01); A61B 2017/2905 (2013-01-01); A61B 2034/301 (2016-02-01); B25J 15/0213 (2013-01-01); A61B 2017/00314 (2013-01-01); B25J 15/0206 (2013-01-01); A61B 2034/305 (2016-02-01); A61B 2017/00526 (2013-01-01); A61B 2017/2927 (2013-01-01)
Abstract
An apparatus for controlling an end-effector assembly is provided. The apparatus includes an elongated element configured to engage the end-effector assembly and a drive assembly. A first motion transfer mechanism is disposed at a first end of the elongated element. The first motion transfer mechanism is configured to transfer a rotational motion of the elongated element to a motion of the end-effector assembly. A second motion transfer mechanism is disposed at a second end of the elongated element. The second motion transfer mechanism is configured to transfer a motion of the drive assembly to the rotational motion of the elongated element.
Background/Summary
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
(1) Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
FIELD
(2) The present specification relates in general to the field of robotic instruments, and more particularly, to a robotic system for use in surgery.
BACKGROUND
(3) With the gradual transition of medical surgery from the conventional process of making a long incision in the patient's body to the next generation of surgery, i.e. minimally invasive surgery (MIS), continuous research is under way to develop and integrate robotic instruments into a system which can be used for MIS purposes. Such integration can help a surgeon perform a surgery in a substantially error-free manner, and at the same time work in a realistic environment that gives the surgeon the feel of conventional surgery.
SUMMARY
(4) In accordance with an aspect of the invention, there is provided an apparatus for controlling an end-effector assembly. The apparatus includes a first elongated element having a first end and a second end. The first end of the first elongated element is configured to engage the end-effector assembly. The second end of the first elongated element is configured to engage a drive assembly. The apparatus further includes a first motion transfer mechanism disposed at the first end of the first elongated element. The first motion transfer mechanism is configured to transfer a rotational motion of the first elongated element to a first motion of the end-effector assembly. Furthermore, the apparatus includes a second motion transfer mechanism disposed at the second end of the first elongated element. The second motion transfer mechanism is configured to transfer a first motion of the drive assembly to the rotational motion of the first elongated element.
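Paragraph (4)'s two motion-transfer stages can be illustrated as a simple gear train from the drive assembly to the working member. The tooth counts below are hypothetical; the patent only requires mating sets of teeth.

```python
# Minimal sketch of the two motion-transfer stages in (4): the drive
# assembly turns the elongated element through one gear mesh, and the
# element turns the working member through another. Tooth counts are
# invented for this example.

def jaw_angle_deg(motor_turns: float,
                  drive_teeth: int = 20, tube_teeth: int = 40,
                  tube_bevel_teeth: int = 12, jaw_teeth: int = 24) -> float:
    """Jaw rotation produced by a given number of motor turns, through
    the drive-to-tube and tube-to-jaw gear stages."""
    tube_turns = motor_turns * drive_teeth / tube_teeth    # second mechanism
    jaw_turns = tube_turns * tube_bevel_teeth / jaw_teeth  # first mechanism
    return jaw_turns * 360.0

print(jaw_angle_deg(1.0))   # one motor turn -> 90 degrees of jaw motion
```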
(5) The apparatus may further include a second elongated element having first and second ends. The first end of the second elongated element may be configured to engage the end-effector assembly. The second end of the second elongated element may be configured to engage the drive assembly.
(6) The second elongated element may be configured to adjust a roll of the end-effector assembly.
(7) The apparatus may further include a third motion transfer mechanism disposed at the first end of the second elongated element. The third motion transfer mechanism may be configured to transfer a rotational motion of the second elongated element to a second motion of the end-effector assembly. The apparatus may also include a fourth motion transfer mechanism disposed at the second end of the second elongated element. The fourth motion transfer mechanism may be configured to transfer a second motion of the drive assembly to the rotational motion of the second elongated element.
(8) The first elongated element may include a first tube.
(9) The second elongated element may include a second tube.
(10) The first elongated element may be nested within the second tube.
(11) The first elongated element may be configured to rotate independently from the second tube.
(12) The first motion transfer mechanism of the first elongated element may include a plurality of teeth.
(13) The plurality of teeth of the first elongated element may be configured to mate with a first plurality of teeth of the end-effector assembly.
(14) The third motion transfer mechanism of the second elongated element may include a plurality of teeth.
(15) The plurality of teeth of the second elongated element may be configured to mate with a second plurality of teeth of the end-effector assembly.
(16) The first elongated element may include a flexible portion.
(17) The first elongated element may include stainless steel.
(18) The flexible portion of the first elongated element may be laser cut to increase flexibility.
(19) The second elongated element may include a flexible portion.
(20) The second elongated element may include stainless steel.
(21) The flexible portion of the second elongated element may be laser cut to increase flexibility.
(22) The apparatus may further include a third elongated element having first and second ends. The first end of the third elongated element may be configured to engage the end effector assembly. The second end of the third elongated element may be configured to engage the drive assembly.
(23) The third elongated element may be configured to adjust a roll of the end-effector assembly.
(24) The third elongated element may include a third tube.
(25) The first and second elongated elements may be nested within the third tube.
(26) The first elongated element may be configured to rotate independently from the third tube.
(27) The apparatus may be configured to provide a coarse motion proximate to the end-effector assembly.
(28) The apparatus may further include a plurality of cables to control the coarse motion.
(29) The apparatus may further include a rigid outer cover.
(30) The rigid outer cover may be fixed.
(31) The plurality of cables may be disposed between the rigid outer cover and the first elongated element.
(32) The apparatus may further include an electrical wire extending through the first tube.
(33) At least one elongated element may be electrically conductive.
(34) In accordance with another aspect of the invention, there is provided an end-effector assembly. The assembly includes a first working member configured to engage a first elongated element. Furthermore, the assembly includes a motion transfer mechanism disposed on the first working member. The motion transfer mechanism is configured to transfer a rotational motion of the first elongated element to a motion of the first working member.
(35) The assembly may further include a connector. The connector may be configured to connect to a second elongated element. The second elongated element may provide a rotational motion to adjust a roll of the end-effector assembly.
(36) The assembly may further include a second working member configured to engage a second elongated element. In addition, the assembly may include a motion transfer mechanism disposed on the second working member. The motion transfer mechanism may be configured to transfer a rotational motion of the second elongated element to a motion of the second working member.
(37) The motion transfer mechanism of the first working member may include a plurality of teeth.
(38) The plurality of teeth of the first working member may be configured to mate with a plurality of teeth of the first elongated element.
(39) The motion transfer mechanism of the second working member may include a plurality of teeth.
(40) The plurality of teeth of the second working member may be configured to mate with a plurality of teeth of the second elongated element.
(41) The first working member may include a first jaw.
(42) The motion of the first working member may include opening and closing the first jaw.
(43) The second working member may include a second jaw.
(44) The motion of the second working member may include opening and closing the second jaw.
(45) In accordance with another aspect of the invention, there is provided a drive assembly configured to connect to a rotatable elongated element. The drive assembly includes a drive mechanism configured to engage the rotatable elongated element. Furthermore, the drive assembly includes a motion transfer mechanism disposed on the drive mechanism. The motion transfer mechanism is configured to transfer a motion of the drive mechanism to a rotational motion of the rotatable elongated element.
(46) The motion transfer mechanism may include a plurality of teeth.
(47) The plurality of teeth may be configured to mate with a plurality of teeth of the rotatable elongated element.
(48) The drive mechanism may include an electric motor.
(49) In accordance with another aspect of the invention, there is provided a robotic instrument having first and second ends. The robotic instrument includes an end-effector assembly disposed at the first end of the robotic instrument, the end-effector assembly comprising a first working member. Furthermore, the robotic instrument includes a drive assembly disposed at the second end of the robotic instrument. In addition, the robotic instrument includes a first elongated element having a first end and a second end, the first end of the first elongated element engaged with the end-effector assembly and the second end of the first elongated element engaged with a drive assembly such that rotation of the first elongated element causes the first working member of the end-effector assembly to move.
(50) The robotic instrument may further include a second elongated element having first and second ends. The first end of the second elongated element may be engaged with the end-effector assembly. The second end of the second elongated element may be engaged with the drive assembly.
(51) Rotation of the second elongated element may adjust a roll of the end-effector assembly.
(52) The end-effector assembly may further include a second working member. Rotation of the second elongated element may cause the second working member of the end-effector assembly to move.
(53) The first elongated element may include a first tube.
(54) The second elongated element may include a second tube.
(55) The first elongated element may be nested within the second tube.
(56) The first elongated element may be connected to the end-effector assembly with a gear mechanism.
(57) The second elongated element may be connected to the end-effector assembly with a gear mechanism.
(58) The first elongated element may include a flexible portion.
(59) The first elongated element may include stainless steel.
(60) The flexible portion of the first elongated element may be laser cut to increase flexibility.
(61) The second elongated element may include a flexible portion.
(62) The second elongated element may include stainless steel.
(63) The flexible portion of the second elongated element may be laser cut to increase flexibility.
(64) The robotic instrument may further include a third elongated element having first and second ends. The first end of the third elongated element may be configured to engage the end-effector assembly. The second end of the third elongated element may be configured to engage the drive assembly.
(65) The third elongated element may be configured to adjust a roll of the end-effector assembly.
(66) The third elongated element may include a third tube.
(67) The first and second elongated elements may be nested within the third tube.
(68) The robotic instrument may be configured to provide a coarse motion proximate to the end-effector assembly.
(69) The robotic instrument may further include a plurality of cables to control the coarse motion.
(70) The robotic instrument may further include a rigid outer cover.
(71) The rigid outer cover may be fixed.
(72) The plurality of cables may be disposed between the rigid outer cover and the first elongated element.
(73) The robotic instrument may further include an electrical wire extending through the first tube.
(74) At least one elongated element may be electrically conductive.
(75) The robotic instrument may further include a fixed outer cover.
(76) In accordance with an aspect of the invention, there is provided a method for controlling an end-effector assembly at the end of a robotic instrument. The method involves rotating a first elongated element using a drive assembly, wherein the first elongated element is engaged with the drive assembly. The method further involves transferring a rotational motion of the first elongated element to move a first working member of the end-effector assembly.
(77) The method may further involve rotating a second elongated element using the drive assembly. The second elongated element may be engaged with the drive assembly.
(78) Rotating the second elongated element may adjust a roll of the end-effector assembly.
(79) Rotating the second elongated element may move a second working member of the end-effector assembly.
(80) Rotating a first elongated element may involve rotating a first tube.
(81) Rotating a second elongated element may involve rotating a second tube.
(82) The first elongated element may be nested within the second tube.
(83) The method may further involve flexing a flexible portion of the first elongated element.
(84) The first elongated element may include stainless steel.
(85) The flexible portion of the first elongated element may be laser cut to increase flexibility.
(86) The method may further involve flexing a flexible portion of the second elongated element.
(87) The second elongated element may include stainless steel.
(88) The flexible portion of the second elongated element may be laser cut to increase flexibility.
(89) The method may further involve rotating a third elongated element using the drive assembly. The third elongated element may be engaged with the drive assembly and wherein rotating the third elongated element adjusts a roll of the end-effector assembly.
(90) The third elongated element may include a third tube.
(91) The first and second elongated elements may be nested within the third tube.
(92) The method may further involve controlling a coarse motion of the first end of the first elongated element.
(93) Controlling may involve applying tension to a plurality of cables.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Reference will now be made, by way of example only, to the accompanying drawings in which:
(2) FIG. 1 is a perspective view of an operating theater according to an embodiment;
(3) FIG. 2 is a perspective view of a robotic instrument in accordance with an embodiment;
(4) FIG. 3 is another perspective view of the robotic instrument with the working member in an open position in accordance with the embodiment of FIG. 2;
(5) FIG. 4 is a perspective view of a drive assembly of the robotic instrument in accordance with the embodiment of FIG. 2;
(6) FIG. 5 is a perspective view of a robotic instrument in accordance with another embodiment;
(7) FIG. 6 is another perspective view of the robotic instrument in accordance with the embodiment of FIG. 5 with a cutaway portion;
(8) FIG. 7 is a cross sectional view of a robotic instrument in accordance with the embodiment of FIG. 5 through the line A-A;
(9) FIG. 8 is a perspective view of a drive assembly of the robotic instrument in accordance with the embodiment of FIG. 5;
(10) FIG. 9 is a view showing a movement of the robotic instrument of FIG. 5;
(11) FIG. 10 is a perspective view of a robotic instrument in accordance with another embodiment;
(12) FIG. 11 is another perspective view of the robotic instrument in accordance with the embodiment of FIG. 10;
(13) FIG. 12 is a perspective view of a robotic instrument in accordance with another embodiment;
(14) FIG. 13 is another perspective view of the robotic instrument in accordance with the embodiment of FIG. 12 with a cutaway portion;
(15) FIG. 14 is a cross sectional view of a robotic instrument in accordance with the embodiment of FIG. 12 through the line B-B;
(16) FIG. 15 is a perspective view of a drive assembly of the robotic instrument in accordance with the embodiment of FIG. 12;
(17) FIG. 16 is a view showing a movement of the robotic instrument of FIG. 12;
(18) FIG. 17 is a perspective view of a robotic instrument in accordance with another embodiment;
(19) FIG. 18 is another perspective view of the robotic instrument in accordance with the embodiment of FIG. 17 with a cutaway portion;
(20) FIG. 19 is a perspective view of a robotic instrument in accordance with another embodiment;
(21) FIG. 20 is a perspective view showing a movement of a robotic instrument in accordance with another embodiment;
(22) FIG. 21 is a perspective view showing another movement of a robotic instrument in accordance with the embodiment of FIG. 20;
(23) FIG. 22 is a perspective view showing a movement of a robotic instrument in accordance with another embodiment;
(24) FIG. 23 is a perspective view showing another movement of a robotic instrument in accordance with the embodiment of FIG. 22;
(25) FIG. 24 is a perspective view of a robotic instrument in accordance with another embodiment; and
(26) FIG. 25 is a perspective view of a portion of a robotic instrument in accordance with another embodiment.
DETAILED DESCRIPTION OF THE EMBODIMENTS
(27) Referring to FIG. 1, a schematic representation of an operating theater for Minimally Invasive Surgery (MIS) is shown at 100. It is to be understood that the operating theater 100 is purely exemplary and it will be apparent to those skilled in the art that a variety of operating theaters are contemplated. The operating theater 100 includes a surgical table 104 and a surgical system 108. The surgical table 104 includes a surface 112 supported by a base 116. It is to be understood that the surgical table 104 is not particularly limited to any particular structural configuration. A patient P rests on the surface 112. The surgical system 108 includes a base unit 120, an input device 124, a robotic arm 128, and at least one robotic instrument 132 with an end-effector assembly 136.
(28) In a present embodiment, the base unit 120 is generally configured to support and control the robotic arm 128 in response to input control signals from input device 124 under the control of a surgeon or other medical professional. In terms of providing physical support, the base unit 120 is mechanically structured to support the robotic arm 128, the robotic instrument 132, and their associated movements. For example, the base unit 120 can be bolted to a fixed structure such as a wall, floor, or ceiling. Alternatively, the base unit 120 can have a mass and a geometry such that when base unit 120 is free-standing, it will support the robotic arm 128. In some embodiments, the base unit 120 can include a moveable cart to provide easy movement of the base unit 120 around the operating theater 100. In terms of providing control, the base unit 120 can include mechanical controls (not shown), or electrical controls (not shown), or both. For example, mechanical controls can include gears, cables or other motion transfer mechanisms (not shown) connected to a motor. Other mechanical controls can also involve hydraulics. Alternatively, in embodiments where a motor is disposed in the robotic arm 128 or the robotic instrument 132, the base unit 120 can supply only electrical control signals to operate the motors in the robotic arm 128 or the robotic instrument 132.
(29) Referring again to FIG. 1, the robotic arm 128 is generally configured to support the robotic instrument 132. In terms of providing physical support, the robotic arm 128 is mechanically structured to support the robotic instrument 132, and its associated movement. For example, the robotic arm 128 is constructed such that it is rigid enough to be suspended above the patient P. In addition, the robotic arm 128 can be configured so that robotic instrument 132 is positionable in relation to the base unit 120 and surface 112. For example, the robotic arm 128 can include a moveable joint (not shown) for providing a pivotal degree of freedom. In another example, the robotic arm 128 can include a rail system (not shown) for linear movement of the robotic instrument 132. It will now be understood that the movement of the robotic arm 128 is controlled by the base unit 120 through various controls described above.
(30) In general terms, the robotic instrument 132 and its end-effector assembly 136 are generally configured for performing MIS responsive to inputs from the input device 124 mediated by the base unit 120 and the robotic arm 128. However, it is to be re-emphasized that the structure shown in FIG. 1 is a schematic, non-limiting representation only. For example, although only one robotic arm 128 is shown in FIG. 1, it is to be understood that the surgical system 108 can be modified to include a plurality of robotic arms 128, each robotic arm 128 having its own separate robotic instrument 132 and separate end-effector assembly 136. Furthermore, it is also to be understood that where the surgical system 108 includes a plurality of robotic arms 128 with robotic instruments 132, each robotic arm 128 or robotic instrument 132 can have different structures. Indeed, a plurality of different configurations of robotic instrument 132 are contemplated herein.
(31) In use, the robotic instrument 132 is configured to provide the end-effector assembly 136 with at least one degree of freedom. A degree of freedom refers to an ability of an end effector assembly 136 to move according to a specific motion. For example, a degree of freedom can include a rotation of the end-effector assembly 136 or a component thereof about a single axis. Therefore, for each axis of rotation, the end-effector assembly 136 is said to have a unique degree of freedom. Another example of a degree of freedom can include a translational movement along a path. It will now be apparent that each additional degree of freedom increases the versatility of the end-effector assembly 136. By providing more degrees of freedom, it will be possible to position the end-effector assembly 136 in a wider variety of positions or locations to, for example, reach around obstacles.
(32) Referring to FIG. 2, an embodiment of the robotic instrument 132 is shown in greater detail. It is to be understood that the robotic instrument 132 is purely exemplary and it will be apparent to those skilled in the art that a variety of robotic instruments are contemplated including other embodiments discussed in greater detail below. The robotic instrument 132 includes an end-effector assembly 136, an elongated element 140 and a drive assembly 144.
(33) In the present embodiment, the end-effector assembly 136 is shown in FIG. 3. The end-effector assembly 136 is generally configured to interact with the patient P during MIS. The end-effector assembly 136 includes two working members 148 and 152. The end-effector assembly 136 also includes a motion transfer mechanism. In the present embodiment, the transfer mechanism is a gear 156 having a plurality of teeth. In particular, the gear 156 of the present embodiment is a bevel gear. However, other embodiments may use other types of gears. It is to be understood that the end-effector assembly 136, including the working members 148 and 152, is not particularly limited to any material and that several different types of materials are contemplated. The end-effector assembly 136 is typically constructed from materials which can withstand the harsh conditions of a sterilization process carried out prior to an actual surgery. Some examples of suitable materials include stainless steel, such as surgical stainless steel, titanium, plastics, composites and other materials commonly used in surgical instruments. The exact configuration of working members 148 and 152 is not particularly limited. In the present embodiment shown in FIGS. 2-4, the working members 148 and 152 are the jaws of forceps. In other embodiments, the working members can be other surgical instruments such as scissors, blades, graspers, clip appliers, staplers, retractors, clamps or bipolar cauterizers or combinations thereof. Also, in other embodiments the end-effector assembly may include a single working member, such as imaging equipment (for example, a camera or light source), or surgical instruments such as scalpels, hooks, needles, catheters, spatulas or mono-polar cauterizers.
(34) Referring again to FIG. 2, the elongated element 140 extends between the end effector assembly 136 and the drive assembly 144. The elongated element 140 is generally configured to support and control the end-effector assembly 136. It is to be understood that the elongated element 140 is not particularly limited to any material and that several different types of surgical-grade materials are contemplated. Examples of surgical grade materials include surgical stainless steel, titanium, plastics, composites and other materials commonly used in surgery, which in general can withstand sterilization. The elongated element 140 includes two motion transfer mechanisms. In the present embodiment, the motion transfer mechanisms include first and second gears 160 and 164 (FIG. 4) each having a plurality of teeth and disposed at opposite ends of the elongated element 140. The first gear 160 is configured to mate with the gear 156 of the end-effector assembly 136. In certain embodiments, the elongated element 140 is rigid, such that applying a rotational torque about an axis 168 at the second gear 164 will cause the elongated element 140 to rotate without significant deformation at the first gear 160. It will now be appreciated that the first gear 160 is configured to transfer rotational motion of the elongated element 140 to the gear 156 of the end-effector assembly 136 to move the working member 148.
(35) The drive assembly 144 of the present embodiment is shown in greater detail in FIG. 4. The drive assembly 144 includes a motion transfer mechanism. In the present embodiment, the transfer mechanism is a drive gear 172 having a plurality of teeth. The drive gear 172 is configured to mate with the second gear 164 of the elongated element 140. It will now be appreciated that the drive gear 172 is configured to transfer motion from the drive assembly 144 to a rotational motion of the elongated element 140 about the axis 168 by applying a rotational torque to the second gear 164 of the elongated element 140. The drive gear 172 can be driven by various means, such as via an electric motor (not shown), hydraulics, pneumatics, magnetic actuators or a piezoelectric motor. It will now be appreciated that the motion used to rotate the drive gear 172 does not need to be a rotational motion and can be any type of motion capable of applying a torque to rotate the drive gear 172.
(36) In operation, the present embodiment of the robotic instrument 132 controls the movement of the working member 148 of the end-effector assembly 136. A source of motion in the drive assembly rotates the drive gear 172. The drive gear 172 engages the second gear 164 of the elongated element 140. Therefore, as the drive gear 172 is rotated, engagement with the second gear 164 of the elongated element 140 will cause the elongated element to rotate about the axis 168. The rotation of the elongated element 140 will cause a corresponding rotation of the first gear 160. The first gear 160 engages the gear 156 of the end-effector assembly 136. Therefore, as the first gear 160 rotates, engagement with the gear 156 of the end-effector assembly 136 will cause the working member 148 to pivot about a first axis 176 to open and close the jaw. It will now be appreciated by a person skilled in the art with the benefit of this description and the accompanying drawings that the working member 152 can be fixed or can also be pivoted about the first axis 176. When the working member 152 is controlled by the elongated element 140, rotating the elongated element 140 can cause the working members 148 and 152 to open or close. For example, if the first gear 160 engages both working members 148 and 152 on opposite sides of the first gear 160, the first gear 160 can apply opposite torques to working members 148 and 152 about the first axis 176. By applying opposite torques, the working members 148 and 152 may be opened and closed by rotating the elongated element 140. It is to be understood that when both working members 148 and 152 are controlled by the elongated element 140, the working members 148 and 152 will close at the same position relative to the elongated element 140.
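The two-mesh transfer described above reduces to a product of gear ratios. The following sketch is illustrative only: the tooth counts are assumptions of ours (the specification gives none), and each mesh is treated as ideal.

# Minimal sketch of the gear train of FIGS. 2 to 4; tooth counts are
# illustrative assumptions, not values from the specification.

def jaw_angle(drive_angle_deg: float,
              drive_teeth: int = 20, second_gear_teeth: int = 40,
              first_gear_teeth: int = 20, effector_gear_teeth: int = 40) -> float:
    """Map a rotation of the drive gear 172 to a pivot of working member 148.

    The drive gear meshes with the second gear 164 (first ratio), the rigid
    elongated element 140 carries that rotation to the first gear 160, and
    the first gear meshes with the gear 156 of the end-effector (second ratio).
    """
    element_angle = drive_angle_deg * drive_teeth / second_gear_teeth
    return element_angle * first_gear_teeth / effector_gear_teeth

# Under these assumed ratios, a 90 degree turn of the drive gear pivots
# the jaw by 22.5 degrees.
print(jaw_angle(90.0))  # -> 22.5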
(37) Therefore, in embodiments of end-effector assemblies comprising at least one jaw, such as the present embodiment, the first motion is characterized by rotational motion within the same plane in which a jaw opens and closes.
(38) It will now be appreciated that the first rotational motion provides a degree of freedom which involves rotating the end-effector assembly 136 about a first axis 176. However, it will also be appreciated that the first axis 176 will be substantially perpendicular to the portion of the axis 168 nearest the first axis 176. In other words, the first axis 176 is not necessarily fixed with respect to the surface 112 or the surgical system 108.
(39) In general terms, the robotic instrument 132 is generally configured to transfer a motion from a source in the drive assembly 144 to control the working member 148 of the end-effector assembly 136. It is to be re-emphasized that the structure shown in FIGS. 2 to 4 is a non-limiting representation only. Notwithstanding the specific example, it is to be understood that other mechanically equivalent structures and motion transfer mechanisms can be devised to perform the same function as the robotic instrument 132. For example, other motion transfer mechanisms can include frictional engagement, belts, cables, or combinations thereof. Furthermore, although the motion of the drive gear 172 is a rotational motion, it is not necessary that this be a rotational motion as discussed above. Other types of motion, such as a linear motion, are also contemplated. Furthermore, in some embodiments, the drive gear 172 and the second gear 164 of the elongated element 140 may be omitted and the elongated element 140 may be directly driven by a motor.
(40) Referring to FIGS. 5 to 9, another embodiment of a robotic instrument 132a is shown. Like components of the robotic instrument 132a bear like reference to their counterparts in the robotic instrument 132, except followed by the suffix “a”. The robotic instrument 132a includes an end-effector assembly 136a, first and second elongated elements 140a and 180a respectively, and a drive assembly 144a.
(41) In the present embodiment, the end-effector assembly 136a is shown in greater detail in FIG. 6. The end-effector assembly 136a is generally configured to interact with the patient P during MIS. The end-effector assembly 136a includes two working members 148a and 152a. The end-effector assembly 136a also includes two motion transfer mechanisms. In the present embodiment, the transfer mechanisms are first and second gears 156a and 184a each having a plurality of teeth. It is to be understood that the end-effector assembly 136a, including the working members 148a and 152a, is not particularly limited to any material and that several different types of materials are contemplated such as those contemplated for the end-effector assembly 136. The exact configuration of working members 148a and 152a is not particularly limited. In the present embodiment shown in FIGS. 5 to 9, the working members 148a and 152a are jaws of forceps. In other embodiments, the working members can be other surgical instruments such as scissors, blades, graspers, clip appliers, staplers, retractors, clamps or bi-polar cauterizers or combinations thereof. Also, in other embodiments the end-effector assembly may include a single working member, such as imaging equipment (for example, a camera or light source), or surgical instruments such as scalpels, hooks, needles, catheters, spatulas or mono-polar cauterizers.
(42) Referring to FIG. 5, the first and second elongated elements 140a and 180a extend between the end-effector assembly 136a and the drive assembly 144a. The first and second elongated elements 140a and 180a are generally configured to support and control the end-effector assembly 136a. It is to be understood that the first and second elongated elements 140a and 180a are not particularly limited to any one type of material and that several different types of surgical-grade materials are contemplated such as those contemplated for the elongated element 140. The first and second elongated elements 140a and 180a each include two motion transfer mechanisms. In the present embodiment, the motion transfer mechanisms of the first elongated element 140a include first and second gears 160a and 164a each having a plurality of teeth and disposed at opposite ends of the elongated element 140a. The first gear 160a is configured to mate with the first gear 156a of the end-effector assembly 136a. The motion transfer mechanisms of the second elongated element 180a include first and second gears 188a and 192a each having a plurality of teeth and each disposed at opposite ends of the second elongated element 180a. The first gear 188a is configured to mate with the second gear 184a of the end-effector assembly 136a. In certain embodiments, the first and second elongated elements 140a and 180a are each rigid, such that independently applying a rotational torque about an axis 168a at the second gears 164a and 192a will cause the first and second elongated elements 140a and 180a, respectively, to rotate independently from each other without significant deformation. It will now be appreciated that the first gears 160a and 188a are configured to transfer rotational motion of the first and second elongated elements 140a and 180a to the first and second gears 156a and 184a of the end-effector assembly 136a to move, independently, the working members 148a and 152a, respectively.
(43) Referring to FIG. 6, the first gears 160a and 188a of the present embodiment are sector gears. Using sector gears in the present embodiment permits both of the first gears 160a and 188a to rotate within a range of angles in the same plane to independently control the working members 148a and 152a. It is to be understood that the first gears 160a and 188a are not limited to sector gears and that other embodiments are contemplated. For example, the first gears 160a and 188a can be modified to be other types of gears such as nested circular gear racks.
(44) Referring to FIG. 8, the drive assembly 144a of the present embodiment is shown in greater detail. The drive assembly 144a includes two motion transfer mechanisms. In the present embodiment, the transfer mechanisms are first and second drive gears 172a and 196a, each having a plurality of teeth. The first and second drive gears 172a and 196a are configured to mate with the gears 164a and 192a respectively. It will now be appreciated that the first and second drive gears 172a and 196a are configured to transfer, independently, motion from the drive assembly 144a to a rotational motion of the first and second elongated elements 140a and 180a about the axis 168a, respectively, by applying a rotational torque to the second gears 164a and 192a, respectively. The first and second drive gears 172a and 196a can be driven, independently, by various means, such as those discussed above in connection with drive assembly 144.
(45) In operation, the present embodiment of the robotic instrument 132a controls the movement of the working members 148a and 152a of the end-effector assembly 136a. A source of motion in the drive assembly rotates the first and second drive gears 172a and 196a. The first and second drive gears 172a and 196a engage the second gears 164a and 192a of the elongated elements 140a and 180a, respectively. Therefore, as the drive gear 172a is rotated, engagement with the second gear 164a of the first elongated element 140a will cause the first elongated element to rotate about the axis 168a. The rotation of the first elongated element 140a will cause a corresponding rotation of the first gear 160a. The first gear 160a engages the first gear 156a of the end-effector assembly 136a. Therefore, as the first gear 160a rotates, engagement with the first gear 156a of the end-effector assembly 136a will cause the working member 148a to pivot about a first axis 176a. Similarly, as the drive gear 196a is rotated, engagement with the second gear 192a of the second elongated element 180a will cause the second elongated element to rotate about the axis 168a. The rotation of the second elongated element 180a will cause a corresponding rotation of the first gear 188a. The first gear 188a engages the second gear 184a of the end-effector assembly 136a. Therefore, as the first gear 188a rotates, engagement with the second gear 184a of the end-effector assembly 136a will cause the working member 152a to pivot about the first axis 176a.
(46) It will now be appreciated by a person skilled in the art with the benefit of this description and the accompanying drawings that, in the present embodiment, the working members 148a and 152a can be pivoted about the first axis 176a independently to open and close the jaw.
(47) It will now be appreciated that the independent control of the working members 148a and 152a provides an additional degree of freedom over the robotic instrument 132 which involves rotating the working members 148a and 152a about the first axis 176a as shown in FIG. 9. Therefore, the independent control of the working members 148a and 152a allows the working members to open and close over a range of angles about the first axis 176a, whereas the working members 148 and 152 were only able to open and close at a fixed angle.
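The added degree of freedom amounts to a change of control variables: instead of two raw jaw angles, the instrument can be commanded by a mean orientation and an opening. A minimal sketch, with variable names of our own choosing:

# Sketch of the extra degree of freedom of paragraph (47): independently
# driven jaws can open by a given amount about any mean angle.

def jaw_targets(mean_angle_deg: float, opening_deg: float) -> tuple[float, float]:
    """Return pivot angles for working members 148a and 152a about axis 176a.

    mean_angle_deg: orientation of the jaw pair as a whole.
    opening_deg:    included angle between the two jaws.
    """
    half = opening_deg / 2.0
    return mean_angle_deg + half, mean_angle_deg - half

# Open the jaws 30 degrees while the pair points 20 degrees off-centre,
# a pose the single-element instrument 132 cannot reach.
print(jaw_targets(20.0, 30.0))  # -> (35.0, 5.0)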
(48) Variations are contemplated. For example, although the present embodiment shows the first and second elongated elements 140a and 180a as nested tubes, it is to be understood that the embodiment is purely exemplary and it will be apparent to those skilled in the art that a variety of different configurations of the first and second elongated elements 140a and 180a are contemplated. For example, the first elongated element 140a can be modified such that it is not a hollow tube. Furthermore, it is also contemplated that the second elongated element 180a can be modified into a solid rod in some embodiments. In other embodiments, the first and second elongated elements 140a and 180a, respectively, can be modified such that they are not nested and instead are parallel and adjacent.
(49) Referring to FIGS. 10 and 11, another embodiment of a robotic instrument 132b is shown. Like components of the robotic instrument 132b bear like reference to their counterparts in the robotic instruments 132 and 132a, except followed by the suffix “b”. The robotic instrument 132b includes an end-effector assembly 136b, first and second elongated elements 140b and 180b respectively, and a drive assembly (not shown).
(50) The end-effector assembly 136b is generally configured to interact with the patient P during MIS. The end-effector assembly 136b includes two working members 148b and 152b. The end-effector assembly 136b also includes two motion transfer mechanisms. In the present embodiment, the motion transfer mechanisms of the end-effector assembly 136b include a first rotating element 157b and a second rotating element 185b. A first gear 156b and a second gear 184b, each gear having a plurality of teeth, are disposed on the first and second rotating elements 157b and 185b, respectively. In the present embodiment, the motion transfer mechanisms further include a first end-effector cable 158b and a second end-effector cable 186b which are engaged with the first and second rotating elements 157b and 185b (coupled to the first and second gears 156b and 184b, respectively) and the first and second working members 148b and 152b as shown in FIG. 11. In the present embodiment, the first and second end-effector cables 158b and 186b are cables suitable for surgical applications. In other embodiments, the first and second end-effector cables 158b and 186b can be modified to be a belt or chain. The end-effector assembly 136b also includes a set screw 149b configured to adjust the tension of the first and second end-effector cables 158b and 186b. Therefore, as the first and second end-effector cables 158b and 186b wear and stretch over time, the set screw can be adjusted to maintain the required tension in the motion transfer mechanisms.
(51) Referring to FIG. 10, the first and second elongated elements 140b and 180b extend between the end-effector assembly 136b and the drive assembly (not shown). The first and second elongated elements 140b and 180b are generally configured to support and control the end-effector assembly 136b. The first and second elongated elements 140b and 180b each include a motion transfer mechanism. In the present embodiment, the motion transfer mechanism of the first elongated element 140b includes a gear 160b having a plurality of teeth disposed thereon. The gear 160b is configured to mate with the first gear 156b on the first rotatable element 157b of the end-effector assembly 136b. The motion transfer mechanism of the second elongated element 180b includes a gear 188b having a plurality of teeth disposed thereon. The gear 188b is configured to mate with the second gear 184b on the second rotatable element 185b of the end-effector assembly 136b. It will now be appreciated that the gears 160b and 188b are configured to transfer rotational motion of the first and second elongated elements 140b and 180b to the first and second gears 156b and 184b to move, independently, the first and second rotatable elements 157b and 185b, respectively. In turn, the first and second rotatable elements 157b and 185b apply tension to the first and second end-effector cables 158b and 186b to move the working members 148b and 152b, respectively.
(52) It will now be appreciated by a person skilled in the art with the benefit of this description and the accompanying drawings that, in the present embodiment, the working members 148b and 152b can be pivoted about the first axis 176b (shown in FIG. 11) independently to open and close the jaw. Furthermore, it will also now be appreciated that by using first and second rotatable elements 157b and 185b in combination with the first and second end-effector cables 158b and 186b, the range of motion of the first and second working members 148b and 152b is increased compared with the embodiment shown in FIG. 6 where the gears 160a and 188a are sector gears.
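The wider range of motion follows from the pulley ratio between each rotating element and its jaw. A minimal sketch, assuming illustrative pulley radii that do not appear in the specification:

import math

# Sketch of the cable transfer of paragraphs (50) to (52): each rotating
# element winds an end-effector cable that pivots its jaw, so jaw travel is
# set by the ratio of the two pulley radii. Radii are assumptions of ours.

def jaw_pivot_deg(element_turn_deg: float,
                  element_pulley_r_m: float = 0.004,
                  jaw_pulley_r_m: float = 0.002) -> float:
    """Jaw pivot produced by turning rotating element 157b through element_turn_deg."""
    cable_travel = element_pulley_r_m * math.radians(element_turn_deg)
    return math.degrees(cable_travel / jaw_pulley_r_m)

# A half turn of the rotating element swings the jaw a full turn here,
# illustrating the gain in range over a sector gear of limited arc.
print(jaw_pivot_deg(180.0))  # -> 360.0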
(53) Referring to FIGS. 12 to 16, another embodiment of a robotic instrument 132c is shown. Like components of the robotic instrument 132c bear like reference to their counterparts in the robotic instruments 132 and 132a, except followed by the suffix “c”. The robotic instrument 132c includes an end-effector assembly 136c, first, second, and third elongated elements 140c, 180c, and 200c respectively, and a drive assembly 144c.
(54) In the present embodiment, the end-effector assembly 136c is shown in greater detail in FIG. 13. The end-effector assembly 136c is generally configured to interact with the patient P during MIS. The end-effector assembly 136c includes two working members 148c and 152c. The end-effector assembly 136c also includes two motion transfer mechanisms. In the present embodiment, the transfer mechanisms are first and second gears 156c and 184c each having a plurality of teeth. It is to be understood that the end-effector assembly 136c, including the working members 148c and 152c, is not particularly limited to any material and that several different types of materials are contemplated such as those contemplated for the end-effector assemblies 136 and 136a. The exact configuration of working members 148c and 152c is not particularly limited. In the present embodiment shown in FIGS. 12 to 16, the working members 148c and 152c are jaws of forceps.
(55) Referring to FIGS. 12 and 13, the first, second, and third elongated elements 140c, 180c, and 200c extend between the end-effector assembly 136c and the drive assembly 144c. The first, second, and third elongated elements 140c, 180c, and 200c are generally configured to support and control the end-effector assembly 136c. It is to be understood that the first, second, and third elongated elements 140c, 180c, and 200c are not particularly limited to any one type of material and that several different types of surgical-grade materials are contemplated such as those contemplated for the elongated elements 140, 140a and 180a. The first and second elongated elements 140c and 180c each include two motion transfer mechanisms. In the present embodiment, the motion transfer mechanisms of the first elongated element 140c include first and second gears 160c and 164c each having a plurality of teeth and disposed at opposite ends of the elongated element 140c. The first gear 160c is configured to mate with the first gear 156c of the end-effector assembly 136c. The motion transfer mechanisms of the second elongated element 180c include first and second gears 188c and 192c each having a plurality of teeth and each disposed at opposite ends of the second elongated element 180c. The first gear 188c is configured to mate with the second gear 184c of the end-effector assembly 136c. The third elongated element 200c includes a motion transfer mechanism. In the present embodiment, the motion transfer mechanism of the third elongated element 200c is a gear 204c disposed at the end of the third elongated element 200c proximate to the drive assembly 144c. The opposite end 208c of the third elongated element 200c is connected to the end-effector assembly 136c.
(56) In certain embodiments, the first, second, and third elongated elements 140c, 180c, and 200c are each rigid, such that independently applying a rotational torque about an axis 168c at the gears 164c, 192c, and 204c will cause the first, second, and third elongated elements 140c, 180c, and 200c, respectively, to rotate independently from each other without significant deformation. It will now be appreciated that the first gears 160c and 188c of the first and second elongated elements 140c and 180c are configured to transfer rotational motion of the first and second elongated elements to the first and second gears 156c and 184c of the end-effector assembly 136c to move, independently, the working members 148c and 152c, respectively.
(57) Referring to FIG. 15, the drive assembly 144c of the present embodiment is shown in greater detail. The drive assembly 144c includes three motion transfer mechanisms. In the present embodiment, the transfer mechanisms are first, second, and third drive gears 172c, 196c, and 212c each having a plurality of teeth. The first, second, and third drive gears 172c, 196c, and 212c are configured to mate with the gears 164c, 192c, and 204c respectively. It will now be appreciated that the first, second, and third drive gears 172c, 196c, and 212c are configured to transfer, independently, motion from the drive assembly 144c to a rotational motion of the first, second, and third elongated elements 140c, 180c, and 200c about the axis 168c, respectively, by applying a rotational torque to the gears 164c, 192c, and 204c, respectively. The first, second, and third drive gears 172c, 196c, and 212c can be driven, independently, by various means, such as those discussed above in connection with drive assemblies 144 and 144a.
(58) In operation, the present embodiment of the robotic instrument 132c controls the movement of the end-effector assembly 136c, which includes the movements of the working members 148c and 152c. A source of motion in the drive assembly 144c rotates the first, second, and third drive gears 172c, 196c, and 212c. The first, second, and third drive gears 172c, 196c, and 212c engage the gears 164c, 192c, and 204c of the first, second and third elongated elements 140c, 180c, and 200c respectively. Therefore, as the drive gear 172c is rotated, engagement with the second gear 164c of the first elongated element 140c will cause the first elongated element to rotate about the axis 168c. The rotation of the first elongated element 140c will cause a corresponding rotation of the first gear 160c. The first gear 160c engages the first gear 156c of the end-effector assembly 136c. Therefore, as the first gear 160c rotates, engagement with the first gear 156c of the end-effector assembly 136c will cause the working member 148c to pivot about a first axis 176c. Similarly, as the drive gear 196c is rotated, engagement with the second gear 192c of the second elongated element 180c will cause the second elongated element to rotate about the axis 168c. The rotation of the second elongated element 180c will cause a corresponding rotation of the first gear 188c. The first gear 188c engages the second gear 184c of the end-effector assembly 136c. Therefore, as the first gear 188c rotates, engagement with the second gear 184c of the end-effector assembly 136c will cause the working member 152c to pivot about the first axis 176c. As the drive gear 212c is rotated, engagement with the gear 204c of the third elongated element 200c will cause the third elongated element to rotate about the axis 168c. The rotation of the third elongated element 200c will cause a corresponding rotation of the end 208c. Since the end 208c is connected to the end-effector assembly 136c, rotation of the third elongated element 200c will cause the end-effector assembly 136c to rotate about the axis 168c. It will now be appreciated by a person skilled in the art with the benefit of this description and the accompanying drawings that, in the present embodiment, the working members 148c and 152c can be pivoted about the first axis 176c independently to open and close the jaw. It will also now be appreciated that since the end-effector assembly 136c can be rotated about the axis 168c, the first axis 176c is not necessarily fixed and can be rotated as well.
(59) Referring to FIG. 16, it will now be appreciated that the independent rotation of the end-effector assembly 136c provides an additional degree of freedom over the robotic instrument 132a which involves rotating the working members 148c and 152c about the axis 168c as shown in FIG. 16, where the axis 168c is shown in FIGS. 12, 14, and 15. Therefore, independent control of the third elongated element 200c allows the working members 148c and 152c to reach over all angles about the axis 168c. This specific degree of freedom is referred to as a roll motion. It is to be understood that variations are contemplated whereby the axis 168c is not straight, such as the embodiment generally shown in FIG. 24, which will be discussed later as an embodiment where the elongated element can be bent.
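Because the roll carries the first axis 176c with it, the direction of the jaw pivot axis can be tracked as a function of the roll angle. A minimal sketch, assuming (for illustration only) that the axis 168c is the z axis of some fixed frame and that the first axis 176c starts along +x; the specification fixes no coordinate frame:

import math

# Sketch of the roll degree of freedom of paragraph (59): rotating the third
# elongated element 200c spins the end-effector about axis 168c, carrying
# the jaw pivot axis 176c with it.

def pivot_axis_after_roll(roll_deg: float) -> tuple[float, float, float]:
    """Direction of the first axis 176c after a roll, starting from +x."""
    r = math.radians(roll_deg)
    return (math.cos(r), math.sin(r), 0.0)

print(pivot_axis_after_roll(0.0))   # -> (1.0, 0.0, 0.0)
print(pivot_axis_after_roll(90.0))  # -> (~0.0, 1.0, 0.0)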
(60) Variations are contemplated. For example, although the present embodiment shows the first, second, and third elongated elements 140c, 180c, and 200c as nested tubes, it is to be understood that the embodiment is purely exemplary and it will be apparent to those skilled in the art that a variety of different configurations of the first, second, and third elongated elements 140c, 180c, and 200c are contemplated. In other embodiments, the first, second, and third elongated elements 140c, 180c, and 200c, respectively, can be modified such that they are not nested and instead are parallel and adjacent.
(61) Referring to FIGS. 17 and 18, another embodiment of a robotic instrument 132d is shown. Like components of the robotic instrument 132d bear like reference to their counterparts in the robotic instruments 132, 132a, and 132c, except followed by the suffix “d”. The robotic instrument 132d includes an end-effector assembly 136d, first and second elongated elements 140d and 200d respectively, and a drive assembly 144d.
(62) In the present embodiment, the end-effector assembly 136d is shown in greater detail in FIG. 18. The end-effector assembly 136d is generally configured to interact with the patient P during MIS. The end-effector assembly 136d includes a working member 148d. The end-effector assembly 136d also includes a motion transfer mechanism. In the present embodiment, the transfer mechanism is a gear 156d having a plurality of teeth. It is to be understood that the end-effector assembly 136d, including the working member 148d, is not particularly limited to any material and that several different types of materials are contemplated such as those contemplated for the end-effector assemblies 136, 136a, and 136c. The exact configuration of the working member 148d is not particularly limited. In the present embodiment shown in FIGS. 17 and 18, the working member 148d is a surgical blade capable of cauterizing.
(63) Referring again to FIGS. 17 and 18, the first and second elongated elements 140d and 200d extend between the end-effector assembly 136d and the drive assembly 144d. The first and second elongated elements 140d and 200d are generally configured to support and control the end-effector assembly 136d. It is to be understood that the first and second elongated elements 140d and 200d are not particularly limited to any one type of material and that several different types of surgical-grade materials are contemplated. The first elongated element 140d includes two motion transfer mechanisms. In the present embodiment, the motion transfer mechanisms of the first elongated element 140d include first and second gears 160d and 164d each having a plurality of teeth and disposed at opposite ends of the elongated element 140d. The first gear 160d is configured to mate with the gear 156d of the end-effector assembly 136d. The second elongated element 200d includes a motion transfer mechanism. In the present embodiment, the motion transfer mechanism of the second elongated element 200d is a gear 204d disposed at the end of the second elongated element 200d proximate to the drive assembly 144d. The opposite end 208d of the second elongated element 200d is connected to the end-effector assembly 136d.
(64) In the present embodiment, the robotic instrument 132d additionally includes an electrical wire 216d extending through the first elongated element 140d to the working member 148d. The electrical wire 216d is generally configured to supply an electrical current to the working member 148d. The electrical current can be used to generate heat at the working member 148d to cauterize tissue when necessary. Although the present embodiment uses the electrical wire 216d, the robotic instrument can be modified to provide the same functionality without an electrical wire. For example, the first elongated element 140d can be made of stainless steel, which is electrically conductive. Therefore, the electrical conductivity of the first elongated element 140d can be used in place of the electrical wire 216d.
(65) In certain embodiments, the first and second elongated elements 140d and 200d are each rigid, such that independently applying a rotational torque about an axis 168d at the gears 164d and 204d will cause the first and second elongated elements 140d and 200d, respectively, to rotate independently from each other without significant deformation. It will now be appreciated that the first gear 160d of the first elongated element 140d is configured to transfer rotational motion of the first elongated element 140d to the gear 156d of the end-effector assembly 136d to move the working member 148d.
(66) Referring again to FIGS. 17 and 18, the drive assembly 144d includes two motion transfer mechanisms. In the present embodiment, the transfer mechanisms are first and second drive gears 172d and 212d, each having a plurality of teeth. The first and second drive gears 172d and 212d are configured to mate with the gears 164d and 204d respectively. It will now be appreciated that the first and second drive gears 172d and 212d are configured to transfer, independently, motion from the drive assembly 144d to a rotational motion of the first and second elongated elements 140d and 200d about the axis 168d, respectively, by applying a rotational torque to the gears 164d and 204d, respectively. The first and second drive gears 172d and 212d can be driven, independently, by various means, such as those discussed above in connection with drive assemblies 144, 144a, and 144c.
(67) In operation, the present embodiment of the robotic instrument 132d controls the movement of the end-effector assembly 136d, which includes the movement of the working member 148d. A source of motion in the drive assembly 144d rotates the first and second drive gears 172d and 212d. The first and second drive gears 172d and 212d engage the gears 164d and 204d of the first and second elongated elements 140d and 200d respectively. Therefore, as the drive gear 172d is rotated, engagement with the gear 164d of the first elongated element 140d will cause the first elongated element to rotate about the axis 168d. The rotation of the first elongated element 140d will cause a corresponding rotation of the first gear 160d. The gear 160d engages the gear 156d of the end-effector assembly 136d. Therefore, as the gear 160d rotates, engagement with the gear 156d of the end-effector assembly 136d will cause the working member 148d to pivot about a first axis 176d. Similarly, as the drive gear 212d is rotated, engagement with the gear 204d of the second elongated element 200d will cause the second elongated element to rotate about the axis 168d. The rotation of the second elongated element 200d will cause a corresponding rotation of the end 208d. Since the end 208d is connected to the end-effector assembly 136d, rotation of the second elongated element 200d will cause the end-effector assembly 136d to rotate about the axis 168d. It will now be appreciated by a person skilled in the art with the benefit of this description and the accompanying drawings that, in the present embodiment, the working member 148d can be pivoted about the first axis 176d independently from the rotation of the end-effector assembly 136d.
(68) Variations are contemplated. For example, although the present embodiment shows a single working member 148d, the robotic instrument 132d can be modified to include a different number of working members. For example, previous embodiments show variations including two working members. However, the number of working members is not limited to two, and a larger number of working members is contemplated.
(69) Referring to FIG. 19, another embodiment of a robotic instrument 132e is shown. Like components of the robotic instrument 132e bear like reference to their counterparts, except followed by the suffix “e”. The robotic instrument 132e includes an end-effector assembly 136e, first and second elongated elements 140e and 180e respectively, and a drive assembly (not shown).
(70) Each elongated element 140e and 180e includes a flexible portion disposed generally at 220e. The flexible portion allows for coarse motion of the elongated elements 140e and 180e, which provides even more degrees of freedom to the robotic instrument 132e. The flexible portion can be provided by using laser cutting techniques on the first and second elongated elements 140e and 180e. The first and second laser cut elongated elements 140e and 180e may be obtained from Pulse Systems (Concord, Calif., U.S.A.) using uncut stainless steel tubes from VitaNeedle (Needham, Mass., U.S.A.). By laser cutting a stainless steel tube, it has been found that the flexibility of the stainless steel tube dramatically increases without compromising the rotational rigidity. Therefore, the laser cut stainless steel tubes have been shown to work well for providing flexibility, while still being effective at transferring rotational motion from a drive assembly to the end-effector assembly 136e. Although the laser cutting is shown in FIG. 19 to have produced spiral scores 224e and 228e on the first and second elongated elements 140e and 180e, respectively, variations are contemplated. It will now be appreciated that different laser cut patterns can have different characteristics and that the cut pattern selected depends on various factors.
(71) It is also contemplated that other ways of providing a flexible portion can be used. For example, the composition of the elongated elements 140e and 180e can be varied such that a portion of each elongated element 140e and 180e is more flexible than other portions.
(72) FIGS. 20 and 21 provide views of another exemplary robotic instrument 132f and its associated end-effector assembly 136f. The robotic instrument 132f includes an end-effector assembly 136f and an elongated element 140f. The end-effector assembly 136f is configured for another degree of freedom. The rotational motion shown in FIG. 20 is a degree of freedom which involves rotating the end-effector assembly 136f about the second axis 232f. In some embodiments, the second axis 232f is perpendicular to the first axis 176f to provide the robotic instrument 132f with the greatest range of motion. However, it is not essential that the second axis 232f be perpendicular to the first axis 176f. For example, similar to some of the previously discussed embodiments, the first axis 176f can be rotatable relative to the second axis 232f.
(73) FIG. 21 shows another degree of freedom involving a longitudinal translation motion allowing the robotic instrument 132f to be translated along axis 168f. For example, this allows the robotic instrument 132f to enter and penetrate deeper into the body, or be retracted. Unlike the other degrees of freedom discussed, this translational degree of freedom is provided by a system on the robotic arm 128. For example, the robotic arm can include a z-rail system (not shown) for moving the entire robotic instrument 132f.
(74) FIGS. 22 and 23 provide views of another exemplary robotic instrument 132g and its associated end-effector assembly 136g. The robotic instrument 132g includes an end-effector assembly 136g and an elongated element 140g. The end-effector assembly 136g is configured for another degree of freedom similar to the end-effector assembly 136f.
(75) FIG. 23 shows another degree of freedom involving a longitudinal translation motion allowing the robotic instrument 132g to be translated along axis 168g. For example, this allows the robotic instrument 132g to enter and penetrate deeper into the body, or be retracted. For example, the robotic arm can include a z-rail system (not shown) for moving the entire robotic instrument 132g.
(76) It is to be understood that degrees of freedom allow for a range of movements for facilitating MIS. Variations are contemplated and additional degrees of freedom not discussed in this application can be added. For example, the robotic instrument 132f can be externally moved using the robotic arm 128 or other suitable means. Therefore, the motion of the robotic arm 128 can move the end-effector assembly 136f over a large distance as an additional degree of freedom.
(77) Referring to FIG. 24, another embodiment of a robotic instrument 132h is shown. Like components of the robotic instrument 132h bear like reference to their counterparts, except followed by the suffix “h”. The robotic instrument 132h includes an end-effector assembly 136h, first, second, and third elongated elements (not shown) encased in a cover 240h, and a drive assembly 144h. In this particular embodiment, the robotic instrument 132h includes a flexible portion 220h configured to provide coarse motion proximate to the end-effector assembly 136h. The flexible portion 220h is located between the cover 240h and the end-effector assembly 136h.
(78) The flexible portion 220h includes first and second subsections 244h and 248h. Each of the first and second subsections 244h and 248h is generally configured to bend within first and second coarse motion planes, respectively. It is to be understood that the first, second, and third elongated elements (not shown) are consequently bent when the first and second subsections 244h and 248h are bent such that the first, second, and third elongated elements can independently rotate while bent. Furthermore, the motions of the first subsection 244h and the second subsection 248h are independent such that one or both of the first and second subsections may be bent independently. Therefore, it is to be understood that the coarse motion of the robotic instrument 132h can be controlled using a set of at least one coarse motion adjustment cable 236h for each of subsections 244h and 248h by independently adjusting the tension of each set of at least one coarse motion adjustment cable.
(79) Referring again to FIG. 24, in the present embodiment, the first and second coarse motion planes are substantially perpendicular to each other. However, it is to be appreciated that the first and second coarse motion planes do not need to be perpendicular to each other and can be at any angle in some embodiments. Furthermore, the exact configuration of first and second subsections 244h and 248h is not particularly limited. In the present embodiment, there are two subsections 244h and 248h. In other embodiments, it is to be understood that the flexible portion 220h can be modified to include more subsections to provide more coarse motion planes within which subsections of the flexible portion 220h can bend. In addition, the subsections need not be placed adjacent to each other. Alternatively, it is also to be understood that the flexible portion 220h can be modified to include only one subsection to provide a single coarse motion plane.
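Under a constant-curvature assumption (ours, for illustration; the specification does not model the bend), the tension adjustment for one bend plane reduces to antagonistic cable length changes:

import math

# Sketch of the coarse-motion control of paragraphs (78) and (79): bending
# one subsection by pulling a cable and paying out its antagonist. The cable
# offset and the constant-curvature model are illustrative assumptions.

def cable_length_changes(bend_deg: float, cable_offset_m: float = 0.002):
    """Length change for an antagonistic cable pair in one coarse motion plane.

    For a segment bent to angle theta, a cable routed a distance d from the
    neutral axis shortens by d * theta; its antagonist lengthens equally.
    """
    theta = math.radians(bend_deg)
    delta = cable_offset_m * theta
    return -delta, +delta  # (pulled cable, paid-out cable)

# Bending a subsection 45 degrees with cables 2 mm off-axis moves each
# cable by about 1.57 mm.
print(cable_length_changes(45.0))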
(80) In addition, the robotic instrument 132h includes an outer cover 240h. It is to be appreciated that the outer cover 240h can be rigid to provide support for the elongated elements (not shown) within the outer cover. In addition, the plurality of coarse motion adjustment cables 236h can be disposed within the outer cover 240h in a space between the inside wall of the outer cover and the elongated elements. By placing the coarse motion adjustment cables 236h behind an outer cover, it is to be understood that wear on the cables is reduced. Furthermore, in the embodiment shown in FIG. 24, the outer cover 240h is fixed. That is, the outer cover 240h does not rotate. It will now be appreciated that when the robotic instrument 132h is inserted inside the patient P, the outer cover 240h reduces the chance of the robotic instrument 132h getting caught on something to cause damage.
(81) It will now be appreciated that each subsection 244h and 248h will provide an additional degree of freedom. Referring back to FIG. 24, it will also now be apparent that the first and second subsections 244h and 248h add two more degrees of freedom to the robotic instrument 132h. Therefore, the robotic instrument 132h includes six degrees of freedom. The six degrees of freedom include the roll about the axis 168h (where the first, second, and third elongated elements rotate concurrently), rotation of the end-effector assembly 136h about a second axis 232h, rotation of a first working member 148h about a first axis 176h, rotation of a second working member 152h about the first axis 176h, the bending of the first subsection 244h and the bending of the second subsection 248h. In addition, the entire robotic instrument 132h can be moved on a rail system (not shown) to provide a seventh degree of freedom.
(82) It is to be understood that by rotating the first and second elongated elements, the first and second working members 148h and 152h can rotate together about the first axis 176h such that the working members 148h and 152h can open and close over a range of angles about the first axis 176h. Furthermore, it will also be appreciated that to change the angle about the first axis 176h at which the working members 148h and 152h open and close, the first and second elongated elements rotate by a different amount compared with the third elongated element. This difference is called a delta and can be adjusted to control the movement of the end-effector assembly 136h relative to the robotic instrument 132h.
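Here is a minimal sketch of the delta relationship in paragraph (82), assuming element rotations map one-to-one to commanded angles and that the grip opening is split symmetrically between the first and second elongated elements; the function name and the symmetric split are assumptions, not the patent's prescription.

def element_commands(theta3_deg: float, delta_deg: float,
                     grip_deg: float) -> tuple[float, float, float]:
    """Return rotation commands in degrees for the three elongated elements.

    theta3_deg: rotation of the third elongated element (shared roll)
    delta_deg:  common offset of the first and second elements relative to
                the third, setting the angle about the first axis at which
                the working members open and close
    grip_deg:   opening angle between the two working members
    """
    theta1 = theta3_deg + delta_deg + grip_deg / 2.0
    theta2 = theta3_deg + delta_deg - grip_deg / 2.0
    return theta1, theta2, theta3_deg

# Keep the jaws closed (grip 0) while shifting, by 20 degrees, the angle
# about the first axis at which they open and close.
print(element_commands(theta3_deg=0.0, delta_deg=20.0, grip_deg=0.0))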
(83) Referring to FIG. 25, another embodiment of a robotic instrument 132i is shown. Like components of the robotic instrument 132i bear like references to their counterparts, except followed by the suffix "i". The robotic instrument 132i includes an end-effector assembly 136i, first, second, and third elongated elements (not shown) encased in a cover 240i, and a drive assembly 144i. In this particular embodiment, the robotic instrument 132i includes a flexible portion 220i configured to provide coarse motion proximate to the end-effector assembly 136i in a similar manner to the flexible portion 220h in the robotic instrument 132h. The flexible portion 220i is located between the cover 240i and the end-effector assembly 136i. The end-effector assembly 136i is similar to the end-effector assembly 136b described above. Therefore, the robotic instrument 132i shown in FIG. 25 adds coarse motion to the end-effector assembly 136b.
(84) Referring again to FIG. 22, it will now be appreciated that if the roll motion rotates the first axis 176g relative to the second axis 232g, the robotic instrument 132g would have positions where the first axis 176g can be parallel to the second axis 232g. However, in the embodiment shown in FIG. 25, the roll motion rotates both the first axis 176i and the second axis 232i by having the rotation occur between the flexible portion 220i and the second axis 232i. Therefore, the angle between the first axis 176i and the second axis 232i remains fixed. In the present embodiment, the angle between the first axis 176i and the second axis 232i is maintained at 90 degrees. However, in other embodiments, the angle can be greater or smaller than 90 degrees.
(85) Therefore, it is to be understood that many combinations, variations, and subsets of the embodiments and teachings herein are contemplated. As a non-limiting example, the robotic instrument 132d can be modified with the variation described in relation to the robotic instrument 132g to provide for coarse motion in the robotic instrument 132d. As another non-limiting example, the robotic instrument 132 can be modified with the variation described in relation to the robotic instrument 132d to provide cauterizing functionality to the robotic instrument 132.
(86) While specific embodiments have been described and illustrated, such embodiments should be considered illustrative only and should not serve to limit the accompanying claims.
Claims
1. A robotic surgical instrument, comprising:
an end-effector assembly comprising a working member configured to pivot about a first axis;
an elongated element extending between a proximal end and a distal end, the distal end being affixed with respect to the end-effector assembly to hold the first axis perpendicular to the elongated element;
a second elongated element extending between a second proximal end and a second distal end, the second distal end of the second elongated element being coupled with the end-effector assembly so that rotation of the second elongated element about a second axis pivots the working member about the first axis; and
a drive assembly comprising a first drive member coupled to the proximal end of the elongated element and a second drive member coupled to the second proximal end of the second elongated element, the drive assembly configured to selectively operate the first drive member to rotate the elongated element and configured to selectively operate the second drive member to rotate the second elongated element,
wherein the elongated element and second elongated element are nested together, and
wherein the elongated element is a first tube, the second elongated element is a second tube, and the elongated element and second elongated element are nested together so that one of the first tube and the second tube is disposed within the other of the first tube and the second tube.
2. The robotic surgical instrument of claim 1, wherein the working member is a blade.
3. The robotic surgical instrument of claim 1, wherein the working member is configured to cauterize a tissue.
4. The robotic surgical instrument of claim 1, wherein one or both of the elongated element and the second elongated element comprises a flexible portion.
5. The robotic surgical instrument of claim 1, wherein one or both of the first and second drive members comprise a gear.
6. The robotic surgical instrument of claim 1, wherein one or both of the first and second drive members rotate about a third axis offset from the second axis.
7. The robotic surgical instrument of claim 1, further comprising an electrical wire extending through the second elongated element and coupled to the working member.
8. The robotic surgical instrument of claim 1, wherein the second distal end of the second elongated element is coupled to the working member of the end-effector assembly via a gear.
9. A robotic surgical system, comprising:
a support arm; and
a surgical instrument coupled to the support arm, the surgical instrument comprising:
an end-effector assembly comprising a working member configured to pivot about a first axis,
an elongated element extending between a proximal end and a distal end, the distal end being affixed with respect to the end-effector assembly to hold the first axis perpendicular to the elongated element,
a second elongated element extending between a second proximal end and a second distal end, the second distal end of the second elongated element being coupled with the end-effector assembly so that rotation of the second elongated element about a second axis pivots the working member about the first axis, and
a drive assembly comprising a first drive member coupled to the proximal end of the elongated element and a second drive member coupled to the second proximal end of the second elongated element, the drive assembly configured to selectively operate the first drive member to rotate the elongated element and configured to selectively operate the second drive member to rotate the second elongated element,
wherein the elongated element and second elongated element are nested together, and
wherein the elongated element is a first tube, the second elongated element is a second tube, and the elongated element and second elongated element are nested together so that one of the first tube and the second tube is disposed within the other of the first tube and the second tube.
10. The robotic surgical system of claim 9, wherein the surgical instrument is movably coupled to the support arm.
11. The robotic surgical system of claim 9, wherein the support arm is a robotic arm configured to move in response to an input control signal received from an input device operated by a user.
12. The robotic surgical system of claim 9, wherein the working member is a blade.
13. The robotic surgical system of claim 9, wherein the working member is configured to cauterize a tissue.
14. The robotic surgical system of claim 9, wherein one or both of the elongated element and the second elongated element comprises a flexible portion.
15. The robotic surgical system of claim 9, wherein one or both of the first and second drive members comprise a gear.
16. The robotic surgical system of claim 9, wherein one or both of the first and second drive members rotate about a third axis offset from the second axis.
17. The robotic surgical system of claim 9, further comprising an electrical wire extending through the second elongated element and coupled to the working member.
18. The robotic surgical system of claim 9, wherein the second distal end of the second elongated element is coupled to the working member of the end-effector assembly via a gear.
https://investors.ctibiopharma.com/static-files/83278a88-ff8f-457a-bccf-1900fe5042d8
no sneak peeks on last quarter's sales
Extension of CEO Services
As previously disclosed by the Company’s Current Report on Form 8-K filed with the Commission on November 2, 2022 (the “November 8-K”), Dannielle Appelhans’ employment as the Company’s chief executive officer and president of the Company was expected to terminate no later than January 31, 2023. In light of the Company’s continued review of strategic alternatives, Ms. Appelhans has agreed to continue to serve as the Company’s chief executive officer and president until no later than March 3, 2023, and Ms. Appelhans may thereafter provide consulting services, depending on the progress or outcome of such strategic review process at such time.
Officer Departure
On January 31, 2023, Laurence Turka, M.D., stepped down from his position as Chief Scientific Officer and Head of Research and Translational Medicine of the Company on the terms previously disclosed in the November 8-K. Dr. Turka is continuing to provide consulting services as the Company undertakes its review of strategic alternatives.
Director Resignations
On January 31, 2023, Francis Cuss, M.B., B. Chir., FRCP, Michael Rosenblatt, M.D. and Susanne Schaffert, Ph.D., each notified the Company of their respective resignations as members of the Company’s board of directors, effective immediately. None of the resignations of Drs. Cuss, Rosenblatt and Schaffert resulted from any disagreement with the Company on any matter relating to the Company’s operations, policies or practices.
And there go more costs! Only a few salaries remain.
It must have been very nice to work at Ruby…
The platform seems sound; is it possible that nobody cares?
If we sell, what do you gain?