
Re: None

Wednesday, 03/01/2017 8:24:25 AM


Post# of 140477
Playing catch-up again; family emergency yesterday (all is well now).

One thing that has bothered me the whole time about the Verb situation is how little focus there is on an instrument in their communications, even though it's one of their five pillars...

I'm thinking, as they claim, that their "system" is more advanced than a "mere" surgical robot, and that the focus on software and data is the real basis for their end product. Just a hypothesis here, but could they be looking at a system which tracks and correlates surgical robot movement with live video imaging for eventual repetition? In other words, they could be using various existing robotic platforms (a limited selection today, but more coming, including ours) and monitoring/logging the robot's movements and activities for a given procedure. Eventually, they could build a statistically significant database of the movements surgeons make with the instruments to achieve the surgical goal for each given procedure.

With today's image recognition software, it is feasible that the end result is closer to automated surgery: the software recognizes features, differentiates tissue colors, textures, positions, etc., and either guides the surgeon through the process with on-screen prompts or performs the movements itself, relying on the surgeon to approve each move, merely as a sanity check. Maybe add something to measure distances more accurately, such as an ultrasonic sensor, and let the computer convert colors to a wider spectrum for on-screen representation (the surgical field is mostly varying shades of red and pink; maybe do a reverse-Dolby compression for the visible spectrum to provide easier recognition of different anatomical features)...
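The logging half of that hypothesis is easy to sketch. Here's a minimal, purely illustrative example (all field names and the function itself are my own invention, not anything Verb has described) of the kind of synchronized record such a database might accumulate, pairing each video frame with the instrument's pose and activity:

```python
import json
import time

def log_procedure_step(log_file, frame_id, tool_pose, tool_state):
    """Append one synchronized record pairing a live-video frame with the
    robot's instrument pose and current activity (hypothetical schema)."""
    record = {
        "t": time.time(),    # wall-clock timestamp for cross-procedure alignment
        "frame": frame_id,   # index of the matching video frame
        "pose": tool_pose,   # e.g. [x, y, z, roll, pitch, yaw] of the tool tip
        "state": tool_state, # e.g. "grasp", "cut", "cauterize"
    }
    log_file.write(json.dumps(record) + "\n")
```

Aggregated over thousands of cases, records like these would be the raw material for the statistically significant movement database described above.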

They talk a lot about the future of robotic surgery, and digital learning is repeatedly more than hinted at. How else would it be applied to the robotic surgery environment?

In the earlier stages, they would need their software to run as an overlay on the existing software for data accumulation. Their database will have learned, for a given patient size, gender, age, etc., where to expect to see certain anatomical features, how to recognize them, and how to recognize any issues being dealt with surgically. A surgeon could then conceivably sit at the control station, clicking away at confirmations (Yup, that's the pancreas... Yup, that's a tumor... Yup, go ahead and remove the tumor...) with the ability to override and control the robot manually at any time.
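That confirmation workflow boils down to a simple loop. A sketch of one step of it, with every name here hypothetical (nothing in Verb's public material specifies such an interface):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # hypothetical recognizer output, e.g. "pancreas", "tumor"
    confidence: float  # classifier confidence in [0, 1]

def supervised_step(detection, proposed_action, confirm, execute, manual_override):
    """One iteration of the surgeon-in-the-loop idea: the software proposes a
    move, and the surgeon either approves it or takes manual control."""
    prompt = f"{detection.label} ({detection.confidence:.0%}) -- {proposed_action}?"
    if confirm(prompt):            # the "Yup" click at the control station
        return execute(proposed_action)
    return manual_override()       # surgeon overrides and drives the robot
```

The key design point is that `execute` never runs without `confirm` returning true, and the manual path is always available, which matches the sanity-check role described above.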

Looking at today's image recognition capabilities, this is all very feasible, and it just might be the direction Verb is going. If so, they will be as concerned with a given robot's compatibility as with its capability. If this is the behind-the-scenes interaction between Verb and Titan (if there is any interaction at all), then Titan should be well positioned to enter the market as a viable platform for this next-generation, even more automated surgery paradigm, and still at a price point and per-case cost which provide them a huge eventual market opportunity.

From my own experience with digital image recognition technologies and robotics in general, I believe the tech all exists but just needs to be incorporated into one system. And that might be what Verb is doing, and why Titan and others could play a role.

All hypothesis, of course, but sometimes it's fun to let the imagination run on a bit.