
Enabling personalization of a robotic surgery procedure via a surgery training simulator

Published online by Cambridge University Press:  27 July 2022

Mehmet İsmet Can Dede*
Affiliation:
Department of Mechanical Engineering, Izmir Institute of Technology, Izmir, Turkey
Tarık Büyüköztekin
Affiliation:
Department of Mechanical Engineering, Izmir Institute of Technology, Izmir, Turkey
Şahin Hanalioğlu
Affiliation:
Department of Neurosurgery, Hacettepe University Faculty of Medicine, Ankara, Turkey
İlkay Işıkay
Affiliation:
Department of Neurosurgery, Hacettepe University Faculty of Medicine, Ankara, Turkey
Mustafa Berker
Affiliation:
Department of Neurosurgery, Hacettepe University Faculty of Medicine, Ankara, Turkey
*
*Corresponding author. E-mail: [email protected]

Abstract

Although robotic or robot-assisted surgery has been increasingly adopted by many surgical disciplines, its application in cranial or skull base surgery is still in its infancy. The master-slave teleoperation setting of these robotic systems enables the surgical procedures to be replicated in a virtual reality environment for surgeon training purposes. A variety of teleoperation modes were previously determined with respect to the motion capability of the surgeon’s ring-wearing hand as the surgeon handles a surgical tool inside the surgical workspace. In this surgery training simulator developed for robot-assisted endoscopic skull base surgery, a new strategy is developed to identify the preferred motion axes of the surgeon. The simulator is designed specifically for tuning the teleoperation system for each surgeon via this identification. This tuning capability brings the flexibility to adjust the system’s operation with respect to the motion characteristics of each surgeon.

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

Independent of robotic surgery systems, virtual reality (VR)-based surgical training systems have emerged for educational purposes. As an alternative to surgeon training via cadavers, animals, and physical models, VR-based training tools have received increased attention, especially during the COVID-19 pandemic. After comparing training models such as cadavers, physical models (mock-ups), laboratory animals, augmented reality simulators, and VR simulators, ref. [Reference Yiannakopoulou, Nikiteas, Perrea and Tsigris1] reported that VR simulator training led to the acquisition of important skills, such as basic laparoscopic skills and advanced suturing skills.

As some surgical procedures shifted from manual to robotic or robot-assisted execution, the need to train the surgeons who use these robotic systems arose. Commonly, these robotic systems have a master-slave teleoperation architecture developed for minimally invasive surgical procedures. This teleoperation setting enables the procedures to be replicated in a VR environment for surgeon training purposes. Indeed, the VR training simulator (dV-Trainer) of the da Vinci robotic surgery system [Reference Okamura2] has been demonstrated to be a valid tool for assessing the robotic surgery skills of surgeons [Reference Perrenot, Perez, Tran, Jehl, Felblinger, Bresler and Hubert3].

We developed a new surgical robotic system called the NeuRoboScope system, which is an assistive-robot system designed specifically for endoscopic endonasal skull base surgery [Reference Dede, Kiper, Ayav, Özdemirel, Tatlıcıoğlu, Hanalioglu, Işıkay and Berker4]. This system is designed to handle and move the endoscope while the main surgeon carries out the surgery. During the surgery, the main surgeon can interact with the system in two ways: (a) direct manual interaction and (b) teleoperation. The working principle of the NeuRoboScope system requires the surgeon to learn how to operate the slave system with the master system. Initially, the master system’s measurement procedure was kept the same for all users, which resulted in a less intuitive teleoperation experience. This work focuses on a personalization procedure to enable intuitive use of the NeuRoboScope surgical system. The personalization procedure first identifies the surgeon’s motion characteristics and then tunes the master system’s measurement procedure accordingly. Hence, this tuning capability brings the flexibility to adjust the system’s operation with respect to the motion characteristics of each surgeon.

Although a new simulator for a new robotic surgery system has its own merits in terms of novelty and contribution to the literature, the main novelty of this study is the idea and implementation of a surgical robot personalization methodology. The proposed personalization method proves to be efficient in improving the intuitiveness of surgical robot use and in reducing the effort spent while using the surgical robot.

2. Description of the NeuRoboScope surgical system

The NeuRoboScope surgical system is a master-slave teleoperation system that consists of a robot manipulator handling the endoscope, which is regarded as the slave system. The master system is composed of a wearable ring and a foot pedal. The slave system is developed to have both active and passive arms. The active arm is a remote-center-of-motion (RCM) parallel manipulator with only three degrees of freedom (DoF), two of them rotational and one translational. These DoFs are controlled by the actuation system located on each leg of this parallel manipulator. The passive arm is a serial manipulator with five unactuated revolute joints and a linear motor to adjust the height of the passive arm. The purpose of the passive arm is to carry the active arm and fix its base platform’s pose within the designated workspace. The passive arm’s joints are equipped with magnetic brakes that can be released when the surgeon wants to relocate the active arm. The surgeon relocates the active arm by backdriving the passive arm without any control action (passive backdriving). This action is named “Direct Manual Interaction.” Further details on the subcomponents of the NeuRoboScope surgical system shown in Fig. 1 can be found in ref. [Reference Dede, Kiper, Ayav, Özdemirel, Tatlıcıoğlu, Hanalioglu, Işıkay and Berker4].

Figure 1. The NeuRoboScope system’s slave system, composed of the passive arm and the active arm, being backdriven by the human operator holding the endoscope. The master system is composed of the IMU-embedded ring and a foot pedal (the foot pedal is not shown in this photo).

2.1. Direct manual interaction

Direct manual interaction is required when the endoscope is to be moved out of or into the surgical workspace (inside the nostril). During this operation, the surgeon releases the brakes on the passive arm, which carries the active arm, by activating the related button on the user interface, and deactivates the same button when the endoscope is to be placed back into the surgical workspace (https://youtu.be/kUtVecdl_gY). A prismatic joint driven by a linear motor adjusts the height of the system before the surgery, and thereafter it maintains its position throughout the surgery.

2.2. Master-slave teleoperation

The master-slave teleoperation subsystem of the NeuRoboScope is designed so that the main surgeon operates with surgical tools other than the endoscope, while the endoscope is handled by the slave system (https://www.youtube.com/shorts/b4amZrX_9M0). The slave system with the endoscope and the surgeon cooperate on the patient at the same time, as shown in Fig. 2.

Figure 2. A screenshot indicating the master-slave teleoperation mode. A surgical tool, an aspirator, is handled by the hand wearing the IMU-embedded ring. The robot and the surgeon cooperate on the patient.

The slave system is teleoperated by the surgeon while the surgeon carries out the surgery on the patient with the other surgical tools. The information exchange between the master and the slave systems is described in Fig. 3. In Fig. 3, the term “Endoscope Robot” indicates the active arm that is placed on the passive arm. The active arm controls the pose of the endoscope inside the surgical workspace owing to its RCM capability of two rotational and one translational DoFs. The unilateral information flow from the master to the slave is initiated by pressing the foot pedal. The information sent via Bluetooth is the absolute orientation received from the inertial measurement unit (IMU) embedded inside the wearable ring. When the foot pedal is not pressed, the slave system maintains its position and the teleoperation system does not issue any pose-changing command to the slave system. Hence, any involuntary motion of the endoscope is restricted.
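The pedal-gated, unilateral information flow described above amounts to a dead-man switch. The following is a minimal sketch, assuming hypothetical names (`teleoperation_step`, `send_to_slave`) rather than the actual NeuRoboScope software interface:

```python
def teleoperation_step(pedal_pressed, ring_orientation, send_to_slave):
    """Forward the ring's orientation to the slave only while the pedal is held.

    Returns True if a command was issued.  When the pedal is released, no
    pose-changing command is sent, so the slave holds its position and any
    involuntary motion of the endoscope is ignored.
    """
    if pedal_pressed:
        send_to_slave(ring_orientation)  # unilateral master -> slave flow
        return True
    return False
```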

Figure 3. Information exchange between the master system (ring and foot pedal) and the slave system (endoscope robot) of the NeuRoboScope.

This absolute orientation information is processed [Reference Dede, Kiper, Ayav, Özdemirel, Tatlıcıoğlu, Hanalioglu, Işıkay and Berker4] at the master side to generate speed demands ( $\omega _{\textrm{pitch}}, \omega _{\textrm{roll}}, v_{\textrm{dolly}}$ ) to drive the slave system. Therefore, the generated motion demand is not an absolute position demand but a relative motion demand. During this operation, the surgeon continuously observes how the slave system moves, since the view captured by the endoscope and displayed on the monitor changes. The aim here is to minimize the surgeon’s effort when the view of the surgical workspace needs to be changed. The mapping between the motion of the ring and the motion of the endoscope resembles the relation between an automobile’s gas pedal and the change of its speed.
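This gas-pedal-like rate control can be sketched as follows; the deadband, gain, and saturation values are illustrative assumptions, not the tuned NeuRoboScope parameters:

```python
def speed_demand(theta, theta_ref, gain, deadband=0.02, v_max=1.0):
    """Map a ring-angle offset (rad) to a bounded, relative speed demand.

    Holding the ring at a constant offset from the reference angle yields a
    constant speed; returning to the reference stops the endoscope, much
    like releasing a car's gas pedal.
    """
    offset = theta - theta_ref
    if abs(offset) < deadband:         # suppress small involuntary motions
        return 0.0
    v = gain * offset
    return max(-v_max, min(v_max, v))  # saturate the demand for safety
```

The same mapping would be applied per measurement channel to produce the three demands $\omega _{\textrm{pitch}}$, $\omega _{\textrm{roll}}$, and $v_{\textrm{dolly}}$.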

2.3. A comparison between the NeuRoboScope system and the other robotic endonasal surgery systems

A variety of methods have been developed to integrate robots into minimally invasive endonasal surgery procedures. These methods are (1) systems with autonomous tracking capability [Reference Dai, Zhao, Zhao, He, Sun, Gao, Hu and Zhang5], (2) direct master-slave teleoperation systems where the master system is located at a console away from the patient [Reference Chumnanvej, Pillai, Chalongwongse and Suthakorn6–Reference Muñoz, Garcia-Morales, Fraile-Marinero, Perez-Turiel, Muñoz-Garcia, Bauzano, Rivas-Blanco, Sabater-Navarro and de la Fuente10], (3) systems with supervisory-level teleoperation and automated motion planning capabilities, (4) systems that require direct physical contact with the surgeon [Reference Colan, Nakanishi, Aoyama and Hasegawa11], and (5) systems that support the surgeon but do not directly handle any of the surgical tools [Reference Ogiwara, Goto, Nagm and Hongo12]. The NeuRoboScope system’s operation paradigm does not fall into any of these categories: the surgeon can simultaneously operate two surgical tools and the endoscope while operating on the patient via continuous control of the endoscope robot. Additionally, to the best of our knowledge, the personalized use options of these methods are limited and/or have not been reported.

  1. In ref. [Reference Dai, Zhao, Zhao, He, Sun, Gao, Hu and Zhang5], an endoscope robot is designed with an RCM mechanism. This robot is designed to navigate the endoscope autonomously during the surgery. The desired motion of the robot is coupled to the motion of the surgical tool that the surgeon handles during the operation. A vision system tracks the motion of the surgical tool and directs the endoscope robot with respect to this captured motion. The obvious problems of such a system are as follows: (1) markers must be attached to the surgical tool, which is constantly swapped for another tool throughout the surgery; (2) room lighting conditions must be stable; and (3) the surgeon does not always want the same tool to be followed. These problems are still to be addressed in future works.

  2. In ref. [Reference Chumnanvej, Pillai, Chalongwongse and Suthakorn6], a surgical robotic system is introduced for minimally invasive neurosurgery procedures. This robot control system guides the surgeon as the endoscope is inserted into the nostril. The surgeon is situated at a control station to control the motion of the robot throughout the surgery. This is a teleoperation setting; however, a surgeon is dedicated to controlling the motion of the robot away from the patient, similar to the operation routine of the da Vinci surgical system. Therefore, this method cannot be compared with the NeuRoboScope system, in which the surgeon controls the endoscope and two other surgical tools simultaneously.

  3. In ref. [Reference He, Deng and Zhang13], an assistive surgical robot is proposed, which has features similar to the NeuRoboScope robot’s RCM mechanism. The teleoperation system’s master side involves the use of a commercial voice recognition system. This system is trained with eight commands to drive the surgical robot, while the surgeon continues the surgery using both hands. The only evaluation reported is how well these eight commands were recognized by ten test subjects. There is no indication of the ergonomics of using such a teleoperation system during surgery. In fact, during a surgery, not only the main surgeon but the whole surgical team speaks, which poses a potential problem for this teleoperation setting. In addition, the eight commands (up and down, left and right, forward and backward, and clockwise and counterclockwise) are not enough to change the speed or range of operation of the robot. Based on the received commands, the surgical system plans the path [Reference He, Zhang, Qi, Zhao, Li and Hu14], and the endoscope robot is driven along this path. Further, a robot motion constraint model is developed based on a virtual fixture [Reference He, Hu, Zhang, Zhao, Qi and Zhang15]. In this way, the safety of operation is increased while the endoscope is inserted into the nostril as the surgeon drags the endoscope. The endoscope is backdriven by using an admittance controller. In contrast, the NeuRoboScope system is passively back-drivable, and the NeuRoboScope robot has a passive gravity compensation system [Reference Dede, Kiper, Ayav, Özdemirel, Tatlıcıoğlu, Hanalioglu, Işıkay and Berker4]. Therefore, the NeuRoboScope system is inherently safe compared to the robot presented in ref. [Reference He, Hu, Zhang, Zhao, Qi and Zhang15].

  4. In ref. [Reference Colan, Nakanishi, Aoyama and Hasegawa11], a cooperative strategy in endoscopic endonasal surgery is proposed. The robot that handles the surgical tool is operated by direct physical contact of the surgeon. This is not a teleoperation system but involves direct physical interaction between the surgeon and the robotic tool holder. The aim is to increase the precision of the surgical operation. In ref. [Reference Huang, Lai, Cao, Burdet and Phee7], hand and foot controllers are examined as the master system for the teleoperation of a flexible endoscope system. In this scenario, the surgeon is situated at a control station to operate a hand interface (commercial haptic devices) and a foot pedal with 4 DoFs. The test subjects used both interfaces separately. The results indicate that the hand controllers were relatively more preferable due to the performance achieved with them. The systems presented in refs. [Reference Huang, Lai, Cao, Burdet and Phee7, Reference Colan, Nakanishi, Aoyama and Hasegawa11] have working principles different from the NeuRoboScope system. In the NeuRoboScope system, the surgeon can use two hands to operate two different surgical tools, while the endoscope remains under the surgeon’s control via the endoscope robot’s teleoperation system. However, the use of a foot interface could be adapted to the NeuRoboScope system.

  5. iArms [Reference Ogiwara, Goto, Nagm and Hongo12] is a robotic system that supports the forearm of the surgeon during the surgery, aiming to prevent the surgeon’s hand trembling and fatigue. iArms follows the surgeon’s arm but does not directly handle the surgical tools. Therefore, the iArms operation paradigm is totally different from that of the NeuRoboScope system. Nevertheless, the iArms procedure was found to prolong endoscope lens-wiping intervals in endoscopic sinus surgery, resulting in continuously clear endoscopic images, in a study by ref. [Reference Okuda, Okamoto, Takumi, Kakehata and Muragaki16].

3. Assessment of the adaptability to different modes of teleoperation

An initial training simulator for the NeuRoboScope system’s teleoperation scenario was previously developed [Reference Ateş, Majani and Dede17]. This training simulator was designed to understand the adaptability of users to different modes of teleoperation. Here, these modes indicate the various mappings between the measured orientation of the wearable ring and the speed demand issued to the slave system. It should be noted that, during these initial tests, the measurement axes of the IMU were not necessarily matched with the motion axes of the surgeon’s hand wearing the ring.

The modes of operation are determined with respect to the motion capability of the surgeon’s ring-wearing hand as the surgeon handles a surgical tool inside the surgical workspace. An illustration of such a case is shown in Fig. 4. In the simulation, the active arm’s (hence the endoscope’s) motion is mapped to the graphical user interface. In Fig. 4, the surgical tool is an aspirator, which is used during the operation by the surgeon. The surgeon wears the IMU-embedded ring on the hand that uses the aspirator. The aspirator is chosen because the surgeon has to move their hand to issue commands during the teleoperation phase, and the aspirator is the least dangerous tool that can be handled during this phase. To resemble the actual use case, a skull mock-up is used, and the aspirator is inserted into the nostril of this mock-up. As a consequence, the surgeon has a limited workspace while teleoperating the slave system, as is the case in the actual surgery.

Figure 4. Simulator’s graphical user interface and the master system’s ring.

Two of the modes are designed for the case in which the surgeon’s finger wearing the ring is always in firm contact with the surgical tool. A third mode is designed for the case in which the surgeon can release the ring-wearing finger from the surgical tool while issuing commands to the active arm. The endoscope-handling active arm can perform two rotations and one translational motion about a predefined pivot point. Hence, it is designed as an RCM mechanism robot [Reference Yaşır, Kiper and Dede18]. However, the information received from the IMU placed inside the ring is the absolute three-dimensional (3D) orientation, received as an X-Y-Z Euler angle sequence.

In one of the firm-contact modes, the Euler angles about the X and Y axes are mapped to the endoscope robot’s rotations about the X and Y axes defined in Fig. 5. The translational motion along the endoscope’s telescope axis through the pivot point is mapped to the Euler angle about the Z axis.

Figure 5. Active arm’s (a) indicative computer-aided design (CAD) model (b) kinematic sketch [Reference Yaşır, Kiper and Dede18] with motion axes indicated at the pivot point.

However, it is foreseen that rotation about the telescope axis is relatively harder for the surgeon to perform while the tool is inserted inside the surgical workspace. This motion corresponds to the measured Euler angle about the Z axis. Accordingly, a second firm-contact mode is developed as an alternative. In this mode, both the translational motion and the rotational motion about the X axis of the endoscope are mapped to the same Euler angle measurement about the X axis. To differentiate the issued motion type, the foot pedal’s state is used:

  1. if the foot pedal is pressed and held down once, the rotational motion is issued for the measured Euler angle about the X axis,

  2. if the foot pedal is double-clicked and held down, the translational motion is issued for the measured Euler angle about the X axis.
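The two pedal rules can be expressed as a small routing function; `route_measurement` and the channel names are illustrative, not the NeuRoboScope firmware interface:

```python
def route_measurement(click_count, theta_x):
    """Route the shared X-axis Euler measurement based on the pedal state.

    click_count: 1 = pressed and held once, 2 = double-clicked and held,
    0 = released.  Returns (channel, value) for the issued motion.
    """
    if click_count == 1:
        return ("rotation_x", theta_x)   # rule 1: rotational demand
    if click_count == 2:
        return ("translation", theta_x)  # rule 2: translational demand
    return ("hold", 0.0)                 # pedal released: slave holds pose
```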

To assess the ease of use and the amount of effort required by these modes, a simulator is developed. Here, the information flow is the same as in the actual system, as represented in Fig. 3. The only difference is that the motion issued for the endoscope robot is forwarded to a simulation environment, where the measured Euler angles about the X and Y axes move the tool shown in Fig. 4 (painted in gray) on the two-dimensional (2D) image. The measured Euler angle about the Z axis is mapped to the tool’s enlarging–shrinking function.

The virtual tool tip visualized in Fig. 4 is moved on the screen, and its radius is changed, by changing the orientation of the IMU-embedded ring. During the tests, the test subjects wear the ring, and the hand wearing the ring handles a surgical tool inserted in the skull mock-up, as presented in Fig. 4. The aim in the tests with the simulator is to move the tool to each “goal” and match the radius of each “goal.” The current “goal” is painted in orange, and the next “goal” to be reached is painted in red.
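A success check for one simulator “goal” might look like the following sketch; the screen-pixel tolerances are assumptions, as the scoring thresholds are not reported here:

```python
def goal_reached(tool_xy, tool_r, goal_xy, goal_r, pos_tol=5.0, rad_tol=2.0):
    """Return True when the virtual tool tip overlaps the goal closely enough.

    tool_xy/goal_xy are 2D screen positions; tool_r/goal_r are the radii
    that the test subject adjusts via the ring's Z-axis Euler angle.
    """
    dx = tool_xy[0] - goal_xy[0]
    dy = tool_xy[1] - goal_xy[1]
    close_enough = (dx * dx + dy * dy) ** 0.5 <= pos_tol
    radius_match = abs(tool_r - goal_r) <= rad_tol
    return close_enough and radius_match
```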

These previous tests with non-surgeon test subjects provided some indications of the user ergonomics of the developed teleoperation system. When surgeons examined the simulator, it was observed that each surgeon has their own posture, motion, and way of handling the surgical tools. It became obvious that using Euler angles defined with respect to the same earth-frame axes for every surgeon is not suitable for the control of this surgical robotic system. Accordingly, a procedure is developed to identify each surgeon’s own hand motion during the surgery. The identified motion axes, named the preferred motion axes, can then be used for issuing motion demands to the slave system. To carry out this identification, a personalization simulator incorporating actual surgical scenarios is developed.

4. Personalization simulator with actual surgical scenarios

Aligned with the conclusions of the initial studies, a new strategy is developed to identify the preferred motion axes of the surgeon via a new training simulator that incorporates actual surgical scenarios.

4.1. Preferred motion axes of the surgeons

The preferred motion axes of a surgeon refer to that surgeon’s motion characteristics. Surgeons handle the surgical tools differently, and their postures during the operation also differ. Consequently, their motion profiles when moving the surgical tool up and down or sideways are different. A methodology is developed to identify the preferred motion axes of the surgeon. In this methodology, the aim is to identify the motions of the surgeon while trying to move the endoscope up and down, and sideways, respectively. Figure 6 shows these two motions. The blue arrows on the red workspaces indicate the direction of the requested endoscope motion. To move the endoscope in the desired direction, the surgeon starts from an initial state, identified as the initial frame $\mathcal{F}_{\textrm{is}}$ , and terminates the motion at a terminal state $\mathcal{F}_{\textrm{ts}}$ . For each of these motions, an axis of rotation is calculated, indicated by the purple arrows in Fig. 6. As a consequence, an axis for each motion is identified, and the motion signals issued for the endoscope are then calculated as the rotations measured about these preferred motion axes.

Figure 6. Change of the IMU-fixed frame (X-Y-Z axes marked with red) during the initial and terminal states of the identification process. The identified motion axes are indicated with the purple arrows.

4.2. Procedure for identifying the preferred motion axes of the surgeons

In this subsection, the procedure to determine these axes is explained, along with a necessary modification to the identified preferred motion axes. Initially, using the setup described in Fig. 4, the surgeon is asked to move the endoscope sideways. The motion information acquisition by the IMU on the ring is initiated when the foot pedal is pressed. The orientation of the ring at that instant is recorded as the initial frame, and it is represented by a transformation matrix $\hat{C}^{(e,\textrm{is})}\in \mathfrak{R}^{3\times 3}$ which is calculated by using the measured Euler angles about the X, Y, and Z axes ( $\theta_{x}^{\textrm{is}}, \theta_{y}^{\textrm{is}}, \theta_{z}^{\textrm{is}}$ ):

(1) \begin{equation}\hat{C}^{\left(e,\textrm{is}\right)}=\hat{R}_{x}\left(\theta _{x}^\textrm{is}\right)\hat{R}_{y}\left(\theta _{y}^\textrm{is}\right)\hat{R}_{z}\left(\theta _{z}^\textrm{is}\right)\end{equation}

Here, $\hat{R}_{i}\left(\theta _{i}\right)\in \mathfrak{R}^{3\times 3}$ defines a rotation matrix about the $i$ axis by an amount $\theta _{i}$ , $e$ denotes the earth frame, and $\textrm{is}$ denotes the initial frame of the sideways motion in $\hat{C}^{\left(e,\textrm{is}\right)}$ .
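Equation (1) can be transcribed directly. A NumPy sketch, with illustrative function names:

```python
import numpy as np

def rot(axis, theta):
    """Elementary rotation matrix R_axis(theta) for axis in {'x', 'y', 'z'}."""
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    if axis == "y":
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def frame_from_euler(theta_x, theta_y, theta_z):
    """Eq. (1): C^(e,is) = R_x(theta_x) R_y(theta_y) R_z(theta_z)."""
    return rot("x", theta_x) @ rot("y", theta_y) @ rot("z", theta_z)
```

The same construction yields the terminal frame $\hat{C}^{\left(e,\textrm{ts}\right)}$ from the Euler angles recorded when the pedal is released.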

The motion information acquisition by the IMU on the ring is terminated when the foot pedal is released. The orientation of the ring at that instant is recorded as the terminal frame, and it is represented by a transformation matrix $\hat{C}^{\left(e,\textrm{ts}\right)}\in \mathfrak{R}^{3\times 3}$ which is calculated by using the measured Euler angles about X, Y, and Z axes ( $\theta _{x}^\textrm{ts}, \theta _{y}^\textrm{ts}, \theta _{z}^\textrm{ts}$ ):

(2) \begin{equation}\hat{C}^{\left(e,\textrm{ts}\right)}=\hat{R}_{x}\left(\theta _{x}^\textrm{ts}\right)\hat{R}_{y}\left(\theta _{y}^\textrm{ts}\right)\hat{R}_{z}\left(\theta _{z}^\textrm{ts}\right)\end{equation}

In order to determine the nominal rotation axis while moving from the initial frame to the terminal frame in the sideways motion, the transformation matrix between these two frames, $\hat{C}^{\left(\textrm{is},\textrm{ts}\right)}$ , is calculated by (3). Then, by using Rodrigues’ formula, presented in matrix form in (4), the nominal rotation axis $\vec {n}_{s}$ is determined, where $\vec {n}_{s}$ is a unit vector:

(3) \begin{equation}\hat{C}^{\left(\textrm{is},\textrm{ts}\right)}=\left(\hat{C}^{\left(e,\textrm{is}\right)}\right)^{-1}\hat{C}^{\left(e,\textrm{ts}\right)}=\hat{C}^{\left(\textrm{is},e\right)}\hat{C}^{\left(e,\textrm{ts}\right)}\end{equation}
(4) \begin{equation}\hat{C}^{\left(\textrm{is},\textrm{ts}\right)}=\hat{R}_{{n_{s}}}\!\left(\theta _{s}\right)=\hat{I}\cos \theta _{s}+\tilde{n}_{s}\sin \theta _{s}+\overline{n}_{s}{\overline{n}_{s}}^{t}\left(1-\cos \theta _{s}\right)\end{equation}

Here, $\hat{I}\in \mathfrak{R}^{3\times 3}$ is the identity matrix and $\tilde{n}_{s}\in \mathfrak{R}^{3\times 3}$ is the skew-symmetric matrix form of the column matrix $\overline{n}_{s}\in \mathfrak{R}^{3\times 1}$ . The steps for extracting the rotation axis $\vec {n}_{s}$ from Rodrigues’ equation are not given here, since the derivation is textbook material.
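For completeness, the textbook extraction can be sketched as follows, assuming NumPy and $0 \lt \theta _{s} \lt \pi$ (so that $\sin \theta _{s}\neq 0$):

```python
import numpy as np

def axis_angle(C):
    """Invert Rodrigues' formula: recover (axis, angle) from C = R_n(theta).

    The angle comes from the trace, trace(C) = 1 + 2 cos(theta); the axis
    comes from the skew-symmetric part, (C - C^T)/2 = n~ sin(theta).
    """
    theta = np.arccos(np.clip((np.trace(C) - 1.0) / 2.0, -1.0, 1.0))
    skew = (C - C.T) / (2.0 * np.sin(theta))
    n = np.array([skew[2, 1], skew[0, 2], skew[1, 0]])
    return n, theta
```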

The preferred rotation axis identification procedure for the up-down motion is the same, but this time the motion request issued to the surgeon is to move the endoscope upwards or downwards. The resultant of this procedure is the nominal rotation axis $\vec {n}_{u}$ for the up-down motion where $\vec {n}_{u}$ is a unit vector.

To assess whether the identified rotation axes are independent of each other, their orthogonality should be investigated by taking the dot product of the two vectors. If $\vec {n}_{s}\cdot \vec {n}_{u}=0$ , the two axes are perpendicular, and the identification of the preferred rotation axes is complete. Generally, however, such an ideal identification is not achieved. In this case, the common normal of the two vectors $\vec {n}_{s}$ and $\vec {n}_{u}$ , named $\vec {n}_{t}$ , is calculated via (5):

(5) \begin{equation}\vec {n}_{t}=\vec {n}_{s}\times \vec {n}_{u}\end{equation}

The angle between $\vec {n}_{s}$ and $\vec {n}_{u}$ about $\vec {n}_{t}$ is calculated using (6), and the residual from a right angle is distributed evenly between the two axes using (7) and (8) to satisfy the orthogonality of all axes and determine the adjusted rotation axes $\vec {n}_{s'}$ and $\vec {n}_{u'}$ :

(6) \begin{equation}\beta =\cos ^{-1} \!\left(\vec {n}_{s}\cdot \vec {n}_{u}\right)\end{equation}
(7) \begin{equation}\overline{n}_{s'}=\hat{R}_{{n_{t}}}\!\left(-\left(\pi /2-\beta \right)/2\right)\overline{n}_{s}\end{equation}
(8) \begin{equation}\overline{n}_{u'}=\hat{R}_{{n_{t}}}\!\left(+\left(\pi /2-\beta \right)/2\right)\overline{n}_{u}\end{equation}

These preferred axes of rotation define the preferred frame of measurement for each surgeon. This preferred frame’s orientation (identified as $pf$ ) with respect to the earth frame (the frame of measurement of the IMU) is defined as $\hat{C}^{\left(e,\textrm{pf}\right)}=\left[\begin{array}{ccc} \overline{n}_{s'} & \overline{n}_{u'} & \overline{n}_{t} \end{array}\right]$ .
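The adjustment and frame assembly can be sketched in NumPy, reusing Rodrigues’ formula from Eq. (4) for the rotations about $\vec {n}_{t}$ . In this sketch the half-angle is taken as the residual from a right angle, $(\pi /2-\beta )/2$ , so that axes that are already orthogonal are left unchanged; the function names are illustrative:

```python
import numpy as np

def rodrigues(n, theta):
    """Eq. (4): R_n(theta) = I cos t + n~ sin t + n n^T (1 - cos t)."""
    n = np.asarray(n, dtype=float)
    n_tilde = np.array([[0.0, -n[2], n[1]],
                        [n[2], 0.0, -n[0]],
                        [-n[1], n[0], 0.0]])
    return (np.eye(3) * np.cos(theta) + n_tilde * np.sin(theta)
            + np.outer(n, n) * (1.0 - np.cos(theta)))

def preferred_frame(n_s, n_u):
    """Adjust the identified axes to be orthogonal and assemble C^(e,pf)."""
    n_t = np.cross(n_s, n_u)
    n_t = n_t / np.linalg.norm(n_t)                          # Eq. (5), normalized
    beta = np.arccos(np.clip(np.dot(n_s, n_u), -1.0, 1.0))   # Eq. (6)
    delta = (np.pi / 2.0 - beta) / 2.0   # split the residual evenly
    n_s2 = rodrigues(n_t, -delta) @ n_s  # cf. Eq. (7)
    n_u2 = rodrigues(n_t, +delta) @ n_u  # cf. Eq. (8)
    return np.column_stack([n_s2, n_u2, n_t])
```

The returned matrix stacks the adjusted axes column-wise, matching the definition of $\hat{C}^{\left(e,\textrm{pf}\right)}$ above.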

4.3. Implementation of the identification procedure

To implement the identification procedure, a new simulator with a different hardware setup is developed to represent the surgical procedure of minimally invasive endoscopic endonasal pituitary tumor surgery with the NeuRoboScope system. The VR screen includes the surgical tools used during this surgical process. Figure 7 shows the VR screen of the simulator during the preferred motion identification procedure. The screenshot at the top is captured during the up-down motion, and the one at the bottom is captured during the side-to-side motion. The blue objects are generic surgical tools, which can be replaced by representations of specific tools. The background image is a photo taken during a surgical procedure, which can also be replaced depending on the location of the simulation. This simulator is not designed to train the surgeon to follow the surgical procedures and complete the surgery; its purpose is to identify the surgeon’s preferred frame for acquiring the motion demands issued to the NeuRoboScope system’s active arm. Also, during the actual surgery, the surgeon observes the views captured by the endoscope on a 2D screen. Therefore, characteristics of an ideal VR display, such as the level of immersion and realism, become less relevant for this specific purpose.

Figure 7. VR screen of the new simulator to identify the preferred frame of a neurosurgeon. The top screen capture indicates the up-down motion, and the bottom screen capture indicates the side-to-side motion.

The view changes as a result of the motion of the virtual endoscope modeled in the simulator. The motion of the endoscope is controlled by the virtual active arm. The virtual active arm receives the motion demands via the information flow described in Fig. 3.

The motion of the tools that appear on the VR screen is controlled via the motion information acquired from two haptic devices. Figure 8(a) shows how the haptic devices are placed relative to the skull mock-up. This layout is specifically designed so that (1) a surgeon approaching the patient from the patient’s right side has enough free space to hold both handles of the haptic devices, and (2) the haptic devices’ motion limits cover the range of motion of the surgeon’s hand while the surgical tools are inside the surgical workspace. The interaction forces with the surgical environment (concha, tip of the nose, etc.) can also be rendered by these haptic devices.

Figure 8. (a) Layout of the haptic devices with respect to the skull mock-up; (b) the new NeuRoboScope system simulator for the identification of the surgeon’s preferred axes of measurement.

The information exchange between the peripherals and the VR screen is summarized in Fig. 9. Aside from the haptic devices, the necessary input devices are the IMU-embedded wearable ring and the foot pedal of the NeuRoboScope system.

Figure 9. Peripherals and their information exchange with the VR (virtual endoscope system) of the new simulator for the NeuRoboScope system.

During the preferred frame identification tests, the surgeon’s position with respect to the developed simulator system is shown in Fig. 8(b). This posture mimics the way the main surgeon operates during an endoscopic endonasal pituitary tumor surgery. This simulator also successfully recapitulates bimanual movements of hands and fingers of the surgeon performing the endoscopic endonasal transsphenoidal pituitary surgery.

4.3. Processing the acquired motion for controlling the NeuRoboScope’s active arm

The NeuRoboScope system's new simulator is not developed solely to train a surgeon or to evaluate a surgeon's performance. It is also developed to personalize the way the NeuRoboScope robotic surgery system is used. This approach integrates the surgeon's needs and behavior into the surgical system's operation. The next step is to integrate this preferred measurement frame information into the control procedure of the NeuRoboScope system and, hence, into the control system replicated in the training simulator. Accordingly, the following procedure is defined to be integrated into the NeuRoboScope system's operational procedure.

During the operation of the NeuRoboScope system, the change of orientation information of the ring, which is received in terms of Euler angles about X, Y, and Z axes, has to be transformed into rotations measured about the preferred axes. The procedure is as follows:

  1. Measure the orientation of the ring after the foot pedal is pressed at each time step. Formulate it for step $i$ as a transformation matrix $\hat{C}^{\left(e,i\right)}$ :

    (9) \begin{equation}\hat{C}^{\left(e,i\right)}=\hat{R}_{x}\!\left(\theta _{x}^{i}\right)\hat{R}_{y}\left(\theta _{y}^{i}\right)\hat{R}_{z}\left(\theta _{z}^{i}\right)\end{equation}
  2. Measure the orientation of the ring at the next time step and record it as a transformation matrix $\hat{C}^{\left(e,i+1\right)}$ :

    (10) \begin{equation}\hat{C}^{\left(e,i+1\right)}=\hat{R}_{x}\!\left(\theta _{x}^{i+1}\right)\hat{R}_{y}\left(\theta _{y}^{i+1}\right)\hat{R}_{z}\left(\theta _{z}^{i+1}\right)\end{equation}
  3. Calculate the transformation matrix that defines the rotation from the ring frame measured at step $i$ to the ring frame measured at step $i+1$ :

    (11) \begin{equation}\hat{C}^{\left(i,i+1\right)}=\left(\hat{C}^{\left(e,i\right)}\right)^{-1}\!\hat{C}^{\left(e,i+1\right)}=\hat{C}^{\left(i,e\right)}\hat{C}^{\left(e,i+1\right)}\end{equation}
  4. Transform $\hat{C}^{\left(i,i+1\right)}$ into the preferred frame so that the Euler angles solution for the X-Y-Z Euler sequence can be applied to calculate the rotations about the preferred axes. In this calculation, the previously determined transformation matrix $\hat{C}^{\left(e,\textrm{pf}\right)}$, which is identified for each surgeon, is used:

    (12) \begin{equation}\hat{C}_\textrm{pf}^{\left(i,i+1\right)}=\hat{C}^{\left(\textrm{pf},e\right)}\hat{C}^{\left(i,i+1\right)}\hat{C}^{\left(e,\textrm{pf}\right)}=\hat{R}_{x}\left(\theta _{s}^{i+1}\right)\hat{R}_{y}\left(\theta _{u}^{i+1}\right)\hat{R}_{z}\left(\theta _{t}^{i+1}\right)\end{equation}

The calculated angles that define the rotation from step $i$ to step $i+1$ are issued as velocity demands ( $\omega _{\textrm{pitch}},\omega _{\textrm{roll}}, v_{\textrm{dolly}}$ ) to the endoscope robot. Here, at step $i+1$ , $\omega _{\textrm{pitch}}^{i+1}=\theta _{s}^{i+1}+\theta _{s}^{i}$ is the angular speed demand issued for sideways motion, $\omega _{\textrm{roll}}^{i+1}=\theta _{u}^{i+1}+\theta _{u}^{i}$ is the angular speed demand issued for up-down motion, and $v_{\textrm{dolly}}^{i+1}=\theta _{t}^{i+1}+\theta _{t}^{i}$ is the speed demand issued for translational motion. These are the speed inputs issued to the endoscope robot as defined in Fig. 3.
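The four steps above can be sketched in code as follows. This is a minimal illustration, not the NeuRoboScope software: `C_e_pf` stands for the identified $\hat{C}^{\left(e,\textrm{pf}\right)}$ matrix, and all function names are hypothetical. The inverse of each rotation matrix is taken as its transpose, consistent with Eq. (11).

```python
# Sketch of Eqs. (9)-(12): express the ring's incremental rotation
# in the surgeon's preferred frame. Illustrative names, not from the
# actual NeuRoboScope codebase.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def ring_orientation(tx, ty, tz):
    # Eqs. (9)/(10): C^(e,i) = Rx(theta_x) Ry(theta_y) Rz(theta_z)
    return rot_x(tx) @ rot_y(ty) @ rot_z(tz)

def euler_xyz(C):
    # Extract X-Y-Z Euler angles from C = Rx(s) Ry(u) Rz(t)
    u = np.arcsin(C[0, 2])
    s = np.arctan2(-C[1, 2], C[2, 2])
    t = np.arctan2(-C[0, 1], C[0, 0])
    return s, u, t

def preferred_frame_increments(C_i, C_ip1, C_e_pf):
    # Eq. (11): rotation from the ring frame at step i to step i+1
    # (transpose equals inverse for rotation matrices)
    C_rel = C_i.T @ C_ip1
    # Eq. (12): conjugate by C^(e,pf) to express it in the preferred frame
    C_pf = C_e_pf.T @ C_rel @ C_e_pf
    return euler_xyz(C_pf)  # (theta_s, theta_u, theta_t)
```

The returned triplet corresponds to the per-step angles that are converted into the pitch, roll, and dolly speed demands described above.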

Although the main aim of the simulator is the personalization of the NeuRoboScope system for each surgeon, as shown via the above-mentioned procedure, this new simulator can also be modified for training and evaluating surgeons in using the teleoperation system of the NeuRoboScope system. Figure 10 shows a photo of a surgeon during a training session on the use of the teleoperation system. The surgical tools, skull mock-up, IMU-embedded ring, and the changing view of the VR screen can be observed in this figure. In this training session, the haptic devices were not used; instead, actual surgical tools were handled by the surgeon working on the skull mock-up.

Figure 10. A surgeon’s operation during a training session.

5. Initial tests with non-surgeon users

5.1. Participants

Ten participants were recruited as test subjects (three females, age: 20–44 years, M = 28.30, SD = 6.22) and gave written informed consent for their participation. Task completion duration and the number of foot pedal pressings were recorded for all test subjects. The study was approved by the ethics committee of Izmir Institute of Technology.

5.2. Test procedure

A test scenario is generated to assess how straightforward the use of the NeuRoboScope system becomes when personalization is used. The test screen is shown in Fig. 11. The crossed arrows in the middle indicate the mid-position of the endoscope view. The aim is to bring this mid-position to the circle that appears on the screen, which is called the target position. Since depth perception is not available (it is also not available in the actual surgery), a color code is used to indicate the depth of the target. If the circle is red, the user should zoom out from the scene by retracting the telescope towards the pivot point, and if the circle is blue, the user should zoom in by moving the telescope away from the pivot point.
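The color code for depth can be sketched as a small mapping. This is an illustrative reconstruction, assuming a target is described by its on-screen position and a signed depth offset along the telescope axis; the `Target` type, the sign convention, and the function names are assumptions, not taken from the test software.

```python
# Illustrative sketch of the test scenario's depth cue: red asks the
# user to zoom out (retract towards the pivot), blue asks the user to
# zoom in (advance away from the pivot). Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Target:
    x: float      # horizontal position on the test screen
    y: float      # vertical position on the test screen
    depth: float  # signed offset along the telescope axis (assumed: negative = closer to pivot)

def circle_color(target: Target) -> str:
    # Color cue shown to the user for the target's depth
    return "red" if target.depth < 0 else "blue"

def dolly_direction(target: Target) -> int:
    # -1: retract the telescope (zoom out), +1: advance it (zoom in)
    return -1 if target.depth < 0 else +1
```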

Figure 11. Screen captures during the tests.

Each test subject used a non-personalized mode and the personalized mode to compare the effect of personalization. Ten repetitions were made with each mode by all test subjects. In the non-personalized mode, the fingers keep a firm contact with the tool, and the Euler angle measured about the Z axis is not used. This non-personalized mode was chosen because (1) the motion measured as the Euler angle about the Z axis is very hard for the users to produce, and (2) during the surgery, the surgeons need a firm grasp of the surgical tool.

5.3. Test results

The results are presented in terms of task completion durations (Fig. 12) and the number of pedal pressings (Fig. 13).

Figure 12. Average task completion durations and their standard deviations throughout the trials.

Figure 13. Average number of pedal pressings and their standard deviations throughout the trials.

The average task completion duration of the non-personalized mode is calculated to be about 100 s in the first trial, and after the fifth trial, it converged to 40 s. The average task completion duration of the personalized mode is calculated to be about 60 s in the first trial, and after the third trial, it converged to 30 s.

During the tests with the non-personalized mode, the average number of pedal pressings decreased from 38 in the first trial to about 15 after the fifth trial. With the personalized mode, the average number of pedal pressings was measured to be consistently about 10 after the first trial.

6. Conclusions

This study describes a systematic procedure to personalize the use of a surgical robotic system called the NeuRoboScope system, which is specifically designed for minimally invasive endoscopic pituitary and skull base surgeries. The procedure identifies the surgeon's preferred axes of motion, which formulate a preferred motion measurement frame for each surgeon. The measured motion is then used in issuing driving commands for the active arm of the NeuRoboScope system, which handles the endoscope. As a result of this procedure, the NeuRoboScope system can be personalized for each surgeon depending on their motion behavior.

Initial tests with non-surgeons indicate that the personalized mode is more intuitive than the non-personalized mode. Even after a couple of trials, the users were able to control the slave system efficiently. This efficient use also decreased the users' effort, as reflected in the reduced number of pedal pressings.

Financial support

This project was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) (Grant Numbers: 115E725 and 115E726).

Ethical approval

This study was approved by the Izmir High Technology Institute Science and Engineering Sciences Scientific Research and Publication Ethics Committee (Reference number: 12.11.2021-E.44460; Date of approval: 12.11.2021).

Conflicts of interest

The authors declare no conflicts of interest.

Author contributions

Tarık Büyüköztekin worked on the development of the simulator and collected the data from test subjects; Mehmet İsmet Can Dede worked on the theory, processed the data, and wrote the article; Şahin Hanalioğlu, Mustafa Berker, and İlkay Işıkay contributed to the development of the simulator and the writing of the article.

References

Yiannakopoulou, E., Nikiteas, N., Perrea, D. and Tsigris, C., “Virtual reality simulators and training in laparoscopic surgery,” Int. J. Surg. 13, 60–64 (2015).
Okamura, A. M., “Haptics in Robot-Assisted Minimally Invasive Surgery,” In: The Encyclopedia of Medical Robotics: Volume 1 Minimally Invasive Surgical Robotics (World Scientific, Singapore, 2019) pp. 317–339.
Perrenot, C., Perez, M., Tran, N., Jehl, J-P., Felblinger, J., Bresler, L. and Hubert, J., “The virtual reality simulator dV-Trainer® is a valid assessment tool for robotic surgical skills,” Surg. Endosc. 26(9), 2587–2593 (2012).
Dede, M. I. C., Kiper, G., Ayav, T., Özdemirel, B., Tatlıcıoğlu, E., Hanalioglu, S., Işıkay, İ. and Berker, M., “Human-robot interfaces of the NeuRoboScope: A minimally invasive endoscopic pituitary tumor surgery robotic assistance system,” J. Med. Device 15(1), 011106 (2021).
Dai, X., Zhao, B., Zhao, S., He, Y., Sun, Y., Gao, P., Hu, Y. and Zhang, J., “An Endoscope Holder with Automatic Tracking Feature for Nasal Surgery,” In: 2016 IEEE International Conference on Information and Automation (ICIA 2016), Ningbo, China (2016) pp. 1–6.
Chumnanvej, S., Pillai, B. M., Chalongwongse, S. and Suthakorn, J., “Endonasal endoscopic transsphenoidal approach robot prototype: A cadaveric trial,” Asian J. Surg. 44(1), 345–351 (2021).
Huang, Y., Lai, W., Cao, L., Burdet, E. and Phee, S. J., “Design and evaluation of a foot-controlled robotic system for endoscopic surgery,” IEEE Robot. Autom. Lett. 6(2), 2469–2476 (2021).
Pisla, D., Gherman, B., Plitea, N., Gyurka, B., Vaida, C., Vlad, L., Grauri, F., Radu, C., Suciu, M., Szilaghi, A., Stoica, A., “PARASURG hybrid parallel robot for minimally invasive surgery,” Chirurgia 106(5), 619–625 (2011).
Remirez, A. A., Rox, M. F., Bruns, T. L., Russell, P. T. and Webster III, R. J., “A Teleoperated Surgical Robot System,” In: Neurosurgical Robotics (Humana, New York, NY, 2021) pp. 49–61.
Muñoz, V. F., Garcia-Morales, I., Fraile-Marinero, J. C., Perez-Turiel, J., Muñoz-Garcia, A., Bauzano, E., Rivas-Blanco, I., Sabater-Navarro, J. M. and de la Fuente, E., “Collaborative robotic assistant platform for endonasal surgery: Preliminary in-vitro trials,” Sensors 21(7), 2320 (2021).
Colan, J., Nakanishi, J., Aoyama, T. and Hasegawa, Y., “A cooperative human-robot interface for constrained manipulation in robot-assisted endonasal surgery,” Appl. Sci. 10(14), 4809 (2020).
Ogiwara, T., Goto, T., Nagm, A. and Hongo, K., “Endoscopic endonasal transsphenoidal surgery using the iArmS operation support robot: Initial experience in 43 patients,” Neurosurg. Focus 42(5), E10 (2017).
He, Y., Deng, Z. and Zhang, J., “Design and voice-based control of a nasal endoscopic surgical robot,” CAAI Trans. Intell. Technol. 6(1), 123–131 (2021).
He, Y., Zhang, P., Qi, X., Zhao, B., Li, S. and Hu, Y., “Endoscopic path planning in robot-assisted endoscopic nasal surgery,” IEEE Access 8, 17039–17048 (2020).
He, Y., Hu, Y., Zhang, P., Zhao, B., Qi, X. and Zhang, J., “Human-robot cooperative control based on virtual fixture in robot-assisted endoscopic sinus surgery,” Appl. Sci. 9(8), 1659 (2019).
Okuda, H., Okamoto, J., Takumi, Y., Kakehata, S. and Muragaki, Y., “The iArmS robotic armrest prolongs endoscope lens-wiping intervals in endoscopic sinus surgery,” Surg. Innov. 27(5), 515–522 (2020).
Ateş, G., Majani, R. and Dede, M. I. C., “Design of a Teleoperation Scheme with a Wearable Master for Minimally Invasive Surgery,” In: New Trends in Medical and Service Robotics (Springer, Cham, 2019) pp. 45–53.
Yaşır, A., Kiper, G. and Dede, M. C., “Kinematic design of a non-parasitic 2R1T parallel mechanism with remote center of motion to be used in minimally invasive surgery applications,” Mech. Mach. Theory 153, 104013 (2020).
Figure 1. The NeuRoboScope system’s slave system while it is backdriven by the human operator as the system is handled by the human operator at the endoscope. The system is composed of the passive arm and the active arm, and the master system which is composed of the IMU-embedded ring and a foot pedal (the foot pedal is not shown on this photo).

Figure 2. A screenshot indicating the master-slave teleoperation mode. A surgical tool, aspirator, is handled by the hand wearing the IMU-embedded ring. Robot and the surgeon cooperate on the patient.

Figure 3. Information exchange between the master system (ring and foot pedal) and the slave system (endoscope robot) of the NeuRoboScope.

Figure 4. Simulator’s graphical user interface and the master system’s ring.

Figure 5. Active arm’s (a) indicative computer-aided design (CAD) model (b) kinematic sketch [18] with motion axes indicated at the pivot point.

Figure 6. Change of the IMU-fixed frame (X-Y-Z axes marked with red) during the initial and terminal states of the identification process. The identified motion axes are indicated with the purple arrows.