Soft-NeuroAdapt: A 3-DOF Neuro-Adaptive Patient Pose Correction System for Frameless and Maskless Cancer Radiotherapy

Olalekan Ogunmolu¹, Adwait Kulkarni², Yonas Tadesse², Xuejun Gu³, Steve Jiang³, and Nicholas Gans¹

Abstract — Precise patient positioning is fundamental to the successful treatment of malignant tumors in head and neck cancers. Errors in patient positioning have been known to damage critical organs and cause complications. To better address issues of patient positioning and motion, we introduce a 3-DOF neuro-adaptive soft robot, called Soft-NeuroAdapt, to correct deviations along three axes. The robot consists of inflatable air bladders that adaptively control head deviations from a target pose while ensuring patient safety and comfort. The adaptive-neuro controller combines a state feedback component, a feedforward regulator, and a neural network that ensures correct adaptation. States are measured by a 3D vision system. We validate Soft-NeuroAdapt on a 3D-printed head-and-neck dummy, and demonstrate that the controller provides adaptive actuation that compensates for intrafractional deviations in patient positioning.

I. INTRODUCTION

Radiation-based treatment of head and neck (H&N) cancers often involves intensity-modulated radiotherapy (IMRT), which modulates radiation dosage and shapes the treatment beam to the precise size of tumor cells. IMRT carefully targets organs while minimizing toxicity and exposure of organs at risk. Used alongside image-guided radiotherapy (IGRT), IMRT assures precision of dosage targets: a patient is positioned on a treatment table after dosage planning by a physician, then a rigid robotic couch is used for motion alignment during treatment.
While conventional RT uses rigid immobilization techniques such as masks, frames, arm positioning devices or vacuum mattresses [1], IGRT methods employ ultrasound, 3D imaging systems, 2D X-ray devices and/or computed tomography to instantly amend positioning errors and improve the precision of daily radiotherapy fractions. Precise and repeatable patient positioning is therefore crucial in RT treatments when escalation of dose in a target volume is necessary and exposure of adjacent organs is to be minimized. Because it does not require rigid masks and body fixators, IGRT is more comfortable for the patient as well as more accurate with the aid of highly accurate localization systems. Sterzing [2], after an extensive review of IGRT methods, found IGRT to be more precise and safer than conventional radiotherapy. But setup errors (interfractional) or patient motion (intrafractional) errors often need to be accounted for during RT. While intrafractional errors can be minimized by highlighting the importance of voluntary stillness to the patient, suitable means of immobilization and adaptive positioning are necessary when the patient moves involuntarily or sleeps. This would assure precise and accurate targeting of critical organs while keeping the patient comfortable during treatment.

*This work was supported by the Radiation Oncology Department, UT Southwestern, Dallas, Texas, USA
¹Olalekan Ogunmolu and Nicholas Gans are with the Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA {olalekan.ogunmolu, ngans}@utdallas.edu
²Adwait Kulkarni and Yonas Tadesse are with the Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA {axf151530, ytt110030}@utdallas.edu
³Xuejun Gu and Steve Jiang are with the Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas TX 75390, USA {Xuejun.Gu, Steve.Jiang}@utsouthwestern.edu
Frameless and Maskless (F&M) IGRT is promising because it minimizes invasiveness and reduces setup times while comfortably positioning the patient. We present Soft-NeuroAdapt, a set of three soft actuators that employ a neuro-controller to adaptively compensate for positioning deviations. Soft robot systems are elastomeric, active and reactive nonlinear compliant systems, suitable to the task of human-machine interaction. They can dynamically change their stiffness and activate their surfaces to provide a desired motion on a human body part. Their characteristic morphological computation properties [3] make them suitable for autonomous control, including inflation, deflation and reaching. Because they exhibit highly nonlinear dynamics, however, controlling them for precise actuation is complicated. While a metric-based modeling approach may work in highly structured environments, system parameters and dynamics change with different patients' head and upper-torso anatomy, such that precise control is best achieved with a learning-based method. But reliable high-parameter-space adaptive-control methods rely on open-loop adaptation mechanisms based on extensive design techniques. Our goal is to derive a learning-based controller, going beyond task-specific, expert-driven methods, in order to adaptively generalize to new control systems. Following our recent investigative studies [4], [5] on 1-DoF soft-robot compensation systems, we present a 3-DoF soft robot system that addresses 3-DoF involuntary intrafractional motions of the head and neck (H&N) region during F&M RT. This work presents a neuro-dynamics estimator that learns the underlying system model, then adapts to the model in its control law to provide bounded tracking of a set trajectory. The controller corrects intrafractional (involuntary) patient motions along three defined axes, namely head pitch, roll and elevation, for a patient lying in a supine position on a table.
We use three inflatable air bladders (IABs), actuated through a system of inlet and outlet solenoid proportional valves. The controller uses state feedback to provide bounded stability of states, a reference trajectory component to provide command tracking, and a neural-network component to adaptively converge states that start outside of the sphere of stability into the region of stability. We perform head pose tracking in the 3D space of a stereo camera, and we conduct experiments to validate the proposed bio-pneumatic system and controller.

This paper is organized as follows: Section II presents related work, Section III describes the hardware setup, Section IV describes the vision segmentation algorithm and 3-DOF pose estimation, while Section V describes the adaptive-neuro control algorithm. We discuss experiments in Section VI and provide discussions and future work in Section VII.

II. RELATED WORK

Studies on the feasibility of non-rigid immobilization devices during cranial stereotactic radiosurgery (SRS) fall largely under inspection-based non-rigid immobilization devices and correction-based non-rigid immobilization systems.

Inspection-based Systems: In [6], Cervino et al. used expandable foams that fit a patient's head while leaving the face open. Patient set-up was performed using computed tomographic (CT) scans before treatment. They found an average treatment time of 26 minutes, with patients who slept during experiments taking longer as a result of involuntary movements. In [7], the authors evaluated the accuracy of a head mold that minimally immobilized a patient's H&N region while leaving the face free, in a controlled positioning experiment with volunteers. A 3D surface reconstruction imaging system was used to monitor patients' position, so that treatment was stopped whenever motion exceeded a defined threshold.
While the monitoring system showed great clinical accuracy, it required high patient cooperation to achieve immobilization. Using a head mold, an open face-mask, and a mouthpiece, Li et al. [8] quantified the residual rotation and positioning errors in an open-loop setting to ascertain the reduction in setup time during patient positioning. They reported that the head mold and open face mask system could restrict head motions to within 0.6° ± 0.3°, with the time spent on in-situ motion corrections limited to 2.7 ± 1.0 min.

Correction-based Systems: In [9], a 4-DOF motion correction system was used for real-time motion compensation of a phantom and in human trials. An optical sensing system tracked the pose of the head, and a decoupling control law regulated the xyz-translational and pitch motions of the head. They reported target accuracy of 0.5 mm and 0.2° with the decoupling controller for each axis of the 4-DOF motion. The system relied on stepper motor actuators and other electro-mechanical (EM) components positioned under the patient's H&N region. These devices have the undesirable effect of attenuating treatment beams during RT, and as a result are not recommended during clinical procedures. The presence of the EM stages can significantly reduce the efficiency of the incident radiation targeted to tumor cells.

Fig. 1: Hardware Description

To address these concerns, we proposed soft-robot actuators for H&N motion compensation during F&M cancer radiotherapy [4], [5], actuated by pneumatic valves which are kept away from the head. Plastic hoses and silicone connectors convey air into and out of the soft robots that lie beneath the head, so that our motion correction system does not interfere with planned dosage from the treatment beam.
In our evaluations, we mounted the pneumatic valves on a movable plywood board and connected them to the soft robots with long silicone tubes, so that the medical physicist can separate EM components from the patient as deemed necessary during treatment. While our previous works related to controlling the 1-DoF motion of the head along the z-axis, here we address the 3-DOF control of the head using a surface-reconstructing sensor and control the pose of the head to follow set trajectories in real time.

III. HARDWARE DESIGN OVERVIEW

The actuation mechanism consists of three custom-designed inflatable air bladders (IABs) made from elastomeric polymers. The base IAB (beneath the head's posterior) is 180 mm × 280 mm when flat and inflates to a maximum width of ∼75 mm, while the other two are 180 mm × 140 mm in size. The IABs consist of inflatable rubber encased in a breathable foam pad for comfort, modified to be the size of an average adult male; they have separate inlet and outlet openings, connected with crack-resistant polyethylene tubing (1/8" ID and 1/4" OD) that sustains pressures of up to 320 psi. Each hose leads to a proportional solenoid valve, which is in turn connected to rectangular manifolds (one manifold for the inlet supply and the second for the outlet supply). We use six Dakota Instruments EM valves (Model PSV0105, Orangeburg, NY, USA) to supply proportional torques to the soft actuators. A regulated air canister supplied constant air pressure at 15 psi to the inlet-air-conveying manifold, while a suction pump supplied vacuum pressure at 12 psi to the valves that removed air from the bladders. The rate of air flow into or out of each bladder was controlled via custom-built voltage regulating circuits, which received PWM signals from a National Instruments (NI) myRIO microcontroller. We 3D-printed a custom manikin head, measuring 155 × 240 × 200 mm (W × L × D) and corresponding to between 50% and 75% of the weight of a typical adult male head, or 99% of a typical adult female head; it weighs 5 kg, above average but reasonable. The head was fitted with a ball joint in the neck to replicate the motion of the human head about the neck. An Ensenso 3D camera is mounted at approximately 45° above the head to measure the pose of the head in real time. All vision processing, system modeling and control laws were computed on a CORSAIR PC. We exchange the neuro-control and sensor signals via the publish-subscribe IPC of the ROS middleware installed on the PC. Adaptive control laws were sent via UDP packets to the RIO microcontroller. The system setup is shown in Fig. 1.

Fig. 2: Head Coordinate System

The reference frame of the head is described as follows: the pitch/x-axis points from the left ear out of the right ear, the yaw/z-axis points from the back of the head through the forehead, and the roll/y-axis goes from the neck through the top of the head. The left and right bladders control the roll angles/x-axis motions, while the bladder underneath the head, henceforth referred to as the base bladder, controls the pitch angles and z-axis motions. The reference frame is illustrated in Fig. 2.

IV. VISION-BASED POSE ESTIMATION

The model head lies in a supine position above a planar table, as shown in Fig. 1. We employed a 3D camera from Ensenso GmbH (model N35) to reconstruct the surface image and measure head pose. The N35 camera captures multiple image pairs during exposure; each image pair is made up of different patterns, controlled by piezo-actuators. A stereo-matching algorithm gathers the information from all image pairs after capture to produce a high-resolution point cloud of the scene [10]. We mounted the 3D sensor such that its lens faced the head at approximately 45° from the vertical during experiments.
Our goal is to control the motion of the head about three axes, namely the z, pitch and roll axes as described in Section III; this section presents how we extract representable features from the face.

A. Face Segmentation

The dense point cloud of the scene has (i) a marked jump in rendered points along the z-axis of the camera, because of the camera's single view angle; (ii) scene clutter and the lack of multiple camera view angles do not affect the representation of the face; (iii) thus, through spatial decomposition of the scene, we can separate the face from the scene. However, the point cloud is computed from monochromatic IR image pairs (with no texture information), making morphological operations difficult; and due to the multiple image pairs used in 3D reconstruction to generate a highly accurate measurement, the camera is limited to a maximum frame rate of 10 Hz.

Inspired by Rusu's work [11], we divide the segmentation problem into stages, with each stage segmenting out candidates that do not belong to the object we want to identify (the frontal face) in the scene. Our engineering philosophy in the segmentation phase is inspired by spatial decomposition methods that determine subdivisions and boundaries to allow retrieval of the data we want, given a measure of proximity. In this case, we know that the location of the table cannot exceed a given height during experiments, and the camera's position is fixed while the head moves based on the bladders' actuation. Separating objects that represent planar 2D geometric shapes from the scene therefore simplifies the face segmentation algorithm. By finding and removing objects that fit primitive geometric shapes from the scene, clustering of the remaining objects yields the face of the patient in the scene.
We fit a simplified 2D planar object to the scene, such that points p_i ∈ P that support a 2D plane can be found within a tolerance defined by the inequality 0 ≤ |d| ≤ |d_max|, where |d_max| represents a user-defined segmentation threshold [11]. We proceed as follows:

• The point cloud of the scene was acquired from the computed disparity map of the two raw camera images;
• To minimize sensor noise whilst preserving the 3D representation, the acquired point cloud was downsampled using a SAmple Consensus (SAC)-based robust moving least squares (RMLS) algorithm [11, § 6];
• We then searched for the edges of 2D planar regions in the scene with Maximum Likelihood SAmple Consensus (MLESAC) [12], and we bound the resulting plane indices by computing their 2D convex hull;
• A model fitting stage extrudes the computed hull (of objects lying above the 2D planar region) into a prism model based on a defined L1 Manhattan distance; this gives the points whose height threshold is about the region of the face in the scene [13];
• We then cluster the remaining points based on a heuristically determined L2 distance between points remaining within the polygonal plane. The largest cluster gives us the face.

The result of the resampling algorithm is shown in the top-right image of Fig. 3. To simplify the complexity of the planar structure in the scene, the table is modeled as a 2D planar geometric primitive, so that finding points that fit a defined model hypothesis involves estimating a single distance to the frontal plane of the table surface, rather than multiple points if the model were represented with points. Searching for horizontal planes that are perpendicular to the z-axis of the head is carried out using the maximum likelihood SAC [12] algorithm implemented in the PCL Library [14] to generate model hypotheses. The plane segmentation algorithm is given in Table I.

Fig. 3: [Top-left]: Dense point cloud of the experimental setup scene. [Top-right]: Downsampled cluttered cloud of the left scene. [Bottom-left]: Using RANSAC, we search for 2D plane candidates in the scene and compute the convex hull of found planar regions. We then extrude point indices within the hull into a prismatic polygonal model to give the face region. [Bottom-right]: An additional step clusters the resultant cloud based on a Euclidean distance. The largest cluster is taken to be the face.

TABLE I: Plane Segmentation Algorithm
1. for i = 1 to N do
2.   sample non-collinear points {p_i, p_j, p_k} from P
3.   calculate the model coefficients ax + by + cz = d
4.   find distances from all p ∈ P to the plane (a, b, c, d)
5.   store points p* ∈ P that satisfy the model hypothesis, 0 ≤ |d| ≤ |d_max|
6. return the maximum of the stored points p*

The plane segmentation process is run once. Once the plane model is found, its indices and those of objects lying above it are separated and stored in separate data structures. Every subsequent iteration consists of (i) computing the 2D convex hull of point indices of objects above the table using the Qhull library¹, (ii) using a pre-defined prismatic model candidate to hold extruded points to the approximate facial height above the table, and (iii) separating the face from every other point in the resulting cloud through the Euclidean clustering (EC) method of [15]. A distinct point cluster is defined if the points in cluster C_i = {p_i ∈ P} and cluster C_j = {p_j ∈ P} satisfy the L2-distance threshold

x ≤ min ‖p_i − p_j‖₂.

Finding the face in the scene after carrying out the EC algorithm is a question of finding the largest index in the list C. This takes O(n) (linear) time for n clusters. The face segmentation results are presented in Fig. 3.
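The loop in Table I can be sketched in a few lines of NumPy. This is an illustrative RANSAC-style plane fit on a synthetic scene (a flat "table" with a cluster of points above it standing in for the face); our actual pipeline uses the MLESAC implementation in PCL, so treat the sampling loop, iteration count, and data below as stand-ins.

```python
import numpy as np

def ransac_plane(P, d_max, n_iter=200, rng=None):
    """Table I sketch: sample 3 non-collinear points, form the plane
    ax + by + cz = d, and keep the largest inlier set within d_max."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(n_iter):
        i, j, k = rng.choice(len(P), size=3, replace=False)
        n = np.cross(P[j] - P[i], P[k] - P[i])
        norm = np.linalg.norm(n)
        if norm < 1e-9:              # collinear sample: reject and resample
            continue
        n = n / norm                 # unit normal (a, b, c)
        d = n @ P[i]
        dist = np.abs(P @ n - d)     # point-to-plane distances
        inliers = dist <= d_max
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# synthetic scene: a table plane at z = 0 plus points above it (the "face")
rng = np.random.default_rng(0)
table = np.c_[rng.uniform(0, 1, (300, 2)), np.zeros(300)]
face = np.c_[rng.uniform(0.4, 0.6, (60, 2)), rng.uniform(0.1, 0.2, 60)]
scene = np.vstack([table, face])
plane = ransac_plane(scene, d_max=0.01, rng=1)
above = scene[~plane]                # candidates above the table plane
```

The points flagged `above` are what the subsequent hull-extrusion and clustering stages would then reduce to the face.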
We then compute the Cartesian position of the face with respect to the camera origin by taking the center of mass of the segmented facial region (bottom-right image of Fig. 3). This is obtained by calculating the mean value of all the points in the resulting cloud (≈600 points on average).

¹The Qhull library: http://www.qhull.org/

B. Head Pose Estimation

With the facial point cloud segmented, we define three points on the head. Our goal is to compute the optimal translation and rotation of the head from a model point set X = {x_i} to a measured point set P = {p_i}, where N_x = N_p = 3 and the point x_i ∈ X has the same index as p_i ∈ P. Following the approach of Besl and McKay in [16], we compute the cross-covariance matrix of P and X as Σ_px, extract the cyclic components of its skew-symmetric part as Δ, and use them to form the symmetric 4 × 4 matrix Q(Σ_px) as follows:

Q(Σ_px) = [ tr(Σ_px)        Δ^T
            Δ      Σ_px + Σ_px^T − tr(Σ_px) I_3 ].   (1)

The unit eigenvector q_R that corresponds to the maximum eigenvalue of Q(Σ_px) is selected as the optimal rotation quaternion; we find the optimal translation vector as

q_T = μ_x − R(q_R) μ_p,   (2)

where μ_x and μ_p are the means of point sets X and P respectively. Obtaining the roll, pitch and yaw angles from q_R is trivial, and the pose of the face is described by the tuple [q_T, q_R] = {x, y, z, θ, φ, ψ} with respect to the world frame. Given the 3-DOF setup, we choose to control three states of the head: z, θ, φ (i.e., z, roll, and pitch).

V. CONTROL DESIGN

Our control philosophy is governed by the state feedback and feedforward regulation problem, with an adaptation mechanism based on an estimation of the head pose given a priori information about the system's states and past control actions.
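Before turning to control, the pose computation of (1)-(2) can be exercised on synthetic correspondences. The sketch below builds Q(Σ_px), takes its maximum eigenvector as the rotation quaternion, and recovers the transform taking the measured set P onto the model set X; the 30° rotation, translation, and five random points are illustrative test data, not our measurements.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3]])

def besl_mckay_pose(P, X):
    """Optimal R, t aligning measured points P with model points X
    (matching indices), via the Q(Sigma_px) matrix of eq. (1)."""
    mu_p, mu_x = P.mean(0), X.mean(0)
    S = (P - mu_p).T @ (X - mu_x) / len(P)   # cross-covariance Sigma_px
    Delta = np.array([S[1, 2] - S[2, 1],     # cyclic components of
                      S[2, 0] - S[0, 2],     # the skew-symmetric part
                      S[0, 1] - S[1, 0]])
    Q = np.zeros((4, 4))
    Q[0, 0] = np.trace(S)
    Q[0, 1:] = Q[1:, 0] = Delta
    Q[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, v = np.linalg.eigh(Q)
    q_R = v[:, np.argmax(w)]         # unit eigenvector of max eigenvalue
    R = quat_to_rot(q_R)
    return R, mu_x - R @ mu_p        # translation per eq. (2)

# synthetic check: 30 deg yaw plus a translation
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
P = (X - t_true) @ R_true            # rows p_i such that x_i = R p_i + t
R_est, t_est = besl_mckay_pose(P, X)
```

With exact correspondences the maximum-eigenvector quaternion reproduces the rotation and translation to machine precision; the q → −q sign ambiguity is harmless since both give the same R.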
We propose an adaptive control strategy in a Bayesian setting which, given an initial prior distribution of controls and 3-DOF head pose, minimizes a cost criterion as the expected value of control laws that will yield a future desired head pose. We consider the PWM voltages that power the valves as the input u, the head pose as the output y, and an unknown disturbance w(k). We first describe the nonlinear function approximator model f̂(u(k − d), y(k), w(k)), which is constructed from memory-based input and output experimental data

Z^N = {u(k), u(k−1), . . . , u(k−n_u), y(k), y(k−1), . . . , y(k−n_y)}   (3)

that satisfy Lipschitz continuity. Equation (3) implies that an input u(·) at time k − d produces an output y(k) d time instants later. The next section describes how we formulate the class of minimum-error-variance controllers that predict the effect of actions u(·) on states y(·) using a self-tuning regulator.

Fig. 4: Function Approximator Model

A. Adaptive Neuro-Control Formulation

Following our previous approach in [5, § IV.B], we fix a persistently exciting input signal u_ex ∈ L₂ ∩ L∞ to excite the nonlinear modes of the system. We then parameterized the system with a neural network with a sufficient number of neurons. The neural network (NN) provided information on the changing parameters of the system during control trials. The adjustment mechanism is computed from inverse Lyapunov analysis, where we choose adaptive laws that guarantee a nonpositive time derivative of the Lyapunov function candidate when evaluated along the trajectories of the error dynamics. Our contribution is the approximation of the nonlinear system by a long short-term memory (LSTM) network [17], equipped with an adequate number of neurons in its hidden layers. We parameterized the last layer of the network with a fully connected layer that outputs control torques to the valves.
The neural network can be seen as a memory-based model that remembers effective controls for the adaptation mechanism in the presence of uncertainties and external disturbance. The neural network is shown in Fig. 4. Depending on the region of attraction of the system the network is approximating, it parameterizes the nonlinear dynamical system f(·) and maps the parameterized model to appropriate valve torques. Additional feedforward and feedback terms in the global controller (introduced shortly) guarantee system stability and robustness to uncertainties. Therefore, the global controller keeps the states of the system bounded under closed-loop dynamics, ensures convergence to desired trajectories from states that are initialized outside the domain of attraction, and guarantees robust reference tracking in the presence of non-parametric uncertainties.

We consider the multi-input, multi-output (MIMO) adjustable system

ẏ = Ay + BΛ(u − f(y, u)) + w(k),   (4)

where y ∈ R^n and u ∈ R^m are the known output and input vectors; A ∈ R^{n×n} and Λ ∈ R^{m×m} are unknown matrices; B ∈ R^{n×m} and sgn(Λ) are known; and w(k) ∈ R^n is a bounded, time-varying, unknown disturbance, upper-bounded by a fixed positive scalar w_max.
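To make (3)-(4) concrete, the following sketch rolls out a toy forward-Euler discretization of (4) under a persistently exciting input and logs the input/output pairs that would populate a dataset like Z^N. Every numerical value here (A, B, Λ, the stand-in nonlinearity f, the noise level) is illustrative and not an identified parameter of our system.

```python
import numpy as np

n, m, dt = 3, 6, 0.01                   # pose states, valve inputs, step
rng = np.random.default_rng(0)
A = -np.eye(n)                          # stable stand-in dynamics
B = 0.1 * rng.normal(size=(n, m))
Lam = np.eye(m)
def f(y, u):
    """Toy unknown nonlinearity (the real f is unknown and learned)."""
    return 0.05 * np.tanh(y) @ np.ones((n, m))

y = np.zeros(n)
Z = []                                  # memory-based input/output pairs
for k in range(500):
    u = np.sin(0.2 * k + np.arange(m))  # persistently exciting input
    w = 0.001 * rng.normal(size=n)      # bounded disturbance w(k)
    ydot = A @ y + B @ (Lam @ (u - f(y, u))) + w
    y = y + dt * ydot                   # forward-Euler step of (4)
    Z.append((u.copy(), y.copy()))
```

A rollout like this is only meant to show the shape of the data: pairs of delayed inputs and outputs that the function approximator is trained on.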
We make the following assumptions:
• a dynamic RNN with N neurons, φ(y), exists that maps from a compact input space U ⊃ u to an output space y ⊂ Y on the Lebesgue integrable functions on the closed interval [0, T] or the open-ended interval [0, ∞);
• the nonlinear function f(·) is exactly Θ^T Φ(y), with vectorized coefficients Θ ∈ R^{N×m} and a Lipschitz-continuous vector of basis functions Φ(y) ∈ R^N;
• inside a ball B_R of known, finite radius R, the ideal NN approximation f(y): R^n → R^m is realized to a sufficient degree of accuracy, ε_f > 0;
• the process noise w(k) is estimated alongside the model parameters by the dynamic RNN;
• outside B_R, the NN approximation error can be upper-bounded by a known scalar function ε_max such that ‖ε‖ ≤ ε_max, ∀y ∈ B_R;
• there exists an exponentially stable reference model

ẏ_m = A_m y_m + B_m r,   (5)

with a Hurwitz matrix A_m ∈ R^{n×n} and B_m ∈ R^{n×m}, commanded by a reference signal r ∈ R^m. For this system, we note that n = 3 and m = 6.

Our objective is to design a model-reference adaptive controller (MRAC) capable of operating in the presence of parametric (ε_f) and non-parametric (w(k)) uncertainties, so as to assure the boundedness of all signals within the closed-loop system. We propose the following controller:

u = K̂_y^T y + K̂_r^T r + f̂(y, u),   (6)

where K̂_y and K̂_r are adaptive gains to be designed shortly. The K̂_y^T y term keeps the states of the approximation set y ∈ B_R stable, while the K̂_r^T r term causes the states to follow a given reference trajectory. The function approximator f̂(·) ensures that states which start outside the approximation set converge to B_R in finite time (it drives states pushed out of the approximation set by the non-parametric errors ε_f back into B_R).
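A literal reading of (6) in code is below. The gains are random placeholders and `f_hat` is a stand-in for the sampled LSTM layer, so the numbers are meaningless; the sketch only fixes the shapes and wiring of the three controller terms for n = 3 and m = 6.

```python
import numpy as np

n, m = 3, 6                              # pose states, valve inputs
rng = np.random.default_rng(1)
K_y = 0.1 * rng.normal(size=(n, m))      # placeholder adaptive gain K_y
K_r = 0.1 * rng.normal(size=(m, m))      # placeholder adaptive gain K_r

def f_hat(y, u_prev):
    """Stand-in for the NN term; the paper samples this from the LSTM."""
    return 0.01 * np.tanh(np.concatenate([y, u_prev])[:m])

def control(y, r, u_prev):
    """u = K_y^T y + K_r^T r + f_hat(y, u), as in (6)."""
    return K_y.T @ y + K_r.T @ r + f_hat(y, u_prev)

u = control(np.array([0.01, 0.2, 0.5]), np.ones(m), np.zeros(m))
```

Note the dimensions: K_y ∈ R^{n×m} multiplies the n-dimensional pose, K_r ∈ R^{m×m} multiplies the m-dimensional reference, and all three terms land in the m-dimensional valve-command space.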
We can generally write the NN model as f̂(y) = Θ̂^T Φ(y) + ε_f, where Θ̂ denotes the vectorized weights of the neural network and Φ(y) denotes the vector of past inputs and outputs,

Φ(y) = {y(k−d) · · · y(k−d−4), u(k−d) · · · u(k−d−5)},   (7)

and ε_f is the approximation error. The closed-loop dynamics therefore become

ẏ = Ay + BΛ(K̂_y^T y + K̂_r^T r + f̂(·) − f(·)).   (8)

We assume nonlinear function and approximator matching conditions, f(·) = f̂(·), such that after rearrangement, (8) can be written as

ẏ = (A + BΛK̂_y^T) y + BΛ(K̂_r^T r + ε_f).   (9)

Furthermore, we assume model matching conditions with ideal constant gains K_y and K_r, so that

A + BΛK_y^T = A_m  and  BΛK_r^T = B_m,   (10)

from which

A + BΛK̂_y^T − A_m = BΛK̃_y^T  and  BΛK̂_r^T − B_m = BΛK̃_r^T,   (11)

where K̃_y^T = K_y^T − K̂_y^T and K̃_r^T = K_r^T − K̂_r^T. The generalized error state vector e(k) = y(k) − y_m(k) has dynamics ė(k) = ẏ(k) − ẏ_m(k), so that by substituting (5) and (8) into ė, we have

ė(k) = A_m e(k) + BΛ[K̃_r^T r + K̃_y^T y − ε_f].   (12)

The estimation error will be bounded as long as y ∈ B_R. Our goal is to keep y ∈ B_R.

Theorem: Given a correct choice of adaptive gains K̂_y and K̂_r, the error vector e(k), with closed-loop time derivative given by (12), will be uniformly ultimately bounded, and the state y will converge to a neighborhood of r.
Proof: We choose a Lyapunov function candidate V in terms of the generalized error state e, the gain errors K̃_y, K̃_r, and the parameter error ε_f(y(k)) ([18], [19], [20]) as follows:

V(e, K̃_y, K̃_r) = e^T P e + tr(K̃_y^T Γ_y⁻¹ K̃_y |Λ|) + tr(K̃_r^T Γ_r⁻¹ K̃_r |Λ|),   (13)

where Γ_y and Γ_r are fixed symmetric positive definite (SPD) matrices of adaptation rates, tr(A) denotes the trace of matrix A, and P is the unique SPD matrix solution of the algebraic Lyapunov equation

P A_m + A_m^T P = −Q,   (14)

where Q is an SPD matrix. Taking the time derivative of (13),

V̇(e, K̃_y, K̃_r) = ė^T P e + e^T P ė + 2 tr(K̃_y^T Γ_y⁻¹ (dK̂_y/dt) |Λ|) + 2 tr(K̃_r^T Γ_r⁻¹ (dK̂_r/dt) |Λ|)
= e^T (P A_m + A_m^T P) e + 2 e^T P B Λ (K̃_y^T y + K̃_r^T r − ε_f(y)) + 2 tr(K̃_y^T Γ_y⁻¹ (dK̂_y/dt) |Λ|) + 2 tr(K̃_r^T Γ_r⁻¹ (dK̂_r/dt) |Λ|)
= −e^T Q e − 2 e^T P B Λ ε_f(y) + 2 e^T P B Λ K̃_y^T y + 2 tr(K̃_y^T Γ_y⁻¹ (dK̂_y/dt) |Λ|) + 2 e^T P B Λ K̃_r^T r + 2 tr(K̃_r^T Γ_r⁻¹ (dK̂_r/dt) |Λ|).

Since x^T y = tr(y x^T) by the trace identity, we have

V̇(·) = −e^T Q e − 2 e^T P B Λ ε_f + 2 tr(K̃_y^T (Γ_y⁻¹ (dK̂_y/dt) + y e^T P B sgn(Λ)) |Λ|) + 2 tr(K̃_r^T (Γ_r⁻¹ (dK̂_r/dt) + r e^T P B sgn(Λ)) |Λ|),   (15)

where for a real-valued x we have x = sgn(x)|x|. The first term in (15) is negative definite for all e ≠ 0 since A_m is Hurwitz, and the trace terms in (15) will be identically null if we choose the adaptation laws

dK̂_y/dt = −Γ_y y e^T P B sgn(Λ),   dK̂_r/dt = −Γ_r r e^T P B sgn(Λ).   (16)

The time derivative of the Lyapunov function can then be written as

V̇(·) = −e^T Q e − 2 e^T P B Λ ε_f ≤ −λ_low ‖e‖² + 2 ‖e‖ ‖P B‖ λ_high(Λ) ε_max,   (17)

where λ_low and λ_high represent the minimum and maximum characteristic roots of Q and Λ respectively. V̇(·) is thus negative definite outside the compact set

χ = { e : ‖e‖ ≤ 2 ‖P B‖ λ_high(Λ) ε_max / λ_low(Q) },   (18)

and we conclude that the error e is uniformly ultimately bounded. As e converges to a neighborhood of 0, y converges to a neighborhood of y_m. From the stable model reference system in (5), y converges to a neighborhood of r. Note that asymptotic convergence of e to zero is not guaranteed, but the parametric errors are guaranteed to stay bounded.

B. Network Design

We require an accurate mapping of temporally lagged patterns in inputs to output states: a dynamic nonlinear model from valve encoder values to sensor measurements that accurately captures f(·) in (4). We choose an LSTM [17] due to its capacity for long-term context memorization and its inherent multiplicative units, which avoid oscillating weights or vanishing gradients when error signals are backpropagated in time [17], [21]. LSTMs truncate gradients in the network where it is harmless by enforcing constant error flow through their constant error carousels. As a result, LSTMs are robustly more powerful for adaptive sequence-to-sequence modeling or mapping of data that evolve temporally. Their biological model makes them more suitable for adaptive robotics, such as soft robots, than previously used artificial NNs such as feedforward networks [22], radial basis functions [20], [23] or vanilla RNNs [24].

The NN model takes a memory-based concatenated vector of current inputs and past outputs as in (7), propagates it through three hidden layers of {9, 6, 6} neurons respectively, applies 30% dropout, and maps the last layer to a fully connected layer that generates valve torques. The architecture of the neuro-controller is shown in Fig. 4. The last layer is designed to generate appropriate valve torques based on an internal model of the plant. A self-tuning adaptive control law (with a feedforward regulation and state feedback component) adapts to the internal parameters of the plant to ensure stability of the system and bounded tracking of a given trajectory.
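The memory vector Φ(y) of (7) that the network consumes can be assembled with simple ring buffers: the last five delayed outputs and last six delayed inputs, concatenated. Dimensions below follow n = 3 pose states and m = 6 valve inputs; the zero-padding at start-up is our own convention for illustration, not something the paper specifies.

```python
from collections import deque
import numpy as np

n, m, n_y, n_u = 3, 6, 5, 6             # states, inputs, output/input lags
y_hist = deque(maxlen=n_y)              # y(k-d) ... y(k-d-4), newest first
u_hist = deque(maxlen=n_u)              # u(k-d) ... u(k-d-5), newest first

def regressor(y_k, u_k):
    """Push the newest samples and return the concatenated Phi vector,
    zero-padded until enough history has accumulated."""
    y_hist.appendleft(np.asarray(y_k, float))
    u_hist.appendleft(np.asarray(u_k, float))
    ys = list(y_hist) + [np.zeros(n)] * (n_y - len(y_hist))
    us = list(u_hist) + [np.zeros(m)] * (n_u - len(u_hist))
    return np.concatenate(ys + us)      # length n*n_y + m*n_u = 51

phi = None
for k in range(8):                      # feed 8 time steps of toy data
    phi = regressor(np.full(n, k), np.full(m, k))
```

After eight steps the buffer holds the five most recent pose samples and six most recent valve commands, giving the 51-dimensional vector that would feed the LSTM's first hidden layer.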
The overall network has approximately 1,400 neuron connection weights and thresholds. This makes the search for a suitable controller feasible. The LSTM estimates a model f(y) that minimizes the mean-squared error between the predicted output ŷ(k) and the actual output y(k) according to

f(y(k)) = arg min_w V_N(w, Φ(y)),   (19)

where V_N(w, Φ(y)) = Σ_{t=1}^{K} Σ_{i=1}^{n} ½ (ŷ_i(t) − y_i(t))², and Φ(y) is the regression vector defined in (7) on a bounded interval [1, N]. Equation (19) is minimized using stochastic gradient descent, so that at each iteration we update the parameters (weights) w_i of the network based on the ordered derivatives of V_N(w, Φ(y)) (Werbos [25]), i.e.,

w_{k+1} ← η w_k − α Σ_{i=1}^{n} ∇_w V(y_i, ŷ_i(θ_k)).   (20)

Here η (set to 1) hastens the optimization in a direction of low but steepest descent in training error, α is a sufficiently small learning rate (set to 5 × 10⁻³), and ∇_w V(θ, Φ(y)) is the derivative of V with respect to w, averaged over the k-th batch (we used a batch size of 50). We initialized the weights of Fig. 4 from a normal distribution with zero mean and unit variance.

VI. EXPERIMENT: SOFT ACTUATOR CONTROL

The head pose is determined by our formulation in Section IV. The 3-DOF pose of the head is made up of the state tuple {z(k), θ(k), φ(k)}.

A. Adaptive Control Parameters

We sample from the parameters of the trained network and set f̂(·) in (6) to the fully connected layer of samples from the network. We publish the control law from the neural network and subscribe to it in a separate node. The gains K̂_y and K̂_r in (16) were found by solving the ODEs iteratively, using a single step of the integral of the solutions to dK̂_y/dt and dK̂_r/dt. Our solution is an implementation of the Runge-Kutta Dormand-Prince 5 ODE solver available in the Boost C++ Libraries². We found a step size of 0.01 to be realistic.
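One integration step of the adaptation laws (16) can be sketched as follows. We use a fixed-step classical RK4 here in place of the Dormand-Prince 5 solver used in our implementation, and the values of Γ_y, Γ_r, P, and the signals y, r, e are illustrative, not the experimental ones; only B and the step size 0.01 follow the text.

```python
import numpy as np

n, m, h = 3, 6, 0.01                    # dimensions and solver step size
Gam_y, Gam_r = 2.0 * np.eye(n), 2.0 * np.eye(m)   # illustrative rates
P = 63.9 * np.eye(n)                    # an SPD solution of (14), stand-in
B = np.zeros((n, m))
B[0, :2] = B[1, 2:4] = B[2, 4:] = 1.0   # valve-pair structure, as in (23)
sgnLam = np.eye(m)
y = np.array([0.014, 1.6, 45.0])        # toy pose, reference, and error
r = np.ones(m)
e = np.array([0.002, 0.1, 2.0])

def Kdot(K, sig, Gam):
    """dK/dt = -Gam sig e^T P B sgn(Lam); sig = y for K_y, r for K_r."""
    return -(Gam @ sig)[:, None] @ (e @ P @ B @ sgnLam)[None, :]

def rk4_step(K, sig, Gam):
    k1 = Kdot(K, sig, Gam)              # the right-hand side of (16) does
    k2 = Kdot(K + h/2 * k1, sig, Gam)   # not depend on K itself, so the
    k3 = Kdot(K + h/2 * k2, sig, Gam)   # four stages coincide; they are
    k4 = Kdot(K + h * k3, sig, Gam)     # kept to show the solver shape
    return K + h/6 * (k1 + 2*k2 + 2*k3 + k4)

K_y = rk4_step(np.zeros((n, m)), y, Gam_y)
K_r = rk4_step(np.zeros((m, m)), r, Gam_r)
```

Because the right-hand side of (16) is constant over a step (for frozen y, r, e), the RK4 step reduces to a simple Euler increment; a full solver only matters once the signals vary within the step.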
y_m in (5) is computed from the forced response of the linear system,

y_m(t) = e^{A_m t} y_m(0) + ∫_0^t e^{A_m (t − τ)} B_m r(τ) dτ.

We set y_m(0) = y(0) at t = 0, and for a settling-time requirement of T_s = 5 s, at which the response remains within 2% of its final value, we find that

A_m = diag(−1334/1705, −1334/1705, −1334/1705).    (21)

For a nonnegative Q and a positive definite P, the pair (Q, A_m) is observable (LaSalle's theorem), so that the dynamical system is globally asymptotically stable. After searching, we picked a positive definite Q = diag(100, 100, 100) for the dissipation energy in (17) and set Λ = I_{3×3}, so that solving the general form of the Lyapunov equation yields

P = diag(170500/2668, 170500/2668, 170500/2668).    (22)

² https://goo.gl/l7JyYe

Fig. 5: Head motion correction along the z, pitch, and roll axes.

Fig. 6: Head motion correction along the roll axis.

The six solenoid valves operate in pairs, so that two valves create a difference in air mass within each IAB at any given time. Therefore, we set

B = [ 1 1 0 0 0 0 ; 0 0 1 1 0 0 ; 0 0 0 0 1 1 ].    (23)

The nonzero terms in (23) denote the maximum duty cycle that can be applied to the Dakota valves based on the software configuration of the NI RIO PWM generator.

B. Results and Analysis

The three DoFs of the head are coupled, and there is a limited reachable space with the IABs.
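The diagonal choices of A_m, Q, and P above can be checked numerically: for A_m = aI with a = −1334/1705 (≈ −4/T_s for the 5 s, 2% settling-time requirement) and Q = 100 I, the Lyapunov equation A_mᵀP + PA_m = −Q reduces to P = −Q/(2a). A quick NumPy sanity check:

```python
import numpy as np

a = -1334.0 / 1705.0   # model-reference pole for the 5 s, 2% settling time
A_m = a * np.eye(3)    # A_m in (21)
Q = 100.0 * np.eye(3)  # dissipation energy in (17)

P = -Q / (2.0 * a)     # diagonal solution of A_m^T P + P A_m = -Q

# P is positive definite and matches diag(170500/2668, ...) in (22).
assert np.allclose(A_m.T @ P + P @ A_m, -Q)
print(P[0, 0])         # 170500/2668, approximately 63.906
```

Since a < 0, the resulting P is positive definite, as the Lyapunov argument requires.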
It is therefore paramount that desired trajectories be ascertained to be physically realizable before rolling out control trials. We therefore moved the head to physically realizable positions in open-loop control settings before testing the closed-loop control system on such feasible goal poses. Fig. 5 shows the performance of the controller when commanded to move the head from [z, θ, φ]ᵀ = [2.5 mm, 0.25°, 35°]ᵀ to [14 mm, 1.6°, 45°]ᵀ. We observe strong steady-state convergence along two DoFs, namely the z and pitch axes, with a 20-second rise time. The roll motion, however, is characterized by overshoots that may be caused by the coupled DOFs. In a second experiment, shown in Fig. 6, we evaluate the performance of the controller on the roll angle of the head alone. We observe that the controller regulates the roll motion well in isolation. The overshoots in Fig. 5 are therefore likely due to coupled dynamics not accounted for in our formulation. While our results are promising, showing the feasibility of the control law along the physically realizable axes of motion, the coupled degrees of freedom need further investigation to achieve independent and precise axial motion control while preserving global head goal requirements.

VII. DISCUSSION AND FUTURE WORK

We have presented a soft-robot motion compensation system that uses a robust adaptive neuro-controller to correct patient positioning deviations in F&M RT along 3 degrees of freedom. Unlike related works that perform an inspection-based correction mechanism or employ radiation-attenuating devices such as stepper motors positioned underneath the patient's head on the table, our method avoids reducing the efficacy of dosage plans by using compliant nonlinear soft elastomeric polymers. It also eliminates the discomfort of wearing metallic rings that rigidly immobilize a patient during treatment setup and dosage administration.
The prospect of our controller autonomously adapting to changing model parameters by learning compact and portable state representations of complex environments has widespread implications for autonomous robots. Further research in this direction will focus on decoupling the control laws for the coupled states of the system while preserving global pose objectives. We are investigating control of custom-made multi-chambered air bladders. Extending to higher-DoF control, we will evaluate multiple soft-robot systems on a full-fledged human phantom and in volunteer human trials.

REFERENCES

[1] O. A. Zeidan, K. M. Langen, S. L. Meeks, R. R. Manon, T. H. Wagner, T. R. Willoughby, D. W. Jenkins, and P. A. Kupelian, "Evaluation of image-guidance protocols in the treatment of head and neck cancers," International Journal of Radiation Oncology*Biology*Physics, vol. 67, no. 3, pp. 670–677, 2007.
[2] F. Sterzing, R. Engenhart-Cabillic, M. Flentje, and J. Debus, "Image-Guided Radiotherapy: A New Dimension in Radiation Oncology," Deutsches Aerzteblatt International, vol. 108, no. 16, p. 274, 2011.
[3] J. Rossiter and H. Hauser, "Soft Robotics - The Next Industrial Revolution? [Industrial Activities]," IEEE Robotics & Automation Magazine, vol. 23, no. 3, pp. 17–20, Sept. 2016.
[4] O. Ogunmolu, X. Gu, S. Jiang, and N. Gans, "A Real-Time Soft Robotic Patient Positioning System for Maskless Head-and-Neck Cancer Radiotherapy: An Initial Investigation," in IEEE International Conference on Automation Science and Engineering, Gothenburg, Sweden, Aug. 2015.
[5] O. Ogunmolu, X. Gu, S. Jiang, and N. Gans, "Vision-based Control of a Soft Robot for Maskless Head and Neck Cancer Radiotherapy," in IEEE International Conference on Automation Science and Engineering, Fort Worth, Texas, Aug. 2016.
[6] L. I. Cerviño, N. Detorie, M. Taylor, J. D. Lawson, T. Harry, K. T. Murphy, A. J. Mundt, S. B. Jiang, and T. A.
Pawlicki, "Initial Clinical Experience with a Frameless and Maskless Stereotactic Radiosurgery Treatment," Practical Radiation Oncology, vol. 2, no. 1, pp. 54–62, 2012.
[7] L. I. Cerviño, T. Pawlicki, J. D. Lawson, and S. B. Jiang, "Frame-less and mask-less cranial stereotactic radiosurgery: a feasibility study," Physics in Medicine and Biology, vol. 55, no. 7, p. 1863, 2010.
[8] G. Li, A. Ballangrud, M. Chan, R. Ma, K. Beal, Y. Yamada, T. Chan, J. Lee, P. Parhar, J. Mechalakos, et al., "Clinical Experience with two Frameless Stereotactic Radiosurgery (FSRS) Systems using Optical Surface Imaging for Motion Monitoring," Journal of Applied Clinical Medical Physics, vol. 16, no. 4, p. 5416, 2015.
[9] X. Liu, A. H. Belcher, Z. Grelewicz, and R. D. Wiersma, "Robotic Stage for Head Motion Correction in Stereotactic Radiosurgery," in American Control Conference (ACC). IEEE, 2015, pp. 5776–5781.
[10] E. GmbH. Flexview. Accessed on January 21, 2016. [Online]. Available: http://www.ensenso.com/products/flexview/
[11] R. B. Rusu, "Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments," PhD thesis, 2009.
[12] P. H. S. Torr and A. Zisserman, "MLESAC: A New Robust Estimator with Application to Estimating Image Geometry," Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138–156, 2000.
[13] R. B. Rusu, Z. C. Marton, N. Blodow, M. E. Dolha, and M. Beetz, "Functional Object Mapping of Kitchen Environments," in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3525–3532, 2008.
[14] R. B. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13, 2011.
[15] R. B. Rusu, A. Holzbach, M. Beetz, and G. Bradski, "Detecting and Segmenting Objects for Mobile Manipulation," in Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on.
IEEE, 2009, pp. 47–54.
[16] P. J. Besl and N. D. McKay, "A Method for Registration of 3-D Shapes," 1992.
[17] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[18] P. Parks, "Liapunov Redesign of Model Reference Adaptive Control Systems," IEEE Transactions on Automatic Control, vol. 11, no. 3, pp. 362–367, 1966.
[19] Y. D. Landau, Adaptive Control: The Model Reference Approach. Marcel Dekker, Inc., 1979.
[20] E. Lavretsky and K. Wise, Robust Adaptive Control with Aerospace Applications. Springer, 2005.
[21] Y. Bengio et al., "Learning Long-Term Dependencies with Gradient Descent is Difficult," IEEE Transactions on Neural Networks, 1994, doi: 10.1109/72.279181.
[22] H. Dinh, S. Bhasin, R. Kamalapurkar, and W. E. Dixon, "Dynamic Neural Network-based Output Feedback Tracking Control for Uncertain Nonlinear Systems," Journal of Dynamic Systems, Measurement, and Control, 2017.
[23] H. Patino and D. Liu, "Neural Network-Based Model Reference Adaptive Control System," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 30, no. 1, pp. 198–204, 2000.
[24] J. S. Wang and Y. P. Chen, "A Fully Automated Recurrent Neural Network for Unknown Dynamic System Identification and Control," IEEE Transactions on Circuits and Systems, vol. 53, 2006.
[25] P. J. Werbos, "Backpropagation Through Time: What It Does and How to Do It," Proceedings of the IEEE, vol. 78, no. 10, pp. 1550–1560, 1990.