arXiv:1102.3165v1 [cs.CG] 15 Feb 2011

An Approximation Algorithm for Computing Shortest Paths in Weighted 3-d Domains∗

Lyudmil Aleksandrov†  Hristo Djidjev‡  Anil Maheshwari§  Jörg-Rüdiger Sack¶

October 1, 2018

Abstract

We present the first polynomial time approximation algorithm for computing shortest paths in weighted three-dimensional domains. Given a polyhedral domain $D$, consisting of $n$ tetrahedra with positive weights, and a real number $\varepsilon \in (0,1)$, our algorithm constructs paths in $D$ from a fixed source vertex to all vertices of $D$, whose costs are at most $1+\varepsilon$ times the costs of (weighted) shortest paths, in $O(\frac{C(D)n}{\varepsilon^{2.5}}\log\frac{n}{\varepsilon}\log^3\frac{1}{\varepsilon})$ time, where $C(D)$ is a geometric parameter related to the aspect ratios of tetrahedra. The efficiency of the proposed algorithm is based on an in-depth study of the local behavior of geodesic paths and additive Voronoi diagrams in weighted three-dimensional domains, which are of independent interest. The paper extends the results of Aleksandrov et al. [4] to three dimensions.

∗ Research supported by NSERC. Preliminary results have appeared in [3].
† Bulgarian Academy of Sciences, IPP, Acad. G. Bonchev Str. Bl. 25-A, 1113 Sofia, Bulgaria. lyualeks@bas.bg
‡ Los Alamos National Laboratory, Los Alamos, NM 87544, U.S.A.
§ School of Comp. Sci., Carleton U., Ottawa, Ontario K1S 5B6, Canada. anil@scs.carleton.ca
¶ School of Comp. Sci., Carleton U., Ottawa, Ontario K1S 5B6, Canada. sack@scs.carleton.ca

1 Introduction

1.1 Motivation

The computation of shortest paths is a key problem arising in a number of diverse application areas including geographic information systems, robotics, computer graphics, computer-aided design, medical computing and others. This has motivated the study and subsequent design of efficient algorithms for solving shortest path problems in different settings based on the geometric nature of the problem domain (e.g., two-dimensional (2-d), three-dimensional (3-d), surfaces, presence/absence of obstacles) and the cost function/metric (e.g., Euclidean, $L_p$, link distance, weighted/unweighted, multi-criteria). In addition to its driver - the applications - the field has provided, and continues to provide, exciting challenges from a theoretical perspective. As a result, shortest path problems have become fundamental problems in areas of Computer Science such as Computational Geometry and Algorithmic Graph Theory.

The standard 3-d Euclidean shortest path problem of computing a shortest path between a pair of points avoiding a set of polyhedral obstacles, denoted as the ESP3D problem, is known to be NP-hard even when the obstacles are parallel triangles in space. It is not difficult to see that the number of combinatorially distinct shortest paths from a source point to a destination point may be exponential in the input size. Canny and Reif [8] used this to establish the NP-hardness of the ESP3D problem, for any $L_p$ metric, $p \ge 1$. In addition to this combinatorial hardness result, Bajaj [7] has provided an algebraic hardness argument that an exponential number of bits may be required. More recently, Mitchell and Sharir [22] gave NP-completeness proofs for the problem of computing Euclidean shortest paths among sets of stacked axis-aligned rectangles, and for computing $L_1$-shortest paths among disjoint balls. Given the NP-hardness of the ESP3D problem, work has focused on exploiting the geometric structure of the obstacles and/or on providing approximation algorithms.
We will mention some of these approaches in Section 1.4.

In many applications, the Euclidean metric does not adequately capture the nature of the problem, for instance when the problem domain is not homogeneous. This motivates the weighted versions of the shortest path problem. For example, in the 2-d case, consider triangulated regions where each triangle represents a particular terrain type such as water, rock, or forest. Here different weights capture the cost of traveling a Euclidean unit-length through each face. Incorporating weights makes the solution more difficult to obtain even in 2-d, but it does provide more realistic answers. It is known that light and other types of waves (e.g., seismic and sonic) travel along shortest paths in heterogeneous media. Hence, algorithms solving the weighted shortest path problem (WSP3D) can be used for modeling wavefront propagation in such media. In 3-d, a number of applications are non-homogeneous in nature and can be expressed using the weighted model. Next, we list some such potential applications.

• In geology, seismic refraction and reflection methods are based on measurements of the travel time of seismic waves refracted at the interfaces between subsurface layers of different densities. As such waves propagate along shortest paths, weighted shortest path algorithms may be used to produce more accurate and more efficient estimations of subsurface layer characteristics, e.g., the amount of oil contained in the subsurface [14]. Another related application is the assessment of garbage dumps' health. When a new garbage dump is built, sensors are placed at the bottom, and when the garbage dump starts to fill, waves from the top passing through the garbage to these sensors are used to determine the decomposition rate of the garbage [14].

• Computations of 3-d shortest paths have also been used to compute fastest routes for aircraft between designated origin and destination points while avoiding hazardous, time-varying weather systems. Krozel et al. [17] investigate synthesizing weather avoidance routes in the transition airspace. Our weighted 3-d region model can be used to generalize that approach: instead of totally avoiding undesirable regions, one can assign penalty weights to them and then search for routes that minimize travel through such regions, while also avoiding unnecessarily long detours.

• In medical applications, simulation of sonic wavefront propagation is used when performing imaging methods such as photoacoustic tomography or ultrasound imaging through heterogeneous tissue [12, 27]. In radiation therapy, domain features include densities of tissue, bone, organs, cavities, or risk of radiation exposure, and optimal radiation treatment planning takes this non-homogeneity into consideration.

• The problem of time-optimum movement planning in 2-d and 3-d for a point robot that has bounded control velocity through a set of n polygonal regions of given translational flow velocities has been studied by Reif and Sun [24]. They state that this intriguing geometric problem has immediate applications to macro-scale motion planning for ships, submarines, and airplanes in the presence of significant flows of water or air. It is also a central motion planning problem for many of the meso-scale and micro-scale robots that operate in environments with significant flows that can affect their movement. They establish the computational hardness of the 3-d version of this problem by showing it to be PSPACE-hard.
They give a decision algorithm for the 2-d flow path problem, which has very high computational complexity, and they also design an efficient approximation algorithm with bounded error. The determination of the exact computational complexity of the 3-d flow path problem is posed as an open problem. Although our weighted 3-d model does not apply directly to this setting, it can be used to construct initial approximations by assigning appropriate weights depending on the velocity and direction of the flows in different regions. In addition, the discretization scheme and the algorithmic techniques developed here can be used for solving the 3-d flow path problem.

1.2 Problem formulation

In this paper, we consider the following problem. Let $D$ be a connected 3-d domain consisting of $n$ tetrahedra with a positive real weight associated to each of them. The 3-d weighted shortest path problem (WSP3D) is to compute minimum-cost paths in $D$ from a fixed source vertex to all vertices of $D$. The cost of a path in $D$ is defined as the weighted sum of the Euclidean lengths of the sub-paths within each crossed tetrahedron. We will describe and analyze an approximation algorithm for this problem that, for any real number $\varepsilon \in (0,1)$, computes paths whose costs are at most $1+\varepsilon$ times greater than the costs of the minimum cost paths. In Section 2, we describe our model in detail.

Note that the WSP3D problem can be viewed as a generalization of the ESP3D problem. Namely, given an instance of the ESP3D problem, one can find a large enough cube containing all the obstacles, tetrahedralize the free space (i.e., the exterior of the obstacles inside the cube), and assign equal weights to the resulting tetrahedra, obtaining an instance of the WSP3D problem.

1.3 Challenges

A key difference between Euclidean shortest path computation in 2-d and in 3-d weighted domains is the NP-hardness already mentioned. Underlying this is the fact that, unlike in 2-d, Euclidean 3-d shortest paths are not discrete. Specifically, in 2-d, the edges of a shortest path (e.g., Euclidean shortest paths among obstacles in the plane) are edges of a graph, namely, the visibility graph of the obstacles including the source and the destination points. In contrast, in polyhedral 3-d domains, the bending points of shortest paths on obstacles may lie in the interior of the obstacles' edges. Moreover, in weighted 3-d settings, bending points may even belong to the interior of faces.

Furthermore, even in the case of weighted triangulated planar domains, the (weighted) shortest path may cross each of the $n$ cells $\Theta(n)$ times and may be composed of $\Theta(n^2)$ segments in total. Not only is the path complexity higher, but the computation of weighted shortest paths in 2-d turns out to be substantially more involved than in the Euclidean setting. In fact, no exact algorithm is known, and the first $(1+\varepsilon)$-approximation algorithm due to [21] had an $O(n^8\log\frac{n}{\varepsilon})$ time bound, where $n$ is the number of triangles in the subdivision. This problem has been actively researched since then, and currently the best known algorithm for the weighted region problem on planar subdivisions (as well as on polyhedral surfaces) runs in $O(\frac{n}{\sqrt{\varepsilon}}\log\frac{n}{\varepsilon}\log\frac{1}{\varepsilon})$ time [4]. (Also, see [4] for a detailed literature review for the planar case.)

One of the classical tools of Computational Geometry is the Voronoi diagram. This structure finds numerous applications (see e.g., [6]).
It is also a key ingredient in several efficient shortest path algorithms. Researchers have studied these diagrams under several metrics (including Euclidean, Manhattan, weighted, additive, convex, abstract) and for different types of objects (including points, lines, curves, polygons), but somehow the computation of these diagrams in media with different densities (i.e., refractive media) has remained elusive. One of the main ingredients in solving the problem studied here is to compute (partial) additive Voronoi diagrams of points in refractive media. The generic techniques of Klein [15, 16] and Lê [19] do not apply in this case, as the bisecting surfaces do not satisfy the required conditions. In this paper, we make an important step towards the understanding and computation of these diagrams.

1.4 Previous related work

By now, shortest path problems in 2-d are fairly well understood. Efficient algorithms have been developed for many problem instances and surveys are readily available describing the state of the art in the field.

In 3-d, virtually all the work has been devoted to the ESP3D problem. Papadimitriou [23] suggested the first polynomial time approximation scheme for that problem. It runs in $O(\frac{n^4}{\varepsilon^2}(L + \log(n/\varepsilon)))$ time, where $L$ is the number of bits of precision in the model of computation. Clarkson [11] provided an algorithm running in $O(n^2\lambda(n)\log(n/\varepsilon)/\varepsilon^4 + n^2\log(n\rho)\log(n\log\rho))$ time, where $\rho$ is the ratio of the longest obstacle edge to the distance between the source and the target vertex, $\lambda(n) = \alpha(n)^{O(\alpha(n))^{O(1)}}$, and $\alpha(n)$ is the inverse Ackermann function.

Papadimitriou's algorithm was revised and its analysis was refined by Choi et al. [9] under the bit complexity framework. Their algorithm runs roughly in $O(\frac{n^4L^2}{\varepsilon^2}\mu(X))$ time, where $\mu(X)$ represents the time (or bit) complexity of multiplying $X$-bit integers and $X = O(\log(\frac{n}{\varepsilon}) + L)$. In [10], the same authors further developed their ideas and proposed a precision-sensitive algorithm for the ESP3D problem. In [5], Asano et al. proposed and studied a technique for computing approximate solutions to optimization problems and obtained another precision-sensitive approximation algorithm for the ESP3D problem with improved running time in terms of $L$.

Har-Peled [13] proposed an algorithm that invokes Clarkson's algorithm as a subroutine $O(\frac{n^2}{\varepsilon^2}\log\frac{1}{\varepsilon})$ times to build a data structure for answering approximate shortest path queries from a fixed source in $O(\log\frac{n}{\varepsilon})$ time. The data structure is constructed in roughly $O(\frac{n^6}{\varepsilon^4})$ time. Agarwal et al. [1] considered the ESP3D problem for the case of convex obstacles and proposed an approximation algorithm running in $O(n + \frac{k^4}{\varepsilon^7}\log^2\frac{k}{\varepsilon}\log\log k)$ time, where $k$ is the number of obstacles. In contrast to all other algorithms discussed here, the complexity of this algorithm does not depend on the geometric features of the obstacles. In the same paper, the authors describe a data structure for answering approximate shortest path queries from a fixed source in logarithmic time.

In the weighted (non-Euclidean) 3-d case no previous algorithms have been reported by other authors. In [3], we have announced and sketched a polynomial time approximation scheme for the WSP3D problem that runs in $O(\frac{n}{\varepsilon^{3.5}}\log\frac{1}{\varepsilon}(\frac{1}{\sqrt{\varepsilon}} + \log n))$ time. The run-time improves to $O(\frac{n}{\varepsilon^3}\log\frac{1}{\varepsilon}\log n)$ when all weights are equal. This algorithm can be used to efficiently solve the ESP3D problem.
In this paper, we apply that approach, but develop the required details, apply new techniques, improve the complexity bounds, and provide a rigorous mathematical analysis.

1.5 Contributions of this paper

In this paper, we make several contributions to the fields of shortest path computations and the analysis of the weighted 3-d regions model, as listed below.

• We provide an approximation algorithm for solving the WSP3D problem in a polyhedral domain $D$ consisting of $n$ weighted tetrahedra. The algorithm computes approximate weighted shortest paths from a source vertex to all other vertices of $D$ in $O(\frac{C(D)n}{\varepsilon^{2.5}}\log\frac{n}{\varepsilon}\log^3\frac{1}{\varepsilon})$ time, where $\varepsilon \in (0,1)$ is the user-specified approximation parameter and $C(D)$ is a geometric parameter related to the aspect ratios of the tetrahedra (see Lemma 3.1 for details on the value of $C(D)$). The costs of the computed paths are within a factor of $1+\varepsilon$ of the costs of the corresponding shortest paths. As stated above, the ESP3D problem, i.e., the unweighted version of this problem, is already NP-hard even when the obstacles are parallel triangles in space [8]. The time complexity of our algorithm, which is designed for the more general weighted setting, compares favorably to the existing approximation algorithms even when applied in the Euclidean setting.

• Our detailed analysis, especially the results on additive Voronoi diagrams derived in Section 2, provides valuable insights into the understanding of Voronoi diagrams in heterogeneous media. This may open new avenues, for example, for designing an algorithm to compute discretized Voronoi diagrams in such settings.

• Our approximation algorithms in 2-d have proven to be easily implementable and of practical value [18]. Our algorithm for WSP3D presented here, in spite of being hard to analyze, essentially uses similar primitives, and thus has the potential to be implementable, practical, and applicable in different areas.

• Our work provides further evidence that discretization is a powerful tool when solving shortest path-related problems in both Euclidean and weighted settings. We conjecture that the discretization methodology used here generalizes to any fixed dimension. Furthermore, our discretization scheme is independent of the source vertex and can be used with no changes to approximate paths from other source vertices. This feature makes it applicable for solving the all pairs shortest paths problem and for designing data structures for answering shortest path queries in weighted 3-d domains.

• The complexity of our algorithm does not depend on the weights assigned to the tetrahedra composing $D$, but it does depend on their geometry. We analyze and evaluate that dependence in detail. Geometric dependencies arise also in Euclidean settings and in most of the previous papers. For example, in Clarkson [11], the running time of the algorithm depends on the ratio of the longest edge to the distance between the source and the target vertex. Applying known techniques (see e.g., [1]), such dependency can often be removed. Here, this would be possible provided that an upper bound on the number of segments on weighted shortest paths in 3-d were known. However, the increase in the dependency on $n$ in the time complexity that these techniques suffer from, which is of order $\Omega(n^2)$, appears not to justify such an approach here. In our approach, the dependency on the geometry is proportional to the average of the reciprocal squared sines of the dihedral angles of the tetrahedra composing $D$.
Thus, when $n$ is large, many tetrahedra would have to be fairly degenerate for this average to play a major role. We therefore conjecture that in typical applications the approach presented here would work well.

1.6 Organization of the paper

In Section 2, we describe the model used throughout this paper, formulate the problem, present some properties of shortest paths in 3-d, and derive a key result on additive Voronoi diagrams in refractive media. In Section 3, we describe our discretization scheme, which is a generalization of the 2-d scheme introduced in [4]. In Section 4, we construct a weighted graph, estimate the number of its nodes and edges, and prove that shortest paths in $D$ can be approximated by paths in the graph so constructed. In Section 5, we present our algorithm for the WSP3D problem. In Section 6, we conclude this paper.

2 Problem formulation and preliminaries

2.1 Model

Let $D$ be a connected polyhedral domain in 3-d Euclidean space. We assume that $D$ is partitioned into $n$ tetrahedra $T_1, \ldots, T_n$, such that $D = \cup_{i=1}^n T_i$ and the intersection of any pair of different tetrahedra is either empty or a common element (a face, an edge, or a vertex) on their boundaries. We call these tetrahedra cells. A positive weight $w_i$ is associated with each cell $T_i$ representing the cost of traveling in it. The cost of traveling along a boundary element of a cell is the minimum of the weights of the cells incident to that boundary element. We consider paths (connected rectifiable curves) that stay in $D$. The cost of a path $\pi$ in $D$ is defined by $\|\pi\| = \sum_{i=1}^n w_i|\pi_i|$, where $|\pi_i|$ denotes the Euclidean length of the intersection $\pi_i = \pi \cap T_i$. Boundary edges and faces are assumed to be part of the cell from which they inherit their weight.

Given two distinct points $u$ and $v$ in $D$, the shortest path problem in $D$ is to find a minimum cost path $\pi(u,v)$ between $u$ and $v$ that stays in $D$. We refer to the minimum cost paths as shortest paths. For a given approximation parameter $\varepsilon > 0$, a path $\pi_\varepsilon = \pi_\varepsilon(u,v)$ is an $\varepsilon$-approximation of the shortest path $\pi = \pi(u,v)$ if $\|\pi_\varepsilon\| \le (1+\varepsilon)\|\pi\|$. Without loss of generality, we may assume that the points $u$ and $v$ are vertices of $D$, since, if they are not, we can make them such by partitioning the cells containing them. In this paper, we present an algorithm that, for a given source vertex $u$ and an approximation parameter $\varepsilon \in (0,1)$, computes $\varepsilon$-approximate shortest paths from $u$ to all vertices of $D$.

In this setting, it is well known [21] (the 2-d case was treated there, but the arguments readily apply to the 3-d model considered in this paper) that shortest paths are simple (non self-intersecting) and consist of a sequence of segments whose endpoints are on the cell boundaries. The intersection of a shortest path with the interior of a cell, a face, or an edge is a set of disjoint segments. More precisely, each segment on a shortest path is of one of the following three types: (i) cell-crossing – a segment that crosses a cell joining two points on its boundary; (ii) face-using – a segment lying along a face of a cell; (iii) edge-using – a segment along an edge of a cell. We define linear paths to be paths consisting of cell-crossing, face-using, and edge-using segments exclusively. A linear path $\pi(u,v)$ can be represented as the sequence of its segments $\{s_1, \ldots, s_{l+1}\}$ or, equivalently, as the sequence of points $\{a_0, \ldots, a_{l+1}\}$ lying on the cell boundaries that are endpoints of these segments, i.e., $s_i = (a_{i-1}, a_i)$, $u = a_0$, and $v = a_{l+1}$.
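The path cost defined above is straightforward to evaluate once a linear path and the weights of the elements it traverses are known. The following small sketch is our own illustration (the function name and input conventions are not from the paper): it computes $\|\pi\| = \sum_i w_i|\pi_i|$ for a path given as its list of bending points, with one weight per segment.

```python
import math

def path_cost(points, weights):
    """Cost of a linear path per the model: sum over segments of
    (segment weight) * (Euclidean length).  `points` is the list of bending
    points a_0, ..., a_{l+1}; weights[i] is the weight applied to the segment
    (a_i, a_{i+1}), i.e. the weight of the cell, face, or edge containing it."""
    return sum(w * math.dist(p, q) for p, q, w in zip(points, points[1:], weights))

# Example: a two-segment path crossing from a cell of weight 2 into a cell of weight 5.
print(path_cost([(0, 0, 0), (1, 1, 0), (3, 1, 2)], [2.0, 5.0]))
```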
The points $a_i$ that are not vertices of cells are called bending points of the path.

The local behavior of a shortest path around a bending point $a$, lying in the interior of a face $f$, is fully described by the directions of the two segments of the shortest path, $s^-$ and $s^+$, that are incident to $a$. The direction of each of these two segments is described by a pair of angles, which we denote by $(\phi^-, \theta^-)$ and $(\phi^+, \theta^+)$, respectively. The in-angle $\phi^-$ is defined to be the acute angle between the direction normal to $f$ and the segment $s^-$. Similarly, the out-angle $\phi^+$ is the acute angle between the normal and the segment $s^+$. The angles $\theta^-$ and $\theta^+$ are the acute angles between the orthogonal projections of $s^-$ and $s^+$ onto the plane containing the face $f$ and a reference direction in that plane, respectively (see Figure 1). It is well known that when $\pi$ is a shortest path it is a linear path such that the angles $(\phi^-, \theta^-)$ and $(\phi^+, \theta^+)$ are related by Snell's law as follows:

Figure 1: An illustration of Snell's law of refraction.

Snell's Law of Refraction: Let $a$ be a bending point on a geodesic path $\pi$ lying in the interior of a face $f$ of $D$. Let the segments preceding and succeeding $a$ in $\pi$ be $s^-$ and $s^+$, respectively. Let the weights of the cells containing $s^-$ and $s^+$ be $w^-$ and $w^+$, respectively. Then $s^+$ belongs to the plane containing $s^-$ and perpendicular to $f$, and the angles $\phi^-$ and $\phi^+$ are related by $w^-\sin\phi^- = w^+\sin\phi^+$.

We refer to linear paths that are locally optimal (i.e., satisfy Snell's law) as geodesic paths. Hence, the shortest path between a pair of vertices $u$ and $v$ is the geodesic path of smallest cost joining them. In the following we discuss some of the implications of Snell's law on the local behavior of geodesic paths.

Hereafter, we denote by $\kappa$ the ratio $w^+/w^-$. Without loss of generality, we assume that $w^- \ge w^+$, i.e., $\kappa \le 1$. Let $\phi^*$ be the acute or right angle for which $\sin\phi^* = \kappa$. We refer to this angle as the critical angle for the face $f$. From Snell's law, it follows that $\phi^- \le \phi^*$. The case where $\phi^- = \phi^*$ deserves special attention. In this case, $\phi^+$ must be a right angle and therefore the segment $s^+$ is a face-using segment. Furthermore, if the second endpoint $a_1$ of $s^+$ is in the interior of $f$, then the segment following $s^+$ is inside the tetrahedron containing $s^-$, and the out-angle at $a_1$ is equal to $\phi^*$ (see Figure 3 (b)). In summary, if $s$ is a face-using segment, then it is preceded and followed by segments lying in the cell with bigger weight, and their corresponding in-angle and out-angle are equal to the critical angle $\phi^*$.

In the next subsection, we study the properties of simple geodesic paths joining points in neighboring cells or in the same cell through a face-using segment. We define a function related to the cost of these geodesic paths and prove a number of properties that it possesses. These properties are essential to the design and the analysis of our algorithm.
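Snell's law as stated above is easy to apply numerically. The sketch below is our own illustration, not code from the paper; the function names and the sample weights are hypothetical. It computes the out-angle from the in-angle and the two cell weights, and reports the critical angle $\phi^*$ beyond which no straight refraction exists.

```python
import math

def refract(phi_in, w_in, w_out):
    """Snell's law: w_in * sin(phi_in) = w_out * sin(phi_out).
    phi_in is the acute angle (radians) between the incoming segment and the
    face normal.  Returns the out-angle phi_out, or None when phi_in exceeds
    the critical angle (per the text, the locally optimal path then uses the
    face itself instead of refracting straight through)."""
    s = w_in * math.sin(phi_in) / w_out
    if s > 1.0:
        return None
    return math.asin(s)

def critical_angle(w_heavy, w_light):
    """Critical angle phi* with sin(phi*) = w_light / w_heavy (assumes w_light <= w_heavy)."""
    return math.asin(w_light / w_heavy)

# Hypothetical cell weights w- = 3 and w+ = 2: phi* is about 41.8 degrees,
# and an in-angle of 30 degrees refracts to roughly 48.6 degrees.
w_minus, w_plus = 3.0, 2.0
print(math.degrees(critical_angle(w_minus, w_plus)))
print(math.degrees(refract(math.radians(30.0), w_minus, w_plus)))
```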
2.2 Weighted distance function

Let $F$ be a plane in the three-dimensional Euclidean space. We denote the two half-spaces defined by $F$ by $F^-$ and $F^+$ and assume that positive weights $w^-$ and $w^+$ have been associated with them, respectively. We extend our model by assigning a weight $w$ to $F$, so that $w = \min(w^-, w^+)$ if $w^- \ne w^+$, and $0 < w \le w^+ (= w^-)$ if $w^- = w^+$. The latter case models the situation where the geodesic path joins two points in the same cell through a face-using segment on the boundary of that cell.

Figure 2: The geodesic path $\bar\pi(v,x)$ joining $v$ and $x$ is illustrated. The weighted distance function $c(v,x)$ equals the cost of $\bar\pi(v,x)$, i.e., $c(v,x) = \|\bar\pi(v,x)\| = w^-|va| + w^+|ax|$.

We refer to the half-spaces $F^-$ and $F^+$ as the lower and the upper half-space, respectively. Let $v$ be a point in the lower half-space $F^-$ at distance $z^-$ from $F$ and let $\ell$ be a line parallel to $F$ in the upper half-space $F^+$ at distance $z^+$ from $F$. Let $Oxyz$ be a Cartesian coordinate system such that the plane $Oxy$ coincides with $F$, $v$ has coordinates $(0, y, -z^-)$, and the line $\ell$ is described by $\{\ell : y = 0, z = z^+\}$. We consider a point $x = (x, 0, z^+)$ on $\ell$ and denote by $\bar\pi(v, x)$ the geodesic path between $v$ and $x$. In this setting, the geodesic path is unique and thus coincides with the shortest path. We denote the cost of this path by $c(v, x)$, where $x$ is the $x$-coordinate of $x$. So, for fixed $\ell$, $c(v, x)$ can be viewed as a function defined for any real $x$. We call $c$ the weighted distance function from $v$ to $\ell$ (Figure 2).

The local structure of the geodesic path $\bar\pi(v, x)$ is governed by Snell's law. In the case where $w^- \ne w^+$, the shortest path between $v$ and $x$ consists of two segments $(v, a)$ and $(a, x)$, where the bending point $a$ is uniquely determined by Snell's law (Figure 3 (a)). In the case where $w^- = w^+$, the structure of the path $\bar\pi(v, x)$ is as follows. It is a single segment $(v, x)$, provided that the angle $\phi$ between $(v, x)$ and the direction normal to the plane $F$ is smaller than or equal to the critical angle $\phi^*$ defined by $\sin\phi^* = w/w^-$. Or, if $\phi > \phi^*$, it lies in the plane perpendicular to $F$ containing $v$ and $x$ and consists of three segments $(v, a)$, $(a, a_1)$ and $(a_1, x)$, where the acute angles between the segments $(v, a)$ and $(a_1, x)$ and the direction normal to the plane $F$ are equal to the critical angle $\phi^*$, and the segment $(a, a_1)$ is in $F$ (Figure 3 (b)).

Figure 3: The diagrams illustrate the local structure of geodesic paths in the different cases.

From these observations, it follows that, in all cases, the weighted distance function can equivalently be defined by
$$c(v,x) = \|\bar\pi(v,x)\| = \min_{a,\,a_1 \in F}\big(w^-|va| + w|aa_1| + w^+|a_1x|\big). \qquad (1)$$
In the case where $w^- \ne w^+$, the minimum is achieved when $a = a_1 = (\tau x, (1-\tau)y, 0)$, where $\tau$ is the unique solution in $(0,1)$ of the equation
$$\frac{w^-\tau}{\sqrt{\tau^2(x^2+y^2) + (z^-)^2}} = \frac{w^+(1-\tau)}{\sqrt{(1-\tau)^2(x^2+y^2) + (z^+)^2}}, \qquad (2)$$
where $v = (0, y, -z^-)$ and $x = (x, 0, z^+)$. The latter leads to an algebraic equation of degree four and it is infeasible to evaluate $c(v,x)$ explicitly. (Although the roots of a quartic can be expressed as a rational function of radicals over its coefficients, they are too complex to be analytically manipulated and used here.)

The case where $w^- = w^+$ is easier, as in that case the geodesic path is either a straight line or a three-segment path as described above and illustrated in Figure 3 (b), and the function $c(v,x)$ has an explicit representation, which is
$$c(v,x) = \begin{cases} w^-\sqrt{x^2 + y^2 + \bar z^2} & \text{if } \sqrt{x^2+y^2} \le \bar z/\bar w,\\[2pt] w\big(\sqrt{x^2+y^2} + \bar z\bar w\big) & \text{if } \sqrt{x^2+y^2} > \bar z/\bar w, \end{cases} \qquad (3)$$
where $\bar z = z^- + z^+$ and $\bar w = \sqrt{(w^-/w)^2 - 1}$. We refer to this case as the explicit case.
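Although $c(v,x)$ has no convenient closed form when $w^- \ne w^+$, it is easy to evaluate numerically: instead of solving the quartic implied by (2), one can minimize the two-segment cost over the parameter $\tau \in (0,1)$, which is equivalent by (1). The sketch below is our own illustration (the function names and the use of SciPy are assumptions, not the paper's implementation); it also implements the explicit formula (3) for the equal-weight case.

```python
import math
from scipy.optimize import minimize_scalar

def weighted_distance(x, y, z_minus, z_plus, w_minus, w_plus):
    """Cost c(v, x) of the geodesic from v = (0, y, -z_minus) to x = (x, 0, z_plus)
    across the weighted plane F = {z = 0}.  The bending point is a = (t*x, (1-t)*y, 0);
    rather than solving the quartic behind eq. (2), we minimize the two-segment
    cost over t in [0, 1]."""
    d2 = x * x + y * y   # squared distance between the projections of v and x onto F

    def cost(t):
        return (w_minus * math.sqrt(t * t * d2 + z_minus ** 2)
                + w_plus * math.sqrt((1 - t) ** 2 * d2 + z_plus ** 2))

    return minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded").fun

def weighted_distance_explicit(x, y, z_minus, z_plus, w_side, w_face):
    """Explicit case w^- = w^+ = w_side >= w_face, following eq. (3)."""
    r = math.hypot(x, y)
    z_bar = z_minus + z_plus
    w_bar = math.sqrt((w_side / w_face) ** 2 - 1.0)
    if r <= z_bar / w_bar:
        return w_side * math.sqrt(r * r + z_bar * z_bar)   # single straight segment
    return w_face * (r + z_bar * w_bar)                    # three-segment, face-using path
```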
In the next lemma we state and prove some useful properties of the function $c(v,x)$.

Lemma 2.1 For a fixed $v$, the weighted distance function $c(v,x)$ has the following properties:
(a) It is continuous and differentiable.
(b) It is symmetric, i.e., $c(v,x) = c(v,-x)$.
(c) It is strictly increasing for $x > 0$.
(d) It is convex.
(e) It has asymptotes for $x \to \pm\infty$ as follows:
(e1) if $w^+ < w^-$ then the asymptotes are $w^+(z^-\cot\phi^* \pm x)$,
(e2) if $w^- < w^+$ then the asymptotes are $w^-(z^+\cot\phi^* \pm x)$,
(e3) in the explicit case $w^+ = w^- \ge w$ the asymptotes are $\pm wx$,
where $\phi^*$ is the critical angle.

Proof: In the explicit case ($w^- = w^+$), all these properties follow straightforwardly from the explicit representation (3). So, we consider the case $w^- \ne w^+$.

From (1) and $a_1 = a = (\tau x, (1-\tau)y, 0)$ it follows that
$$c(v,x) = w^-\sqrt{\tau^2(x^2+y^2) + (z^-)^2} + w^+\sqrt{(1-\tau)^2(x^2+y^2) + (z^+)^2},$$
where $\tau$ is the root of equation (2). The root $\tau$ can be viewed as a function of $x$, which by the implicit function theorem is continuous and differentiable. Hence property (a) holds.

Property (b) follows from the observation that the value of the function $c(v,x)$ is determined by the distance between the projections of the points $v$ and $x$ on $F$, which is $\sqrt{y^2 + x^2}$, where $y$ is fixed.

To prove (c), we consider a point $x' = (x', 0, z^+)$ such that $x' > x \ge 0$ and denote by $\tau'$ the corresponding root of (2). We have $c(v,x') = w^-\sqrt{\tau'^2(x'^2+y^2) + (z^-)^2} + w^+\sqrt{(1-\tau')^2(x'^2+y^2) + (z^+)^2}$. Using the fact that the function $c(v,x)$ is defined as the cost of the shortest path joining $v$ and $x$, we have
$$c(v,x) \le w^-\sqrt{\tau'^2(x^2+y^2) + (z^-)^2} + w^+\sqrt{(1-\tau')^2(x^2+y^2) + (z^+)^2} < w^-\sqrt{\tau'^2(x'^2+y^2) + (z^-)^2} + w^+\sqrt{(1-\tau')^2(x'^2+y^2) + (z^+)^2} = c(v,x').$$

In order to prove (d), we show that for any three equidistant points $x_1 < x_0 < x_2$ on $\ell$, i.e., such that $2x_0 = x_1 + x_2$, the second finite difference $\triangle^2(c; x_1, x_0, x_2) = c(v,x_1) - 2c(v,x_0) + c(v,x_2)$ of the function $c(v,x)$ is positive. We denote by $a_1$ and $a_2$ the bending points of the shortest paths from $v$ to $x_1$ and $x_2$, respectively. Let $a'_0$ be the midpoint of the segment $(a_1, a_2)$. Then, using the definition of $c(v,x_0)$ and the convexity of the Euclidean distance function, we obtain $2c(v,x_0) \le 2(w^-|va'_0| + w^+|a'_0x_0|) < w^-(|va_1| + |va_2|) + w^+(|a_1x_1| + |a_2x_2|) = c(v,x_1) + c(v,x_2)$, which implies $\triangle^2(c; x_1, x_0, x_2) > 0$ and (d).

Finally, we prove (e). Let us assume that $w^+ < w^-$. In this case, using Snell's law we observe that when $x \to +\infty$ the bending point $a(x)$ of the shortest path $\bar\pi(v,x)$ converges to the point $(z^-\tan\phi^*, y, 0)$ (see Figure 2). Hence, we have
$$\lim_{x\to+\infty}\Big(w^+\sqrt{(x - z^-\tan\phi^*)^2 + y^2 + (z^+)^2} + w^-z^-/\cos\phi^* - c(v,x)\Big) = 0.$$
On the other hand,
$$\lim_{x\to+\infty}\Big(w^+\sqrt{(x - z^-\tan\phi^*)^2 + y^2 + (z^+)^2} + w^-z^-/\cos\phi^* - w^+(z^-\cot\phi^* + x)\Big)$$
$$= w^+\lim_{x\to+\infty}\Big(\sqrt{(x - z^-\tan\phi^*)^2 + y^2 + (z^+)^2} - (x - z^-\tan\phi^*)\Big)$$
$$= w^+\lim_{x\to+\infty}\frac{y^2 + (z^+)^2}{\sqrt{(x - z^-\tan\phi^*)^2 + y^2 + (z^+)^2} + (x - z^-\tan\phi^*)} = 0.$$
Combining these two limits we obtain $\lim_{x\to+\infty}\big(c(v,x) - w^+(z^-\cot\phi^* + x)\big) = 0$ and thus (e1) is valid for $x \to +\infty$. The case where $x \to -\infty$ is symmetric.

In the case where $w^- < w^+$ we use Snell's law and observe that the bending point $a(x)$ of the shortest path $\bar\pi(v,x)$ converges to $x - z^+\tan\phi^*$, that is, $\lim_{x\to+\infty}\big(a(x) - (x - z^+\tan\phi^*)\big) = 0$. Then (e2) is established analogously to (e1). $\Box$
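The properties in Lemma 2.1 are easy to verify numerically for concrete parameters, which can serve as a sanity check when implementing the weighted distance function. The snippet below is an illustration only; it reuses the hypothetical weighted_distance() helper from the previous sketch and arbitrary parameter values, and checks monotonicity, convexity via second differences, and the asymptote (e1).

```python
# Numerical sanity check of Lemma 2.1 for one arbitrary choice of parameters.
import math

w_minus, w_plus = 3.0, 2.0
z_minus, z_plus, y = 1.0, 2.0, 1.5
phi_star = math.asin(w_plus / w_minus)      # critical angle, sin(phi*) = w+/w-

xs = list(range(1, 301))
c = [weighted_distance(x, y, z_minus, z_plus, w_minus, w_plus) for x in xs]

# (c) strictly increasing for x > 0
print(all(c[i] < c[i + 1] for i in range(len(c) - 1)))
# (d) convexity via positive second finite differences
print(all(c[i - 1] - 2 * c[i] + c[i + 1] > 0 for i in range(1, len(c) - 1)))
# (e1) the gap to the asymptote w+ * (z- * cot(phi*) + x) decays roughly like 1/x
print(c[-1] - w_plus * (z_minus / math.tan(phi_star) + xs[-1]))
```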
2.3 Refractive Additive Voronoi diagram

Next we study Voronoi diagrams under the weighted distance metric defined above. Given a set $S$ of $k$ points $v_1, \ldots, v_k$ in $F^-$, called sites, and nonnegative real numbers $C_1, \ldots, C_k$, called additive weights, the additive Voronoi diagram for $S$ is a subdivision of the half-space $F^+$ into regions
$$V(v_i, F^+) = \{x \in F^+ \mid \mathrm{dist}(x, v_i) + C_i \le \mathrm{dist}(x, v_j) + C_j \text{ for } j \ne i\},$$
where ties are resolved in favor of the site with smaller additive weight. The regions $V(v_i, F^+)$ are called Voronoi cells. In the classic case, $\mathrm{dist}(\cdot,\cdot)$ is the Euclidean distance. Here, for $\mathrm{dist}(\cdot,\cdot)$, we use the weighted distance function $c(v,x)$.

Let $v$ and $v'$ be two points in $F^-$. We wish to study the intersection of the additive Voronoi diagram of $v$ and $v'$ with $\ell$ with respect to the weighted distance function. Without loss of generality, we assume that $C' = 0$ and $C \ge 0$, where $C$ and $C'$ are the additive weights assigned to $v$ and $v'$, respectively. We denote the intersection of the Voronoi cell of $v$ with $\ell$ by $V(v, v', \ell; C)$, or simply by $V(v)$ when no ambiguity arises. We have
$$V(v, v', \ell; C) = V(v) = \{x \in \ell : c(v,x) + C < c(v',x)\}.$$

In Theorem 2.1, we will show that if $v$ and $v'$ are at the same distance to $F$, then the Voronoi cell $V(v)$ restricted to the line $\ell$ has a very nice structure (i.e., it is an interval). Furthermore, in Remark 2.1, we will show that if $v$ and $v'$ are not at the same distance to $F$ then Theorem 2.1 does not hold. In our algorithm, presented in Section 5, we use the information about the shape of $V(v)$ in order to propagate approximate shortest paths in $D$, and it turns out that we need to consider only sites that are restricted to lie within a half-space of $F$ and at the same distance to $F$. But, as we will see, this case in itself is mathematically challenging and provides valuable insights into the combinatorial structure of these diagrams.

Theorem 2.1 The Voronoi cell $V(v, v', \ell; C)$ is an interval on $\ell$ – possibly empty, finite, or infinite.

Proof: First, consider the case when $C = 0$. We denote the set of points $x$ in $F^+$ such that $c(v,x) = c(v',x)$ by $B(v,v')$ and observe that it is a half-plane perpendicular to $F$. Therefore, the set of points $x$ on $\ell$ for which $c(v,x) = c(v',x)$ is either a single point, the whole line $\ell$, or empty. Correspondingly, the Voronoi cell $V(v, v', \ell; 0)$ is either a half-line, empty, or the whole line $\ell$, and the theorem holds for $C = 0$.

So, we assume that $C > 0$. We consider the equation $c(v',x) - c(v,x) = C$ and claim that it cannot have more than two solutions. Before we prove that claim (Claim 2.1 below), we argue that it implies the theorem. Assume that the equation $c(v',x) - c(v,x) = C$ has at most two solutions. If it does not have any or has just one solution, then the theorem follows straightforwardly. In the case where it has exactly two solutions, the cell $V(v, v', \ell; C)$ has to be either a finite interval on $\ell$, or the complement of a finite interval on $\ell$. From the definition of the Voronoi cell $V(v, v', \ell; C)$ and $C > 0$ it follows that $V(v, v', \ell; C) \subset V(v, v', \ell; 0)$. We know that $V(v, v', \ell; 0)$ is either empty, a half-line, or the whole line. If it is either empty or a half-line, then $V(v, v', \ell; C)$ must be either empty or a finite interval and the theorem holds. It remains to consider the case where $V(v, v', \ell; 0)$ is the whole line $\ell$. We argue that $V(v, v', \ell; C)$ cannot be the complement of a finite interval. We have $V(v, v', \ell; 0) = \ell$ and therefore the line $\ell$ must be parallel to the half-plane $B(v, v')$. Furthermore, the plane containing $B(v, v')$ is a perpendicular bisector of the segment $(v, v')$ and thus the point $v'$ must have coordinates $(0, y', -z^-)$ (Figure 2). In this setting, using Lemma 2.1(e), we observe that $c(v,x)$ and $c(v',x)$ have the same asymptotes at infinity and thus $\lim_{x\to\infty}(c(v',x) - c(v,x)) = 0$. Therefore, the cell $V(v, v', \ell; C)$ must be finite. The theorem follows. $\Box$
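For concrete inputs, the restricted cell $V(v, v', \ell; C)$ can be approximated by directly sampling the two weighted distance functions along $\ell$; by Theorem 2.1 the sampled set is, up to sampling resolution, a single interval. The sketch below is our own illustration of that setup (both sites at the same distance $z^-$ from $F$), reusing the hypothetical weighted_distance() helper from the earlier sketch; it is not the propagation machinery used by the algorithm in Section 5.

```python
import numpy as np

def voronoi_cell_on_line(y_v, y_vp, x_vp, C, z_minus, z_plus, w_minus, w_plus,
                         x_range=(-50.0, 50.0), samples=2001):
    """Approximate V(v, v', l; C) = {x : c(v,x) + C < c(v',x)} by dense sampling,
    where v = (0, y_v, -z_minus) and v' = (x_vp, y_vp, -z_minus).  Only the
    projected distances to the query point matter, so c(v', x) is obtained by
    shifting the x-coordinate by x_vp."""
    xs = np.linspace(*x_range, samples)
    inside = [x for x in xs
              if weighted_distance(x, y_v, z_minus, z_plus, w_minus, w_plus) + C
              < weighted_distance(x - x_vp, y_vp, z_minus, z_plus, w_minus, w_plus)]
    return (min(inside), max(inside)) if inside else None

# Example: by Theorem 2.1 the reported set is a single interval (possibly empty).
print(voronoi_cell_on_line(y_v=1.0, y_vp=3.0, x_vp=4.0, C=0.5,
                           z_minus=1.0, z_plus=2.0, w_minus=3.0, w_plus=2.0))
```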
Next we establish the validity of the claim used in the proof of Theorem 2.1.

Claim 2.1 The equation $c(v',x) - c(v,x) = C$ has at most two solutions.

We will prove the claim by showing that the function $g(x) = c(v',x) - c(v,x)$ is unimodal, i.e., it has at most one local extremum. We establish this property in two steps. First, we prove a characterization property of a local extremum of $g$ (Proposition 2.1). Then, we show that there may be no more than one point possessing that property. We focus our discussion on the case $w^- \ne w^+$, since the other case is either simpler or can be treated analogously.

We denote by $a(x)$ and $a'(x)$ the bending points defining the shortest paths from $v$ and $v'$ to $x$, respectively. We assume that $\ell$ is oriented and denote by $\vec{e}_1$ the positive direction unit vector on $\ell$. Furthermore, let $\alpha(x)$ and $\alpha'(x)$ be the angles between the vectors $\overrightarrow{va(x)}$, $\overrightarrow{v'a'(x)}$ and $\vec{e}_1$, respectively (see Figure 4). These angles are completely defined by the angles $\phi$ and $\theta$ defining the corresponding shortest paths at the bending points $a(x)$ and $a'(x)$. Precisely, we have $\cos\alpha = \sin\phi\cos\theta$. Next, we prove that the angles $\alpha(x_0)$ and $\alpha'(x_0)$ must be equal at any local extremum $x_0$.

Figure 4: If the function $g(x)$ has a local extremum at the point $x_0$ then the angles $\alpha(x_0)$ and $\alpha'(x_0)$ must be equal.

Proposition 2.1 If $x_0$ is a local extremum of the function $g$, then $\alpha(x_0) = \alpha'(x_0)$.

Figure 5: Illustration of the proof of inequality (5).

Proof: The proof is by contradiction. Let us assume that $\alpha(x_0) \ne \alpha'(x_0)$. We denote by $\alpha_\kappa(x_0)$ the angle between the vectors $\overrightarrow{a(x_0)x_0}$ and $\vec{e}_1$. Similarly, $\alpha'_\kappa(x_0)$ denotes the angle between the vectors $\overrightarrow{a'(x_0)x_0}$ and $\vec{e}_1$ (Figure 4). The relation $\cos\alpha = \cos\theta\sin\phi$ and Snell's law readily imply that $\kappa\cos\alpha_\kappa(x_0) = \cos\alpha(x_0)$ and $\kappa\cos\alpha'_\kappa(x_0) = \cos\alpha'(x_0)$, where $\kappa = w^+/w^-$. Thus, we have that $\alpha_\kappa(x_0) \ne \alpha'_\kappa(x_0)$. Without loss of generality, we assume that $\alpha_\kappa(x_0) < \alpha'_\kappa(x_0)$.
Under these assumptions, we show the existence of two points $x_1$ and $x_2$ on $\ell$ such that
$$x_1 < x_0 < x_2, \quad g(x_1) = g(x_2), \quad\text{and}\quad |a(x_2)x_2| + |a'(x_1)x_1| > |a(x_2)x_1| + |a'(x_1)x_2|. \qquad (4)$$
By the assumption that $x_0$ is a local extremum of $g$, it follows that, for any positive real number $\delta > 0$, there are reals $x_1^\delta$ and $x_2^\delta$ inside the interval $(x_0 - \delta, x_0 + \delta)$ such that $x_0 - \delta < x_1^\delta < x_0 < x_2^\delta < x_0 + \delta$ and $g(x_1^\delta) = g(x_2^\delta)$. On the other hand, if $\delta$ converges to zero, then $a(x_2^\delta)$ converges to $a(x_0)$, and $a'(x_1^\delta)$ converges to $a'(x_0)$. Therefore, the inequality $\alpha_\kappa(x_0) < \alpha'_\kappa(x_0)$ implies that for a small enough $\delta_0$, the inequalities $\alpha_\kappa(x_1^{\delta_0}) < \alpha'_\kappa(x_1^{\delta_0})$ and $\alpha_\kappa(x_2^{\delta_0}) < \alpha'_\kappa(x_2^{\delta_0})$ hold. From these inequalities, it follows that if we make the quadrilateral $a(x_2^\delta)a'(x_1^\delta)x_2^\delta x_1^\delta$ planar by rotation of the point $a(x_2^\delta)$ around $\ell$, then the obtained planar quadrilateral will be convex (Figure 5). Therefore we have
$$|a(x_2^{\delta_0})x_2^{\delta_0}| + |a'(x_1^{\delta_0})x_1^{\delta_0}| > |a(x_2^{\delta_0})x_1^{\delta_0}| + |a'(x_1^{\delta_0})x_2^{\delta_0}|, \qquad (5)$$
which proves (4) for $x_1 = x_1^{\delta_0}$ and $x_2 = x_2^{\delta_0}$.

Next, we estimate the sum $c(v,x_1) + c(v',x_2)$. We use (4) and obtain
$$c(v,x_1) + c(v',x_2) = c(v,x_2) + c(v',x_1) = w^-|va(x_2)| + w^+|a(x_2)x_2| + w^-|v'a'(x_1)| + w^+|a'(x_1)x_1|$$
$$= w^-\big(|va(x_2)| + |v'a'(x_1)|\big) + w^+\big(|a(x_2)x_2| + |a'(x_1)x_1|\big) > w^-\big(|va(x_2)| + |v'a'(x_1)|\big) + w^+\big(|a(x_2)x_1| + |a'(x_1)x_2|\big)$$
$$= w^-|va(x_2)| + w^+|a(x_2)x_1| + w^-|v'a'(x_1)| + w^+|a'(x_1)x_2|.$$
On the other hand, by the definition of the weighted distance function (1), we have the inequalities $c(v,x_1) \le w^-|va(x_2)| + w^+|a(x_2)x_1|$ and $c(v',x_2) \le w^-|v'a'(x_1)| + w^+|a'(x_1)x_2|$, which contradict the previous strict inequality. Therefore, the angles $\alpha_\kappa(x_0)$ and $\alpha'_\kappa(x_0)$, and consequently $\alpha(x_0)$ and $\alpha'(x_0)$, must be equal. $\Box$

Our next step is to show that there cannot be two points on $\ell$ satisfying Proposition 2.1. To do that, we study in more detail the relationship between the position of the points $v$ and $x$ and the angle $\alpha(x)$. We observe that, for any fixed $y$ (the $y$-coordinate of $v$), there is a one-to-one correspondence between the real numbers $x$ and the angles $\alpha$. That is, for a fixed $v$, there is a one-to-one correspondence between the points $x$ on $\ell$ and the angle between the shortest path $\bar\pi(v,x)$ and the positive direction on $\ell$. Hence, we may consider and study the well defined function $x = x(y,\alpha)$. We prove the following:

Proposition 2.2 The second mixed derivative of the function $x = x(y,\alpha)$ exists and is negative, i.e., $x_{y\alpha} < 0$.

Let us first show that Proposition 2.2 implies Claim 2.1.

Proof of Claim 2.1: We assume that Proposition 2.2 holds and will show that the function $g(x)$ has at most one local extremum. Recall that the point $v$ has coordinates $(0, y, -z^-)$ and let us denote the coordinates of the point $v'$ by $(x', y', -z^-)$. We first consider the case where $y = y'$. In this case, we observe that $\alpha'(x) = \alpha(x + x')$. In addition, the function $\alpha(x)$ is strictly monotone and, therefore, for any $x$, the angles $\alpha(x)$ and $\alpha'(x)$ are different.
From Proposition 2.1 it follows that in this case the function $g(x)$ has no local extremum.

Next, we consider the case $y \ne y'$. We assume, for the sake of contradiction, that $g(x)$ has two local extrema, say $x_1$ and $x_2$. By Proposition 2.1, $\alpha(x_1) = \alpha'(x_1)$ and $\alpha(x_2) = \alpha'(x_2)$. We denote $\alpha_1 = \alpha(x_1) = \alpha'(x_1)$ and $\alpha_2 = \alpha(x_2) = \alpha'(x_2)$. Then, the difference $x_2 - x_1$ can be represented, using the function $x(y,\alpha)$, in two ways:
$$x_2 - x_1 = \int_{\alpha_1}^{\alpha_2} x_\alpha(y,\alpha)\,d\alpha \quad\text{and}\quad x_2 - x_1 = \int_{\alpha_1}^{\alpha_2} x_\alpha(y',\alpha)\,d\alpha. \qquad (6)$$
Subtracting the last two equalities, we get
$$0 = \int_{\alpha_1}^{\alpha_2} x_\alpha(y,\alpha)\,d\alpha - \int_{\alpha_1}^{\alpha_2} x_\alpha(y',\alpha)\,d\alpha = \int_{y'}^{y}\int_{\alpha_1}^{\alpha_2} x_{y\alpha}(y,\alpha)\,d\alpha\,dy. \qquad (7)$$
The integral on the right side is negative since, by Proposition 2.2, the derivative $x_{y\alpha}$ is negative, $\alpha_1 \ne \alpha_2$, and $y \ne y'$. Hence, we have a contradiction and Claim 2.1 follows. $\Box$

The proof of Proposition 2.2 is rather long and uses elaborate mathematical techniques and manipulations. On the other hand, the rest of the paper is independent of the details of that proof. So, we present the full proof for the interested reader in Appendix A.1.

Corollary 2.1 Consider a plane $H$ in $F^+$ parallel to $F$ and two points $v$ and $v'$ in $F^-$ lying at the same distance from $F$. For any non-negative constant $C$, the Voronoi cell $V(v, v', H; C) = \{x \in H : c(v,x) + C < c(v',x)\}$ is convex.

What can we say about the Voronoi cell of $v$ in $F^+$? The above corollary implies that the intersection between the Voronoi cell and any plane parallel to $F$ is convex. This, as such, is not sufficient to conclude that the cell is convex. We close this section with the following conjecture.

Conjecture 2.1 In the above setting, the Voronoi cell of $v$ in $F^+$, $V(v, v', F^+; C) = \{x \in F^+ : c(v,x) + C < c(v',x)\}$, is convex.

Remark 2.1 Examples showing that equal distance of the points $v$ and $v'$ from the bending plane $F$ is a necessary condition for $c(v',x) - c(v,x)$ to be unimodal in Theorem 2.1 are not difficult to construct. In fact, if we take arbitrary points $v$ and $v'$ in $F^-$ at different distances from $F$, it is very likely that $c(v',x) - c(v,x)$ will have more than one local extremum and hence, for a proper choice of $C$, the equation $c(v',x) - c(v,x) = C$ will have more than two solutions. Such examples can be constructed by choosing arbitrary points $v$ in $F^-$ and $x_1, x_2$ on $\ell$ and then computing a point $v' \in F^-$ so that $\alpha(v, x_i) = \alpha(v', x_i)$, for $i = 1, 2$, where the $\alpha$'s are the angles between the line $\ell$ and the shortest paths coming from $v$ and $v'$, respectively. As a result, $c(v',x) - c(v,x)$ will have local extrema at $x_1$ and $x_2$.

3 Discretization of D

In this section, we describe the construction of a carefully chosen set of additional points placed in $D$, called Steiner points. These Steiner points collectively form a discretization of $D$, which is later used to approximate geodesic paths in $D$. Steiner points are placed on the edges of $D$ and on the bisectors of the dihedral angles of the tetrahedra in $D$. While it may seem more natural to place the Steiner points on the faces of the tetrahedra, placing them on the bisectors proves to be more efficient, leading to a speed-up of approximately $\varepsilon^{-1}$ compared to the alternative placement. Recall that $\varepsilon$ is an approximation parameter in $(0,1)$. We provide a precise estimate on the number of Steiner points, which depends on $\varepsilon$ and the aspect ratios of the tetrahedra of $D$.
3.1 Placement of Steiner points

We use the following definitions:

Definition 3.1 (a) For a point $x \in D$, we define $D(x)$ to be the union of the tetrahedra incident to $x$. We denote by $\partial D(x)$ the set of faces on the boundary of $D(x)$ that are not incident to $x$.
(b) We define $d(x)$ to be the minimum Euclidean distance from $x$ to any point on $\partial D(x)$.
(c) For each vertex $v \in D$, we define a radius $r(v) = d(v)/14$.
(d) For any internal point $x$ on an edge in $D$, we define a radius $r(x) = d(x)/24$. The radius of an edge $e \in D$ is $r(e) = \max_{x\in e} r(x)$.

Using the radii $r(v)$ and $r(x)$ and our approximation parameter $\varepsilon$, we define "small" regions around the vertices and edges of $D$, called vertex and edge vicinities, respectively.

Definition 3.2 (a) The convex hull of the intersection points of the ball $B(v, \varepsilon r(v))$, having center $v$ and radius $\varepsilon r(v)$, with the edges incident to $v$ is called the vertex-vicinity of $v$ and is denoted by $D_\varepsilon(v)$.
(b) The convex hull of the intersections between the "spindle" $\cup_{x\in e} B(x, \varepsilon r(x))$ and the faces incident to $e$ is called the edge-vicinity of $e$ and is denoted by $D_\varepsilon(e)$.

On each edge $e = AB$ of $D$, we define a set of Steiner points as follows. Denote by $AA'$ and $BB'$ the intersections of $e$ with the vertex vicinities $D_\varepsilon(A)$ and $D_\varepsilon(B)$, respectively. The points $A'$ and $B'$ are Steiner points. All other Steiner points on $e$ are placed between $A'$ and $B'$. Let $M_e$ be the point on $e$ such that $d(M_e) = \max_{x\in e} d(x)$. The point $M_e$ is defined to be a Steiner point. Next, we define a sequence of points $M_i$, for $i = 0, 1, \ldots$, on $M_eA'$ by
$$M_0 = M_e \quad\text{and}\quad |M_{i-1}M_i| = \varepsilon r(M_i), \quad\text{for } i = 1, 2, \ldots \qquad (8)$$
All such points $M_i$ between $M_e$ and $A'$ are defined as Steiner points. Analogously, we define the set of Steiner points on $M_eB'$. The number of Steiner points defined in this way on $e$ is bounded by $C_e\frac{1}{\varepsilon}\log\frac{2}{\varepsilon}$, where $C_e < \frac{33}{\sin\alpha(e)}\log\frac{|AB|}{\sqrt{r(A)r(B)}}$ and $\alpha(e)$ is the minimum angle between $e$ and the faces on $\partial D(e)$. (All logarithms with unspecified base are assumed to be base 2.) The total number of Steiner points placed on the edges of $D$ is bounded by $6\Gamma_1\frac{n}{\varepsilon}\log\frac{2}{\varepsilon}$, where $\Gamma_1$ is the average of the constants $C_e$ over all edges of $D$.

The remaining Steiner points lie on the bisectors of the dihedral angles of the tetrahedra in $D$. Steiner points in any tetrahedron $T$ of $D$ are defined to lie on the six bisectors of the dihedral angles of $T$. Let the vertices of $T$ be $A$, $B$, $C$, and $D$ and let us consider one of the bisectors of the dihedral angles of $T$, say $ABP$ (see Figure 6(a)). Next, we describe the placement of Steiner points in the triangle $ABP$. Let the dihedral angle at $AB$ of $T$ be $\gamma$ and let $PH$ be the height (altitude) of $ABP$ (see Figure 6(b)). First, we define an infinite sequence of points $P_0, P_1, \ldots$ on $PH$ by
$$P_0 = P, \quad |P_{i-1}P_i| = \sqrt{\varepsilon/8}\,|HP_i|\sin(\gamma/2), \quad\text{for } i = 1, 2, \ldots \qquad (9)$$

Figure 6: (a) A tetrahedron ABCD and one of its bisectors ABP. (b) Placement of Steiner points on ABP.

Then, we consider the sequence of lines $L_i$ in the plane $ABP$, parallel to $AB$ and containing $P_i$, for $i = 1, 2, \ldots$. Let the intersection points of these lines with $AP$ and $BP$ be $A_i$ and $B_i$, respectively. The points $A_i$ and $B_i$ lying outside of the vertex vicinities $D_\varepsilon(A)$ and $D_\varepsilon(B)$, respectively, are defined to be Steiner points. The intersection points of these lines with the boundary of the union of the edge vicinity $D_\varepsilon(AB)$ and the vertex vicinities $D_\varepsilon(A)$ and $D_\varepsilon(B)$ are also defined to be Steiner points. On each of the segments $A_iB_i$, we define a set of $k_i$ equidistantly placed Steiner points $P_{i,j}$, $j = 1, \ldots, k_i$, where
$$k_i = \left\lfloor\frac{|A_iB_i|}{|P_iP_{i+1}|}\right\rfloor \quad\text{and}\quad |P_{i,j}P_{i,j+1}| = \frac{|A_iB_i|}{k_i + 1}, \quad\text{for } j = 0, 1, \ldots, k_i. \qquad (10)$$
In the above expression, we have assumed that $P_{i,0} = A_i$ and $P_{i,k_i+1} = B_i$.

Definition 3.3 The set of Steiner points in the triangle $ABP$ consists of (a) all points $P_{i,j}$ outside the union $D_\varepsilon(AB) \cup D_\varepsilon(A) \cup D_\varepsilon(B)$, and (b) the intersection points of the lines $L_i$ with the boundary of that union.
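The geometric progression behind this placement is compact enough to generate directly from (9) and (11) below. The following sketch is our own illustration with hypothetical parameter names: it lists the heights $p_i = h\lambda^i$ of the lines $L_i$ together with the lengths $|A_iB_i|$, stopping at the first line closer to $AB$ than $\varepsilon r(e)$; the clipping against the vertex and edge vicinities from Definition 3.3 is omitted for simplicity.

```python
import math

def bisector_line_heights(h, ab_len, gamma, eps, r_e):
    """Heights p_i = h * lam**i of the lines L_i on a bisector triangle ABP
    (altitude h = |PH|, base |AB| = ab_len, dihedral angle gamma), together with
    the segment lengths |A_i B_i| = |AB| * (h - p_i) / h.  Only lines farther
    from AB than eps * r_e are generated; vicinity clipping is not modelled."""
    lam = 1.0 / (1.0 + math.sqrt(eps / 8.0) * math.sin(gamma / 2.0))
    lines, p = [], h
    while True:
        p *= lam                        # p_i = h * lam**i
        if p < eps * r_e:               # the level of line L_{i_1}; stop here
            break
        lines.append((p, ab_len * (h - p) / h))
    return lines

# Example with made-up dimensions: altitude 1, |AB| = 2, dihedral angle of 60 degrees.
lines = bisector_line_heights(h=1.0, ab_len=2.0, gamma=math.radians(60.0), eps=0.1, r_e=0.05)
print(len(lines), lines[:2])
```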
Next, we estimate the number of Steiner points placed in the triangle $ABP$. We denote $h = p_0 = |PH|$ and $p_i = |P_iH|$ for $i = 1, 2, \ldots$. In this notation, we have $p_{i-1} - p_i = |P_iP_{i-1}|$ and $|P_iP_{i-1}| = p_i\sqrt{\varepsilon/8}\sin(\gamma/2)$, which implies
$$p_i = h\lambda^i, \quad\text{where } \lambda = \big(1 + \sqrt{\varepsilon/8}\sin(\gamma/2)\big)^{-1}. \qquad (11)$$
Let $i_1$ be the smallest index such that the line $L_{i_1}$ is at distance smaller than $\varepsilon r(e)$ from $AB$. We denote by $K_1$ the number of Steiner points lying on lines $L_i$ with $i < i_1$, and by $K_2$ the number of the remaining Steiner points in $ABP$. Let us estimate the number $K_1$ first.

The number of Steiner points on a line $L_i$, with $i < i_1$, is $k_i + 2$. Using (10) and (11), we have
$$k_i = \left\lfloor\frac{|A_iB_i|}{|P_iP_{i+1}|}\right\rfloor = \left\lfloor\frac{(h - p_i)|AB|}{h(p_i - p_{i+1})}\right\rfloor = \left\lfloor\frac{(1 - \lambda^i)|AB|}{h\lambda^i(1 - \lambda)}\right\rfloor.$$
Thus, for the number $K_1$, we obtain
$$K_1 = \sum_{i=1}^{i_1-1}(2 + k_i) \le 2(i_1 - 1) + \frac{|AB|}{h(1-\lambda)}\sum_{i=1}^{i_1-1}\frac{1 - \lambda^i}{\lambda^i} \qquad (12)$$
$$= 2(i_1 - 1) + \frac{|AB|}{h(1-\lambda)}\left(\frac{1 - \lambda^{i_1}}{(1-\lambda)\lambda^{i_1-1}} - i_1\right) \le 2(i_1 - 1) + \frac{|AB|}{h(1-\lambda)^2}\cdot\frac{1 - \lambda^{i_1-1}}{\lambda^{i_1-1}}.$$
From the definition of $i_1$ and (11), we have $h\lambda^{i_1} < \varepsilon r(e) \le h\lambda^{i_1-1}$. Therefore,
$$i_1 - 1 = \left\lfloor\log_\lambda\frac{\varepsilon r(e)}{h}\right\rfloor. \qquad (13)$$
From (12) and (13), we obtain
$$K_1 < \frac{|AB|}{\varepsilon r(e)(1-\lambda)^2} + 2\log_{\lambda^{-1}}\frac{h}{\varepsilon r(e)}. \qquad (14)$$

Next, we estimate $K_2$, that is, the number of Steiner points lying on segments $A_iB_i$ with $i \ge i_1$. By our definition, on the segment $A_{i_1}B_{i_1}$ there is a point $M_1$ such that the triangle $ABM_1$ lies entirely inside the edge vicinity. Let $M'_2$ be the intersection point of the boundaries of the edge vicinity $D_\varepsilon(AB)$ and the vertex vicinity $D_\varepsilon(A)$ that lies in the triangle $ABP$. Similarly, let $M''_2$ be the intersection point of the boundaries of the edge vicinity $D_\varepsilon(AB)$, the vertex vicinity $D_\varepsilon(B)$, and the triangle $ABP$. Furthermore, let $i'_2$ be the smallest index such that the segment $A_{i'_2}B_{i'_2}$ is closer to $AB$ than $M'_2$ and, similarly, let $i''_2$ be the smallest index such that the segment $A_{i''_2}B_{i''_2}$ is closer to $AB$ than $M''_2$. All Steiner points on segments $A_iB_i$, with $i \ge i_1$, lie in the quadrilaterals $A_{i_1}A_{i'_2}M'_2M_1$ and $B_{i_1}B_{i''_2}M''_2M_1$. We denote the number of Steiner points in these two quadrilaterals by $K'_2$ and $K''_2$, respectively.

To estimate $K'_2$, we show an upper bound on the number of Steiner points on $A_iB_i$, $i_1 \le i < i'_2$, that lie inside the quadrilateral $A_{i_1}A_{i'_2}M'_2M_1$. Namely, if we denote this number by $k'_i$ and by $M_i$ the intersection point between $A_iB_i$ and $AM_1$, we have
$$k'_i \le 2 + \frac{|A_iM_i|}{p_i - p_{i+1}} = 2 + \frac{|A_{i_1}M_1|\,p_i}{p_{i_1}(p_i - p_{i+1})} = 2 + \frac{|A_{i_1}M_1|}{h\lambda^{i_1}(1-\lambda)}.$$
Thus, the number of Steiner points inside the quadrilateral $A_{i_1}A_{i'_2}M'_2M_1$ is bounded by $K'_2 \le (i'_2 - i_1)\big(2 + \frac{|A_{i_1}M_1|}{h\lambda^{i_1}(1-\lambda)}\big)$. Analogously, for the number of Steiner points inside the quadrilateral $B_{i_1}B_{i''_2}M''_2M_1$, we obtain $K''_2 \le (i''_2 - i_1)\big(2 + \frac{|B_{i_1}M_1|}{h\lambda^{i_1}(1-\lambda)}\big)$. We sum the estimates on $K'_2$ and $K''_2$, use (11) and (13), and obtain
$$K_2 = K'_2 + K''_2 \le (i'_2 + i''_2 - 2i_1)\left(2 + \frac{|A_{i_1}B_{i_1}|}{h\lambda^{i_1}(1-\lambda)}\right) \le (i'_2 + i''_2 - 2i_1)\left(2 + \frac{|AB|(1-\lambda^{i_1})}{h\lambda^{i_1}(1-\lambda)}\right) < (i'_2 + i''_2 - 2i_1)\left(2 + \frac{|AB|}{\varepsilon r(e)\lambda(1-\lambda)}\right). \qquad (15)$$
From the definitions of the indices $i'_2$, $i''_2$, we easily derive that
$$p_{i'_2-1} > \frac{\sqrt{2}\,\varepsilon^2 r(e)r(A)}{|AM_e|}\cos\frac{\angle BAP}{2}, \qquad p_{i''_2-1} > \frac{\sqrt{2}\,\varepsilon^2 r(e)r(B)}{|BM_e|}\cos\frac{\angle PBA}{2},$$
where $M_e$ is the point on $AB$ where the radius $r(e)$ is achieved. These inequalities and (11) imply
$$i'_2 \le 1 + \log_{\lambda^{-1}}\frac{h|AM_e|}{\sqrt{2}\,\varepsilon^2 r(e)r(A)\cos\frac{\angle BAP}{2}} \quad\text{and}\quad i''_2 \le 1 + \log_{\lambda^{-1}}\frac{h|M_eB|}{\sqrt{2}\,\varepsilon^2 r(e)r(B)\cos\frac{\angle PBA}{2}}.$$
Then, we use (13) and obtain
$$i'_2 + i''_2 - 2i_1 \le 2 + 2\log_{\lambda^{-1}}\frac{h|AB|}{\varepsilon^2 r(e)\sqrt{r(A)r(B)}} - 2\log_{\lambda^{-1}}\frac{h}{\varepsilon r(e)} = 2 + 2\log_{\lambda^{-1}}\frac{|AB|}{\varepsilon\sqrt{r(A)r(B)}} = 2\log_{\lambda^{-1}}\frac{|AB|}{\varepsilon\lambda\sqrt{r(A)r(B)}}. \qquad (16)$$
Combining (14), (15) and (16), we obtain
$$K_1 + K_2 \le \frac{2|AB|}{\varepsilon r(e)\lambda(1-\lambda)}\log_{\lambda^{-1}}\frac{|AB|}{\varepsilon\lambda\sqrt{r(A)r(B)}} + \frac{|AB|}{\varepsilon r(e)(1-\lambda)^2} + 2\log_{\lambda^{-1}}\frac{h}{\varepsilon r(e)} + 4\log_{\lambda^{-1}}\frac{|AB|}{\varepsilon\lambda\sqrt{r(A)r(B)}}. \qquad (17)$$
From the last equation, we easily derive that $K_1 + K_2 = C_{ABP}(T)\frac{1}{\varepsilon^2}\log\frac{2}{\varepsilon}$, where the constant $C_{ABP}(T)$ depends on the geometry of the tetrahedron $T$ and is bounded by (see Appendix A.2 for details)
$$C_{ABP}(T) \le \frac{23|AB|}{r(e)\sin^2(\gamma/2)}\log\frac{4|AB|^2h}{r(e)r(A)r(B)}. \qquad (18)$$
Our discussion is summarized in the following lemma.

Lemma 3.1 (a) The number of Steiner points placed on a bisector $ABP$ of a dihedral angle $\gamma$ in a tetrahedron $T$ is bounded by $C_{ABP}(T)\frac{1}{\varepsilon^2}\log\frac{2}{\varepsilon}$, where the constant $C_{ABP}(T)$ depends on the geometric features of $D$ around the edge $AB$ and is bounded by $\frac{23|AB|}{r(e)\sin^2(\gamma/2)}\log\frac{4|AB|^2h}{r(e)r(A)r(B)}$.
(b) The number of segments parallel to $AB$ on a bisector $ABP$ that contain Steiner points is bounded by $C^1_{ABP}(T)\frac{1}{\sqrt{\varepsilon}}\log\frac{2}{\varepsilon}$, where $C^1_{ABP}(T) < \frac{4}{\sin(\gamma/2)}\log^2\frac{4|AB|^2h}{r(e)r(A)r(B)}$.
(c) The total number of Steiner points is bounded by $C(D)\frac{n}{\varepsilon^2}\log\frac{2}{\varepsilon}$, where $n$ is the number of tetrahedra in $D$ and $C(D)$ is the average of $C_{ABP}(T)$ over all $6n$ bisectors in $D$.

By placing Steiner points in this way, as we show in the next lemma, it is possible to approximate cell-crossing segments that have their endpoints outside the vertex and edge vicinities.

Lemma 3.2 Let $ABP$ be the bisector of a dihedral angle $\gamma$ formed by the faces $ABC$ and $ABD$ of a tetrahedron $ABCD$. Let $x_1$ and $x_2$ be points on the faces $ABC$ and $ABD$, respectively, that lie outside of the union $D_\varepsilon(AB) \cup D_\varepsilon(A) \cup D_\varepsilon(B)$. Then, there exists a Steiner point $q$ on $ABP$ such that $\max(\angle x_2x_1q, \angle x_1x_2q) \le \sqrt{\varepsilon/2}$ and $|x_1q| + |qx_2| \le (1 + \varepsilon/2)|x_1x_2|$.

Proof: Clearly, the segment $x_1x_2$ intersects the bisector triangle $ABP$ in a point $x_0$ lying outside the vertex vicinities $D_\varepsilon(A)$, $D_\varepsilon(B)$, and the edge vicinity $D_\varepsilon(AB)$.
Recall that Steiner points in $ABP$ are placed on a set of lines $L_i$ parallel to $AB$ and passing through the sequence of points $P_i$ on the altitude $PH$ of $ABP$. Let $i_0$ be the maximum index such that the line $L_{i_0}$ is farther away from $AB$ than $x_0$ is. We define $q$ to be the Steiner point on the line $L_{i_0}$ closest to $x_0$.

Figure 7: Illustrates Lemma 3.2.

First, we estimate the angles $\angle x_2x_1q = \angle x_0x_1q$ and $\angle x_1x_2q = \angle x_0x_2q$. By our definition of the Steiner points and the Pythagorean theorem, it follows that
$$|x_0q| \le \frac{\sqrt{5}}{4}\,h\lambda^{i_0}\sqrt{\varepsilon/2}\,\sin\frac{\gamma}{2}, \qquad (19)$$
where $h$ and $\lambda$ are as defined above (see (11)). Let $\rho$ be the radius of the smallest sphere containing $x_0$ and $q$ and touching the face $ABC$. It is easily observed that
$$2\rho > \Big(h\lambda^{i_0} - \frac{|x_0q|}{2}\Big)\sin\frac{\gamma}{2} > \frac{8\sqrt{2} - \sqrt{5}}{8\sqrt{2}}\,h\lambda^{i_0}\sin\frac{\gamma}{2}. \qquad (20)$$
If we denote the angle $\angle x_0x_1q$ by $\theta_1$, then $\sin\theta_1 \le \frac{|x_0q|}{2\rho}$, and using (19) and (20), we obtain
$$\sin\theta_1 \le \frac{|x_0q|}{2\rho} < \sqrt{\varepsilon/2}. \qquad (21)$$
The same estimate applies to the angle $\angle x_0x_2q$. Hence, the first inequality of the lemma holds.

Next, we prove the second inequality. We denote by $\theta$, $\theta_1$, and $\theta_2$ the angles of the triangle $qx_1x_2$ at $q$, $x_1$ and $x_2$, respectively (Figure 7). By a trigonometric identity valid in any triangle, we have
$$|x_1q| + |qx_2| = \left(1 + \frac{2\sin(\theta_1/2)\sin(\theta_2/2)}{\sin(\theta/2)}\right)|x_1x_2|.$$
Thus, it suffices to prove that $\frac{2\sin(\theta_1/2)\sin(\theta_2/2)}{\sin(\theta/2)} \le \varepsilon/2$. By (21), it follows that $\sin\theta_1$ and $\sin\theta_2$ are smaller than $\sqrt{\varepsilon/2}$, and from $\varepsilon \le 1$ we have $\theta \ge \pi/2$. Therefore, we obtain
$$\frac{2\sin(\theta_1/2)\sin(\theta_2/2)}{\sin(\theta/2)} = \frac{\sin\theta_1\sin\theta_2}{2\sin(\theta/2)\cos(\theta_1/2)\cos(\theta_2/2)} \le \frac{\varepsilon}{4\sin(\theta/2)\cos(\theta_1/2)\cos(\theta_2/2)}$$
$$= \frac{\varepsilon}{4\sin(\theta/2)\big(\sin(\theta/2) + \sin(\theta_1/2)\sin(\theta_2/2)\big)} \le \frac{\varepsilon}{4\sin^2(\theta/2)} \le \frac{\varepsilon}{2}. \qquad \Box$$

4 Discrete paths

In this section, we use the Steiner points introduced above for the construction of a weighted graph $G_\varepsilon = (V(G_\varepsilon), E(G_\varepsilon))$. We estimate the number of its nodes and edges and then establish that shortest paths in $D$ can be approximated by paths in $G_\varepsilon$. We follow the approach laid out in [4], but the details are substantially different, as we have to handle both the vertex and edge vicinities, as well as the bisectors in 3-d space.

The set of nodes $V(G_\varepsilon)$ consists of the vertices of $D$, the Steiner points placed on the edges of $D$, and the Steiner points placed on the bisectors. The edges of the graph $G_\varepsilon$ join nodes lying on neighboring bisectors, as defined below. A bisector is a neighbor to itself. Two different bisectors are neighbors if the dihedral angles they split share a common face. We say that a pair of bisectors sharing a face $f$ are neighbors with respect to $f$. (So, a single bisector $b$ is a neighbor to itself with respect to both faces forming the dihedral angle it splits.)

First, we define edges joining pairs of Steiner points on neighboring bisectors. Let $p$ and $q$ be nodes corresponding to Steiner points lying on neighboring bisectors $b$ and $b_1$, respectively, that share a common face $f$. We consider the shortest weighted path between $p$ and $q$ of the type $\{p, x, y, q\}$, where $x$ and $y$ belong to $f$ (the points $x$ and $y$ are not necessarily different). We refer to this shortest path as a local shortest path between $p$ and $q$ crossing $f$ and denote it by $\hat\pi(p, q; f)$. Nodes $p$ and $q$ are joined by an edge in $G_\varepsilon$ if neither of the points $x$ or $y$ lies on an edge of $f$. Such an edge is said to cross the face $f$.
In the case where p and q lie on the same bisector, say b , splitting an angle between faces f 1 and f 2 , we define two parallel edges in G ε joining p and q – one crossing f 1 and another crossing f 2 . 22 The cost of an edge ( p, q ) in G ε that crosses a face f is defined as the cost of the local shortest path ˆ π ( p, q ; f ) and is denoted by c ( p, q ; f ), or simply by c ( p, q ) when no ambiguity arises. Formally, we have c ( p, q ) = c ( p, q ; f ) = ‖ ˆ π ( p, q ; f ) ‖ = min x,y ∈ f ( ‖ px ‖ + ‖ xy ‖ + ‖ yq ‖ ) . (22) Next, we consider a node p of G ε lying on an edge e of D . The node p can be either a Steiner point on e or a vertex of D incident to e . It is adjacent to nodes lying in tetrahedra in D ( e ). The edges of G ε incident to p are associated with pairs of neighboring bisectors as follows. We consider a tetrahedron t in D ( e ), and describe edges incident to p in t . Let f 1 and f 2 be the two faces of t incident to e , and let b be the bisector of the dihedral angle formed by f 1 and f 2 . We define edges between p and nodes lying on bisectors in t that are neighbors of b . There are four such bisectors – two with respect to f 1 and two with respect to f 2 . For a node q on a neighboring bisector b 1 sharing, say, the face f 1 with b , we consider the local shortest path ˆ π ( p, q ; f 1 ). By definition, ˆ π ( p, q ; f 1 ) = { p, x, q } , where x ∈ f 1 . We define an edge between p and q if and only if the point x defining the local shortest path is in the interior of f 1 . The cost of the edge ( p, q ) equals the cost of the local shortest path ˆ π ( p, q ; f 1 ), i.e., c ( p, q ) = c ( p, q ; f 1 ) = ‖ ˆ π ( p, q ; f 1 ) ‖ = min x ∈ f 1 ( ‖ px ‖ + ‖ xq ‖ ) . We associate the edge ( p, q ) to b , b 1 and f 1 and say that it crosses f 1 . Furthermore, p is joined to nodes on b by pair of parallel edges, provided that the corresponding local shortest paths do not touch the edges of D – one crossing f 1 and the other crossing f 2 . Lemma 4.1 We have | V ( G ε ) | = O ( n ε 2 log 1 ε ) and | E ( G ε ) | = O ( n ε 4 log 2 1 ε ) . Proof: The estimate on the number of nodes follows directly from Lemma 3.1 and the fact that D has O ( n ) vertices. The number of edges in G ε can be estimated as follows. There are O ( n ) faces in D and at most 21 pairs of neighbor bisectors with respect to a fixed face in D . By Lemma 3.1(a), there are O ( 1 ε 4 log 2 1 ε ) pairs of nodes lying on two fixed neighboring bisectors. When combined, these three facts prove the estimate on the number of edges of G ε . ✷ Paths in G ε are called discrete paths . The cost, c ( π ), of a discrete path π is the sum of the costs of its edges. Note that if we replace each of the edges in a discrete path π by the corresponding (at most three) segments forming the shortest path used to compute its cost we obtain a path in D with cost c ( π ). Next, we state the main theorem of this section. Theorem 4.1 Let ̃ π ( v 0 , v ) be a shortest path between two different vertices v 0 and v in D . There exists a discrete path π ( v 0 , v ) , such that c ( π ( v 0 , v )) ≤ (1 + ε ) ‖ ̃ π ( v 0 , v ) ‖ . Proof: We prove the theorem by constructing a discrete path π ( v 0 , v ) whose cost is as required. Recall that the shortest path ̃ π ( v 0 , v ) is a linear path consisting of cell-crossing, face-using, and edge-using segments that satisfy Snell’s law at each bending point. We construct the discrete path π by successive modifications of ̃ π described below as steps. 
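To make (22) concrete, the following is a small numerical sketch of the edge cost c(p, q; f): the minimum, over points x and y on the shared face f, of w_p|px| + w_f|xy| + w_q|yq|, where w_p and w_q are the weights of the cells containing p and q and w_f is the weight associated to f. The exact local shortest path ˆπ(p, q; f) obeys Snell's law; the sketch simply minimizes over a dense barycentric grid and does not check the additional requirement that the optimal x and y be interior to f.

    import math
    from itertools import product

    def dist(U, V):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(U, V)))

    def bary(face, a, b):                 # point with barycentric coordinates (a, b, 1-a-b)
        c = 1.0 - a - b
        return tuple(a * p0 + b * p1 + c * p2 for p0, p1, p2 in zip(*face))

    def edge_cost(p, q, face, w_p, w_f, w_q, grid=20):
        """Coarse approximation of c(p, q; f) = min over x, y in f of
        w_p|px| + w_f|xy| + w_q|yq|  (equation (22))."""
        samples = [bary(face, i / grid, j / grid)
                   for i in range(grid + 1) for j in range(grid + 1 - i)]
        return min(w_p * dist(p, x) + w_f * dist(x, y) + w_q * dist(y, q)
                   for x, y in product(samples, samples))

    # Example: p and q on opposite sides of the face with vertices (0,0,0), (1,0,0), (0,1,0).
    face = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
    print(edge_cost((0.3, 0.3, 0.5), (0.4, 0.2, -0.5), face, w_p=1.0, w_f=1.0, w_q=2.0))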
23 x 1 p b s x 2 (a) (b) ̃ π 2 ̃ π ̃ π 1 ̃ π 1 ̃ π x 3 x 4 x 5 p 4 p 3 p 2 p 1 x 1 x 2 ̃ π 2 Figure 8: (a) Replacement of a cell-crossing segment s = ( x 1 , x 2 ) by a two-segment path { x 1 , p, x 2 } . (b) Replacement of ̃ π 1 by a path ̃ π 2 joining Steiner points p i . Note that edges ( p i p i +1 ) denote local shortest paths, rather than straight-line segments. Step 1: In this step, we replace each of the cell-crossing segments of ̃ π , which satisfy the conditions of Lemma 3.2, by a two-segment path through a Steiner point. Precisely, let s = ( x 1 , x 2 ) be a cell-crossing segment in ̃ π (Figure 8 (a)). Let f 1 and f 2 be the faces containing x 1 and x 2 , respectively. Let e = ( A, B ) be the common edge between f 1 and f 2 . Assume that s is outside of the union of the edge and vertex vicinities D ε ( e ) ∪ D ε ( A ) ∪ D ε ( B ). We refer to such segment as vicinity-free 4 . Then, according to Lemma 3.2, there is a Steiner point p on the bisector b splitting the dihedral angle formed by f 1 and f 2 such that | x 1 p | + | px 2 | ≤ (1 + ε/ 2) | x 1 x 2 | . So, in this step, each cell-crossing and vicinity-free segment s = ( x 1 , x 2 ) is replaced by two-segment path { x 1 , p, x 2 } , where p is the approximating Steiner point as described above. Clearly, after this step, we obtain a path joining v 0 and v , whose cost does not exceed (1 + ε/ 2) ‖ ̃ π ‖ . We denote this path by ̃ π 1 (see Figure 8 (b)). Step 2: In this step, we consider the sequence of Steiner points added as new bending points along ̃ π 1 in Step 1. In the case where two consecutive Steiner points are split by a single bending point or a face-using segment on ̃ π 1 , we replace the sub-path between them by the corresponding local shortest path. Precisely, assume that p 1 and p 2 are consecutive Steiner points along ̃ π 1 and the sub-path between them is either { p 1 , ̃ x, p 2 } or { p 1 , ̃ x, ̃ y, p 2 } , ̃ x and ̃ y are bending points on the face f , shared by the two neighboring tetrahedra containing p 1 and p 2 , respectively. So, in Step 2, we replace all such sub-paths by the local shortest paths ˆ π ( p 1 , p 2 ; f ) = { p 1 , x, y, p 2 } , using (22). We denote the obtained path by ̃ π 2 (Figure 8 (b)). Clearly, ̃ π 2 is a path joining v 0 and v , whose cost does not exceed that of ̃ π 1 . Hence, ‖ ̃ π 2 ‖ ≤ ‖ ̃ π 1 ‖ ≤ (1 + ε/ 2) ‖ ̃ π ‖ . (23) In the following two steps, we identify the portions of ̃ π 2 that lie inside the vertex and edge vicinities and replace them with discrete paths using the corresponding vertices and edges. 4 Note that such a segment still can have an end-point in a vertex or edge-vicinity related to other vertices or edges incident to f 1 and f 2 . 24 Step 3: Follow ̃ π 2 from v 0 to v and let a 0 be the last bending point on ̃ π 2 that lies inside the vertex vicinity D ε ( v 0 ). Next, let b 1 be the first bending point after a 0 that is in the vertex vicinity, say D ε ( v 1 ). Likewise, let a 1 be the last bending point in D ε ( v 1 ). Continuing in this way, we define a sequence of, say k +1 for some k ≥ 1, different vertices v 0 , v 1 , . . . , v k = v and a sequence of bending points a 0 , b 1 , a 1 , . . . , a k − 1 , b k on ̃ π 2 , such that for i = 0 , . . . , k , points b i , a i are in D ε ( v i ) (we assume b 0 = v 0 , a k = v ). Furthermore, by our definition, portions of ̃ π 2 between a i and b i +1 do not intersect any vertex vicinities. We partition ̃ π 2 into portions ̃ π 2 ( v 0 , a 0 ) , ̃ π 2 ( a 0 , b 1 ) , ̃ π 2 ( b 1 , a 1 ) , . . . , ̃ π 2 ( b k , v ) . 
(24) The portions ̃ π 2 ( a i , b i +1 ), for i = 0 , . . . , k − 1, are called the between-vertex-vicinities portions, while the portions ̃ π 2 ( b i , a i ), for i = 0 , . . . , k , are called the vertex-vicinity portions. We define path ̃ π 3 by replacing each of the vertex-vicinities portions by a two segment path trough the corresponding vertex and show that the cost of ̃ π 3 is bounded by (1 + ε/ 6) ‖ ̃ π 2 ‖ . Consider a between-vertex-vicinities portion ̃ π 2 ( a i , b i +1 ) for some 0 ≤ i < k − 1. If this portion consists of a single segment ( a i , b i +1 ), then the vertices v i and v i +1 must be adjacent in D and we define ̃ π 3 ( v i , v i +1 ) to be the segment ( v i , v i +1 ). The length of ( v i , v i +1 ) is estimated by using the triangle inequality and the definition of the vertex-vicinities as follows: | v i v i +1 | ≤ | v i a i | + | a i b i +1 | + | b i +1 v i +1 | ≤ | a i b i +1 | + ε ( r ( v i ) + r ( v i +1 )) ≤ | a i b i +1 | + ε 14 ( d ( v i ) + d ( v i +1 )) ≤ | a i b i +1 | + ε 7 | v i v i +1 | . (25) To estimate the cost of the segment ( v i , v i +1 ), we observe that ( a i , b i +1 ) lies inside a tetrahe- dron incident to ( v i , v i +1 ). Thus, the weight of ( v i , v i +1 ) is at most the weight of ( a i , b i +1 ). This observation and (25) readily imply ‖ ̃ π 3 ( v i , v i +1 ) ‖ = ‖ v i v i +1 ‖ ≤ (1 + ε 6 ) ‖ a i b i +1 ‖ = (1 + ε 6 ) ‖ ̃ π 2 ( a i , b i +1 ) ‖ . (26) In the general case, where ̃ π 2 ( a i , b i +1 ) contains at least two segments, we follow the bending points along ̃ π 2 ( a i , b i +1 ) and define X to be the last bending point on the boundary ∂D ( v i ) (see Definition 3.1). If the path ̃ π 2 ( a i , b i +1 ) lies entirely in D ( v i ), then we set X = b i +1 . Thus, the bending points on the path ̃ π 2 between a i and X lie in the tetrahedra incident to v i . Let ̇ w i be the minimum weight among the segments of the path ̃ π 2 ( a i , X ) and let x be the first bending point after a i incident to a segment, whose weight is ̇ w i . Analogously, define the bending points Y and y , by following the bending points of the backward path ̃ π 2 ( b i +1 , a i ) from b i +1 . Note that x precedes y on the path ̃ π 2 ( a i , b i +1 ). We define the path ̃ π 3 ( v i , v i +1 ) as the concatenation of the segments ( v i , x ), ( y, v i +1 ) and the portion ̃ π 2 ( x, y ), i.e., ̃ π 3 ( v i , v i +1 ) = { ( v i , x ) , ̃ π 2 ( x, y ) , ( y, v i +1 ) } . Next, we estimate the cost of ̃ π 3 ( v i , v i +1 ). First, we observe that the weight of the segment ( v i , x ) cannot exceed ̇ w i . Then, we use the triangle inequality and the fact that a i is inside the vertex vicinity D ε ( v i ), obtaining ‖ v i x ‖ ≤ ̇ w i | v i a i | + ̇ w i | ̃ π 2 ( a i , x ) | ≤ ̇ w i | v i a i | + ‖ ̃ π 2 ( a i , x ) ‖ ≤ ̇ w i εr ( v i ) + ‖ ̃ π 2 ( a i , x ) ‖ ≤ ̇ w i ε 14 d ( v i ) + ‖ ̃ π 2 ( a i , x ) ‖ . 25 Analogously, for the cost of the segment ( y, v i +1 ), we have ‖ yv i +1 ‖ ≤ ̇ w i +1 ε 14 d ( v i +1 ) + ‖ ̃ π 2 ( y, b i +1 ) ‖ . Using these estimates, and the way we defined the path ̃ π 3 ( v i , v i +1 ), the weights ̇ w i , ̇ w i +1 , the distances d ( v i ), d ( v i +1 ), and the points X , Y , we obtain ‖ ̃ π 3 ( v i , v i +1 ‖ ≤ ‖ ̃ π 2 ( a i , b i +1 ) ‖ + ε 14 ( ̇ w i d ( v i ) + ̇ w i +1 d ( v i +1 )) ≤ (27) ‖ ̃ π 2 ( a i , b i +1 ) ‖ + ε 14( ‖ ̃ π 3 ( v i , X ) ‖ + ‖ ̃ π 3 ( Y, v i +1 ) ‖ ) ≤ ‖ ̃ π 2 ( a i , b i +1 ) ‖ + ε 7 ( ‖ ̃ π 3 ( v i , v i +1 ) ‖ , which implies the estimate (26) in the general case. 
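For completeness, the last step can be spelled out: (27) gives ‖π̃3(v_i, v_{i+1})‖ (1 − ε/7) ≤ ‖π̃2(a_i, b_{i+1})‖, and since ε ≤ 1,
\[
\|\tilde{\pi}_3(v_i,v_{i+1})\| \;\le\; \frac{\|\tilde{\pi}_2(a_i,b_{i+1})\|}{1-\varepsilon/7}
\;\le\; \Bigl(1+\frac{\varepsilon}{6}\Bigr)\,\|\tilde{\pi}_2(a_i,b_{i+1})\|,
\]
because (1 + ε/6)(1 − ε/7) = 1 + ε(1 − ε)/42 ≥ 1 whenever 0 < ε ≤ 1; the same rearrangement turns (25) into (26) in the single-segment case.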
Applying the above construction to each pair of consecutive vertices in the sequence v 0 , v 1 , . . . , v k = v , we obtain a linear path ̃ π 3 ( v 0 , v ) = { ̃ π 3 ( v 0 , v 1 ) , ̃ π 3 ( v 1 , v 2 ) , . . . , ̃ π 3 ( v k − 1 , v ) } , that has no bending points inside vertex vicinities except for the vertices v 0 , v 1 , . . . , v k = v . We estimate the cost of this path by summing up (26), for i = 0 , . . . , k − 1, and obtain ‖ ̃ π 3 ( v 0 , v ) ‖ ≤ (1 + ε 6) k − 1 ∑ i =0 ‖ ̃ π 2 ( a i , b i +1 ) ‖ ≤ (1 + ε 6 ) ‖ ̃ π 2 ( v 0 , v ) ‖ . (28) Observe that the path ̃ π 3 constructed above may contain self intersections (e.g., if one and the same vertex vicinity is visited twice by ̃ π 2 ). It is also possible that ̃ π 3 may contain consecutive face-using segments. Hence, at the end of Step 3, we traverse the obtained path and compress it. That is, we remove the loops in case of self intersections. We replace the consecutive face-using segments (which obviously lie in the same face) by the single face- using segment joining their free end-points. We denote the compressed path again by ̃ π 3 . Clearly, compressing reduces the cost of the path and hence the estimate (28) remains true for ̃ π 3 . Next, in Step 4, using a similar approach as above, we further partition each vertex- vicinity-portion ̃ π 3 ( v i , v i +1 ) into between-edge-vicinities portions and edge-vicinity portions. Then, we replace each edge-vicinity portion by an edge-using segment plus 2 additional segments and estimate the cost of the resulting path ̃ π 4 . Step 4: First we define analogues of vertex and between-vertex vicinities for edges. Let ( v i , a ) be the first segment of the path ̃ π 3 ( v i , v i +1 ). If a is not inside an edge-vicinity, we define a i, 0 = v i . Otherwise, if a is inside an edge-vicinity, say D ε ( e 0 ), and let a ′ be the first bending point on the path ̃ π 3 ( v i , v i +1 ) after v i lying on ∂D ( e 0 ), then we de- fine a i, 0 to be the last bending point on ̃ π 3 ( v i , a ′ ) that is inside D ε ( e 0 ). Next, let b i, 1 be the first bending point on ̃ π 3 ( a i, 0 , v i +1 ) that is inside an edge-vicinity, say D ε ( e 1 ) and let b ′ be the first bending point on ̃ π 3 ( b i, 1 , v i +1 ) that is on ∂D ( e 1 ). We define a i, 1 as the last bending point on ̃ π 3 ( b i, 1 , b ′ ) that is in the same edge vicinity as b i, 1 . Assume that, following this approach, the sequence of bending points a i, 0 , b i, 1 , a i, 1 , . . . , a i,k i − 1 , b i,k i has been defined. They partition the portion ̃ π 3 ( v i , v i +1 ) into sub-portions ̃ π 3 ( v i , v i +1 ) = 26 a ′ i,j b i,j b ′ i,j q i,j a i,j p i,j ̃ π 3 e j ̃ π 4 Figure 9: Replacement of subpath ̃ π 3 ( b i,j , a i,j ) by the edge-using segment ( q i,j p i,j ). { ̃ π 3 ( v i , a i, 0 ) , . . . , ̃ π 3 ( a i,j − 1 , b i,j ) , ̃ π 3 ( b i,j , a i,j ) , . . . , ̃ π 3 ( b i,k i , v i +1 ) } . Portions between a i,j and b i,j +1 , for j = 0 , . . . , k i − 1, are called the between-edge-vicinity portions. Portions between b i,j and a i,j , for j = 0 , . . . , k i , are called the edge-vicinity portions ( b i, 0 = v i and a i,k i = v i +1 ). According to our construction, the bending points a i, 0 , b i, 1 , a i, 1 , . . . , a i,k i − 1 , b i,k i , defining the above partition lie inside edge vicinities. Moreover, consecutive points b i,j and a i,j , for j = 0 , . . . , k i , are in one and the same edge-vicinity D ε ( e j ). For j = 0 , . . . 
, k i , let b ′ i,j and a ′ i,j be the orthogonal projections of the points b i,j and a i,j onto the edge e j , respectively (Figure 9). Let p i,j and q i,j be the Steiner points on e j defining the largest sub-interval of the interval ( a ′ i,j , b ′ i,j ) on e j and assume that p i,j is between a ′ i,j and q i,j . (In the case where the interval ( a ′ i,j , b ′ i,j ) contains no Steiner points, we define p i,j = q i,j to be the closest Steiner point to a ′ i,j on e j .) In ̃ π 4 , the edge-using segment ( q i,j p i,j ) will replace in ̃ π 3 the subpath ̃ π 3 ( b i,j , a i,j ). Let us estimate the resulting error. From the definition of the edge vicinity D ε ( e j ), the Steiner points on e j , and the radii r ( p i,j ) and r ( q i,j ), it is easy to derive that | p i,j a i,j | ≤ 3 2 εr ( p i,j ) and | b i,j q i,j | ≤ 3 2 εr ( q i,j ) . (29) Furthermore, by our construction and the fact that ( q i,j , p i,j ) is an edge-using segment, it follows that ‖ q i,j p i,j ‖ ≤ ‖ ̃ π 3 ( b i,j , a i,j ) ‖ . (30) Next, we modify between-edge-vicinities portions ̃ π 3 ( a i,j , b i,j +1 ), into paths ̃ π 4 ( p i,j , q i,j +1 ), joining Steiner points p i,j and q i,j +1 , and not intersecting any vertex or edge vicinities. We apply a construction analogous to the one used in Step 3 to define the paths ̃ π 3 ( v i , v i +1 ). We fix j and consider the between-edge-vicinities portion ̃ π 3 ( a i,j , b i,j +1 ). We first consider the special case where ̃ π 3 ( a i,j , b i,j +1 ) is the segment ( a i,j , b i,j +1 ). In this case, we observe that e j and e j +1 must be edges of the tetrahedron containing the segment ( a i,j , b i,j +1 ) and define ̃ π 4 ( p i,j , q i,j +1 ) = ( p i,j , q i,j +1 ). We estimate the length of this segment using the triangle inequality, the estimates (29) and Definition 3.1 as follows | p i,j q i,j +1 | ≤ | p i,j a i,j | + | a i,j b i,j +1 | + | b i,j +1 q i,j +1 | ≤ 3 ε 2 ( r ( p i,j ) + r ( q i,j +1 )) + | a i,j b i,j +1 | ≤ ε 16( d ( p i,j ) + d ( q i,j +1 )) + | a i,j b i,j +1 | ≤ ε 8 | p i,j q i,j +1 | + | a i,j b i,j +1 | . Using this estimate and the observation that the weight of the segment ( p i,j , q i,j +1 ) cannot exceed the weight of ( a i,j , b i,j +1 ), we obtain ‖ ̃ π 4 ( p i,j , q i,j +1 ) ‖ = ‖ p i,j q i,j +1 ‖ ≤ (1 + ε 7 ) ‖ a i,j b i,j +1 ‖ = (1 + ε 7) ‖ ̃ π 3 ( a i,j , b i,j +1 ) ‖ . (31) 27 Next, we consider the general case, where ‖ ̃ π 3 ( a i,j , b i,j +1 ) ‖ consists of at least two seg- ments. Let X be the first bending point after a i,j that is on the boundary ∂D ( e j ). If the path ̃ π 3 ( a i,j , b i,j +1 ) is entirely inside D ( e j ), then we set X = b i,j +1 . Furthermore, let ̇ w ′ j be the minimum weight among the segments in ̃ π 3 ( a i,j , X ), and let x be the first bending point after a i,j that is an end-point of a segment whose weight is ̇ w ′ j . We define the weight ̇ w ′ j +1 , and the bending points Y , and y , analogously with respect to b i,j +1 and the edge vicinity D ε ( e j +1 ). It follows that the point x precedes y along ̃ π 3 ( a i,j , b i,j +1 ). We define the portion of the path ̃ π 4 joining p i,j and q i,j +1 by ̃ π 4 ( p i,j , q i,j +1 ) = { ( p i,j , x ) , ̃ π 3 ( x, y ) , ( y, q i,j +1 ) } and estimate its cost. Let us first estimate the cost of the segment ( p i,j , x ). We observe that ‖ p i,j x ‖ ≤ ̇ w ′ j | p i,j x | , that | p i,j , x | ≤ | p i,j a i,j | + | ̃ π 3 ( a i,j , x ) | (by triangle inequality), and that the segments on the path ̃ π 3 ( a i,j , x ) have weight greater than or equal to ̇ w ′ j . 
Using these observations, (29), and Definition 3.1, we obtain ‖ p i,j x ‖ ≤ ̇ w ′ j | p i,j a i,j | + ̇ w ′ j | ̃ π 3 ( a i,j , x ) | ≤ ̇ w ′ j | p i,j a i,j | + ‖ ̃ π 3 ( a i,j , x ) ‖ ≤ 3 ε 2 ̇ w ′ j r ( p i,j ) + ‖ ̃ π 3 ( a i,j , x ) ‖ = ε 16 ̇ w ′ j d ( p i,j ) + ‖ ̃ π 3 ( a i,j , x ) ‖ . (32) Analogously, we have ‖ yq i,j +1 ‖ ≤ ε 16 ̇ w ′ j +1 d ( q i,j +1 ) + ‖ ̃ π 3 ( y, b i,j +1 ) ‖ . (33) Using the definition of the path ̃ π 4 ( p i,j , q i,j +1 ), the estimates (32) and (33), and the definition of the distances d ( p i,j ) and d ( q i,j +1 ), the weights ̇ w ′ j and ̇ w ′ j +1 , and the points X and Y , we obtain ‖ ̃ π 4 ( p i,j , q i,j +1 ) ‖ = ‖ ̃ π 3 ( a i,j , b i,j +1 ) ‖ + ε 16( ̇ w ′ j d ( p i,j ) + ̇ w ′ j +1 d ( q i,j +1 )) (34) ≤ ‖ ̃ π 3 ( a i,j , b i,j +1 ) ‖ + ε 16( ‖ ̃ π 3 ( p i,j , X ) ‖ + ‖ ̃ π 3 ( Y, q i,j +1 ) ‖ ) ≤ ‖ ̃ π 3 ( a i,j , b i,j +1 ) ‖ + ε 8 ‖ ̃ π 3 ( p i,j , q i,j +1 ) ‖ , which implies estimate (31), in the general case. Finally, combining segments ( q i,j , p i,j ) and paths ̃ π 4 ( p i,j , q i,j +1 ), for j = 0 , . . . , k i , we construct a path ̃ π 4 ( v i , v i +1 ) = { ( v i , p i, 0 ) , ̃ π 4 ( p i, 0 , q i, 1 ) , ( q i, 1 , p i, 1 ) , ̃ π 4 ( p i, 1 , q i, 2 ) , . . . , ̃ π 4 ( p i,k i − 1 , q i,k i ) , ( q i,k i , v i +1 ) } . This path has no bending points in any of the edge or vertex vicinities. Its cost can be bounded using (30) and (31) as follows ‖ ̃ π 4 ( v i , v i +1 ) ‖ = k i ∑ j =0 ‖ q i,j , p i,j ‖ + k i − 1 ∑ j =0 ‖ ̃ π 4 ( p i,j , q i,j +1 ) ‖ ≤ (35) k i ∑ j =0 ‖ ̃ π 3 ( b i,j , a i,j ) ‖ + k i − 1 ∑ j =0 (1 + ε 7 ) ‖ ̃ π 3 ( a i,j , b i,j +1 ) ‖ ≤ (1 + ε 7) ‖ ̃ π 3 ( v i , v i +1 ) ‖ , where we assume v i = b i, 0 = q i, 0 and v i +1 = a i,k i = p i,k i . 28 The paths ̃ π 4 ( v i , v i +1 ), for i = 0 , . . . k − 1, form a linear path ̃ π 4 ( v 0 , v ), whose cost is estimated using (35), (26), and (23) by ̃ π 4 ( v 0 , v ) ≤ k − 1 ∑ i =0 (1+ ε 7 ) ‖ ̃ π 3 ( v i , v i +1 ) ‖ = (1+ ε 7) ‖ ̃ π 3 ( v 0 , v ) ‖ ≤ (1+ ε 3) ‖ ̃ π 2 ( v 0 , v ) ‖ ≤ (1+ ε ) ‖ ̃ π ( v 0 , v ) ‖ . (36) As in Step 3, it is possible for ̃ π 4 to contain self-intersections and consecutive face-using segments. Hence, we traverse ̃ π 4 and compress it by removing loops and by replacing con- secutive face-using segments. The obtained path is denoted again by ̃ π 4 , and estimate (36) is valid. The bending points defining ̃ π 4 can be partitioned into two groups. The first group consists of bending points corresponding to nodes of the graph G ε , i.e., Steiner points on bisectors, Steiner points on edges, and vertices of D . The second group consists of the remaining bending points of ̃ π 4 , which are bending points inside the faces of D . We complete the proof of the theorem by showing that the sequence of the nodes in the first group defines a discrete path π ( v 0 , v ) whose cost c ( π ( v 0 , v )) ≤ ‖ ̃ π 4 ( v 0 , v ) ‖ . It suffices to show that any two consecutive nodes (bending points in the first group) along the path ̃ π 4 are adjacent in the approximation graph G ε . To show this, we review closely the structure of the path ̃ π 4 . In Step 3, portions of ̃ π 2 related to vertex vicinities have been replaced by two segment portions through-vertices of D . Furthermore, we observe that the segments ( v i , x ) created in Step 3 are either a face- using segments or join v i to a Steiner point on a bisector. The same applies to segments ( y, v i +1 ). 
Similarly, in Step 4, portions related to edge-vicinities have been replaced by three segment portions visiting corresponding edges. Again segments ( p i,j , x ) are either face-using segments or join p i,j to a node on a bisector, that is a neighbor of the bisector incident to the edge containing p i,j . The same applies to the segments ( y, q i,j +1 ). In summary, the segments created in Steps 3 and 4 are of one of the following two types: 1. A face-using segment with one of its endpoint being a (node) vertex of D or a Steiner point on an edge of D . 2. A segment joining two nodes, at least one of them being a Steiner point on an edge of D or a vertex of D . The remaining segments in ̃ π 4 are cell-crossing and face-using segments, whose endpoints are outside any vertex or edge vicinity. All the cell-crossing segments in ̃ π 4 were created during Steps 1 and 2. Hence, one of their endpoints is a (node) Steiner point on a bisector of a tetrahedron. Finally, due to the compressing, there are no consecutive face-using segments in ̃ π 4 . Now, let p and q be two consecutive nodes along the path ̃ π 4 . We show that p and q are adjacent in G ε . We consider, first, the case where at least one of the nodes, say p , is a vertex of D . Let x be the bending point following p along the path ̃ π 4 . By the definition of bending points adjacent to the vertices (in Step 3), we know that ( p, x ) is a face-using segment followed by a cell-crossing segment ( x, x 1 ), joining x to a (node) Steiner point on a bisector lying in one of the tetrahedra incident to the face that contains ( p, x ). So, q = x 1 29 and q is inside a tetrahedron incident to p . Thus, p and q are adjacent in G ε . The case where at least one of the nodes p or q is a Steiner point on an edge of D can be treated analogously. Assume now that both p and q are Steiner points on bisectors. Let x and x 1 be the bending points following p along ̃ π 4 . The point x has to be a bending point on a face of the tetrahedron containing p . The segment ( x, x 1 ) is either a cell-crossing or a face-using segment. In the first case, q must coincide with x 1 and is adjacent to p in G ε , since it lies in a tetrahedron that is a neighbor to the one containing p . In the second case, where ( x, x 1 ) is a face-using segment, we consider the bending point x 2 that follows x 1 along the path ̃ π 4 . The segment ( x 1 , x 2 ) must be a cell-crossing segment. Thus, in this case, q = x 2 is adjacent to p , because the tetrahedra containing p and q are neighbors. We have shown that any pair p and q of consecutive nodes on the path ̃ π 4 are adjacent in G ε . Hence, we define a discrete path π ( v 0 , v ) to be the path in G ε following the sequence of nodes along ̃ π 4 . Finally, we observe that the sub-paths of ̃ π 4 ( p, q ) joining pairs of consecutive nodes stay in the union of the tetrahedra containing these nodes and cross faces shared by the bisectors containing them. Hence, by the definition of the cost of the edges in G ε , we have c ( p, q ) ≤ ‖ ̃ π 4 ( p, q ) ‖ . Summing these estimates, for all edges of π ( v 0 , v ), and using (36), we obtain, c ( π ( v 0 , v )) ≤ ‖ ̃ π 4 ( v 0 , v ) ‖ ≤ (1 + ε ) ‖ ̃ π ( v 0 , v ) ‖ . ✷ 5 An algorithm for computing SSSP in G ε In this section we present our algorithm for solving the Single Source Shortest Paths (SSSP) problem in the approximation graph G ε = ( V ( G ε ) , E ( G ε )). 
Straightforwardly, one can apply Dijkstra’s algorithm, which runs in O ( | E ( G ε ) | + | V ( G ε ) | log | V ( G ε ) | ) time. By Lemma 4.1 we have | V ( G ε ) | = O ( n ε 2 log 1 ε ) and | E ( G ε ) | = O ( n ε 4 log 2 1 ε ). Thus, the SSSP problem in G ε can be solved in O ( n ε 4 log n ε log 1 ε ) time. In the remainder of this section, we demonstrate how geometric properties of our model can be used to obtain a more efficient algorithm for solving the SSSP problem. More precisely, we present an algorithm that runs in O ( | V ε | (log | V ε | + 1 √ ε log 3 1 ε )) = O ( n ε 2 . 5 log n ε log 3 1 ε ) time. Informally, the idea is to avoid consideration of large portions of the edges of the graph G ε when searching for shortest paths. We achieve that by applying the strategy proposed first in [25, 26] and developed further in [4] and by using the properties of the weighted distance function and additive Voronoi diagrams studied in Section 2.2. We maintain a priority queue containing candidate shortest paths. At each iteration of the algorithm, a shortest path from the source s to some node u of G ε is found. Then, the algorithm constructs edges adjacent to u that can be continuations of the shortest path from s to u and inserts them in the priority queue as new candidate shortest paths. In general, one needs to consider all edges adjacent to u as possible continuations. In our case, we divide the edges adjacent to u into O ( 1 √ ε log 1 ε ) groups related to the segments containing Steiner points in the neighboring bisectors and demonstrate that we can consider just a constant number of edges in each group. The latter is possible due to the structure of the Voronoi cell V ( u ) of the node u in the additive Voronoi diagram related to a fixed group (see Theorem 2.1). This section is organized as follows: In the next subsection, we describe the general structure of the algorithm. In Subsection 5.2, we show how this strategy can be applied in our case and present an outline of the algorithm. We provide details of the implementation 30 G e ( S ) S u s E ( S ) δ ( u ) v Figure 10: The sets S , S a , and an optimal edge e ( S ) = ( u, v ) in E ( S ) are illustrated. A shortest path from s to u is illustrated by a dashed curve. of the algorithm and analyze its complexity. Finally, at the end we establish the main result of the paper. 5.1 General structure of the algorithm Let G ( V, E ) be a directed graph with positive costs (lengths) assigned to its edges and s be a fixed node of G , called the source . A standard greedy approach for solving the SSSP problem works as follows: a subset, S , of nodes to which the shortest path has already been found and a set, E ( S ), of edges connecting S with S a ⊂ V \ S are maintained. The set S a consists of nodes not in S but adjacent to S . In each iteration, an optimal edge e ( S ) = ( u, v ) in E ( S ) is selected, with source u in S and target v in S a (see Figure 10). The target vertex v is added to S and E ( S ) is updated correspondingly. An edge e = e ( S ) is optimal if it minimizes the value δ ( u ) + c ( e ), where δ ( u ) is the distance from s to u and c ( e ) is the cost of e . Different strategies for maintaining information about E ( S ) and finding an optimal edge e ( S ) during each iteration result in different algorithms for computing SSSP. For example, Dijkstra’s algorithm maintains only a subset Q ( S ) of E ( S ), which, however, always contains an optimal edge. 
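For reference, the baseline mentioned at the start of this section — plain Dijkstra run on G_ε — looks as follows. This is a minimal sketch with adjacency lists stored in a dictionary; the names are illustrative.

    import heapq

    def dijkstra(adj, s):
        """Baseline SSSP on G_eps: adj[u] is a list of (v, cost) pairs with positive costs."""
        dist = {s: 0.0}
        done = set()
        heap = [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in done:
                continue
            done.add(u)
            for v, c in adj.get(u, ()):
                nd = d + c
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    adj = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 2.0)], "b": []}
    print(dijkstra(adj, "s"))   # {'s': 0.0, 'a': 1.0, 'b': 3.0}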
Alternatively, as in [4], one may maintain a subset of E ( S ) containing one edge per node u in S . The target node of this edge is called the representative of u and is denoted by ρ ( u ). The node u itself is called predecessor of its representative. The representative ρ ( u ) is defined to be the target of the minimum cost edge in the propagation set I ( u ) of u , where I ( u ) ⊂ E ( S ) consists of all edges ( u, v ) such that δ ( u ) + c ( u, v ) < δ ( u ′ ) + c ( u ′ , v ) for all nodes u ′ ∈ S that have entered S before u . The union of propagation sets forms a subset Q ( S ) of E ( S ) that always contains an optimal edge. Propagation sets I ( u ), for u ∈ S , form a partition of Q ( S ). The propagation sets of the vertices in S form a partition of E ( S ), which is called propagation diagram , and is denoted by I ( S ). The set of representatives R ⊂ S a can be organized in a priority queue, where the key of the node ρ ( u ) in R is defined to be δ ( u ) + c ( u, ρ ( u )). Observe that the edge corresponding to the minimum in R is an optimal edge for S . In each iteration, the minimum key node v 31 in R is selected and the following three steps are carried: Step 1. The node v is moved from R into S . Then, the propagation set I ( v ) is computed and the propagation diagram I ( S ) is updated accordingly. Step 2. Representative ρ ( v ) of v and a new representative, ρ ( u ) , for the predecessor u of v are computed. Step 3. The new representatives, ρ ( u ) and ρ ( v ) , are either inserted into R together with their corresponding keys, or (if they are already in R ) their keys are updated. Clearly, this leads to a correct algorithm for solving the SSSP problem in G . The total time for the priority queue operations 5 is O ( | V | log | V | ). Therefore, the efficiency of this strategy depends on the maintenance of the propagation diagram, the complexity of the propagation sets, and the efficient updates of the new representatives. In the next subsection, we address these issues and provide necessary details. 5.2 Implementation details and analysis 5.2.1 Notation and algorithm outline Our algorithm follows the general strategy as described in the previous subsection. First, we convert G ε into a directed graph by replacing each of its edges by a pair of oppositely oriented edges with cost equal to the cost of the original edge. Let, as above, S be the set of the nodes to which the shortest path has already been found and E ( S ) be the set of the edges joining S with S a ⊂ V \ S . We partition the edges of G ε (and respectively E ( S )) into groups so that the propagation sets and the corresponding propagation diagrams, when restricted to a fixed group, have a simple structure and can be updated efficiently. Then, for each node u in S , we will keep multiple representatives in R – a constant number on the average, for each group where edges incident to u participate and where its propagation set is non-empty. A node in S a will have multiple predecessors – at most as many as the number of the groups where edges incident to it participate. We will show that the number of the groups, where edges incident to u can participate, is bounded by O ( 1 √ ε log 1 ε ) times the number of bisectors incident to u . In a fixed group, we will be able to compute new representatives in O (log 1 ε ) time and update propagation diagrams in O (log 2 1 ε ) time. 
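Before turning to the group structure, the representative-driven strategy of Section 5.1 can be summarized in code. The sketch below is schematic: it keeps one representative per settled node and recomputes representatives only for the node just settled and for the nodes whose representative it was, but it answers the "cheapest edge into the unsettled set" queries by brute force over adjacency lists, where the actual algorithm uses the propagation and Voronoi diagrams developed in the following subsections. The names are ours.

    import heapq
    from collections import defaultdict

    def sssp_with_representatives(adj, s):
        delta = {s: 0.0}
        settled = set()
        owners = defaultdict(set)            # target node -> settled nodes it currently represents
        heap = [(0.0, s)]
        while heap:
            key, v = heapq.heappop(heap)
            if v in settled:
                continue
            settled.add(v)
            delta[v] = key
            # Recompute representatives for v itself and for every predecessor of v.
            for u in owners.pop(v, set()) | {v}:
                candidates = [(c, w) for w, c in adj.get(u, ()) if w not in settled]
                if candidates:
                    c, rep = min(candidates)
                    owners[rep].add(u)
                    heapq.heappush(heap, (delta[u] + c, rep))
        return delta

    # On a toy graph this returns the same distances as Dijkstra would.
    adj = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 2.0), ("s", 1.0)], "b": []}
    print(sssp_with_representatives(adj, "s"))   # {'s': 0.0, 'a': 1.0, 'b': 3.0}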
Edges of G ε joining pairs of Steiner points on bisectors are naturally partitioned into groups corresponding to ordered triples ( b , b 1 , f ), where b and b 1 are neighboring bisectors with respect to the face f (see Section 4 for the definitions). The edges of the initial tetra- hedralization D are assumed to belong to the bisectors incident to them. So, the group of edges corresponding to an ordered triple ( b , b 1 , f ) consists of all edges from a node on b to a node on b 1 that cross f . Recall that the nodes (Steiner points) on any bisector b were placed on a set of segments parallel to the edge of D incident to b . In our discussion below, we refer to these segments, including the edge of D , as Steiner segments . We further partition 5 Note that we do not need a priority queue based on elaborated data structures such as Fibonacci heaps. Any priority queue with logarithmic time per operation suffices. 32 A B D P 1 ℓ 1 ℓ f b D 1 b 1 C P Figure 11: Two Steiner segments ℓ and ℓ 1 lying on neighboring bisectors b = △ ABP and b 1 = △ ACP 1 respectively, that share a face f = △ ABC are illustrated. Steiner segments on b and b 1 are parallel to the shared face f . The edges joining nodes on ℓ and ℓ 1 form the group of edges corresponding to the triple ( ℓ, ℓ 1 , f ). the group of edges associated with the triple ( b , b 1 , f ) into subgroups corresponding to pairs of Steiner segments ( ℓ, ℓ 1 ) on b and b 1 , respectively, see Figure 11 (a). In this way, the edges of G ε are partitioned into groups corresponding to ordered triples ( ℓ, ℓ 1 , f ), where ℓ and ℓ 1 are Steiner segments parallel to f on two neighboring bisectors sharing f . The group corresponding to ( ℓ, ℓ 1 , f ) is denoted by E ( ℓ, ℓ 1 , f ) and consists of all oriented edges from a node on ℓ to a node on ℓ 1 that cross f . A fixed bisector b has either three or six neighboring bisectors ( b itself and two or five others, respectively) with respect to each of the two faces forming the dihedral angle bisected by b . Hence, the total number of ordered triples ( b , b 1 , f ) does not exceed 36 n . By Lemma 3.1, the number of Steiner segments on any bisector is O ( 1 √ ε log 1 ε ) and thus the number of subgroups of a group corresponding to a triple ( b , b 1 , f ) is O ( 1 ε log 2 1 ε ). In total, the number of groups E ( ℓ, ℓ 1 , f ) is O ( n ε log 2 1 ε ). A node u lying on a Steiner segment ℓ will have a set of representatives for each group E ( ℓ, ℓ 1 , f ) corresponding to a triple, where ℓ is the first element and where its propagation set is non-empty. We denote this set by ρ ( u, ℓ 1 , f ). The set of representatives ρ ( u, ℓ 1 , f ) will correspond to the structure of the propagation set I ( u ; ℓ, ℓ 1 , f ), as we will detail in the next subsection. Consider an iteration of our algorithm. Let v be the node extracted from priority queue R , containing all representatives. Let T ( v ) be the set of triples ( ℓ, ℓ 1 , f ) such that v lies on ℓ . First, we need to move v from S a to S , since the distance from v to the source s has been found. Nodes that are targets of the edges originating from v need to be added to S a . Then, we need to compute representatives of v for each group of edges, where edges originating at 33 v participate and where its propagation set is non-empty. Finally, we need to compute new representatives for all nodes in the set of predecessors of v , which we denote by R − 1 ( v ). The outline of our algorithm is as follows. ALGORITHM: SSSP( G ε , s ) While S 6 = V ε do 1. 
v ←− Extract min( R ); 2. Insert v in S and update S a ; 3. For each triple ( ℓ, ℓ 1 , f ) ∈ T ( v ) do 3.1 Update the data structures related to the Propagation Diagram I ( ℓ, ℓ 1 , f ); 3.2 Find new representatives for nodes whose propagation set has changed in 3.1; 3.3 Update sets of representatives ρ ( u ; ℓ ′ , ℓ, f ′ ), for all u ∈ R − 1 ( v ); 3.4 Update R with respect to 3.2 and 3.3. In the remainder of this section, we address the implementation of this algorithm and analyze its complexity. First, we observe that the number of iterations is | V ε | . The total number of representatives cannot exceed the number of oriented edges in G ε , which is less than | V ε | 2 and so, the size of the priority queue R is bounded by | V ε | 2 (later we show that it is actually O ( | V ε | √ ε log 2 1 ε )). Therefore, a single priority queue operation takes O (log | V ε | ) time and the total time for Step 1 is O ( | V ε | log | V ε | ). The total time for Step 2 is O ( | V ε | log 1 ε ). In Section 5.2.2, we describe the structure and maintenance of the data structures re- lated to the propagation diagrams I ( ℓ, ℓ 1 , f ). Computation and updates of the sets of rep- resentatives are described in Section 5.2.3. We conclude our discussion in Section 5.2.4 by summarizing the time complexity of the algorithm and by establishing our main result. 5.2.2 Implementation of Step 3.1 We consider a fixed triple ( ℓ, ℓ 1 , f ), where ℓ and ℓ 1 are Steiner segments on neighboring bisectors b and b 1 sharing f . The propagation diagram I ( ℓ, ℓ 1 , f ), was defined as the set consisting of the propagation sets of the active nodes on ℓ . Instead of explicitly computing the propagation diagram, we construct and maintain a number of data structures that allow efficient computation and updates of representatives. Consider an iteration of our algorithm. Denote the currently active nodes on ℓ by u 1 , . . . , u k , and assume that they are listed by their order of entering S . We denote this set by S ( ℓ ) and assume that it is stored and maintained as a doubly linked list ordered according to the position of the nodes on ℓ . In Step 3.1, we update the data structures related to the propagation diagram I ( ℓ, ℓ 1 , f ). According to our definition, the propagation set I ( u ) = I ( u ; ℓ, ℓ 1 , f ) of a node u ∈ ℓ consists of all edges ( u, v 1 ) in E ( ℓ, ℓ 1 , f ) such that δ ( u ) + c ( u, v 1 ) < δ ( u i ) + c ( u i , v 1 ), for i = 1 , . . . , k . Clearly, the set I ( u ) can be viewed and described as a subset of the set of nodes v 1 on ℓ 1 that satisfy the following three conditions: C1. The nodes u and v 1 are adjacent in G ε by an edge that crosses f ; C2. δ ( u ) + c ( u, v 1 ) < δ ( u i ) + c ( u i , v 1 ), for i = 1 , . . . , k ; C3. The node v 1 is in S a . We construct and maintain separate data structures for the nodes on ℓ 1 satisfying each of these three conditions: The data structure related to C1 is called Adjacency Diagram and is 34 denoted by A ( ℓ, ℓ 1 , f ). It consists of sets A ( u, ℓ 1 ), for all nodes u on ℓ , where the set A ( u, ℓ 1 ) consists of the nodes on ℓ 1 that satisfy C1 . This data structure is static. The data structure related to C2 is, in fact, a dynamic additive Voronoi diagram on ℓ 1 for the active nodes on ℓ with respect to the weighted distance function c ( u, x ) defined and studied in Section 2, see (1). Finally, the nodes on ℓ 1 that are in S a are stored in a dynamic doubly-linked list and organized in a binary search tree with respect to their position on ℓ 1 . 
We denote this data structure by S a ( ℓ 1 ). The lists S ( ℓ ) and S a ( ℓ 1 ) are readily maintained throughout the algorithm in logarithmic time per operation. Next, we describe in detail the construction and maintenance of these data structures. Adjacency Diagram: The Adjacency Diagram A ( ℓ, ℓ 1 , f ) consists of sets A ( u, ℓ 1 ), for all nodes u on ℓ . We assume that the nodes on ℓ 1 are stored in an ordered list V ( ℓ 1 ) according to their position on that segment. For any fixed node u ∈ ℓ , the adjacency set A ( u, ℓ 1 ) will be computed and stored as a sublist of the list V ( ℓ 1 ). We denote this sublist by ̄ A ( u, ℓ 1 ). We reduce the size of ̄ A ( u, ℓ 1 ) by replacing each portion of consecutive nodes in them by a pair of pointers to the first and to the last node in that portion. (Isolated nodes are treated as portions of length one.) Hence, each sublist ̄ A ( u, ℓ 1 ) is an ordered list of pairs of pointers identifying portions of consecutive nodes in the underlying list V ( ℓ 1 ). The size of the sublists implemented in this way is proportional to the number of the consecutive portions they contain. Next, we discuss the structure of the lists ̄ A ( u, ℓ 1 ) and show that their size is bounded by a small constant. According to our definitions (Section 4), an edge ( u, u 1 ) is present in A ( u, ℓ 1 ) if the local shortest path ˆ π ( u, u 1 ; f ) does not touch the boundary of f , where the path ˆ π ( u, u 1 ; f ) was defined in (22). We refer to intervals on ℓ 1 with both of their end-points being Steiner points as Steiner intervals . Furthermore, we say that a Steiner interval is covered by the set A ( u, ℓ 1 ) if all Steiner points, including its end-points, are in A ( u, ℓ 1 ). Clearly, each maximal interval covered by A ( u, ℓ 1 ) corresponds to and defines a portion of consecutive nodes on ℓ 1 that are adjacent to u . Moreover, by our definition, the list ̄ A ( u, ℓ 1 ) consists of the pairs of pointers to the end-points of the maximal intervals covered by A ( u, ℓ 1 ). In the next lemma, we show that there are at most seven maximal Steiner intervals covered by A ( u, ℓ 1 ). Lemma 5.1 The number of the maximal intervals covered by A ( u, ℓ 1 ) is at most seven. The corresponding ordered list ̄ A ( u, ℓ 1 ) can be computed in O (log K ( ℓ 1 )) time, where K ( ℓ 1 ) denotes the number of Steiner points on ℓ 1 . Proof: Presented in Appendix 3. ✷ We assume that the nodes that are end-points of the maximal Steiner intervals covered by the sets A ( u, ℓ 1 ), for all nodes u ∈ ℓ 1 , are pre-computed in a preprocessing step and stored in the lists ̄ A ( u, ℓ 1 ) as discussed above. Lemma 5.1 implies that this preprocessing related to the group ( ℓ, ℓ 1 , f ) takes O ( K ( ℓ ) log K ( ℓ 1 )) time, where K ( ℓ ) and K ( ℓ 1 ) denote the number of the nodes on ℓ and ℓ 1 , respectively. Next, we discuss the Voronoi diagram data structure related to condition C2 . Dynamic Additive Voronoi Diagram: We assumed that the currently active nodes, u 1 , . . . , u k on ℓ , are listed by order of their insertion into S . So, for the distances of these nodes to the source, we have δ ( u 1 ) ≤ · · · ≤ δ ( u k ). We view the distance δ ( u i ) as an additive 35 weight assigned to the node u i , and consider the additive Voronoi diagram of u 1 , . . . , u k on ℓ 1 with respect to the weighted distance function, introduced and studied in Section 2 and defined by (1). 
From the definition (see (1)), the weighted distance c ( u, x ) for a node u on ℓ and a point x ∈ ℓ 1 is given by c ( u, x ) = c ( u, x ; f ) = min a,a 1 ∈ F { w | ua | + w f | aa 1 | + w 1 | a 1 x |} , where F is the plane containing the face f ; w , w 1 are the weights of the cells containing ℓ and ℓ 1 , respectively, and w f is the weight associated to the face f . An important observation for our discussion here is that if x is a node on ℓ 1 adjacent to u , then the cost of the edge ( u, x ) is c ( u, x ). We denote the end-points of the segment ℓ 1 by A 1 and B 1 and assume that it is oriented so that A 1 < B 1 . For i = 1 , . . . , k , the Voronoi cell V ( u i ) is defined as the set of points on ℓ 1 V ( u i ) = { x ∈ ( A 1 , B 1 ) : δ ( u i ) + c ( u i , x ) ≤ δ ( u j ) + c ( u j , x ) for j 6 = i } , where ties are resolved in favor of the node that has entered S earlier. Clearly, the Voronoi diagram V ( u 1 , . . . , u k ) is a partitioning of ( A 1 , B 1 ) into a set of intervals, where each interval belongs to exactly one of the Voronoi cells. Hence, V ( u 1 , . . . , u k ) is completely described by a set of points A 1 = x 0 < x 1 < · · · < x m < x m +1 = B 1 and an assignment between the intervals ( x j , x j +1 ), for j = 0 , . . . , m , and the cells of the diagram. We assume that V ( u 1 , . . . , u k ) is known and stored. We further assume that a node v on ℓ has been extracted by the extract-min operation in Step 1 of our algorithm. In Step 3.1, we need to add the new site v and to compute the Voronoi diagram V ( u 1 , . . . , u k , v ). Next we show how this can be achieved in O (log 2 1 ε ) time. First, the following lemma shows that the Voronoi cell of v has a simple structure. Lemma 5.2 Let u 1 , . . . , u k be the active nodes on ℓ and let v be the last node inserted in S . Then the Voronoi cell V ( v ) , in the Voronoi diagram V ( u 1 , . . . , u k , v ) , is either empty or consists of a single interval on ℓ 1 . Proof: By our assumptions δ ( u i ) ≤ δ ( v ), for i = 1 , . . . , k . The Voronoi cell V ( v ) can be represented as an intersection V ( v ) = ∩ k i =1 V i ( v ), where the sets V i ( v ) are defined by V i ( v ) = { x ∈ ℓ 1 : δ ( v ) − δ ( u i ) + c ( v, x ) < c ( u i , x ) } . By Theorem 2.1, each of V i ( v ) is either empty or is an interval on ℓ 1 , and thus the same is true for their intersection. ✷ Using the above lemma, we easily obtain a bound on the size of the Voronoi diagrams. Corollary 5.1 The number of the intervals comprising the diagram V ( u 1 , . . . , u k ) does not exceed 2 k − 1 . Next, we present and analyze an efficient procedure which, given the Voronoi dia- gram V ( u 1 , . . . , u k ) and a new node v inserted in S , determines the Voronoi diagram V ( u 1 , . . . , u k , v ). This includes computation of the Voronoi cell V ( v ), update of the set of points x 1 , . . . , x m describing V ( u 1 , . . . , u k ) to another set describing V ( u 1 , . . . , u k , v ) and update of the assignment information between intervals and Voronoi cells. 36 ℓ ℓ 1 = A 1 B 1 x 4 = B 1 A 1 = x 0 x 1 x − x 2 x + x 3 u 2 v u 1 u 3 Figure 12: The figure illustrates updates of the diagram V . The Voronoi diagram V ( u 1 , u 2 , u 3 ) for the nodes u 1 , u 2 , and u 3 is characterized by the sequence { x 0 < x 1 < x 2 < x 3 < x 4 } and the assignment V ( u 1 ) = ( x 0 , x 1 ) ∪ ( x 3 , x 4 ), V ( u 2 ) = ( x 2 , x 3 ), V ( u 3 ) = ( x 1 , x 2 ). 
After computation of the Voronoi cell V(v) = (x^-, x^+), the Voronoi diagram V(u_1, u_2, u_3, v) is characterized by the sequence {x_0 < x_1 < x^- < x^+ < x_3 < x_4} and the assignment V(u_1) = (x_0, x_1) ∪ (x_3, x_4), V(u_2) = (x^+, x_3), V(u_3) = (x_1, x^-), V(v) = (x^-, x^+).

According to Lemma 5.2, the Voronoi cell V(v) is an interval, which we denote by (x^-, x^+). Let M be any of the points x_1, ..., x_m characterizing the diagram V(u_1, ..., u_k). The following claim shows that the relative position of M with respect to the interval (x^-, x^+) can be determined in constant time.

Claim 5.1 The relative position of M with respect to the interval (x^-, x^+) can be determined in O(1) time.

Proof: By the definition of the point M, there are two nodes u_{i_1} and u_{i_2} such that δ(u_{i_1}) + c(u_{i_1}, M) = δ(u_{i_2}) + c(u_{i_2}, M). We denote this common value by d(M) and note that d(M) ≤ δ(u_i) + c(u_i, M), for i = 1, ..., k. Then, we compute the value d(v, M) = δ(v) + c(v, M) and compare it with d(M). If d(v, M) < d(M), then M ∈ (x^-, x^+) and thus x^- < M < x^+. In the case where d(v, M) ≥ d(M), we compute the Voronoi cell △(v) of v in the three-site diagram V(u_{i_1}, u_{i_2}, v). By Lemma 5.2, the cell △(v) is an interval on ℓ_1. Since M must be outside △(v) and (x^-, x^+) ⊂ △(v), the relative position between M and (x^-, x^+) is the same as the relative position between M and △(v). The claimed time bound follows from the described procedure, which, besides a constant number of simple computations, involves a constant number of evaluations of the function c(·, ·) and possibly the solution of the equations c(u_i, x) − c(v, x) = δ(v) − δ(u_i), for i = i_1, i_2. ✷

We derive the following binary search procedure, which computes the Voronoi cell V(v).

ALGORITHM: Voronoi cell V(v)
Input: The sequence X = {A_1 = x_0 < x_1 < · · · < x_m < x_{m+1} = B_1}.
Output: Points x^- and x^+ such that V(v) = (x^-, x^+).
A. Compute the point x^- first, as follows:
  1. While |X| > 2 do Steps 1.1 – 1.3 below:
     1.1. Find the median M of the sequence X.
     1.2. Determine the relative position between M and x^-.
     1.3. If x^- < M then set X = {x_0 < · · · < M}; else set X = {M < · · · < x_{m+1}}.
  2. If |X| = 2, compute x^- directly.
B. Compute the point x^+ in the same way.
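A compact rendering of this procedure (a sketch only): the breakpoint-level binary search follows steps A and B, while the final "compute directly" step, which in the paper solves c(u_i, x) − c(v, x) = δ(v) − δ(u_i), is replaced here by numeric bisection. The oracle side_of plays the role of Claim 5.1 and is assumed, for the sketch, to be available at arbitrary points of ℓ_1; the cell is assumed non-empty.

    def voronoi_cell_endpoints(X, side_of, tol=1e-9):
        """Compute V(v) = (x_minus, x_plus) over the breakpoint sequence
        X = [A_1 = x_0 < ... < x_{m+1} = B_1].
        side_of(t): -1 if t lies to the left of V(v), 0 if t is inside, +1 if to the right."""

        def refine(lo, hi, searching_minus):
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                s = side_of(mid)
                if searching_minus:
                    lo, hi = (mid, hi) if s < 0 else (lo, mid)
                else:
                    lo, hi = (lo, mid) if s > 0 else (mid, hi)
            return 0.5 * (lo + hi)

        # Step A: narrow down to two consecutive breakpoints bracketing x_minus.
        lo, hi = 0, len(X) - 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            lo, hi = (mid, hi) if side_of(X[mid]) < 0 else (lo, mid)
        x_minus = refine(X[lo], X[hi], True)

        # Step B: the same for x_plus.
        lo, hi = 0, len(X) - 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            lo, hi = (lo, mid) if side_of(X[mid]) > 0 else (mid, hi)
        x_plus = refine(X[lo], X[hi], False)
        return x_minus, x_plus

    # Toy instance on the segment [0, 10]: one current site at 0 with delta = 0, new site v
    # at 6 with delta(v) = 1 and Euclidean distances, so V(v) = (3.5, 10).
    def side_of(t):
        if 1.0 + abs(t - 6.0) < abs(t):
            return 0                     # t is inside V(v)
        return -1                        # in this toy instance every excluded point lies left of V(v)

    print(voronoi_cell_endpoints([0.0, 10.0], side_of))   # approximately (3.5, 10.0)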
, u k becomes empty then this node is removed from the set of active nodes in the group E ( ℓ, ℓ 1 , f ). To implement all of these updates efficiently, we maintain the sequence X of points characterizing the Voronoi diagram of the currently active nodes on ℓ in an order-statistics tree, allowing us to report order statistics as well as insertions and deletions in O (log | X | ) time. Based on this data structure, computation of the interval ( x − , x + ) takes O (log 2 | X | ) time, since it takes O (log | X | ) iterations, and each iteration takes O (log | X | ) time. The update of the Voronoi diagram requires two insertions and j + − j − + 1 deletions in X , where insertions take O (log | X | ) time and deletions are done in amortized O (1) time. Let us estimate the time for the maintenance of the Voronoi diagram of the active nodes in the group E ( ℓ, ℓ 1 , f ). We denoted the total number of the nodes on ℓ by K ( ℓ ). Each of the nodes on ℓ becomes active once during the execution. Thus, each node on ℓ becomes subject of the procedure Voronoi cell exactly once. According to Corollary 5.1, the sizes of the sequences X characterizing Voronoi diagrams in the group E ( ℓ, ℓ 1 , f ) are bounded by 2 K ( ℓ ) + 1. Therefore, the total time spent by the procedure Voronoi cell in the group E ( ℓ, ℓ 1 , f ) is O ( K ( ℓ ) log 2 K ( ℓ )). In total, there are O ( K ( ℓ )) insertions in the sequence X , and the total number of deletions, clearly, is at most the number of insertions. Hence, the total time spent for insertions and deletions is O ( K ( ℓ ) log K ( ℓ )). Thus, the time spent for the maintenance of the Voronoi diagram in a fixed group E ( ℓ, ℓ 1 , f ) is O ( K ( ℓ ) log 2 K ( ℓ )). Next, we discuss the computation and maintenance of a data structure that combines the adjacency diagram and the Voronoi diagram. Propagation Diagram: As discussed above, the propagation set I ( u ) of an active node u on ℓ is described completely by the set of nodes on ℓ 1 satisfying conditions C1 , C2 , and C3 . We denote the set of nodes on ℓ 1 satisfying C1 and C2 with respect to u by I ′ ( u ). Slightly abusing our terminology, we refer to this set again as propagation set of u . Similarly, we refer 38 to the set consisting of the sets I ′ ( u ) for all currently active nodes as propagation diagram and denote it by I ′ ( u 1 , . . . , u k ), where, as above, u 1 , . . . , u k are the currently active nodes on ℓ 1 . The difference between the originally defined propagation set I ( u ) and the set I ′ ( u ) is that the elements of I ( u ) are the edges joining u to the nodes on ℓ 1 satisfying C1 , C2 , and C3 , whereas the elements of I ′ ( u ) are the nodes on ℓ 1 that satisfy C1 and C2 , but not necessarily C3 . Indeed, the set I ′ ( u ) is closely related to I ( u ) and when combined with the list S a ( ℓ 1 ) describes it completely. Based on this observation, we compute and maintain the propagation diagram I ′ ( u 1 , . . . , u k ) instead of the originally defined diagram. We describe the sets I ′ ( u i ) by specifying the maximal Steiner intervals they cover. We implement these sets as ordered lists of pairs of pointers to the end-points of these intervals in the underlying list V ( ℓ 1 ). The propagation sets of different active nodes do not inter- sect, and hence, the end-points of the maximal Steiner intervals of the propagation sets I ′ ( u 1 ) , . . . , I ′ ( u k ) form a sequence, I ′ = { A 1 ≤ y 1 ≤ z 1 ≤ · · · ≤ y m 1 ≤ z m 1 ≤ B 1 } , where ℓ 1 = ( A 1 , B 1 ). 
The points y j and z j , for j = 1 , . . . , m 1 , are Steiner points (nodes) on ℓ 1 . Any of the Steiner intervals ( y j , z j ) is a maximal Steiner interval covered by one of the sets I ′ ( u 1 ) , . . . , I ′ ( u k ), whereas the Steiner points inside the intervals ( z j , y j +1 ) do not belong to any of the sets I ′ ( u i ). Clearly, the sequence I ′ plus the assignment of the intervals ( y j , z j ) to the sets I ′ ( u i ) covering them determine the diagram I ′ ( u 1 , . . . , u k ). We implement sequence I ′ as an ordered list of pointers to the underlying list V ( ℓ 1 ). In addition, we associate with it a binary search tree based on the position of the Steiner points on the segment ℓ 1 . The diagram I ′ ( u 1 , . . . , u k ) is maintained in Step 3.1 and details are as follows. Let, as above, v be the node extracted by the extract-min operation in Step 1 in the current iteration of the algorithm. We assume that the diagram I ′ ( u 1 , . . . , u k ) is known – i.e., we know the sequence I ′ as well as the assignment of the intervals ( y j , z j ) to the propagation sets I ′ ( u i ). Next, we describe the update of I ′ and the assignment information specifying I ′ ( u 1 , . . . , u k , v ). By definition, I ′ ( v ) consists of the nodes on ℓ 1 that lie in the Voronoi cell V ( v ) and belong to the adjacency set A ( v, ℓ 1 ). By Lemma 5.2, V ( v ) is either empty or a single interval, which we have denoted by ( x − , x + ). We denote by ( v − , v + ), the largest Steiner interval inside the interval ( x − , x + ). The interval ( v − , v + ) is easily found using binary search in O (log K ( ℓ 1 )) time, where as above K ( ℓ 1 ) denotes the number of Steiner points on ℓ 1 . On the other hand (see Lemma 5.1), the adjacency set A ( v, ℓ 1 ) consists of the nodes lying inside constant number (at most seven) of Steiner intervals, which were computed and stored as the list ̄ A ( v, ℓ 1 ). Hence, the maximal Steiner intervals specifying the propagation set I ′ ( v ) can be obtained as the intersection of intervals in ̄ A ( v, ℓ 1 ) with ( v − , v + ). This is done in constant time by identifying the position of the points v − and v + with respect to the elements of the list ̄ A ( v, ℓ 1 ). Clearly, the so-computed maximal Steiner intervals covered by I ′ ( v ) are at most seven. We update the sequence I ′ by inserting each of the maximal Steiner intervals covered by I ′ ( v ) in the same way as we inserted the interval ( x − , x + ) into the sequence X describing the Voronoi diagram. More precisely, let ( y, z ) be any of the maximal Steiner intervals covered by I ′ ( v ). We insert the points y and z at their positions in the ordered sequence I ′ , and then we delete the points of I ′ between y and z . If the interval containing y is ( y j , z j ), we set new z j to be the Steiner point preceding y on ℓ 1 . Similarly, if the interval containing z is ( y j , z j ), then we set y j to be the Steiner point 39 following z on ℓ 1 . At each iteration of the algorithm, the endpoints of at most seven intervals are inserted into the sequence I ′ . Hence, the size of the sequence I ′ is bounded by 14 K ( ℓ ) and insertions in I ′ are implemented in O (log K ( ℓ )) time. Deletions are implemented in O (1) time. The total number of insertions is O ( K ( ℓ )) and the total number of deletions is at most the number of insertions. Therefore, the total time spent for the maintenance of I ′ and the propagation diagram is O ( K ( ℓ )(log K ( ℓ ) + K ( ℓ 1 ))). 
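Both updates described above — splicing the new cell (x^-, x^+) into the breakpoint sequence X, and inserting the maximal Steiner intervals covered by I′(v) into I′ — have the same shape. Below is a small sketch of each, using plain Python lists for clarity; the paper keeps X in an order-statistics tree and I′ in a search tree, and the sketches assume the new endpoints fall strictly inside existing intervals.

    import bisect

    def insert_new_cell(X, owner, x_minus, x_plus, v):
        """Splice the new Voronoi cell V(v) = (x_minus, x_plus) into the diagram.
        X = [x_0 < ... < x_{m+1}] are the breakpoints; owner[j] is the site owning
        the interval (X[j], X[j+1])."""
        i = bisect.bisect_right(X, x_minus)      # x_minus lies in (X[i-1], X[i])
        k = bisect.bisect_left(X, x_plus)        # x_plus  lies in (X[k-1], X[k])
        new_X = X[:i] + [x_minus, x_plus] + X[k:]           # interior breakpoints are dropped
        new_owner = owner[:i] + [v, owner[k - 1]] + owner[k:]
        return new_X, new_owner

    def insert_propagation_interval(intervals, y, z, v):
        """Insert a maximal Steiner interval [y, z] (indices of Steiner points on ell_1)
        covered by the new node v.  'intervals' is a sorted list of disjoint triples
        (y_j, z_j, owner); overlapped old intervals are truncated to the neighbouring
        Steiner point or removed, as in the text."""
        out = []
        for (yj, zj, u) in intervals:
            if zj < y or yj > z:                 # disjoint from [y, z]: keep unchanged
                out.append((yj, zj, u))
                continue
            if yj < y:                           # sticks out on the left: truncate
                out.append((yj, y - 1, u))
            if zj > z:                           # sticks out on the right: truncate
                out.append((z + 1, zj, u))
        out.append((y, z, v))
        out.sort()
        return out

    # The update of Figure 12: breakpoints x_0..x_4 owned by u1, u3, u2, u1, and a new
    # cell with x_1 < x_minus < x_2 < x_plus < x_3.
    print(insert_new_cell([0, 2, 4, 6, 8], ["u1", "u3", "u2", "u1"], 3, 5, "v"))
    # -> ([0, 2, 3, 5, 6, 8], ['u1', 'u3', 'v', 'u2', 'u1'])
    print(insert_propagation_interval([(0, 4, "u1"), (7, 9, "u2")], 3, 8, "v"))
    # -> [(0, 2, 'u1'), (3, 8, 'v'), (9, 9, 'u2')]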
Finally, we summarize our discussion on the implementation of Step 3.1. The computations and times related to a fixed triple (ℓ, ℓ_1, f) are as follows. First, in a preprocessing step, the lists Ā(u, ℓ_1), for all nodes u on ℓ, are computed in O(K(ℓ) log K(ℓ_1)) time (Lemma 5.1). The maintenance of the lists S(ℓ) and S_a(ℓ_1) takes O(K(ℓ) log K(ℓ)) and O(K(ℓ_1) log K(ℓ_1)) time, respectively. The maintenance of the Voronoi diagram for the active nodes on ℓ requires O(K(ℓ) log^2 K(ℓ)) time. The maintenance of the propagation diagram takes O(K(ℓ)(log K(ℓ) + log K(ℓ_1))) time. Therefore, the total time for the implementation of Step 3.1 is
\[
\sum_{(\ell,\ell_1,f)} \Bigl( O\bigl(K(\ell)(\log^2 K(\ell) + \log K(\ell_1))\bigr) + O\bigl(K(\ell_1)\log K(\ell_1)\bigr) \Bigr)
\le O\Bigl(\tfrac{1}{\sqrt{\varepsilon}}\log\tfrac{1}{\varepsilon}\Bigr)\Bigl(\sum_{\ell} K(\ell)\bigl(\log^2 K(\ell) + \log K(\ell_1)\bigr) + \sum_{\ell_1} K(\ell_1)\log K(\ell_1)\Bigr)
\le O\Bigl(\tfrac{1}{\sqrt{\varepsilon}}\log^3\tfrac{1}{\varepsilon}\Bigr)\Bigl(\sum_{\ell} K(\ell) + \sum_{\ell_1} K(\ell_1)\Bigr)
= O\Bigl(\tfrac{|V_\varepsilon|}{\sqrt{\varepsilon}}\log^3\tfrac{1}{\varepsilon}\Bigr),
\]
where we have used Lemma 3.1 to estimate that the number of triples (ℓ, ℓ_1, f) with a fixed first or second element is O((1/√ε) log(1/ε)), and that log K(ℓ) and log K(ℓ_1) are O(log(1/ε)).

Lemma 5.3 The total time spent by the algorithm implementing Step 3.1 is O((|V_ε|/√ε) log^3(1/ε)).

5.2.3 Computation and updates of sets of representatives

Next, we concentrate on the computation of representatives in Steps 3.2, 3.3 and 3.4. The set of representatives ρ(v; ℓ, ℓ_1, f) of an active node v on ℓ in a group E(ℓ, ℓ_1, f) contains one representative for each interval (y_j, z_j) in the propagation set I(v). Recall that I(v) consists of a set of intervals (y_j, z_j) stored in the sequence I′, which characterizes the propagation diagram of the currently active nodes on ℓ. The representative in ρ(v; ℓ, ℓ_1, f) corresponding to (y_j, z_j) ∈ I(v) is the target of the minimum cost edge from v to a node in S_a ∩ (y_j, z_j). By Lemma 2.1, the function c(v, x) is convex and thus has a single minimum in any interval. Let x*(v) be the point on ℓ_1 where c(v, x) achieves its minimum. To compute the representatives efficiently, we compute the points x*(v), for all nodes v on ℓ, in a preprocessing step. From the definition of the function c(v, x) and Snell's law, it follows that x*(v) is the point on ℓ_1 that is closest to v. So, each x*(v) can be computed in constant time, which leads to O(K(ℓ)) preprocessing time for the group E(ℓ, ℓ_1, f), where K(ℓ) is the number of nodes on ℓ. Thus, the total time for preprocessing in all groups is O((|V_ε|/√ε) log(1/ε)).

We have associated two data structures with the set of nodes in S_a that lie on a fixed Steiner segment ℓ_1: we maintain them in a doubly-linked list and, in addition, in a binary search tree with respect to their position on ℓ_1. We show that finding a representative ρ(v) ∈ ρ(v; ℓ, ℓ_1, f) takes O(log(1/ε)) time. There are three situations where we need to compute or update ρ(v):

1. New representatives ρ(v) are computed when v becomes active and its propagation set is non-empty. We need to compute one new representative for each maximal Steiner interval (y, z) in the propagation set I(v). Recall that there are at most seven such intervals and that they were computed and stored in the sequence I′. To compute ρ(v) in the interval (y, z), we determine the leftmost and rightmost nodes from S_a inside the interval (y, z).
To complete our analysis, we need to estimate the total number of representatives that are computed by our algorithm. Each pair (representative, predecessor) relates to the edge joining them. Since such a pair can be computed at most once by the algorithm, the total number of representatives related to nodes that are vertices of D is bounded by the total number of edges incident to these nodes, which is O ( | V ε | ). It remains to estimate the number of representatives that are related to nodes that are Steiner points. Consider an iteration for a node v that is a Steiner point. There are O ( 1 √ ε log 1 ε ) triples in T ( v ), and at most nine new representatives are computed in Step 3.2. For each predecessor in R − 1 ( v ) that is a Steiner point, a single representative is computed. The number of predecessors | R − 1 ( v ) | is O ( 1 √ ε log 1 ε ). Hence, in a single iteration, O ( 1 √ ε log 1 ε ) representatives related to Steiner points are computed. Since the number of iterations is O ( | V ε | ) and the computation of a single representative takes O (log 1 ε ) time, we obtain that the total time for the execution of Steps 3.2 and 3.3 is O ( | V ε | √ ε log 2 1 ε ). Finally, the number of priority queue operations executed in Step 3.4 is bounded by the number of computed representatives. Thus, the total time for Step 3.4 is O ( | V ε | √ ε log n ε log 1 ε ).

5.2.4 Complexity of the algorithm and the main result

Here, we summarize our discussion from the previous three subsections and state our main result. Step 1 of our algorithm takes O ( | V ε | log n ε ) time. Step 2 requires O ( | V ε | ) time. By Lemma 5.3, Step 3.1 takes in total O ( | V ε | √ ε log 3 1 ε ) time. The total time for the implementation of Steps 3.2 and 3.3 is O ( | V ε | √ ε log 2 1 ε ) and the total time for Step 3.4 is O ( | V ε | √ ε log n ε log 1 ε ). By Lemma 3.1, we have that | V ε | = O ( n ε 2 log 1 ε ). We have thus established the following:

Theorem 5.1 The SSSP problem in the approximation graph G ε can be solved in O ( n ε 2 . 5 log n ε log 3 1 ε ) time.
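For completeness, the bound in Theorem 5.1 is obtained by substituting | V ε | = O ( n ε 2 log 1 ε ) from Lemma 3.1 into the two dominating step bounds above and using log 1 ε ≤ log n ε :

\[
\frac{|V_\varepsilon|}{\sqrt{\varepsilon}}\,\log^3\frac{1}{\varepsilon}
  = O\Big(\frac{n}{\varepsilon^{2.5}}\log^4\frac{1}{\varepsilon}\Big),
\qquad
\frac{|V_\varepsilon|}{\sqrt{\varepsilon}}\,\log\frac{n}{\varepsilon}\log\frac{1}{\varepsilon}
  = O\Big(\frac{n}{\varepsilon^{2.5}}\log\frac{n}{\varepsilon}\log^2\frac{1}{\varepsilon}\Big),
\]

and each of these, as well as the times for Steps 1, 2, 3.2 and 3.3, is O ( n ε 2 . 5 log n ε log 3 1 ε ).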
Consider the polyhedral domain D . Starting from a vertex v 0 of D , our algorithm solves the SSSP problem in the corresponding graph G ε and constructs a shortest paths tree rooted at v 0 . According to Theorem 4.1, the computed distances from v 0 to all other vertices of D (and to all Steiner points) are within a factor of 1 + ε of the cost of the corresponding shortest paths. Using the definition of the edges of G ε , an approximate shortest path can be output by simply replacing the edges in the discrete path with the corresponding local shortest paths used to define their costs. This can be done in time proportional to the number of segments in this path, because the computation of each local shortest path takes O (1) time. The approximate shortest paths tree rooted at v 0 and containing all Steiner points and vertices of D can be output in O ( | V ε | ) time. Thus, the algorithm we described solves the WSP3D problem and the following theorem states the result.

Theorem 5.2 Let D be a weighted polyhedral domain consisting of n tetrahedra and ε ∈ (0 , 1) . The weighted shortest path problem in three dimensions (WSP3D), requiring the computation of approximate shortest paths from a source vertex to all other vertices of D , can be solved in O ( n ε 2 . 5 log n ε log 3 1 ε ) time.

6 Conclusions

This paper generalizes the weighted region problem, originally studied in 1991 by Mitchell and Papadimitriou [21] for the planar setting, to 3-d weighted domains. We present the first polynomial time approximation scheme for the WSP3D problem. The complexity of our algorithm is independent of the weights, but depends upon the geometric features of the given tetrahedra as stated in Lemma 3.1. There are some fairly standard techniques which can be employed here to remove the dependence on geometry (cf. [1]), provided that there is a known estimate on the maximum number of segments (i.e., the combinatorial complexity) in weighted shortest paths in three dimensions.

It can be shown that the combinatorial complexity of weighted shortest paths in the planar case is Θ( n 2 ) [21]. We conjecture that the same bound holds in three dimensions, but the proof techniques in [21] do not seem to apply here, since they use planarity. If the combinatorial complexity of these paths in three dimensions is polynomial in n , then we can remove the dependence on the geometry by increasing the run time by a polynomial factor in n . We do not recommend this approach, since the increase in the running time would be significant. Already, in the planar case (and on terrains), an experimental study [18] showed that a constant number of Steiner points suffices to produce high-quality paths. We believe that the same holds here and this merits further investigation.

This paper also investigated additive Voronoi diagrams in heterogeneous media. We studied a fairly simple scenario, and already its analysis was very technical and cumbersome. It is desirable to find simpler and more elegant ways to understand the combinatorics of these diagrams. Nevertheless, we believe that the discretization scheme and the algorithms presented here can be used successfully for efficient computation of approximate Voronoi diagrams in heterogeneous media.

Our algorithm does not require any complex data structures or primitives and as such should be implementable and even practical. Its structure allows Steiner points to be generated “on the fly” as the shortest path wavefront propagates through the weighted domain (a toy sketch of such on-demand generation follows).
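As an illustration only, the following toy routine emits Steiner point positions along a segment in a geometric progression away from both endpoints; the starting offsets r ( A ) ε , r ( B ) ε and the ratio 1 + √ ( ε/ 8) sin( γ/ 2) are assumptions modeled on the constants of Appendix A.2, and are not the placement rules actually used by the discretization of Section 3.

import math

def steiner_positions(length, r_a, r_b, eps, gamma):
    """Illustrative on-demand placement: distances from endpoint A of Steiner points
    on a segment of the given length, generated in a geometric progression from each
    endpoint towards the midpoint (assumed offsets and ratio, see the note above)."""
    ratio = 1.0 + math.sqrt(eps / 8.0) * math.sin(gamma / 2.0)
    positions = []
    d = r_a * eps                      # progression growing away from endpoint A
    while d < length / 2.0:
        positions.append(d)
        d *= ratio
    d = r_b * eps                      # progression growing away from endpoint B
    while d < length / 2.0:
        positions.append(length - d)
        d *= ratio
    return sorted(positions)

# such a generator can be invoked lazily, the first time the wavefront reaches the segment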
This feature allows the design of more compact and adaptive implementation schemes that can be of high practical value.

One of the classical problems that motivated this study is the unweighted version of this problem, namely the ESP3D problem. There, we need to find a shortest path between a source and a target point, lying completely in the free space, avoiding three-dimensional polyhedral obstacles. We can use our techniques to solve this problem, though this will require triangulating (i.e., tetrahedralizing) the free space. As outlined above, the complexity of our algorithm depends upon the geometry of these tetrahedra; so it is natural to ask whether the free space can be partitioned into nice tetrahedra. Unfortunately, there is no simple answer to this question, which has been an important topic of study in computational and combinatorial geometry for several decades. Nevertheless, our algorithm provides a much simpler and, so far, the fastest method for solving the ESP3D problem, provided the free space is already partitioned into non-degenerate tetrahedra. Combining the techniques for answering weighted shortest path queries on polyhedral surfaces [2] and the existence of nice separators for well-shaped meshes [20], we believe that the construction presented in this paper can be used for answering (approximate) weighted shortest path queries in 3-d.

References

[1] Pankaj K. Agarwal, R. Sharathkumar, and Hai Yu. Approximate Euclidean shortest paths amid convex obstacles. In Claire Mathieu, editor, SODA, pages 283–292. SIAM, 2009.
[2] Lyudmil Aleksandrov, Hristo Djidjev, Hua Guo, Anil Maheshwari, Doron Nussbaum, and Jörg-Rüdiger Sack. Algorithms for approximate shortest path queries on weighted polyhedral surfaces. Discrete & Computational Geometry, 44(4):762–801, 2010.
[3] Lyudmil Aleksandrov, Anil Maheshwari, and Jörg-Rüdiger Sack. Approximation algorithms for geometric shortest path problems. In STOC, pages 286–295, 2000.
[4] Lyudmil Aleksandrov, Anil Maheshwari, and Jörg-Rüdiger Sack. Determining approximate shortest paths on weighted polyhedral surfaces. J. ACM, 52(1):25–53, 2005.
[5] Tetsuo Asano, David Kirkpatrick, and Chee Yap. Pseudo approximation algorithms with applications to optimal motion planning. Discrete & Computational Geometry, 31(1):139–171, 2004.
[6] Franz Aurenhammer and Rolf Klein. Handbook of Computational Geometry, chapter Voronoi Diagrams. North Holland, 2000.
[7] Chandrajit L. Bajaj. The algebraic degree of geometric optimization problems. Discrete & Computational Geometry, 3:177–191, 1988.
[8] John F. Canny and John H. Reif. New lower bound techniques for robot motion planning problems. In FOCS, pages 49–60. IEEE, 1987.
[9] Joonsoo Choi, Jürgen Sellen, and Chee-Keng Yap. Approximate Euclidean shortest paths in 3-space. Int. J. Comput. Geometry Appl., 7(4):271–295, 1997.
[10] Joonsoo Choi, Jürgen Sellen, and Chee-Keng Yap. Precision-sensitive Euclidean shortest path in 3-space. SIAM J. Comput., 29(5):1577–1595, 2000.
[11] Kenneth L. Clarkson. Approximation algorithms for shortest path motion planning (extended abstract). In STOC, pages 56–65. ACM, 1987.
[12] B. Cox and B. Treeby. Artifact trapping during time reversal photoacoustic imaging for acoustically heterogeneous media. IEEE Transactions on Medical Imaging, 29(2):387–396, 2010.
[13] Sariel Har-Peled. Constructing approximate shortest path maps in three dimensions. SIAM J. Comput., 28(4):1182–1197, 1999.
[14] Philip L. Inderwiesen and Tien-When Lo. Fundamentals of Seismic Tomography, volume 6 of Geophysical Monograph Series. 1994.
[15] Rolf Klein. Concrete and Abstract Voronoi Diagrams, volume 400 of Lecture Notes in Computer Science. Springer, 1989.
[16] Rolf Klein, Elmar Langetepe, and Zahra Nilforoushan. Abstract Voronoi diagrams revisited. Computational Geometry, 42(9):885–902, 2009.
[17] J. Krozel, S. Penny, J. Prete, and J.S.B. Mitchell. Comparison of algorithms for synthesizing weather avoidance routes in transition airspace. Collection of Technical Papers - AIAA Guidance, Navigation, and Control Conference, 1:446–461, 2004.
[18] Mark Lanthier, Anil Maheshwari, and Jörg-Rüdiger Sack. Approximating shortest paths on weighted polyhedral surfaces. Algorithmica, 30(4):527–562, 2001.
[19] Ngoc-Minh Lê. Abstract Voronoi diagram in 3-space. Journal of Computer and System Sciences, 68(1):41–79, 2004.
[20] Gary L. Miller, Shang-Hua Teng, William Thurston, and Stephen A. Vavasis. Automatic mesh partitioning. In Alan George, John Gilbert, and Joseph Liu, editors, Graph Theory and Sparse Matrix Computation, volume 56 of The IMA Volumes in Mathematics and its Applications, pages 57–84. Springer-Verlag, 1993.
[21] Joseph S. B. Mitchell and Christos H. Papadimitriou. The weighted region problem: Finding shortest paths through a weighted planar subdivision. J. ACM, 38(1):18–73, 1991.
[22] Joseph S. B. Mitchell and Micha Sharir. New results on shortest paths in three dimensions. In SoCG, pages 124–133. ACM, 2004.
[23] Christos H. Papadimitriou. An algorithm for shortest-path motion in three dimensions. Inf. Process. Lett., 20(5):259–263, 1985.
[24] J.H. Reif and Z. Sun. Movement planning in the presence of flows. Algorithmica (New York), 39(2):127–153, 2004.
[25] Zheng Sun and John Reif. Bushwhack: An approximation algorithm for minimal paths through pseudo-Euclidean spaces. In Peter Eades and Tadao Takaoka, editors, Algorithms and Computation, volume 2223 of Lecture Notes in Computer Science, pages 160–171. Springer Berlin / Heidelberg, 2001. doi:10.1007/3-540-45678-3_15.
[26] Zheng Sun and John H. Reif. On finding approximate optimal paths in weighted regions. Journal of Algorithms, 58(1):1–32, 2006.
[27] T. Varslot and G. Taraldsen. Computer simulation of forward wave propagation in soft tissue. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 52(9):1473–1482, 2005.

A Appendix

A.1 Proof of Proposition 2.2

Proposition 2.2: The second mixed derivative of the function x = x ( y, α ) is negative, i.e., x yα < 0 .

Proof: First, we consider the case where w − = w + . In this case, the function x ( y, α ) can be represented and differentiated explicitly. Recall that the path  ̄ π ( v, x ) in this case either consists of a single segment or is a three-segment path as shown in Figure 3 (b). So, in the case where the path consists of a single segment, we have x ( y, α ) = y cot α . In the case where the path consists of three segments, x ( y, α ) = y cos α/ √ κ 2 − cos 2 α , where κ = w/w − . The mixed derivatives x yα of these two functions are − 1 / sin 2 α and − κ 2 sin α/ ( κ 2 − cos 2 α ) 3 2 , respectively, and both are readily seen to be negative.

Next, we consider the case where w − ≠ w + . We introduce some additional notation needed for our presentation below (Figure 2). We denote the coordinates of the bending point a of the path  ̄ π ( v, x ) by a = ( x − , y + ). Furthermore, we set x + = x − x − and y − = y − y + .
Clearly, x − , x + , y − , and y + can be viewed as functions of the independent variables y and α . We have x = x − ( y − ( y, α ) , α ) + x + ( y + ( y, α ) , α ) and thereby x y = x − y − y − y + x + y + y + y . (37) We differentiate (37) with respect to α and obtain x yα = A + A 1 + A 2 , where A = x − y − y − yα + x + y + y + yα , A 1 = x − y − y − y − α y − y + x + y + y + y + α y + y , and A 2 = x − y − α y − y + x + y + α y + y . We complete the proof by showing that the terms A , A 1 , and A 2 are negative.

We begin with the term A = x − y − y − yα + x + y + y + yα . From the identity y = y − + y + , it follows that y − yα + y + yα = 0 and hence, A = ( x − y − − x + y + ) y − yα . From our notation, Snell’s law, and the relation cos α = cos θ sin φ , we derive the following: x + = z + cos α √ κ 2 − sin 2 φ , y + = z + √ sin 2 φ − cos 2 α √ κ 2 − sin 2 φ , and (38) x − = z − cos α √ 1 − sin 2 φ , y − = z − √ sin 2 φ − cos 2 α √ 1 − sin 2 φ . First, we compute x − y − and x + y + . We denote sin 2 φ by σ , differentiate x + and y + with respect to σ and obtain x + y + as the ratio x + σ /y + σ . We differentiate expressions (38) with respect to σ and obtain x + σ = z + cos α 2( κ 2 − σ ) 3 2 , y + σ = z + ( κ 2 − cos 2 α ) 2 √ σ − cos 2 α ( κ 2 − σ ) 3 2 (39) and hence x + y + = x + σ /y + σ = cos α √ sin 2 φ − cos 2 α κ 2 − cos 2 α . (40) Similarly, x − σ = z − cos α 2(1 − σ ) 3 2 , y − σ = z − (1 − cos 2 α ) 2 √ σ − cos 2 α (1 − σ ) 3 2 , and x − y − = x − σ /y − σ = cos α √ sin 2 φ − cos 2 α 1 − cos 2 α . (41) So, for the difference x − y − − x + y + we have x − y − − x + y + = cos α √ sin 2 φ − cos 2 α ( 1 1 − cos 2 α − 1 κ 2 − cos 2 α ) = ( κ 2 − 1) cos α √ sin 2 φ − cos 2 α (1 − cos 2 α )( κ 2 − cos 2 α ) . (42) The latter shows that sign( x − y − − x + y + ) = sign( κ 2 − 1) . (43) To prove the negativity of A = ( x − y − − x + y + ) y − yα , we show that sign( y − yα ) = sign(1 − κ 2 ). We have y − y = y − σ /y σ = y − σ / ( y − σ + y + σ ) = 1 1 + y + σ /y − σ and thus y − yα = − ( y + σ /y − σ ) α (1 + y + σ /y − σ ) 2 . Hence, it is sufficient to show that sign(( y + σ /y − σ ) α ) = sign( κ 2 − 1) . (44) So, we continue by establishing the sign of the derivative ( y + σ /y − σ ) α . We use (39) and (41) and compute the ratio y + σ /y − σ as follows y + σ /y − σ = z + ( κ 2 − cos 2 α )(1 − σ ) 3 / 2 z − (1 − cos 2 α )( κ 2 − σ ) 3 / 2 = ( z + /z − ) BC, where B = κ 2 − cos 2 α 1 − cos 2 α and C = ( 1 − σ κ 2 − σ ) 3 / 2 = ( 1 − sin 2 φ κ 2 − sin 2 φ ) 3 / 2 . (45) Then, we compute the derivatives B α and C α using the expressions (45) B α = sin 2 α (1 − cos 2 α ) − sin 2 α ( κ 2 − cos 2 α ) (1 − cos 2 α ) 2 = (1 − κ 2 ) sin 2 α (1 − cos 2 α ) 2 and C α = 3 2 ( 1 − sin 2 φ κ 2 − sin 2 φ ) 1 / 2 ( − sin 2 φ )( κ 2 − sin 2 φ ) − ( − sin 2 φ )(1 − sin 2 φ ) ( κ 2 − sin 2 φ ) 2 φ α = 3 2 ( 1 − sin 2 φ κ 2 − sin 2 φ ) 1 / 2 sin 2 φ (1 − κ 2 ) ( κ 2 − sin 2 φ ) 2 φ α and obtain ( z − /z + )( y + σ /y − σ ) α = B α C + BC α (46) = ( 1 − sin 2 φ κ 2 − sin 2 φ ) 3 2 (1 − κ 2 ) sin 2 α (1 − cos 2 α ) 2 + 3 2 ( 1 − sin 2 φ κ 2 − sin 2 φ ) 1 2 ( κ 2 − cos 2 α ) sin 2 φ (1 − κ 2 ) φ α (1 − cos 2 α )( κ 2 − sin 2 φ ) 2 = (1 − κ 2 ) √ 1 − sin 2 φ ( κ 2 − sin 2 φ ) 3 / 2 (1 − cos 2 α ) D, where D = sin 2 α (1 − sin 2 φ ) 1 − cos 2 α + 3 sin 2 φ ( κ 2 − cos 2 α ) 2( κ 2 − sin 2 φ ) φ α .
Omitting the positive multiplicative terms in (46), we derive that sign(( y + σ /y − σ ) α ) = sign((1 − κ 2 ) D ) and continue with the evaluation of sign( D ). We compute the derivative φ α using the identity y = y − + y + , which implies 0 = y − α + y + α . We differentiate expressions from (38) with respect to α and obtain y − α = ( z − / 2) (1 − sin 2 φ ) sin 2 α + sin 2 φ (1 − cos 2 α ) φ α (sin 2 φ − cos 2 α ) 1 / 2 (1 − sin 2 φ ) 3 / 2 , (47) y + α = ( z + / 2) ( κ 2 − sin 2 φ ) sin 2 α + sin 2 φ ( κ 2 − cos 2 α ) φ α (sin 2 φ − cos 2 α ) 1 / 2 ( κ 2 − sin 2 φ ) 3 / 2 . From these two, we obtain φ α = − I sin 2 α (1 − sin 2 φ )( κ 2 − sin 2 φ ) J sin 2 φ , where (48) I = z − ( κ 2 − sin 2 φ ) 1 / 2 + z + (1 − sin 2 φ ) 1 / 2 and (49) J = z − (1 − cos 2 α )( κ 2 − sin 2 φ ) 3 / 2 + z + ( κ 2 − cos 2 α )(1 − sin 2 φ ) 3 / 2 . Next, we substitute φ α from (48) in the expression D given in (46) and obtain D = sin α cos α (1 − sin 2 φ ) 2 J − 3 I ( κ 2 − cos 2 α )(1 − cos 2 α ) J (1 − cos 2 α ) . The term sin α cos α (1 − sin 2 φ ) and the denominator in this expression are positive and by expanding the numerator we have sign( D ) = sign[2 J − 3 I ( κ 2 − cos 2 α )(1 − cos 2 α )] = sign [ 2 z − (1 − cos 2 α )( κ 2 − sin 2 φ ) 3 / 2 + 2 z + ( κ 2 − cos 2 α )(1 − sin 2 φ ) 3 / 2 − 3 z − (1 − cos 2 α )( κ 2 − cos 2 α )( κ 2 − sin 2 φ ) 1 / 2 − 3 z + (1 − cos 2 α )( κ 2 − cos 2 α )(1 − sin 2 φ ) 1 / 2 ] = sign [ z − (1 − cos 2 α )( κ 2 − sin 2 φ ) 1 / 2 D − + z + ( κ 2 − cos 2 α )(1 − sin 2 φ ) 1 / 2 D + ] , where D − = 3 cos 2 α − 2 sin 2 φ − κ 2 and D + = 3 cos 2 α − 2 sin 2 φ − 1 . (50) Now, we observe that the terms multiplied by D + and by D − are positive and show that D − and D + are negative. We use cos α = cos θ sin φ and cos α κ = cos θ sin φ κ , where sin φ = κ sin φ κ , and obtain D + = 3 cos 2 α − 2 sin 2 φ − 1 = 2 cos 2 α − 2 sin 2 φ − sin 2 α = 2 cos 2 θ sin 2 φ − 2 sin 2 φ − sin 2 α = − 2 sin 2 φ sin 2 θ − sin 2 α < 0 and D − = 3 cos 2 α − 2 sin 2 φ − κ 2 = κ 2 (3 cos 2 α κ − 2 sin 2 φ κ − 1) = κ 2 ( − 2 sin 2 φ κ sin 2 θ − sin 2 α κ ) < 0 . From (50) and D + , D − < 0, we get sign( D ) = − 1. From (46) it follows that sign(( y + σ /y − σ ) α ) = − sign(1 − κ 2 ) and thus sign( y − yα ) = sign(1 − κ 2 ). The latter implies that A < 0.

Next, we consider the term A 1 . From the identity y − α + y + α = y α = 0, we get A 1 = y − α ( x − y − y − y − y − x + y + y + y + y ). To evaluate the sign of y − α , we substitute φ α from (48) in the expression (47) and, by omitting the positive multiplicative terms, we obtain sign( y − α ) = sign[( J − (1 − cos 2 α )( κ 2 − sin 2 φ ) I ) cos α ] (51) = sign { z + (1 − sin 2 φ ) 1 / 2 [( κ 2 − cos 2 α )(1 − sin 2 φ ) − (1 − cos 2 α )( κ 2 − sin 2 φ )] cos α } = sign[(1 − κ 2 ) cos α ] . Next, we evaluate the sign of the difference x − y − y − y − y − x + y + y + y + y . We compute x + y + y + as follows x + y + y + = ( x + y + ) σ /y + σ = cos α 2( κ 2 − cos 2 α ) √ σ − cos 2 α / z + ( κ 2 − cos 2 α ) 2 √ σ − cos 2 α ( κ 2 − σ ) 3 2 , where we have differentiated (40) with respect to σ = sin 2 φ and used (39). We compute x − y − y − in the same way and obtain x + y + y + = cos α ( κ 2 − sin 2 φ ) 3 2 z + ( κ 2 − cos 2 α ) 2 and x − y − y − = cos α (1 − sin 2 φ ) 3 2 z − (1 − cos 2 α ) 2 . (52) Furthermore, we have y + y = y + σ /y σ = y + σ / ( y + σ + y − σ ) and y − y = y − σ / ( y + σ + y − σ ).
So, we compute y + y and y − y using (39) as follows y + y = z + ( κ 2 − cos 2 α )(1 − sin 2 φ ) 3 2 J and y − y = z − (1 − cos 2 α )( κ 2 − sin 2 φ ) 3 2 J , (53) where J was defined in (49). Using (52) and (53), we determine sign( x − y − y − y − y − x + y + y + y + y ) = sign[(1 / (1 − cos 2 α ) − 1 / ( κ 2 − cos 2 α )) cos α ] = sign[( κ 2 − 1) cos α ] . The latter and (51) imply A 1 < 0.

Finally, we show that A 2 = x − y − α y − y + x + y + α y + y is negative too. We first compute the derivative x + y + α by differentiating the expression (40) with respect to α . We have x + y + α = P κ sin α + cos α sin φ cos φ ( κ 2 − cos 2 α ) φ α (sin 2 φ − cos 2 α ) 1 2 ( κ 2 − cos 2 α ) 2 , (54) where P κ = 2 cos 2 α ( κ 2 − sin 2 φ ) − sin 2 φ ( κ 2 − cos 2 α ). We substitute φ α using the expression (48) and multiply by y + y using the expression (53). After simplification, we obtain x + y + α y + y = z + sin α (1 − sin 2 φ ) 3 / 2 JP κ − cos 2 α ( κ 2 − cos 2 α )(1 − sin 2 φ )( κ 2 − sin 2 φ ) I J 2 (sin 2 φ − cos 2 α ) 1 2 ( κ 2 − cos 2 α ) . (55) Analogously, we obtain the following x − y − α y − y = z − sin α ( κ 2 − sin 2 φ ) 3 / 2 JP 1 − cos 2 α (1 − cos 2 α )(1 − sin 2 φ )( κ 2 − sin 2 φ ) I J 2 (sin 2 φ − cos 2 α ) 1 2 (1 − cos 2 α ) , (56) where P 1 = 2 cos 2 α (1 − sin 2 φ ) − sin 2 φ (1 − cos 2 α ). We sum (55) and (56), simplify, and omit the positive multiplicative terms, obtaining sign( x − y − α y − y + x + y + α y + y ) = (57) sign[ z − ( κ 2 − sin 2 φ ) 3 / 2 ( κ 2 − cos 2 α ) Q 1 + z + (1 − sin 2 φ ) 3 / 2 (1 − cos 2 α ) Q κ ] , where Q 1 = JP 1 − cos 2 α (1 − cos 2 α )(1 − sin 2 φ )( κ 2 − sin 2 φ ) I and Q κ = JP κ − cos 2 α ( κ 2 − cos 2 α )(1 − sin 2 φ )( κ 2 − sin 2 φ ) I . We denote the expression in the square brackets by R . Finally, we evaluate R and show that it is negative. First, we evaluate and simplify Q κ and Q 1 . We substitute the expressions I and J from (49) in Q κ and group the terms containing z − and z + . Then, we substitute the expression for P κ from (54) and by simplification we get Q κ = z − ( κ 2 − sin 2 φ ) 3 / 2 [(1 − cos 2 α ) P κ − cos 2 α ( κ 2 − cos 2 α )(1 − sin 2 φ )] + z + (1 − sin 2 φ ) 3 / 2 [( κ 2 − cos 2 α ) P κ − cos 2 α ( κ 2 − cos 2 α )( κ 2 − sin 2 φ )] = z − ( κ 2 − sin 2 φ ) 3 / 2 (cos 2 α − sin 2 φ )( κ 2 + cos 2 α − 2 κ 2 cos 2 α ) + z + (1 − sin 2 φ ) 3 / 2 (cos 2 α − sin 2 φ ) κ 2 ( κ 2 − cos 2 α ) . In the same way, we obtain the following representation for Q 1 Q 1 = z − ( κ 2 − sin 2 φ ) 3 / 2 (cos 2 α − sin 2 φ )(1 − cos 2 α ) + z + (1 − sin 2 φ ) 3 / 2 (cos 2 α − sin 2 φ )( κ 2 + κ 2 cos 2 α − 2 cos 2 α ) . Substitution of Q κ and Q 1 in (57) produces an expression for R of the form R = ( z − ) 2 R 1 + ( z + ) 2 R 2 + z − z + R 3 , where R 1 = ( κ 2 − sin 2 φ ) 3 ( κ 2 − cos 2 α )(1 − cos 2 α )(cos 2 α − sin 2 φ ) R 2 = (1 − sin 2 φ ) 3 (1 − cos 2 α ) κ 2 ( κ 2 − cos 2 α )(cos 2 α − sin 2 φ ) R 3 = (1 − sin 2 φ ) 3 / 2 ( κ 2 − sin 2 φ ) 3 / 2 (cos 2 α − sin 2 φ ) × [( κ 2 − cos 2 α )( κ 2 + κ 2 cos 2 α − 2 cos 2 α ) + (1 − cos 2 α )( κ 2 + cos 2 α − 2 κ 2 cos 2 α )] = (1 − sin 2 φ ) 3 / 2 ( κ 2 − sin 2 φ ) 3 / 2 (cos 2 α − sin 2 φ ) × [( κ 2 − cos 2 α ) 2 + κ 2 (1 − cos 2 α ) 2 + (1 − κ 2 ) 2 cos 2 α ] . From these expressions, it is clear that the terms R 1 , R 2 , and R 3 are negative, since cos 2 α − sin 2 φ < 0 and all other terms are positive. Thus, R and consequently A 2 are negative. The proposition is proved. ✷
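As a small independent check of the two explicit formulas used for the case w − = w + at the beginning of the proof, the following snippet recomputes the mixed derivatives of x ( y, α ) = y cot α and x ( y, α ) = y cos α/ √ κ 2 − cos 2 α symbolically; it is an illustration only and not part of the argument.

import sympy as sp

y, a, k = sp.symbols('y alpha kappa', positive=True)

# single-segment path: x(y, alpha) = y*cot(alpha)
x1 = y * sp.cot(a)
print(sp.simplify(sp.diff(x1, y, a)))   # equivalent to -1/sin(alpha)**2

# three-segment path: x(y, alpha) = y*cos(alpha)/sqrt(kappa**2 - cos(alpha)**2)
x2 = y * sp.cos(a) / sp.sqrt(k**2 - sp.cos(a)**2)
print(sp.simplify(sp.diff(x2, y, a)))   # equivalent to -kappa**2*sin(alpha)/(kappa**2 - cos(alpha)**2)**(3/2)

Both printed expressions are negative on the relevant range of α , in agreement with the claim x yα < 0.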
A.2 Upper bound on the constant C ABP ( t )

Next, we show that C ABP ( t ) ≤ 11 | AB | r ( e ) sin 2 ( γ/ 2) log 2 4 | AB | 2 h r ( e ) r ( A ) r ( B ) . Recall that λ = (1 + √ ε/ 8 sin( γ/ 2)) − 1 and 0 < ε ≤ 1. We use the following two inequalities, which are easily derived from the properties of logarithms: for any X ≥ 1, log λ − 1 X < 3 . 44 ln X √ ε sin( γ/ 2) , ln X ε ≤ ln 2 ε log 2 X . (58) By the definition of the constant C ABP ( t ) and (17), it follows that C ABP ( t ) ≤ 2 ε | AB | r ( e ) λ (1 − λ ) log 2 ε log λ − 1 | AB | ελ √ r ( A ) r ( B ) + ε | AB | r ( e )(1 − λ ) 2 log 2 ε (59) + 2 ε 2 log 2 ε log λ − 1 h εr ( e ) + 4 ε 2 log 2 ε log λ − 1 | AB | ελ √ r ( A ) r ( B ) . We estimate the terms on the right-hand side of this inequality using the inequalities (58) above. For the first one, we have 2 ε | AB | r ( e ) λ (1 − λ ) log 2 ε log λ − 1 | AB | ελ √ r ( A ) r ( B ) ≤ (2 √ 2 + 1) 2 √ ε | AB | √ 2 r ( e ) sin( γ/ 2) log 2 ε log λ − 1 2 | AB | ε √ r ( A ) r ( B ) ≤ 3 . 44(2 √ 2 + 1) 2 | AB | √ 2 r ( e ) sin 2 ( γ/ 2) log 2 ε ln 2 | AB | ε √ r ( A ) r ( B ) < 25 | AB | r ( e ) sin 2 ( γ/ 2) log 2 2 | AB | √ r ( A ) r ( B ) . (60) For the second one, we have ε | AB | r ( e )(1 − λ ) 2 log 2 ε ≤ (2 √ 2 + 1) 2 | AB | r ( e ) sin 2 ( γ/ 2) log 2 ε < 15 | AB | r ( e ) sin 2 ( γ/ 2) log 2 ε . (61) The sum of the third and fourth terms is estimated by 2 ε 2 log 2 ε ( log λ − 1 h εr ( e ) + 2 log λ − 1 | AB | ελ √ r ( A ) r ( B ) ) ≤ 14 ε 1 . 5 sin( γ/ 2) log 2 ε ( ln √ h εr ( e ) + ln 2 | AB | ε √ r ( A ) r ( B ) ) ≤ 14 ε 1 . 5 ln 2 sin( γ/ 2) log 2 2 | AB |√ h √ r ( e ) r ( A ) r ( B ) < 5 ε 1 . 5 sin( γ/ 2) log 2 4 | AB | 2 h r ( e ) r ( A ) r ( B ) . (62) We substitute (60), (61), and (62) in (59), use 2 r ( e ) ≤ | AB | , r ( e ) ≤ h , and obtain C ABP ( t ) < 23 | AB | r ( e ) sin 2 ( γ/ 2) log 4 | AB | 2 h r ( e ) r ( A ) r ( B ) .

A.3 Proof of Lemma 5.1

Lemma 5.1 The number of the maximal intervals covered by A ( u, ℓ 1 ) is at most seven. The corresponding list  ̄ A ( u, ℓ 1 ) is computed in O (log K ( ℓ 1 )) time, where K ( ℓ 1 ) denotes the number of Steiner points on ℓ 1 .

Proof: First, consider the case when the segments ℓ and ℓ 1 lie in different tetrahedra. We denote the weights of the tetrahedra containing ℓ and ℓ 1 by w − and w + , respectively. Let F be the plane defined by the face f and let F − and F + be the two half-spaces defined by F , where we assume that ℓ is in F − and ℓ 1 is in F + . Furthermore, we assign weights w − and w + to F − and F + , respectively, and we consider the weighted shortest path  ̄ π ( u, y ) between u , which is on ℓ , and an arbitrary point y on ℓ 1 . As discussed in Section 2, in this case, the path  ̄ π ( u, y ) has the form  ̄ π ( u, y ) = { u, a ( y ) , y } , where the point a ( y ) lies in F and is uniquely defined by Snell’s law (Figure 3 (a)). By our definition, there is an edge joining u to a Steiner point u 1 ∈ ℓ 1 if and only if the point a ( u 1 ) lies in the interior of the triangle f . The interior can be represented as the intersection of three half-planes defined by the lines containing the sides of f . So, we first obtain an upper bound on the number of maximal intervals covered by A ( u, ℓ 1 ) in the case where f is a half-plane defined by an arbitrary line L in F . There is a one-to-one correspondence between the endpoints of the maximal intervals and the points y on ℓ 1 for which a ( y ) lies on L .
Hence, the number of maximal intervals covered by A ( u, ℓ 1 ) can be estimated by counting the number of points y for which a ( y ) lies on L . When the point y traverses the segment ℓ 1 , the bending point a ( y ) defines a curve in the plane F , which we denote by a ( y ). We consider a Cartesian coordinate system O μ,ν in F such that the segment ℓ 1 projects onto the segment ( μ 1 , μ 2 ) of the μ -axis and u projects onto the ν -axis, say, at the point u ′ = (0 , ν 0 ). We denote the μ -coordinate of the projection of y by μ ( y ) or simply by μ when no ambiguity arises. Then, as we discussed in Section 2, the curve a ( y ) has a representation a ( μ ( y )) = ( τ μ ( y ) , (1 − τ ) ν 0 ), where τ is the unique solution of the equation (2). Let L ( μ, ν ) be a linear function such that L ( μ, ν ) = 0 represents the line L in O μ,ν . Then, a point a ( y ) belongs to L if L ( τ μ ( y ) , (1 − τ ) ν 0 ) = 0. Thus, we obtain the following system of algebraic equations for τ and μ ( y ) { w − τ √ τ 2 ( μ 2 ( y )+ ν 2 0 )+( z − ) 2 = w + (1 − τ ) √ (1 − τ ) 2 ( μ 2 ( y )+ ν 2 0 )+( z + ) 2 L ( τ μ ( y ) , (1 − τ ) ν 0 ) = 0 , (63) where z + is the Euclidean distance from ℓ 1 to F and z − is the Euclidean distance from u to F . Eliminating τ from this system leads to a degree four algebraic equation for μ ( y ). Therefore, there may be no more than four intersections between a ( y ) and L and, hence, the number of the maximal intervals covered by A ( u, ℓ 1 ) in the case where f is a half-plane is at most three. In the case where f is a triangle, we denote by f 1 , f 2 , and f 3 the three half-planes defining f and by I 1 , I 2 , and I 3 the corresponding sets of maximal intervals. Then, any maximal interval △ defined by f is obtained as an intersection △ 1 ∩△ 2 ∩△ 3 , where △ i ∈ I i , i = 1 , 2 , 3. Each of the sets I i contains at most three intervals and it is easily seen that the number of intervals that are intersections of the type △ 1 ∩ △ 2 ∩ △ 3 does not exceed seven.

Figure 13: The figure illustrates the curves a ( y ) and a 1 ( y ) in the case (a) when ν − + ν + > ν 0 and in the case (b) when ν − + ν + ≤ ν 0 . A line L and its intersections with the curves a ( y ) and a 1 ( y ) are shown. The number of maximal intervals covered by A ( u, ℓ 1 ) in the case when f is the lower half-plane defined by L is 2 for both instances (a) and (b).

The list  ̄ A ( u, ℓ 1 ) is computed by finding the positions of the solutions of the system (63) with respect to the elements of V ( ℓ 1 ). This can be done by performing binary search and hence the computation of the list  ̄ A ( u, ℓ 1 ) in this case takes O (log K ( ℓ 1 )) time.

Next, we consider the case where u and ℓ 1 lie in the same tetrahedron, say the one that is in F − . If the weight w − is smaller than w + , then the path  ̄ π ( u, y ) has the form { u, a ( y ) , y } . The point a ( y ) is the point in F that lies on the segment ( u, y ′ ), where y ′ is the point symmetric to y with respect to F . So, the curve a ( y ), in this case, is a segment. Hence, each of the sets I i , for i = 1 , 2 , 3, in this case, is either empty or consists of a single interval. Consequently, there can be at most one interval obtained as an intersection of intervals in these sets. Thus, in this case, there can be no more than one maximal interval covered by A ( u, ℓ 1 ).
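For illustration, the first equation of (63) determines τ ∈ (0 , 1), and hence the bending point a ( y ) = ( τ μ ( y ) , (1 − τ ) ν 0 ), uniquely: its left-hand side is strictly increasing and its right-hand side strictly decreasing in τ . The sketch below locates τ by bisection; the parameter names mirror (63), but the routine is illustrative and is not the procedure used by the algorithm.

import math

def bending_point(mu, nu0, z_minus, z_plus, w_minus, w_plus, iters=60):
    """Solve the weighted Snell condition of (63) for tau by bisection and return
    the bending point a(y) = (tau*mu, (1 - tau)*nu0).  Illustrative sketch only."""
    d = mu * mu + nu0 * nu0
    def g(t):   # increasing in t: negative at t = 0, positive at t = 1
        lhs = w_minus * t / math.sqrt(t * t * d + z_minus * z_minus)
        rhs = w_plus * (1.0 - t) / math.sqrt((1.0 - t) ** 2 * d + z_plus * z_plus)
        return lhs - rhs
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    tau = 0.5 * (lo + hi)
    return tau * mu, (1.0 - tau) * nu0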
Finally, we consider the case where w − is greater than w + . In this case, the shortest path  ̄ π ( u, y ) has the form { u, a ( y ) , a 1 ( y ) , y } , where the segment ( a ( y ) , a 1 ( y )) lies in F . We discussed the structure of this path in Section 2 and illustrated it in Figure 3 (b). The curves a ( y ) and a 1 ( y ) have explicit representations, as we detail below. We set ν − = z − tan φ ∗ and ν + = z + tan φ ∗ , where the critical angle φ ∗ is defined by sin φ ∗ = w + /w − and z − , z + are the distances from u and ℓ 1 to F , respectively. In this notation, the curves a ( y ) and a 1 ( y ) have the following representations in O μ,ν a ( y ) = { ν = z + ν 0 z − + z + for | μ | < μ 0 , ν = ν 0 − √ ( ν − ) 2 − μ 2 for μ 0 ≤ | μ | ≤ ν − , (64) a 1 ( y ) = { ν = z + ν 0 z − + z + for | μ | < μ 0 , ( ν 0 − ν ) √ ( ν + ) 2 − ν 2 = μν for | μ | ≥ μ 0 , (65) where μ 0 = z − z − + z + √ ( ν − + ν + ) 2 − ν 2 0 if ν − + ν + > ν 0 and μ 0 = 0 if ν − + ν + ≤ ν 0 . In the case where ν − + ν + ≤ ν 0 (Figure 13 (b)), the curve a ( y ) is a half-circle centered at the point (0 , ν 0 ) with radius ν − . The curve a 1 ( y ) is symmetric with respect to the ν -axis. The part of a 1 ( y ) to the right of the ν -axis is monotonically decreasing. It is convex and approaches the μ -axis at infinity. In the case where ν − + ν + > ν 0 (Figure 13 (a)), the curves a ( y ) and a 1 ( y ) have a common part – a horizontal segment that projects onto ( − μ 0 , μ 0 ) on the μ -axis. For | μ | > μ 0 , the curves are the same as in the case ν − + ν + ≤ ν 0 .

Again, we first consider the case where f is a half-plane defined by a line L in F . There is an edge between u and a point y on ℓ 1 if and only if the segment ( a ( y ) , a 1 ( y )) lies entirely inside f , i.e., in one of the half-planes defined by L . By a simple case analysis, we determine that in the case where f is a half-plane, the number of maximal intervals covered by A ( u, ℓ 1 ) is at most 2. Hence, in the general case when f is a triangle, each of the sets I 1 , I 2 , and I 3 , as defined above, contains at most 2 intervals. The number of intervals that can be intersections of the type ∩ 3 i =1 △ i with △ i ∈ I i is at most 4. The latter proves that the number of maximal intervals covered by A ( u, ℓ 1 ) in the case where ℓ and ℓ 1 are in the same tetrahedron, whose weight w − is bigger than the weight w + of the neighboring tetrahedron, is at most 4. Computation of the list  ̄ A ( u, ℓ 1 ) in this case is done again by binary search and takes O (log K ( ℓ 1 )) time. The lemma is proved. ✷
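The proof above repeatedly forms intersections △ 1 ∩ △ 2 ∩ △ 3 of at most three small families of intervals. A generic sketch of this intersection step follows; it does not re-derive the bounds of seven or four components, which rely on the geometry of the curves a ( y ) and a 1 ( y ).

def intersect_families(*families):
    """Intersect several unions of intervals, each given as a sorted list of disjoint
    (start, end) pairs, and return the result in the same representation.  Generic
    helper, illustrative only."""
    def intersect_two(A, B):
        out, i, j = [], 0, 0
        while i < len(A) and j < len(B):
            lo = max(A[i][0], B[j][0])
            hi = min(A[i][1], B[j][1])
            if lo <= hi:                  # the two current intervals overlap
                out.append((lo, hi))
            if A[i][1] < B[j][1]:         # advance the interval that ends first
                i += 1
            else:
                j += 1
        return out
    result = families[0]
    for family in families[1:]:
        result = intersect_two(result, family)
    return result

# e.g., intersect_families(I1, I2, I3) with at most three intervals per family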