arXiv:1707.05077v2 [cs.RO] 21 May 2018

Lower Bounds for Searching Robots, some Faulty∗

Andrey Kupavskii†
University of Birmingham and Moscow Inst. of Physics and Technology
kupavskii@yandex.ru

Emo Welzl
Department of Computer Science, ETH Zurich
emo@inf.ethz.ch

May 22, 2018

Abstract

Suppose we are sending out k robots from 0 to search the real line at constant speed (with turns) to find a target at an unknown location; f of the robots are faulty, meaning that they fail to report the target although visiting its location (called crash type). The goal is to find the target in time at most λ|d|, if the target is located at d, |d| ≥ 1, for λ as small as possible. We show that this cannot be achieved for

λ < 2·ρ^ρ/(ρ−1)^{ρ−1} + 1,  ρ := 2(f+1)/k,

which is tight due to earlier work (see J. Czyzowicz, E. Kranakis, D. Krizanc, L. Narayanan, J. Opatrny, PODC'16, where this problem was introduced). This also gives some better than previously known lower bounds for so-called Byzantine-type faulty robots that may actually wrongly report a target.

In the second part of the paper we deal with the m-rays generalization of the problem, where the hidden target is to be detected on m rays all emanating from the same point. Using a generalization of our methods, along with a useful relaxation of the original problem, we establish a tight lower bound for this setting as well (as above, with ρ := m(f+1)/k). When specialized to the case f = 0, this resolves the question on parallel search on m rays, posed by three groups of scientists some 15 to 30 years ago: by Baeza-Yates, Culberson, and Rawlins; by Kao, Ma, Sipser, and Yin; and by Bernstein, Finkelstein, and Zilberstein. The m-rays generalization is known to have connections to other, seemingly unrelated, problems, including hybrid algorithms for on-line problems, and so-called contract algorithms.

∗This research was initiated at the 15th Gremo's Workshop on Open Problems (GWOP), Pochtenalp, Switzerland, June 12–16, 2017.
†The research was partially supported by the Swiss National Science Foundation grants no. 200020-162884 and 200021-175977 and by the EPSRC grant no. EP/N019504/1.

1 Introduction

The following problem was raised in 1963 by Bellman [10] and addressed in the 60s by Beck [6, 7, 8] and Franck [18] (quoting from Beck's paper [6]): "A man in an automobile searches for another man who is located at some point of a certain road. He starts at a given point and knows in advance the probability that the second man is at any given point of the road. Since the man being sought might be in either direction from the starting point, the searcher will, in general, have to turn around many times before finding his target. How does he search so as to minimize the expected distance traveled? When can this minimum expectation actually be achieved?" The authors were interested in proving the existence of and finding optimal strategies for different distributions, as well as proving non-existence results. In particular, in [8] it is shown that, without a priori knowledge of the distribution, we cannot hope for the distance traveled to be smaller than 9 times the expected distance to the target. A variant of the problem was rediscovered in the late 80s by computer scientists, and it became known as the cow path problem (or cow at a fence problem). Quoting from [4]: "A cow comes to an infinitely long straight fence. The cow knows that there is a gate in the fence, and she wants to get to the other side. Unfortunately, she doesn't know where the gate is located. Assume that the gate is positioned in an integer number of steps away from the cow and that the cow can only recognize the gate when directly upon it. How can she optimally find the gate?" By optimality the authors mean the smallest worst-case ratio between the distance traveled and the distance to the gate (the competitive ratio).
It is easy to see that the strategy in which the cow goes 1 to the left, then back and 2 to the right, then back and 4 to the left, etc., gives the worst-case ratio of 9. It is shown in [4] (de facto reproving one of the results of [8]) that one cannot hope for a better ratio. The cow path problem gave rise to a significant line of research in computer science. Several variants of the problem were addressed ([12, 15]), as well as its planar variants [4, 5] that go back to another famous search problem, the lost at sea problem [9, 19]. We refer the reader to the book [2], in which many of the single robot search problems, including the cow path problem, are discussed. Motivated by numerous applications, different possible variants of collective search problems were considered in the context of random walks [1], foraging [16], and unmanned aerial vehicles [22]. In [13], [14] the authors proposed the following parallel search version of the cow path problem: We are sending out k robots from 0 to explore the real line R, each one traveling at unit speed, changing directions according to some strategy we are free to prescribe. There is a target hidden at a point x of the real line, where |x| ≥ 1. The robots may detect the target only when they arrive at the point where the target is hidden. Moreover, f robots are faulty. The problem is to determine the minimal competitive ratio: the supremum, over all possible target locations x, |x| ≥ 1, of the ratio τ(x)/|x|, where τ(x) is the time needed for the non-faulty robots to be sure about the location of the target at x. In [14] the authors focus on faults of the crash type: when passing through the target, the robot does not detect it. We denote the competitive ratio in this setting by A(k, f). In [13] the authors study faults of the Byzantine type: a faulty robot may stay silent even when it detects or visits the target, or may claim that it has found the target when, in fact, it has not found it.
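As a quick numerical illustration of the doubling strategy just described (the script is ours, not part of the original analysis), the adversary's best move is to hide the target just past a turning point; the resulting ratio approaches 9 from below:

```python
# Worst-case competitive ratio of the doubling ("zig-zag") strategy with
# turning points 1, 2, 4, 8, ... on alternating sides.  For a target just
# past the i-th turning point t_i, the robot reaches it only on its next
# sweep in that direction, after 2*(t_1 + ... + t_{i+1}) extra travel.

def doubling_ratio(i):
    """Competitive ratio for a target just past the i-th turning point (i >= 1)."""
    turns = [2 ** j for j in range(i + 1)]   # t_1, ..., t_{i+1}
    x = 2 ** (i - 1)                         # t_i, the target's distance
    return (2 * sum(turns)) / x + 1          # equals 9 - 2^(2-i)

ratios = [doubling_ratio(i) for i in range(1, 30)]
print(max(ratios))   # approaches 9 from below, never reaching it
```

The printed supremum over the sampled turning points is 9 − 2^{−27}, consistent with the claimed worst-case ratio of 9.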
In this case we denote the competitive ratio by B(k, f). One of our contributions is the complete resolution of the case of faults of crash type (or silent robots):

Theorem 1. Let us denote s := 2(f+1) − k and ρ := 2(f+1)/k. If 0 < s ≤ k (i.e. 1 < ρ ≤ 2), then we have

A(k, f) = 2·((k+s)^{k+s}/(s^s·k^k))^{1/k} + 1 = 2·ρ^ρ/(ρ−1)^{ρ−1} + 1. (1)

We note that if s ≤ 0, that is, k ≥ 2(f+1), then by sending f+1 of the robots to ∞ and f+1 of the robots to −∞ we achieve a competitive ratio 1. We also note that s > k, i.e. f+1 > k, means that k = f, in which case we cannot find the target since all robots are faulty. In [14] the authors exhibited a natural strategy which achieves the ratio in (1), as well as provided some lower bounds for A(k, f). Thus, our contribution in Theorem 1 is the matching lower bound. A lower bound for crash-type faulty robots is also a lower bound for Byzantine-type faulty robots. By this relation, our bounds also improve on the known bounds for Byzantine-type faulty robots, e.g. the bound of B(3, 1) ≥ 3.93 from [13] is now at B(3, 1) ≥ (8/3)·4^{1/3} + 1 ≈ 5.23. More importantly, we develop a method which allows us to deal with problems of this type. We demonstrate this in the proof of Theorem 1 in Section 2. In Section 3, we generalize Theorem 1 to the case of m rays, thus resolving an old question, which appeared in at least three different papers. We also give several useful relaxations of the original problem, which shed more light on this group of questions. Some of the less important proofs are deferred to the Appendix.

2 Proof of Theorem 1

We are given k robots, f of them faulty of crash type. To assure that the target at x ∈ R is found in time at most λ|x|, the point x has to be visited by at least f+1 robots in time (otherwise the adversary will place the target there and choose the first f robots arriving at x to be faulty and stay silent).
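The two expressions in (1) are equal, and the improved Byzantine bound B(3, 1) ≥ (8/3)·4^{1/3} + 1 follows by plugging in k = 3, f = 1. A short script (illustrative, ours) evaluating both forms:

```python
from math import isclose

def A_lower(k, f):
    """Right-hand side of (1): 2*((k+s)^(k+s)/(s^s * k^k))^(1/k) + 1,
    with s = 2(f+1) - k; valid for 0 < s <= k."""
    s = 2 * (f + 1) - k
    assert 0 < s <= k
    return 2 * ((k + s) ** (k + s) / (s ** s * k ** k)) ** (1 / k) + 1

def A_lower_rho(k, f):
    """Equivalent form via rho = 2(f+1)/k: 2*rho^rho/(rho-1)^(rho-1) + 1."""
    rho = 2 * (f + 1) / k
    return 2 * rho ** rho / (rho - 1) ** (rho - 1) + 1

# k = 3, f = 1: both forms give (8/3)*4^(1/3) + 1 ~ 5.2333;
# k = 1, f = 0 recovers the classic cow path ratio 9.
print(A_lower(3, 1), A_lower_rho(3, 1), A_lower(1, 0))
```

The equality of the two forms follows from s = k(ρ−1) and k+s = kρ, so the k-th root collapses to ρ^ρ/(ρ−1)^{ρ−1}.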
Therefore, for each pair of points (x, −x), x ∈ R≥1, at least 2(f+1) − k = s robots need to pass through both points before time λx elapses. If a robot visits both x and −x in time λx, then we say that it λ-covers x. Similarly, if x is λ-covered s times by a group of several robots, we say that it is s-fold λ-covered. Since we are going to work with coverings in different settings, let us give the following general definition.

Definition 2. Fix λ > 1 and s ∈ N. For a covering problem of a certain object with some set of rules (called the setting), and a certain set S, we say that S is λ-covered if every point of S at distance x from the origin is visited by the robot within time λx (where "visited" is interpreted within the setting, and should comply with the rules of the setting). We may also say that this is a λ-covering of S. Similarly, we say that a robot or a set of robots produces an s-fold λ-covering of S (in some setting) if each point is λ-covered at least s times. Note that this definition is strategy-dependent.

Let us now formalize the covering setting that we are working with in this proof.

The symmetric line-cover setting: The goal is to cover R≥1. The robot (or robots) moves on R with unit speed, starting from the origin. A point x ∈ R≥1 is covered by a robot r at the moment when r has visited both x and −x; it is λ-covered if that happens within time λx. A robot can cover any point at most once.

Throughout the proof we will be working with the s-fold λ-covering of R≥1 with k robots in the symmetric line-cover setting for fixed s and λ. Therefore, slightly abusing notation, we will use the term ±-covering to refer to this covering problem. By the above discussion, any valid strategy with competitive ratio λ for the original problem (of detecting a target with k robots, f of which are faulty of the crash type) is also a strategy for ±-covering R≥1 with k robots.
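The symmetric line-cover setting can be simulated directly. The following sketch (ours; parameters illustrative) computes the first time a single robot with alternating turning points has visited both +x and −x, and hence whether it λ-covers x:

```python
def cover_time(turns, x):
    """First time a robot (starting in the positive direction, turning at
    +t1, -t2, +t3, ...) has visited both +x and -x.  On excursion i it first
    reaches distance x at time 2*(t1 + ... + t_{i-1}) + x, provided t_i >= x."""
    first = {}      # side -> first visit time of distance x on that side
    prefix = 0.0
    for i, t in enumerate(turns):
        side = i % 2            # 0: positive side, 1: negative side
        if side not in first and t >= x:
            first[side] = 2 * prefix + x
        prefix += t
    return max(first.values()) if len(first) == 2 else float("inf")

# Doubling strategy: every x >= 1 is 9-covered.  Spot-check x = 3:
turns = [2 ** j for j in range(20)]
print(cover_time(turns, 3) / 3)   # a ratio of at most 9
```

For x = 3 the robot covers the pair (3, −3) at time 17, a ratio of 17/3, comfortably below the worst case of 9.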
We actually obtain a lower bound as in (1) for the competitive ratio for the ±-covering strategies. Our goal is to prove the following result.

Theorem 3. It is impossible to ±-cover R≥1 with k robots if

λ < 2·((k+s)^{k+s}/(s^s·k^k))^{1/k} + 1.

For each individual robot r we will restrict ourselves to strategies described by an infinite nondecreasing sequence T = T^{(r)} = (t_1, t_2, t_3, . . .) over R≥1, with the interpretation that the robot is sent till +t_1 in the positive direction, till −t_2 in the negative direction, till +t_3 in the positive direction, etc. We argue that this carries no loss of generality. First, it is clear that for ±-covering, we may assume to start in the positive direction. Second, if the robot ever turns in territory that was visited before by this robot, we can shift the turning point in the direction where we came from (since we skip only parts of R which we have visited already, and any further visits occur now even earlier). These two observations already restrict to movements that alternate between turning at positive and negative numbers, with the positive turning points increasing and the absolute values of the negative turning points increasing. Now suppose for 0 < x_2 < x_1, the robot turns at x_1 and then at −x_2. Note that the interval (x_2, x_1] is visited, but [−x_1, −x_2) is not, so as for ±-covering, we may as well turn at x_2 instead of x_1 (since after turning at −x_2, we will return to (x_2, x_1] before we actually extend our trip to points smaller than −x_2). Thus, we can transform any strategy into an at least as efficient strategy of the type as above.

Given a strategy T = T^{(r)} = (t_1, t_2, t_3, . . .) of a single robot r, it is easy to see that for x with t_{i−1} < x ≤ t_i, it takes exactly 2(t_1 + t_2 + · · · + t_i) + x time to visit both x and −x (note that indeed the sum goes up to t_i). So in order for the robot to λ-cover x, we need

x ≥ (1/µ)·(t_1 + t_2 + · · · + t_i), for µ := (λ−1)/2. (2)

Let us set

t''_i := max{ (1/µ)·(t_1 + t_2 + · · · + t_i), t_{i−1} } (3)

unless this value exceeds t_i, in which case we leave t''_i undefined. All i with t''_i defined we call fruitful. It is clear that robot r λ-covers exactly

Cov_µ(T) := ∪_{i fruitful} [t''_i, t_i].

Note that if we skip a turn t_{i*} in T, then, of course, we lose the interval [t''_{i*}, t_{i*}], but the following intervals (for i > i*) extend even further to the left. That is, turning points that are not fruitful can be skipped, in this way definitely λ-covering at least as much. Moreover, it becomes clear that if t_{i+1} = t_i (hence t''_{i+1} = t_{i+1}, if defined), then we can skip t_{i+1} in the strategy, thereby covering at least as much.

We move now to considering all robots, not just individual ones. If we are given a ±-covering by k robots, then each point of R≥1 is contained in Cov_µ(T^{(r)}) for at least s (out of the k) robots. By truncating some of the intervals from [t''_i, t_i] to half-open intervals (t'_i, t_i] (with t''_i ≤ t'_i < t_i) or even skipping some intervals, we may assume that each point of R>1 is contained in exactly s such assigned intervals (of s different robots), and that the turning points of each robot coincide with the right ends of the corresponding intervals. Recall here that if an interval is not needed, we can actually skip the corresponding turning point in the robot's strategy without affecting the validity of the other intervals used. We call the new half-open intervals (t'_i, t_i] assigned intervals. Let us emphasize, as a consequence of t'_i ≥ t''_i and (3), that

t'_i ≥ (1/µ)·(t_1 + t_2 + · · · + t_i) (4)

or, equivalently,

t_i ≤ µ·t'_i − (t_1 + t_2 + · · · + t_{i−1}). (5)

Next we accumulate all assigned intervals of all k robots in one sequence, sorted by their left endpoints, ties broken arbitrarily. Consider a prefix P of this sequence, long enough so that all robots already have some interval present in P. What part of R>1 do these intervals cover, and how often?
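Definition (3) is easy to compute. The sketch below (ours, purely illustrative) lists the λ-covered intervals [t''_i, t_i] of a single robot and shows that, for the doubling strategy at λ = 9, every turn is fruitful and the intervals chain together:

```python
def covered_intervals(turns, lam):
    """Intervals [t''_i, t_i] lambda-covered by one robot with turning sequence
    `turns` (alternating directions, as in the text), following (3):
    t''_i = max((t_1 + ... + t_i)/mu, t_{i-1}), mu = (lam - 1)/2.
    Turns with t''_i > t_i are not fruitful and contribute nothing."""
    mu = (lam - 1) / 2
    out, prefix, prev = [], 0.0, 0.0
    for t in turns:
        prefix += t
        lo = max(prefix / mu, prev)
        if lo <= t:                 # fruitful turn
            out.append((lo, t))
        prev = t
    return out

# Doubling strategy at lam = 9 (mu = 4): consecutive intervals share endpoints,
# so the robot +-covers all of [1/4, 32] with these six turns.
print(covered_intervals([2 ** j for j in range(6)], 9))
```

Each interval's left endpoint equals the previous interval's right endpoint, matching the fact that 9 is exactly the achievable ratio for a single robot.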
There is a value a = a(P) ≥ 1 such that (1, a] is covered s times, and then there are points a =: a_s ≤ a_{s−1} ≤ a_{s−2} ≤ . . . ≤ a_1, such that every point in (a_{j+1}, a_j] is covered exactly j times, j = 1, . . . , s−1, and (a_1, ∞) is not yet covered. (Note that some of these intervals may be empty.) Associate with P the multiset A(P) := {a_s, a_{s−1}, . . . , a_1} (which can be seen as a description of the covering situation of P). We need one more notion. Define the load, L^{(r)}(P), of robot r in P as the sum of all turning points occurring in the intervals of r in P, i.e.

L^{(r)}(P) := t^{(r)}_1 + t^{(r)}_2 + · · · + t^{(r)}_{i_r}, for i_r such that (t'^{(r)}_{i_r}, t^{(r)}_{i_r}] is in P and (t'^{(r)}_{i_r+1}, t^{(r)}_{i_r+1}] is not in P.

We immediately observe

L^{(r)}(P) ≤ µ·t'^{(r)}_{i_r} ≤ µa (due to (4)). (6)

As we extend the prefix P by the next assigned interval, this interval has to be of the form (t'^{(r*)}_i, t^{(r*)}_i], for some robot r*. Note that t'^{(r*)}_i = a and

t^{(r*)}_i = µ*·a − L^{(r*)}(P), for some µ* ≤ µ (due to (5)).

Set P+ to be the prefix P extended by this next interval (t'^{(r*)}_i, t^{(r*)}_i]. We immediately see that the loads of the robots do not change as we move from P to P+, except for robot r*, for which we get

L^{(r*)}(P+) = L^{(r*)}(P) + t^{(r*)}_i = µ*·a.

Also, a_s in A(P) gets replaced by t^{(r*)}_i = µ*·a − L^{(r*)}(P) in A(P+); all other elements stay the same. (Note that a(P+) = a(P) is still a possibility if a_s = a_{s−1}.) We want to analyze the function

f(P) := ∏_{r=1}^{k} [ (L^{(r)}(P))^s / ∏_{y∈A(P)} y ] (7)

and understand how it changes as we move from P to P+. With L^{(r)}(P) ≤ µa for all robots r and y ≥ a for all y ∈ A(P), we get

f(P) ≤ ((µa)^s / a^s)^k = µ^{ks}, (8)

i.e. f stays bounded. With the changes of A and the loads from P to P+ described above, we see that

f(P+)/f(P) = (a^k / (L^{(r*)}(P))^s) · ((µ*a)^s / (µ*·a − L^{(r*)}(P))^k) = µ*^s / (x^s·(µ*−x)^k), for x := L^{(r*)}(P)/a, 0 < x < µ*.

We show that this ratio is larger than 1 if µ* ≤ µ is too small, independently of x. We use the following lemma.

Lemma 4.
For µ* > 0, the polynomial x^s·(µ*−x)^k attains its maximum on the range 0 < x < µ* at x = s·µ*/(k+s).

Proof. The derivative of the polynomial is (µ*−x)^{k−1}·x^{s−1}·(s(µ*−x) − kx), which has only one zero, x = s·µ*/(k+s), in (0, µ*). Since the values of the polynomial at 0 and µ* are 0, the zero of the derivative corresponds to the maximum of the function.

Lemma 5. For 0 < x < µ*,

µ*^s / (x^s·(µ*−x)^k) ≥ (k+s)^{k+s} / (s^s·k^k·µ*^k),

and thus µ*^s / (x^s·(µ*−x)^k) ≥ δ for δ := (k+s)^{k+s} / (s^s·k^k·µ^k) > 1, provided µ < ((k+s)^{k+s}/(s^s·k^k))^{1/k}.

Proof. The first inequality follows immediately from Lemma 4 after substituting the value x = s·µ*/(k+s). The second inequality is obvious since we know that µ* ≤ µ.

Applying Lemma 5, we get that f(P+)/f(P) ≥ δ > 1, provided µ < ((k+s)^{k+s}/(s^s·k^k))^{1/k}. It implies that f(P) is unbounded for larger and larger prefixes P. This contradicts the bound on f(P) from (8). Therefore, to have a valid strategy, one must have µ ≥ ((k+s)^{k+s}/(s^s·k^k))^{1/k}, which concludes the proof of Theorem 3 and thus of Theorem 1.

3 Generalization to m Rays

The following natural generalization of the original cow path problem was studied by several authors [5], [11], [20], [21], [23]. Consider m rays emanating from 0, and assume that there is a hidden target on one of the rays. We send a robot from 0 at unit speed, and the goal is to find (pass over) the target. It was shown in [4] that the best possible competitive ratio for the problem is 1 + 2·m^m/(m−1)^{m−1}. This problem got a lot of attention due to its numerous connections to other, seemingly unrelated, problems. In particular, it is related to hybrid algorithms [3], [17], [20] for on-line problems. We quote from [20] (with "m" for "w" and "k" for "λ"): "We study on-line strategies for solving problems with hybrid algorithms. There is a problem Q and m basic algorithms for solving Q. For some k ≤ m, we have a computer with k disjoint memory areas, each of which can be used to run a basic algorithm and store its intermediate results.
In the worst case, only one basic algorithm can solve Q in finite time, and all of the other basic algorithms run forever without solving Q. To solve Q with a hybrid algorithm constructed from the basic algorithms, we run a basic algorithm for some time, then switch to another, and continue this process until Q is solved. The goal is to solve Q in the least amount of time." Interpret the calculations done while performing the i-th basic algorithm as the i-th ray emanating from 0 (where 0 corresponds to the initial state). Then, being at a point x_i in the calculations of the i-th algorithm, it costs us at most x_i + x_j to pass from this point to the point x_j in the calculations of the j-th algorithm. This interpretation was given in [17], using which the authors exhibited an algorithm that is competitive with respect to each of a given set of algorithms. Another related field is contract algorithms. The connection was established in [11], and the setting is as follows. A processor is supposed to advance in m computational problems, until it is interrupted. Upon interruption, it is given an index i ∈ {1, . . . , m}, and it returns its most advanced computation on the i-th problem. Again, interpreting each problem as a ray emanating from 0, we easily see the connection to the m-ray problem. Both papers [11] and [20] were concerned with the variation of the m-ray problem in which k robots conduct the search simultaneously. When introducing several robots, we may use two possible measures for the competitive ratio: T/d and D/d, where T is the time spent until the target is found (given that all robots travel at unit speed), and D is the total distance travelled by all robots until the target is found. The distance version of the problem was resolved in [20], in which the authors determined the optimal competitive ratio.
Somewhat unfortunately, the optimal algorithm does not really use multiple robots simultaneously: all but one robot search on one ray each, while the last robot performs the search on all remaining rays. The time version of the problem was addressed in [11], where it was resolved for cyclic strategies. Informally speaking, a cyclic strategy is a strategy in which the advancement of the search on the rays happens in cyclic order, and at each step each robot is assigned a farther distance to explore on a ray than it previously explored on other rays. We address the natural faulty generalization of the problem, where we have m rays and k robots, f out of which are faulty. Let us denote the corresponding function by A(m, k, f). That is, A(k, f) = A(2, k, f). The following theorem gives the precise value of A(m, k, f) in all (meaningful) cases. In particular, the f = 0 case of our theorem answers the question posed in several research papers mentioned before. It strengthens the result from [11], answering their question whether their result can be generalized to all strategies, gives an appropriate analogue of the result from [20] for competitive ratios measured in terms of time, and answers the question posed by Baeza-Yates, Culberson, and Rawlins in [5].

Theorem 6. Given that f < k < m(f+1) and q := m(f+1), we have

A(m, k, f) = λ_0 := 2·(q^q/((q−k)^{q−k}·k^k))^{1/k} + 1. (9)

Note that the restriction on k simply excludes trivial cases: first, f = k means that all robots are faulty. Second, if k = m(f+1) (or larger), then sending f+1 robots on each of the m rays will guarantee a competitive ratio of 1. It is easy to see that, substituting m = 2 in (9), one gets (1). The proof of the upper bound is deferred to the appendix. The proof of the lower bound uses the following covering relaxation of the original problem.

One-ray cover with returns (ORC) setting: The goal is to cover (a subset of) R≥1.
The robot (or robots) starts at 0 and moves with unit speed along the ray R≥0. One robot may cover a point multiple times, but different coverings are only counted if the robot visited 0 in between.

Fix k ∈ N and λ > 1. Then a q-fold λ-cover of R≥1 in the ORC setting may be seen as a relaxation of the m-ray cover problem. Indeed, in a sense we simply discard the labels of the rays, but still ask the robots to return to 0, imitating the change of rays that happened in the original problem. Thus, any strategy for searching a target on m rays with k robots, f out of which are faulty, with competitive ratio λ, induces a strategy for the q-fold λ-covering of R≥1 with k robots in the ORC setting, where q = (f+1)m. Let C(k, q) be the infimum of all λ for which there exists a strategy for a q-fold λ-covering of R≥1 with k robots in the ORC setting. We prove the following:

C(k, q) ≥ 2·(q^q/((q−k)^{q−k}·k^k))^{1/k} + 1. (10)

Clearly, A(m, k, f) ≥ C(k, m(f+1)), and thus the lower bound in Theorem 6 follows from (10). At the same time, the bound (10) is tight, as follows from the "≤" part of (9). Before proving (10), let us describe yet another covering problem, which is a fractional analogue of the multicovering problem in the ORC setting.

Fractional one-ray retrieval with returns: We are given a finite number of robots of total weight 1, each of which moves with constant speed 1 along the ray R≥0. We are supposed to cover an unknown target at distance x ≥ 1 with several robots of some prescribed total weight η, η ∈ R≥1. The same robot may cover any point any number of times, but different coverings are counted only if the robot visits 0 in between. We are interested in the worst-case competitive ratio C(η) of the best algorithm for the problem.

Remark. Here is a related, but not equivalent, formulation, which is closer to the original m-ray problem.
We are given a finite number of robots, each of which moves with constant speed 1 along the ray R≥0 and is supposed to match an unknown target at distance x ≥ 1 with one of the sample targets in a list of size η, η ∈ R≥1. The total size of the memory (that is, the total portion of the list they may carry) of all robots is fixed and is equal to 1. Robots may change the set of samples they carry only when visiting 0. In the worst case, the target is matched only after being compared with all samples. We are interested in the worst-case competitive ratio C′(η) of the best algorithm for the problem.

The statement we prove is as follows:

C(η) = 2·η^η/(η−1)^{η−1} + 1. (11)

Naturally, the same equality holds for C′(η) as well. We defer the proof of the reduction of (11) to (9) to the appendix.

3.1 Proof of the lower bound in (10)

For technical reasons, we will prove the following slightly stronger statement about covering a finite part of R≥1: for any ε > 0, there exists N, such that if k robots produce a q-fold λ-cover of [1, N] in the ORC setting, then

λ ≥ 2·(q^q/((q−k)^{q−k}·k^k))^{1/k} + 1 − ε. (12)

The main point here is that the needed N is independent of the strategy. Let us denote by µ(q, k) the k-th root on the right-hand side of the displayed inequality above. Note that µ(q, k) = µ(cq, ck) for any c > 0, and thus µ(q, k) < µ(q−1, k−1), provided that q > k > 1. Put ε′ := 2µ(q−1, k−1) − 2µ(q, k). In the rest of the proof we show the validity of (12). The proof follows similar steps as the proof of Theorem 1. We present the proof in a somewhat contracted form, highlighting the changes one has to make in order to prove this theorem. The proof goes by induction on k.¹ For k = 1 we of course will not use any inductive hypothesis. For k ≥ 2, suppose by induction that the statement of (12) is valid for k−1 and q−1. Choose N′ such that (12) holds for k−1 and q−1 with N := N′ and ε := ε′.
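The quantity µ(q, k) and the ORC covering count are both easy to evaluate; the following sketch (ours, with illustrative parameters) checks the scale invariance µ(q, k) = µ(cq, ck) and counts λ-coverings for a single-robot round strategy:

```python
def mu(q, k):
    """The k-th root on the right-hand side of (12): (q^q/((q-k)^(q-k) k^k))^(1/k).
    With q = m(f+1), 2*mu(q, k) + 1 is lambda_0 from (9); m = 2, f = 0, k = 1
    gives the classic cow path ratio 9."""
    return (q ** q / ((q - k) ** (q - k) * k ** k)) ** (1 / k)

def orc_coverage(strategies, lam, x):
    """How many times x >= 1 is lambda-covered in the ORC setting when robot r's
    round-i turning point is strategies[r][i]: round i lambda-covers
    [(t_1 + ... + t_{i-1})/mu, t_i] with mu = (lam - 1)/2."""
    m = (lam - 1) / 2
    count = 0
    for turns in strategies:
        prefix = 0.0
        for t in turns:
            if prefix / m <= x <= t:
                count += 1
            prefix += t
    return count

print(mu(6, 4) - mu(3, 2))    # ~0: scale invariance with c = 1/2
print(2 * mu(2, 1) + 1)       # 9.0: the cow path ratio
# One robot, geometric rounds 1, 3, 9, ..., lam = 9: coverage at sample points
print([orc_coverage([[3 ** j for j in range(12)]], 9, x) for x in (1, 5, 50)])
```

A single robot with geometric rounds covers each point only boundedly many times, which is why q-fold covering genuinely requires the collective analysis that follows.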
In other words, for such a choice of N′, any (q−1)-fold λ′-covering of [1, N′] by k−1 robots in the ORC setting satisfies

λ′ ≥ 2µ(q−1, k−1) + 1 − ε′ = 2µ(q, k) + 1. (13)

Fix ε > 0 and a competitive ratio λ not satisfying (12) and put µ := (λ−1)/2. Fix a collective strategy for k robots to produce a q-fold λ-covering of [1, N] in the ORC setting (note that N is sufficiently large and will be chosen later).

¹We remark that induction is needed only for one part of the proof, which is given in Case 2 below.

Standardising the strategy. The round of a strategy for a robot is the period between two consecutive visits of 0. We can assume that in each round the robot turns exactly once, at a turning point t. Therefore, we can describe a strategy by an infinite vector T = (t_1, t_2, t_3, . . .) of reals in R≥1, t_i being the turning point in round i. Obviously, a point x ∈ R≥1 is λ-covered in round i iff (i) x ≤ t_i and (ii) 2(t_1 + t_2 + · · · + t_{i−1}) + x ≤ λx (⇔ x ≥ (1/µ)(t_1 + t_2 + · · · + t_{i−1}) for µ := (λ−1)/2). We set t''_i := (1/µ)(t_1 + t_2 + · · · + t_{i−1}). If t''_i > t_i, round i does not λ-cover any point, and we may as well skip this round (future rounds will λ-cover even more in this way). Otherwise, we call the round fruitful; the robot λ-covers exactly the interval [t''_i, t_i] in round i. Clearly, we can assume that we are using only strategies with only fruitful rounds. Observe that the sequence (t''_1, t''_2, t''_3, . . .) is monotone increasing. Suppose now we have k such strategies T^{(r)} = (t^{(r)}_1, t^{(r)}_2, t^{(r)}_3, . . .), r = 1, 2, . . . , k, that establish a q-fold λ-covering of R≥1. Let us now assign truncated intervals (t'^{(r)}_i, t^{(r)}_i] ⊆ [t''^{(r)}_i, t^{(r)}_i] ∩ [1, ∞), in such a way that every point in R>1 is covered exactly q times, and such that each sequence (t'^{(r)}_1, t'^{(r)}_2, t'^{(r)}_3, . . .) is monotone increasing (this may actually also result in skipping some of the turning points).

Defining the function.
Next we accumulate all assigned intervals of all k robots in one sequence, sorted by their left endpoints, ties broken arbitrarily. Consider a prefix P of this sequence, long enough so that all robots already have some interval present in P. Clearly, it consists of all rounds of the strategy of robot r up to some i_r, for each r = 1, . . . , k. What part of R≥1 do these intervals cover, and how many times? There is a value a = a(P) ≥ 1 such that (1, a] is covered q times, and then there are points a =: a_q ≤ a_{q−1} ≤ a_{q−2} ≤ . . . ≤ a_1, such that every point in (a_{j+1}, a_j] is covered exactly j times, j = 1, . . . , q−1, and (a_1, ∞) is not yet covered. (Note that some of these intervals may be empty.) Associate with P the multiset A(P) := {a_q, a_{q−1}, . . . , a_1} (which can be seen as a description of the covering situation of P). We need one more notion. Define the load, L^{(r)}(P), of robot r in P as the sum of all turning points occurring in the intervals of r in P, i.e.

L^{(r)}(P) := t^{(r)}_1 + t^{(r)}_2 + · · · + t^{(r)}_{i_r}, for i_r such that (t'^{(r)}_{i_r}, t^{(r)}_{i_r}] is in P and (t'^{(r)}_{i_r+1}, t^{(r)}_{i_r+1}] is not in P.

Let b^{(r)} := t'^{(r)}_{i_r+1} be the beginning of the first interval assigned to robot r which is not in P. Then, clearly, the analogue of (6) is

L^{(r)}(P) ≤ µ·b^{(r)}. (14)

The following function controls the situation:

f(P) := ∏_{r=1}^{k} [ (L^{(r)}(P))^{q−k}·(b^{(r)})^k / ∏_{y∈A(P)} y ]. (15)

Adding one interval. Add one new interval to P, thus forming the covering situation P+. If the new interval is assigned to robot r, then b^{(r)} = a. In P+, the beginning b^{(r)} of the first non-included interval assigned to robot r is replaced by some b′ (equal to t'^{(r)}_{i_r+2}); using (14), the load of r changes to µ*·b′ for some µ* ≤ µ; at the same time, the element a is removed from A and replaced by the right endpoint of the new interval, which is L^{(r)}(P+) − L^{(r)}(P) = µ*·b′ − L^{(r)}(P).
Thus, in f(P) the multiple a^k in the denominator is compensated by the (b^{(r)})^k in the numerator, and we have

f(P+)/f(P) = ((µ*·b′)^{q−k}·b′^k) / ((L^{(r)}(P))^{q−k}·(µ*·b′ − L^{(r)}(P))^k) = µ*^{q−k} / (x^{q−k}·(µ*−x)^k), for x := L^{(r)}(P)/b′, 0 < x < µ*.

Making use of Lemma 5 (with s replaced by q−k), we conclude the following: if the competitive ratio λ does not satisfy (12) with this ε, then there exists δ > 1 such that

f(P+)/f(P) ≥ δ. (16)

The last step of the proof is to conclude that, at the same time, f(P) must stay bounded (and that the number of intervals in the cover, and thus the number of such potential increments, is infinite). This is slightly more difficult in this case, and it is the reason why we use induction. We consider two cases.

Case 1. For any robot r and any two consecutive starting points t'^{(r)}_i, t'^{(r)}_{i+1} ∈ R≥1 of its assigned intervals, we have t'^{(r)}_{i+1}/t'^{(r)}_i ≤ C, where C is an absolute constant, which we will specify later. In this case, looking at (15), we have b^{(r)} ≤ C·t'^{(r)}_{i_r} ≤ Ca, and thus L^{(r)}(P) ≤ µCa. This allows us to bound f(P): f(P) ≤ C^{qk}·µ^{(q−k)k}. Therefore, in a finite number of steps, we will get a contradiction with (16). At the same time, when passing from P to P+, the distance from zero to the first point that is not q-covered increases by a factor of at most C. Thus, eventually, we get a contradiction. We can find the value of N needed in (12) based on these two simple facts.

Case 2. There exists a robot r and two consecutive starting points t'^{(r)}_i, t'^{(r)}_{i+1} ∈ R≥1 of its assigned intervals such that t'^{(r)}_{i+1}/t'^{(r)}_i ≥ C. Consider the interval [µ·t'^{(r)}_i, C·t'^{(r)}_i]. Due to (14), no endpoint of the first i−1 intervals assigned to r could have surpassed µ·t'^{(r)}_i. Thus, the interval displayed above is λ-covered by r at most once. It implies that the other k−1 robots produced a (q−1)-fold λ-covering of this interval in the ORC setting. Rescaling, we may assume that k−1 robots produced a (q−1)-fold λ-covering of the interval [1, C/µ].
Using (13) and choosing C such that C/µ ≥ N′, we conclude that λ ≥ 2µ(q, k) + 1.

Acknowledgements. We thank Dániel Korándi, Alexander Pilz, Miloš Stojaković, and May Szedlák for discussions on and suggestions for the material covered in this paper, and Yoshio Okamoto for suggesting the problem.

References

[1] N. Alon, C. Avin, M. Koucký, G. Kozma, Z. Lotker, and M. R. Tuttle, Many Random Walks Are Faster Than One, Combinatorics, Probability & Computing 20 (2011), N4, 481–502.
[2] S. Alpern, S. Gal, The theory of search games and rendezvous, Springer Science & Business Media (2006), Vol. 55.
[3] Y. Azar, A. Z. Broder, and M. S. Manasse, On-line choice of on-line algorithms, in Proceedings of the 4th Annual ACM-SIAM Symposium on Discrete Algorithms (1993), 432–440.
[4] R. Baeza-Yates, J. Culberson, G. Rawlins, Searching with uncertainty (extended abstract), in: R. Karlsson, A. Lingas (eds.), SWAT 88, Lecture Notes in Computer Science 318, 176–189, Springer, Berlin, Heidelberg.
[5] R. Baeza-Yates, J. Culberson, G. Rawlins, Searching in the plane, Information and Computation 106:2 (1993), 234–252.
[6] A. Beck, On the linear search problem, Israel J. Math. 2 (1964), N4, 221–228.
[7] A. Beck, More on the linear search problem, Israel J. Math. 3 (1965), N2, 61–70.
[8] A. Beck, D.J. Newman, Yet more on the linear search problem, Israel J. Math. 8 (1970), N4, 419–429.
[9] R. Bellman, Minimization Problem, Bull. Amer. Math. Soc. 62 (1956), p. 270.
[10] R. Bellman, An optimal search, SIAM Review 5 (1963), N3, 274.
[11] D.S. Bernstein, L. Finkelstein, and S. Zilberstein, Contract algorithms and robots on rays: unifying two scheduling problems, in IJCAI (2003), 1211–1217.
[12] P. Bose, J.-L. De Carufel, S. Durocher, Revisiting the problem of searching on a line, in Algorithms – ESA 2013, LNCS, Vol. 8125, Springer (2013), 205–216.
[13] J. Czyzowicz, K. Georgiou, E. Kranakis, D. Krizanc, L. Narayanan, J. Opatrny, S. Shende, Search on a line by Byzantine robots, Proc.
of the 27th International Symposium on Algorithms and Computation (ISAAC), 2016, 27:1–27:12.

[14] J. Czyzowitz, E. Kranakis, D. Krizanc, L. Narayanan, J. Opatrny, Search on a line with faulty robots, Proc. PODC'16, 2016, 405–414.

[15] E. D. Demaine, S. P. Fekete, and S. Gal, Online searching with turn cost, Theoretical Computer Science 361 (2006), N2, 342–355.

[16] O. Feinerman, A. Korman, Z. Lotker, and J. S. Sereni, Collaborative search on the plane without communication, in Proceedings of the 2012 ACM Symposium on Principles of Distributed Computing, ACM, 2012, 77–86.

[17] A. Fiat, Y. Rabani, and Y. Ravid, Competitive k-server algorithms, in Proceedings of the 31st Annual IEEE Symposium on the Foundations of Computer Science (1990), 454–463.

[18] W. Franck, An optimal search problem, SIAM Review 7 (1965), N4, 503–512.

[19] J. R. Isbell, An optimal search pattern, Naval Research Logistics Quarterly 4 (1957), 357–359.

[20] M. Y. Kao, Y. Ma, M. Sipser, and Y. Yin, Optimal constructions of hybrid algorithms, Journal of Algorithms 29 (1998), N1, 142–164.

[21] M. Y. Kao, J. H. Reif, S. R. Tate, Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem, Information and Computation 131 (1996), N1, 63–79.

[22] M. M. Polycarpou, Y. Yang, K. M. Passino, A cooperative search framework for distributed agents, Intelligent Control (2001), 1–6.

[23] S. Schuierer, A lower bound for randomized searching on m rays, in Computer Science in Perspective, Lecture Notes in Computer Science (2003), 264–277.

4 Appendix

First, we give a proof of the upper bound in (10). We exhibit a so-called exponential strategy, similar to the many strategies that have appeared for related problems. It is a fairly natural extension of the proofs of the upper bounds for some particular cases of our problem, given in [14] and [11].

Proof of the upper bound in (10).
It suffices, for each $x \in \mathbb{R}_{\ge 1}$, to produce an $f$-fold $\lambda_0$-covering of the points at distance $x$ on each of the $m$ rays. Let us first do the assignment for the robots: to each robot $r$ we assign intervals on different rays, which are supposed to be $\lambda_0$-covered by $r$. Fix $\alpha > 1$, to be optimized later. The assignment is as follows: robot $r$ $\lambda_0$-covers the intervals $(\alpha^{k(i+mj)+m(r-f)},\, \alpha^{k(i+mj)+mr}]$ for each $j = -2, -1, \dots$ on ray $i$, $i = 1, \dots, m$. We start with $j = -2$ to ensure that, before distance 1 on each ray, each of the robots has already made at least one pass on the ray.

Let us first make sure that all points at distance at least 1 from the origin are covered at least $f$ times by this assignment. Fix a ray $i$ and a point on it, written in the form $\alpha^x$, where $x \ge 0$. Then $x$ must fall into $(k(i+mj)+m(r-f),\, k(i+mj)+mr]$ for some $j$ in order to be covered by $r$. This is equivalent to $y \in (kj+r-f,\, kj+r]$, where $y := \lceil \frac{x-ki}{m} \rceil$. Note that $y \ge -k$. Thus, $y$ falls into the interval above precisely when $r \equiv y + s \pmod{k}$ for some $s \in \{0, \dots, f-1\}$, i.e., for $r \in [y,\, y+f) \pmod{k}$. Hence the point $\alpha^x$ is covered by the intervals assigned to the $f$ robots $y, y+1, \dots, y+f-1$ (modulo $k$).

Now let us describe a strategy for the robots to cover these intervals, and calculate the resulting competitive ratio. Each robot starts its tour with ray number 1 and visits the rays in cyclic order. When visiting ray $i$ for the $(j+3)$-th time (the $+3$ appears since $j$ starts from $-2$), robot $r$ turns at the point $\alpha^{k(i+mj)+mr}$, returns to 0, and continues to ray $i+1$. A point at distance $x \ge 1$ from the origin on ray $i$ is covered for the $f$-th (and final) time by robot $r$ if it falls into the segment $(\alpha^{k(i+mj)+m(r-f)},\, \alpha^{k(i+mj)+m(r-f+1)}]$ for some $j \in \mathbb{Z}$. Take $x \in (\alpha^{k(i+mj)+m(r-f)},\, \alpha^{k(i+mj)+m(r-f+1)}]$. Then the time passed until robot $r$ arrives at this point is at most
$$x + 2\sum_{l=-2m}^{i+mj-1} \alpha^{kl+mr} \;<\; x + \frac{2\,\alpha^{k(i+mj)+mr}}{\alpha^k - 1}.$$
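The geometric-sum estimate just displayed can be checked numerically. The sketch below uses hypothetical small parameters ($\alpha$, $k$, $m$, $r$ are illustrative values, not ones from the paper) and verifies the partial sum against the closed-form bound over a range of $i$ and $j$:

```python
# Verify sum_{l=-2m}^{i+mj-1} alpha^(kl+mr) < alpha^(k(i+mj)+mr) / (alpha^k - 1)
alpha, k, m, r = 1.3, 3, 2, 1          # hypothetical parameters, alpha > 1
for i in range(1, m + 1):
    for j in range(-2, 6):
        upper = i + m * j - 1
        partial = sum(alpha ** (k * l + m * r) for l in range(-2 * m, upper + 1))
        bound = alpha ** (k * (i + m * j) + m * r) / (alpha ** k - 1)
        assert partial < bound
```

The inequality is just the tail bound for the geometric series $\sum_{l \le i+mj-1} (\alpha^k)^l$.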
Thus, the competitive ratio of this algorithm is $2\gamma + 1$, where $\gamma$ is the maximum over all valid choices of $x$ of the expression
$$\frac{\alpha^{k(i+mj)+mr}}{x(\alpha^k - 1)}.$$
This expression is clearly decreasing as $x$ increases; therefore,
$$\gamma \le \frac{\alpha^{k(i+mj)+mr}}{\alpha^{k(i+mj)+m(r-f)}(\alpha^k - 1)} = \frac{\alpha^{mf}}{\alpha^k - 1}.$$
Note that the last bound is independent of the choice of the ray and of $x$. Applying Lemmas 4 and 5 with $\mu^* = 1$, $x = \alpha^{-k}$, $s = \frac{mf-k}{k}$, we get that the displayed expression is minimized for $\alpha = \sqrt[k]{\frac{mf}{mf-k}}$, for which it equals $(\lambda_0 - 1)/2$. We conclude that the competitive ratio of the algorithm coincides with the right-hand side of (9).

Next, we give the reduction of (11) to (10).

Proof of (11). Let us first prove the "≤" direction: that the right-hand side is an upper bound for the left-hand side. Fix a sequence of rational numbers $\frac{q_i}{k_i}$, $i = 1, 2, \dots$, such that $\frac{q_i}{k_i} \ge \eta$ and $\lim_{i\to\infty} \frac{q_i}{k_i} = \eta$. Consider the strategy that gives the upper bound from Theorem 6 with $f := 1$, $k := k_i$, $m := q_i$. Then the competitive ratio is
$$2\sqrt[k_i]{\frac{q_i^{q_i}}{(q_i - k_i)^{q_i - k_i}\, k_i^{k_i}}} + 1 \;=\; 2\,\frac{(q_i/k_i)^{q_i/k_i}}{(q_i/k_i - 1)^{q_i/k_i - 1}} + 1 \;\to\; 2\,\frac{\eta^\eta}{(\eta-1)^{\eta-1}} + 1.$$
On the other hand, the strategy above can clearly be used to produce a fractional one-ray $\frac{q_i}{k_i}$-covering (and thus an $\eta$-covering): just split the weight between the $k_i$ robots in equal parts, and let them perform their strategy on one ray, ignoring the labels of the rays. Passing to the limit, the "≤" part of (11) is proved.

The proof of the "≥" direction is similar. Fix $\epsilon > 0$ and consider a strategy $S$ in the fractional problem that achieves competitive ratio $C(\eta) + \epsilon$. Assume that the weights of the robots used in the strategy are $w_1, \dots, w_n$, $\sum_{i=1}^n w_i = 1$. Fix $\delta > 0$, whose choice is specified later, and integers $q, k_1, \dots, k_n$ such that $\frac{w_i}{\eta} \le \frac{k_i}{q} \le \frac{w_i}{\eta} + \delta$. Put $k := \sum_{i=1}^n k_i$. Thus, clearly, $k/q \ge 1/\eta$. The requirement on $\delta$ is that $k/q \le \frac{1}{\eta - \epsilon}$.
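The rounding step above (choosing $\delta$ and the integers $q, k_1, \dots, k_n$) can be made concrete: taking $k_i := \lceil w_i q / \eta \rceil$ for a large common denominator $q$ gives $\delta = 1/q$. The sketch below uses hypothetical weights and $\eta$, with exact rational arithmetic to avoid float artifacts; the choice $k_i = \lceil w_i q/\eta \rceil$ is one possible instantiation, not taken from the paper:

```python
from fractions import Fraction
import math

w = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]   # hypothetical weights, sum 1
eta, eps = Fraction(5, 2), Fraction(1, 10)              # hypothetical eta > 1, eps > 0
q = 10_000                                              # large denominator, delta = 1/q
ks = [math.ceil(wi * q / eta) for wi in w]

# w_i/eta <= k_i/q <= w_i/eta + delta
assert all(wi / eta <= Fraction(ki, q) <= wi / eta + Fraction(1, q)
           for wi, ki in zip(w, ks))

# and the totals satisfy 1/eta <= k/q <= 1/(eta - eps)
k = sum(ks)
assert 1 / eta <= Fraction(k, q) <= 1 / (eta - eps)
```

Since each $k_i/q$ overshoots $w_i/\eta$ by less than $1/q$, the total $k/q$ exceeds $1/\eta$ by less than $n/q$, so any $q > n\eta(\eta-\epsilon)/\epsilon$, say, suffices.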
Consider the strategy $S'$ for the $q$-fold covering with $k$ robots in the ORC setting, obtained as follows: the first $k_1$ robots repeat the actions of the first robot from strategy $S$, the next $k_2$ robots repeat the actions of the second robot from $S$, etc. Clearly, this strategy produces a $q$-fold covering and has competitive ratio $C(\eta) + \epsilon$. On the other hand, by the lower bound from (10), it has competitive ratio at least
$$2\,\frac{(q/k)^{q/k}}{(q/k - 1)^{q/k - 1}} + 1 \;\ge\; 2\,\frac{(\eta-\epsilon)^{\eta-\epsilon}}{(\eta-\epsilon-1)^{\eta-\epsilon-1}} + 1,$$
where the inequality holds since $k/q \le \frac{1}{\eta-\epsilon}$ gives $q/k \ge \eta - \epsilon$, and the function $\rho \mapsto \rho^\rho/(\rho-1)^{\rho-1}$ is increasing for $\rho > 1$. This implies that for any $\epsilon > 0$ we have
$$C(\eta) \;\ge\; 2\,\frac{(\eta-\epsilon)^{\eta-\epsilon}}{(\eta-\epsilon-1)^{\eta-\epsilon-1}} + 1 - \epsilon.$$
Passing to the limit $\epsilon \to 0$ gives the result.
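The monotonicity of $\rho \mapsto \rho^\rho/(\rho-1)^{\rho-1}$ for $\rho > 1$, on which the final chain of inequalities relies (its logarithmic derivative is $\log\frac{\rho}{\rho-1} > 0$), can be confirmed with a quick numeric sketch:

```python
def comp(rho):
    # the bound 2 * rho^rho / (rho - 1)^(rho - 1) + 1
    return 2 * rho ** rho / (rho - 1) ** (rho - 1) + 1

rhos = [1.01 + t / 100 for t in range(500)]   # sample points in (1, 6)
assert all(comp(a) < comp(b) for a, b in zip(rhos, rhos[1:]))
```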