
Add Compatibility to Data Acquisition based on multiple Micro Air Vehicles #54

Open
tribbloid opened this issue Jul 10, 2016 · 5 comments

tribbloid commented Jul 10, 2016

OR: Drone Swarm Integration

Demo1: 8 simulated APM quadcopters, simple MapReduce

The goal of this proposal is to enable end-to-end interoperability between a fleet of micro air vehicles (MAVs, also known as drones, aerial robots, or sUAS: small unmanned aircraft systems) for realtime data acquisition and a COTS (commercial off-the-shelf) computer cluster for high-performance data processing/fusion and service integration.

The speed and low cost of MAVs make them suitable for quick on-demand provisioning and for surveying large, unknown spaces that would otherwise be dangerous. Meanwhile, most task parallelization and failover paradigms in Apache Spark & spookystuff can be applied to multi-robot coordination with minimal change.

Use Case 1 (open area search): Given the contour of a large area, a collection of waypoints that ensures line of sight (LOS) to all objects in the area can be derived and partitioned into small pieces with minor overlap. These pieces are then distributed to multiple computer nodes, each directing a small number of MAVs to fly over their respective waypoints in parallel. The video segments captured by their cameras can simultaneously be posted to web services with strong computing power and intelligence (e.g. a strong AI like Google Brain, or a crowdsourcing service like Amazon Mechanical Turk), allowing objects of interest (e.g. a missing person) to be discovered with minimal latency.

Use Case 2 (3D scanning of building interiors): An interconnected space with dense obstacles (e.g. a building compartmentalized into many rooms) can be abstracted into an undirected graph, with each node denoting a position accessible by a MAV, and each edge denoting a direction between two nodes along which the MAV detects no collision. Thus, given several starting points (a.k.a. seeds), a distributed graph discovery algorithm only slightly different from web crawling can be used to exhaustively scan a large & complex building, yielding enough visual information to reconstruct its interior.


tribbloid commented Jul 10, 2016

Overview

As illustrated by the following diagram, the proposed architecture can be best described as a distributed ground control station (GCS) backend:

[diagram: overview]

When initialized, a thread pool is created on every node. When the first job is scheduled, a GCS thread is provisioned on-demand and allocated to each Spark executor; the thread first initializes a 2-way telemetry connection with a MAV, then enters a closed loop for its flight control and the collection of its data feed.

The GCS threads are never terminated until the job is finished, their MAV is lost, or their MAV has returned to land & recharge; each thread is simply reclaimed by the pool when it finishes its partition (at which point its MAV is instructed to hold position). The state of each MAV is synchronized, can be queried in realtime, and is used to optimize the parallelization strategy when MAVs are reassigned to new tasks.
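A minimal sketch of this lifecycle (the `Telemetry` and `Waypoint` types below are hypothetical placeholders, not the actual telemetry API, which is discussed next):

```scala
import java.util.concurrent.{ExecutorService, Executors}

// Hypothetical stand-ins for the real telemetry layer:
trait Telemetry {
  def connect(): Unit                             // establish the 2-way telemetry link
  def flyToAndCapture(wp: Waypoint): Array[Byte]  // fly to a waypoint, return its data feed
  def holdPosition(): Unit
}
case class Waypoint(lat: Double, lon: Double, alt: Double)

// One pool per node; GCS threads are provisioned on-demand and reclaimed,
// never torn down until the job finishes or the MAV is lost/landed.
val gcsPool: ExecutorService = Executors.newCachedThreadPool()

def runPartition(mav: Telemetry, partition: Iterator[Waypoint]): Iterator[Array[Byte]] = {
  mav.connect()
  val feed = partition.map(mav.flyToAndCapture).toVector // closed loop over the partition
  mav.holdPosition() // hold position; the thread is then reclaimed by the pool
  feed.iterator
}
```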

The first prototype should use MAVLink for basic telemetry due to its low integration cost and proven ecosystem, which includes some very popular GCS projects, e.g.:

3DR Tower/DroidPlanner (https://github.com/DroidPlanner/Tower)
AndroPilot (https://github.com/geeksville/arduleader/tree/master/andropilot)
Project Condor (https://github.com/project-condor/mavigator)
MAVProxy (https://github.com/Dronecode/MAVProxy)
QGroundControl (https://github.com/mavlink/qgroundcontrol)

In addition, it is hardware- & telemetry-agnostic, and supports both ArduPilot and PX4, the two most widely used open-source autopilot firmware projects for MAVs.


tribbloid commented Jul 10, 2016

Algorithms

We assume a graph-based model for the MAVs' environment, as popularized by BrassTRO2011: the search space is segmented into many interlinked & uniquely identifiable vertices/waypoints. This allows the proposed feature to fit into spookystuff's execution engine, which is designed mainly for web and file scraping, and allows the following old concepts to be generalized into the proposal's context:

  1. URI (Unique Resource Identifier): The URI of a vertex/waypoint is a data structure that uniquely identifies it: any data acquired from 2 waypoints with identical URIs are considered identical, thus redundant. The simplest URI in an outdoor environment is the latitude/longitude/elevation (LLE) triplet (see the sketch after this list); a slightly more complex URI in an indoor environment, where GPS lock is infeasible, can be a visual odometry estimate or SLAM (simultaneous localization and mapping) features.
    • In practice, the mapping from URI to waypoint can be surjective: many URIs may lead to the same waypoint, and their relationship may only become visible later (e.g. when the SLAM feature has a big enough point cloud to ensure overlapping objects). This should not affect the end result.
  2. link/edge: A link of a waypoint is any direction from that waypoint along which the MAV can potentially acquire new LOS without colliding with the environment. This is mainly used for exploring unknown environments or refining search/scan results.
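A minimal sketch of the simplest case above, the LLE triplet (the class name and tolerance are assumptions for illustration, not the existing spookystuff API):

```scala
// Hypothetical LLE-based URI: two waypoints whose coordinates agree within
// a tolerance are treated as identical, so data acquired from them is redundant.
case class LLE(lat: Double, lon: Double, elevation: Double) {
  def ~=(that: LLE, tolerance: Double = 1e-6): Boolean =
    math.abs(this.lat - that.lat) <= tolerance &&
      math.abs(this.lon - that.lon) <= tolerance &&
      math.abs(this.elevation - that.elevation) <= tolerance
}
```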
Task Parallelization Algorithm

... is used to break a task into many small partitions to facilitate parallel execution, iff the scope of the task is completely known beforehand; this obviously excludes unknown-environment exploration, which will be elaborated in the next section. E.g. the following task of scanning a farm is broken into 4 pieces to be executed by multiple MAVs in parallel, with no coordination overhead:

[diagram: genpartitioner]

Unlike map-reduce, efficient task parallelization for collaborating robots can be tricky: it has to factor in each MAV's location and speed. Partitions assigned to the same node should be close to each other and close to where that node's MAVs were left to hold position when finishing their previous task, so that the time wasted on relocating to new waypoints is minimized.

GenPartitioner (short for Generalized Partitioner) is the main interface into which exact task parallelization implementations can be plugged. It generalizes the Spark Partitioner (org.apache.spark.Partitioner) in the sense that it can directly read & manipulate the RDD (resilient distributed dataset, the big dataset abstraction in Apache Spark) being shuffled. This allows drone controllers to fine-tune mission planning, including:

  1. Doing a pre-scan of the tasks and of the nodes that execute them to "see the big picture", so task allocation can be more efficient and localized; e.g. it can collect all MAVs' positions within a cluster if they were left to hold position upon completion of a previous task/job.
  2. Modifying the URIs of waypoints to avoid midair collisions or increase scanning overlap, or inserting new waypoints to circumvent geofences.

Each GenPartitioner only has to implement 1 function, groupByKey: RDD[(A,B)] => RDD[(A, Iterable[B])], which in the task parallelization case replaces RDD[(URI,Task)].groupByKey(org.apache.spark.Partitioner), the conventional way of breaking RDD[(URI,Task)] into partitions. Its exact implementation can be heavily customized to enable all of the above optimizations.
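In Scala terms, the contract might be sketched as follows (a paraphrase of the signature above, not the exact spookystuff source):

```scala
import org.apache.spark.Partitioner
import org.apache.spark.rdd.RDD
import scala.reflect.ClassTag

// One-function contract: full read & manipulation access to the shuffled RDD.
trait GenPartitioner {
  def groupByKey[A: ClassTag, B: ClassTag](rdd: RDD[(A, B)]): RDD[(A, Iterable[B])]
}

// The conventional behaviour it replaces: delegate to Spark's own shuffle.
class PlainSparkGenPartitioner(p: Partitioner) extends GenPartitioner {
  override def groupByKey[A: ClassTag, B: ClassTag](rdd: RDD[(A, B)]): RDD[(A, Iterable[B])] =
    rdd.groupByKey(p)
}
```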

GenPartitioner optionally takes an extra parameter called beaconRDD: an explicitly partitioned RDD with presumably empty content, which can be used to enforce colocation of partitions across multiple RDDs (through cogrouping; this is a makeshift circumvention and may be removed once the Spark Partitioner itself starts supporting direct control over the partition shipping process).
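The cogrouping trick might look like this sketch (`colocate` is a hypothetical helper; keys are the URIs described earlier):

```scala
import org.apache.spark.rdd.RDD
import scala.reflect.ClassTag

// beaconRDD: explicitly partitioned, (presumably) empty content. Cogrouping a
// payload RDD against it under the beacon's partitioner forces the payload's
// partitions to align with the beacon's, so every RDD cogrouped with the same
// beacon stays colocated across stages.
def colocate[K: ClassTag, B: ClassTag](
    payload: RDD[(K, B)],
    beaconRDD: RDD[(K, Unit)] // assumed to carry an explicit partitioner
): RDD[(K, Iterable[B])] =
  payload
    .cogroup(beaconRDD, beaconRDD.partitioner.get)
    .mapValues { case (values, _) => values }
```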

The current design requires 2 components to make decisions on which drone runs which subtask:

  1. GenPartitioner queries all Link.lastLocation and decides the partitionID for each datum based on a joint optimization of proximity & workload.

    • (Globally efficient, but quickly becomes obsolete due to straggler(s) in tasks; also not optimized for clusterwise failover.)
  2. Link.getOrCreate recommissions a Link from all idle Links, closest first.

There is no point in merging their functionalities because they serve different purposes:

GenPartitioner can do this in the following order:

  • CAUTION: after partitioning, things are no longer under control; drones may be lost, and workers may be lost or added. But the stage has to stick to the same plan.
  1. Query the last locations of all drones w.r.t. their workers, and aggregate them to the driver.

  2. Create a special 3D-range based partitioner that remembers all these locations and gives them serial partition IDs, assigning any point in the entire 3D location space to the same partition as a location based on the following 2 criteria (see the sketch after this list):

    • its distance to the drone's location
    • its distances to the locations of other drones commanded by the same worker
  3. Deploy a BeaconRDD that uses the above as its partitioner, uses drone locations as its index, uses the associated worker hostnames as primary preferred locations, and uses the hostnames of the workers commanding the closest drone fleets as backup preferred locations.

    • Sorry, newly joined worker(s): even if your drone fleet is closer and less busy than existing workers', there is no chance any part of the BeaconRDD will be transferred to you. You'll have to wait for the next stage.
  4. This BeaconRDD is a singleton! It's shared globally by all ExecutionPlans & SpookyContexts.
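A sketch of the partitioner in step 2, assuming drone last-locations have already been aggregated to the driver (only the first criterion, nearest-drone by squared distance, is shown; weighting in same-worker fleet distances is left out):

```scala
import org.apache.spark.Partitioner

// Hypothetical 3D-range partitioner: partition IDs are the serial indices of
// drone last-locations; any 3D point is routed to the nearest drone's partition.
class NearestDronePartitioner(droneLocations: Array[(Double, Double, Double)])
    extends Partitioner {

  override def numPartitions: Int = droneLocations.length

  override def getPartition(key: Any): Int = key match {
    case (x: Double, y: Double, z: Double) =>
      droneLocations.indices.minBy { i =>
        val (dx, dy, dz) =
          (droneLocations(i)._1 - x, droneLocations(i)._2 - y, droneLocations(i)._3 - z)
        dx * dx + dy * dy + dz * dz // squared distance to drone i's last location
      }
    case _ => 0 // non-spatial keys all land in partition 0
  }
}
```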

GenPartitioner represents global, less responsive optimization. Link.getOrCreate represents local, realtime, and simple optimization; it is also the only necessary optimization when not running on a cluster.

In its simplest form, it always recommissions the nearest idle link for a task that has an explicit starting location. More complex optimizations may require estimating the value of an idle link to other incoming tasks, and predicting whether a busy link is getting near and about to be decommissioned.

The practical plan is to implement Link.getOrCreate first, then implement GenPartitioner and optimize both later.
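A minimal sketch of the closest-first recommissioning (the `Link` fields and registry here are assumed for illustration; the real Link class would also track telemetry state):

```scala
import scala.collection.mutable

// Hypothetical link registry implementing "nearest idle link, closest first".
case class Location(x: Double, y: Double, z: Double) {
  def distanceTo(that: Location): Double =
    math.sqrt(
      math.pow(x - that.x, 2) + math.pow(y - that.y, 2) + math.pow(z - that.z, 2))
}

class Link(var lastLocation: Location, var busy: Boolean = false)

object Link {
  private val registry = mutable.ArrayBuffer.empty[Link]

  // Recommission the idle link nearest to the task's explicit starting location,
  // or create a fresh one if every link is busy.
  def getOrCreate(taskStart: Location): Link = synchronized {
    val idle = registry.filterNot(_.busy)
    val link =
      if (idle.nonEmpty) idle.minBy(_.lastLocation.distanceTo(taskStart))
      else { val created = new Link(taskStart); registry += created; created }
    link.busy = true
    link
  }
}
```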

GenPartitioner currently has 3 implementations, and none of them is optimized for collaborating robots: the first eliminates shuffling cost altogether by performing an in-partition groupByKey; the second is the default Spark implementation (which takes a Spark Partitioner as a parameter); the third builds an empty beaconRDD and reuses it in all downstream tasks.

A GenPartitioner specifically optimized for collaborating robots is the most important component and should be implemented and tested with high priority.


tribbloid commented Jul 10, 2016

Exploration Algorithm

... is used to actively explore/search an unknown environment in parallel (e.g. Use Case 2). Depth-first search (DFS) and its multi-robot extension (MR-DFS, see BrassTRO2011) provide a good reference point for the most common objective: traversing all vertices in the shortest time. The actual optimal algorithm may vary largely depending on objectives and environment; e.g. 2 robots with the objective of meeting each other may use algorithms closer to A* or D* search.

Due to this variety, I propose the following ExploreAlgorithm interface, into which graph exploration algorithms for both web crawling and collaborating robots can be plugged. Before describing its usage, let's first look at what all these algorithms have in common:

  • Properties whose states can change are attached to each vertex: they can vary from a simple flag that denotes whether the vertex has been visited (DFS), to the distance to the starting vertex (Dijkstra), or, in more complex cases, to visual data and derived SLAM models that are repeatedly refined with each scan. These properties may be sharded (locally known) and not immediately available to all robots of the fleet.
  • The exact URIs of most vertices are hidden at the beginning (thus 'unknown environment') and can only be revealed when one of a vertex's neighbours is visited, at which point the vertex is marked 'open'; after it has been visited once it is marked 'closed' or 'visited'. Each robot/executor always maintains a set of open & closed vertices, and these sets may also be sharded. A robot can choose any revealed vertex to visit next, regardless of whether it is open or visited.

ExploreAlgorithm is the main interface that generalizes these algorithms; the above traits are enclosed within the following 4 methods (see the sketch after this list):

  • openReducer: decides how to merge the properties of 2 open vertices that have identical URIs. Invoked when a new open vertex becomes visible.
  • visitedReducer: decides how to merge the properties of 2 visited vertices that have identical URIs. Invoked when the robot has finished visiting a vertex; e.g. it can be used to collect visual data from a vertex, fuse it with old data, and use the result to re-estimate the SLAM model.
    • In addition, the open & visited reducers are used in the periodic load balancing stage, where robots that have finished their tasks are reassigned to new branches of the graph. During load balancing, all known vertices are shuffled across the cluster (using the GenPartitioner described in the previous section), and 2 vertices on different nodes may turn out to have identical URIs. This is a perfect opportunity for the reducers to be invoked again, to merge properties that are scattered across the cluster. It should be noted that both reducers are assumed to be associative, so the final merged properties are independent of the GenPartitioner being chosen.
  • ordering: decides which known vertex (either open or visited) is processed next. Invoked when the robot is planning to visit a new vertex, immediately after visitedReducer.
  • eliminator: decides how to process the vertex chosen by the ordering, given that a vertex with an identical URI already exists in the visited set. The vertex can either be visited (by the robot controlled by the same thread/executor) or simply be eliminated as redundant. Invocation of the eliminator is delayed until the vertex is chosen by the ordering, at which point both reducers may already have been applied to it many times.
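Put together, the four methods translate into a trait along these lines (a paraphrase with a hypothetical `Vertex` type, not the exact spookystuff interface):

```scala
// Hypothetical vertex: a URI plus the mutable properties discussed above
// (visited flag, distance to start, SLAM point cloud, ...).
case class Vertex(uri: String, properties: Map[String, Any])

trait ExploreAlgorithm {
  // merge properties of 2 OPEN vertices with identical URIs;
  // invoked when a new open vertex becomes visible (and during load balancing)
  def openReducer(v1: Vertex, v2: Vertex): Vertex

  // merge properties of 2 VISITED vertices with identical URIs;
  // invoked after each visit (and during load balancing)
  def visitedReducer(v1: Vertex, v2: Vertex): Vertex

  // decide which known vertex (open or visited) is processed next
  def ordering: Ordering[Vertex]

  // given the chosen vertex and its already-visited twin with the same URI,
  // either keep it for another visit (Some) or eliminate it as redundant (None)
  def eliminator(chosen: Vertex, visited: Vertex): Option[Vertex]
}
```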

The following diagram shows how these methods are invoked in Use Case 2.

[diagram: exploreimpl]

Currently the only working implementation is breadth-first search, which is only useful in web crawling. However, depth-first search shouldn't be too hard to implement given a simple enough URI (e.g. the LLE triplet).


tribbloid commented Jul 13, 2016

Implementation

The first implementation should use the MAVLink common library and Python 2.x embedded in a Spark Python RDD:

  • MAVLink is 62.3% Python, and many derivative projects & libraries are also based on Python (e.g. dronekit-python, MAVProxy). Python has a proven ecosystem with strong support for Spark, machine learning, and SITL-based integration testing.
  • C++ is ... well, C++ is not.
  • Most Java-based projects are vendor-locked to Android, and most Java-based libraries only implement the serialization component.

The only limitation of a MAVLink-based implementation is that most supported MAV types still can't fly indoors; only prototypes that leverage optical flow cameras & LIDARs are available. The most promising solution so far is to integrate single-camera visual odometry (e.g. SVO: https://github.com/uzh-rpg/rpg_svo), a process that has already started within the community.

I will add/switch to an extra abstraction layer of telemetric control to handle more MAV types (DJI/Lockheed Martin maybe? This library may be useful: https://www.ugcs.com/en/page/download) if it's deemed necessary.

@tribbloid changed the title from "Add compatibility with micro aerial vehicle based data acquisition" to "Add Compatibility to practical Data Acquisition based on Micro Air Vehicles" on Jul 13, 2016
@tribbloid changed the title from "Add Compatibility to practical Data Acquisition based on Micro Air Vehicles" to "Add Compatibility to Data Acquisition based on multiple Micro Air Vehicles" on Jul 14, 2016
@tribbloid tribbloid modified the milestones: 5.0, 0.5.0 Jul 31, 2016

tribbloid commented Aug 27, 2017

Path Planning submodules

1. route optimization

2. collision avoidance

2a. traffic control

Use distributed GD/SGD with minibatches to randomly sample quad-tuples of points $A_1, B_1, A_2, B_2$, each denoting a column vector (the endpoints of the two line segments $A_1 B_1$ and $A_2 B_2$).

let:

$$ \begin{aligned} M &= A_1 - A_2 \\ C_1 &= B_1 - A_1 \\ C_2 &= B_2 - A_2 \end{aligned} $$

and let $G$ be the 3x3 (antisymmetric) matrix:

$$ G = C_2 C_1^T - C_1 C_2^T $$

To find the distance between line segments $A_1 B_1$ and $A_2 B_2$, we minimize the following distance:

$$ \begin{aligned} D &= || M + t_1 C_1 - t_2 C_2 ||^2 \\ \dot{t}_1, \dot{t}_2 &= \arg \min_{t_1, t_2} D \end{aligned} $$

subject to $t_1, t_2 \in [0, 1]$

The solution of which is:

$$ \begin{cases} \dot{t}_1 &= Clip(\frac{M^T G C_2}{C_2^T G C_1}, [0,1]) \\ \dot{t}_2 &= Clip(\frac{M^T G C_1}{C_2^T G C_1}, [0,1]) \end{cases} $$

(the shared denominator $C_2^T G C_1 = ||C_1||^2 ||C_2||^2 - (C_1^T C_2)^2$ vanishes iff the two segments are parallel)

Substituting back, and writing $P = M + \dot{t}_1 C_1 - \dot{t}_2 C_2$ for the displacement between the two closest points, $D$ reduces to:

$$ D = \langle P, P \rangle = \langle P, \ (1-\dot{t}_1) A_1 - (1-\dot{t}_2) A_2 + \dot{t}_1 B_1 - \dot{t}_2 B_2 \rangle $$

Since the clipped $\dot{t}_1, \dot{t}_2$ are locally constant at the optimum (envelope theorem), we finally get $D$'s gradients w.r.t. $A_1, A_2, B_1, B_2$ (up to a constant factor of 2):

$$ \begin{cases} \nabla_{A_1} D &= (1-\dot{t}_1) P \\ \nabla_{A_2} D &= (\dot{t}_2 - 1) P \\ \nabla_{B_1} D &= \dot{t}_1 P \\ \nabla_{B_2} D &= -\dot{t}_2 P \end{cases} $$

Use $- \nabla_{(A_1, A_2, B_1, B_2)} D$ to update all the points until a (local) minimum is reached.
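A numeric sketch of the closed-form solution and the gradients above (plain Scala; all names are illustrative). For two perpendicular segments crossing at their midpoints it yields $\dot{t}_1 = \dot{t}_2 = 0.5$ and $D = 0$:

```scala
type Vec3 = (Double, Double, Double)

def sub(a: Vec3, b: Vec3): Vec3 = (a._1 - b._1, a._2 - b._2, a._3 - b._3)
def add(a: Vec3, b: Vec3): Vec3 = (a._1 + b._1, a._2 + b._2, a._3 + b._3)
def dot(a: Vec3, b: Vec3): Double = a._1 * b._1 + a._2 * b._2 + a._3 * b._3
def scale(s: Double, v: Vec3): Vec3 = (s * v._1, s * v._2, s * v._3)
def clip01(t: Double): Double = math.max(0.0, math.min(1.0, t))

// Closed-form \dot{t}_1, \dot{t}_2 with the bilinear forms of G expanded:
//   M^T G C_2   = (C_1.C_2)(C_2.M) - (C_2.C_2)(C_1.M)
//   M^T G C_1   = (C_1.C_1)(C_2.M) - (C_1.C_2)(C_1.M)
//   C_2^T G C_1 = (C_1.C_1)(C_2.C_2) - (C_1.C_2)^2   (0 iff the segments are parallel)
def closestParams(a1: Vec3, b1: Vec3, a2: Vec3, b2: Vec3): (Double, Double) = {
  val (m, c1, c2) = (sub(a1, a2), sub(b1, a1), sub(b2, a2))
  val (a, b, e) = (dot(c1, c1), dot(c1, c2), dot(c2, c2))
  val (c, f) = (dot(c1, m), dot(c2, m))
  val denom = a * e - b * b
  if (denom == 0.0) (clip01(-c / a), 0.0) // parallel case: pin t2, project t1
  else (clip01((b * f - c * e) / denom), clip01((a * f - b * c) / denom))
}

// P and the 4 endpoint gradients, mirroring the \nabla cases above.
def gradients(a1: Vec3, b1: Vec3, a2: Vec3, b2: Vec3): (Vec3, Vec3, Vec3, Vec3) = {
  val (t1, t2) = closestParams(a1, b1, a2, b2)
  val p = add(sub(a1, a2), sub(scale(t1, sub(b1, a1)), scale(t2, sub(b2, a2))))
  (scale(1 - t1, p), scale(t2 - 1, p), scale(t1, p), scale(-t2, p))
}
```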

2b. terrain adaptation

@tribbloid tribbloid self-assigned this Jan 15, 2018