Autonomous Driving: Modeling and Learning Behaviors

Motion planning for dynamic environments is clearly one of the main problems that must be solved for effective autonomous navigation. As shown by Reif and Sharir, the general problem is NP-hard, which explains the continued effort to find algorithms that cope with that complexity.

A much more overlooked but equally critical aspect of the problem is motion prediction. Motion planning algorithms assume prior knowledge about how every mobile object in the environment will move, an assumption which does not hold in a vast number of situations and environments. The alternative is to predict how objects will move and to use that prediction as the input of the motion planning algorithm.


Mastering CMake — Key Concepts

Key Concepts

Main Structures

This chapter provides an introduction to CMake's key concepts. As you start working with CMake, you will run into a variety of concepts such as targets, generators, and commands. In CMake, these concepts are implemented as C++ classes and are referenced in many of CMake's commands. Understanding these concepts will provide you with the working knowledge you need to create effective CMakeLists files.

Before going into detail about CMake's classes, it is worth understanding their basic relationships. At the lowest level are source files; these correspond to typical C or C++ source code files. Source files are combined into targets. A target is typically an executable or library. A directory represents a directory in the source tree and typically has a CMakeLists file and one or more targets associated with it. Every directory has a local generator that is responsible for generating the Makefiles or project files for the directory. All of the local generators share a common global generator that oversees the build process. Finally, the global generator is created and driven by the cmake class itself.

Figure 1 shows the basic structure of CMake. We will now consider CMake's concepts in a bit more detail. CMake's execution begins by creating an instance of the cmake class and passing command line arguments to it. This class manages the overall configuration process and holds information that is global to the build process, such as the cache values. One of the first things the cmake class does is to create the correct global generator based on the user's selection of which generator to use (such as Visual Studio 10, Borland Makefiles, or UNIX Makefiles). At this point, the cmake class passes control to the global generator it created by invoking the configure and generate methods.


The global generator is responsible for managing the configuration and generation of all the Makefiles (or project files) for a project. In practice, most of the work is actually done by local generators that are created by the global generator. One local generator is created for each directory of the project that is processed. So while a project will have only one global generator, it may have many local generators. For example, under Visual Studio 7, the global generator creates a solution file for the entire project while the local generators create a project file for each target in their directory.


Autonomous Driving: Context and the State-of-the-Art

In this section the state of the art in Intelligent Vehicles is presented from a vehicle navigation perspective, as modern vehicles achieve autonomous navigation capabilities. The section is structured as follows:

  1. Motivation: The motivations for this ongoing transformation of modern vehicles are presented in terms of usage, safety, and external factors such as fossil-fuel constraints and pollution.
  2. Vehicle navigation functions: The state-of-the-art review is formulated in terms of vehicle navigation functions, focusing the section on the machine intelligence and decision-making processes that are being developed and introduced to transform modern vehicles into connected platforms with autonomous navigation capabilities. It addresses the issues of autonomy, driver needs, and communications. That is, it formulates the vehicle's onboard intelligence as a navigation problem and thus defines the functional needs for vehicles to demonstrate autonomous navigation capabilities.
  3. Related vehicle techniques: Current developments have been classified under three different perspectives:

    1. Driver Centric addresses systems that seek to increase the situational awareness of drivers by providing different types of driving assistance systems that include the driver in the decision-making process.
    2. Network Centric addresses the use of wireless networks to share information among vehicles and with the infrastructure, creating awareness for drivers and machines beyond what standalone vehicle systems could observe.
    3. Vehicle Centric addresses systems that seek to convert vehicles into fully autonomous vehicles, with the driver outside the control loop.

    These different perspectives will be defined, presenting current developments in academia and industry.

  4. Future developments: A perspective is provided on future developments and on how these technologies could be adopted, taking into account cost, legal, and societal constraints.


Creational Patterns

Creational design patterns abstract the instantiation process. They help make a system independent of how its objects are created, composed, and represented. A class creational pattern uses inheritance to vary the class that’s instantiated, whereas an object creational pattern will delegate instantiation to another object.
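The distinction between the two kinds can be sketched in a few lines of Python. This is a minimal illustration, not code from the book; the Document and Application names are hypothetical:

```python
class Document:
    def kind(self):
        return "generic"

class PdfDocument(Document):
    def kind(self):
        return "pdf"

# Class creational: inheritance varies the class that's instantiated.
class Application:
    def create_document(self):        # a "factory method"
        return Document()

class PdfApplication(Application):
    def create_document(self):        # the subclass changes what gets created
        return PdfDocument()

# Object creational: instantiation is delegated to another object.
class DocumentFactory:
    def __init__(self, doc_class):
        self.doc_class = doc_class    # the factory object holds the choice

    def create(self):
        return self.doc_class()

print(PdfApplication().create_document().kind())     # 'pdf'
print(DocumentFactory(PdfDocument).create().kind())  # 'pdf'
```

Note that the object creational variant can change what it produces at run-time simply by handing the factory a different class, whereas the class creational variant fixes the choice at subclass-definition time.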

Creational patterns become important as systems evolve to depend more on object composition than class inheritance. As that happens, emphasis shifts away from hard-coding a fixed set of behaviors toward defining a smaller set of fundamental behaviors that can be composed into any number of more complex ones. Thus creating objects with particular behaviors requires more than simply instantiating a class.

There are two recurring themes in these patterns. First, they all encapsulate knowledge about which concrete classes the system uses. Second, they hide how instances of these classes are created and put together. All the system at large knows about the objects is their interfaces as defined by abstract classes. Consequently, the creational patterns give you a lot of flexibility in what gets created, who creates it, how it gets created, and when. They let you configure a system with “product” objects that vary widely in structure and functionality. Configuration can be static (that is, specified at compile-time) or dynamic (at run-time).

Sometimes creational patterns are competitors. For example, there are cases when either Prototype (117) or Abstract Factory (87) could be used profitably. At other times they are complementary: Builder (97) can use one of the other patterns to implement which components get built. Prototype (117) can use Singleton (127) in its implementation.


A case study: Designing a Document Editor

This chapter presents a case study in the design of a “What-You-See-Is-What-You-Get” (or “WYSIWYG”) document editor called Lexi.[1] We’ll see how design patterns capture solutions to design problems in Lexi and applications like it. By the end of this chapter you will have gained experience with eight patterns, learning them by example.

Design Problems

  1. Document structure. The choice of internal representation for the document affects nearly every aspect of Lexi's design. All editing, formatting, displaying, and textual analysis will require traversing the representation. The way we organize this information will impact the design of the rest of the application.
  2. Formatting. How does Lexi actually arrange text and graphics into lines and columns? What objects are responsible for carrying out different formatting policies? How do these policies interact with the document's internal representation?
  3. Embellishing the user interface. Lexi's user interface includes scroll bars, borders, and drop shadows that embellish the WYSIWYG document interface.

  4. Supporting multiple look-and-feel standards. Lexi should adapt easily to different look-and-feel standards such as Motif and Presentation Manager (PM) without major modification.

  5. Supporting multiple window systems. Lexi's design should be as independent of the window system as possible.

  6. User operations. Users control Lexi through various user interfaces, including buttons and pull-down menus. The functionality behind these interfaces is scattered throughout the objects in the application.

  7. Spelling checking and hyphenation. How does Lexi support analytical operations such as checking for misspelled words and determining hyphenation points?


Effective Python (26) — Use multiple inheritance only for mix-in utility classes

Python is an object-oriented language with built-in facilities for making multiple inheritance tractable. However, it's better to avoid multiple inheritance altogether.

If you find yourself desiring the convenience and encapsulation that come with multiple inheritance, consider writing a mix-in instead. A mix-in is a small class that only defines a set of additional methods that a class should provide. Mix-in classes don't define their own instance attributes nor require their __init__ constructor to be called.

Writing mix-ins is easy because Python makes it trivial to inspect the current state of any object regardless of its type. Dynamic inspection lets you write generic functionality a single time, in a mix-in, that can be applied to many other classes. Mix-ins can be composed and layered to minimize repetitive code and maximize reuse.

For example, say you want the ability to convert a Python object from its in-memory representation to a dictionary that's ready for serialization. Why not write this functionality generically so you can use it with all of your classes?

class ToDictMixin(object):
    def to_dict(self):
        return self._traverse_dict(self.__dict__)

The implementation details are straightforward and rely on dynamic attribute access using hasattr, dynamic type inspection with isinstance, and accessing the instance dictionary __dict__.

    def _traverse_dict(self, instance_dict):
        output = {}
        for key, value in instance_dict.items():
            output[key] = self._traverse(key, value)
        return output

    def _traverse(self, key, value):
        if isinstance(value, ToDictMixin):
            return value.to_dict()
        elif isinstance(value, dict):
            return self._traverse_dict(value)
        elif isinstance(value, list):
            return [self._traverse(key, i) for i in value]
        elif hasattr(value, '__dict__'):
            return self._traverse_dict(value.__dict__)
        else:
            return value

Here, I define an example class that uses the mix-in to make a dictionary representation of a binary tree:

class BinaryTree(ToDictMixin):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

Translating a large number of related Python objects into a dictionary becomes easy.

tree = BinaryTree(10,
  left=BinaryTree(7, right=BinaryTree(9)),
  right=BinaryTree(13, left=BinaryTree(11)))

{'left': {'left': None,
         'right': {'left':None, 'right':None, 'value':9},
         'value': 7},
 'right': {'left': {'left': None, 'right':None, 'value': 11},
          'right': None,
          'value': 13},
 'value': 10}

The best part about mix-ins is that you can make their generic functionality pluggable so behaviors can be overridden when required. For example, here I define a subclass of BinaryTree that holds a reference to its parent. This circular reference would cause the default implementation of ToDictMixin.to_dict to loop forever.

class BinaryTreeWithParent(BinaryTree):
    def __init__(self, value, left=None,
                right=None, parent=None):
        super().__init__(value, left=left, right=right)
        self.parent = parent

The solution is to override the ToDictMixin._traverse method in the BinaryTreeWithParent class to only process values that matter, preventing cycles encountered by the mix-in. Here, I override the _traverse method to not traverse the parent and just insert its numerical value:

    def _traverse(self, key, value):
        if (isinstance(value, BinaryTreeWithParent) and
                key == 'parent'):
            return value.value  # Prevent cycles
        else:
            return super()._traverse(key, value)

Calling BinaryTreeWithParent.to_dict will work without issue because the circular referencing properties aren't followed.

root = BinaryTreeWithParent(10)
root.left = BinaryTreeWithParent(7, parent=root)
root.left.right = BinaryTreeWithParent(9, parent=root.left)

{'left': {'left': None,
         'parent': 10,
         'right': {'left': None,
                  'parent': 7,
                  'right': None,
                  'value': 9},
         'value': 7},
'parent': None,
'right': None,
'value': 10}

By defining BinaryTreeWithParent._traverse, I've also enabled any class that has an attribute of type BinaryTreeWithParent to automatically work with ToDictMixin.

class NamedSubTree(ToDictMixin):
    def __init__(self, name, tree_with_parent): = name
        self.tree_with_parent = tree_with_parent

my_tree = NamedSubTree('foobar', root.left.right)
print(my_tree.to_dict())  # No infinite loop
{'name': 'foobar',
'tree_with_parent': {'left': None,
                    'parent': 7,
                    'right': None,
                    'value': 9}}

Mix-ins can also be composed together. For example, say you want a mix-in that provides generic JSON serialization for any class. You can do this by assuming that a class provides a to_dict method (which may or may not be provided by the ToDictMixin class).

import json

class JsonMixin(object):
    @classmethod
    def from_json(cls, data):
        kwargs = json.loads(data)
        return cls(**kwargs)

    def to_json(self):
        return json.dumps(self.to_dict())

Note how the JsonMixin class defines both instance methods and class methods. Mix-ins let you add either kind of behavior. In this example, the only requirements of the JsonMixin are that the class has a to_dict method and its __init__ method takes keyword arguments.

This mix-in makes it simple to create hierarchies of utility classes that can be serialized to and from JSON with little boilerplate. For example, here I have a hierarchy of data classes representing parts of a datacenter topology:

class DatacenterRack(ToDictMixin, JsonMixin):
    def __init__(self, switch=None, machines=None):
        self.switch = Switch(**switch)
        self.machines = [
          Machine(**kwargs) for kwargs in machines]

class Switch(ToDictMixin, JsonMixin):
    # ...
class Machine(ToDictMixin, JsonMixin):
    # ...

Serializing these classes to and from JSON is simple. Here, I verify that the data is able to be sent round-trip through serializing and deserializing:

serialized = """{
    "switch": {"ports": 5, "speed": 1e9},
    "machines": [
        {"cores": 8, "ram": 32e9, "disk": 5e12},
        {"cores": 4, "ram": 16e9, "disk": 1e12},
        {"cores": 2, "ram": 4e9, "disk": 500e9}
    ]
}"""

deserialized = DatacenterRack.from_json(serialized)
roundtrip = deserialized.to_json()
assert json.loads(serialized) == json.loads(roundtrip)

When you use mix-ins like this, it's also fine if the class already inherits from JsonMixin higher up in the object hierarchy. The resulting class will behave the same way.

The Origin of Best Practices


According to Richard Stallman, hackers have a love of excellence and programming in common. They enjoy the creative challenge of overcoming technical limitations and achieving things not thought possible before. People who are good at hacking are indispensable in today's technology-driven society. Hacking is a key skill, and we need people who are good at it. Without doubt, hacking is a useful skill in programming. Python is an excellent language to hack things in.

However, I don't think hacking is a great approach to programming in general. I give three reasons for that.

  • First, hacking focuses on novel, difficult, or otherwise challenging problems.
  • Second, hacking has a strong connotation of excellence. Hackers are often considered an elite, a community that requires an unspoken skill level to belong to. But in programming, there are many problems that do not need an expert to solve. Often, an average programmer is sufficient, where a hacker might become bored.
  • Third, not everybody wants to devote themselves to hacking. Many of us have other things to do than figuring out in all details how computers work; we have data to understand, websites to create, a business to run, and families to take care of. In my opinion, programming is far too important to leave it to a small group of devoted experts.

Software engineering

Opposed to hacking, we find software engineering. Software engineering is concerned with a systematic approach to building software, usually in a corporate environment. Instead of focusing on the individual, software engineering is about controlling the entire process of building programs: its techniques cover finding out precisely what is to be built, designing a program with clearly defined components, verifying that the program actually works correctly and, finally, maintaining the program once it is being used.

The drawback of the software engineering approach is that it is too big for most Python projects.


The notion that software engineering is a rather heavy methodology is not new. When asking for an alternative, the answer you will often hear in 2017 is Agile. Agile is a philosophy dedicated to improving the software development process. Its core values are individuals and interactions, working software, customer collaboration, and responding to change. Agile promotes rapid, evolutionary development of programs and working in short iterations.

Agile is a philosophy, not a set of tools. Agile tells us why we might want to program in a certain way (to create working software quickly and have satisfied customers), but it does not tell us what to do when in front of a keyboard. Also, Agile frameworks are often difficult to implement in practice.

Software craftsmanship

Software craftsmanship acknowledges that a big portion of programming consists of simple tasks that need to be done. To do them well, we need to have the right tools, we need to have the right skills, and we need to apply both in practice. Programming being a craft like masonry, carpentry, or confectionery suggests that

  • Purpose matters. We are creating programs as a means to an end. Programming is not an art; we want to write programs that are used.
  • Planning matters.
  • Tools matter.
  • Skills matter.
  • Community matters.
  • Size does not matter.
  • Practice matters.

Introduction to Algorithms — Red-Black Tree

1. Properties of red-black trees

A red-black tree is a binary search tree with one extra bit of storage per node: its color, which can be either RED or BLACK. By constraining the node colors on any simple path from the root to a leaf, red-black trees ensure that no such path is more than twice as long as any other, so that the tree is approximately balanced.

Each node of the tree now contains the attributes color, key, left, right, and p. If a child or the parent of a node does not exist, the corresponding pointer attribute of the node contains the value NIL. We shall regard these NILs as being pointers to leaves (external nodes) of the binary search tree and the normal, key-bearing nodes as being internal nodes of the tree.

A red-black tree is a binary tree that satisfies the following red-black properties:

  1. Every node is either red or black.
  2. The root is black.
  3. Every leaf (NIL) is black.
  4. If a node is red, then both its children are black.
  5. For each node, all simple paths from the node to descendant leaves contain the same number of black nodes.
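The five properties can be checked mechanically. Here is a small Python sketch, not from the book, that validates them on a hand-built tree, with None standing in for the NIL leaves:

```python
RED, BLACK = "red", "black"

class Node:
    def __init__(self, key, color, left=None, right=None):
        self.key, self.color = key, color
        self.left, self.right = left, right

def check(node):
    """Return the black-height of the subtree if it satisfies
    properties 1-5, or raise AssertionError otherwise."""
    if node is None:                  # property 3: every leaf (NIL) is black
        return 1
    assert node.color in (RED, BLACK)                # property 1
    if node.color == RED:                            # property 4
        for child in (node.left, node.right):
            assert child is None or child.color == BLACK
    lh, rh = check(node.left), check(node.right)
    assert lh == rh                                  # property 5
    return lh + (1 if node.color == BLACK else 0)

# A valid red-black tree: a black root with two red children.
root = Node(2, BLACK, Node(1, RED), Node(3, RED))
assert root.color == BLACK                           # property 2
print(check(root))  # the tree's black-height
```

Swapping either child's color to RED while its parent is RED makes check raise, which is exactly a violation of property 4.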


As a matter of convenience in dealing with boundary conditions in red-black tree code, we use a single sentinel to represent NIL. For a red-black tree T, the sentinel T.nil is an object with the same attributes as an ordinary node in the tree.

  • Its color attribute is BLACK, and its other attributes — p, left, right, and key can take on arbitrary values.

As Figure (b) shows, all pointers to NIL are replaced by pointers to the sentinel T.nil.

We use the sentinel so that we can treat a NIL child of a node x as an ordinary node whose parent is x .

  • Although we instead could add a distinct sentinel node for each NIL in the tree, so that the parent of each NIL is well defined, that approach would waste space.
  • Instead, we use the one sentinel T.nil to represent all the NILs — all leaves and the root’s parent. The values of the attributes p, left, right, and key of the sentinel are immaterial, although we may set them during the course of a procedure for our convenience.
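A minimal Python sketch of this representation, with the class names as illustrative assumptions, shows how one shared sentinel object stands in for every NIL leaf and for the root's parent:

```python
class RBNode:
    def __init__(self, key=None):
        self.key = key
        self.color = "black"
        self.p = self.left = self.right = None

class RBTree:
    def __init__(self):
        self.nil = RBNode()       # the single sentinel: always black
        self.root = self.nil      # an empty tree is just the sentinel

t = RBTree()
print(t.root is t.nil)  # True: every NIL pointer is the same object
```

Because there is only one sentinel, identity tests like `x.left is t.nil` replace the None checks an ordinary binary search tree would need, and the sentinel's color can be read safely like any other node's.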


We generally confine our interest to the internal nodes of a red-black tree, since they hold the key values. In the remainder of this article, we omit the leaves when we draw red-black trees, as shown in (c).


We call the number of black nodes on any simple path from, but not including, a node x down to a leaf the black-height of the node, denoted bh(x). By property 5, the notion of black-height is well defined, since all descending simple paths from the node have the same number of black nodes. We define the black-height of a red-black tree to be the black-height of its root.

The following lemma shows why red-black trees make good search trees.

Lemma 1

A red-black tree with n internal nodes has height at most 2 \lg(n+1) .


As an immediate consequence of this lemma, we can implement the dynamic-set operations in O(\lg n) time on red-black trees, since each can run in O(h) time on a binary search tree of height h and any red-black tree on n nodes is a binary search tree with height O(\lg n) . Although the algorithms TREE-INSERT and TREE-DELETE in Binary Search Tree run in O(\lg(n)) time when given a red-black tree as input, they do not directly support the dynamic-set operations INSERT and DELETE, since they do not guarantee that the modified binary search tree will be a red-black tree.

2. Rotations

The search-tree operations TREE-INSERT and TREE-DELETE, when run on a red-black tree with n keys, take O(\lg n) time. Because they modify the tree, the result may violate the red-black properties enumerated in Section 1. To restore these properties, we must change the colors of some of the nodes in the tree and also change the pointer structure.

We change the pointer structure through rotation, which is a local operation in a search tree that preserves the binary-search-tree property. Figure 2 shows the two kinds of rotations: left rotations and right rotations.

  • When we do a left rotation on a node x , we assume that its right child y is not T.nil ; x may be any node in the tree whose right child is not T.nil .
    • The left rotation “pivots” around the link from x to y . It makes y the new root of the subtree, with x as y ‘s left child and y ‘s left child as x ‘s right child.

The pseudocode for LEFT-ROTATE assumes that x.right \neq T.nil and that the root’s parent is T.nil .



Figure 3 shows an example of how LEFT-ROTATE modifies a binary search tree. The code for RIGHT-ROTATE is symmetric. Both LEFT-ROTATE and RIGHT-ROTATE run in O(1) time. Only pointers are changed by a rotation; all other attributes in a node remain the same.
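Following the pseudocode, a left rotation might be sketched in Python as below. The Node class and the module-level nil sentinel are assumptions for illustration, not CLRS's exact data layout:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.p = self.left = self.right = None

nil = Node(None)                      # shared sentinel standing in for T.nil

def left_rotate(root, x):
    """Rotate left around x (x.right must not be nil);
    return the possibly new tree root."""
    y = x.right
    x.right = y.left                  # turn y's left subtree into x's right
    if y.left is not nil:
        y.left.p = x
    y.p = x.p                         # link x's parent to y
    if x.p is nil:
        root = y                      # y becomes the new root
    elif x is x.p.left:
        x.p.left = y
    else:
        x.p.right = y
    y.left = x                        # put x on y's left
    x.p = y
    return root

# Build x with right child y, rotate, and y becomes the subtree root.
x, y = Node(1), Node(2)
x.p = x.left = x.right = nil
y.p = y.left = y.right = nil
x.right, y.p = y, x
new_root = left_rotate(x, x)
print(new_root.key)  # 2
```

Only a constant number of pointers change, which is why the rotation runs in O(1) time.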


3. Insertion

We can insert a node into an n -node red-black tree in O(\lg n) time. To do so, we use a slightly modified version of the TREE-INSERT procedure to insert node z into the tree T as if it were an ordinary binary search tree, and then we color z red. To guarantee that the red-black properties are preserved, we then call an auxiliary procedure RB-INSERT-FIXUP to recolor nodes and perform rotations. The call RB-INSERT(T, z) inserts node z , whose key is assumed to have already been filled in, into the red-black tree T .


The procedures TREE-INSERT and RB-INSERT differ in four ways.

  • First, all instances of NIL in TREE-INSERT are replaced by T.nil.
  • Second, we set z.left and z.right to T.nil in lines 14-15 of RB-INSERT, in order to maintain the proper tree structure.
  • Third, we color z red.
  • Fourth, because coloring z red may cause a violation of one of the red-black properties, we call RB-INSERT-FIXUP(T, z) at the end of RB-INSERT to restore the red-black properties.
  RB-INSERT-FIXUP(T, z)
    while z.p.color == RED
        if z.p == z.p.p.left
            y = z.p.p.right
            if y.color == RED
                z.p.color = BLACK           // case 1
                y.color = BLACK             // case 1
                z.p.p.color = RED           // case 1
                z = z.p.p                   // case 1
            else
                if z == z.p.right
                    z = z.p                 // case 2
                    LEFT-ROTATE(T, z)       // case 2
                z.p.color = BLACK           // case 3
                z.p.p.color = RED           // case 3
                RIGHT-ROTATE(T, z.p.p)      // case 3
        else (same as then clause with "right" and "left" exchanged)
    T.root.color = BLACK
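Putting the pieces together, here is a runnable Python sketch of RB-INSERT and RB-INSERT-FIXUP. The RBTree class layout is my assumption for illustration, but the logic mirrors the pseudocode:

```python
RED, BLACK = 0, 1

class Node:
    def __init__(self, key, color=RED):
        self.key, self.color = key, color
        self.p = self.left = self.right = None

class RBTree:
    def __init__(self):
        self.nil = Node(None, BLACK)      # sentinel T.nil
        self.root = self.nil

    def left_rotate(self, x):
        y = x.right
        x.right = y.left
        if y.left is not self.nil:
            y.left.p = x
        y.p = x.p
        if x.p is self.nil:
            self.root = y
        elif x is x.p.left:
            x.p.left = y
        else:
            x.p.right = y
        y.left = x
        x.p = y

    def right_rotate(self, x):            # mirror image of left_rotate
        y = x.left
        x.left = y.right
        if y.right is not self.nil:
            y.right.p = x
        y.p = x.p
        if x.p is self.nil:
            self.root = y
        elif x is x.p.right:
            x.p.right = y
        else:
            x.p.left = y
        y.right = x
        x.p = y

    def insert(self, z):
        y, x = self.nil, self.root
        while x is not self.nil:          # ordinary BST descent
            y = x
            x = x.left if z.key < x.key else x.right
        z.p = y
        if y is self.nil:
            self.root = z
        elif z.key < y.key:
            y.left = z
        else:
            y.right = z
        z.left = z.right = self.nil       # differences 1-2: NILs -> sentinel
        z.color = RED                     # difference 3: color z red
        self.insert_fixup(z)              # difference 4: restore properties

    def insert_fixup(self, z):
        while z.p.color == RED:
            if z.p is z.p.p.left:
                y = z.p.p.right           # z's uncle
                if y.color == RED:        # case 1: recolor and move up
                    z.p.color = y.color = BLACK
                    z.p.p.color = RED
                    z = z.p.p
                else:
                    if z is z.p.right:    # case 2: transform into case 3
                        z = z.p
                        self.left_rotate(z)
                    z.p.color = BLACK     # case 3: recolor and rotate
                    z.p.p.color = RED
                    self.right_rotate(z.p.p)
            else:                         # "right" and "left" exchanged
                y = z.p.p.left
                if y.color == RED:
                    z.p.color = y.color = BLACK
                    z.p.p.color = RED
                    z = z.p.p
                else:
                    if z is z.p.left:
                        z = z.p
                        self.right_rotate(z)
                    z.p.color = BLACK
                    z.p.p.color = RED
                    self.left_rotate(z.p.p)
        self.root.color = BLACK

t = RBTree()
for k in [41, 38, 31, 12, 19, 8]:
    t.insert(Node(k))
print(t.root.key)   # 38: the tree rebalanced away from the insertion order
```

Despite inserting keys in strictly decreasing order after 41, the fixup keeps the height logarithmic, which a plain binary search tree would not.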

To understand how RB-INSERT-FIXUP works, we shall break our examination of the code into three major steps.

  • First, we shall determine what violations of the red-black properties are introduced in RB-INSERT when node z is inserted and colored red.
  • Second, we shall examine the overall goal of the while loop in lines 1-15.
  • Finally, we shall explore each of the three cases within the while loop’s body and see how they accomplish the goal.

Figure 4 shows how RB-INSERT-FIXUP operates on a sample red-black tree.


4. Deletion


Introduction to Algorithms — Dynamic Programming

Dynamic programming typically applies to optimization problems in which we make a set of choices in order to arrive at an optimal solution. As we make each choice, subproblems of the same form often arise. Dynamic programming is effective when a given subproblem may arise from more than one partial set of choices.

  • The key technique is to store the solution to each such subproblem in case it should reappear.

Divide-and-conquer algorithms partition the problem into disjoint subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. Dynamic programming applies instead when the subproblems overlap, that is, when subproblems share subsubproblems.

  • In this context, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subsubproblems.

A dynamic-programming algorithm solves each subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time it solves each subsubproblem.

We typically apply dynamic programming to optimization problems. Such problems can have many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

When developing a dynamic-programming algorithm, we follow a sequence of four steps:

  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution, typically in a bottom-up fashion.
  4. Construct an optimal solution from computed information.

Steps 1-3 form the basis of a dynamic-programming solution to a problem.

  • If we need only the value of an optimal solution, and not the solution itself, then we can omit step 4.
  • When we do perform step 4, we sometimes maintain additional information during step 3 so that we can easily construct an optimal solution.
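The four steps can be illustrated with the classic rod-cutting problem: given prices p[i] for a rod piece of length i, maximize the revenue from cutting a rod of length n. The sketch below uses the standard textbook price table; the array s holds the extra information from step 3 that makes step 4 easy:

```python
def cut_rod(p, n):
    # Steps 2-3: r[j] = max over first cuts i of p[i] + r[j - i],
    # computed bottom-up; s[j] records an optimal first cut (for step 4).
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        best = float("-inf")
        for i in range(1, j + 1):
            if p[i] + r[j - i] > best:
                best = p[i] + r[j - i]
                s[j] = i
        r[j] = best
    # Step 4: reconstruct one optimal list of cuts from s.
    cuts, j = [], n
    while j > 0:
        cuts.append(s[j])
        j -= s[j]
    return r[n], cuts

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # p[0] unused
print(cut_rod(prices, 7))   # (18, [1, 6])
```

Each subproblem r[j] is solved exactly once and looked up thereafter, so the running time is O(n^2) rather than the exponential time of the naive recursion that re-solves shared subsubproblems.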


Performance of Python Data Structures

It is important for you to understand the efficiency of these Python data structures because they are the building blocks we will use as we implement other data structures in the remainder of the book.


The designers of Python had many choices to make when they implemented the list data structure. Each of these choices could have an impact on how fast list operations perform. To help them make the right choices, they looked at the ways that people would most commonly use the list data structure, and they optimized their implementation so that the most common operations were very fast. Of course, they also tried to make the less common operations fast, but when a tradeoff had to be made, the performance of a less common operation was often sacrificed in favor of the more common operation.

Two common operations are indexing and assigning to an index position. Both of these operations take the same amount of time no matter how large the list becomes. When an operation like this is independent of the size of the list, it is O(1) .
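A quick experiment with the standard timeit module illustrates this. The exact timings will vary by machine, but the two results should be roughly the same even though the lists differ in size by a factor of a thousand:

```python
import timeit

small = list(range(1_000))
large = list(range(1_000_000))

# Time 100,000 index operations on each list.
t_small = timeit.timeit(lambda: small[500], number=100_000)
t_large = timeit.timeit(lambda: large[500_000], number=100_000)

print(f"small list index: {t_small:.4f}s")
print(f"large list index: {t_large:.4f}s")   # roughly the same
```

This is what O(1) means in practice: the cost of the operation does not grow as the input grows.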
