Archives: Development

Raspberry Pi 4

Only 1 full year after it was released, I managed to get a Raspberry Pi 4 and test it out in the quad A1. I had been delaying doing so because of reports of thermal issues. The Pi 3B+ already ran a little hot and I didn’t want to have to add active cooling into the robot chassis to get it stable.

It looks like the Raspberry Pi engineers have been hard at work, because the newer firmware releases have significantly reduced the overall power consumption and thus the thermal load. In my testing so far it only seems “a little” hotter than the 3B+.

Balancing gait in 2D

After getting a gait which looked like it could balance across the leg support line in 1D, I needed to extend that to 2D and try it out on the robot.

Extension to 2D

Extending this to two dimensions wasn’t too bad. I just did a bunch of geometry to follow the path traced out by a given 2-dimensional velocity and rotation rate and intersect it with a line segment:
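To make that concrete, here is a minimal sketch of the kind of computation involved, assuming a constant 2D body-frame velocity and yaw rate and a simple numeric integration of the resulting path. The names and the stepping approach are my own illustration, not the actual quad A1 code.

#include <cmath>
#include <optional>

struct Vec2 { double x = 0.0; double y = 0.0; };

// Which side of the (infinite) line through a->b the point p falls on.
double Side(const Vec2& a, const Vec2& b, const Vec2& p) {
  return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Time until a point starting at the origin with body-frame velocity
// (vx, vy) m/s and yaw rate omega rad/s first crosses the segment a->b,
// found by stepping along the resulting arc (or straight line when omega
// is ~0).
std::optional<double> TimeToCross(const Vec2& a, const Vec2& b,
                                  double vx, double vy, double omega,
                                  double horizon_s = 5.0,
                                  double dt_s = 0.001) {
  Vec2 p;
  double heading = 0.0;
  double prev_side = Side(a, b, p);
  for (double t = dt_s; t <= horizon_s; t += dt_s) {
    heading += omega * dt_s;
    p.x += (vx * std::cos(heading) - vy * std::sin(heading)) * dt_s;
    p.y += (vx * std::sin(heading) + vy * std::cos(heading)) * dt_s;
    const double side = Side(a, b, p);
    if ((side > 0.0) != (prev_side > 0.0)) {
      // The path crossed the line; verify it did so within the segment.
      const double abx = b.x - a.x;
      const double aby = b.y - a.y;
      const double u = ((p.x - a.x) * abx + (p.y - a.y) * aby) /
                       (abx * abx + aby * aby);
      if (u >= 0.0 && u <= 1.0) { return t; }
    }
    prev_side = side;
  }
  return std::nullopt;
}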

Given this function, the logic to select a swing target is basically the same as in the 1 dimensional case. We now create two “virtual legs”, which consist of two feet ganged together and produce a single support line. At each time instant when all legs are in stance, we look at the time remaining until each of the virtual legs would cross the center of mass at the current velocity. As soon as one hits the half-swing point, we start a swing.
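A rough sketch of that selection logic, reusing the TimeToCross helper from the sketch above, might look like the following. Which pair actually lifts off once the half-swing criterion is met is left to the caller, and all names here are illustrative rather than the real implementation.

// Each virtual leg is a pair of feet ganged together, producing a single
// support line between them.
struct VirtualLeg {
  Vec2 support_a;
  Vec2 support_b;
  bool in_stance = true;
};

// When both virtual legs are in stance, returns the index of the one whose
// support line will pass under the center of mass within half a swing
// period, or -1 if no swing should start yet.
int CheckHalfSwingPoint(const VirtualLeg (&legs)[2],
                        double vx, double vy, double omega,
                        double swing_period_s) {
  if (!legs[0].in_stance || !legs[1].in_stance) { return -1; }
  for (int i = 0; i < 2; i++) {
    const auto t = TimeToCross(legs[i].support_a, legs[i].support_b,
                               vx, vy, omega);
    if (t && *t <= 0.5 * swing_period_s) { return i; }
  }
  return -1;
}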

Primitive gait balancing - 1D

I’m working to improve the walking gait of the mjbots quad A1. In this iteration, I wanted to tackle an incremental step towards a more fully dynamic gait, but one that will still greatly increase the capability of the machine. As mentioned last time, the current walking gait cycles between all four legs and alternating opposing corner legs in order to move laterally. I’d like to keep that same basic structure, but be a bit smarter about what happens during the swing phase.

First steps towards more dynamic gaits

Now I’ve got a machine, the mjbots quad A1, which is capable of dynamic motions, but the only gait which takes advantage of these capabilities is the pronking one. That gait has the benefit that the dynamics are very simple. The entire time that the robot is in contact with the ground, it is in contact with all 4 legs, so in that regime it is fully controllable. Since it is fully controllable up to the point of lift-off, we can ensure that there is basically zero rotational rate while the machine is mid-flight, which means that it lands with all four legs largely at the same time. Of course, pronking isn’t a very fast or efficient way of getting anywhere, so I wanted to take the first steps… pun intended, I guess… towards improving the more general walking algorithm to make the machine move faster in a more robust manner.

Another foot failure

The first feet I built for the quad A0 lasted for maybe an hour of walking before snapping off. The current design has been much more robust, completing a lot of intensive walking and jumping. However, all things must fail:

Looking at the failure, I was surprised at how little material I had used in the region in question. For now, I just made it 4x thicker and we’ll see how long that lasts, although ultimately it may need a different design, or to be machined instead of 3D printed.

Cartesian leg PD controller

As I am working to improve the gaits of the mjbots quad A1, one aspect I’ve wanted to tackle for a long time is improving the compliance characteristics of the whole robot. Here’s a small step in that direction.

Existing compliance strategy

The quad A1 uses qdd100 servos for each of its joints. The “qdd” in qdd100 stands for “quasi direct drive”. In a quasi direct drive actuator, a low gearing ratio is used, typically less than 10 to 1, which minimizes the backlash and reflected inertia observed at the output. Then, high-rate electronic torque control in the servo, based on current and position feedback, allows dynamic manipulation of the spring and damping behavior of the resulting system.
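The topic of the post is a PD controller expressed in Cartesian space at the foot rather than at the individual joints. As a rough illustration of that general idea (not the quad A1’s actual control code; the 3-DoF leg, the gains, and the names are assumptions), position and velocity errors at the foot produce a desired Cartesian force, and the leg Jacobian transpose turns that force into joint torques:

#include <Eigen/Dense>

struct CartesianGains {
  Eigen::Vector3d kp;  // N/m, per axis
  Eigen::Vector3d kd;  // N/(m/s), per axis
};

// Joint torques (N*m) for a 3-joint leg given the current and desired foot
// state expressed in the leg frame.
Eigen::Vector3d CartesianPdTorques(
    const CartesianGains& gains,
    const Eigen::Vector3d& pos, const Eigen::Vector3d& pos_desired,
    const Eigen::Vector3d& vel, const Eigen::Vector3d& vel_desired,
    const Eigen::Matrix3d& jacobian /* d(foot pos)/d(joint angles) */) {
  const Eigen::Vector3d force =
      gains.kp.cwiseProduct(pos_desired - pos) +
      gains.kd.cwiseProduct(vel_desired - vel);
  // tau = J^T * F maps a desired Cartesian foot force into joint torques.
  return jacobian.transpose() * force;
}

Because the gains are expressed per Cartesian axis, the apparent stiffness and damping of the foot can be tuned independently in each direction, regardless of the leg’s joint configuration.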

Improved swing trajectory

Now that I finally have tplot2 working sufficiently to diagnose problems in 3D, it is time to start actually fixing those problems. The first obvious thing I noticed when watching data replay was that the legs scooted around a lot after making contact with the ground. Absent 3D visualization, I knew something was wrong, but couldn’t easily tell what.

Diagnosing the first problem

Once I was able to plot the commanded position and velocity trajectory, I could clearly see a number of problems. For one, the trajectory was not terribly achievable. The velocity jumped in a discontinuous manner between different phases of the swing cycle, which resulted in large tracking errors when moving the physical legs:
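One standard way to remove jumps like that is to interpolate each segment of the swing with a polynomial that matches both position and velocity at its endpoints. The cubic Hermite sketch below illustrates the idea for a single axis; it is a generic example, not the trajectory code actually used on the robot.

struct State1d {
  double pos = 0.0;
  double vel = 0.0;
};

// Evaluate a cubic Hermite segment from 'start' to 'end' over 'duration'
// seconds at time 't' (0 <= t <= duration).  The result is C1-continuous
// with neighboring segments as long as their boundary states agree.
State1d HermiteEval(const State1d& start, const State1d& end,
                    double duration, double t) {
  const double s = t / duration;  // normalized phase in [0, 1]
  const double s2 = s * s;
  const double s3 = s2 * s;

  const double h00 = 2 * s3 - 3 * s2 + 1;
  const double h10 = s3 - 2 * s2 + s;
  const double h01 = -2 * s3 + 3 * s2;
  const double h11 = s3 - s2;

  State1d result;
  result.pos = h00 * start.pos + h10 * duration * start.vel +
               h01 * end.pos + h11 * duration * end.vel;

  // Derivatives of the basis functions with respect to time
  // (chain rule: d/dt = (1/duration) d/ds).
  const double dh00 = (6 * s2 - 6 * s) / duration;
  const double dh10 = (3 * s2 - 4 * s + 1) / duration;
  const double dh01 = (-6 * s2 + 6 * s) / duration;
  const double dh11 = (3 * s2 - 2 * s) / duration;
  result.vel = dh00 * start.pos + dh10 * duration * start.vel +
               dh01 * end.pos + dh11 * duration * end.vel;
  return result;
}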

Primitive derived fields in tplot2

One of the features that I wanted to get working in the newer tplot2 is some facility for rendering values which are calculated from the things in the log, even if not directly logged there. Straightforward cases would be things like vector lengths, unit conversions, or quaternion-to-Euler-angle conversions. You could imagine needing arbitrarily complex values plotted after the fact.

In past systems I’ve designed, I built in a generic scripting interface to allow arbitrary things to be plotted. I’d like to do that here as well eventually, but in the short term I had a need to plot the total normal force exerted on the ground by all stance legs, and I didn’t want to spend a lot of time designing a generic mechanism. Thus, I rigged up a very primitive C++-only mechanism, where a function can be registered which returns an arbitrary serializable structure. That is then rendered in the tplot2 tree view in a dedicated area, and has a pretty “hacky” way of getting its values onto the plot if necessary.
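The actual registration API isn’t shown here, so the sketch below is only an approximation of the idea: a plain C++ function computes a serializable structure (in this case the total normal force over the stance legs) and is registered so the viewer can evaluate and display it for each instant of the log. All names and signatures are hypothetical.

#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical derived structure: the sum of ground-normal forces over all
// legs currently in stance.
struct DerivedForces {
  double total_normal_force_N = 0.0;
};

struct LegState {
  bool in_stance = false;
  double normal_force_N = 0.0;
};

DerivedForces CalculateDerived(const std::vector<LegState>& legs) {
  DerivedForces result;
  for (const auto& leg : legs) {
    if (leg.in_stance) { result.total_normal_force_N += leg.normal_force_N; }
  }
  return result;
}

// Toy stand-in for the registration mechanism: the viewer would walk this
// list, evaluate each callback at the current log time, and render the
// returned structure in its dedicated tree-view area.
using DerivedFn = std::function<DerivedForces(const std::vector<LegState>&)>;

std::vector<std::pair<std::string, DerivedFn>>& DerivedRegistry() {
  static std::vector<std::pair<std::string, DerivedFn>> registry;
  return registry;
}

void RegisterDerived(std::string name, DerivedFn fn) {
  DerivedRegistry().emplace_back(std::move(name), std::move(fn));
}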

Video and telemetry synchronization (diagnostics part 8)

This is part of a continuing series on updated diagnostic tools for the mjbots quad A1 robot.  Previous editions are in 1, 2, 3, 4, 5, 6, and 7.  Here I’ll be looking at one of the last pieces of the puzzle, synchronizing the video with the rest of the telemetry.

As mentioned previously, recording video of a robot running is an easy, cheap, and fast way to provide ground truth information on all of the sensors and actuators.  However, it is only truly useful if it can be accurately synchronized in time to the other telemetry streams for the robot.

3D rendering in tplot (diagnostics part 7)

In previous posts of this series, I covered some diagnostics improvements I’ve made to help work on more advanced gaits for the mjbots quad A1 (1, 2, 3, 4, 5, 6).  This post will cover the last major new piece of diagnostics I added to tplot2: 3D rendering of telemetry data.

3D rendering

While it should be obvious, I’ll give a little exposition.  tplot2 in its state prior to this could show a “tree view” of all data logged in numeric form.  It had a “plot view” which let you plot any single floating point scalar vs time.  As of recently, it could also render video associated with a given point in time in the log.  However, as anyone who has ever tried to debug a 3-dimensional software application, much less a 3-dimensional robot, can attest, debugging with scalar numbers and time plots is only productive for a very limited range of problems.