A Loss Of Heart

Zero Out Of Ten


In a recent talk at Cambridge University, Bjarne Stroustrup presented this slide to his audience:

[Image: StroustrupTop10List]

Although some very smart people have put years of hard work into developing these features, Mr. Stroustrup (sadly) went on to say that none of them will be present in the formal C++17 specification. The last sentence on the slide succinctly summarizes why.


M Of N Fault Detection


Let’s say that, for safety reasons, you need to monitor the voltage (or some other dynamically changing physical attribute) of a safety-critical piece of equipment and either shut it down automatically or notify someone (or both) in the event of a “fault”.

[Image: FaultDetection]

The figure below drills into the next level of detail of the Fault Detector. First, the analog input signal is digitized every ΔT seconds and then the samples are processed by an “M of N Detector”.

[Image: Fault Detector]

The logic implemented in the Detector is as follows:

[Image: MofNLogic: declare a fault when at least M of the last N samples have crossed the threshold]

Instead of “crying wolf” and declaring a fault each time the threshold is crossed (M == N == 1), the detector lets the user choose the values of M and N, smoothing out spurious threshold crossings.

The next figure shows an example input sequence and the detector output for two configurations: (M == N == 4) and (M == 2, N == 4).

[Image: FDExamples]

Finally, here’s a C++ implementation of an M of N Detector:

#ifndef MOFNDETECTOR_H_
#define MOFNDETECTOR_H_

#include <cstdint>
#include <deque>

class MofNDetector {
public:
  MofNDetector(double threshold, int32_t M, int32_t N) :
    _threshold(threshold), _M(M), _N(N) {
    reset();
  }

  bool detectFault(double sample) {

    //Record whether this sample crossed the threshold
    //and add it to the history
    bool crossed{sample > _threshold};
    if(crossed)
      ++_numCrossings;

    _lastNSamples.push_back(crossed);

    //Do we have enough history yet?
    if(static_cast<int32_t>(_lastNSamples.size()) < _N) {
      return false;
    }

    //Declare a fault if at least M of the last N samples crossed
    bool retVal{_numCrossings >= _M};

    //Get rid of the oldest sample
    //to make room for the next one
    if(_lastNSamples.front())
      --_numCrossings;

    _lastNSamples.pop_front();

    return retVal;
  }

  void reset() {
    _lastNSamples.clear();
    _numCrossings = 0;
  }

private:
  int32_t _numCrossings;
  double _threshold;
  int32_t _M;
  int32_t _N;
  std::deque<bool> _lastNSamples{};

};

#endif /* MOFNDETECTOR_H_ */
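
To show how the class might be used, here’s a minimal sketch (the header file name, the 5.0 threshold, and the sample values are my own assumptions for illustration). The detector is configured with M == 2 and N == 4, matching the second example configuration above:

#include <iostream>
#include "MofNDetector.h" //assumed file name for the header above

int main() {
  //Declare a fault whenever at least 2 of the last 4 samples
  //exceed a (made-up) threshold of 5.0
  MofNDetector detector(5.0, 2, 4);

  //Made-up input sequence
  const double samples[] = {1.0, 6.2, 3.3, 7.1, 2.0, 1.5, 0.9, 8.4};

  for(double s : samples) {
    std::cout << s << " -> "
              << (detector.detectFault(s) ? "FAULT" : "ok") << '\n';
  }
  //With this input, only the 4th and 5th samples report a fault,
  //because 2 of the preceding 4 samples at those points crossed the threshold.
}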

A Quarter Of An Hour

May 19, 2016

Looky here at what someone I know sent me:

[Image: PointTwoFive]

This person is working on a project in which management requires time tracking in .25 hour increments. On some days, that poor sap and her teammates spend at least that much time at the end of the day aggregating and entering their time into their formal timesheets.

Humorously, those in charge of running the project have no qualms about promoting their process as “agile“.

I can’t know for sure, but I speculate that the practice of micromanagement cloaked under the veil of “agile” is pervasive throughout the software development industry.


Linear Interpolation In C++

May 10, 2016

In scientific programming and embedded sensor systems applications, linear interpolation is often used to estimate a value from a series of discrete data points. The problem is stated and the solution is given as follows:

[Image: linear interpolation: problem statement and two-point solution formula]

The solution assumes that any two adjacent points in the given data set lie on a straight line. Hence, it takes the form of the classic textbook equation for a line, y = b + mx, where b is the y intercept and m is the slope of the line.
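
Spelling that out in my own notation (the original figure isn’t reproduced here): if (x0, y0) and (x1, y1) are the two data points that bracket the x value of interest, then

\[
m = \frac{y_1 - y_0}{x_1 - x_0}, \qquad b = y_0 - m x_0, \qquad
y = b + mx = y_0 + \frac{(x - x_0)(y_1 - y_0)}{x_1 - x_0}
\]

which is exactly the expression computed at the bottom of findValue() below.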

If the set of data points does not represent a linear underlying phenomenon, more sophisticated polynomial interpolation techniques that use additional data points around the point of interest can be utilized to get a more accurate estimate.

The code snippets below give the definition and implementation of a C++ class that performs linear interpolation. I chose to use a vector of pairs to represent the set of data points. What would you have used?

Interpolator.h

#ifndef INTERPOLATOR_H_
#define INTERPOLATOR_H_

#include <utility>
#include <vector>

class Interpolator {
public:
  //On construction, we take in a vector of data point pairs
  //that represent the line we will use to interpolate
  Interpolator(const std::vector<std::pair<double, double>>&  points);
  
  //Computes the corresponding Y value 
  //for X using linear interpolation
  double findValue(double x) const;

private:
  //Our container of (x,y) data points
  //pair.first  = x value
  //pair.second = y value
  std::vector<std::pair<double, double>> _points;
};
#endif /* INTERPOLATOR_H_ */

Interpolator.cpp

#include "Interpolator.h"
#include <algorithm>
#include <stdexcept>

Interpolator::Interpolator(const std::vector<std::pair<double, double>>& points)
  : _points(points) {
  //Defensive programming. Assume the caller has not sorted
  //the table in ascending order
  std::sort(_points.begin(), _points.end());

  //Ensure that no 2 adjacent x values are equal,
  //lest we try to divide by zero when we interpolate.
  const double EPSILON{1.0E-8};
  for(std::size_t i=1; i<_points.size(); ++i) {
    double deltaX{std::abs(_points[i].first - _points[i-1].first)};
    if(deltaX < EPSILON ) {
      std::string err{"Potential Divide By Zero: Points " +
        std::to_string(i-1) + " And " +
        std::to_string(i) + " Are Too Close In Value"};
      throw std::range_error(err);
    }
  }
}

double Interpolator::findValue(double x) const {
  //Define a lambda that returns true if the x value
  //of a point pair is < the caller's x value
  auto lessThan =
      [](const std::pair<double, double>& point, double x)
      {return point.first < x;};

  //Find the first table entry whose value is >= caller's x value
  auto iter =
      std::lower_bound(_points.cbegin(), _points.cend(), x, lessThan);

  //If the caller's X value is greater than the largest
  //X value in the table, we can't interpolate.
  if(iter == _points.cend()) {
    return (_points.cend() - 1)->second;
  }

  //If the caller's X value is less than the smallest X value in the table,
  //we can't interpolate.
  if(iter == _points.cbegin() and x <= _points.cbegin()->first) {
    return _points.cbegin()->second;
  }

  //We can interpolate!
  double upperX{iter->first};
  double upperY{iter->second};
  double lowerX{(iter - 1)->first};
  double lowerY{(iter - 1)->second};

  double deltaY{upperY - lowerY};
  double deltaX{upperX - lowerX};

  return lowerY + ((x - lowerX)/ deltaX) * deltaY;
}

In the constructor, the code establishes the invariant conditions required before any post-construction interpolation can be attempted:

  • It sorts the data points in ascending X order – just in case the caller “forgot” to do it.
  • It ensures that no two adjacent X values have the same value – which could cause a divide-by-zero during the interpolation computation. If this constraint is not satisfied, an exception is thrown.

Here are the unit tests I ran on the implementation:

#include "catch.hpp"
#include "Interpolator.h"

TEST_CASE( "Test", "[Interpolator]" ) {
  //Construct with an unsorted set of data points
  Interpolator interp1{
    {
      //{X(i), Y(i)}
      {7.5, 32.0},
      {1.5, 20.0},
      {0.5, 10.0},
      {3.5, 28.0},
    }
  };

  //X value too low
  REQUIRE(10.0 == interp1.findValue(.2));
  //X value too high
  REQUIRE(32.0 == interp1.findValue(8.5));
  //X value in 1st sorted slot
  REQUIRE(15.0 == interp1.findValue(1.0));
  //X value in last sorted slot
  REQUIRE(30.0 == interp1.findValue(5.5));
  //X value in second sorted slot
  REQUIRE(24.0 == interp1.findValue(2.5));

  //Two points have the same X value
  std::vector<std::pair<double, double>> badData{
    {.5, 32.0},
    {1.5, 20.0},
    {3.5, 28.0},
    {1.5, 10.0},
  };
  REQUIRE_THROWS(Interpolator interp2{badData});
}

How can this code be improved?
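
One candidate improvement, offered as a sketch rather than a definitive fix: nothing rejects a table with fewer than two points, and findValue() would walk off the end of an empty table. A thin validating helper (the function name and message text below are mine) closes that gap without touching the class itself:

#include "Interpolator.h"
#include <stdexcept>
#include <utility>
#include <vector>

//Sketch: refuse tables that are too small to bracket any x value
//before handing them to the Interpolator.
Interpolator makeInterpolator(
    const std::vector<std::pair<double, double>>& points) {
  if(points.size() < 2) {
    throw std::invalid_argument(
        "Interpolator Requires At Least 2 Data Points");
  }
  return Interpolator{points};
}

Alternatively, the same size check could live alongside the other invariant checks in the Interpolator constructor itself.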


The Unwanted State Transition


Humor me for a moment by assuming that this absurdly simplistic state transition diagram accurately models the behavior of any given large institution:

[Image: Culture STM]

In your experience, which of the following transitions have you observed occurring most frequently at your institution?

[Image: management xitions]

First Confuse Them, And Then Unconfu$e Them


I don’t understand it. I simply don’t understand how some (many?) Scrum coaches and consultants can advocate dumping the words “estimates” and “backlogs” from Scrum.

The word “estimate” appears 9 times in the 16-page Scrum Guide:

  1. All incomplete Product Backlog Items are re-estimated and put back on the Product Backlog.
  2. The work done on them depreciates quickly and must be frequently re-estimated.
  3. Work may be of varying size, or estimated effort.
  4. Product Backlog items have the attributes of a description, order, estimate and value.
  5. Product Backlog refinement is the act of adding detail, estimates, and order to items in the Product Backlog.
  6. More precise estimates are made based on the greater clarity and increased detail; the lower the order, the less detail.
  7. The Development Team is responsible for all estimates.
  8. …the people who will perform the work make the final estimate.
  9. As work is performed or completed, the estimated remaining work is updated.

As for the word “backlog”, it appears an astonishing 80 times in the 16-page Scrum Guide.

People who make their living teaching Scrum while insinuating that “estimates/backlogs” aren’t woven into the fabric of Scrum are full of sheet. How in the world could one, as a certified Scrum expert, teach Scrum to software development professionals and not mention “estimates/backlogs”?

[Image: certified scrum]

Even though I think their ideas (so far) lack actionable substance, I have no problem with revolutionaries who want to jettison the words “estimates/backlogs” from the software development universe. I only have a problem with those who attempt to do so by disingenuously associating their alternatives with Scrum to get attention they wouldn’t otherwise get. Ideas should stand on their own merit.

If you follow these faux-scrummers on Twitter, they’ll either implicitly or explicitly trash estimates/backlogs and then have the audacity to deny it when some asshole (like me) calls them out on it. One of them actually cranked it up a notch by tweeting, without blinking an e-eye, that Scrum was defined before Scrum was defined – even though the second paragraph in the Scrum guide is titled “Definition Of Scrum“. Un-freakin-believable.

[Image: TOC]

Sorry, but I lied in the first sentence of this post. I DO understand why some so-called Scrum advocates would call for the removal of concepts integrally baked into the definition of the Scrum framework. It’s because clever, ambiguous behavior is what defines the consulting industry in general. It’s the primary strategy the industry uses very, very effectively to make you part with your money: first they confuse you, then they’ll un-confuse you if you “hire us for $2K/expert/day“.

…and people wonder why I disdain the consulting industry.

The best book I ever read on the deviousness of the consulting industry was written by a reformed consultant: Matthew Stewart. Perhaps you should read it too:

[Image: TMM]
