Technology Musings

May 28, 2015

Snippets / ActiveMerchant, Authorize.Net Root Certificates, and Linux SSL


This week, Authorize.Net is starting its rollout of new server certificates, signed by a new certificate authority - Entrust.  If you are using ActiveMerchant, this could break your stuff!  It may or may not affect newer versions of ActiveMerchant - I don't know; I only have sites using old versions (like 1.18).  So, even if it breaks, the solution may be different depending on your version of ActiveMerchant.  Still, I'll try to keep the info detailed enough that you can find the solution yourself.

First, the problem.  This affects you if, when ActiveMerchant tries to connect to Authorize.Net, you receive the error:

SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B:
  certificate verify failed

Note that you may not get this error when trying to connect to Authorize.Net via manual Ruby code, but only via ActiveMerchant.

The reason for this is that ActiveMerchant maintains its own list of root CAs!

Therefore, to fix it, you have to (a) make sure that the new root CAs for Authorize.Net are stored in your server's root CA list, and (b) make sure that ActiveMerchant is using your server's list rather than its own.  For good measure, you can also copy ActiveMerchant's CA list into yours.

So, first, to add the Entrust root CAs to your Linux server (assuming it is a fairly recent CentOS-based distro), do the following as root:

  1. Enable the trust infrastructure with the following command: update-ca-trust enable
  2. Go to /etc/pki/ca-trust/source/anchors
  3. Grab the .cer files for the Entrust root certificates (right-click on the links on Entrust's root certificates page, copy the URLs, and download them to the current directory with wget)
  4. Go to wherever the gem files for your project are, and find the active_utils gem.  In the lib/certs directory, there is a file called cacert.pem or something like that.  Copy that to /etc/pki/ca-trust/source/anchors
  5. Now run the following command to update your CA list: update-ca-trust extract

Now you have all of the certs installed.  Next, we need to tell ActiveMerchant to use your own certs rather than its internal list.  Again, however, this is in the active_utils gem, not the activemerchant gem.  Within the gem, find the file lib/active_utils/common/connection.rb and look for a function called configure_ssl.  Copy that function to your clipboard.

Now create a file in your own project called config/initializers/active_merchant_pem_fix.rb (the name doesn't matter as long as it is in config/initializers).  In this file you need to have:

module ActiveMerchant
  class Connection
    # Paste the configure_ssl function you copied from lib/active_utils/common/connection.rb here.
  end
end

Now, there will be a line that says something like this:

        http.ca_file = File.dirname(__FILE__) + '/../../certs/cacert.pem'

This is the offending line!!!  Comment that sucker out!!!

For my version of ActiveMerchant, this is what my file looks like:

   def configure_ssl(http)
      return unless endpoint.scheme == "https"

      http.use_ssl = true

      if verify_peer
        http.verify_mode = OpenSSL::SSL::VERIFY_PEER
        # http.ca_file = File.dirname(__FILE__) + '/../../certs/cacert.pem'
      else
        http.verify_mode = OpenSSL::SSL::VERIFY_NONE
      end
    end
Now that you have the updated root CA on your server, and ActiveMerchant is using your server's list rather than its own, you are all set!
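As a quick sanity check (this is generic Ruby/OpenSSL behavior, nothing ActiveMerchant-specific), you can ask Ruby where it expects the system CA bundle to live - it should point at the list you just updated:

```ruby
require 'openssl'

# Compile-time defaults for the system CA bundle and hashed cert directory
puts OpenSSL::X509::DEFAULT_CERT_FILE
puts OpenSSL::X509::DEFAULT_CERT_DIR

# A store loaded with set_default_paths uses those locations, which is
# what VERIFY_PEER consults once the hard-coded ca_file is commented out.
store = OpenSSL::X509::Store.new
store.set_default_paths
```

The exact paths vary by distro, but on a CentOS-style system they should land under /etc/pki.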

June 11, 2014

General / Why You Shouldn't Put Validations on Your Model


This post is about model validators in Ruby on Rails, although it probably applies to validators in most similar MVC systems that allow declarative model validation.

One of the biggest disasters, in my opinion, has been the trend for developers to put more and more validation in their models.  Putting validation in the model always *sounds* like a good idea.  "We want our objects to look like X, therefore, we will stick our validations in the place where we define the object."

However, this leads to many, many problems.

There are two problems, and the most general way of stating them is this:

  1. Life never works the way you expect it to, but validating on the model assumes that it will.
  2. While it is cleaner to validate on the model for a given configuration, when that validation needs to change, you don't always have enough information to understand the purpose of the validation in the first place (i.e., if a new developer is making the change).

Example of both -

Let's say that we require all users to enter their first name, last name, birthdate, and email.  Okay, so we'll put validators on the model to make sure that this happens.  

Let's say that six months go by, and your company signs an agreement with XYZ corporation to load in new users.  However, XYZ corp didn't require their users to enter a birthdate.  You don't know that, and in the bit of data you looked at, the birthdates all seem to be there.  So, you do a load through the database, everything loads in cleanly, and you think that everything is good.

In fact, *everyone* can view their records.

However, the next week, people start getting errors when they try to update their profile.  Why?  Because the people who don't have a birthdate entered are getting validation errors when they try to update something else.

So here is problem #1 - if an existing record doesn't match the validation, nothing can be saved, and this causes headaches because you are getting errors in places where they were unexpected.  This doesn't just happen with dataloads.  It happens every time you add a validator as well.  So, every time you change policy, even if it seems benign, you have the potential of messing up your entire program, even if all of your tests succeed (because your tests will never have the bad data in them - that data was created under the old version of the code!).
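The failure mode can be simulated in plain Ruby, no Rails required (the class and error message here are made up for illustration): the model validates on every save, but the data load bypassed the save path entirely.

```ruby
class User
  attr_accessor :birthdate, :email

  def save
    # stand-in for a model validator like: validates_presence_of :birthdate
    raise "Validation failed: birthdate can't be blank" if birthdate.nil?
    true
  end
end

u = User.new                      # imagine this row came in via a direct SQL load
u.email = "someone@example.com"   # a week later, the user edits an unrelated field
begin
  u.save
rescue RuntimeError => e
  puts e.message                  # the error surfaces far from the data load
end
```

The key point: the place where the error fires has nothing to do with the place where the bad data entered.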

So, we go back and recode, and remove the validator from the model.  Great, but let's say this is a new programmer.  Now the programmer has to figure out where to put the validation in.  Presumably, we still want new users to enter in their birthdate, even if we allow non-birthdate users that we load from outside.  However, now, because we aren't validating on the model, we have to move it somewhere else.  But where?  Since the validator just floated out there declaratively, there is no direct link between the validator and *where* it was to be enforced.  Now the new programmer has to go through and recode and retest all of the situations to figure out the proper place(s) for the validator.  This is especially problematic if you have more than one path for entrance on your system.  It's not so bad if it is a system you wrote yourself, but when having to maintain someone else's code, this is a problem.

Therefore, the policy I follow is this - only use model validation when a failure of the validator would lead to a loss of data *integrity*, not just cleanliness.  The phone number field should be validated somewhere else.  You should validate a uniqueness constraint only if uniqueness is actually required for the successful running of the code.  Otherwise, validate in the controller.  Now, you can make this easier by putting the code to validate in the model, but just don't hook it in to the global validation sequence.  So, go ahead and define a method called "is_proper_record?" or something, but don't hook it up to the validation system - call it directly from your controller when you need it.
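Continuing the user example from above, here is a plain-Ruby sketch of that policy (the method and field names are just illustrative): the check lives on the model, but it is not hooked into any validation chain - the controller calls it explicitly, and only on the paths where it applies.

```ruby
class User
  attr_accessor :first_name, :last_name, :birthdate, :email

  # NOT wired into a validation framework; call it where you need it.
  def is_proper_record?
    [first_name, last_name, birthdate, email].none? { |f| f.nil? || f.to_s.strip.empty? }
  end
end

# In the data-entry controller action (and only there), something like:
#   render :new unless @user.is_proper_record?
```

The data-load path never calls is_proper_record?, so old or externally sourced records keep saving.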

NOTE - if I don't get around to blogging about it, there are similar problems with State Machine, and a lot of other declarative-enforcement procedures in the model.  Basically, they are elegant ways of coding, but *terrible* when you need to modify them, especially if the person modifying the code wasn't the one who built it.  Untangling the interaction between declarative and manual processes leads to both delays (as the new developer tries to figure out the code) and bugs (when the new developer doesn't figure out the code perfectly the first time).  When your processes are coded *as processes* and not declarations, it is more visible to future programmers, and therefore more maintainable.

June 11, 2014

Platforms / New Library for Reading Pickle Files from Ruby


This is announcing the release of "pickle-interpreter", a new ruby gem for reading Python pickle files in ruby and rails projects.  The project is available at

It is called "pickle-interpreter" because Pickle is actually a mini-stack-based programming language, so this is an interpreter for the Pickle language.
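To give a flavor of what "Pickle is a stack-based language" means, here is a toy interpreter for a few protocol-0 pickle opcodes (this is my own illustration of the idea, not the pickle-interpreter gem's API):

```ruby
require 'stringio'

# Toy interpreter: 'I' pushes an integer, '(' pushes a mark, 'l' builds a
# list from everything above the mark, 'a' appends, 'p' stores a memo,
# '.' stops and returns the top of the stack.
def run_pickle(data)
  stack, marks, memo = [], [], {}
  io = StringIO.new(data)
  while (op = io.read(1))
    case op
    when 'I' then stack << io.readline.chomp.to_i
    when '(' then marks << stack.length
    when 'l' then n = marks.pop; stack << stack.slice!(n..)
    when 'a' then v = stack.pop; stack.last << v
    when 'p' then memo[io.readline.chomp] = stack.last
    when '.' then return stack.pop
    end
  end
end

run_pickle("I42\n.")             # => 42
run_pickle("(lp0\nI1\naI2\na.")  # => [1, 2]
```

The real format has many more opcodes (strings, dictionaries, object reconstruction, newer protocols), but they all work the same way: a stream of instructions driving a stack machine.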

March 03, 2014

General / New Book Integrating Engineering, Philosophy, and Theology


I thought some of you, my faithful blog readers, might be interested to know about a project I just completed with some other people - a new book about integrating engineering, philosophy, and theology.  It is titled Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft.

[Image: Engineering and the Ultimate book cover]

February 12, 2014

Snippets / Creating a WSDL file for a RESTful Web Service


NOTE - this is about XML-based REST services.  There *might* be a way to write WSDL for JSON, but I don't know it yet.

Most of my work is done in RESTful environments.  However, the Microsoft world seems to favor SOAP.  REST is simple.  It is so simple, that you don't even really need special APIs for the different services.  You just need the URL and the documentation.  However, SOAP is complex, and requires complex tooling. SOAP services are described using the WSDL format.  Usually what happens is that a program reads the WSDL file and spits out stubs that you can code against directly.

In any case, the people that come from a SOAP environment don't realize that you can do cool stuff without tooling, and the idea that I could just give them a URL to hit and they could just hit it directly boggles their mind.  So, if I have a handy-dandy REST service, they still want to ask me for the WSDL file.  There's really no reason, but that's what they want.  So, thankfully, WSDL 2 allows for us to create WSDL descriptions of RESTful services.

So, before I describe how WSDL works, let me first say that, if you are used to the simplicity of REST coding, WSDL will blow your mind.  WSDL is built around the complexities of SOAP.  There are a lot of things where you will say, "why did they put in that extra layer?"  The answer is, "because the SOAP protocol was built by crack addicts."

WSDL is grouped into 4 parts - services, bindings, interfaces, and types.

An interface is a list of operations you might want to perform.  This is, essentially, the API itself.  Our API will be for retrieving form submissions to our website.  The interface is a list of operations, where each operation is an individual API call.  In our example, we will have a "list" and a "get" operation in our interface.  The interface relies on types in order to define the input and output of the operations.

A binding tells you how to make an API call for an operation.  This is significantly simpler in REST than in SOAP, as you really only need the URL path and the method (this is the relative URL - the absolute path will be defined in the service).  The binding will have an operation tag for each operation it binds.

A service defines where on the web you can go to in order to find the interfaces.  It defines a set of endpoints, each of which gives the top-level URL of the resources and says which interfaces they support and which bindings to use on them (telling both the interface and the binding seems redundant to me, but then again so does writing a WSDL file to begin with).

So, below, is a simple REST web service.  I've excluded the "types" section, which I will discuss later.

<?xml version="1.0" ?>
<!-- omitted: the wsdl2:description root element, namespace declarations, and the types section -->

<wsdl2:interface name="FormSubmissionsInterface">
  <wsdl2:operation name="listFormSubmissionsOperation" pattern="http://www.w3.org/ns/wsdl/in-out">
    <wsdl2:input element="formSubmissionSearchParameters" />
    <wsdl2:output element="response" />
  </wsdl2:operation>
  <wsdl2:operation name="getFormSubmissionsOperation" pattern="http://www.w3.org/ns/wsdl/in-out">
    <wsdl2:input element="formSubmissionIdentifier" />
    <wsdl2:output element="response" />
  </wsdl2:operation>
</wsdl2:interface>

<wsdl2:binding name="FormSubmissionBinding" interface="myexample:FormSubmissionsInterface"
    type="http://www.w3.org/ns/wsdl/http">
  <!-- operation elements omitted; discussed below -->
</wsdl2:binding>

<!-- omitted: the wsdl2:service definitions for the HTTP and HTTPS endpoints -->


So, starting at the bottom, we have 2 services defined - an HTTP service and an HTTPS service.  They both use the same binding and interface.  So, they are called exactly the same way, but have a different base URL.  Depending on your client, the service name often corresponds to the main class name used to access the service.

The services implement the myexample:FormSubmissionsInterface.  This interface defines two operations: listFormSubmissionsOperation and getFormSubmissionsOperation.  Each one defines its input and output parameters.  Now, the output parameter is just a regular XML element from your schema (defined in the "types" section).  The input parameter is special.  It, too, is defined in your types section, but, when bound to an HTTP endpoint, rather than being treated as an XML document, it will be treated as a combination of URL pieces and query string parameters.  Here is mine:

                        <xs:element name="formSubmissionSearchParameters">
                          <xs:complexType>
                            <xs:all>
                              <xs:element name="since" type="xs:dateTime" minOccurs="0" />
                              <xs:element name="since_id" type="xs:integer" minOccurs="0" />
                              <xs:element name="until" type="xs:dateTime" minOccurs="0" />
                              <xs:element name="until_id" type="xs:integer" minOccurs="0" />
                            </xs:all>
                          </xs:complexType>
                        </xs:element>

Each of the elements under xs:all actually becomes a query-string parameter.  WSDL treats it like an XML document because of its SOAP roots.  But, when we specify an HTTP binding, this gets transmogrified from an XML document into query-string parameters.  I have a separate type for the "get" operation, which defines the ID number of the form submission I want:

			<xs:element name="formSubmissionIdentifier">
			  <xs:complexType>
			    <xs:all>
			      <xs:element name="id" type="xs:integer" />
			    </xs:all>
			  </xs:complexType>
			</xs:element>

As we will see, in the bindings section, we can move a named parameter into the URL itself, rather than just being sent as a query string.
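The binding's URL handling can be sketched in plain Ruby (the host, paths, and parameter values here are made up for illustration): named parameters get substituted into the location template, and everything left over becomes the query string.

```ruby
require 'uri'

# Hypothetical inputs, mirroring the formSubmissionSearchParameters element
params = { since: "2014-01-01T00:00:00Z", since_id: 100 }

# With no {name} placeholders in the location, everything goes in the query string:
list_url = "http://www.example.com/form_submissions?" + URI.encode_www_form(params)

# With an {id} placeholder, the named parameter moves into the path itself:
template = "form_submissions/{id}"
get_path = template.gsub(/\{(\w+)\}/) { { id: 500 }[$1.to_sym].to_s }

puts list_url   # the list operation, with since/since_id as query parameters
puts get_path   # the get operation, with the id baked into the path
```

This is exactly the transformation the WSDL 2.0 HTTP binding describes declaratively.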

The output is simply an XML document, whose structure is defined in the types section.  I would need to teach you XSD documents to tell you how this works, but you can google it yourself.  The one thing I will say is that because my result document does not use namespaces at all, in the xs:schema declaration I set elementFormDefault to "unqualified".

The binding is where we tell WSDL that we are using REST rather than SOAP.  We tell it what interface we are binding, and the fact that we are using an HTTP binding.  Then, for each operation, we define the HTTP method and relative path for the binding.  We can substitute our parameters into the URL path by saying {name-of-xml-input-element} in the location.  Any parameters given which are not inserted into the URL path will be put in the query string.  So, for instance (the resource paths here are reconstructed for illustration):

<wsdl2:operation ref="myexample:getFormSubmissionsOperation" whttp:method="GET" whttp:location="form_submissions/{id}" />

says to take the "id" element and substitute it into the URL.  The other location,

<wsdl2:operation ref="myexample:listFormSubmissionsOperation" whttp:method="GET" whttp:location="form_submissions" />

just puts all of the parameters in the query string.

So that's it!  That's a WSDL file for a REST service!

I tested my service and WSDL using Axis2's wsdl2java command:

wsdl2java.sh -uri path/to/wsdl -wv 2

Then I created a simple test program to make sure it ran right.  The example also shows how to use HTTP Basic Authentication with the service:

import java.rmi.RemoteException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.math.*;

import org.apache.axis2.databinding.ADBBean;
import org.apache.axis2.Constants;
import org.apache.xmlbeans.XmlOptions;
import org.apache.xmlbeans.XmlValidationError;
import org.apache.axis2.client.*;

import com.example.www.FormSubmissionsServiceStub;
import com.example.www.FormSubmissionsServiceStub.*;

import org.apache.axis2.transport.http.*;

public class TestMe {
public static void main(String[] args) throws RemoteException {
/* Setup Service */
FormSubmissionsServiceStub stub = new FormSubmissionsServiceStub();

/* If you need authentication */
HttpTransportProperties.Authenticator basicAuth = new HttpTransportProperties.Authenticator();
basicAuth.setUsername("myusername"); // substitute your credentials
basicAuth.setPassword("mypassword");
final Options clientOptions = stub._getServiceClient().getOptions();
clientOptions.setProperty(HTTPConstants.AUTHENTICATE, basicAuth);

/* Setup Single Get */
BigInteger subm_id = BigInteger.valueOf(500);
FormSubmissionIdentifier myid = new FormSubmissionIdentifier();
myid.setId(subm_id); // setter name comes from the generated stub

/* Run Single Get */
Response r = stub.getFormSubmissionsOperation(myid);

/* Setup Multi Get */
FormSubmissionSearchParameters p = new FormSubmissionSearchParameters();

/* Run Multi Get */
Response r2 = stub.listFormSubmissionsOperation(p);
}
}

Note that "Response" is not a generic Java class, it is actually com.example.www.FormSubmissionsServiceStub.Response, because the name of the output element is "response".


October 30, 2013

Snippets / Getting my Inspiron 8000 to run Ubuntu 13.10


This is for my own benefit if I need to reinstall again.

What I remember doing:
  1. I installed using a text-mode install from the network installer
  2. I booted into the recovery-mode root prompt (don't forget to remount / read-write!)
  3. I modified /etc/default/grub to add "nolapic nomodeset" to the default kernel parameters
  4. I made a simple xorg.conf file to say that Card0 should use vesa
Then everything seemed to work.  The display shows up in a small box, but that is better than nothing.
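For reference, step 3 amounts to editing a line like this in /etc/default/grub (the other options on the line are whatever your install already had):

```shell
# /etc/default/grub -- add nolapic and nomodeset to the default kernel parameters
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nolapic nomodeset"
```

Then run update-grub as root to regenerate the boot configuration.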

July 31, 2013

Platforms / Uniqueness Methods for PostgreSQL


Just finished packaging together a set of uniqueness functions for PostgreSQL, mostly focused on Cantor pairing.  Here is the introduction from the documentation:


A collection of functions such as cantor pairing which help make unique identifiers for PostgreSQL.

What is a Cantor Pair? Let's say that you have a table which is uniquely identified by two numeric foreign keys. However, let's say that you need a single, unique key to refer to the row externally. Probably a better approach is to create a new primary key, but, if you can't, you can use a Cantor Pairing function to create a single number from the two.

select cantorPair(23, 50);
-- yields 2751
select cantorUnpair(2751);
-- yields {23,50};

Remember, in Postgres, you can even create indexes on these functions.
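If you want to see the arithmetic behind these functions, here is the standard Cantor pairing function and its inverse sketched in Ruby (the SQL versions in the package do the same computation; this sketch uses floating-point sqrt, which is fine for small values, while the DECIMAL-based SQL version stays exact):

```ruby
# pair(a, b) walks the diagonals of the (a, b) grid: all pairs with the
# same a+b sit on one diagonal, and b is the offset along it.
def cantor_pair(a, b)
  (a + b) * (a + b + 1) / 2 + b
end

def cantor_unpair(z)
  w = ((Math.sqrt(8.0 * z + 1) - 1) / 2).floor   # recover the diagonal a+b
  t = w * (w + 1) / 2                            # largest triangular number <= z
  b = z - t
  [w - b, b]
end

cantor_pair(23, 50)   # => 2751, matching the SQL example above
cantor_unpair(2751)   # => [23, 50]
```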

create table cantortest(a int, b int, c text, primary key(a, b));
create index on cantortest(cantorPair(a, b));
insert into cantortest values (1, 5, 'hello');
insert into cantortest values (3, 18, 'there');

select cantorPair(3, 18); -- yields 249

select * from cantortest where cantorPair(a, b) = 249; -- uses the index on large enough tables, or if enable_seqscan is set to off

select * from cantortest where cantorUnpair(249)[1] = 3; -- parses the number out into a foreign key that can be looked up on its own

You can use more than two values with an array and the cantorTuple function:

select cantorTuple(ARRAY[5, 16, 9, 25]);
-- yields 20643596022
select cantorUntuple(20643596022, 4); -- NOTE - the 4 tells how many components
-- yields {5,16,9,25}

Additional functions include:

cantorPairBI - the regular cantorPair uses DECIMALs, which can be slow. If you know the final value will fit in a BIGINT, this version should be a little bit faster.

cantorUnpairBI - same as above but for unpairing

uniquenessSpace(key, spaces, space_index) - this is useful for when you need to UNION tables which each have their own PKs, but you want all of the rows to also have their own PKs. It is a super-simple function (key * spaces + space_index), but it is helpful because people can better see what/why you are doing things. So, for instance, if we had an employees table and a customers table, but wanted a people view that had a unique PK, we can do:

     SELECT uniquenessSpace(employee_id, 2, 0), employee_name from employees
     UNION ALL
     SELECT uniquenessSpace(customer_id, 2, 1), customer_name from customers;

To retrieve the original ID, use uniquenessUnspace:

select * from employees where employee_id = uniquenessUnspace(person_id, 2, 0); -- use same arguments as used on uniquenessSpace
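The arithmetic here is simple enough to sketch in a few lines of Ruby (mirroring the SQL functions described above):

```ruby
# Interleave keys from `spaces` different tables into one number space.
def uniqueness_space(key, spaces, space_index)
  key * spaces + space_index
end

# Invert with the same arguments used to generate the combined key.
def uniqueness_unspace(combined, spaces, space_index)
  (combined - space_index) / spaces
end

uniqueness_space(7, 2, 0)      # => 14 (employee 7)
uniqueness_space(7, 2, 1)      # => 15 (customer 7 -- no collision)
uniqueness_unspace(15, 2, 1)   # => 7
```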

These are both horrid names, and I am open to renaming them.

I am currently working on another uniquenessSpace, which doesn't require the number of tables to be known, only the index. However, this function is problematic because it relies on exponentials. The current version doesn't work because I need a prime filter. However, the goal is to overcome the problem in the previous example that if the number of tables change (i.e. we add on an additional supplier table or something), then all of the generated keys have to change (because there are more tables, which affects the second parameter). In an ideal setting, you will be able to do:

     SELECT uniquenessSpace(employee_id, 0), employee_name from employees
     UNION ALL
     SELECT uniquenessSpace(customer_id, 1), customer_name from customers;

And then add on additional tables as necessary, without ever having to worry about changing keys. This can work by generating a key which is p0^employee_id and p1^customer_id, where p0 is the first prime (2) and p1 is the second prime (3). As you can see, this gets out of hand pretty quickly, but PostgreSQL is surprisingly accommodating of this. Anyway, this is probably a generally bad idea, but I thought it should be included for completeness. I should probably rename this function as well.
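The in-progress prime-based version isn't in the package yet, but the idea can be sketched in Ruby (this is my reconstruction of the scheme described above, not the package's code - note that ids must be >= 1, since every prime to the 0th power collides at 1):

```ruby
require 'prime'

# Table `index` gets the index-th prime; a row's combined key is prime**id.
# Powers of distinct primes never collide (unique factorization).
def prime_space(key, index)
  Prime.first(index + 1).last ** key
end

# Recover the original id by counting how many times the prime divides the value.
def prime_unspace(value, index)
  p = Prime.first(index + 1).last
  key = 0
  while value > 1 && value % p == 0
    value /= p
    key += 1
  end
  key
end

prime_space(10, 0)       # => 1024 (2**10, the "employee" table)
prime_unspace(1024, 0)   # => 10
```

The exponential growth is exactly the "gets out of hand pretty quickly" problem: even modest ids produce enormous keys.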

June 27, 2013

General / Knowing When to Start Over


A friend of mine shared the following quip on FB:

Quinn's design tip o' the day: You can't iterate your way out of local maxima. In other words, no amount of horse breeding will produce a Model T.

Sooner or later, you just have to throw it all away and start over.

May 29, 2013

Snippets / Adding a "Copy" context menu to a disabled UITextField in iOS


I have a project which has a lot of fields which, at any moment, may be enabled or disabled.  However, even when they are disabled, I want to show a context menu.  It turns out that no one has a solution, so I hacked one together for myself.  This code is adapted from a working project; I have not tested the version presented here, so let me know if you have problems.

The general problem you run into is this: if your UITextField is disabled, then it cannot receive touch events.  Therefore, it cannot fire the menu, even if you try to do so explicitly using Gesture recognizers.  The normal answer is to simply not allow any editing on the field.  However, while this prevents you from editing the field, it still *draws* the textfield as active.  We want it to *look* inactive as well, which you only get by disabling it.

So, what I did was to create a subclass of UITextField (JBTextField) which has a subview (a specialized subclass of UIView - JBTextFieldResponderView) that has a gesture recognizer attached.  We override hitTest on JBTextField to do this: if JBTextField is disabled, then we delegate hitTest to our JBTextFieldResponderView subclass.

As I said, this is from another project, which had a whole bunch of other stuff mixed in.  I have tried to separate out the code for you here, but I don't guarantee that it will work out-of-the-box.

@interface JBTextFieldResponderView : UIView
@property (strong, nonatomic) UIGestureRecognizer *contextGestureRecognizer;
@end

@interface JBTextField : UITextField
@property (strong, nonatomic) JBTextFieldResponderView *contextGestureRecognizerViewForDisabled;
@end

@implementation JBTextFieldResponderView
@synthesize contextGestureRecognizer = _contextGestureRecognizer;

-(id) initWithFrame:(CGRect)frame {
  self = [super initWithFrame:frame];

  // Add the gesture recognizer
  self.contextGestureRecognizer = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(longPressGestureDidFire:)];
  [self addGestureRecognizer:self.contextGestureRecognizer];

  return self;
}

// First-level of response, filters out some noise
-(IBAction) longPressGestureDidFire:(id)sender {
  UILongPressGestureRecognizer *recognizer = sender;
  if(recognizer.state == UIGestureRecognizerStateBegan) { // Only fire once
    [self initiateContextMenu:sender];
  }
}

// Second level of response - actually trigger the menu
-(IBAction) initiateContextMenu:(id)sender {
  [self becomeFirstResponder]; // So the menu will be active.  We can't set the text field to be first responder -- doesn't work if it is disabled
  UIMenuController *menu = [UIMenuController sharedMenuController];
  [menu setTargetRect:self.bounds inView:self];
  [menu setMenuVisible:YES animated:YES];
}

// The menu will automatically add a "Copy" command if it sees a "copy:" method.  
// See UIResponderStandardEditActions to see what other commands we can add through methods.
-(IBAction) copy:(id)sender {
  UIPasteboard *pb = [UIPasteboard generalPasteboard];
  UITextField *tf = (UITextField *)self.superview;
  [pb setString: tf.text];
}

// Normally a UIView doesn't want to become a first responder.  This forces the issue.
-(BOOL) canBecomeFirstResponder {
  return YES;
}

@end


@implementation JBTextField
@synthesize contextGestureRecognizerViewForDisabled = _contextGestureRecognizerViewForDisabled;

/* Add the recognizer view no matter which path this is initialized through */

-(id) initWithCoder:(NSCoder *)aDecoder {
  self = [super initWithCoder:aDecoder];
  [self addDisabledRecognizer];
  return self;
}
-(id) initWithFrame:(CGRect)frame {
  self = [super initWithFrame:frame];
  [self addDisabledRecognizer];
  return self;
}

// This creates the view and adds it at the end of the view chain so it doesn't interfere, but is still present
-(void) addDisabledRecognizer {
  self.contextGestureRecognizerViewForDisabled = [[JBTextFieldResponderView alloc] initWithFrame:self.bounds];
  self.contextGestureRecognizerViewForDisabled.userInteractionEnabled = NO;
  [self addSubview:self.contextGestureRecognizerViewForDisabled];
}

-(void) setEnabled:(BOOL) enabled {
  [super setEnabled:enabled];
  self.contextGestureRecognizerViewForDisabled.userInteractionEnabled = !enabled;
}

// This is where the magic happens
-(UIView *) hitTest:(CGPoint)point withEvent:(UIEvent *)event {
  if(self.enabled) {
    // If we are enabled, respond normally
    return [super hitTest:point withEvent:event];
  } else {
    // If we are disabled, let our specialized view determine how to respond
    return [self.contextGestureRecognizerViewForDisabled hitTest:point withEvent:event];
  }
}

@end


Now, in Interface Builder, we can enable copy on disabled fields just by setting the class to JBTextField, or, if built programmatically, by replacing [UITextField alloc] with [JBTextField alloc].

I hope this helps!


May 09, 2013

Platforms / My Beef with Model Validators in Rails


When I first started using Rails, it was a dream come true.  I estimate that I reduced the amount of code I was writing by 60-75%.  This is probably still true.  However, as I've gotten to know the platform better, there are some philosophical issues that I have with Rails.  Sometimes it is more the Rails community and its conventions rather than Rails itself, but nonetheless, it continually manifests itself.

My current beef - validators!  Having written lots and lots of code that interacts with lots and lots of systems, I have to say that you should never, ever, ever put in model validators.  Ever.  Validation is a process issue, not a data one.  In theory it should be a data issue, but it never ever works out that way.

Here's what happens.  Let's say I have a telephone number validator for a model.  It makes sure the numbers are in the format (918) 245-2234.  If not, it gives a nice error message.  However, now let's connect that same model to an external system.  Let's now say that the author of the external system was not as picky about phone number formats.  What happens now?  Well, every record that gets migrated from that system that isn't formatted according to your desires will now get kicked out!  And it might not even show up during testing!  In addition, your pretty error messages mean practically nothing on system->system communication.  So, what you have is the system randomly breaking down.

This causes even more problems in long-term maintenance.  I've had many bugs pop up because someone added a validator that broke several processes.  It worked in all of their tests, but they weren't thinking about all of the ways the system was used.  For instance, we decided to lock down the characters used in our usernames a little more.  The programmer decided to implement it as a validator.  Well, the problem is that several users already had usernames that didn't match the validator.  So what happened?  Those records just stopped saving.  Not just when someone was at the terminal, where they could see the error message - every job started erroring out because of users who had faulty names.

Because of these issues, I *always* do validation in my controller, and *never* in the model.  It is true that there are probably workarounds for all of these issues, but the point is that they are *not* that exceptional!  They are everywhere!  What would be nice is to have validators which could be run optionally, in specific situations.  So, a controller could declare that it was using :data_entry validations, and then the model would only run those validators.  So, it would look something like this:

In the model:

validates_presence_of :email, :for => [:data_entry]

In the controller:

uses_validators :data_entry

One of the issues I have found with dynamic languages such as ruby is that errors arise when "magic" is sprinkled in unexpected places.  Ruby magic is awesome, but when I do something simple and it won't save, it is sometimes hard to even locate where the error is.  Explicitly declaring which validator set you are using allows programmers to more easily see what they are doing, and only apply magic code where it goes.
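The proposed API above could be prototyped in plain Ruby along these lines (this is a hypothetical sketch of the idea, not an existing package, and a real Rails integration would be more involved):

```ruby
# Validators are registered per scope; nothing runs globally.
module ScopedValidations
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def validates_presence_of(attr, opts = {})
      @scoped_validators ||= Hash.new { |h, k| h[k] = [] }
      Array(opts[:for] || [:default]).each { |scope| @scoped_validators[scope] << attr }
    end

    def validators_for(scope)
      @scoped_validators ? @scoped_validators[scope] : []
    end
  end

  # A controller that declared `uses_validators :data_entry` would call this
  # before saving, instead of a global valid? check.
  def valid_for?(scope)
    self.class.validators_for(scope).all? do |attr|
      value = send(attr)
      !value.nil? && !value.to_s.empty?
    end
  end
end

class Person
  include ScopedValidations
  attr_accessor :email
  validates_presence_of :email, :for => [:data_entry]
end

person = Person.new
person.valid_for?(:data_entry)   # => false
person.email = "user@example.com"
person.valid_for?(:data_entry)   # => true
```

Records saved through other paths (data loads, background jobs) simply never invoke the :data_entry scope, so they can't be broken by it.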

Someday I'll write a package like this --- when I get extra time :)