Weighted Pairwise Comparison With Table-like Heterogeneous Data - Is Bradley-Terry Model The Right Choice?

by Jarad | Last Updated July 12, 2019 05:19 AM

I came across the library Choix. I want to run a pairwise comparison on data that looks like the table below and output a ranking of the rows from best to worst.

Let's call this X:

      Impressions  Clicks       CTR   CPC    Cost  Conversions    CPA  ConvRate
1425         3564    57.0  0.015993  0.68   38.76          9.0   4.31  0.157895
3544         7683   519.0  0.067552  1.90  986.10         10.0  98.61  0.019268
4721           32     1.0  0.031250  3.54    3.54          0.0   0.00  0.000000

The author of Choix posted this answer showing how to compute the maximum likelihood estimate iteratively, but his sample data is a square (item-by-item) matrix. I'm not sure how to do it with my data, which is not square.
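
For context, here is roughly what I understand choix expects as input: a list of (winner, loser) index pairs rather than a table like mine. The pairs below are made up just to show the format; deriving them from my columns is exactly the part I don't know how to do.

import choix  # https://github.com/lucasmaystre/choix

n_items = 3                      # my 3 rows: 1425, 3544, 4721
data = [(1, 0), (1, 2), (0, 2)]  # e.g. (1, 0) means row 3544 "beat" row 1425

# iterative Luce spectral ranking; returns one strength score per item
params = choix.ilsr_pairwise(n_items, data)
print(params.argsort()[::-1])    # row indices ordered from best to worst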

I've also spent the last week attempting to apply the concepts from his formulas (found here):

$p(i \succ j \succ \ldots \succ k) = \frac{e^{\theta_i}}{e^{\theta_i} + e^{\theta_j} + \cdots + e^{\theta_k}} \cdot \frac{e^{\theta_j}}{e^{\theta_j} + \cdots + e^{\theta_k}} \cdots.$
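
Since I read Python better than math notation, here is my direct (unvectorized) translation of that formula; theta here is a made-up array of item scores:

import numpy as np

def ranking_prob(theta, order):
    # p(order[0] > order[1] > ... > order[-1]) per the formula above:
    # at each position, the top item's exp(theta) divided by the sum of
    # exp(theta) over the items not yet ranked
    p = 1.0
    remaining = list(order)
    for i in order:
        p *= np.exp(theta[i]) / np.sum(np.exp([theta[j] for j in remaining]))
        remaining.remove(i)
    return p

theta = np.array([0.5, 2.0, -1.0])     # hypothetical scores for my 3 rows
print(ranking_prob(theta, [1, 0, 2]))  # p(3544 beats 1425 beats 4721)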

If I understand this formula correctly, you apply a softmax to each column and then multiply the columns together. I subtract each column's max first for numerical stability.

>>> softmax = np.exp(X - X.max(axis=0)) / np.sum(np.exp(X - X.max(axis=0)), axis=0)
      Impressions         Clicks       CTR       CPC  Cost  Conversions           CPA  ConvRate
1425          0.0  2.269600e-201  0.325915  0.045769   0.0     0.268932  1.111809e-41  0.367041
3544          1.0   1.000000e+00  0.343160  0.155029   1.0     0.731034  1.000000e+00  0.319528
4721          0.0  1.085072e-225  0.330926  0.799202   0.0     0.000033  1.493555e-43  0.313431

>>> product = softmax.prod(axis=1)
>>> product
1425    0.000000
3544    0.012427
4721    0.000000
dtype: float64
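
The zeros above look like floating-point underflow from multiplying many tiny probabilities. Here is a sketch of the same score computed in log space; summing log-softmax values per row gives the same ordering the product would, without underflowing to 0 (X is the same DataFrame as above):

import numpy as np

# log-softmax per column: (x - max) - log(sum(exp(x - max)))
Z = X - X.max(axis=0)
log_soft = Z - np.log(np.exp(Z).sum(axis=0))

# sum of logs replaces the product of probabilities; the ranking is
# identical, but every score stays finite
scores = log_soft.sum(axis=1)
print(scores.sort_values(ascending=False))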

I've tried various scaling tactics (softmax, min/max scaling, standardization, sigmoid, rank with 0s and 1s).

Questions

  1. Is the Bradley–Terry model (or some variant of it) the right model choice for my data? I believe this model is based on wins and losses, which my raw data does not contain.
  2. Columns CPA and CPC are considered good when their values are lower, not higher like the other columns. What's the best way to handle this? I've tried negating those columns (see the sketch after this list), but I wonder if there's an inverse softmax or something.
  3. I would like to assign weights to the columns, but I think flat weights (e.g. 1.05, 0.55, 0.01) have no effect because every value in a column gets scaled by the same amount. What I really want is to weight good performance more and bad performance less within each column, but I'm not sure how to approach it mathematically (one idea is sketched after this list).
  4. My data is heterogeneous. What is the best scaling method to preserve the differences (distances) between rows so that the pairwise-comparison math works well?
  5. (Bonus) I've tried classification algorithms, but they performed terribly because the results are all relative to the group, not to the entire dataset. I have no idea if a "pairwise classifier" exists -- a classifier that trains and scores samples only relative to the other samples in a small subset. Does this exist?
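
To make questions 2 and 3 concrete, here is a sketch of what I mean. The negation of CPA/CPC is what I actually tried; the exponent-style weighting is just one idea, with made-up weight values, and I don't know if it is sound:

import numpy as np
import pandas as pd

# Q2: flip the lower-is-better columns so "bigger = better" holds
# everywhere before the softmax
X_adj = X.copy()
X_adj[["CPC", "CPA"]] *= -1

soft = np.exp(X_adj - X_adj.max(axis=0))
soft = soft / soft.sum(axis=0)

# Q3 (an assumption, not anything from the choix docs): raise each
# column's probabilities to a power before taking the row product;
# a weight > 1 makes that column's differences count more, < 1 less
weights = pd.Series({"Impressions": 0.5, "Clicks": 1.0, "CTR": 1.5,
                     "CPC": 1.0, "Cost": 0.5, "Conversions": 2.0,
                     "CPA": 1.5, "ConvRate": 2.0})
scores = (soft ** weights).prod(axis=1)
print(scores.sort_values(ascending=False))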

My tool of choice is Python, so I use NumPy, pandas, and scikit-learn. I really struggle reading math notation, but I can easily follow it in Python. If you know Python and know how to answer this, code would go a long way toward helping me bridge the gaps.
