Tuesday 22 December 2015

Post mortem

This is again a guest post, mainly written by Roberto, which I have only slightly edited (where I have changed anything significantly, I make it clear by adding text in italics and in square brackets, like [this]). By the way, the pic on the left shows my favourite pathologist performing a post-mortem. 

A day after the bull-fight is over and done with (at least until the next election, which may come sooner than one would expect), we have to do some analysis of what went right, and what failed, as regards our model. First let’s look at the results:


Right off the bat, we can say a few things: firstly, the model captures the correct order of the parties, and hence the results of this election, with impressive certainty. A study of our simulations suggests that if this election were run 8000 times, we would get the correct ranking of parties according to seats 90% of the time, with the remaining 10% having Ciudadanos ahead of Podemos, and the other parties still in their correct places. The model predicts the correct ranking of parties according to votes 100% of the time.
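
To make the "90% of the time" computation concrete, here is a minimal R sketch (the simulated seat counts are made up; in practice they would be the model's posterior predictive draws):

```r
set.seed(22)
parties <- c("PP", "PSOE", "Podemos", "C", "IU", "Other")

# stand-in for the model's 8000 posterior predictive draws of seats
sims <- matrix(rpois(8000 * 6, rep(c(120, 90, 65, 45, 3, 27), each = 8000)),
               nrow = 8000, dimnames = list(NULL, parties))

# the ordering of the parties implied by each simulated election
rankings <- apply(sims, 1, function(s)
  paste(names(sort(s, decreasing = TRUE)), collapse = " > "))

# estimated probability of each ranking
head(sort(table(rankings) / length(rankings), decreasing = TRUE))
```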

This is an interesting point, because towards the end of the campaign it looked as if Ciudadanos could overtake Podemos, but it came up way short on election day. It is also interesting to see that the Popular Party performed better than expected; one could think that, given the ideological similarities, when it came to election day some Ciudadanos aficionados opted to vote for the PP, anticipating that it was better positioned to govern. 

This explanation only holds in part, since we would have to understand why the same reasoning didn't apply to Podemos voters. However, a key could lie in the under-performance of the "Other" parties, which, due to their regional anti-central-government nature and their mostly left-leaning ideology, may have been easy prey for the Podemos tide. 

Whatever the true mechanism, this highlights the main problem with our model, which is that we didn't model the substitution effect among parties. For future reference, I believe these results point to two important variables which can forecast whether a party ends up "swallowing" votes from another: similarity of ideology, and the probability of achieving significantly (in political terms) more seats than the other party. 

When we look at the number of seats won, the model has a Root Mean Squared Error (RMSE) of just below 10 seats, meaning that the true number of seats gained by a party lies, on average, about 10 seats away from our forecast. 10 seats represent just shy of 3% of the total, so, put in context, this is not a huge margin of error, and it is clearly low enough to allow us to make relevant and useful inferences about the results, 5 days before the election. However, we could probably have improved on this, perhaps by modelling regional races separately, which would have enabled us to identify the "finalists" of each race and perhaps reduce the overall variability. 

As regards vote shares, our RMSE is around 2%, suggesting our vote share estimate for each party is, on average, around 2% away from its actual result. This is better than the seats estimate, perhaps due to the larger number of polls at our disposal, as well as the lower variability due to the absence of external sources of variability such as the electoral law. 
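
For reference, these error summaries are trivial to compute; here is a minimal R sketch, with illustrative numbers rather than our actual forecasts and results:

```r
# illustrative numbers only (not our actual forecasts/results)
forecast_seats <- c(PP = 114, PSOE = 95, Podemos = 62, C = 55, IU = 3, Other = 21)
actual_seats   <- c(PP = 123, PSOE = 90, Podemos = 69, C = 40, IU = 2, Other = 26)

rmse <- function(pred, obs) sqrt(mean((pred - obs)^2))
rmse(forecast_seats, actual_seats)        # RMSE in seats
rmse(forecast_seats, actual_seats) / 350  # as a fraction of the 350 seats
```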


When we plot our prediction intervals for seats against the actual results, stretching the intervals slightly to include 3 standard deviations (hence including even rare outcomes under our model), it becomes evident that this was indeed somewhat of a special election. Almost all of the results are either at the border of our prediction, or have jumped just past it, with the only exception being Ciudadanos. This suggests that, beyond the Ciudadanos problem, this election was an extreme case under our model, meaning that either our model could have been better, or we were just very unlucky. I tend to believe the former.

Our vote share results also hold some interesting clues. Although our point estimates here do better on average than for the seats, we pay the price of some over-confidence in our estimates. Our prediction intervals just aren't large enough. This could be for several reasons, including perhaps too high a precision parameter on our long-run estimates. Moreover, the polls may have been consistently "wrong", leading our dynamic Bayesian framework to converge around values which, sadly, were untrue. We should look into mitigating this effect through a more thorough weighting of the polls.
[I think this is a very interesting point, and similar to what we've seen with the UK general election a few months back, where the polls were consistently predicting a hung parliament and suggesting a positive outcome for the Labour Party, which (perhaps? I don't even know what to think about it anymore...) sadly never materialised. Maybe the question here is more subtle, and we should try and investigate more formal ways of telling the model to take the polls with a healthy pinch of salt...]

Other forecasters attempted the trial, with accuracy similar to our own effort, but using different methodologies. Kiko Llaneras, over at elespanol.com, and Virgilio Gomez Rubio, at the University of Castilla-La Mancha, have produced interesting forecasts for the 2015 election. They have the greater merit, compared to our own effort, of having avoided aggregating the "Other" parties into a single entity, as well as having produced forecasts for each single provincial race.
[In our model, we did have a province element, in that the long-term component of the structural forecast depended on variables defined at that geographical level. But we didn't formally model each individual province as a single race...]

I put our results and errors together with theirs in the following tables for comparison. For consistency, and to allow for a proper comparison, I stick to our labels (including the "Other" aggregate). It should be noted that the other guys were more rigorous, testing themselves also on whether each seat in the "Other" category went to the party they forecast within that category. Hence their reviews of the effort may be harsher. That said, since none of the "Other" parties had a chance of winning the election, this rating strategy is fair enough for our purposes. 


As is clear from this table, Virgilio had the smallest error, whilst all forecasts have similar overall errors. Where Virgilio does better than both us and Kiko is in the PSOE forecast, which he hits almost on the dot, whilst we underestimate it. Furthermore, he's more accurate on the "Others", as is Kiko, suggesting that producing provincial forecasts could help reduce error on that front. Finally, whilst our model falls short of forecasting the true swing in favour of the PP, it also has the smallest error for the two new parties, Ciudadanos and Podemos.
[I think this is interesting too, and probably due to the formal inclusion of some information in the priors for these two parties, for which no or very limited historical data are available.]

Looking at the actual results, we can only speculate as to the future of Spain. None of the parties even came close to touching the magic number of 176 seats needed for an outright majority. However, some interesting scenarios may unfold: the centre-right (C+PP) only manages to put together 163 seats; the left, on the other hand, could end up being able to form a governing coalition. PSOE, IU and Podemos can pool their seats together to get to 161 and, if they manage to convince some of the "Other" left-leaning parties, they could find the 15 extra seats they need in order to govern. 

However, this would certainly be an extremely fragile scenario, which would lead to serious contradictions within the coalition: how could Podemos forge a coalition with the PSOE given extremely serious disputes such as the Catalonian independence referendum? Rajoy's hope is that he'll be able to convince the PSOE to form a "Grand Coalition" for the benefit of the nation; however, this scenario, whilst being the one preferred by markets worldwide, is unlikely, as the PSOE "smells blood" and knows it can get rid of Rajoy if it holds out long enough.

In conclusion, our model provided a very good direction for the election and predicted its main and most important outcome: a hung parliament and the consequent uncertainty. However, through more thoughtful modelling of the polls, an effort to disaggregate "Others" into its constituent parties, and province-level forecasts, we could go a long way towards reducing our error. 

Saturday 19 December 2015

Political Forecasting Machine - The Spanish Edition

This is a (rather long, but I think equally interesting!) guest post by Roberto (he's introducing himself below). We had already done some work on similar models a while back, and he got so into this that now he wanted to actually take on the new version of the Spanish Inquisition (aka the general election). I'll put my own comments on his original text below between square brackets (like [this]).


My name is Roberto Cerina; I am a Master's student under Gianluca's supervision here at UCL, and this is a guest post on work we've been doing together.

A year after "shaking up" the status quo of pollsters and political pundits with our fearless forecasting of the US 2014 Senate election, the Greatest Political Forecasting Machine the world has ever seen is back with a vengeance. This time, we take on the Spanish Armada. The juicy results come after the "Model" section, so feel free to jump right there if you can't contain your enthusiasm for this forecast. Apologies for any and all errors in this work, for which I take full responsibility.
[Actually, I guess I should be taking more responsibility in those cases. But given he's volunteered, I'll let Roberto do this! ;-)]

Pablo Iglesias, leader of the anti-austerity party "Podemos"

Intro:
The Spanish election which will take place this coming Sunday (December 20th) is a whole different kind of challenge compared to our experience with the US. To start with, we are talking about a multi-party system, which has seen over 100 different political entities inhabit the "Congreso de los Diputados" (the lower house of the Spanish Parliament, which is the focus of our forecast); furthermore, the Kingdom has only been a democracy since 1977 and only morphed into its classical form of PP (Partido Popular, centre-right) vs PSOE (Partido Socialista Obrero Español, centre-left) as late as 1989, giving us very few past elections on which to base our results. To complicate matters further, the 2015 election is hardly exchangeable with the previous ones, involving two new parties, Ciudadanos (C) and Podemos (P), which are each polling at around 20%. Finally, strong regional identities lead to territorial political parties which do not fall into any specific left or right category and which, whilst performing well within their specific constituencies, are largely irrelevant on the national stage.

In order to tackle this beast, we use data from the Global Election Database, giving us access to constituency-level past election results; economic data from Quandl and the Spanish Instituto Nacional de Estadística; historical government approval ratings from the Centro de Investigaciones Sociológicas; and seat and vote share polls from the (very, very useful) tailor-made Wikipedia page.


The model:
We capture the long-term dynamics of the Spanish election through a Discrete Choice Model based on a Multinomial Logit Regression, estimated in a Bayesian fashion. We recognise this does not solve the "Independence of Irrelevant Alternatives" problem (a property automatically encoded in the multinomial-logit model, which implies that the entry of a new party in the race leaves the relationships between the other parties unchanged). We look forward to modifying this historical model, perhaps using Multinomial Probit or Nested Logit, in the next iteration of our forecast.

The parties which we examine throughout are those that compete nationally, whilst the "Other" label captures regional entities and very low-performing national parties. The parties examined are: Partido Popular (PP); Partido Socialista Obrero Español (PSOE); Izquierda Unida (IU); and Others. This long-run model does not include estimates for the new parties, C and P. The model is at the province level, and it is simple to aggregate the results to get a national estimate of the vote shares. Having a province-level model allows us to keep flexibility with respect to using provincial polls, should they become available. The variables which we have used for this particular example are: province-level GDP; national approval rating of the government; and national incumbency of the party. The model is essentially a version of the famous "Time for Change" model developed by Alan Abramowitz, which is a good starting point for election dynamics, although it fails to be a good predictor for non-governing parties in multi-party races. We hope to cure this ill by introducing a party-province random effect to account for a significant "party strongholds" effect.  

The Discrete Choice Model is based on the idea that rational voters gain utility from voting for one party or another. We model their utility as the sum of an observed component (based on the "Time for Change" economic vote model) and an unobserved component following a Gumbel distribution, hence encoding a Multinomial Logit model.
$$U_{ikt} = H_{ikt} + e_{ikt}, \mbox{ with } e_{ikt} \sim\mbox{Gumbel}(\mu,\beta). $$
The probability that an individual voter (or an aggregate of voters, in our case) will vote for party $i$ in province $k$ at time $t$ is dependent on whether party $i$ guarantees the individual more utility than the other parties. 
$$ P_{ikt} = \mbox{Pr}(U_{ikt}>U_{jkt}) = \mbox{Pr}(H_{ikt} + e_{ikt} > H_{jkt}+e_{jkt}) = \mbox{Pr}(e_{jkt}<e_{ikt}+H_{ikt}-H_{jkt}) \;\; \forall j\neq i.$$
After a short and painless derivation, it is possible to see that the probability of interest is the multinomial logit (softmax) of the observed utilities:
$$P_{ikt} = \frac{\exp(H_{ikt})}{\sum_j \exp(H_{jkt})}.$$
[I like the idea of a "painless" algebraic derivation $-$ I certainly didn't know any such thing, when I was a student...]

The historical forecast is a hierarchical function of national ($x$) and provincial ($z$) variables, and all the relevant coefficients are assigned vague priors:
$$H_{ikt}= \alpha_{ik} + \sum_a \beta_{aik}\, x_{akt} + \sum_b \zeta_{bik}\, z_{bkt}$$
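
To make this concrete, here is a minimal JAGS sketch of the long-run model, following our reading of the equations above (a single national covariate $x$ and a single provincial covariate $z$, party 1 as the reference category for identifiability, and vague priors throughout $-$ not the exact production code):

```r
library(R2jags)

long_run_model <- "
model {
  for (k in 1:K) {                    # provinces
    for (t in 1:T) {                  # past elections
      y[k, t, 1:n] ~ dmulti(p[k, t, 1:n], N[k, t])
      H[k, t, 1] <- 0                 # reference party, for identifiability
      for (i in 2:n) {
        H[k, t, i] <- alpha[i, k] + beta[i] * x[t] + zeta[i] * z[k, t]
      }
      for (i in 1:n) {
        eH[k, t, i] <- exp(H[k, t, i])
        p[k, t, i] <- eH[k, t, i] / sum(eH[k, t, 1:n])
      }
    }
  }
  for (i in 2:n) {                    # vague priors on all coefficients
    beta[i] ~ dnorm(0, 0.01)
    zeta[i] ~ dnorm(0, 0.01)
    for (k in 1:K) { alpha[i, k] ~ dnorm(0, 0.01) }  # party-province effects
  }
}
"
writeLines(long_run_model, "long_run_model.txt")
# fit <- jags(data = list(y = y, N = N, x = x, z = z, K = K, T = T, n = n),
#             parameters.to.save = "p", model.file = "long_run_model.txt")
```
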
After deriving the long-run vote share probabilities, we need to produce a forecast for 2015 which includes the new parties. We want a national forecast, so we aggregate the province forecasts. Then we assign reasonably vague priors to the expected vote shares of C and P, and re-weight our previous estimates to account for their presence. Here we assume that C and P steal votes equally from all parties, something that is probably overly simplistic.
$$\mbox{P}_{C,t}, \mbox{P}_{P,t} \stackrel{iid}\sim \mbox{Uniform}(0.1,0.3) $$
We then re-weight the national long-run estimates, after assigning C and P uniform priors between 10% and 30%; these bounds are determined by the understanding that a) a party winning 30% of the vote in this election would essentially win it, and neither party looks poised to come first; and b) both parties polled well above 10% as early as February, and hence there was no chance of either falling below this bracket.
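
In code, the re-weighting step is essentially a one-liner; here is a minimal R sketch, with illustrative long-run values (the "equal stealing" assumption amounts to a proportional shrinkage of the historical parties' shares):

```r
set.seed(19)
long_run <- c(PP = 0.42, PSOE = 0.36, IU = 0.07, Other = 0.15)  # illustrative

p_C <- runif(1, 0.1, 0.3)   # draws from the Uniform(0.1, 0.3) priors
p_P <- runif(1, 0.1, 0.3)

forecast <- c(long_run * (1 - p_C - p_P), C = p_C, Podemos = p_P)
sum(forecast)  # sums to 1 by construction
```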

We use a Dynamic Bayesian model to update the long-run estimates (for each party) obtained above with the observed opinion polls. The multitude of polls available allows us to follow the campaign for 51 weeks, that is every week since the beginning of 2015. We connect weeks together through a reverse random walk pinned on election week at the long-run estimates, which enables us to make inference over weeks where no polls were released. Every week, we pool all opinion polls released during the course of that week and feed them to the model, which then updates its estimates of the predicted vote shares of every party, for every week in the campaign, in a Bayesian fashion. 

  • $\mbox{Y}_{il} \sim \mbox{Multinomial}(\mbox{N}_{l},\theta_{1l},..., \theta_{nl})$
  • $\theta_{il}=\mbox{logit}^{-1} (v_{il} + m_{i}),$
  • $v_{iL} \sim \mbox{Norm} \left( \mbox{logit}(\mbox{P}_{iT}), \frac{1}{\tau_{hist}}\right)$
  • $v_{il} \mid v_{i,l+1} \sim \mbox{Norm}(v_{i,l+1},\sigma^2_v)$
  • $m_{i} \sim \mbox{Norm} \left( 0, \frac{1}{\tau_{mi}}\right)$
In the equations above, we have $l=1,\ldots,L$ campaign weeks; $i=1,\ldots,n$ parties; $Y_{il}$ is the number of voters expressing a preference for party $i$ at week $l$ of the campaign; $N_l$ is the number of voters polled over week $l$; $P_{iT}$ is the predicted vote share for the election year at hand $T$, derived from the long-run model; $v_{il}$ is a party-week effect; and $m_i$ is a party effect capturing the remaining unobserved party-specific variability during the campaign. The precision parameter $\tau_{hist}$ represents the confidence we have in our long-run model estimates as regards the election at hand, and it is to be calibrated through a sensitivity analysis.
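
Putting the pieces together, the dynamic model can be sketched in JAGS as follows (again our reading of the bullet points above; we use a softmax rather than a single inverse logit, so that the $n$ shares lie on the simplex, and $\tau_{hist}$ and $\mbox{logit}(P_{iT})$ are passed in as data):

```r
dynamic_model <- "
model {
  for (l in 1:L) {
    y[l, 1:n] ~ dmulti(theta[l, 1:n], N[l])          # pooled polls, week l
    for (i in 1:n) {
      phi[l, i] <- exp(v[l, i] + m[i])
      theta[l, i] <- phi[l, i] / sum(phi[l, 1:n])
    }
  }
  for (i in 1:n) {
    v[L, i] ~ dnorm(logit.P[i], tau.hist)            # pinned at election week
    for (l in 1:(L - 1)) {
      v[L - l, i] ~ dnorm(v[L - l + 1, i], tau.v)    # reverse random walk
    }
    m[i] ~ dnorm(0, tau.m)                           # party effect
  }
  tau.v ~ dgamma(0.01, 0.01)
  tau.m ~ dgamma(0.01, 0.01)
}
"
```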

We get an initial seat projection by exploiting the historical correlation between votes and seats. We produce two different regressions, one for "Governing Parties" and the other for "Non-Governing Parties", and use the former to estimate the seat shares for PP and PSOE, and the latter for IU, Other, C and P. The estimates are then pooled together and re-weighted in order to make sure they add up to the full 350 seats in the Chamber of Deputies.
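
A sketch of this votes-to-seats step in R, with placeholder historical data (the numbers, the big-party bonus and the small-party penalty are all made up; only the structure mirrors what we actually do):

```r
set.seed(20)
# fake historical (vote share, seats) pairs for the two groups
hist_gov    <- data.frame(votes = runif(20, 0.25, 0.45))
hist_gov$seats    <- round(350 * hist_gov$votes * 1.2)     # large-party bonus
hist_nongov <- data.frame(votes = runif(20, 0.02, 0.20))
hist_nongov$seats <- round(350 * hist_nongov$votes * 0.7)  # small-party penalty

fit_gov <- lm(seats ~ votes, data = hist_gov)        # used for PP and PSOE
fit_non <- lm(seats ~ votes, data = hist_nongov)     # used for IU, Other, C, P

proj <- c(predict(fit_gov, data.frame(votes = c(PP = 0.28, PSOE = 0.22))),
          predict(fit_non, data.frame(votes = c(Podemos = 0.20, C = 0.17,
                                                IU = 0.04, Other = 0.09))))
proj <- pmax(proj, 0)
round(proj * 350 / sum(proj))   # re-weight to the 350-seat chamber
```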

The final seat projections come from another Dynamic Bayesian model, which makes use of the available seat projection polls (which are usually not as freely available) since the beginning of 2015. Again, we connect weeks together through a reverse random walk, pinned on election week at the seat projections derived above (equations omitted, as they are almost identical to those of the dynamic Bayesian model above). 



Results:
We first produce our long-run estimates, and derive a measure to assess their reliability over time. Our long-run estimates for the national vote shares are as follows (to two decimal places):
How reliable are these predictions? A Root Mean Squared Error of 0.0529 (4dp) over past elections since 1989 suggests that the model misses the correct vote shares of the 5 parties by about 5.3% on average. This makes for a fairly reliable model, which captures most of the historical dynamics behind the election. The point estimates of the model provide a mixed bag of results with respect to pointing to the correct winner of the election, although this is more a reflection of the competitiveness of Spanish politics (shown by overlapping prediction intervals for the governing parties) than a weakness of the model itself.
We then proceed to update these estimates with the available polls. The final vote share estimates are also displayed below. According to these estimates, the PP wins the largest percentage of votes, followed by the PSOE, Podemos and C. The Other parties and IU win predictable percentages. The model provides strong confidence in these estimates, with standard deviations lower than half a percentage point. These results are more or less in line with the long-run model, except that the PSOE is underestimated according to the long-run dynamics. However, both history and polls conclude that Mariano Rajoy's Popular Party is set to gain the largest vote share.
The Dynamic Bayesian model allows us to make inference about the campaign behaviour of voters, which is displayed in the following plots. For this post we only fit the model for election week, due to computational constraints, and the last polls embedded in the model are from December 16th (although the official polling deadline was December 14th, some "illegal polls" from credible papers have been published since). We should point out that the validity of inferences made from this model, especially as they pertain to voter behaviour during the campaign, is proportional to the validity of the polls as a tool for monitoring the behaviour of voters. Some pollsters are better than others, and in future iterations of this model we will investigate polling firms further and use a weighted average of polls rather than a simple average.
The dotted line in the plots points to the historical estimate of the vote share. We can see how the PSOE over-performs its historical estimate throughout the campaign, whilst the PP dances around it, outperforming it from week 28 to week 47 of the campaign and eventually converging to its historical estimate. The vote shares for these governing parties are rather stable throughout the campaign. A very different scenario unfolds when we look at the "young guns", C and P.
The two new parties start from opposite ends of the spectrum: C is a former regional, Catalonia-based anti-separatist party, which was largely unknown to the wider public at the beginning of this campaign. Podemos, on the other hand, came off the back of a successful European Election, which helped put it on the map, as well as strong international recognition as Spain's anti-austerity answer to the perceived dictates of the European Union. Furthermore, the electoral victories of Syriza (the Greek "equivalent" of Podemos) and the prominence of the debate on austerity and the EU essentially allowed Podemos to monopolise the discussion in the early stages of 2015. As the campaign went on, we can see how Podemos fell short of its initial goals, losing as much as 15% of its popular vote share, before picking back up to a more reasonable 20% in the last few weeks of the campaign. C follows essentially the exact opposite path, increasing its share as Podemos' falls, and dropping back to around 17% of the total vote share as Podemos pushed back in the last week. This suggests there is a high degree of substitutability between the parties, which is odd considering C is a centre-right party and Podemos is very far left.

More investigation of the reasons for this shared electorate is in order; however, a simple explanation could just be that they both represent new ideas in Spanish politics. Given that someone decided to vote for something "new", before the rise of C he or she could only vote for Podemos, perhaps even if the voter's ideas were in contrast with those of the party, as a sign of protest. However, the rise of C allowed those disaffected with the current political system, but still of the same ideological spectrum as the current (right-wing) governing party, to find a new home. This is just speculation for now, but perhaps it could explain at least some of the exchangeability between the two parties. IU and the Other parties maintain a fairly invariant percentage of the vote, with the exception of the last few weeks for the Other parties. This could be a result of the parties included in this category being correlated with C's or Podemos' variability.

We then go ahead and try to provide seat projections for the Congress. The difficulty here lies in the fact that, not having regional polling at our disposal, it is not possible for us to determine the result of every single race. 
Furthermore, the relationship between national vote share and seats is not 1:1, since Spain uses the D'Hondt system of seat allocation, a complex mechanism meant to strike the right balance between representation and governability. However, this system has its weaknesses, ie it over-represents large parties, especially in rural areas, whilst penalising smaller ones. This simple conceptual difference is enough to justify the use of two different projections for governing and non-governing parties, as explained above. Hence, we pool the seats won (historically) by the parties into these two categories, do the same for the votes, and then regress seats on votes for both groups. The results of these projections are then aggregated and re-weighted, providing us with the following estimates based solely on the correlation between votes and seats:
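
Incidentally, the D'Hondt rule itself is easy to implement; a minimal R sketch, with made-up vote counts for a single 5-seat province, also illustrates the large-party bias just mentioned:

```r
dhondt <- function(votes, n_seats) {
  # divide each party's votes by 1, 2, ..., n_seats;
  # the n_seats largest quotients win the seats
  quotients <- outer(votes, seq_len(n_seats), "/")
  winners <- order(quotients, decreasing = TRUE)[seq_len(n_seats)]
  party <- names(votes)[(winners - 1) %% length(votes) + 1]
  table(factor(party, levels = names(votes)))
}

votes <- c(PP = 120000, PSOE = 90000, Podemos = 70000, C = 60000, IU = 20000)
dhondt(votes, n_seats = 5)   # IU's 20000 votes win no seats at all
```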

We then proceed to update these with seat projection polling: the results of this effort are below, along with plots showing how potential seat shares have evolved during the campaign. Finally, seats are projected onto the actual parliament and compared with the parliament composition we were left with in 2011, from which we can make inference as to the total change in seats.
 
Variability in seats is generally very high, but we see many of the same patterns as for vote shares during the campaign. PP and PSOE are relatively stable in their estimates and, especially in the last few weeks of the campaign, don't seem to vary much at all. The same can be said for IU, which is even more remarkably stable throughout, whilst the Other parties experience a slight U-shaped trend in the last few weeks of the campaign, but stay within their overall campaign average interval. C and Podemos are again remarkably variable, with Podemos setting itself up to be potentially the second-largest party at the beginning of 2015, but then falling dramatically in conjunction with C's rise. C's momentum was very strong in the second half of the campaign, but flattened out in November and December, something that coincided with Podemos regaining strength.



In conclusion, we expect the post-2015-election Congress to look extremely fragmented. No party will win an outright majority, and it will be a power-play to see who manages to form a government. Mariano Rajoy is likely to stay prime minister
[I guess it's: sorry, Spain...?]
but should he find himself incapable of forming a government by brokering deals with some of the other parties, we may see new and unpredictable scenarios unfold. The powerful showing and entry into parliament of Ciudadanos and Podemos raises a question: are we seeing a gradual but definite systematic renewal of Spanish politics, or is this predicted strong performance by new parties due to the exceptional circumstances we live in (austerity, unemployment, EU centralisation, etc)? We cannot answer that as of today, but we can be sure that the answer depends almost entirely on how much "change" these parties manage to bring and, crucially, how quickly. They may find that even a politically adventurous electorate such as the Spaniards' will have very little patience when it comes to broken promises.

[Just to add that another interesting forecast of the Spanish election has been done by Virgilio]

Saturday 12 December 2015

ERCIM/CMStat 2015

Tomorrow I'll be at the 8th International Conference of the ERCIM WG on Computational and Methodological Statistics (CMStatistics 2015), which has again come to London. I have only been to this conference once, two years ago $-$ that time too it was held at Senate House, just around the block from the office.

I've been invited to talk in the session on Health Economics $-$ the first time such a session has been held at CMStats $-$ and I'll present our work on the Expected Value of Partial Information (I've mentioned this already here; my slides are here). 

The session looks good (details here $-$ search for code "EO254"). Interestingly, it seems like an Italian-Greek face-off (I guess we're somewhat in between, with Ioanna being a co-author). Anna is the odd one out as the sole non-Graeco-Roman... 

(Speaking of, the picture above is incidentally the Temple of Concordia in Agrigento, where I was born $-$ well, not in the temple, obviously, just the town...)

The Master plan

Together with Jolene (who's really been the driving force behind this) and Marcos, I've been working over the past few months to try and set up a new MSc in Health Economic Evaluation and Decision Science at UCL.

The process has been relatively long and we've had to overcome a few bumps, but it would appear that we are being successful $-$ there are a couple more signatures to get, plus all the business of advertising and setting up a couple of new modules, but this shouldn't be too terrible!

I think this is a very exciting prospect: the MSc will be made up of 8 modules and will comprise a joint core, in which students will have the choice of focusing on higher-income countries or on a global context (the latter will tend to emphasise the challenges of low- and middle-income countries). As the MSc title gives away, the students will, in addition, be able to choose a "decision science" stream or an "economics and policy" stream. I think this is very nice and crucial, since we'll be able to provide interesting options and a wide and diverse range of modules from which to learn.

We'll be involved particularly with the "decision science" stream, for which my new module "Bayesian methods in health economics" will be core (mandatory), together with other modules that we currently provide (eg Medical Statistics). Again, I think this is really good, as it's increasingly important (IMHO) to have modellers with very advanced statistical skills in health economic evaluation (or, more specifically, cost-effectiveness/utility analysis).

If all goes to plan, we'll start with the first cohort of students in September 2016 $-$ that's really exciting!

Monday 7 December 2015

Two peas in a pod

Earlier today I saw this post about Frank Wilcoxon's work on non-parametric statistics on the Significance website. I've only very recently become involved in using some non-parametric methods (notably for our work on the EVPPI and on variable selection using random partition models).

Also, Marta and I have very recently become super-involved with the work of Dr Nefario, and I couldn't help noticing the extraordinary similarity between the two...

Tuesday 10 November 2015

Good value

Earlier today we had our workshop on the Value of Information at the Ispor conference. I think it went well $-$ I counted about 80 people in the room, which I think was a big turnout (I lost count three times, so I am not actually sure about the number, but it should be just about right). We had prepared some contingency plans to foster the discussion in case people just stared at their feet, but we didn't need to resort to them, as there were a few good comments and questions, so we were quite pleased.

In comparison to other times I've been to Ispor, I think there have been quite a few interesting sessions, so that's nice too. And the weather in Milan is ridiculously good $-$ I had to chuck away my jacket and jumper the whole time. On the negative side, I thought the venue wasn't super $-$ it doesn't look particularly nice and it's kind of inefficient (with no power sockets, apart from a few stands that look like ice-cream carts, with cables to charge phones and tablets but not laptops, I think, which you'd have to fight over with the other 2 million delegates)...

Saturday 7 November 2015

More on stepped wedge

A couple of months back I talked at the launch of the Trials series on the Stepped Wedge Design, on which I have worked together with a number of colleagues at UCL and LSHTM. Jennifer, who's one of the authors of the series and is doing her PhD on this topic, has also posted on the LSHTM blog to report her impressions of the day (all of which I agree with, apart from the typo in the spelling of my name!).

Related to this, we are also getting very close to releasing our R package SWSamp (I know $-$ the page is empty at the moment, but we're working on it...). This basically started when we were working on our SW paper, and back then the code was fairly specific to my immediate objective (simulation-based sample size calculations for a SW trial). I think this is kind of obvious, maybe $-$ certainly it happens all the time with me: even BCEA had a very similar inception and then developed into something that is a lot more structured. 

However, I think I've now become much better at writing R code (NB: this doesn't necessarily mean that I've become good at it $-$ just much better than, for example, when I started writing BCEA), and so we're including a lot more functionality in SWSamp. For example, we're working on very generic functions that can handle simulation-based sample size calculations for many types of design $-$ kind of over and above the SWT.
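
To give an idea of the general approach (this is not SWSamp's actual interface, just a bare-bones sketch using lme4 and made-up parameters): simulate many datasets under an assumed mixed model for a stepped wedge design, analyse each one, and estimate power as the proportion of "significant" results.

```r
library(lme4)

sim_power <- function(n_clusters = 10, n_times = 5, n_per = 20, effect = 0.5,
                      sd_cluster = 0.5, sd_res = 1, n_sims = 100) {
  # staggered roll-out: clusters switch to the intervention at different times
  switch_time <- rep(2:n_times, length.out = n_clusters)
  signif <- replicate(n_sims, {
    d <- expand.grid(id = 1:n_per, time = 1:n_times, cluster = 1:n_clusters)
    d$treat <- as.numeric(d$time >= switch_time[d$cluster])
    u <- rnorm(n_clusters, 0, sd_cluster)                  # cluster effects
    d$y <- effect * d$treat + u[d$cluster] + rnorm(nrow(d), 0, sd_res)
    m <- lmer(y ~ treat + factor(time) + (1 | cluster), data = d)
    abs(coef(summary(m))["treat", "t value"]) > 1.96       # crude test
  })
  mean(signif)   # estimated power for this design
}

# sim_power(n_sims = 200)   # increase n_sims for a more stable estimate
```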

We should be able to have some beta-release out very soon!

Milano 2

I'm not talking about this, but rather about the curious, I'd say, coincidence that brings me to Milan for the second time in a matter of 4-5 months for a health economics conference. Back in July, I went to iHEA, which, by and large, was a very good conference. Tomorrow I'll go again, for Ispor. On Tuesday, I'll be presenting in a workshop on the Value of Information (ours is W13 on this page), together with Mark, Nicky and Anna.

Friday 30 October 2015

Bayes 2016

We're finalising the details for our next Bayes Pharma conference $-$ this time we're going to Belgium, in Leuven.

We've now opened the call for abstracts, which can be sent by email (to this address), providing Title + Authors + Max 300 words, before March 1st 2016. We'll then work quickly to notify acceptance by around March 14th 2016.

The programme sounds interesting (I want to say "as usual" $-$ I know there's a bit of a conflict there, given I'm on the Scientific Committee, but I do think so!), with the following invited speakers:

  • Greg Campbell, FDA
  • Mike Daniels, U Texas
  • Kyle Wathen, Johnson and Johnson
  • Tarek Haddad, Medtronic
  • Martin Posch, U Vienna
  • Alberto Sorrentino, Genova
  • Robert Noble, GSK

Looking forward to this already!

Thursday 29 October 2015

Our new R package

As part of the work she's doing for her PhD, Christina has done some (fairly major, I'd say!) review of the literature on prevalence studies of PCOS $-$ a rather serious, albeit, it's probably fair to say, quite under-researched area. 

When it came to analysing the data she had collected, naturally I directed her towards doing some Bayesian modelling. In many cases, these models are not too complicated $-$ often the outcome is binary, and so "fixed" or "random" effect models are fairly simple to structure and run. One interesting point was that, because there often wasn't very good or comprehensive evidence, setting up the model using some reasonable prior information (crucially, fairly easy to elicit from clinicians) did help in obtaining more stable estimates.

So, because we (she) have spent quite a lot of time working on this, I thought it would be good to structure all this into an R package. All of our models are actually run using JAGS, as interfaced through the package R2jags, and, I think, the nice idea is that in R the user can specify the kind of model they want to use. Our package, which incidentally is called bmeta, then builds a suitable model file for the assumptions selected in terms of outcome data and priors, and runs it via R2jags. The model file that is generated is automatically saved on the user's computer and can then be re-used as a template or modified as necessary (eg to include different priors or more complex structures).
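
To give a flavour of the kind of model file the package builds (a generic sketch, not bmeta's actual output or interface): a binomial random-effects meta-analysis run through R2jags, with illustrative data and a weakly informative prior.

```r
library(R2jags)

model_string <- "
model {
  for (s in 1:S) {
    y[s] ~ dbin(pi[s], n[s])       # events out of n in study s
    logit(pi[s]) <- alpha[s]
    alpha[s] ~ dnorm(mu, tau)      # random study effects
  }
  mu ~ dnorm(0, 0.26)              # weakly informative (sd ~ 2 on logit scale)
  tau <- pow(sigma, -2)
  sigma ~ dunif(0, 2)
  pooled <- exp(mu) / (1 + exp(mu))  # pooled prevalence
}
"
writeLines(model_string, "re_model.txt")

# made-up data: y events out of n participants in each of S studies
dat <- list(y = c(12, 30, 9, 22), n = c(100, 250, 80, 180), S = 4)
fit <- jags(data = dat, parameters.to.save = c("pooled", "sigma"),
            model.file = "re_model.txt", n.iter = 5000)
print(fit)
```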

Currently, Christina has implemented 22 models (ie combinations of data model and prior, including variations of fixed vs random effects), and the package also offers several graphical diagnostics, including:

  • forest plots to visualise the level of pooling of the data
  • funnel plots to examine publication bias
  • diagnostic plots to examine convergence of the underlying MCMC algorithm
The package will be on CRAN in the next couple of days, but it's already downloadable from this webpage. We'll also put up a more structured manual/guide shortly.

Tuesday 20 October 2015

Solicited

Quite a while ago, I received an email from Samantha R. of Udemy pointing me towards this article, discussing the "difference between data science and statistics" (I have to confess that I don't really know Udemy, apart from having looked at that article, and, despite having quickly searched for her, I wasn't able to find any link or additional information). She asked me to comment on the article, which I do now with over a month's delay $-$ apologies Samantha, if you're reading this!

So: I have to say that, while I don't think it's fair or wise to just discard the whole of "data science" as a re-branding of statistics, I don't agree 100% with some of the points raised in the article. For example, I am not sure I buy the distinction between statistics as a discipline of the old world and data science (DS) as one for the modern world. Certainly, if a fundamental connotation of DS is computing, then obviously it will be relevant to the modern world, where cheap(er) and (very) powerful computers are available. But the same obviously applies to statistics too.

I am not sure about the distinction between "dealing with" and "analysing" data, either. In my day-to-day job as a (proud!) statistician, I do have to do lots of dealing with data $-$ one of the most obvious examples is our work on administrative databases (eg THIN, for our work on the Regression Discontinuity Design in epidemiology); eventually, the dataset becomes very rich, with lots of potential for interesting analyses. But how we get there is an equally long (if not longer!) process, in which we do lots of dealing with the "dirt".

The third point I'm really not convinced by is when Samantha says that "Statistics, on the other hand, has not changed significantly in response to new technology. The field continues to emphasize theory, and introductory statistics courses focus more on hypothesis testing than statistical computing." It seems to me that this is far from true: we actually place a lot more emphasis on computing than we did 10-15 years ago in our introductory courses. And computation is playing an increasingly central role in the development of statistics $-$ with amazing results, to name a couple I'm more familiar with: Stan and INLA. I would definitely class these developments as statistics $-$ definitely not DS.

In general, I think that the main premise of DS (as I understand it), that data should be queried to tell you things, takes away basically all the fun of my job, which is about modelling $-$ making assumptions which you need to carefully justify, so that other people are persuaded that they are reasonable. 

Still, I think there's plenty of data for statisticians and data scientists to co-inhabit this world, and I most certainly don't take a "Daily Mail-esque" view that data scientists are coming to take our jobs and steal our women. I think I am allowed to say this, as somebody who has actually come to a different country to steal their statistical jobs $-$ at least I had the decency of bringing my own woman with me (well, actually, Marta brought her man with her, as when we moved to London she was the one with a job. But that's another story...). 

Friday 16 October 2015

Not so NICE...

Earlier today I caught this bit on the news $-$ the story of the latest NICE deliberation on ataluren, a treatment for Duchenne muscular dystrophy. That's a rare and horrible condition $-$ no doubt about it. The story is mainly that NICE has preliminarily decided not to recommend ataluren for reimbursement (the full Evaluation consultation document is here).

I thought the report in the news was not great and, crucially (it seems to me), it missed the point. The single case of any individual affected is heart-breaking, but the media are not doing the public a great service by picturing (more or less) NICE, or as they say "the drug watchdog", as having rejected the new drug because it's too expensive.

ITV's report quotes the father of one of the children affected as saying:

How do we tell Archie he is not allowed a drug that will keep him walking and living for longer because NHS England and drug companies cannot agree on a price?
That's the real problem, I think $-$ the presumption (which most of the media do nothing to explore or explicitly state) is that the drug will keep the affected children walking and living longer. The trouble is that this is not a certainty at all, on the basis of the evidence currently available. The Evaluation consultation document says (page 34):
There were no statistically significant differences in quality of life between the ataluren and placebo groups. The company stated there was a positive trend towards improved quality of life with ataluren 40 mg/kg daily in the physical functioning subscale. The company submission also described a positive effect on school functioning and a negative trend in emotional and social subscales.
So the point, I think, is that if the treatment were associated with much more definitive evidence, then the discussion would be totally different. What has not been mentioned, at least not that I have seen, is that the estimated total cost per person per year of treatment with ataluren of £220,256 is affected by the uncertainty in the evidence and the assumptions encoded in the model presented for assessment. And it is this uncertainty that needs to be assessed and carefully considered...

Tuesday 6 October 2015

PhDeidippides

Anthony (who's doing good work in his PhD project) also doubles as a runner and has written a nice post for the Significance website.

Clearly (just look at the numbers!), for many people this is a serious issue $-$ the fact that you can't officially run a big marathon such as London's unless you've been lucky enough to win your place through a ballot. 

I have to say my only experience with long-distance running was a few years back at Florence's marathon, for which I was not officially registered $-$ for that matter, I wasn't even planning on finishing it, just doing a bit of the whole thing and then going back home, so I guess it didn't really matter that I didn't get a medal or anything...

I'm not sure that guaranteeing a place to somebody who's been turned down a sufficient number of times in a row would solve the problem, though $-$ people must get fed up with the wait?

Tuesday 29 September 2015

Two PLOS Two

Cosetta Minelli and I have just published an editorial in PLOS Medicine on the use of the value of information, with particular reference to risk prediction modelling. 

We had proposed the topic to the journal, thinking that they might not even like it, but as it turned out they came back asking us to write the paper in a very short amount of time, last August. That was also in the middle of moving home, so not great timing...

Still, they seem to have liked it, and the paper is the headline for this month (this link will only work until the next issue is published in a month's time). Before you ask: we did not select the picture $-$ I thought about it for a couple of hours, and only now do I realise it's supposed to show lots of tools...