Thursday 30 January 2014

LaTeX can be arsey, but boy is it good?!

I have been using LaTeX since I wrote my BSc thesis (that was way back in the last century $-$ I'm saying this just for dramatic effect; I'm not THAT old!) and have loved it ever since. Of course, I do use WYSIWYG typesetting software now and then, but I try to avoid it as much as possible when doing serious work involving maths formatting.

But, amazing as it is, LaTeX does sometimes have its quirks, and solving them comes at a cost (= the time invested in looking up the solution). In the past few days, I've had a couple of spats.

The first one has to do with the fact that I typically use the old latex $\Rightarrow$ dvi $\Rightarrow$ ps $\Rightarrow$ pdf routine to produce my documents/presentations (eg I use ps files for graphics, so that I can use things like pstricks $-$ which I have now learned to use quite well, and so find relatively easy to deal with). But now I'm working on a joint presentation [I'll post about this later $-$ that's good stuff!] and my collaborators had already started working with the newer pdflatex routine, which prevents you from (straightforwardly) using ps files. So I had to struggle a bit to use all my pstricks-created graphs. In the end, I decided to save each one as a separate .pdf file (using my old routine) and then import those into the beamer presentation.
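In practice, this amounts to something like the following (just a minimal sketch to show the idea $-$ the file names and the little picture are made up, not the actual graphs). Each graph lives in its own file, say figure1.tex, which gets compiled with the old latex $\Rightarrow$ dvips $\Rightarrow$ ps2pdf route to produce figure1.pdf:

% figure1.tex -- compile with latex + dvips + ps2pdf, NOT with pdflatex
\documentclass{standalone}
\usepackage{pstricks}
\begin{document}
\begin{pspicture}(4,3)
  \psline{->}(0,0)(4,0)
  \psline{->}(0,0)(0,3)
  \pscurve(0,0)(1,2)(3,2.5)
\end{pspicture}
\end{document}

The resulting pdf can then be imported in the beamer presentation, which is compiled with pdflatex as my collaborators wanted:

\begin{frame}{A recycled pstricks graph}
  \centering
  \includegraphics[width=.8\textwidth]{figure1}
\end{frame}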

The second one was slightly more complex (and in fact I had to ask for external help to solve this). I'm writing something where we want to show some "normal" text and then some sort of "example boxes", which we want to typeset in a different, sans serif font (hence the clever image above: sans serif, Arial, Helvetica, Switzerland... pretty font-nerdy stuff!). [This is even better stuff and again I'll post about it later]

The problem is that Helvetica is actually somewhat larger than other typefaces of the same nominal size. As a result, mixing Times and Helvetica within running text may look bad, which is exactly what was happening: the text in the example boxes (typeset in Helvetica) would look slightly bigger than the maths (eg equations), which was freaking me out. But this can be easily fixed by loading the package helvet with the option [scaled=(scale)], for instance: \usepackage[scaled=.92]{helvet}. As a result, the font family phv (Helvetica) will be scaled down to 92% of its "natural" size, which is suitable for use with Adobe Times. 
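For the record, the fix boils down to a preamble along these lines (a minimal sketch $-$ the examplebox environment is just something I've made up here to show the point, not the actual one used in the document):

\documentclass{article}
\usepackage{mathptmx}             % Times for text and maths
\usepackage[scaled=.92]{helvet}   % Helvetica (phv), scaled down to 92%

% a very simple sans serif "example box"
\newenvironment{examplebox}%
  {\par\medskip\noindent\sffamily}%
  {\par\medskip}

\begin{document}
Some ``normal'' text with maths, eg $y_i \sim \mbox{Normal}(\mu,\sigma^2)$.

\begin{examplebox}
An example box, typeset in (scaled) Helvetica, which now sits nicely
next to the Times text and the equations.
\end{examplebox}
\end{document}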

Wednesday 15 January 2014

BMHE & BCEA get a shout in published paper

Panagiotis Petrou has posted a link to a recent paper of his, which develops a cost-effectiveness analysis of a drug used as a second-line treatment of renal carcinoma. The analysis is based on a Bayesian Markov model. 

But (from an incredibly self-involved point of view, I realise!), more importantly, they say on page 132:
"The model was synthesized in WinBUGS software package (Bayesian inference Using Gibbs Sampling) suitable for analyzing complex statistical models [18], and the R package Bayesian Cost Effectiveness Analysis [19] to do all the economic evaluation process after the Bayesian model has been run."
The R package is actually BCEA and it was also very nice to see that ref [19] is in fact BMHE, which they refer to as "An excellent book in health economics".
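For those who haven't played with it, the basic BCEA workflow after the Bayesian model has been run looks something like this (a sketch only $-$ the object names are made up, and e and c stand for the matrices of posterior simulations for the measures of effectiveness and cost, one column per intervention):

library(BCEA)

# e and c: matrices of posterior simulations (one column per intervention),
# eg extracted from the WinBUGS output -- the names here are made up
m <- bcea(e, c, ref=2, interventions=c("Standard care","New drug"), Kmax=50000)
summary(m)        # ICER, EIB, CEAC etc.
ceplane.plot(m)   # cost-effectiveness plane
ceac.plot(m)      # cost-effectiveness acceptability curve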

This is clearly the soundtrack to this post...

Monday 13 January 2014

Job @ UCD

This is an interesting job opportunity $-$ Mark (this is his UCL webpage, although he's now officially transitioned to Warwick) has pointed this out to me, and I thought I may as well advertise it through the blog!

Friday 10 January 2014

Porn capital of the porn nation

The other day I was having a quick look at the newspapers and I stumbled on this article. Apparently, Pornhub (a website whose mission should be pretty clear) have analysed the data on their customers and found that the town of Ware (Hertfordshire) has more demand for online porn than any other UK town. According to Pornhub, a Ware resident will last 10 minutes 37 seconds (637 seconds) on its adult website, compared with the world average time of 8 minutes 56 seconds (just 536 seconds).

Comments have gone both ways, with Ware being dubbed (in a somewhat derogatory way) "Britain's porn capital", while some people have highlighted the better "performance", shall we say, of Ware viewers (who, ahem, lasted online nearly 2 minutes longer than the average viewer).

The data (or at least an excerpt) are available from The Guardian website and so I have very, very quickly played around with them. In particular, I think that it's kind of weird that the analysis focussed just on the maximum value; so I had a quick look at the entire distribution.



Interestingly, it appears that not just Ware, but basically all of the British towns in the dataset are above the world average; some are even very close to the porn capital! So do the same comments made for Ware apply to the whole of the UK?

After downloading the data to a file "PornUse.csv", you can easily recreate the graph with this very simple R code:

# read in the data -- the relevant variables are time_seconds, Region and Town
porn <- read.csv("PornUse.csv")
attach(porn)

# boxplot of time spent online by region, with the world average as a dashed line
boxplot(time_seconds~Region,main="Time spent online viewing porn (in secs)",ylab="Time (seconds)")
abline(h=536,lwd=2,lty=2)

# label the three towns with the longest times
ord <- order(time_seconds,decreasing=TRUE)
for (i in 1:3) {
   text(1,time_seconds[ord[i]],Town[ord[i]],cex=.6,pos=4)
}

# label the towns with the shortest times
ord2 <- order(time_seconds,decreasing=FALSE)
text(4,time_seconds[ord2[4]],Town[ord2[4]],cex=.6,pos=4)
for (i in 1:3) {
   text(1,time_seconds[ord2[i]],Town[ord2[i]],cex=.6,pos=4)
}


Wednesday 8 January 2014

Bayes 2014 coming up nicely!

Today I had a very useful teleconference with the other members of the organising committee for the Bayes Pharma 2014 conference. The new website is already up and running and we've included some information.

We've nearly finalised all the details and the call for abstracts is now open (the deadline is March 30th $-$ we aim to get back to applicants by April 15th). Registration is also open and the details can be found here.

I've already mentioned the programme here $-$ the topics are confirmed and the line-up of confirmed speakers is already exciting, I think. I'll post again when everything is finalised (which should be very shortly).

The conference will be 11-13 June and details of the location can be found here.

New year, new cost-effectiveness thresholds?

Karl Claxton and colleagues at the University of York have recently published a working paper on Methods for the Estimation of the NICE Cost Effectiveness Threshold. Since a guideline was issued in 2004, NICE has used standard values of £20-30,000 per QALY as the official cost-effectiveness threshold. This represents the maximum acceptable cost per quality-adjusted life year gained by investing in a new technology at the expense of an already existing intervention. Decisions on reimbursement have been based on this decision rule (eg if the cost per QALY gained exceeded this range of thresholds, then the new intervention was deemed not cost-effective).
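Just to fix ideas, the decision rule in its simplest form looks like this (a made-up numerical example, not taken from the paper):

# made-up incremental cost and incremental QALYs for a new technology,
# compared with the existing intervention
delta.cost <- 4500        # extra cost per patient (in GBP)
delta.qaly <- 0.20        # extra QALYs per patient

icer <- delta.cost/delta.qaly   # cost per QALY gained = 22500
threshold <- c(20000,30000)     # NICE range (GBP per QALY)

# the new technology is deemed cost-effective if the ICER does not exceed the threshold
icer <= threshold
# [1] FALSE  TRUE  -- ie borderline: above £20,000 but below £30,000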

This paper tries to produce an updated version of the threshold, based on empirical evidence (eg programme budgeting data for the English NHS). The methodology used is quite complex $-$ the full report is over 400 pages and I have only skimmed through it, reading with some care only some parts. Technically, the analysis is based on an instrumental variables approach within a structural equations setting and aims at simultaneously estimating the impact of the level of investment (and other variables) on health outcomes and the impact of the overall budget constraint (and other variables) on the level of spending for a given health programme. Their main result is to suggest a slightly lower value to be used by NICE (£18,317 in some sort of baseline scenario).

As I said, I only skimmed through the report, but I think it looks like a substantial piece of work. Nevertheless, I think there are some major limitations (which, to be fair, the authors acknowledge in the text; the Office for Health Economics has also produced a critique of this paper, which is available here).

The main one seems to be the (lack of) availability of data for all the different programmes, to be used to translate the impact of expenditure into changes in quality of life. On the other hand, the paper tries to deal carefully with the issue of uncertainty propagation; for example, there is a whole section on the evaluation of structural and parametric uncertainty $-$ although this is not directly based on a full Bayesian model (which is kind of strange, given Karl is the main author of the paper...).

Tuesday 7 January 2014

Significant news

Good news on the second day back to work after the Christmas break: I've been invited to join the Editorial Board of Significance magazine $-$ of course, I have happily agreed to the invitation!

I have always been a big fan of the magazine (in fact I concocted a picture of XY holding a copy, a while back) and am really excited about being on board. 

I think the job will be mainly to identify topics and potential authors for articles. I do have a few in mind already...