Thursday, July 9, 2015

What feedback should a programming MOOC provide?



What sorts of feedback model can be used in a MOOC and in other online learning activities? In this note we survey the different styles of giving feedback, look at automated feedback in a little more detail, and come to a conclusion about what is essential in the next iteration of the Erlang MOOC.

Different feedback styles



One mechanism is to assign a Teaching Assistant (TA) to assess the work. The advantage of this is that the TA can tune their feedback to the particular piece of work, and the individual submission for that work. Feedback can be given on partial or incorrect programs, and can cover all aspects of the program’s behaviour. It can also be given at any stage during the assessment, and not only when a completed solution is available.

One limitation is that it can be impossible to give human feedback alone on a large assessment, for which some testing data and other automated analyses may be necessary to provide the TA with suitable diagnostic information. A more fundamental difficulty is that human feedback is hard to scale, and hard to provide in a timely way that suits participants’ different speeds of working. So, while this can work for a small, focussed group, it isn’t feasible for the sort of numbers expected in a MOOC (or even in a university class of 150).

Participant self-assessment provides one route to scaling. The model for this is to give the student a comprehensive mark scheme and marking instructions, and to ask them to apply that scheme to their own work. This can then be moderated by a TA or other teacher. This approach has the pedagogical advantage that it gives the student a chance to gain insight into their own work through applying the marking scheme, but it can only be used on final or complete solutions to assessments, and certainly can’t be used to help students towards a final solution.

A variant of this is peer assessment, under which pairs of students are asked to give feedback to each other on their work. Again, this has the pedagogical advantage of allowing students to gain further insight into how work is assessed, and of giving them a better perspective on their own work through seeing another’s. It can also be used to give students support during the development of a solution, and can indeed become a form of pair programming.

There are two variants of the peer assessment approach: the pairs can be assigned “statically” in advance of the work being done, or the process can be more dynamic than that, assigning the pairing only when the two have each made a submission. It is obviously a scalable solution, though without the consistency of having a single marker; it can be supported with appropriate materials, including marking schemes etc., just as for self-assessment.

The final option, and the one that we look at in more detail in the next section, is providing some form of automated feedback to students. Of course, it is possible to combine these modes – as hinted earlier – supplementing some form of person-to-person feedback with data from a variety of automated tests and analyses.

Automated feedback



On the whole, a pre-requisite for giving feedback is that the program unit – call it a module here – needs to be compilable. Without that it is difficult, though not impossible, to give any meaningful automated feedback: some sorts of lexically-based analysis are possible even for programs that are incomplete or contain errors. Nevertheless, the rest of this section will assume that the program can be compiled successfully.


What kind of feedback can be provided using some sort of automated analysis?


  • Static checks
    • type checks
    • compiler warnings and compiler errors
    • abstract interpretation and other approximate analyses, e.g. dead-code analysis
  • Style analysis
    • decidable, e.g. lengths of identifiers
    • undecidable (requiring approximation), e.g. the module inclusion graph
    • intensional properties, e.g. whether a function is / is not tail recursive (see the first sketch after this list)
  • Tests
    • hand-written unit tests, performed by the user
    • unit tests within a test framework; can be performed by the user, or the system
    • integration tests of a set of components, or of a module within context
    • these may be tested against mocked components
    • user interface testing
  • Properties
    • logical properties for functions
    • property-based testing for stateful APIs
    • both of these can be accompanied by “shrinking” of counter-examples to minimal failing cases (see the second sketch after this list)
  • Non-functional properties
    • efficiency
    • scalability
    • fault-tolerance (e.g. through fuzz testing)
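
To make the style point concrete, here is a minimal sketch – the module and function names are mine, not the course’s – of the distinction a tail-recursion check would draw:

    %% Sketch contrasting the two styles a tail-recursion analysis
    %% would distinguish (module and function names are hypothetical).
    -module(style_example).
    -export([sum/1, sum_tr/1]).

    %% Not tail recursive: the addition happens after the recursive
    %% call returns, so each element consumes a stack frame.
    sum([])     -> 0;
    sum([X|Xs]) -> X + sum(Xs).

    %% Tail recursive: the recursive call is the last thing evaluated,
    %% with the running total carried in an accumulator.
    sum_tr(Xs) -> sum_tr(Xs, 0).

    sum_tr([], Acc)     -> Acc;
    sum_tr([X|Xs], Acc) -> sum_tr(Xs, X + Acc).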
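
The second sketch shows the unit-test and property styles side by side, using EUnit and PropEr; the module, the function under test, and the availability of PropEr are all assumptions on my part:

    %% PropEr's header comes before EUnit's, as both define a ?LET macro.
    -module(feedback_example).
    -include_lib("proper/include/proper.hrl").
    -include_lib("eunit/include/eunit.hrl").
    -export([double_all/1, prop_length_preserved/0]).

    %% The (hypothetical) function under test: a participant's solution.
    double_all(Xs) -> [2 * X || X <- Xs].

    %% A hand-written unit test: run with eunit:test(feedback_example).
    double_all_test() ->
        ?assertEqual([2,4,6], double_all([1,2,3])).

    %% A logical property: doubling preserves the length of a list.
    %% PropEr shrinks any counter-example to a minimal failing list.
    prop_length_preserved() ->
        ?FORALL(Xs, list(integer()),
                length(double_all(Xs)) =:= length(Xs)).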


What role does testing play in online learning?


  • Confirming that a solution is (likely to be) correct or incorrect.
  • Pointing out how a solution is incorrect.
  • Pointing out which parts of a solution are correct / incorrect.
  • Pointing out how a solution is incomplete: e.g. a case overlooked.
  • Assessing non-functional properties: efficiency, scalability, style, etc.


Rationale for using automated feedback


Reasons in favour of using automated feedback, either on its own or as one of a number of mechanisms.
  • Timeliness: feedback can be (almost) immediate, and certainly substantially quicker than from a peer reviewer.
  • Comprehensiveness: can do a breadth of assessment / feedback which would be infeasible for an individual to complete.
  • Scalability: in principle it scales indefinitely, or at least to the limit of available resources; if the evaluation is done client-side then there is no server-side resource cost at all.
  • Consistency of the feedback between different participants.
  • Complements peer interactions: discussion with peers can then concentrate on questions such as “why did this fail this test?”


There are, however, drawbacks to using automated feedback.
  • Can miss the high-level feedback – covering “semantic” or “pragmatic” aspects – in both positive and negative cases.
  • Restricted to programs that can be compiled or compiled and run. 
  • Difficult to tie errors to faults and therefore to the appropriate feedback.

There is a cost to automating testing, as it needs to be implemented, either on the client side – typically within a browser – or on a server. There is a potential security risk if the testing is performed server-side, and so some sort of virtualisation or container will generally be used to test programs that could potentially interact with the operating system. On the server side, any technology can be used, whereas on the client it will typically use whatever facilities are provided by the browser.
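
As a sketch of the server-side option – the base image and file names here are my assumptions, not an actual course setup – each submission could be compiled and tested inside a throwaway container:

    # Hypothetical Dockerfile: isolate a submission from the host OS.
    FROM erlang:18
    WORKDIR /sandbox
    COPY submission.erl tests.erl ./
    # Compile both modules, run the EUnit suite, then exit; the
    # container, and anything the submission did inside it, is
    # discarded afterwards.
    CMD erlc submission.erl tests.erl && \
        erl -noshell -eval 'eunit:test(tests), halt().'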


Recommendations



At the base level we should provide two modes of feedback when the Erlang MOOC is run next, together with better guidance on understanding tool output.

Dynamic peer feedback, where pairs are matched dynamically as final submissions are made. This allows solutions that are pretty much complete – and hopefully correct – to receive feedback not only on correctness but also on style, efficiency, generality, etc. This gives qualitative as well as quantitative feedback, but requires at least one other participant to be at the same stage at roughly the same time.

Unit tests and properties should be provided for participants to execute themselves. While this is less seamless than testing on the server side, it gives participants freedom to use the tests as they wish, e.g. in the course of solving the problem, rather than just on submission.
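
As an illustration – assuming the sketch module from the earlier section – a participant could run both kinds of check from the Erlang shell, at whatever point in their work suits them:

    1> eunit:test(feedback_example).
    2> proper:quickcheck(feedback_example:prop_length_preserved()).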

Understanding tools. We should include information about interpreting output from the compiler and dialyzer; some information is there already, but we should make it clearer.
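
The command-line workflow whose output we need to explain looks roughly like this (file names are placeholders):

    # Compile; warnings (unused variables, clauses that can never
    # match, ...) are reported alongside any errors:
    erlc submission.erl
    # Build a PLT for the standard libraries once, then analyse the
    # submission from source with dialyzer:
    dialyzer --build_plt --apps erts kernel stdlib
    dialyzer --src submission.erl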

Other feedback is desirable too; in particular, integrating as much of this feedback as possible into the browser would make for an attractive user experience, especially if it sits alongside the teaching materials themselves, à la Jupyter. As an alternative, the feedback support could be integrated with the server, but this will require effort to craft a bespoke solution for Moodle if we’re not yet using a mainstream MOOC platform.

Sunday, June 14, 2015

The Erlang mini-MOOC pilot … what have we learned?


We’ve learned some things from our pilot MOOC: some of it surprising, some more predictable. The main places where we’ll need to work harder are how we give feedback on homeworks and, more generally, how we engage participants … more about both of those below.

We’ll also do some more detailed reflection on the quantitative data that we’ve gathered, but that’s for another post when we’ve had time to analyse it, and also given time to those who are still working hard on the pilot to finish up.

So, in no particular order, here is what we have found out from the experience of planning, producing and running the pilot, which is part of the K-MOOCs project at Kent.

Planning is key


One key thing we learned from Mike McCracken – who has led Georgia Tech’s online activities, including a full online MSc in computer science – is how important it is to plan. Mike's advice was to plan each activity in enough detail so that, once we started in production, all that we needed to do was make the video, write the quiz, or whatever.

Here’s some of the planning in progress: you can see that we’ve got three levels – weeks 1 to 3, sections within each week (typically 3 or 4), and sub-sections, which can be anything up to 8 per section.

Planning takes you longer than you expect … but it saves time later.


Mark O'Connor – the learning technologist and project manager for the K-MOOCs project – was also clear about the importance of planning, and he helped us to refine things into a template for describing each activity.

This took about two weeks; that’s a lot longer than I’d expected, given that much of the material was already written for an undergrad course at Kent on functional and concurrent programming. However, the actual production took less time than we’d anticipated, and this was down to the value of doing all that planning in advance, so it paid off. Thanks, Mike and Mark!


Lo-fi is OK


We recorded the three Erlang Master Classes in a serious – I guess semi-professional – studio, with filming and post-production by KERSH media to a very high standard. My master class was part of the pilot, and all three will be part of the full MOOC, but our budget didn’t stretch to making all the videos in a studio.

Instead, we used my office and MacBook Pro … pictured right. The box makes sure that the camera angle is right, and the external keyboard means that you can’t hear all the keystrokes during “live coding” sessions. Apart from making sure that no-one could come in and disturb the session – my mobile was switched off and the phone unplugged from the wall – no special preparation was needed. Once I got into my stride, most of the lectures were done in a single take, with Mark providing some post-recording cutting in case I’d had to start again, or had paused for too long.

The preliminary feedback seems to suggest that this lo-fi approach is “good enough” for the lectures, which after all are concentrating on getting ideas across effectively, and not aiming to be eye candy.

Teamwork makes things fly


Getting it right was a matter of teamwork. Mark O’Connor played an invaluable role in project admin in general, but more importantly in building the web presence for the MOOC. We were using our own “external” Moodle setup, rather than a specific MOOC platform, and Mark was able to give that a look and feel consistent with what we hoped people were expecting.

Stephen Adams – who is doing a PhD at Kent on refactoring Haskell programs – was TA for the course, and separating that role from that of the lecturer seemed to work fine. He had clear responsibilities for supporting the forums and so forth when the course was running, whereas my role was to get the pedagogy and content right in advance.

The platform was “good enough”


So we were using Moodle rather than a dedicated MOOC platform, and this had some disadvantages.

Dedicated platforms will provide some services, such as transcription of lectures, which we were unable to provide in the pilot. We’ll have to find a way of dealing with this if we don’t move up to using a MOOC platform, and indeed we’ll have to be more aware of questions of accessibility in general.

We also struggled with how to keep participants up to date with changes that were taking place. This is tricky because we decided early on to have a separate discussion forum for each activity: we knew that people would want to go through the materials at different rates, so at any particular time each individual’s interest could potentially be on something different, and we had to group discussion by topic rather than just by time. Ideally we’d like to use some sort of feedback dashboard, but in conversation I’ve heard it said that none of the platforms has a complete answer to this, most likely because it’s a tricky problem.

Moodle performed better than standard platforms in some ways. At Kent we use Panopto to power our lecture recording platform (which we used for post-production for the MOOC). This allows users to view the media – e.g. slides, screen capture, webcam video – separately, and, for instance, to index into the presentations by means of the slides. This goes beyond what most MOOC providers offer as standard.

Publicity


We’d originally aimed to have fifty to a hundred participants for the pilot, drawn both from CS students at Kent and from outside. At Kent we offered the course as revision for second-year students who had already learned Erlang, and as a taster for those who’ll take the Erlang course next year. I’d also recruited some potential participants at the Erlang Factory in San Francisco in March.

Mark was keen for us to be more ambitious, and so I tweeted an invitation, hash-tagged #erlang. We got enough retweets from key Erlang players that this brought us up to our 500 cap relatively quickly – a pleasing result, not least because it cost us nothing in advertising.

Make it multi-purpose


We wanted to make the best use that we could of our limited budget, so we made sure that we could use what we did in three different, but interlinked, ways.
  • First, we’d have a MOOC for people new to Erlang, but who we expected to have some programming experience.
  • Secondly, we’d make some parts of the MOOC, particularly those which give more extended examples, into master classes which could be of value for people who already know Erlang, but who want to see it used in practice.
  • Finally, the materials will allow us to teach Erlang to our face-to-face CS students in a different way: we’ll particularly be able to try “flipping the classroom”, so that students watch lectures in their own time, and class time can then be used for more interactive activities.

Engagement


We used forums for discussion, one per activity, but also added a general FAQ for some of the questions that came up, as well as a “common room” for general discussion that cut across individual activities. We also used a forum as the place where we started to try to form an online community, through people introducing themselves to the group.

Although we did have discussions, particularly about the homeworks, we’d have liked to promote more of them, and one idea we’ll definitely explore next time is finding ways of pairing up people to help each other out. Our plan is to do this dynamically, so that two people who hand in a solution to a homework at about the same time will be asked to comment on each other’s work. We hope that this will make it easier for everyone to see how to contribute, and also how they will gain benefit themselves from the process.

Finally, we could also pro-actively form groups to work on later, larger, exercises (maybe supported with some collaborative editing software) but again this group formation will need to be dynamic to reflect the different ways that individuals use the materials.


Feedback 


One of the aims of engaging participants actively was to provide feedback to others on their work. What became clear is that we should also provide some way for participants to get automated feedback on their work too.

Some years ago, Kent and Erlang Solutions Ltd worked together on an e-learning project hosted in Moodle, with Roberto Aloi as key investigator. This project developed a feedback plugin for Moodle to give testing and style feedback on Erlang programs, and we hope to work with ESL to re-develop this to fit with more modern technology (such as Docker and REST).

Conclusion


We were happy with the pilot. It showed us that we could deliver a mini-MOOC using available resources and reasonable amounts of time. It also showed us where to put our effort next, principally in two areas: promoting the best engagement possible from participants, and delivering to them the best feedback that we can, at the time when it is most valuable to them. We hope to move things forward over the summer, and to present a full MOOC later this year or early next. We’ll also follow this blog post with another containing more details about the feedback from the participants in the pilot.




Thursday, May 28, 2015

On the fine-structure of regular algebra by Foster and Struth (review)

This review of On the fine-structure of regular algebra by Simon Foster and Georg Struth (Journal of Automated Reasoning 54 (2): 165-197, 2015) was first published in Computing Reviews.

Although it is dangerous to generalize from a single example, it’s instructive to look at Foster and Struth’s paper as giving us a snapshot of the state of the art in automating ordinary reasoning. First, let’s look at the particular subject of the paper.

Regular expressions and finite automata are fundamental to the practice of computer science, but they are a fertile topic for research, too. A key question is identity: when do two regular expressions (or two finite automata) behave identically? The answer is “when they describe the same set of strings,” but that’s less useful computationally than it first looks, since it requires that we test sets, which potentially are infinite, for equality. So, how else to judge identity? By setting out a collection of laws, or axioms, that encapsulate identity. Foster and Struth’s paper sets out the various attempts by a number of authors, including Boffa, Conway, Kozen, and Salomaa, to provide axioms that are sound (don’t prove any invalid identities) and complete (can prove all valid identities).
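
To make the identity question concrete (this example is mine, not the paper’s): the expressions (a+b)* and (a*b*)* both denote the set of all strings over the alphabet {a, b}, so a complete axiom system must be able to derive their equality, while a sound one must never equate, say, a* and (ab)*.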

The “fine structure” of the title is a detailed study of the interrelationships of these sets of axioms: showing when one set implies another (a process of deduction) and when one set is strictly stronger than another, which is shown by giving a counterexample that meets one set and not the other. The process of both deduction and finding counterexamples is computer assisted, and this provides the main contribution of the paper.

How does the automation directly contribute to the mathematics? It means that a number of loose ends could be tidied up: one set of axiomatizations was simplified, some missing axioms were found in the statement of a particular axiom set, and a gap was found in an existing, well-known proof. So it increased assurance in the work, but probably does not constitute a substantial advance in itself.

On the other hand, the work is indicative of real progress in the facilities available to support online proof and counterexample development. Advances have come in a number of ways. Most important is the way in which different approaches have come together: it used to be that the proof was either done manually, with a proof assistant, or in an entirely automated way, using some kind of decision procedure. The work under review uses the Isabelle proof assistant [1], which now supports the Sledgehammer tool [2], to apply a number of external automated theorem provers for first-order logic; in cases where this fails, Isabelle has its own simplification and proving systems. This has the effect of allowing users to concentrate their attention on shaping the higher-level architecture of the theories, rather than having to deduce the results proof step by (painful) proof step. Where this reaches its limit is when proofs go beyond the purely algebraic and contemplate rather more mathematical structure (for example, sets) as provided by a higher-order logic.

Although most mathematicians don’t use theorem-proving technology in their day-to-day practice, as opposed to the wide take-up of computer algebra systems in various areas of mathematics and engineering, it has taken real hold in the area of mechanized metatheory of programming languages, as witnessed by the POPLmark challenge (http://www.seas.upenn.edu/~plclub/poplmark/). It may well be that, because this is an area where the scale of proofs can be such that the usual social process of proof checking has been shown to be less than ideal, particular value accrues from formalization. In conclusion, it is interesting to speculate on how long it will take for the practicing mathematician to reach for a theorem-proving system as a part of his or her daily work.

[1] Isabelle home page. https://www.cl.cam.ac.uk/research/hvg/Isabelle/index.html (accessed 05/13/2015).
[2] Blanchette, J. C.; Böhme, S.; Paulson, L. C. Extending Sledgehammer with SMT solvers. In Proc. of the 23rd Int. Conf. on Automated Deduction (CADE-23). Springer, 2011.

Wednesday, May 13, 2015

Erlang MOOC pilot, day 3

We’re at day 3 of our MOOC pilot on functional programming in Erlang: so how are things going?

MOOCs aren’t synchronous, and so people can begin and progress in their own time. So far half of those who signed up in advance have started to use the MOOC, and half of them have progressed a good way through the first week’s material. We have people taking an active part from (in no particular order) the Netherlands, Spain, Russia, the USA, Mexico, Denmark, Italy, France, Argentina, Chile, Japan, the Czech Republic, New Zealand, China, Canada and the UK, amongst others.

We’re also finding our way to the best means of interacting. Our forums are split “by activity”, and while this keeps each one on topic, it leaves no space for general discussions, so we’ve added a “common room” for those. We also wanted to share with everyone some of the responses to points made in the feedback form, so we have added a (growing) FAQ page to hold those questions and answers. Luckily we’re able to sort quite a few things out, but some others – like making videos downloadable – are going to take more time, and internal university discussions, to resolve.

We’re also beginning to see discussion and feedback on programming homework assignments. Part of the MOOC ethos is to involve everyone in giving feedback about others’ work and problems, so we have a fully-meshed network rather than a hub-and-spoke model with the Kent staff providing all the feedback. It’s still early days, but discussions are beginning. We also had a request for acceptance tests for the homeworks, but again for the purposes of the trial we’re encouraging participants themselves to develop and share these too. We might come back to this as the pilot progresses …

So far Moodle seems to be coping … there would be clear advantages to going with a fully-featured MOOC platform, such as the provision of video transcripts for accessibility reasons, but Moodle is managing so far. We’re also able to see how people are progressing in quite some detail, so there will be number crunching going on over the course of the pilot, and hopefully some useful data analysis as an outcome.

On Monday, coincidentally, we were part of a tea party to celebrate the Beacon Projects at Kent, and there we got a lot of encouragement and interest in what we’re doing. It can only help in making the case to the university for the multi-faceted value of MOOCs, online lectures and blended learning to our staff, our students and our public reach.

Monday, May 11, 2015

“Functional Programming in Erlang” MOOC pilot

It’s all very peculiar.

We’ve been working flat out to get the pilot MOOC on Functional Programming in Erlang up and running at Kent, we have signed up over 500 participants, and it just went live 20 minutes ago. Rather like sending a book manuscript off to the publishers, it has all gone quiet, and we're waiting to see what's going to happen. I should tell you who the “we” are: apart from me, there’s Mark O'Connor, who is Distance Learning Technologist for the university, and Stephen Adams, who is TA for the course and who is a PhD student in computer science here.

This pilot is part of a bigger plan, which includes supporting our face-to-face teaching of Erlang at Kent, as well as putting out a set of Master Classes in Erlang by Joe Armstrong, one of the inventors of Erlang, Francesco Cesarini, founder of Erlang Solutions Ltd, and me. These will be coming shortly. This K-MOOCs project is being supported by the University of Kent as part of its Beacon Projects initiative, which is in celebration of the University’s fiftieth anniversary.

We recorded the master classes in a studio with green screen etc., to try that out, and these are in the final stages of production, but the majority of the MOOC lectures are more “home made” and were recorded on my MacBook Pro, using the built-in audio and video facilities and Kent’s lecture recording facilities, which are powered by Panopto. Given that “available tech” approach, we’ve been pleased with the results (but our participants may choose to differ!).

In getting the MOOC planned and executed, Mark has been tireless in providing infrastructure, in getting me to plan, in editing and in feeding back on all the activities. We’ve also really benefitted from advice and help from Mike McCracken of Georgia Tech, who has pioneered their online CS courses and MOOCs. Without Mike’s advice about how to script the course in advance, implementation would have been so much more difficult.

As it is, we’ve got the course in place. I’ll be adding a few finishing touches – more quizzes! – today, but really we’re waiting to see how people get on with what’s there. I’ll aim to blog some more as the pilot runs, and we see how it all works out.

Thursday, April 30, 2015

Automated Proof in Practice


While it is dangerous to generalise from a single example, it's instructive to look at Foster and Struth's paper as giving us a snapshot of the state of the art in automating ordinary reasoning. But first, let’s look at the particular subject of the paper.

Regular expressions and finite automata are fundamental to the practice of computer science, but they are a fertile topic for research, too. A key question is identity: when do two regular expressions – or two finite automata – behave identically? The answer is “when they describe the same set of strings”, but that's less useful computationally than it first looks, since it requires that we test sets – which potentially are infinite – for equality. So, how else to judge identity? By setting out a collection of laws – or axioms – that encapsulate identity. Foster and Struth's paper sets out the various attempts, by a number of authors including Boffa, Conway, Kozen and Salomaa, to provide axioms that are sound (don't prove any invalid identities) and complete (can prove all valid identities).

The “fine structure” of the title is a detailed study of the interrelationships of these sets of axioms: showing when one set implies another – a process of deduction – and when one set is strictly stronger than another, which is shown by giving a counterexample which meets one set and not the other. The process of both deduction and counterexample-finding is computer assisted, and this provides the main contribution of the paper.

How does the automation directly contribute to the mathematics? It meant that a number of loose ends could be tidied up: one set of axiomatisations was simplified (through finding hitherto undiscovered relationships between axioms), some missing axioms were found in the statement of a particular axiom set, and a gap was found in an existing, well-known proof. So, it increased assurance in the work, but probably does not constitute a substantial advance in itself.

On the other hand, the work is indicative of real progress in the facilities available to support online proof and counterexample development. Advances have come in a number of ways. Most important is the way in which different approaches have come together: it used to be that proof was either done “manually” with a proof assistant, or in an entirely automated way using some kind of decision procedure. The work under review uses the Isabelle proof assistant, which now supports the Sledgehammer tool to apply a number of external automated theorem provers for first-order logic, and in cases where this fails, Isabelle has its own simplification and proving systems. This has the effect of allowing users to concentrate their attention on shaping the higher-level architecture of the theories, rather than having to deduce results proof step by (painful) proof step. Where this reaches its limit is when proofs go beyond the purely algebraic and contemplate rather more mathematical structure (e.g. sets) as provided by a higher-order logic.

While it's not the case that most mathematicians use theorem proving technology in their day to day practice – as opposed to the wide take-up of computer algebra systems in various areas of maths and engineering – it has taken real hold in the area of mechanised meta-theory of programming languages, as witnessed by the POPLmark challenge (http://www.seas.upenn.edu/~plclub/poplmark/). It may well be because this is an area where the scale of proofs can be such that the usual social process of proof checking has been shown to be less than ideal, hence the value that accrues from formalisation. In conclusion, it is interesting to speculate on how long it will take for the practising mathematician to reach for a theorem proving system as a part of his or her daily work.

This review of On the Fine-Structure of Regular Algebra by Simon Foster and Georg Struth, published in the Journal of Automated Reasoning, is a first draft as submitted to Computing Reviews.

Wednesday, February 25, 2015

Some stories about Alan Turing


So, here are a few Turing stories of my own. The first is from Robin Gandy, who supervised my DPhil and who, in turn, was Turing's PhD student. Robin had a wealth of stories about Turing, many of which made their way into Andrew Hodges' book – Hodges was a postgrad in Oxford with Penrose around the beginning of the 80s. Perhaps most poignant [quoting from Mike Yates' obituary of Robin]: when asked about Turing's motives, if he really did commit suicide, Gandy would become quite heated: “Some things are too deep and private and should not be pried into.” Sara Turing, Alan's mother, certainly always maintained that his death was an accident.

Her biography of Alan was republished in the centenary year by Cambridge University Press, and that, too, consists only of remembered stories of his youth and adulthood. The most striking thing for me was the postscript written by his brother John on their upbringing, which was not untypical for the English upper middle classes in the early years of the century. Two quotes:
  • When Alan was two “rightly or wrongly, [my father] decided that he and my mother should return alone to India, leaving both children with foster parents in England” … “it was certainly a shock for me, even at the age of five” but “it was accepted procedure” (and he goes on to compare it with Kipling's horrendous experience, noting that at least they “escaped” that).
  • The real bombshell, though, is schooling. John says, without an ounce of irony or indeed anger, “I take credit for persuading my parents to send [Alan] to Sherborne instead of Marlborough, which all but crushed me and would certainly have crushed him”.
A final anecdote. In our previous house, a near neighbour was a retired canon from Canterbury Cathedral, Donald Eperson, who wrote puzzles for the Mathematical Gazette, and who had been a schoolteacher before being ordained. Not just any teacher, though: he'd taught Alan at Sherborne, and indeed is mentioned in the Hodges biography. He remembered Alan, and I lent him the book – unfortunately, references to his naiveté rather upset him, and I was sorry for unsettling him.

It's certainly a great thing that Turing has become almost a household name, and that his memory has been rescued for generations to come as one of the greatest scientists of the twentieth century. It's also a great thing that he was pardoned for his conviction for being gay … but surely something that should apply to everyone who was treated so shamefully?

Links

Gandy obituary

Sara Turing bio of Alan

Memoir of Donald Eperson (look for "Music and Mathematics")

The Imitation Game – telling a good story about Alan Turing


So, what to make of The Imitation Game, the film based on the life of Alan Turing?

Well, first of all it tells a good story. Some of the key messages about codebreaking are there:

  • Knowing something about the content – particularly stylised beginnings or endings – makes it easier to break the code.
  • The paradox of the codebreaker: you can't betray the fact that you've broken the code by changing your behaviour too much, or the coders will change their setup … something card players surely recognise.
  • The Bletchley crowd were a mixed bunch: classicists rubbed shoulders with mathematicians and debs.
But it's clearly telling a story in the sense of lying too, and that's a frustration. Maybe it must, to move the plot along, but some of the changes seem wilful and so out of character:
  • Part of the extraordinary nature of Bletchley was its scale: in the film it's shrunk to something like a "Famous Five" adventure: Turing and his small crew have the idea for the machine, build it (no Tommy Flowers), and then take the decision about not being able to reveal that the code has been cracked; that just doesn't make historical sense, but I guess it keeps the plot moving;
  • An anecdote about the scale of the place: a couple who were in the forces during WW2 were recently visiting Bletchley, and half way round the husband confessed to his wife that he'd worked there during the war – he'd been sworn to secrecy about the whole operation – only for her to admit to working there too; perfectly possible.
  • More egregious is the sub-plot about Cairncross, and suggesting that Turing had in some way colluded with him – no historical evidence for this at all.
  • Worst of all, I think, was the conceit of Turing's "home computer" Christopher. No evidence for that at all.
So, it's a good story, well acted and put together, but it tells too many stories to be completely satisfactory. Check out the biography by Andrew Hodges for a comprehensive and erudite view of Turing's life.