Sunday, October 5, 2014

Advice on going to your first conference

I was asked for advice by someone going to his first conference … in this case CodeMesh in London. Here are my thoughts … any comments or other advice?

Strategy … what are you aiming to find out from the conference: some very general impressions of what is going on in functional innovation, something focussed on a particular language or languages, or on particular technologies? Everything else depends on your answer …

Plan … CodeMesh has 4 parallel tracks (a real hassle), but it's well worth making sure that you have a plan for what you most want to hear, reflecting your strategy. You should be able to move between individual talks, though that can be awkward when rooms run out of sync with each other, or are some distance apart; if so, you can always change at session breaks.

Another way of choosing is to go to the talk that you know least about. If you go to the one you know most about, that talk will probably spend 80% or more of its time telling you things that you already know. Of course, it needs to be one you're interested in …

Networking … a lot of what happens during the conference is outside the sessions, so do make sure that you make the most of the coffee / tea / meal breaks, and the evening sessions. Speakers are always happy to chat, so you can engage with them outside sessions, or follow up on any questions you ask in a session (scary, but a way of getting noticed). If you don't feel comfortable asking a question after a talk, you can always email the speaker, introduce yourself, and ask it that way.

Find an introducer, and have them introduce you to some people. If you are going with your PhD supervisor or another colleague or friend who has been there before, they should do the introductions.

Presenting your ideas … you're going to meet people who'll ask what you're working on and you want to interest them and move the conversation along, not stumble over how to explain what you're doing. So, have a 30 second elevator pitch ready.

Talk … OK, you may not feel ready for this, but if you have something specific and interesting to say you can often give a 5 minute "lightning talk" at meetings like CodeMesh. That gets you and your work noticed, and people are usually very generous in listening to and supporting speakers with less experience.

Learn … if you can get to the tutorials, you can learn a whole lot there. There are some excellent tutorialists at CodeMesh this year.

Social Media … increasingly there is a whole virtual side to conferences, so track the twitter feed and other online stuff, and contribute yourself too.

Branch out … if you're there with your buddies, don't just stick with them, but aim to meet new people too, particularly at any of the conference social events. Even a casual conversation a couple of times over a couple of conferences lays a foundation for a deeper professional relationship, especially with peers.

Refreshments … there's always free (well, paid for in the registration price) coffee/tea etc. at the breaks and lunch in the middle of the day, but at quite a lot of conferences there's breakfast too, so it's worth getting along in time for that.

Afterwards you can follow up with people you have met by email or social media. You can also catch up on presentations which you missed by watching the video if the conference talks are recorded. Personally, I find that despite my best intentions, it's very unusual for me to find the time to do this.

Particularly for big conferences, it's best to pick and choose which talks to attend rather than sitting in on everything, otherwise you'll get burned out in the first couple of days. “I'd also advise listening to the speakers rather than sitting on your laptop/phone - personally I'd ban such devices from talks!” Also, keep away from alpha male superstars and their groupies … focus on the people who give presentations that fire you with enthusiasm, and those who take your presentation seriously.

Finally, have fun soaking up all the new ideas and meeting all the interesting people behind them.

Updates … thanks to Scott Fritchie, Andy Gill, Kathy Gray, Graham Hutton, Stefan Kahrs, Greg Michaelson, Neil Mitchell and Gem Stapleton for their comments and suggestions.

HTH

Simon

Monday, August 25, 2014

Reading Robert Macfarlane by the internet

There's an old-fashioned pleasure to reading on a wet August afternoon. Robert Macfarlane's The Old Ways takes you – in imagination – out into the wilder (or indeed not so wild) parts of Britain. What increases the pleasure and depth of the experience is reading with the internet by your side. With the internet we get so much further …

First, the maps – Ordnance Survey if you have paid for them, or Google Maps if not. Walking along the Broomway in (or rather off) Essex takes you to that footpath along the sands, right next to the "DANGER AREA" signs. In the Hebrides, we can find the islands on Google Maps – and satellite view – unnamed, but unmistakable from his descriptions. And then to Wikipedia to see what gannets look like and read about the peculiar anatomy that sustains their deep dives into the ocean, making the story of the gannet that pierced the hull of the boat but kept it plugged entirely believable.

And then on to Harris itself. Trying to trace the walk he makes, we again have topography and named lakes, but no hill names – yet hills cast shadows on the satellite picture, and photographs culled from somewhere (even Street View was here) show the view from the ground. Then we can look at Steve Dilworth's art and read what Iain Sinclair says about him.

Awe-inspiring.

Cloud Refactoring

A draft review of “Cloud refactoring: automated transitioning to cloud-based services” by Young-Woo Kwon and Eli Tilevich, from Automated Software Engineering, (2014) 21:345–372, DOI 10.1007/s10515-013-0136-9; to appear in Computing Reviews.

Refactoring is the process of changing how a program works without changing what it does, but even before the term was coined, it was practised by programmers as "program restructuring" from the early 1950s. Refactoring itself came to prominence with work by Opdyke, Griswold and Johnson in the early 1990s and was popularised by Fowler's 1999 book. Refactoring is done to improve code for a range of reasons: to make it conform to coding standards, to make it easier to read, or to prepare the code for modification or enhancement.

Whatever the case, refactorings of large code bases can be infeasible without automated – or semi-automated – tools, and many IDEs incorporate refactorings for a variety of languages, although support is perhaps most developed in the IDEs for the Java language, including IntelliJ and Eclipse. Refactoring "in the small" is the preserve of the developer, and may be done as a part of his or her day-to-day development process; larger-scale refactoring is often anticipated, but perhaps harder to justify as part of an aggressive release cycle, unless, of course, there is some concrete gain to be made. What better example could there be of this than migrating an existing system to the cloud?

Taking a system and moving it to the cloud must be the right thing to do: it provides scalability and resilience, and also fits the zeitgeist. However, as the authors make very clear, it is not without difficulties. It is important to preserve functionality – the system should not change what it does – but the system should also maintain non-functional properties like efficiency (e.g. latency and throughput); and questions of resilience become more acute in a distributed/cloud setting. In common with many other refactoring tool builders, the authors propose a two-pronged approach to the problem: first, they give an analysis to provide recommendations of how systems might be refactored, and then they develop a tool to implement the refactorings identified. Their first phase combines static clustering analysis with runtime profiling to identify potential services which can be migrated, and their second implements refactorings that make this change, introducing the appropriate interfaces and proxies and at the same time adding fault-handling facilities to deal with the additional risks of a more distributed application platform.
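To make the second phase concrete, here is a minimal sketch of the shape of such a refactoring – my own illustration, in Haskell for brevity rather than the authors' Java/Eclipse tooling, and with the remote call only simulated. A direct local computation is put behind a service interface, and the cloud-side proxy adds the retry-based fault handling that a distributed setting demands:

    {-# LANGUAGE ScopedTypeVariables #-}
    module Main where

    import Control.Exception (SomeException, try)

    -- the abstract interface that the refactoring introduces
    class Service s where
      compute :: s -> Int -> IO Int

    -- the original in-process implementation
    data Local = Local
    instance Service Local where
      compute _ x = pure (x * 2)

    -- a proxy standing in for a cloud-hosted instance
    data RemoteProxy = RemoteProxy { endpoint :: String }
    instance Service RemoteProxy where
      compute p x = retrying 2 (callRemote p x)

    -- placeholder: a real system would make an RPC to the endpoint here
    callRemote :: RemoteProxy -> Int -> IO Int
    callRemote _ x = pure (x * 2)

    -- simple fault handling: retry a failing action a bounded number of times
    retrying :: Int -> IO a -> IO a
    retrying n act = do
      r <- try act
      case r of
        Right v -> pure v
        Left (e :: SomeException)
          | n > 1     -> retrying (n - 1) act
          | otherwise -> ioError (userError (show e))

    main :: IO ()
    main = compute (RemoteProxy "http://example/svc") 21 >>= print

Client code written against the Service interface is untouched by the move from Local to RemoteProxy – which is exactly the behaviour preservation that makes this a refactoring rather than a rewrite.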

These refactorings are included in Eclipse and available through its standard refactoring workflow, and the work is demonstrated through two small examples and a larger case study performed for GE Research. One of the key questions faced by the designers of a cloud refactoring tool and its users alike is how much of the refactoring workflow should be automated. In discussing a number of examples the authors say that "we selected appropriate classes for the reason of the performance, call-by-reference, and meaning of features", thus making clear the role of the domain-aware engineer in the process. It would have been interesting to hear more of the developers' own view of the results of the analysis: were the services identified meaningful to them? Would they have structured things somewhat differently from the tool's analyses?

In summary, this paper makes a clear and well-described contribution to what is a fast-moving field: Google Scholar, for example, gives over 1000 references matching the keywords "cloud" and "refactor", and this number can only grow as migrating to the cloud proves to be more of a challenge than its advocates suggest.

Saturday, July 19, 2014

In honour of Robert Wyatt

Picture the scene … you're having an evening in the pub, with the usual music in the background: guitars, 4-4 bass beat, when suddenly it changes: it's a song with more complicated rhythms and dissonant notes. What is more, it's sung in a lugubrious style by someone who sounds like a real person, rather than an identikit American drawl. The singer is Robert Wyatt, our honorary graduand today, and picking music to catch your attention like this has come to be called "Wyatting" in his honour.

Canterbury is known historically for the cathedral, and Chaucer’s Canterbury Tales, but for those of us who grew up in the 60s and 70s, it was also the home of the Canterbury scene – musicians whose careers started out in the city in the late 60s. The scene may have been no more than a group of boys whose families had eclectic record collections … record collections that included 20C classical – Hindemith but “nothing German … not long after the war” – and jazz – Max Roach and John Coltrane, perhaps – but those boys went on – jointly and separately – to be a key part of the creative explosion that was the 1960s, and Robert was there at its centre.

Right from the start, Robert was one of a very select group – others are Justin Bieber, Dave Clark, Karen Carpenter, Levon Helm and Ringo Starr – of singing drummers. As he tells it, drumming was something you could do while listening to a Max Roach record, and what's more you could get started by drumming with rolled up newspapers. You could also sing along, too, and teach yourself to be a musician. All of this meant that at school – the Simon Langton Boys School, here in Canterbury – Robert was rather left behind, and instead he became one of the Wilde Flowers, who played in Canterbury and further afield for a couple of years. That is “Wilde” with an “e”, in honour of Oscar – an example of the wordplay that “lets words collide” running through all Robert’s work.

In 1966 Robert helped to form Soft Machine, with David Allen (from Herne Bay, and introduced to Robert as “someone else who has grown their hair long”), Kevin Ayers and Mike Ratledge. Soft Machine never managed to break through to the rock or pop mainstream, but were hugely influential, first for their gentle English surrealism, and later for their cool, stripped-down, fusion jazz rock. They cut their teeth – and sharpened up their act – playing support to Jimi Hendrix in a long 1968 tour of the USA. Robert observes: when you're in front of 5,000 young Texans waiting to hear Hendrix, you don't mess about … it also helps if you don't have a guitarist in the band! This tour brought them American success, which was mirrored on the continent, where their jazz style found a sympathetic ear. In the UK, they became the first rock group to play at the Proms.

Soft Machine broke up, reformed to make their landmark “Third” album – 2 LPs with four side-long pieces – and finally Robert left them for good. He formed a larger group – Matching Mole – a bilingual pun on “machine molle” / soft machine (groan!), but not long after that, in 1973, an accidental fall from a window left him with injuries that mean he now uses a wheelchair. This took him – in his words – into just another way to be, and it meant that he had time to slow down and think about how to sing. His first record after that – Rock Bottom – set the pattern for his work since: slow songs, striking tunes and rhythms, usually based on keyboard melodies. A reworking of the Neil Diamond song I'm a Believer – first sung by the Monkees – gave him his first hit, but that is only one side of his work: he's a great musical collaborator, and played with a huge range of people through the 1970s and 80s, from free jazzers like Carla Bley to avant-gardists and more traditional rock stars, often with a political message. His most remembered song from that time is his version of Elvis Costello's Shipbuilding: an elegiac meditation on the Falklands War and its effect on the parts of the country that had been hit hardest by the Thatcher government's policies.

Looking back over Robert's musical career – which started at around the same time as the University of Kent, and shares its 50th anniversary – it is hard to think of more than a handful of musicians who have been able to keep their music as vital and original as it was when they began. His recent work has seen collaborations with the Brodsky Quartet – also honorary Kent graduates – and Brian Eno, and a re-recording of a set of songs by the electronica group Hot Chip.

Recognition for his work has come in many forms: he has been a guest editor of Radio 4's Today programme, he is an honorary doctor of the University of Liège, and he is a petit fils ubu of the Collège de Pataphysique. Locally, he is celebrated in a life-size stencil by the street artist Stewey in Canterbury's Dover Street, on the site of the Beehive club where he played at the start of his career.

In its turn, the University of Kent would like to record its appreciation of Robert's work. For his musical achievements and influence over the last fifty years, most honourable deputy vice-chancellor, to you and to the whole university I present Robert Wyatt to be admitted to the Degree of Doctor of Music, honoris causa.

[Text of the oration for Robert Wyatt's honorary degree award, Canterbury, 18-7-14].

Thursday, July 3, 2014

Fifteen facts about EPSRC

Thanks very much to Alex Hulkes for his visit and presentation about EPSRC in general and their ICT programmes in particular. Here are fifteen things that I learned.
  1. EPSRC has about £2.5bn worth of live projects at any time.
  2. “We have to do applied research …” because it is part of EPSRC's Royal Charter.
  3. Terminology: EPSRC has two kinds of theme: “capabilities” correspond to specific research areas, while “challenges” are cross-cutting themes like energy or global uncertainty.
  4. Terminology (2): for EPSRC “interdisciplinary” means belonging to more than one of their research areas.
  5. 75% of PGR funding is DTP plus CASE, and so not subject to particular shaping or direction.
  6. Pathways to impact: either say how you will achieve impact, or say that it doesn’t make sense for your research to have impact (at this point). 
  7. It’s good if you can say how your research project fits in with what EPSRC are currently funding, as long as it’s not a straight duplication of work that’s already funded.
  8. Developing leaders (fellowships): while it’s important to have a good research project, that’s a necessary rather than a sufficient condition: you need to be able to convince the panel that you are a research leader.
  9. These schemes are less popular than earlier fellowships schemes, perhaps because of the difficulty of getting evidence of leadership potential down on paper.
  10. In ICT, EPSRC wants to keep developing and encouraging new research areas. It also wants greater collaboration with other areas.
  11. It’s also keen to get ambitious proposals: of the funding for responsive – rather than directed – research, some 50% of the cash is in longer/larger grants.
  12. Proposals will get at least 3 reviews for panel.
  13. There's no correlation between the number of reviewers and the success/failure of the proposal.
  14. The PI response is important: the factual response has an effect when (and only when!) you can provide evidence that shows that an objection doesn't hold.
  15. Success rates (at least for ICT) are constant across areas set to grow / stay constant / shrink. On the other hand the “grow” area has received more applications.

Tuesday, June 17, 2014

Review of "Certified Programming with Dependent Types"


Chlipala's text Certified Programming with Dependent Types is an outstanding introduction to how programs can be guaranteed to be correct by means of the Coq theorem prover – programs that are, in his terminology, “certified”. While machine-assisted proof has been possible for more than a quarter of a century, it is only in the last five years that a substantial body of fully-formal proofs has been delivered. These include mathematical results – such as Gonthier's proof of the four colour theorem – and also results in the domain of theoretical computer science. Indeed, the “POPLmark challenge” (http://www.seas.upenn.edu/~plclub/poplmark/) has set a benchmark for proof mechanisation in language metatheory. So, this text is timely in providing an accessible introduction to the area; but what is it that makes it stand out?

Firstly, Chlipala gives an excellent introduction to the area, explaining the background of the different approaches to theorem proving in systems such as ACL2 and PVS among others, as well as the Coq system that is the subject of the book. Complementing this is a discussion of the logics implemented by the various systems. He argues cogently for a theory that supports dependent types, under which the result types of functions can depend upon the values of inputs. Dependently typed systems support a dual interpretation: objects can be seen as values belonging to types, or equivalently as proofs of propositions – the so-called ‘Curry-Howard’ isomorphism. Allowing the interpretations to co-exist gives a type system that can express program pre-conditions, or alternatively a logic in which a functional programming language can be used to build proofs. Coq implements a powerful dependently typed theory that has a functional programming foundation, proof checking in a secure kernel (according to the ‘de Bruijn principle’), advanced proof automation through the tactic language Ltac, and a principle of reflection.
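Coq's dependent types go well beyond anything Haskell offers, but a fragment of the idea can be rendered in Haskell notation (my toy example, not the book's Coq): the type of a vector records its length, so the type of vhead expresses – and the type checker enforces – the precondition that its argument is non-empty.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Zero | Succ Nat

    -- a list whose type carries its length
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Zero a
      VCons :: a -> Vec n a -> Vec ('Succ n) a

    -- applicable only to non-empty vectors: no run-time check is needed
    vhead :: Vec ('Succ n) a -> a
    vhead (VCons x _) = x

    main :: IO ()
    main = print (vhead (VCons (1 :: Int) VNil))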

Secondly, Chlipala opens the exposition in Chapter 2 with a worked set of examples that concentrate on language evaluators and compilers. Rather than covering all the necessary material first, he plunges into the exposition, giving a survey of what is possible, and saying that “it is expected that most readers will not understand what exactly is going on here”. Does this approach work? It is clear that anyone reading the chapter needs to understand a language like Haskell or ML, but with that knowledge it is possible to gain a good sense of how the system is used in practice, and so I would personally endorse it. After all, it's possible to skip on to Chapter 3 and follow a sequential approach if this proves too forbidding.

Thirdly, in contrast to some introductions, the book promises a “pragmatic” approach to proof construction or engineering. This is welcome, since Chlipala acknowledges that Coq is a large system that has grown in complexity over the last twenty years. Does he deliver on his promise? The book is divided into four main sections: the first two cover the fundamental technical material, namely “basic programming and proving” and “programming with dependent types”, in some 250 pages, but the remaining two sections (130pp in total) cover “proof engineering” and “the big picture”, and it is in these two that he is able to cover the pragmatics of proof. He covers not only common approaches like logic programming in proof search but also the problems of larger-scale proof, such as the evolvability and robustness of proofs. This is forcefully communicated through a set of “anti-patterns” that militate against well-structured and evolvable proofs, together with strategies for avoiding them.

The book doesn’t contain exercises, but these can be found on the book's website, contributed by readers. The website also provides access to the full text of the book and the Coq code for all the chapters, together with supporting libraries of code and tactics. It is very clearly written, and the author has a direct, approachable style. If you want to find out more about using the Coq system for building realistic, large-scale proofs, particularly for certifying programs, then I recommend this highly.

Saturday, May 10, 2014

PROWESS mid-term workshop

I'm just back from a trip to Borås in Sweden, where we held a one-day workshop for the PROWESS project to showcase what it has achieved, just over half way through the project. The aim of PROWESS is to bring the benefits of property-based testing to testing web services. We're a consortium of universities from the UK (Sheffield and Kent), Sweden (Chalmers) and Spain (A Coruña, UPM); the Swedish research organisation SP; and SMEs from those three countries too: Erlang Solutions (UK), Quviq (Sweden) and Interoud (Spain).
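For anyone meeting property-based testing for the first time: a property is a universally quantified statement about the code under test, from which the tool generates and runs hundreds of random test cases. In Haskell QuickCheck – the project itself builds on Quviq's Erlang QuickCheck, but the principle is the same – a property looks like this:

    import Test.QuickCheck

    -- reversing a list twice gives back the original list; quickCheck
    -- tests this on 100 random lists, shrinking any counterexample found
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice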

As well as attendees from the project we had more than twenty others from companies in Sweden, Denmark, the UK and the Netherlands and universities in Denmark, Sweden, the UK and Brazil. The day went very well, with project members like myself being pleased to see how the separate strands of the work are coming together, and that the interactions we're having informally are turning into practical tools.

More importantly, we got positive reports back from the external attendees too, who were able to give us really helpful suggestions about how we could extend and build on what we had done. The prize for the coolest talk has to go to Benjamin Vedder, whose demo of fault injection for a quadcopter using a QuickCheck model stole the show. [OK, the fault injection only happened in a simulation – to the relief of the front row – but it was still fun to see Erlang QuickCheck used to inject faults into C++ embedded systems!]
Benjamin's quadcopter

The morning programme gave an overview of the project and property-based testing in QuickCheck, as well as introducing the VoDKATV platform from Interoud, which provided a case study running through the more detailed presentations in the afternoon.
The afternoon gave participants a chance to talk in more detail about specific technical innovations in the project:
  • Inference of state machines from QuickCheck traces – Kirill Bogdanov, University of Sheffield
  • Automating Property-based Testing of Evolving Web Services – Huiqing Li, University of Kent, and Laura Castro, University of A Coruña
  • Fault injection – Benjamin Vedder, SP
  • More-bugs – how to not find the same bug over and over again – Ulf Norell, Quviq
  • A Property-based Load Testing Framework – Diana Corbacho, Erlang Solutions Ltd, and Clara Benac Earle, UPM, Madrid
  • Smother: Extended code coverage metrics for Erlang – Ramsay Taylor, University of Sheffield
  • Automatic complexity analysis – Nick Smallbone, Chalmers University
The full programme for the day, including talk abstracts, is here; and the slides for the presentations are here.

I'd recommend this kind of open day event to anyone planning a complex research project, as it gives you all a chance to get some invaluable external insights into the work that you're doing. It's also a real fillip to the project to see the tangible progress that has been made, and to enthuse project members to do even better in the second half.


Tuesday, March 25, 2014

Writing your thesis and preparing for your viva

Writing your thesis


The first question to ask yourself is "who am I writing for?" OK, you're writing for yourself and your supervisor, but you and (s)he know the work too well. Think instead of the examiners, and of others who will read it now and in the future.

Sticking with the examiners for the moment, you should realise that they want to pass you: you make it simpler for them if they can find all the evidence they need as easily as possible. In fact, this is just as applicable when you're writing papers in general, and particularly when you're drafting a research grant application. So, here are some specific points:
  • write it as you would say it: that means keep the language straightforward … just because it's written, there's no need to use longer words, or more complicated constructions;
  • if in doubt, include it: it's not a problem to read something that you know (within reason) but it is a problem to come across a TLA that you don't understand;
  • give a clear statement of the contribution of the work: what have you done that goes beyond the state of the art (which you will explain in a "related work" chapter) … it's also worth saying where to find those contributions in the thesis (by giving section cross-references);
  • if some of the work has already been published in jointly authored papers, it's very good for you to give a clear statement of your contribution to the papers;
  • make it easy to read the thesis by including the right signposting, which means that anyone reading the thesis knows precisely what you do, where; in particular, I'd advise you to include:
    • an overview of the whole thing in the introduction
    • a summary at the beginning of each chapter, section by section
    • and a conclusion at the end summarising the contribution of the chapter
To get another perspective on what to aim for in writing, you can look at the criteria for the CPHC/BCS Distinguished Dissertations competition, which aims to reward the best theses in CS in the UK. This prize looks for
  • whether the thesis makes a noteworthy contribution to the subject, and what that contribution is;
  • the standard of exposition;
  • how well it places its results in the context of computer science as a whole; and
  • how easily a computer scientist with significantly different interests would be able to grasp its essentials.
As you're writing there are people who will give you feedback. At Kent someone from your supervision panel will help, as well as your supervisor; it's worth checking early on who from your panel will do this.

A cardinal rule in getting feedback is to ask for feedback only on things that are final, albeit final first drafts. There's nothing worse than giving feedback on something and then being told "oh yes, I was going to do that" … it should have been done (preferably), or at least there should be a note to the effect that it's still to do. Why do I say this? The first reading is always the most effective … when you read for a second time, your reading is inevitably going to be shaped by the results of the first.

Finally, make sure that you've fixed the problems pointed out first time before you ask someone to read it again!

Preparing for your viva


First, make sure you check when it is, and keep in touch with the internal examiner and supervisor about what they need you to do specifically to prepare for the viva.

Candidates are often (usually?) asked to give a summary of the main contributions of the thesis at the start of the viva: this could be a 10 minute oral presentation (no slides) or (sometimes) one with slides … either way, it's worth being prepared, and checking with your supervisor about this.

Read the thesis again in the days before the viva, so that you're familiar with what is where … it will probably be a few months since you submitted it, and so it will have gone out of your mind. While you're reading through, it makes sense to keep track of any typos you find, so that you can fix them in the final version; you can then go armed with errata lists, either for your own use or to share with the examiners.

Will your supervisor be present? The Kent rules are that they are only there with your explicit permission, and this is entirely up to you. At other universities the rules might be different, and other people might be there – an independent chair of the proceedings, for example – or the viva might be recorded. You might also like to invite your supervisor to be present after the viva is over, if there are corrections to be discussed once the decision has been made.

Another way of preparing is to give a seminar to your research group: that will help you think about what your contribution is, and also how the parts of the thesis fit together, something that may well be a question in the viva itself. I've heard of people having a mock viva, but I'm not sure that's such a good idea … each viva will be different, and preparing by reading and giving a seminar should be enough.

Wear something comfortable … and take some water with you.

Probably you'll have access to a whiteboard, but in case not, it's useful to take along some paper if you want to draw diagrams or otherwise explain things visually or symbolically.

Once the viva gets going,
  • think before you answer … it's no problem to consider what you want to say before you say it;
  • if you don't understand a question, ask for clarification … if it's not clear to you then it may well not be clear to the other examiner either, and
  • if things seem to get stuck in a discussion of a particular point, politely ask to move on.
Finally, some common questions that get asked,

  • as I said earlier, examiners often begin by asking you to give a summary of the contribution that your work makes,
  • and this might be followed by going into some more depth about how the bits of the thesis fit together – it is supposed to be a "body of work" and it's worthwhile thinking about how you would explain that in your case;
  • other questions might ask about your knowledge of work by others in this field,
  • or get you to talk in detail about "your argument on p34" or "your proof on p31" … it's not all high-level and context, but will engage with the details of what you have written, and how you justify it.
So, good luck with the writing and the viva itself!

Monday, March 24, 2014

The ideal graduate

The School of Computing has a stakeholder panel, made up of representatives from companies and other organisations that we work with. Its job is to give the school strategic advice on its teaching, research, enterprise and public-facing activities in general. In a recent meeting we discussed what it is that panel members look for in the interns, placement year students and graduates that they recruit.

One of the distinguishing features of computer science at Kent is that about eighty percent of students - over 100 this year - take a year's industrial placement between their second and final years. It's therefore not a hollow promise to say that their degree overall is a single package, delivered jointly by the school and its industrial partners. We teach what it makes sense to learn in the classroom, while our partners are able to give students the experience that can only come from working on real projects for real customers.

So, what is it that this selection of our graduate employers look for?

  • First, it goes without saying that strong technical skills are required: in particular, our students need to be able to program. Not all CS graduates are competent programmers (!) - and many employers will test applicants' programming competence. Theory and maths were discussed too. But what other attributes do they need?
  • Curiosity and the drive to learn. It's clear that all our graduates will be working in a quickly changing environment, and so will have to keep up with new technologies and applications. When presented with a situation, the best graduates will ask "why?", and try to find out the answer. They will likely have an agenda of what it is they want to learn, and what skills they want to develop. As a school, we therefore need to make sure that we help our students to be independent learners, both individually and together in groups.
  • Approach. It's not enough to roll up your sleeves and start coding … nor is a system finished when it compiles without errors and passes a handful of tests. Those who plan their work with pencil and paper before getting into the details are likely to perform much better. Similarly, systems need to be unit tested, but whole applications also need to be exposed to potential users, and tested to destruction. This leads on to …
  • Empathy. Testing is just one aspect of this. There are end users who'll need to interact with what a developer does, and having some sense of their needs, and of how they will use the system, isn't an optional extra: it takes listening skills, and an understanding of the context of the work – what drives this: money? user engagement? Others may well work with challenging groups, for example clients of social care, each with a particular approach and needs.
  • The flip-side of this is being able to tell a story, or have a "value proposition". We can all suffer from this – I certainly have colleagues who can write research applications that are all about a great solution, where it's not clear what problem, if any, it is solving. So, being able to appreciate context, and to explain an idea from a different point of view, is key.
  • More traditionally, I guess, are project skills: team working, appreciating the lifecycle (there's testing again), agile processes (our students, and students in general, seem to cling to waterfall models), and general infrastructure skills. A nice angle on testing was "what will be the impact of this software on the system when I integrate it, in terms of functionality, performance and scalability?" – it's not just a matter of meeting those few unit tests.
  • Many employers see students come in wanting to be developers, while ultimately looking for wider ambition. Employers are also looking for diversity in their recruits, not just in terms of gender, but across all dimensions of difference.
Looking at this list it's easy to see that some of these are skills which students will acquire through their placement year, or in the Kent IT Consultancy, but there's a clear challenge there for academic departments in how they develop students as independent learners, curious about the world in general and tech in particular. Experiencing failure – in a "safe" environment – would also really help, as can immersion in a topic: a one-week intensive MSc project some years ago was one of the most positive teaching experiences I have had … that also reminds me of teaching Open University summer schools – a year's worth of university experience in a week!

Thursday, March 13, 2014

Monadic Functional Reactive Programming

Reactive programming - programs that interact with the "outside world" - presents a problem to the pure functional programmer: how do you program in the traditional style in the absence of updatable state? But problems can also be opportunities, and in this case the opportunity is to rethink the problem from a different perspective. The result is "functional reactive programming", often abbreviated to FRP, originated by Conal Elliott and Paul Hudak.

The essence of FRP is to think declaratively, or in other words to think about how to make a mathematical model of the problem domain. The key insight is then to see that the functional world offers just the right set of abstractions to do this. A continuously varying quantity - such as the position of an object in an animation (or indeed in a robot world) - is given by a function from time to coordinates. Continuous quantities can trigger discrete events, for example when a position exceeds a certain value, and conversely events can trigger changes in continuous quantities - such as the rebound of a (simulated) billiard ball from a (simulated) cushion. So, FRP gives very general - and declarative - models of hybrid systems.
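The textbook model can be written down in a few lines of Haskell – a sketch of classic FRP as it is usually presented, not the particular library of the paper under review:

    type Time = Double

    -- a behaviour is a function of time
    newtype Behavior a = Behavior { at :: Time -> a }

    -- e.g. the height of a ball dropped from 100 metres
    height :: Behavior Double
    height = Behavior (\t -> 100 - 0.5 * 9.8 * t * t)

    -- the first sample time at which a predicate holds, found by naive
    -- sampling: exactly the "desperately inefficient" implementation
    -- discussed below
    firstTime :: (a -> Bool) -> Behavior a -> Time
    firstTime p b = head [t | t <- [0, 0.01 ..], p (b `at` t)]

    main :: IO ()
    main = print (firstTime (<= 0) height)   -- roughly when the ball lands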

So far, so good. We have a very powerful model of a very general set of systems, but how is this to be implemented? We are in a functional language, so we can describe a naive implementation, but sadly that is desperately inefficient, and so the history of the last decade or so has been one of searching for sufficiently efficient implementations that don't lose too much of the original power and elegance of the framework. One approach is to make everything discrete, which is the approach of the Lustre language, but keeping as close to true continuity as possible, at least as an abstraction, has proved to be more of a challenge. Elliott has provided a number of implementation ideas, as has Nilsson; the paper here provides a new approach in which a monadic interface is designed for FRP.

Under the monadic approach, continuous behaviours are allowed to terminate (and also to begin) and so there is a natural sequencing operation between them. As well as giving a more familiar API to the programmer, the author claims that this allows a clean - i.e. declarative - yet efficient implementation of the system. While it is too early to give a definitive judgement on this - we should wait to see what the take-up is - it may be that finally FRP has come into its own with this work. If so, we can look forward to a whole set of new libraries for high-level reactive programming in interface design, graphics, web programming and more. Watch this space …
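To give a rough guess at the flavour of "behaviours that terminate" – emphatically my own sketch, not the paper's actual interface – here is a toy monad in which stepping a behaviour with a time sample either continues it or ends it with a result, so that bind provides the natural sequencing:

    import Control.Monad (ap)

    type Time = Double

    newtype Reactive a = Reactive { step :: Time -> Either (Reactive a) a }

    instance Functor Reactive where
      fmap f m = Reactive $ \t ->
        either (Left . fmap f) (Right . f) (step m t)

    instance Applicative Reactive where
      pure  = Reactive . const . Right
      (<*>) = ap

    instance Monad Reactive where
      m >>= f = Reactive $ \t -> case step m t of
        Left m' -> Left (m' >>= f)
        Right a -> step (f a) t

    -- wait until a deadline has passed, returning the time observed
    after :: Time -> Reactive Time
    after d = Reactive $ \t -> if t >= d then Right t else Left (after d)

    -- drive a behaviour with a stream of sample times
    run :: Reactive a -> [Time] -> Maybe a
    run _ []     = Nothing
    run m (t:ts) = either (`run` ts) Just (step m t)

    main :: IO ()
    main = print (run (after 1 >>= \_ -> after 2) [0, 0.5 .. 3])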

This review of Monadic Functional Reactive Programming, from the 2013 Haskell Symposium, was drafted for Computing Reviews, but too long for their liking. So, I have posted it here instead :-(

Tuesday, February 18, 2014

Why I lecture

Sometimes it's a real joy to give a lecture: the topic is right, and you have managed to get the slides to say what you want without clutter or confusion. This happened to me today, in giving my final lecture for a while in our “Functional and Concurrent Programming” course at Kent. I wanted to pull together all that we'd covered about functional programming, and do this by treating a “meaty” example. I chose language processing: here's the lecture recording and here are the slides.

Why choose language processing? It’s central to a CS course: even if our graduates are unlikely to write a compiler, there’s a much higher chance of writing something that will manipulate XML, or a domain-specific language, or build scripts, or test code … the list goes on. It's also a place where the combination of immutable data, expressive structured types, a highly developed lists library, and an interactive development loop works really well together.
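For a flavour of why the combination works so well, here is a toy of my own (not the lecture's example): a language is just an algebraic data type, and processing it is pattern matching.

    -- an arithmetic language and its evaluator
    data Expr = Lit Int
              | Add Expr Expr
              | Mul Expr Expr

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    main :: IO ()
    main = print (eval (Add (Lit 2) (Mul (Lit 3) (Lit 4))))   -- prints 14

Adding a parser, a pretty-printer or an optimiser is then just a matter of writing more functions over the same type.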

Thursday, January 30, 2014

Planning redux

I can't resist posting a link to my previous post on planning after receiving a 9-page document instructing us how to write departmental plans.

OK, 3 pages are templates, so that leaves only 6 to describe what should itself be a maximum of 6 pages, plus tables … of course, this should also take into account the university's institutional plan, 20 lavishly-illustrated pages.

Transition to grumpy old man complete, I think …