Wednesday, December 12, 2012

What makes a good plan?

Having spent quite a bit of last weekend writing a research plan, I've gone back to the question of “what makes a good plan?”. I think there are two principles that identify the best plans … I wonder how well mine measures up?
  • You should be able to summarise it on the back of a postcard, or in a 30 second soundbite. That probably means that there are three or four key points, rather than thirteen or fourteen, or even thirty or forty.
  • The real decisions in the plan are those whose negation would be a sensible choice. So, saying that we choose to do the best possible research, or that we aim to publish in the highest-quality journals, is pretty much content-free … what plan would aim not to do those things?
So, how would I summarise the research plan in three points?
  • Aim to increase research funding by 30% over three years … a modest aim, but one where we could perfectly well fail if we didn't work to achieve it.
  • Set up a formal industrial panel … in the last decade or so we've dealt with industrial input on an ad hoc basis (apart from the Kent IT Clinic advisory group) … it's time to try to get more synergy with a group of stakeholders: industrialists, educators and alumni.
  • Make sure that we have enough space to house researchers, equipment and so on … this is a real problem at our Medway campus, and will harm our research unless we're able to get the space. [This is plan as negotiating tool I suppose.]
What about the negation test? Well - sadly - I've failed that a few times: “highest quality venues”, “widest possible dissemination”, etc, but at least there are some places, and particularly those mentioned in the three points above, where we're making a definite commitment to doing something rather than something else. Whether or not we're ultimately successful is something I can come back to in 2015 …

Friday, December 7, 2012

Links to make you think, part 2

In my first post on this blog I published a list of links to tweets for the People and Computers course, chosen to give some context to studying computer science.

Here's the second (and final) collection of tweets, in chronological order:


[Embedded tweets: sixth, seventh, eighth, ninth, tenth and eleventh weeks]

Functional Programming in 2012

This was first written for josetteorama as publicity for CUFP 2012, but I thought I would share it here as well …

Functional programming ideas have been around longer than computers: Church’s lambda-calculus was invented in the 1930s as a way of describing computations as functions, at around the same time that Turing was describing what an idealised (human) “computer” could do through Turing machines. The idea of using functions as the primary means of computing fed straight into Lisp and its (LAMBDA …) and into the notion of procedures in Algol 68, but settled into what people now call functional languages in the late 1980s, with the definitions of ML, Haskell and Erlang.

Erlang – which is really a language for high-concurrency, fault-tolerant, robust systems – is based on functional ideas because it’s so much easier to write concurrent systems when every assignment is a single assignment (check out Single Assignment C too). Haskell and ML (and also OCaml, SML, …) are general purpose languages where types do much of the heavy lifting for you: types are inferred from what you write, and if you have got a program that type checks then you’ve ironed out a whole class of errors before running a single instruction.
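
To make the single-assignment point concrete, here's what happens in an illustrative Erlang shell session (the variable names are mine) when you try to "assign" twice:

    1> X = 1.
    1
    2> X = 2.
    ** exception error: no match of right hand side value 2

Once X is bound, a second “assignment” is really a pattern match, and a mismatch fails loudly rather than silently overwriting state that another process might be depending on.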

As I’ve said, these languages were first defined more than twenty years ago, so what are the modern functional programming languages? One answer is Java, and other mainstream languages, which are taking on more and more functional features. Using immutable data comes from the functional world, as did Java generics. Coming soon are closures – yes, that’s right, LAMBDA in Java! So you can do functional programming in Java (and O’Reilly have recently published Functional Programming for Java Developers: Tools for Better Concurrency, Abstraction, and Agility).

But what if you want to try the real thing? A first choice is F#, which puts a functional language into the .NET framework as a first-class citizen, with internationalization, intellisense, and full integration with C# and the .NET libraries. One of the ways of getting started with F# is to use it to explore some of the library APIs, executing calls interactively. Extending Java to give a fuller functional language, as well as Erlang-style concurrency, is Scala. From Scala too, there’s access to the Java libraries, as well as execution on the JVM. Finally, it’s worth taking a look at Haskell, which is functional first, but which provides controlled access to concurrency, IO, exceptions and a wealth of open source libraries.

But, why bother, you might say? One reason is sheer curiosity: you’ll find out what all the fuss is about, and see where ideas like map-reduce come naturally out of the programming style. It also gives you a different way of attacking problems: even if you’re going to be programming in Java, a functional style might well give you a different tool to solve the problem at hand.
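
To see how map-reduce drops out of the style, here's a tiny Erlang sketch (the module and function names are mine):

    %% "Map" each element, then "reduce" (fold) the results.
    -module(mapreduce_sketch).
    -export([sum_of_squares/1]).

    sum_of_squares(Ns) ->
        Squares = lists:map(fun(N) -> N * N end, Ns),
        lists:foldl(fun(S, Acc) -> S + Acc end, 0, Squares).

In the shell, mapreduce_sketch:sum_of_squares([1,2,3,4]) gives 30; the same two-phase shape, distributed across many machines, is exactly map-reduce.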

Another reason is to get a job (really!). How are functional languages getting used? Erlang and Scala support high-throughput message handling and web systems, and are the “secret weapon” behind applications supporting Facebook, Amazon and other systems. Haskell gets used in building domain-specific languages in security, crypto and increasingly in the financial sector, where F# is getting traction too. You’ll see a steady stream of jobs calling for Erlang, Haskell, Scala, F#, as well as for people who are happy using one of these languages and C, or Java or whatever, because functional languages often come in as a solution to part of a problem, rather than solving the whole thing.

Added December 2012: Another sign of functional programming getting traction is Simon Marlow, one of the "two Simons" supporting GHC, moving to Facebook in the new year …

Sunday, December 2, 2012

Tracking our time

The end of a busy week … one of the weeks I've had to record for the TRAC exercise that all English universities go through. When we do this we record all that we have done over the week, categorising things as teaching, teaching prep, funded research, admin etc. Although this is apparently an exercise in transparency, it's not what it seems. We record actual hours (61 in total this week: OK, that's more than usual, it's been a busy one), and then our activities are reported as percentages, deemed to come out of a 37.5 hour week.

It's no wonder that the conclusions drawn from analysing these data are perverse: the normalised figures suggest that we cross-subsidise our research from our teaching funding, when the fact is that we do our research in the time above the 37.5 hours … free overtime, in other words. Don't get me wrong, I really enjoy this job, and it's a privilege to do many of the things that academics do, but the TRAC exercise does a disservice to what we do.
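
To make the distortion concrete, with some illustrative numbers: suppose 20 of this week's 61 hours went on research. TRAC reports that as 20/61 — roughly a third — of a 37.5 hour week, or about 12 notional hours. On paper the research fits comfortably inside the standard week; in reality much of it happened in the overtime that the normalisation has just erased.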

Ok, so what did I actually do this week?

  • Announcing the winners of the BCS/CPHC Distinguished Dissertation award for 2012 at the Royal Society, and then hearing a wonderful Needham Lecture from Dino Distefano of QMUL. We had 22 submissions to this award (from among the 100s of PhDs awarded in CS in the UK each year) and it was a hard job to pick the three we chose: we weren't looking just for first-class academic work, but also for work reported in a way that can command the widest possible audience.
  • Preparing lectures for our new first year course: one on professionalism, and another collection of case studies to discuss in tomorrow's session. I also give a talk on the “Disappearing Computer” for the University's Global Skills award for postgrad students, and that needed revising.
  • Spending a day interviewing for a new research position on the PROWESS project.
  • Finalised the school's submission for the pilot of the upcoming Research Excellence Framework: Kent is piloting the process a year ahead of the actual exercise, and the deadline was noon on the 30th.
  • Worked with colleagues on our upcoming BCS Accreditation visit.
  • Had our first interviews of students for next year. Fascinating discussions with both of them, so I do hope that they choose to come to Kent …
  • Worked with Huiqing Li, my colleague and research collaborator, on our next steps for research projects, as well as the undergraduate project group we're working with. 
  • Gave feedback on one of my colleague's research grant applications … it's looking very strong, and so he'll be submitting it soon. Two more to look at tomorrow, too.



Wednesday, November 28, 2012

$1,000,000,000 and 1 bit

I blogged a while back about the videos that our first year students have made, when we'd asked them to illustrate a measure with appropriate "killer" examples from computing. The two that scored highest in the eyes of the students were these.

First we have $1,000,000,000 ... by Daniela, John, Arben, Alan and Rhys.

Next we have 1 bit ... by Loren, Conor, Jack and Andy.

I hope that you enjoy them as much as we did ...

Tuesday, November 20, 2012

Unanticipated impact …

The latest incarnation of research evaluation in the UK, REF 2014, has taken the radical step of assessing the impact of research, roughly interpreted as "the effects that the work has had beyond academia" (that's my paraphrase; you can get chapter and verse at the REF website). I call it radical because it's untried - apart from a pilot - and counts for 20% of the overall assessment, which you could identify with 20% of the money that's being allocated on the basis of the results of the REF.

Whatever misgivings I might have about the mechanism, it's interesting to try and find out what effect your own work has had, and so we've been looking at what impact Wrangler has had since the project to build this refactoring tool for Erlang began in 2005.

The development of Wrangler was first funded by EPSRC, and then by the EU's Seventh Framework Programme (FP7) in the ProTest and RELEASE projects. This kind of European project encourages industrial/academic crossover, and so one level of impact comes from Wrangler being used by industrial collaborators in these projects, including Ericsson, Quviq and LambdaStream. Moreover, some of them have continued to use and develop the system after the end of the project.

What's fascinating is to see the effect of making the project open source and putting it on github, a community collaboration site. This means that anyone can contribute to the project, rather than it being restricted to the project team. Github provides a great set of tools for visualising what's happening on your project, and for Wrangler the best seem to be the Network Graph, showing all the branches, commits and merges:

[network graph]

and the impact analysis, showing the impact of what individuals have contributed:

[impact analysis graph]

which together show us that we've had a collection of contributors, some of whom we know - for example those adapting the tool to work with other tools - and others who have pitched in to add what they needed - e.g. updating the system to support a parallel make. So, opening up the system in this case seems pretty clearly to have added to its overall impact and take-up, and so we've got some evidence to provide as part of the REF submission …

[One thing that it seems harder to find out in github is the number of downloads, but it looks like the github API provides an answer … more on that anon.]

Thursday, November 15, 2012

"Professionalism" and "professionalism"


One of the things we worry about in university Computer Science departments is how to encourage our students to have a professional attitude to what they do. At Kent we're lucky in having some 70% of our students follow a one year sandwich placement between their second and final years, as well as offering them the chance to work as consultants in the Kent IT Clinic in their final year. However, we'd like to inculcate a professional attitude right from the start, and we're trying that this year in our "People and Computers" module, which covers estimation, communication, group working and argumentation as well as more traditional topics. That's where the videos I was talking about in an earlier post came from too.

At the same time, I sit on the BCS Professionalism Board, which has oversight of the formal professional framework that the BCS supports, not only as a member organisation of the Engineering Council UK, but also in its own right with CITP (Chartered IT Professional) status. The BCS is looking at how it can engage young professionals, including recent graduates of degrees accredited by the organisation, as well as students at university. The aim here is to encourage these young people to become chartered members of the organisation, through CITP, CEng or CSci.

So, we have two agendas, which I'll characterise as "professionalism with a small 'p'" and "Professionalism with a capital 'P'". On the face of it the two look as though they're very different; the challenge is to make them work together. The "small p" challenge is to engage the intellect and interest of the students … something that Anthony Finkelstein has made a case for very clearly already … and this is something that academics should be able to do. But there's more that can be done here, and that's to put students in touch with people at other places in the pipeline.

My experience is that BCS branch members are happy to work with others interested in computing. A specific example from the Kent Branch is that branch members have been involved with mentoring school groups in a heat of the FIRST Lego League run at the University of Kent, as well as acting as judges. Surely we can build on this willingness to get involved to build links between local branch members and students by, for example, mentoring student project groups, or other activities.

What about the young professionals? I suspect that a similar approach would be the right thing here too. The BCS has strong … and very fruitful … links with corporate bodies, but not with SMEs. Paradoxically, the corporate sector is one where companies themselves are best able to support their young employees, whereas the SME sector cannot. So, is there an untapped opportunity for the BCS in supporting young professionals in the SME sector, through mentoring, training and so on? In my experience, the managers of startups and other SMEs would be very appreciative of this … they don't have the resources to do it themselves, but by the same token have staff who would most benefit from this input.

To try to pull the arguments together, maybe a key approach is to encourage young people to mentor other young people. For the mentors it develops a professional skill; for the mentees, it helps their development. And, with this in place, the professionals "with a small p" can be encouraged to become "Professionals with a capital P".


Wednesday, November 14, 2012

Animating Multicore Erlang

As a part of the RELEASE project we've been looking at how to visualise Erlang computations on multicore systems. We've started by extending the Percept tool to Percept2, and you can see a presentation about that, from the Erlang Factory Lite in London here.

We're beginning to look at how we can present real-time results, and as a half-way house we've been experimenting with animating computations. We've made a video that animates run-queue lengths and process migrations for a computation on a 24 core machine (well 12 cores with hyper-threading, in fact). Any comments or suggestions that you have would be very welcome.

Monday, November 12, 2012

Making Videos

In our first year course on “People and Computers”, Sally Fincher and I asked our students to make a set of videos that illustrate the range of units in computer science, including measures of charge, data, distance, frequency, money, power and time, ranging from the tiny (for example 1 nanosecond) to the huge (e.g. 1 exabyte). The idea of this was to build a body of reference points to give students “scaffolding” for the work that they will do in the rest of the course, but we also wanted to encourage their creativity in communicating ideas this way (as well as in more traditional ways like essays). The videos were "low fi", recorded on smartphones or equivalent: we weren't looking for HD quality, but rather inventiveness and style.

Well, we've got the results now, and we were really pleased with them. What struck me was how varied the videos are (we have about thirty of them, produced by groups of three or four), but each has something interesting to say. What approaches did people use? One strategy was to approach the subject historically, giving the history of the inventor of an idea as well as the idea itself, and one ingenious group had one member impersonating the inventor! Taking a historical view allows comparisons with the (recent, or not so recent) past, and that underlines just how quickly computing evolves.

Another line was in comparisons, or analogies. Some were arresting … measuring data by the number of BluRay disks that would be needed to hold it, and the equivalent weight in baby elephants … the weight of iPad that could be bought for 1p … how many hamsters running in wheels it would take to power Google … . Other presentations brought up unfamiliar computing facts: the power consumption of a sleeping laptop … the rate of spread of the I♡U virus … MB versus “advertising” MB.

We saw some nice animations: using walking, running and skateboarding - or running different distances - to illustrate different rates of data transmission, as well as an animation of Wireshark. A number of groups used (sped up) drawing by hand to get ideas across. Some groups introduced themselves, while others remained behind camera commentating, or even using speech synthesis … quite a few had cheesy soundtrack music.

What were some of the memorable images and ideas? Personally I'll remember “if a pea is one bit, then the peas in a pod are a byte”, that you can get 170 Mb of data storage for 1p, and a description of how much data is in the video we're watching (script and image). We'll find out the most popular among the students this time next week, and I hope to post them on YouTube after that.

Thursday, November 8, 2012

Going to a conference …

You never know who you're going to meet when you go to a conference. My colleague Peter Rodgers works on information display using Euler diagrams - intelligent Venn diagrams, if you like - and conferences often remind me of this. Contacts and colleagues inhabit overlapping circles (or ellipses?), and today was no exception. I talked to people at the Erlang Factory in London whose overlaps with mine included:

  • Haskell: enthusiasts who have also come to an Erlang meeting;
  • Student placements: from a company who used to take placement students from Kent;
  • Research: can we do some work together? can they provide a case study?
  • Intern: Purdue engineering student visiting the UK as an intern;
  • Colleagues: Roberto Aloi, who worked on the E-learning KTP and now is working on the RELEASE project, and Francesco, co-author …
  • Friends: I've been around the functional programming community for a while.
While we can maintain existing relationships online, there's nothing yet to replace meeting someone face to face, though a replacement may well have to come, forced by global warming if nothing else.

At the Erlang Factory …

[RELEASE logo]
It's always enjoyable going to a conference to talk about the research work that you're doing. I'm at the Erlang Factory Lite in London today, which is taking place at the Google campus in central London. The Erlang Factories are interesting because they are primarily focussed on practitioners, and have managed to build a very supportive community around putting the Erlang language to work in a variety of projects. I'm just listening to a talk by Cyan Technologies on an RF smart metering system deployed in India, and hearing how Erlang is used within it. They're finishing off now by talking about how Cyan will support the burgeoning area of M2M (machine to machine) … and they are recruiting!

As far as my talk went, I was talking about the Percept2 tool that's being built by Huiqing Li and me to profile Erlang systems, particularly those being deployed on multicore. This work is part of the RELEASE project, funded by the European Commission. In RELEASE we're in the process of developing Erlang to be used in a scalable way in distributed, heterogeneous multicore systems.
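
If you'd like a feel for the workflow, it follows the original percept quite closely: profile a run, analyse the trace, then browse the results over HTTP. A sketch of a typical session (the file name and entry point below are placeholders, and Percept2's exact signatures may differ from percept's, so do check the module documentation):

    %% Assumed Percept2 session, mirroring percept's documented API:
    %% record a profile, analyse it, then view it on localhost:8888.
    percept2:profile("my_run.dat", {mymod, start, []}, [all]),
    percept2:analyze(["my_run.dat"]),
    percept2:start_webserver(8888).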

This is the first time I've talked about this, and we got a set of very good questions from the audience:

  • how scalable is what you do?
  • can you attach / detach Percept2 to/from a running system?
  • how is migration between schedulers related to the degree of parallelism within a system? …cool research question!
So, we've got good ideas for what to do next, plus a research challenge. That's the great thing about research … in the end it's a collaborative thing. Hopefully, too, we've also made some links with people who want to try our stuff out "in anger".

Now time to listen to Ian Barber from Google on linking Google APIs and Erlang …

Monday, November 5, 2012

Starting a research project …

We're one month into the Prowess research project, which follows on from ProTest in looking at property-based testing and web services. At Kent we're looking at a number of things, some coming out of our work on Wrangler, a tool to help people write refactorings for Erlang programs, and others building on work with Quviq and Sheffield University on extracting properties from existing artefacts, like test sets.

Starting a project is a mixture of anticipation and apprehension. It's exciting to have got the go ahead for work that we planned about a year ago, and to be working with a consortium which brings together some Protesters and some new partners. There's apprehension in working out how precisely we'll do what we said we'd do according to plan: can we manage N person-months on work package M at site X? After an afternoon in Brussels a couple of weeks ago, the answer is that yes, we can.

What are we going to be doing at Kent? We'll be extending Wrangler to make sure that it supports refactoring of property-based testing for web services, so that will include working with state-machine models as well as properties. To do this we'll use the extension facilities that we've built into Wrangler, and reported in our paper Let's make refactoring tools user extensible! After a really useful visit by Ramsay Taylor from Sheffield last week, we realise that we'll also be able to use these facilities in supporting mutation testing for Erlang, since our template language is just right for expressing mutations  to programs. Because it's a proper language it's going to be particularly useful for expressing more complicated transformations such as higher-order mutations.
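
To give a flavour of why the template language suits mutation so well: a classic arithmetic-operator mutation can be written as a single rule. The sketch below follows the macro style of the extension API as described in the paper, so treat the details as indicative rather than definitive:

    %% Wherever the left-hand template matches, replace '+' by '-'.
    %% Names ending in @ are meta-variables matching arbitrary subexpressions.
    plus_to_minus_rule() ->
        ?RULE(?T("E1@ + E2@"),
              ?TO_AST("E1@ - E2@"),
              true).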

We're also looking at property extraction, building on existing work to extract machines from test suites written in EUnit, the test framework for Erlang. To help with that we're appointing a researcher to join the team: the job is advertised here and closes on Thursday …
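
To give an idea of the raw material for extraction: an EUnit suite is just a module of test functions like the one below (the stack module is made up for illustration), and each test traces out one path through the implicit state machine that we'd like to recover:

    -module(stack_tests).
    -include_lib("eunit/include/eunit.hrl").

    %% One sequence of API calls; property extraction looks for the
    %% pattern behind many such sequences.
    push_pop_test() ->
        S0 = stack:new(),
        S1 = stack:push(1, S0),
        ?assertEqual({1, S0}, stack:pop(S1)).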

Thursday, November 1, 2012

The Kent IT Clinic

One of the things that many computer science departments struggle with is how to teach the practical aspects of being a professional computer scientist. At Kent we're very successful in running schemes with a year in industry, and typically more than two thirds of our students take a year out between their second and final years with us. They come back transformed, understanding how the principles that we teach them work out in "the real world" as well as having much sharper practical skills, in e.g. time management. Putting these two together it's no surprise that they have a six or seven percentage point advantage on average over the students who haven't taken a year out. A number of other CS departments have sandwich programmes like this, and they all report a similar effect.

Kent has something much more distinctive to offer, too. The Kent IT Clinic (KITC), founded in 2004 and based in the School of Computing at the University of Kent, offers IT consultancy services to companies local to the university’s campuses in Canterbury and Chatham. The industrial landscape in east Kent is made up of SMEs (and indeed micro businesses) and public bodies, and so the majority of the KITC’s clients are SMEs wanting IT services to support their businesses, rather than IT-focussed companies.

The novelty of the KITC is that the consultants are students in the final year of their undergraduate programme or the MSc programme in IT Consultancy. Students in the clinic are mentored in their work by an experienced consultant who takes the role of coordinator of the KITC, and the money received by the KITC in payment for its work goes towards paying the salary of the coordinator, as does a proportion of the students’ fees.

Students are not paid for their work; rather they receive payment in academic credit towards their degree. On setting up the clinic it was anticipated that students would also be able to work in the clinic for payment, but under this model KITC work competed with their credit-bearing work, and commitment to the clinic tailed off as the assessment load grew. The coordinator is responsible for the business side of the clinic, and academic supervision and assessment is separate.

Some of the students in the KITC have undertaken a one year sandwich placement, but others have not. Both groups of students gain from their KITC time, since the experience and skills gained there are complementary to the traditional industrial placement.

  • The students on sandwich placements tend to be ‘small cogs in large machines’, working in a managed environment in a team alongside more experienced permanent staff; by contrast the students in the KITC are ‘large cogs in a small machine’, working alongside peers from the same background in a substantially smaller, less-managed operation.
  • Sandwich students will often work on a single project during their year’s placement; in the KITC students will work on a variety of projects, chosen so that together they satisfy the learning outcomes for the appropriate module.
  • Projects are typically supported by a team of students who need to be self-managing; indeed projects can be somewhat more complex, calling on a delivery group as well as an internal systems team, for instance.
  • The group of students working on a project will get experience of the complete life-cycle of the project, from inception through costing and estimation to delivery, deployment and maintenance. 
  • The costing and estimation prove to be particularly challenging for students, since projects tend to differ from each other to the extent that existing data is of little use; on the other hand previous projects are presented as case studies within the taught modules that support the KITC.  
  • The students gain more exposure to the customer than in a larger organisation. They need to negotiate requirements and costs, and deliver to the customer’s satisfaction: students have learned that this can be somewhat capricious!

A number of challenges have been touched on above; the other principal challenges are discussed now.

  • Business demand is constant, but the presence of students is anything but. A typical undergraduate can spend the period October – March in the KITC, while MSc students are able to devote substantial time during May – August. Students will be part of the clinic for one academic year only.
  • Some months are not covered, nor are (substantial parts of) undergraduate vacations. It is therefore vital to manage the pace of delivery that the KITC can achieve.
  • Students are only in the clinic part-time (typically 10 hours per week during term time) and so it is crucial to manage customer expectations of what they can expect of consultants, e.g. in providing out of hours service.
  • The turnover of students brings its own problems. It is not unusual for a client to want maintenance or extension of a project delivered by a previous cohort; it is at this point that the quality of the documentation and code is really put to the test. Similarly, there is a tendency to over-engineer internal systems, with consequent handover problems.
  • Because the KITC is formally part of the university rather than an autonomous entity, students need to understand ‘professional’ aspects of delivering consultancy, such as writing contracts, indemnity insurance and so forth. The School of Computing has been lucky in the support that it has received in this regard from staff from Kent Innovation and Enterprise (KIE), the university’s industrial liaison arm.

A thriving SME – which is what the KITC is – needs a healthy pipeline of work: some leads forthcoming, some work signed up, some in development and some in delivery. Ensuring this within a small organisation is tricky, and it is particularly problematic with the varying levels of student resource just discussed.

  • Work has come in through a variety of routes: through KIE, word of mouth, repeat business and partnerships with other consultancies.
  • Being clear about what is possible for the KITC and what is not is a difficult call: some of the most successful projects it has undertaken could have been seen as the most risky too, for instance.
  • Ensuring that the KITC has the expertise in place is also a challenge. Students will not necessarily have all the skills at their fingertips, but in some cases successful projects have been delivered on the back of students learning as they work. An expertise database is also valuable, but vulnerable to the students’ differing ability to self-assess.

Still, whatever the ups and downs, the KITC has been a fantastic learning experience for students and staff alike over the last eight years, and has given some two hundred students a chance to see how what they learn in the lecture room translates into business.


Monday, October 29, 2012

Why teach?

After some years when I was not doing so much teaching - I was on research leave and before that head of school - I'm back in the thick of it, and really enjoying it. Why is that? The main reason is that it's great to see people learning new things, particularly when it's stuff that's unfamiliar … logic at master's level, or the "professional context" for first years.

Despite the old joke about the lecture being a mechanism for getting the notes from the lecturer's notepad to the student's without passing through either of their brains, I'm impressed at the interaction going on. Particularly in our new first year course we're trying to turn things into more of a conversation - either directly or using technology (google forms/spreadsheets) - and we're having some success. OK, not everyone is involved all the time, and there have been some tech hiccups, but we're getting there.

Teaching can be a solitary activity, but often the best courses are ones where there's a team of teachers. The first year course is coming from a team of four: two at each campus, and the collaboration makes us more imaginative and daring than we would have been on our own.

I'm learning a lot, too, from students and colleagues. Now I know how to explain the constraints on the implication introduction rule: give a "proof" which breaks them, and so demonstrate precisely what the constraints embody, and why they are there. Thanks to a suggestion from one of our CO334 students, I also now know that using "data validation" in a google spreadsheet will give us much more useful data in interactive estimation exercises.

Teaching something new is the way to learn new things yourself, and to ask new questions: what are the relationships between contract law and software specification? why is resolution not complete? Both are things to find out more about. 

So, it's fun, and as it's the largest part of what the University of Kent is here for, that's a good thing for everyone!




Sunday, October 28, 2012

Lawyers and software engineers

I'm preparing a lecture about computing and the law, and using as a resource the encyclopaedic Computer Law (7th ed.), C. Reed (ed.), OUP, 2011. It's accessible and clear, even for a non-lawyer, but this quote from one section on contracts (section 1.1.2.2) took me aback:
“Many IT projects fail precisely because the parties do not exercise sufficient care to ensure that the supplier’s and the customer’s expectations match. Ensuring that these do … is the key role of the legal adviser in the contract process.”
While I guess that is a key role for the legal adviser, I wonder how much of it she can achieve without strong technical skills. She can certainly ensure that the process is sound: terminology is agreed, all relevant points are discussed one by one, and so on. But when it comes to a discussion of technical issues like feasibility, efficiency and scalability, it's not clear that a legal adviser alone can deliver what's needed. Maybe I'm missing the point, but don't we need software engineers (or other technical experts) to be part of the conversation too?

Saturday, October 27, 2012

Links to make you think

In the first year course People and Computers I am tweeting (with hashtag #co334) one topic a day … usually a link to something else on the web.

In this course we're trying to give students studying computer science some context for what they're doing by looking at the history of computing, how we communicate ideas, and how to estimate solutions to problems when the full data isn't there, as well as looking at more traditional LSEPIs (legal, social, ethical and professional issues) such as computing and the law, intellectual property, privacy and so on.

Here are the tweets so far, in chronological order:

[Embedded tweets: first, second, third, fourth and fifth weeks]