tag:blogger.com,1999:blog-67121798767426044152024-03-13T14:00:38.941+00:00profsjtAnonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.comBlogger54125tag:blogger.com,1999:blog-6712179876742604415.post-43128790073992273322018-12-11T13:57:00.003+00:002018-12-11T13:57:48.677+00:00Marlowe: financial contracts on blockchain<div dir="ltr" style="text-align: left;" trbidi="on">
[From the IOHK blog <a href="https://iohk.io/blog/marlowe-financial-contracts-on-blockchain/">https://iohk.io/blog/marlowe-financial-contracts-on-blockchain/</a>]<br />
<br />
The first computers were programmed in “machine code”. Each different kind of system had a different code, and these codes were low-level and inexpressive: programs were long sequences of very simple instructions, incomprehensible to anyone who had not written them. Nowadays we are able to use higher-level languages like C, Java and Haskell to program systems. The same languages can be used on widely different machines, and the programs’ structure reflects what they do; on blockchain, their equivalents are languages like Solidity and Simplicity. These modern higher-level languages are general purpose – they can be used to solve all sorts of different problems – but the solutions they express are still programs, and they still require programming skills to use them effectively.<br />
<br />
In contrast, Marlowe is a domain-specific language (DSL) designed to be usable by someone who is expert in a particular field – in the case of Marlowe, financial contracts – without requiring programming skills. Using a DSL has many advantages beyond its accessibility to non-programmers:<br />
<br />
<ul style="text-align: left;">
<li>we can ensure that certain sorts of bad programs cannot even be written, by designing those possibilities out of the language;</li>
<li>we can more easily check that programs have the properties that we want: for example, in the case of a financial contract we might want to make sure that the contract can never fail to make a payment that it should;</li>
<li>we can build special-purpose tools to help people write programs in the language. In the case of Marlowe we can simulate how a contract will behave before it is run for real on the system; this helps us to make sure that the contract we have written does what it is intended to.</li>
</ul>
<br />
Marlowe is modelled on financial contract DSLs popularised in the last decade or so by academics and enterprises such as LexiFi. In developing Marlowe we have adapted these languages to work on blockchain. Marlowe is implemented on the settlement layer of the Cardano blockchain, but could equally well be implemented on Ethereum/Solidity or other blockchain platforms; in this respect it is “platform agnostic” just like modern programming languages like Java and C++. The online Meadow simulation tool allows you to experiment with, develop and interact with Marlowe contracts in your web browser, without having to install any software for yourself.<br />
<br />
What does a Marlowe contract look like? It is built by combining a small number of building blocks that describe making a payment, making an observation of something in the “real world”, waiting until a certain condition becomes true, and so on. Where we differ from earlier approaches is in how we make sure that the contract is followed. This means not only that the instructions of the contract are not disobeyed, but also that the participants don’t walk away early, leaving money locked up in the contract forever. We do this using two tools, commitments and timeouts: a commitment requires a participant to “put their money on the table”, and through timeouts we make sure that this commitment happens in a timely manner or remedial action is taken. Putting these constructs together we are able to incentivise participants to continue with the contract once they have committed to it.<br />
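To make the idea of building blocks concrete, here is a toy sketch of a contract language in this style. The constructor names and the <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">committed</span> function are illustrative, not the actual Marlowe syntax:<br />
<br />
```haskell
-- A toy contract language in the Marlowe style (hypothetical
-- constructor names, not the real Marlowe syntax): contracts are
-- built from a few combinators for committing and paying.
type Person  = String
type Amount  = Int
type Timeout = Int

data Contract
  = Null                                    -- nothing left to do
  | Commit Person Amount Timeout Contract   -- lock funds before the timeout
  | Pay Person Person Amount Contract       -- transfer between participants
  deriving Show

-- The total a given participant can be required to commit.
committed :: Person -> Contract -> Amount
committed p (Commit q a _ c) = (if p == q then a else 0) + committed p c
committed p (Pay _ _ _ c)    = committed p c
committed _ Null             = 0

-- Example: Alice commits 100 by slot 10; once committed, Bob is paid.
deposit :: Contract
deposit = Commit "alice" 100 10 (Pay "alice" "bob" 100 Null)
```
Because contracts are ordinary data values, tools such as a simulator or an analysis that bounds each participant’s exposure can be written as plain functions over the contract tree.<br />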
<br />
We’re working on a full release of Marlowe for mid 2019, when it will be available on Cardano SL. At the moment you're able to explore Marlowe for yourself using Meadow, and find out much more detail from our online paper. In the next six months we’ll be polishing the language design itself and developing a set of templates for popular financial instruments, as well as using formal logic tools to prove properties of Marlowe contracts, giving users the highest level of assurance that their contracts behave as intended.<br />
<br />
<b>Links</b><br />
<br />
Research paper on Marlowe: Financial contracts on blockchain: <a href="https://iohk.io/research/papers/#2WHKDRA8">https://iohk.io/research/papers/#2WHKDRA8</a><br />
<br />
The Marlowe project on github: <a href="https://github.com/input-output-hk/marlowe">https://github.com/input-output-hk/marlowe</a><br />
<br />
Meadow: online simulation and visualisation of Marlowe contracts: <a href="https://input-output-hk.github.io/marlowe/">https://input-output-hk.github.io/marlowe/</a><br />
<br /></div>
[2018-08-22] Review of: CoSMed: a confidentiality-verified social media platform<div dir="ltr" style="text-align: left;" trbidi="on">
<i>CoSMed: a confidentiality-verified social media platform</i> Bauereiß T., Pesenti Gritti A., Popescu A., Raimondi F. <i>Journal of Automated Reasoning</i><br />
<br />
What do we mean when we say that a system is verified? In theory it means that certain properties of a system have been proved to hold. In practice, the answer is rather more complicated.<br />
<br />
The first thing to clarify is the form of proof: computer scientists have developed a variety of logics, both general and special purpose, for proofs of properties of correspondingly various kinds. The proofs themselves might be done by pencil and paper or, increasingly, formalized in a computer in some way. Trust in the verification then relies on trusting the proof assistant. This is not the place to go any further down that particular rabbit hole, but anyone interested should read MacKenzie [1] and Nipkow et al.’s paper [2] on the Isabelle proof assistant, a sophisticated and mature system that helps users construct machine-checked proofs “by hand” using a battery of existing libraries and tactics.<br />
<br />
Now that we have a sense of what will be used to perform the verification, we need to identify what will be verified. Typically (and this is the case here) we work with a model of the real system and argue that this represents the essential properties of the real thing. The authors do something subtly different: the model is a state machine, represented by its step function in the internal language of Isabelle, which is a relatively straightforward functional language. They then use the extraction facility of Isabelle to export this model to Scala, so that it is rendered directly into code that can be executed in a (real) web framework, which could in principle be verified itself, although that is not the focus of this work.<br />
<br />
The paper, an extension of a paper published in an established conference series on interactive theorem proving, gives clear explanations of the application domain and the proof, in particular the approach to information flow that the verification uses. Essentially, the authors prove that Alice can only see posts from Bob that were made while they were friends, a relationship that can change dynamically. So, even if the two are currently friends, posts made when they were not remain invisible. Technically, this extends the authors’ earlier work on a similar “conference” system in which roles are statically rather than dynamically defined.<br />
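The dynamic confidentiality property can be sketched in a few lines of Haskell (my notation, not the paper’s Isabelle model; all names are illustrative):<br />
<br />
```haskell
-- Sketch of the CoSMed confidentiality property: a reader may see a
-- post only if reader and author were friends when it was posted.
type Time = Int
type User = String

data Post = Post { author :: User, postedAt :: Time, body :: String }

-- Friendship history as closed intervals during which a pair were friends.
type History = [(User, User, Time, Time)]

friendsAt :: History -> User -> User -> Time -> Bool
friendsAt hist u v t = any matches hist
  where matches (a, b, from, to) =
          ((a, b) == (u, v) || (a, b) == (v, u)) && from <= t && t <= to

-- Visibility depends on the friendship status at posting time, not now.
visible :: History -> User -> Post -> Bool
visible hist reader p = friendsAt hist (author p) reader (postedAt p)
```
With <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">hist = [("alice","bob",5,10)]</span>, a post by Alice at time 7 is visible to Bob, but one at time 12 is not, even though they may become friends again later.<br />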
<br />
The paper gives a clear exposition of the model, the properties proved, and the proof techniques, and I would expect that the work could be reproduced from the descriptions given. What I missed, and what would have added greatly to the value of the paper for others, is an account of how the verification and the modeling interacted. Is it the case that the modeling entirely preceded the verification, so that the model was built correctly from the start, or was it the case that the very process of attempting to verify the system required it to be modified itself? Insights like this would be very valuable to someone embarking upon a similar exercise, and would potentially help readers overcome the considerable learning curve that there is in going from novice to experienced user of systems like this.<br />
<br />
Nevertheless, the paper makes a definite contribution to the growing literature on verified systems, and the work in general is particularly noteworthy because it was done in conjunction with a third-sector organization to support their online social media presence.<br />
<br />
[1] MacKenzie, D. <i>Mechanizing proof: computing, risk, and trust</i>. MIT Press, Cambridge, MA, 2001.<br />
<br />
[2] Nipkow, T.; Paulson, L. C.; Wenzel, M. <i>Isabelle/HOL: a proof assistant for higher-order logic</i> (LNCS 2283). Springer, Berlin, Germany, 2002.<br />
<br /></div>
[2017-10-16] Is Haskell the right language for teaching functional programming principles?<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: inherit;">The “park bench” panel at the <a href="https://skillsmatter.com/conferences/8522-haskell-exchange-2017">Haskell eXchange</a> last week talked about a lot of things, but some of them can be summarised by the question “<i>Is Haskell the right language for teaching functional programming principles?</i>”.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">The point about this is that no one is in disagreement about how good a language Haskell is for doing real work – and that’s underlined by a meeting like the Haskell eXchange, which is primarily for people using Haskell in practice, or wanting to use it. Rather, the question is whether the principles that Haskell is built on get obscured by the fact that it’s a large and complicated language. You can see an example of this at the foot of this post: what you can see there is a perfectly plausible beginner interaction with ghci.</span><br />
<span style="font-family: inherit;"><br /></span>
<div style="font-stretch: normal;">
<span style="font-kerning: none;"><span style="font-family: inherit;">Some alternatives came up: there were advocates for Elm, <a href="http://elm-lang.org/"><span style="-webkit-font-kerning: none; color: #551a8b;">http://elm-lang.org</span></a>, and for PureScript, <a href="http://www.purescript.org/"><span style="-webkit-font-kerning: none; color: #551a8b;">http://www.purescript.org</span></a>, both of which are substantially younger than Haskell. It’s easier to see the attraction of Elm: it’s a lot simpler than Haskell, and so presents fewer impediments to beginners; it has also got a nice <a href="http://elm-lang.org/examples/hello-html"><span style="-webkit-font-kerning: none; color: #551a8b;">online environment</span></a>. What are the downsides? While a number of books are in development (just google …) there aren't the mature texts that Haskell has. Also, it is arguably too tied to the “web development” paradigm, and so will skew learning towards that. </span></span></div>
<div style="font-stretch: normal; line-height: normal; min-height: 19px;">
<span style="font-family: inherit;"><span style="font-kerning: none;"></span><br /></span></div>
<div style="font-stretch: normal;">
<span style="font-kerning: none;"><span style="font-family: inherit;">PureScript also looks interesting, but perhaps more for developers than learners? While Elm is definitely simpler than Haskell, and will remain so, PureScript seems to be on a path to replicating as much of Haskell as possible in the JavaScript world, so there seems to be nothing in particular to be gained from making a switch. But to be clear, I’d be interested to hear arguments for PureScript that I might have missed … </span></span></div>
<div style="font-stretch: normal; line-height: normal; min-height: 19px;">
<span style="font-family: inherit;"><span style="font-kerning: none;"></span><br /></span></div>
<div style="font-stretch: normal;">
<span style="font-kerning: none;"><span style="font-family: inherit;">There’s a trend of fitting new syntax to old languages. Reason, <a href="https://reasonml.github.io/"><span style="-webkit-font-kerning: none; color: #551a8b;">https://reasonml.github.io</span></a>, has done this for OCaml, and Elixir, <a href="https://elixir-lang.org/"><span style="-webkit-font-kerning: none; color: #551a8b;">https://elixir-lang.org</span></a>, for the Erlang VM. The languages are intended to be more approachable for those with experience of JavaScript and Ruby, but also benefit from their youth: libraries are less sprawling, and contributions more convergent; on the flip side, they can re-use all the previous developments and libraries for their host languages too. Still, neither seems to be an obvious answer for learners, and, of course, they are both impure and strict!</span></span></div>
<div style="font-stretch: normal; min-height: 19px;">
<span style="font-family: inherit;"><span style="font-kerning: none;"></span><br /></span></div>
<div style="font-stretch: normal;">
<span style="font-family: inherit;"><span style="font-family: inherit;">A final point: why not go for subsets of Haskell, like the Dr Racket approach, <a href="https://racket-lang.org/"><span style="-webkit-font-kerning: none; color: #551a8b;">https://racket-lang.org</span></a>? It would be interesting to see development of different preludes: most of the existing initiatives (just google …) seem to be designed to enhance the standard prelude, but working in the opposite direction might allow us to replicate the <a href="https://hackage.haskell.org/package/helium"><span style="-webkit-font-kerning: none; color: #551a8b;">helium</span></a> environment, for example. Another suggestion (due to Dominic Orchard) is to be able to close certain classes, so that there’s no way that an instance of </span><span style="font-family: "courier new" , "courier" , monospace; font-stretch: normal; line-height: normal;">Num</span><span style="font-family: inherit;"> can be declared for </span><span style="font-family: "courier new" , "courier" , monospace; font-stretch: normal; line-height: normal;">Int->Int</span><span style="font-family: inherit;">, thus avoiding a set of error messages unhelpful to beginners.</span></span></div>
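The point about closing classes can be made concrete. Because Haskell classes are open, an instance like the following is perfectly legal, and that openness is exactly why GHC must report “No instance for (Num (Int -&gt; Int))” rather than a plain type error when a beginner accidentally mixes functions and numbers:<br />
<br />
```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Nothing stops this instance being declared: Haskell classes are open.
instance Num (Int -> Int) where
  f + g         = \x -> f x + g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)

-- With the instance in scope, a likely beginner slip type-checks and
-- silently means pointwise arithmetic: the literal 1 becomes `const 1`.
oops :: Int -> Int
oops = succ + 1          -- oops x = succ x + 1
```
Closing <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Num</span> would let the compiler rule such instances out once and for all, and report a direct, beginner-friendly error instead.<br />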
<div style="font-stretch: normal; min-height: 19px;">
<span style="font-family: inherit;"><span style="font-kerning: none;"></span><br /></span></div>
<div style="font-stretch: normal;">
<span style="font-kerning: none;"><span style="font-family: inherit;">What’s the best way forward? It’s hard to know. Maybe we should stick with the system we have, and for which we have some idea about teaching … but it would be interesting to see how moving to Elm worked in practice in delivering effective functional programming teaching.</span></span></div>
<div style="font-stretch: normal; min-height: 19px;">
<span style="font-family: inherit;"><span style="font-kerning: none;"></span><br /></span></div>
<div style="font-stretch: normal;">
<span style="font-kerning: none;"><span style="font-family: inherit;"><i>Full disclosure: </i>at Kent, after teaching Haskell for many years, we currently teach a compulsory second-year course on “functional and concurrent programming” using Erlang as the main vehicle, but introducing a little Haskell. An optional final year course on “programming language implementation” builds on a small but complete compiler written in OCaml. It’s possible to get started with Erlang quickly because it’s syntactically very lean, both for functional and for concurrent programming; the major disadvantage is that it’s substantially more difficult to do type-driven development.</span></span></div>
<div style="font-stretch: normal; min-height: 19px;">
<span style="font-family: inherit;"><span style="font-kerning: none;"></span><br /></span></div>
<div style="font-stretch: normal;">
<span style="font-kerning: none;"><span style="font-family: inherit;">There is some more material on the web comparing elm and PureScript (and others): </span></span></div>
<ul>
<li style="color: #551a8b; font-stretch: normal; margin: 0px;"><span style="font-family: inherit;"><a href="https://alpacaaa.net/blog/post/elm-purescript-in-depth-overview/"><span style="-webkit-font-kerning: none;">Lessons learned porting a game from Purescript to Elm</span></a><span style="color: black; font-kerning: none;">.</span></span></li>
<li style="color: #551a8b; font-stretch: normal; margin: 0px;"><span style="font-family: inherit;"><a href="http://mutanatum.com/posts/2017-01-12-Browser-FP-Head-to-Head.html"><span style="-webkit-font-kerning: none;">Selecting a platform: JavaScript vs Elm vs PureScript vs GHCjs vs Scalajs</span></a><span style="color: black; font-kerning: none;"> </span></span></li>
</ul>
<br />
<span style="font-family: inherit;">Exploring the </span><span style="font-family: "courier new" , "courier" , monospace;">length</span><span style="font-family: inherit;"> function …</span><br />
<span style="font-family: inherit;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-ltsuv0jksHM/WeCZpCah-7I/AAAAAAAAAWQ/Ej2gK6Ijx9krA0VlysgrarKZPxftT3-oQCLcBGAs/s1600/Screen%2BShot%2B2017-10-13%2Bat%2B11.43.07.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><span style="font-family: inherit;"><img border="0" data-original-height="1506" data-original-width="1484" height="640" src="https://2.bp.blogspot.com/-ltsuv0jksHM/WeCZpCah-7I/AAAAAAAAAWQ/Ej2gK6Ijx9krA0VlysgrarKZPxftT3-oQCLcBGAs/s640/Screen%2BShot%2B2017-10-13%2Bat%2B11.43.07.png" width="627" /></span></a></div>
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;"><br /></span>
</div>
[2017-08-17] Review of Graham Hutton's Programming in Haskell, 2e<div dir="ltr" style="text-align: left;" trbidi="on">
Graham Hutton's <i><a href="http://www.cs.nott.ac.uk/~pszgmh/pih.html">Programming in Haskell</a></i> is a leading textbook on the functional programming language Haskell that first came out in 2007, and it is instructive to see how its second edition reflects what has changed over the last decade. A particular strength of the first edition is its brevity – it comes in at some 150 pages – and the second edition doubles that length. Even taking into account a less compressed layout, this probably reflects some 50% more material, which comes in the form of more examples, more exercises, some sample solutions and coverage of brand new topics.<br />
<br />
The book is now structured into two parts: “Basic Concepts" and “Going Further". With a little rearrangement the first part is made up of chapters from the first edition (augmented as noted above) and covers the basics of functional programming in general: function definitions, recursion, types and higher-order functions; as well as more Haskell-specific aspects: list comprehensions, type classes and, of course, the GHC system. The section is rounded off with a capstone example: a “Countdown" program, that shows how Haskell is used to solve a particular concrete problem. Hutton writes very clearly throughout, and illustrates his exposition with a well-chosen set of examples.<br />
<br />
Knowledgeable readers may well have spotted that two of the most distinctive aspects of Haskell – monads and lazy evaluation – are not included in the introductory half of the book. Hutton is not alone in doing this: other authors of Haskell texts [2], and (I should declare an interest) myself [1], have taken the same position in concentrating first on the essential aspects of <i>functional</i> programming, only looking at the particulars of how it works <i>in Haskell</i> after that.<br />
<br />
While some parts of the second half – lazy evaluation and reasoning about functional programs – are treated much as before, and there is an entirely new chapter on “calculating compilers" that gives readers a glimpse of Hutton's research interests, the major changes from the first edition – both pedagogical and technical – come in the second half of the book.<br />
<br />
The first edition hardly treated monads beyond introducing the type class, and using the do notation in describing interactive programs, whereas the second edition foregrounds monads much more, and presents parsers as an extended example of monads. A change like this would have been entirely foreseeable even at the point when the first edition was written, but other updates reflect much more recent changes in the Haskell ethos. Haskell has taken on a much more explicitly “algebraic" flavour, reflected by a range of new type classes that embody patterns of computation inspired by category theory.<br />
<br />
Just as monads can be seen to embody the idea of "effectful" (potentially side-effecting) computation, "applicatives" capture the notion of applying a pure function to effectful arguments. Similarly, "monoids" are abstract structures in which elements can be combined by a binary operation with a unit (think multiplication and 1), and these can then be "folded" over collections of values, just as a set of numbers can be multiplied together. This new material is also very clearly and concisely presented, and gives an approachable introduction to more abstract ideas.<br />
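Both ideas can be shown in a couple of lines (standard Haskell, not taken from the book):<br />
<br />
```haskell
import Data.Monoid (Product(..))

-- Applicative: apply the pure function (+) to two effectful arguments;
-- here the "effect" is possible failure, modelled by Maybe.
total :: Maybe Int
total = pure (+) <*> Just 3 <*> Just 4      -- Just 7

-- Monoid: Int under multiplication with unit 1, folded over a collection,
-- just as a set of numbers can be multiplied together.
productOf :: [Int] -> Int
productOf = getProduct . foldMap Product
```
If either argument to the applicative computation is <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Nothing</span>, the whole result is <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Nothing</span>: the effect propagates while the function itself stays pure.<br />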
<br />
Haskell is "growing old disgracefully": it was first defined in 1990, and for twenty years or so was seen primarily as an academic, research-oriented language. The last decade has seen it grow in popularity as a practical language for program development from startups to financial multinationals. The language has also moved on, and taken up the categorical / algebraic approach more fully, and it is good to see this reflected in Hutton's new edition and <i>HaskellBook</i> [2]; other authors will need to do the same.<br />
<br />
To conclude, Hutton's revisions of <i>Programming in Haskell</i> have taken the best aspects of the original – conciseness, clarity and approachability – and built on this foundation to accommodate how Haskell has evolved. I recommend it highly.<br />
<br />
<br />
[1] Simon Thompson, Haskell, The Craft of Functional Programming (3ed), Addison-Wesley, 2011. <a href="http://www.haskellcraft.com/craft3e/Home.html">http://www.haskellcraft.com/craft3e/Home.html</a><br />
<br />
[2] Christopher Allen, Julie Moronuki, Haskell Programming from First Principles, Gumroad, 2017. <a href="http://haskellbook.com/">http://haskellbook.com/</a><br />
<br />
[To appear in Computing Reviews.]</div>
[2016-09-07] Monitoring Distributed Systems – Lessons from the RELEASE project<div dir="ltr" style="text-align: left;" trbidi="on">
The <a href="http://www.inf.usi.ch/faculty/hauswirth/publications/CU-CS-1042-08.pdf">observer effect</a> tells us that we cannot observe a system without disturbing it; on the other hand, without observation we are unable to correct, evolve or optimise systems. In this post we first discuss the general question of observing distributed systems, and then we look at some of the ideas and prototypes for more effective monitoring that come out of the <a href="http://www.release-project.eu/">RELEASE</a> project, which was focussed on building scalable distributed systems in Erlang.<br />
<br />
<h2 style="text-align: left;">
Monitoring</h2>
<br />
We begin by exploring what it means to monitor a distributed system, how it is done, and the uses to which monitoring data can be put. Somewhat loosely, we will use the term “monitoring” to cover the activities of monitoring, observation, tracing and visualisation: what they have in common is that they extract information from a system beyond what the system must produce in order to function – information about how it is performing that function.<br />
<br />
After setting out the various dimensions of monitoring, we conclude by discussing the ways in which monitoring can load a system. This sets the scene for a discussion of how to mitigate this loading, in the context of Erlang distributed systems.<br />
<br />
<h3 style="text-align: left;">
Why monitor?</h3>
<br />
We monitor to make sure that a system is delivering what it should. This might be a matter of ensuring that it can maintain performance over time, as unread messages accumulate in mailboxes, perhaps. It could mean observing the way in which (eventual) consistency is managed, so that the system is able to respond appropriately despite there being disruptions to infrastructure. It could also have a more active purpose: to allow the system to adapt to changing circumstances, either through real-time reactive behaviour, or through longer term code evolution.<br />
<br />
<h3 style="text-align: left;">
What are we monitoring?</h3>
<br />
In a distributed system two kinds of activity can be observed: first there are data which come from a single node, such as CPU usage, or trace messages to particular function calls; there are also data arising from the traffic between different nodes.<br />
<br />
As the examples just given show, these data can be event-based, such as log messages about function calls or other system events; or they can be observations, such as CPU usage. While it is in the user’s control how often to observe a property of the system, trace messages will be generated according to how the system is behaving, and so the rate and scale of their production is not under user control.<br />
<br />
Different levels of monitoring are possible, too. Simple data, such as CPU or memory usage, can be gathered for all systems, whereas in many cases the most useful data is application-aware, more directly reflecting the semantics of the system that is being observed.<br />
<br />
<h3 style="text-align: left;">
When do we monitor?</h3>
<br />
Monitoring can be online: information about a system is available in real-time, so that problems can be identified and rectified as a system runs. A drawback of this approach is that data analysis is limited by the fact that it must produce results on the basis of what has happened up to a point during execution. It must also do this in a timely way, using a limited amount of data.<br />
<br />
On the other hand, more complex and complete analyses can be produced <i>post hoc</i> using data generated during the system execution. The two are combined in a hybrid approach, in which offline analyses are repeatedly generated for portions of an execution.<br />
<br />
<h3 style="text-align: left;">
How do we react?</h3>
<br />
It may be that the results of monitoring are presented to an operator, and it is up to her to decide how to react to what she sees, for instance by reconfiguring or rebooting parts of a system. It is also possible to design automated systems that use monitoring data: these range from a simple threshold trigger, fired for example by CPU loading reaching a particular level, to a fully autonomic system.<br />
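A threshold trigger is easy to sketch; the version below (in Haskell, with illustrative names – a real deployment would use the monitoring framework’s own alarm mechanism) fires only when the sampled metric stays above the limit for k consecutive samples, so transient spikes are ignored:<br />
<br />
```haskell
-- Fire when a sampled metric (e.g. CPU load in [0,1]) exceeds `limit`
-- for `k` consecutive samples; isolated spikes do not trigger it.
fires :: Double -> Int -> [Double] -> Bool
fires limit k = go 0
  where
    go run _ | run >= k = True        -- sustained breach: trigger
    go _   []           = False       -- samples exhausted without one
    go run (s:ss)
      | s > limit = go (run + 1) ss   -- breach continues
      | otherwise = go 0 ss           -- breach broken: reset the count
```
So <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">fires 0.8 3 [0.9, 0.5, 0.9, 0.9]</span> is False: the dip below the limit resets the count.<br />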
<br />
<h3 style="text-align: left;">
The burden of monitoring</h3>
<br />
Monitoring requires data to be produced by the system, and for these data to be shipped around the system to a central monitoring point where storage, analysis and presentation can take place. We’ll look at these in more detail once we have discussed a particular concrete case: Erlang/OTP.<br />
<br />
<h2 style="text-align: left;">
Erlang</h2>
<br />
Erlang is a concurrent, distributed, fault-tolerant programming language that comes with the Open Telecom Platform (OTP) middleware library. In distributed Erlang each node runs a copy of the BEAM abstract machine and runtime: separate nodes can be run on the same or different hosts. The “out of the box” Erlang distribution model provides full connectivity between all nodes; but this can be modified by manual deployment and connection of “hidden” nodes, which are unconnected by default. Erlang nodes can also be partitioned into global groups, with each node in at most one group. (Note: there is current work on improving the scalability of distributed Erlang: <a href="http://www.elixirconf.eu/elixirconf2016/zandra-norman">link</a>).<br />
<br />
<a href="http://www.sciencedirect.com/science/article/pii/S0743731516000034">SD-Erlang</a> is an adaptation of Distributed Erlang designed to be more scalable by limiting all-to-all connectivity to groups (called “s_groups”). These groups may overlap, thus providing indirect connectivity between nodes in different s_groups. SD-Erlang also allows non-local spawning of processes on nodes determined by various attributes including measures of proximity.<br />
<br />
<h3 style="text-align: left;">
Improving monitoring </h3>
<br />
In this section we look at a number of ways of improving monitoring of distributed systems; these are illustrated by means of tools built to monitor Erlang and SD Erlang systems, but the lessons are more generally applicable.<br />
<br />
<a href="http://release-project.softlab.ntua.gr/documents/D5.4.pdf">SD-Mon</a> is a tool designed specifically to monitor SD-Erlang systems; this purpose is accomplished by means of a “shadow” network of agents, which collect and transmit monitoring data gathered from a running system. The agents run on separate nodes, and so interfere as little as possible with computations on system nodes; their interconnections are also separate from system interconnects, and so transmission of data gathered does not impose an additional load on the system network.<br />
<br />
SD-Mon is configured automatically from a system description file to match the initial system configuration. It also monitors changes to the s_group structure of a system and adapts automatically to the changes, as illustrated here:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-jbKButiLU3I/V9AWnJXEb3I/AAAAAAAAATM/VI3XyhFFB7URsIbZwpXxG3BCSQW9XtRngCLcB/s1600/s-group-monitor.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://3.bp.blogspot.com/-jbKButiLU3I/V9AWnJXEb3I/AAAAAAAAATM/VI3XyhFFB7URsIbZwpXxG3BCSQW9XtRngCLcB/s320/s-group-monitor.png" width="232" /></a></div>
<br />
<h3 style="text-align: left;">
</h3>
<h3 style="text-align: left;">
Filter data at source: BEAM changes</h3>
<br />
The Erlang BEAM virtual machine is equipped with a sophisticated tracing mechanism that allows trace messages to be generated by a wide range of program events. Tracing in this way typically generates substantial data, which can be filtered out downstream. We were able instead to modify the tracing system itself to filter some messages “at source” and so avoid the generation and subsequent filtering of irrelevant data. These changes make the data gathered for <a href="https://kar.kent.ac.uk/34875/1/percept2.pdf">Percept2</a> more compact.<br />
<br />
<h3 style="text-align: left;">
Enhanced analytics: Percept2</h3>
<br />
Percept2 provides <i>post hoc</i>, process-level, visualisation of the working of the BEAM virtual machine on a single Erlang node. When a node runs on a multicore device, different processes can run on the different cores, and Percept2 enhances the analysis of the original Percept in a number of ways (Percept is a part of the Erlang distribution). For instance, it can distinguish between processes that are potentially runnable and those that are actually running, as well as collecting runtime information about the dynamic calling structure of the program, thus supporting application-level observation of the system. Multiple instances of Percept2 can be run to monitor multiple nodes.<br />
<br />
<h3 style="text-align: left;">
Improved infrastructure: Percept2</h3>
<br />
The <i>post hoc</i> analysis of Erlang trace files for the visualisation in Percept2 is computationally intensive, but is substantially speeded up through parallel processing of the logs using concurrent processes on a multicore machine. Logging of some trace information is also performed more efficiently using OS-level tracing in <a href="http://dtrace.org/blogs/about/">DTrace</a>.<br />
<br />
<h3 style="text-align: left;">
Network information: Devo / SD-Mon</h3>
<br />
Existing monitoring applications for Erlang concentrate on single nodes. In building <a href="http://release-project.softlab.ntua.gr/documents/D5.4.pdf">Devo</a>, which visualises inter-node messaging intensity in real time, it was necessary to analyse Erlang trace information to find inter-node messages; Devo and SD-Mon also show information about messaging through nodes that lie in the intersection of two s_groups, thus providing the route for inter-group messages.<br />
<br />
<h3 style="text-align: left;">
Monitoring and deployment: WombatOAM</h3>
<br />
<a href="https://www.erlang- solutions.com/products/ wombat- oam.html">WombatOAM</a> provides support for Erlang and other distributed systems that run on the BEAM virtual machine. As well as providing a platform for deploying systems, it also provides a dashboard for observing metrics, alerts and alarms across a system, and can be integrated with other monitoring and deployment infrastructure. WombatOAM is a commercial product from Erlang Solutions Ltd, developed on the RELEASE project.<br />
<br />
<h2 style="text-align: left;">
Conclusions</h2>
<br />
The aim of this post has been to raise the general issue of monitoring advanced distributed systems and, <i>inter alia</i>, to illustrate some of the challenges in the context of monitoring distributed Erlang systems.<br />
<br />
We are very grateful to the European Commission for its support of this work through the RELEASE project: EU-ICT Specific targeted research project (STREP) ICT-2012-287510. Thanks also to Maurizio Di Stefano for his contributions to this post.<br />
<div>
<br /></div>
</div>
Trustworthy Refactoring project (2016-05-10)<div dir="ltr" style="text-align: left;" trbidi="on">
<h3 style="text-align: left;">
<span style="font-family: trebuchet ms, sans-serif;">Research Associate Positions in Refactoring Functional Programs and Formal </span><span style="font-family: "trebuchet ms", sans-serif;">Verification</span><span style="font-family: trebuchet ms, sans-serif;"> (CakeML)</span></h3>
<div>
<span style="font-family: trebuchet ms, sans-serif;"><br /></span></div>
<div>
<span style="font-family: trebuchet ms, sans-serif;">The Trustworthy Refactoring project at the University of Kent is seeking to recruit postdoc research associates for two 3.5 year positions, to start in September this year.</span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;">The overall goal of this project is to make a step change in the practice of refactoring by designing and constructing of trustworthy refactoring tools. By this we mean that when refactorings are performed, the tools will provide strong evidence that the refactoring has not changed the behaviour of the code, built on a solid theoretical understanding of the semantics of the language. Our approach will provide different levels of assurance from the (strongest) case of a fully formal proof that a refactoring can be trusted to work on all programs, given some pre-conditions, to other, more generally applicable guarantees, that a refactoring applied to a particular program does not change the behaviour of that program. </span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;">The project will make both theoretical and practical advances. We will build a fully-verified refactoring tool for a relatively simple, but full featured programming language (CakeML <a href="https://cakeml.org/">https://cakeml.org</a>), and at the other we will build an industrial-strength refactoring tool for a related industrially-relevant language (OCaml). This OCaml tool will allow us to explore a range of verification techniques, both fully and partially automated, and will set a new benchmark for building refactoring tools for programming languages in general. </span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;">The project, which is coordinated by Prof Simon Thompson and Dr Scott Owens, will support two research associates, and the four will work as a team. One post will focus on pushing the boundaries of trustworthy refactoring via mechanised proof for refactorings in CakeML, and the other post will concentrate on building an industrial strength refactoring tool for OCaml. The project has industrial support from Jane Street Capital, who will contribute not only ideas to the project but also host the second RA for a period working in their London office, understanding the OCaml infrastructure and their refactoring requirements.</span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;">You are encouraged to contact either of the project investigators by email (s.a.owens@kent.ac.uk, s.j.thompson@kent.ac.uk) if you have any further questions about the post, or if you would like a copy of the full research application for the project. We expect that applicants will have PhD degree in computer science (or a related discipline) or be close to completing one. For both posts we expect that applicants will have experience of writing functional programs, and for the verification post we also expect experience of developing (informal) proofs in a mathematical or programming context.</span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;">To apply, please go to the following web pages:</span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;">Research Associate in Formal Verification for CakeML (STM0682): <a href="https://jobs.kent.ac.uk/fe/tpl_kent01.asp?s=4A515F4E5A565B1A&jobid=40106,3472764668&key=47167934&c=549534472123&pagestamp=sejmwzlocjpwfyyfak">Link</a></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;">Research Associate in Refactoring Functional Programs (STM0683): <a href="http://www.jobs.ac.uk/job/AUE849/research-associate-in-refactoring-functional-programs/">Link</a></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms", sans-serif;">Simon and Scott</span></div>
<div>
<br /></div>
</div>
What feedback should a programming MOOC provide? (2015-07-09)<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span>
<span style="line-height: 22px;"><br /></span></div>
<div style="text-align: left;">
<span style="line-height: 22px;">What sort of feedback models can be used in a MOOC and other online learning activities? In this note we look at the different styles of giving feedback before looking at automated feedback in a little more detail, and come to a conclusion about what is essential in the next iteration of the Erlang MOOC.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span></div>
<h3 style="text-align: left;">
<span style="line-height: 22px;">Different feedback styles</span></h3>
<br />
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">One mechanism is to assign a <b>Teaching Assistant</b> (TA) to perform the work. The advantage of this is that the TA can tune their feedback to the particular piece of work, and the individual submission for that work. Feedback can be given on partial or incorrect programs, and can cover all aspects of the program’s behaviour. This can also be given ay any stage during the assessment, and not only when a completed solution is available.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">One limitation is that it can be impossible to give human feedback alone on a large assessment, for which some testing data and other automated analyses may be necessary to provide the TA with suitable diagnostic information. A more fundamental difficulty is that this is difficult to scale, or to provide in a timely way to suit the participants’ different speed of working. So, while this can work for a small, focussed, group, isn’t feasible for the sort of numbers expected in a MOOC (or even in a university class of 150).</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">Participant <b>self-assessment</b> provides one route to scaling. The model for this is to give the student a comprehensive mark scheme and marking instructions, and to ask them to apply that scheme to their own work. This can then be moderated by a TA or other teacher. This approach has the pedagogical advantage that it gives the student a chance to gain insight into their own work through applying the marking scheme, but it can only be used on final or complete solutions to assessments, and certainly can’t be used to help students towards a final solution.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">A variant of this is <b>peer assessment</b>, under which pairs of students are asked to give feedback to each other on their work. Again, this has a pedagogical advantage of allowing students to gain further insights into how work is assessed, and in giving them a better perspective on their work through seeing another’s. It can also be used to give students support during the development of the solution, and can indeed become a form of peer programming. </span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">There are two variants of the peer assessment approach: the pairs can be assigned “statically” in advance of the work being done, or the process can be more dynamic than that, assigning the pairing only when the two have each made a submission. It is obviously a scalable solution, though without the consistency of having a single marker; it can be supported with appropriate materials, including marking schemes etc., just as for self-assessment.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">The final option, and the one that we look at in more detail in the next section, is providing some form of <b>automated feedback</b> to students. Of course, it it possible to combine these modes – as hinted earlier – supplementing some form of person to person feedback with data from a variety of automated tests and analyses.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span></div>
<h3 style="text-align: left;">
<span style="line-height: 22px;">Automated feedback</span></h3>
<br />
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">On the whole, a pre-requisite for giving feedback is that the program unit – call it a module here – needs to be compilable. Without that it is difficult to give any meaningful automated feedback. Though difficult, it is not impossible, and some sorts of lexical-based analysis are possible even for programs that are incomplete or contain errors. Nevertheless, the rest of this section will assume that the program can be compiled successfully.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span>
<span style="line-height: 22px;"><br /></span></div>
<h3 style="text-align: left;">
<span style="line-height: 22px;">What kind of feedback can be provided using some sort of automated analysis?</span></h3>
<br />
<ul style="text-align: left;">
<li>Static checks</li>
<ul>
<li>type checks</li>
<li>compiler warnings and compiler errors</li>
<li>abstract interpretations and other approximate analyses, e.g. dead-code analysis</li>
</ul>
<li>Style analysis</li>
<ul>
<li>decidable e.g. lengths of identifiers</li>
<li>undecidable (requires approximation), e.g. the module inclusion graph.</li>
<li>intensional properties: e.g. this is / is not tail recursive</li>
</ul>
<li>Tests</li>
<ul>
<li>hand-written unit tests, performed by the user</li>
<li>unit tests within a test framework; can be performed by the user, or the system</li>
<li>integration tests of a set of components, or of a module within context</li>
<li>these may be tested against mocked components</li>
<li>user interface testing</li>
</ul>
<li>Properties</li>
<ul>
<li>logical properties for functions</li>
<li>property-based testing for stateful APIs</li>
<li>both of these can be accompanied by “shrinking” of counter-examples to minimal ones</li>
</ul>
<li>Non-functional properties</li>
<ul>
<li>efficiency</li>
<li>scalability</li>
<li>fault-tolerance (e.g. through fuzz testing)</li>
</ul>
</ul>
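As a concrete instance of the “unit tests within a test framework” item above, a minimal EUnit sketch follows; <code>fib/1</code> stands in for a hypothetical participant submission. Run it with <code>eunit:test(fib_tests)</code>.

```erlang
%% Minimal EUnit sketch: tests for a hypothetical fib/1 submission.
%% Including eunit.hrl auto-exports the *_test functions.
-module(fib_tests).
-include_lib("eunit/include/eunit.hrl").

fib(0) -> 0;
fib(1) -> 1;
fib(N) when N > 1 -> fib(N - 1) + fib(N - 2).

fib_base_test()  -> ?assertEqual(0, fib(0)).
fib_small_test() -> ?assertEqual(8, fib(6)).
```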
<div>
<br />
<br /></div>
<h3 style="text-align: left;">
<span style="line-height: 22px;">What role does testing play in online learning?</span></h3>
<span style="line-height: 22px;"></span><br />
<ul style="text-align: left;"><span style="line-height: 22px;">
<li>Confirming that a solution is (likely to be) correct or incorrect.</li>
<li>Pointing out how a solution is incorrect.</li>
<li>Pointing out which parts of a solution are correct / incorrect.</li>
<li>Pointing out how a solution is incomplete: e.g. a case overlooked.</li>
<li>Assessing non-functional properties: efficiency, scalability, style etc</li>
</span></ul>
<span style="line-height: 22px;">
<div>
<br /></div>
</span><br />
<h3 style="text-align: left;">
<span style="line-height: 22px;">Rationale for using automated feedback</span></h3>
<span style="line-height: 22px;"><br /></span>
<div style="text-align: left;">
<span style="line-height: 22px;">Reasons in favour of using automated feedback, either on its own or as one of a number of mechanisms.</span></div>
<ul style="text-align: left;">
<li><b>Timeliness</b>: feedback can be (almost) immediate, and certainly substantially quicker than from a peer reviewer.</li>
<li><b>Comprehensiveness</b>: can do a breadth of assessment / feedback which would be infeasible for an individual to complete.</li>
<li><b>Scalability</b>: in principle it scales indefinitely, or at least to the limit of available resources; if the evaluation is done client-side then there is no resource cost at all.</li>
<li><b>Consistency</b> of the feedback between different participants.</li>
<li><b>Complements</b> peer interactions: these can then concentrate on “why did this fail this test?”</li>
</ul>
<span style="line-height: 22px;"></span><br />
<div>
<span style="line-height: 22px;"><span style="line-height: 22px;"><br /></span></span></div>
<span style="line-height: 22px;">
There are, however, drawbacks to using automated feedback.<br /><ul style="text-align: left;">
<li>Can miss the <b>high-level feedback</b> – covering “semantic” or “pragmatic” aspects – in both positive and negative cases.</li>
<li><b>Restricted</b> to programs that can be compiled or compiled and run. </li>
<li>Difficult to <b>tie errors to faults</b> and therefore to the appropriate feedback.</li>
</ul>
</span><span style="line-height: 22px;"><div>
<span style="line-height: 22px;"><br /></span></div>
There is a cost to automating testing, as it needs to be implemented, either on the client side – typically within a browser – or on a server. There is a potential security risk if the testing is performed server-side, and so some sort of virtualisation or container will generally be used to test programs that could potentially interact with the operating system. On the server side, any technology can be used, whereas on the client it will typically use whatever facilities are provided by the browser.</span><br />
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span></div>
<h3 style="text-align: left;">
<span style="line-height: 22px;">Recommendations</span></h3>
<br />
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">At the base level we should provide two modes of feedback when the Erlang MOOC is run next.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;"><b>Dynamic peer feedback</b>, where pairs are matched dynamically as final submissions are made. This allows solutions that are pretty much complete – and hopefully correct – can receive feedback not only on correctness but also on style, efficiency, generality etc. This gives qualitative as well as quantitative feedback, but requires at least one other participant to be at the same stage at roughly the same time.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;"><b>Unit tests and properties</b> should be provided for participants to execute themselves. While this is less seamless than testing on the server side, it gives participants freedom to use the tests as they wish, e.g. in the course of solving the problem, rather than just on submission.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;"><b>Understanding tools</b>. We should include information about interpreting output from the compiler and dialyzer; some information is there, but should make it clearer.</span></div>
<div style="text-align: left;">
<span style="line-height: 22px;"><br /></span><span style="line-height: 22px;">Other feedback is desirable, and particularly integrating as much of this feedback into the browser would make an attractive user experience, particularly if it is integrated with the teaching materials themselves, à la Jupyter. As an alternative, the feedback support could be integrated with the server, but this will require effort to craft a bespoke solution for Moodle if we’re not yet using a mainstream MOOC platform.</span></div>
<div style="text-align: left;">
<br /></div>
</div>
The Erlang mini-MOOC pilot … what have we learned? (2015-06-14)<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="text-align: left;">
We’ve learned some things from our pilot MOOC: some are surprising and some more predictable. The main places we’ll need to work harder are how we give feedback on homeworks and, more generally, how we engage participants … more about both of those below.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
We’ll also do some more detailed reflection on the quantitative data that we’ve gathered, but that’s for another post when we’ve had time to analyse it, and also given time to those who are still working hard on the pilot to finish up.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
So, in no particular order, here is what we have found out from the experience of planning, producing and running the pilot, which is part of the <a href="http://www.kent.ac.uk/elearning/themes/kmoocs.html">K-MOOCs project at Kent</a>.<br />
<br /></div>
<h3 style="text-align: left;">
Planning is key</h3>
<br />
<a href="http://2.bp.blogspot.com/-UuXh6qFeeHc/VXrfQL44JNI/AAAAAAAAAOs/X3RXI4kVdWg/s1600/IMG_8513.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="240" src="http://2.bp.blogspot.com/-UuXh6qFeeHc/VXrfQL44JNI/AAAAAAAAAOs/X3RXI4kVdWg/s320/IMG_8513.JPG" width="320" /></a>One key thing we learned from <a href="http://www.cc.gatech.edu/people/w-michael-mccracken">Mike McCracken</a> – who has led Georgia Tech’s online activities, including a full online MSc in computer science – is how important it is to plan. Mike's advice was to plan each activity in enough detail so that once we started in production, all that we needed was to make the video, write the quiz, or whatever.<br />
<br />
Here’s some of the planning in progress: you can see that we’ve got three levels: weeks 1 to 3, sections within each week (typically 3 or 4), and sub-sections, which can be anything up to 8.<br />
<br />
<ul style="text-align: left;">
</ul>
<h3 style="text-align: left;">
Planning takes you longer than you expect … but it saves time later.</h3>
<br />
<a href="http://4.bp.blogspot.com/-hQjwFi62kc4/VXrf23UNY_I/AAAAAAAAAO8/RsgvnKeGEAA/s1600/IMG_8517.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-hQjwFi62kc4/VXrf23UNY_I/AAAAAAAAAO8/RsgvnKeGEAA/s320/IMG_8517.jpg" width="320" /></a>Mark O'Connor – the learning technologist and project manager for the K-MOOCs project was also clear about the importance of planning, and he helped us to refine things into a template for describing each activity.<br />
<br />
This took about two weeks: that’s a lot longer than I’d expected, given that much of the material had already been written for an undergraduate course at Kent on functional and concurrent programming. However, the actual production took less time than we’d anticipated, thanks to the value of doing all that planning in advance, so it paid off. Thanks, Mike and Mark!<br />
<h3 style="text-align: left;">
</h3>
<div>
<br /></div>
<h3 style="text-align: left;">
</h3>
<h3 style="text-align: left;">
Lo-fi is OK</h3>
<br />
<a href="http://2.bp.blogspot.com/-QNSuPEBuAkI/VXrfQn-qMRI/AAAAAAAAAOw/fxY3SxM6eqM/s1600/IMG_9144.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-QNSuPEBuAkI/VXrfQn-qMRI/AAAAAAAAAOw/fxY3SxM6eqM/s320/IMG_9144.jpg" width="240" /></a>We recorded the three <a href="http://www.kent.ac.uk/elearning/themes/masterclasses.html">Erlang Master Classes</a> using a serious – I guess semi-professional – studio, and having them filmed and post-produced by <a href="http://www.kershmedia.co.uk/">KERSH media</a> to a very high standard. My master class was part of the pilot, and all three will be part of the full MOOC, but our budget didn’t stretch to making all the videos in a studio.<br />
<div>
<br /></div>
<div>
Instead, we used my office and MacBook Pro … pictured right. The box makes sure that the camera angle is right, and the external keyboard means that you can’t hear all the keystrokes during “live coding” sessions. Apart from making sure that no-one could come in and disturb the session, switching off my mobile, and unplugging the phone from the wall, no special preparation was needed. Once I got into my stride, most of the lectures were done in a single take, with Mark providing some post-recording cutting in case I’d had to start again or paused for too long.</div>
<div>
<br /></div>
<div>
The preliminary feedback seems to suggest that this lo-fi approach is “good enough” for the lectures, which after all are concentrating on getting ideas across effectively, and not aiming to be eye candy.</div>
<div>
<br />
<h3 style="text-align: left;">
Teamwork makes things fly</h3>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-mA5bTxnPvJo/VX2KburnsLI/AAAAAAAAAPc/lHUnJdq9WcY/s1600/MOOC-top-level.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="237" src="http://4.bp.blogspot.com/-mA5bTxnPvJo/VX2KburnsLI/AAAAAAAAAPc/lHUnJdq9WcY/s320/MOOC-top-level.png" width="320" /></a></div>
<div style="text-align: left;">
Getting it right was a matter of teamwork. Mark O’Connor played an invaluable role in project admin in general but more importantly in building the web presence for the MOOC. We were using our own <a href="https://moodle.kent.ac.uk/external/">“external” moodle</a> setup, rather than a specific MOOC platform, and Mark was able to give that a look and feel consistent with what we hope people were expecting.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Stephen Adams – who is doing a PhD at Kent on refactoring Haskell programs – was TA for the course, and separating that role from that of the lecturer seemed to work fine. He had clear responsibilities for supporting the forums and so forth when the course was running, whereas my role was to get the pedagogy and content right in advance.</div>
</div>
<div style="text-align: left;">
<br /></div>
<h3 style="text-align: left;">
The platform was “good enough”</h3>
<br />
<a href="http://1.bp.blogspot.com/-Hlnf9xa7Rwc/VX2Kb61tohI/AAAAAAAAAPo/gvWVbBX0wyI/s1600/MOOC-activities.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="213" src="http://1.bp.blogspot.com/-Hlnf9xa7Rwc/VX2Kb61tohI/AAAAAAAAAPo/gvWVbBX0wyI/s320/MOOC-activities.png" width="320" /></a>So we were using Moodle rather than a dedicated MOOC platform, and this had some disadvantages.<br />
<br />
Dedicated platforms will provide some services, such as transcription of lectures, which we were unable to provide in the pilot. We’ll have to find a way of dealing with this if we don’t move up to using a MOOC platform, and indeed we’ll have to be more aware of questions of accessibility in general.<br />
<br />
We also struggled with how to keep participants up to date with the changes taking place. This is tricky because we decided early on to have a separate discussion forum for each activity: we knew that people would want to go through the materials at different rates, so at any particular time each individual’s interest could be on something different, and we had to group by topic rather than just by time. Ideally we’d like to use some sort of feedback dashboard, but in conversation I’ve heard it said that none of the platforms has a complete answer to this, most likely because it’s a tricky problem.<br />
<br />
Moodle performed better than standard platforms in some ways. At Kent we use <a href="http://panopto.com/">Panopto</a> to power our lecture recording platform (which we used for post production for the MOOC). This allows users to view the media – e.g. slides, screen capture, webcam video – separately, and, for instance, to index into the presentations by means of the slides. This extends what most MOOC providers do as standard.<br />
<br />
<h3 style="text-align: left;">
Publicity</h3>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-qv3L8GAKOqI/VX2OLVEWYRI/AAAAAAAAAPw/qKtjUT8l8Xs/s1600/Tweet.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="260" src="http://3.bp.blogspot.com/-qv3L8GAKOqI/VX2OLVEWYRI/AAAAAAAAAPw/qKtjUT8l8Xs/s320/Tweet.png" width="320" /></a></div>
We’d originally aimed to have fifty to a hundred participants for the pilot, drawn both from CS students at Kent and from outside. At Kent we offered the course as revision for second-year students who had already learned Erlang, and as a taster for those who’ll take the Erlang course next year. I’d also recruited some potential participants at the Erlang Factory in San Francisco in March.<br />
<br />
Mark was keen for us to be more ambitious, and so I tweeted an invitation, hash-tagged #erlang. We got enough retweets from key Erlang players that this brought us up to our 500 cap relatively quickly – a pleasing result, not least because it cost us nothing in advertising.<br />
<br />
<div style="text-align: left;">
<h3 style="text-align: left;">
Make it multi-purpose</h3>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-U3ftg08Bpv4/VX2TO-6R1kI/AAAAAAAAAQA/vVbS8urSs-I/s1600/FC.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="177" src="http://3.bp.blogspot.com/-U3ftg08Bpv4/VX2TO-6R1kI/AAAAAAAAAQA/vVbS8urSs-I/s320/FC.png" width="320" /></a></div>
We wanted to make the best use that we could of our limited budget, so we made sure that we could use what we did in three different, but interlinked, ways.<br />
<ul style="text-align: left;">
<li>First, we’d have a MOOC for people new to Erlang, but who we expected to have some programming experience.</li>
<li>Secondly, we’d make some parts of the MOOC, particularly those which give more extended examples, into master classes which could be of value for people who already know Erlang, but who want to see it used in practice.</li>
<li>Finally, the materials will allow us to teach Erlang to our face-to-face CS students in a different way: we’ll particularly be able to try “flipping the classroom”, so that students watch lectures in their own time, and lecture slots can then be used for more interactive activities.</li>
</ul>
</div>
<div style="text-align: left;">
<br />
<h3 style="text-align: left;">
Engagement</h3>
<div>
<br /></div>
<div style="text-align: left;">
<a href="http://1.bp.blogspot.com/-aVpdy1U_Aas/VX2Vo_BymOI/AAAAAAAAAQc/n3vqlB2ED98/s1600/Participation.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="224" src="http://1.bp.blogspot.com/-aVpdy1U_Aas/VX2Vo_BymOI/AAAAAAAAAQc/n3vqlB2ED98/s320/Participation.png" width="320" /></a>We used forums for discussion, one par activity, but also added a general FAQ for some of the questions that came up as well as a “common room” for general discussion that cut across individual activities. We also used a forum as the place we started to try to form an online community, through people introducing themselves to the group.</div>
<div>
<br /></div>
<div>
Although we did have discussions, particularly about the homeworks, we’d have liked to promote more of them, and one idea we’ll definitely explore next time is finding ways of pairing up people to help each other out. Our plan is to do this dynamically, so that two people who hand in a solution to a homework at about the same time will be asked to comment on each other’s work. We hope that this will make it easier for everyone to see how to contribute, and also how they will benefit themselves from the process. </div>
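The dynamic pairing idea can be sketched, in Erlang itself, as a small matchmaking process. This is entirely hypothetical – the module and message protocol below are illustrative, not part of any MOOC platform:

```erlang
%% Hypothetical sketch of dynamic peer matching: the first submitter
%% waits until a second submission arrives, then each participant is
%% asked to review the other's work.
-module(pairer).
-export([start/0, submit/1]).

start() ->
    register(pairer, spawn(fun() -> loop(none) end)).

%% Called by a participant process when it hands in a submission.
submit(SubmissionRef) ->
    pairer ! {submitted, self(), SubmissionRef}.

loop(none) ->
    receive {submitted, Pid, Ref} -> loop({waiting, Pid, Ref}) end;
loop({waiting, PidA, RefA}) ->
    receive
        {submitted, PidB, RefB} ->
            PidA ! {review, RefB},   % A is asked to review B's submission
            PidB ! {review, RefA},   % B is asked to review A's submission
            loop(none)
    end.
```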
<div>
<br /></div>
<div>
Finally, we could also pro-actively form groups to work on later, larger, exercises (maybe supported with some collaborative editing software) but again this group formation will need to be dynamic to reflect the different ways that individuals use the materials.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<br /></div>
<h3 style="text-align: left;">
Feedback </h3>
</div>
<div style="text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-cbRl6oqTuEw/VX2VBkA_NaI/AAAAAAAAAQU/p15wDLdM_C4/s1600/Feedback.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="228" src="http://4.bp.blogspot.com/-cbRl6oqTuEw/VX2VBkA_NaI/AAAAAAAAAQU/p15wDLdM_C4/s320/Feedback.png" width="320" /></a></div>
<div style="text-align: left;">
One of the aims of engaging participants actively was to provide feedback to others on their work. What became clear is that we should also provide some way for participants to get automated feedback on their work too. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Some years ago, Kent and Erlang Solutions Ltd worked together on an e-learning project hosted in Moodle, with Roberto Aloi as key investigator. This project developed a feedback plugin for Moodle to give testing and style feedback on Erlang programs, and we hope with ESL to re-develop this to fit with more modern technology (such as Docker and REST).</div>
<br />
<h3 style="text-align: left;">
Conclusion</h3>
<br />
We were happy with the pilot. It showed us that we could deliver a mini-MOOC using available resources and reasonable amounts of time. It also showed us where to put our effort next, principally in two areas: promoting the best engagement possible from participants, and delivering to them the best feedback that we can, at the time when it is most valuable to them. We hope to move things forward over the summer, and to present a full MOOC later this year or early next. We’ll also follow this blog post with another containing more details about the feedback from the participants in the pilot.<br />
<br />
<br />
<br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com0tag:blogger.com,1999:blog-6712179876742604415.post-56127086270741353042015-05-28T20:38:00.002+01:002015-05-28T20:38:31.446+01:00On the fine-structure of regular algebra by Foster and Struth (review)<div dir="ltr" style="text-align: left;" trbidi="on">
This review of <i>On the fine-structure of regular algebra</i> by Simon Foster and Georg Struth (Journal of Automated Reasoning 54 (2): 165-197, 2015) was first published in <i>Computing Reviews</i>.<br />
<br />
Although it is dangerous to generalize from a single example, it’s instructive to look at Foster and Struth’s paper as giving us a snapshot of the state of the art in automating ordinary reasoning. First, let’s look at the particular subject of the paper.<br />
<br />
Regular expressions and finite automata are fundamental to the practice of computer science, but they are a fertile topic for research, too. A key question is identity: when do two regular expressions (or two finite automata) behave identically? The answer is “when they describe the same set of strings,” but that’s less useful computationally than it first looks, since it requires that we test sets, which potentially are infinite, for equality. So, how else to judge identity? By setting out a collection of laws, or axioms, that encapsulate identity. Foster and Struth’s paper sets out the various attempts by a number of authors, including Boffa, Conway, Kozen, and Salomaa, to provide axioms that are sound (cannot prove any false identities) and complete (can prove all true identities).<br />
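To make the identity question concrete, here is a toy illustration of my own (it is not from the paper): brute-force enumeration of strings up to a bound can find a counterexample that distinguishes two regular expressions, but it can never prove identity outright, which is exactly why axiom systems matter.

```python
import re
from itertools import product

def agree_up_to(r1, r2, alphabet="ab", n=6):
    """Search for a string of length <= n on which the two regexes disagree."""
    for length in range(n + 1):
        for chars in product(alphabet, repeat=length):
            s = "".join(chars)
            if bool(re.fullmatch(r1, s)) != bool(re.fullmatch(r2, s)):
                return s  # a distinguishing counterexample
    return None  # no counterexample found up to the bound

# (a|b)* and (a*b*)* denote the same language: no counterexample exists.
print(agree_up_to(r"(a|b)*", r"(a*b*)*"))
# (ab)* and a*b* differ: "a" is matched by the second but not the first.
print(agree_up_to(r"(ab)*", r"a*b*"))
```

A sound and complete axiom system lets us replace this unbounded search by a finite equational proof.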
<br />
The “fine structure” of the title is a detailed study of the interrelationships of these sets of axioms: showing when one set implies another (a process of deduction) and when one set is strictly stronger than another, which is shown by giving a counterexample that meets one set and not the other. The process of both deduction and finding counterexamples is computer assisted, and this provides the main contribution of the paper.<br />
<br />
How does the automation directly contribute to the mathematics? It means that a number of loose ends could be tidied up: one set of axiomatizations was simplified, some missing axioms were found in the statement of a particular axiom set, and a gap was found in an existing, well-known proof. So it increased assurance in the work, but probably does not constitute a substantial advance in itself.<br />
<br />
On the other hand, the work is indicative of real progress in the facilities available to support online proof and counterexample development. Advances have come in a number of ways. Most important is the way in which different approaches have come together: it used to be that the proof was either done manually, with a proof assistant, or in an entirely automated way, using some kind of decision procedure. The work under review uses the Isabelle proof assistant [1], which now supports the Sledgehammer tool [2], to apply a number of external automated theorem provers for first-order logic; in cases where this fails, Isabelle has its own simplification and proving systems. This has the effect of allowing users to concentrate their attention on shaping the higher-level architecture of the theories, rather than having to deduce the results proof step by (painful) proof step. Where this reaches its limit is when proofs go beyond the purely algebraic and contemplate rather more mathematical structure (for example, sets) as provided by a higher-order logic.<br />
<br />
Although most mathematicians don’t use theorem-proving technology in their day-to-day practice, as opposed to the wide take-up of computer algebra systems in various areas of mathematics and engineering, it has taken real hold in the area of mechanized metatheory of programming languages, as witnessed by the POPLmark challenge (http://www.seas.upenn.edu/~plclub/poplmark/). It may well be because this is an area where the scale of proofs can be such that the usual social process of proof checking has been shown to be less than ideal that value accrues from formalization. In conclusion, it is interesting to speculate on how long it will take for the practicing mathematician to reach for a theorem-proving system as a part of his or her daily work.<br />
<br />
[1]<span class="Apple-tab-span" style="white-space: pre;"> </span>Isabelle home page. https://www.cl.cam.ac.uk/research/hvg/Isabelle/index.html (05/13/2015).<br />
[2]<span class="Apple-tab-span" style="white-space: pre;"> </span>Blanchette, J. C.; Böhme, S.; Paulson, L. C. Extending Sledgehammer with SMT solvers. In Proc. of the 23rd Int. Conf. on Automated Deduction: CADE-23. Springer, </div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com0tag:blogger.com,1999:blog-6712179876742604415.post-82099831689052817102015-05-13T14:07:00.001+01:002015-05-14T16:41:09.403+01:00Erlang MOOC pilot, day 3<div dir="ltr" style="text-align: left;" trbidi="on">
We’re at day 3 of our MOOC pilot on functional programming in Erlang: so how are things going?<br />
<br />
MOOCs aren’t synchronous, and so people can begin and progress in their own time. So far half of those who signed up in advance have started to use the MOOC, and half of them have progressed a good way through the first week’s material. We have people taking an active part from (in no particular order) the Netherlands, Spain, Russia, the USA, Mexico, Denmark, Italy, France, Argentina, Chile, Japan, the Czech Republic, New Zealand, China, Canada and the UK, amongst others.<br />
<br />
We’re also finding our way in working out the best ways of interacting. Our forums are split “by activity” and while this keeps each one on topic, it leaves no space for general discussions, so we’ve added a “common room” for those discussions. We also wanted to share with everyone some of the responses to points being made in the feedback form, so we have added a (growing) FAQ page to hold those questions and answers. Luckily we’re able to sort quite a few things out, but some others – like making videos downloadable – are going to take more time, and internal university discussions, to resolve.<br />
<br />
We’re also beginning to see discussion and feedback on programming homework assignments. Part of the MOOC ethos is to involve everyone in giving feedback about others’ work and problems, so we have a fully meshed network rather than a hub-and-spoke arrangement in which the Kent staff provide all the feedback. It’s still early days, but discussions are beginning. We also had a request for acceptance tests for the homeworks, but again for the purposes of the trial we’re encouraging participants themselves to develop and share these too. We might come back to this as the pilot progresses …<br />
<br />
So far Moodle seems to be coping … there would be clear advantages to going with a fully-featured MOOC platform, such as providing transcripts of the videos for accessibility reasons, but Moodle seems to be managing so far. We’re also able to see how people are progressing in quite some detail, so there will be number crunching going on over the course of the pilot, and hopefully some useful data analysis as an outcome.<br />
<br />
Monday, coincidentally, we were part of a tea party to celebrate the <a href="http://www.kent.ac.uk/beacon/">Beacon Projects at Kent</a>, and as part of that we got a lot of encouragement and interest in what we’re doing. It can only help with developing a case to argue to the university about the multi-faceted value of MOOCs, online lectures and blended learning to our staff, students and public reach.</div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com0tag:blogger.com,1999:blog-6712179876742604415.post-68481387512609779752015-05-11T09:52:00.000+01:002015-05-11T09:52:22.442+01:00“Functional Programming in Erlang” MOOC pilot<div dir="ltr" style="text-align: left;" trbidi="on">
It’s all very peculiar.<br />
<br />
We’ve been working flat out to get the <a href="https://moodle.kent.ac.uk/external/course/view.php?id=237">pilot MOOC on Functional Programming in Erlang</a> up and running at Kent, we have signed up over 500 participants, and it just went live 20 minutes ago. Rather like sending a book manuscript off to the publishers, it has all gone quiet, and we're waiting to see what's going to happen. I should tell you who the “we” are: apart from me, there’s <a href="https://www.kent.ac.uk/uelt/staff/cdt/profiles/maoconnor.html">Mark O'Connor</a>, who is Distance Learning Technologist for the university, and <a href="https://www.linkedin.com/in/adams601">Stephen Adams</a>, who is TA for the course and who is a PhD student in computer science here.<br />
<br />
This pilot is part of a bigger plan, which includes supporting our face-to-face teaching of Erlang at Kent, as well as putting out a set of Master Classes in Erlang by Joe Armstrong, one of the inventors of Erlang, Francesco Cesarini, founder of Erlang Solutions Ltd, and me. These will be coming shortly. This <a href="http://www.kent.ac.uk/elearning/themes/kmoocs.html">K-MOOCs project</a> is being supported by the University of Kent as part of its <a href="http://www.kent.ac.uk/beacon/">Beacon Projects initiative</a>, which is in celebration of the University’s fiftieth anniversary.<br />
<br />
We recorded the master classes in a studio with green screen and so on, to try that out, and these are in the final stages of production, but the majority of the MOOC lectures are more “home made” and were recorded on my MacBook Pro, using the built-in audio and video facilities, and Kent’s lecture recording facilities that are powered by <a href="http://panopto.com/">Panopto</a>. Given that “available tech” approach, we’ve been pleased with the results (but our participants may choose to differ!).<br />
<br />
In getting the MOOC planned and executed, Mark has been tireless in providing infrastructure, in getting me to plan, in editing and in feeding back on all the activities. We’ve also really benefitted from advice and help from <a href="http://www.cc.gatech.edu/people/w-michael-mccracken">Mike McCracken</a> of Georgia Tech, who has pioneered their online CS courses and MOOCs. Without Mike’s advice about how to script the course in advance, implementation would have been so much more difficult.<br />
<br />
As it is, we’ve got the course in place. I’ll be adding a few finishing touches – more quizzes! – today, but really we’re waiting to see how people get on with what’s there. I’ll aim to blog some more as the pilot runs, and we see how it all works out.</div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com1tag:blogger.com,1999:blog-6712179876742604415.post-22082214384758045962015-04-30T20:26:00.002+01:002015-04-30T20:26:28.464+01:00Automated Proof in Practice<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div>
While it is dangerous to generalise from a single example, it's instructive to look at Foster and Struth's paper as giving us a snapshot of the state of the art in automating ordinary reasoning, but first let’s look at the particular subject of the paper. </div>
<div>
<br /></div>
<div>
Regular expressions and finite automata are fundamental to the practice of computer science, but they are a fertile topic for research, too. A key question is identity: when do two regular expressions – or two finite automata – behave identically? The answer is “when they describe the same set of strings”, but that's less useful computationally than it first looks, since it requires that we test sets – which potentially are infinite – for equality. So, how else to judge identity? By setting out a collection of laws – or axioms – that encapsulate identity. Foster and Struth's paper sets out the various attempts, by a number of authors including Boffa, Conway, Kozen and Salomaa, to provide axioms that are sound (cannot prove any false identities) and complete (can prove all true identities). </div>
<div>
<br /></div>
<div>
The “fine structure” of the title is a detailed study of the interrelationships of these sets of axioms: showing when one set implies another – a process of deduction – and when one set is strictly stronger than another, which is shown by giving a counterexample which meets one set and not the other. The process of both deduction and counterexample-finding are computer assisted, and this provides the main contribution of the paper. </div>
<div>
<br /></div>
<div>
How does the automation directly contribute to the mathematics? It meant that a number of loose ends could be tidied up: one set of axiomatisations was simplified (through finding hitherto undiscovered relationships between axioms), some missing axioms were found in the statement of a particular axiom set, and a gap was found in an existing, well-known, proof. So, it increased assurance in the work, but probably does not constitute a substantial advance in itself.</div>
<div>
<br /></div>
<div>
On the other hand, the work is indicative of real progress in the facilities available to support online proof and counterexample development. Advances have come in a number of ways. Most important is the way in which different approaches have come together: it used to be that proof was either done “manually” with a proof assistant, or in an entirely automated way using some kind of decision procedure. The work under review uses the Isabelle proof assistant, which now supports the Sledgehammer tool to apply a number of external automated theorem provers for first-order logic, and in cases where this fails, Isabelle has its own simplification and proving systems. This has the effect of allowing users to concentrate their attention on shaping the higher-level architecture of the theories, rather than having to deduce results proof step by (painful) proof step. Where this reaches its limit is when proofs go beyond the purely algebraic and contemplate rather more mathematical structure (e.g. sets) as provided by a higher-order logic.</div>
<div>
<br /></div>
<div>
While it's not the case that most mathematicians use theorem proving technology in their day to day practice – as opposed to the wide take-up of computer algebra systems in various areas of maths and engineering – it has taken real hold in the area of mechanised meta-theory of programming languages, as witnessed by the POPLmark challenge http://www.seas.upenn.edu/~plclub/poplmark/. It may well be because this is an area where the scale of proofs can be such that the usual social process of proof checking has been shown to be less than ideal that value accrues from formalisation. In conclusion, it is interesting to speculate on how long it will take for the practising mathematician to reach for a theorem proving system as a part of his or her daily work.</div>
<div>
<br /></div>
<div>
This review of <i>On the Fine-Structure of Regular Algebra</i> by Simon Foster and Georg Struth and published in the <i>Journal of Automated Reasoning</i> is a first draft as submitted to Computing Reviews.</div>
</div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com0tag:blogger.com,1999:blog-6712179876742604415.post-66478264340696738982015-02-25T23:07:00.002+00:002015-02-26T08:02:53.201+00:00Some stories about Alan Turing<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
So, here are a few Turing stories of my own. The first is from Robin Gandy, who supervised my DPhil, and who, in turn, was Turing's PhD student. Robin had a wealth of stories about Turing, many of which made their way into Andrew Hodges' book – Hodges was a postgrad in Oxford with Penrose around the beginning of the 80s. Perhaps most poignant [quoting from Mike Yates' obituary of Robin] was that, when asked about Turing's motives if he really did commit suicide, Gandy would become quite heated: “Some things are too deep and private and should not be pried into.” Sara Turing, Alan's mother, certainly always maintained that his death was an accident.<br />
<br />
Her biography of Alan was republished in the centenary year by Cambridge University Press, and that too contains only remembered stories of his youth and adulthood. The most striking thing for me was the postscript written by his brother John, on their upbringing, which was not untypical for the English upper middle classes in the early years of the century. Two quotes:<br />
<ul>
<li>When Alan was two “rightly or wrongly, [my father] decided that he and my mother should return alone to India, leaving both children with foster parents in England” … “it was certainly a shock for me, even at the age of five” but “it was accepted procedure” (and he goes on to compare it with Kipling's horrendous experience, noting that at least they “escaped” that).</li>
<li>The real bombshell, though, is schooling. John says, without an ounce of irony or indeed anger: “I take credit for persuading my parents to send [Alan] to Sherborne instead of Marlborough, which all but crushed me and would certainly have crushed him”.</li>
</ul>
<div>
A final anecdote. In our previous house, a near neighbour was a retired canon from Canterbury Cathedral, Donald Eperson, who wrote puzzles for the Mathematical Gazette, and who had been a schoolteacher before being ordained. Not just any teacher, though: he'd taught Alan at Sherborne, and indeed is mentioned in the Hodges biography. He remembered Alan, and I lent him the book – unfortunately, references to his naiveté rather upset him, and I was sorry for unsettling him. </div>
<div>
<br /></div>
<div>
It's certainly a great thing that Turing has become almost a household name, and that his memory has been rescued for generations to come as one of the greatest scientists of the twentieth century. It's also a great thing that he was pardoned for his conviction for being gay … but surely something that should apply to everyone who was treated so shamefully?<br />
<br />
Links<br />
<br />
<a href="http://www.independent.co.uk/news/people/obituaryrobin-gandy-1583406.html">Gandy obituary</a><br />
<br />
<a href="http://www.cambridge.org/kn/academic/subjects/mathematics/recreational-mathematics/alan-m-turing-centenary-edition?format=PB">Sara Turing bio of Alan</a><br />
<br />
<a href="http://www.m-a.org.uk/jsp/index.jsp?lnk=917">Memoir of Donald Eperson</a> (look for "Music and Mathematics")</div>
</div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com0tag:blogger.com,1999:blog-6712179876742604415.post-16562348117600264932015-02-25T22:44:00.003+00:002015-02-25T22:44:21.516+00:00The Imitation Game – telling a good story about Alan Turing<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
So, what to make of The Imitation Game, the film based on the life of Alan Turing?<br />
<br />
Well, first of all it tells a good story. Some of the key messages about codebreaking are there:<br />
<br />
<ul style="text-align: left;">
<li>Knowing something about the content – particularly stylised beginnings or endings – makes it easier to break the code. </li>
<li>The paradox of the codebreaker: you can't betray the fact that you've broken the code by changing your behaviour too much, or the coders will change their setup … something card players surely recognise.</li>
<li>The Bletchley crowd were a mixed bunch: classicists rubbed shoulders with mathematicians and debs.</li>
</ul>
<div>
But it's clearly telling a story in the sense of lying too, and that's a frustration. Maybe it has to, to move the plot along, but some of the changes seem wilful, and so out of character:</div>
<div>
<ul style="text-align: left;">
<li>Part of the extraordinary nature of Bletchley was its scale: in the film it's shrunk to something like a "Famous Five" adventure: Turing and his small crew have the idea for the machine, build it (no Tommy Flowers), and then take the decision about not being able to reveal that the code has been cracked; that just doesn't make historical sense, but I guess keeps the plot moving;</li>
<li>An anecdote about the scale of the place: a couple who were in the forces during WW2 were recently visiting Bletchley, and halfway round the husband confessed to his wife that he'd worked there during the war – because of the secrecy surrounding the whole operation, he'd been sworn to silence – only for her to admit to working there too; given that secrecy, perfectly possible.</li>
<li>More egregious is the sub-plot about Cairncross, and suggesting that Turing had in some way colluded with him – no historical evidence for this at all.</li>
<li>Worst of all, I think, was the conceit of Turing's "home computer" Christopher. No evidence for that at all.</li>
</ul>
<div>
So, it's a good story, well acted and put together, but it tells too many stories to be completely satisfactory. Check out the biography by Andrew Hodges for a comprehensive and erudite view of Turing's life.</div>
</div>
<br />
<br />
<br />
</div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com0tag:blogger.com,1999:blog-6712179876742604415.post-59756786443632201882014-10-05T18:25:00.003+01:002014-10-06T15:25:45.116+01:00Advice on going to your first conferenceI was asked for advice from someone going to his first conference … in this case <a href="http://www.codemesh.io/">CodeMesh</a> in London. Here are my thoughts … any comments or other advice?<br />
<br />
<b>Strategy</b> … what are you aiming to find out from the conference: some very general impressions about what is going on with functional innovation, something focussed on a particular language or languages, or on particular technologies? Depending on this is …<br />
<br />
<b>Plan</b> … CodeMesh has 4 parallel tracks (a real hassle) but it's well worth making sure that you have a plan for what you most want to hear, reflecting your strategy. You should be able to move between individual talks, but that can sometimes be tricky because of synchronisation between different rooms, as well as the distances between rooms. If that's tricky, then you can always change at session breaks.<br />
<br />
Another way of <b>choosing</b> is to go to the talk that you know <i>least</i> about. If you go to the one you know most about, that talk will probably spend 80% or more of its time telling you things that you already know. Of course, it needs to be one you're interested in …<br />
<br />
<b>Networking</b> … a lot of what happens during the conference is outside the sessions, so do make sure that you make the most of the coffee / tea / meal breaks, and the evening sessions. Speakers are always happy to chat, so you can engage with them outside sessions, or follow up from any questions you ask in a session (scary, but a way of getting noticed). You can always email speakers, introducing yourself, and ask a question, if you do not feel comfortable asking a question after a talk.<br />
<br />
Find an <b>introducer</b>, and have them introduce you to some people. If you are going with your PhD supervisor or another colleague or friend who has been there before, they should do the introductions.<br />
<br />
<b>Presenting your ideas </b>… you're going to meet people who'll ask what you're working on and you want to interest them and move the conversation along, not stumble over how to explain what you're doing. So, have a 30 second <b>elevator pitch </b>ready.<br />
<br />
<b>Talk</b> … OK, you may not feel ready for this, but if you have something specific and interesting to say you can often give a 5 minute "lightning talk" at meetings like CodeMesh. That gets you and your work noticed, and people are usually very generous in listening and supporting speakers with less experience.<br />
<br />
<b>Learn</b> … if you can get to the tutorials then you can learn a whole lot at these. There are some excellent tutorialists at CodeMesh this year.<br />
<br />
<b>Social Media </b>… increasingly there is a whole virtual side to conferences, so track the twitter feed and other online stuff, and contribute yourself too.<br />
<br />
<b>Branch out </b>… if you're there with your buddies, don't just stick with them, but aim to meet new people too, particularly at any of the conference social events. Even a casual conversation a couple of times over a couple of conferences lays a foundation for a deeper professional relationship, especially with peers.<br />
<br />
<b>Refreshments</b> … there's always <b>free </b>(well, paid for in the registration price) coffee/tea etc. at the breaks and lunch in the middle of the day, but at quite a lot of conferences there's <b>breakfast</b> too, so it's worth getting along in time for that.<br />
<br />
<b>Afterwards</b> you can follow up with people you have met by email or social media. You can also catch up on presentations which you missed by <b>watching the video</b> if the conference talks are recorded. Personally, I find that despite my best intentions, it's very unusual for me to find the time to do this.<br />
<br />
Particularly for <b>big conferences</b>, it's best to pick and choose which talks to attend rather than sitting in on everything, otherwise you'll get burned out in the first couple of days. “I'd also advise listening to the speakers rather than sitting on your laptop/phone - personally I'd ban such devices from talks!” Also, keep away from alpha male superstars and their groupies … focus on the people who give presentations that fire you with enthusiasm, and those that take your presentation seriously.<br />
<br />
Finally, <b>have fun</b> soaking up all the new ideas and meeting all the interesting people behind them.<br />
<br />
<b>Updates</b> … thanks to Scott Fritchie, Andy Gill, Kathy Gray, Graham Hutton, Stefan Kahrs, Greg Michaelson, Neil Mitchell and Gem Stapleton for their comments and suggestions.<br />
<br />
HTH<br />
<br />
Simon<br />
<div>
<br /></div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com2tag:blogger.com,1999:blog-6712179876742604415.post-52254996227635760302014-08-25T14:42:00.001+01:002014-08-25T14:42:16.620+01:00Reading Robert Macfarlane by the internetThere's an old fashioned pleasure to reading on a wet August afternoon. Robert Macfarlane's <a href="http://www.theguardian.com/books/2012/jun/10/old-ways-robert-macfarlane-review">The Old Ways</a> takes you – in imagination – out into the wilder (or indeed not so wild) parts of Britain. What increases the pleasure and depth of the experience is reading with the internet by your side. With the internet we get so much further …<br />
<br />
First, the maps – Ordnance Survey if you have paid for them, or Google maps if not. Walking along the <a href="http://www.broomway.org.uk/">Broomway</a> in (or rather off) Essex takes you to that footpath along the sands, right next to the "DANGER AREA" signs. In the Hebrides, we can <a href="https://www.google.co.uk/maps/@59.1037103,-5.9841347,12z">find the islands on Google maps</a> – and satellite view – unnamed, but unmistakable from his descriptions. And then to Wikipedia to <a href="http://en.wikipedia.org/wiki/Gannet">see what gannets look like and read about the peculiar anatomy</a> that sustains their deep dives into the ocean, making the story of the gannet that pierced the hull of the boat but kept it plugged entirely believable.<br />
<br />
And then onto Harris itself. Trying to negotiate the walk he makes – again we have topography and named lakes, but no hill names – but <a href="https://www.google.co.uk/maps/@58.0874133,-7.05204,4132m/data=!3m1!1e3">hills cast shadows on the satellite picture</a>, and photographs culled from somewhere (even street view was here) show the picture from the ground. Then we can look at <a href="http://www.stevedilworth.com/page1.htm">Steve Dilworth's art</a> and read about <a href="http://www.lrb.co.uk/v23/n22/iain-sinclair/in-hackney">what Iain Sinclair says about him</a>.<br />
<br />
Awe-inspiring.Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com2tag:blogger.com,1999:blog-6712179876742604415.post-78620190331310473602014-08-25T11:03:00.000+01:002014-08-25T11:03:02.293+01:00Cloud RefactoringA draft review of “<i>Cloud refactoring: automated transitioning to cloud-based services</i>” by Young-Woo Kwon and Eli Tilevich, from <i>Automated Software Engineering,</i> (2014) 21:345–372, DOI 10.1007/s10515-013-0136-9; to appear in Computing Reviews.<br />
<br />
Refactoring is the process of changing how a program works without changing what it does, but even before the term was coined, it was practised by programmers as "program restructuring" from the early 1950s. Refactoring itself came to prominence with work by Opdyke, Griswold and Johnson in the early 1990s and was popularised by Fowler's 1999 book. Refactoring is done to improve code for a range of reasons: to make it conform to coding standards, to make it easier to read, or to prepare the code for modification or enhancement.<br />
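As a deliberately tiny illustration of what "changing how a program works without changing what it does" means (my own sketch, in Python rather than Java, and not an example from the paper), here is an "extract function" refactoring:

```python
# Before: one function mixes line parsing with totalling.
def total_before(lines):
    total = 0
    for line in lines:
        name, qty, price = line.split(",")
        total += int(qty) * float(price)
    return total

# After: the per-line computation is extracted into its own function.
# What the program computes is unchanged; only its organisation differs,
# and line_cost is now available for reuse and testing on its own.
def line_cost(line):
    name, qty, price = line.split(",")
    return int(qty) * float(price)

def total_after(lines):
    return sum(line_cost(line) for line in lines)

orders = ["widget,2,1.50", "gadget,1,4.00"]
assert total_before(orders) == total_after(orders) == 7.0
```

Behaviour preservation is exactly what lets tools apply such transformations mechanically, and at scale.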
<br />
Whatever the case, refactorings of large code bases can be infeasible without automated – or semi-automated – tools, and many IDEs incorporate refactorings for a variety of languages, although it is perhaps most developed in the IDEs for the Java language, including IntelliJ and Eclipse. Refactoring "in the small" is the preserve of the developer, and may be done as a part of his or her day to day development process; larger-scale refactoring is often anticipated, but perhaps harder to justify as part of an aggressive release cycle, unless, of course, there is some concrete gain to be made. What better example could there be of this than migrating an existing system to the cloud?<br />
<br />
Taking a system and moving it to the cloud must be the right thing to do: it provides scalability, resilience, and also fits the zeitgeist. However, as the authors make very clear, it is not without difficulties. It is important to preserve functionality – the system should not change what it does – but the system should also maintain non-functional properties like efficiency (e.g. latency and throughput), and questions of resilience are more acute in a distributed/cloud setting. In common with many other refactoring tool builders, the authors propose a two-pronged approach to the problem: first, they give an analysis that recommends how systems might be refactored, and then they develop a tool to implement the refactorings identified. The first phase combines static clustering analysis with runtime profiling to identify potential services for migration; the second implements the refactorings that make this change, introducing the appropriate interfaces and proxies while also adding fault-handling facilities to deal with the additional risks of a more distributed application platform.<br />
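The shape of that second phase can be sketched as follows – in Haskell rather than the Java the authors' tool targets, and with invented names throughout, so this is a sketch of the pattern, not of their implementation: direct calls are re-pointed at an interface, behind which a proxy can add the fault handling a remote service needs.

```haskell
-- Hypothetical sketch: 'Quoter' plays the role of an extracted service
-- interface; 'Retrying' is a proxy that adds simple fault handling.
class Quoter q where
  quote :: q -> String -> IO (Either String Double)

-- The original local implementation, now hidden behind the interface.
data LocalQuoter = LocalQuoter
instance Quoter LocalQuoter where
  quote _ _sym = return (Right 42.0)  -- stand-in for the real lookup

-- A proxy wrapping any Quoter with retry-based fault handling, standing
-- in for the remote call a cloud-refactoring tool would generate.
data Retrying q = Retrying Int q
instance Quoter q => Quoter (Retrying q) where
  quote (Retrying 0 _) _   = return (Left "gave up")
  quote (Retrying n q) sym = do
    r <- quote q sym
    case r of
      Right v -> return (Right v)
      Left _  -> quote (Retrying (n - 1) q) sym

main :: IO ()
main = quote (Retrying 3 LocalQuoter) "ACME" >>= print
```

Client code that depends only on `Quoter` is unaffected by whether it is handed the local object or the proxy – which is exactly the behaviour-preservation a refactoring must guarantee.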
<br />
These refactorings are included in Eclipse and available through its standard refactoring workflow, and the work is demonstrated through two small examples and a larger case study performed for GE Research. One of the key questions faced by designers of a cloud refactoring tool and their users alike is how much of the refactoring workflow should be automated. In discussing a number of examples the authors say that "we selected appropriate classes for the reason of the performance, call-by-reference, and meaning of features", thus making clear the role for the domain-aware engineer in the process. It would have been interesting to hear more of the software developers' own view of the results of the analysis: were the services identified meaningful to them? Would they have structured things differently from the tool's analyses?<br />
<br />
In summary, this paper makes a clear and well-described contribution to a fast-moving field: Google Scholar, for example, gives over 1000 references matching the keywords "cloud" and "refactor", and this number is only set to grow as migrating to the cloud proves to be more of a challenge than its advocates suggest.<br />
<br />
<h3>In honour of Robert Wyatt (19 July 2014)</h3>
Picture the scene … you're having an evening in the pub, with the usual music in the background: guitars, 4-4 bass beat, when suddenly it changes: it's a song with more complicated rhythms and dissonant notes. What is more, it's sung in a lugubrious style by someone who sounds like a real person, rather than an identikit American drawl. The singer is Robert Wyatt, our honorary graduand today, and picking music to catch your attention like this has come to be called "Wyatting" in his honour.<br />
<div>
<br /></div>
<div>
Canterbury is known historically for the cathedral, and Chaucer’s Canterbury Tales, but for those of us who grew up in the 60s and 70s, it was also the home of the Canterbury scene – musicians whose careers started out in the city in the late 60s. The scene may have been no more than a group of boys whose families had eclectic record collections … record collections that included 20C classical – Hindemith but “nothing German … not long after the war” – and jazz – Max Roach and John Coltrane, perhaps – but those boys went on – jointly and separately – to be a key part of the creative explosion that was the 1960s, and Robert was there at its centre.</div>
<div>
<br /></div>
<div>
<a href="http://2.bp.blogspot.com/-ILQOak56wn0/U8q0vX6lxKI/AAAAAAAAAL4/dECLAwGTQxg/s1600/IMG_7111.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ILQOak56wn0/U8q0vX6lxKI/AAAAAAAAAL4/dECLAwGTQxg/s1600/IMG_7111.jpg" height="400" width="315" /></a>Right from the start, Robert was one of a very select group – others are Justin Bieber, Dave Clark, Karen Carpenter, Levon Helm and Ringo Starr – of singing drummers. As he tells it, drumming was something you could do while listening to a Max Roach record, and what's more you could get started by drumming with rolled up newspapers. You could also sing along, too, and teach yourself to be a musician. All of this meant that at school – the Simon Langton Boys School, here in Canterbury – Robert was rather left behind, and instead he became one of the Wilde Flowers, who played in Canterbury and further afield for a couple of years. That is “Wilde” with an “e”, in honour of Oscar – an example of the wordplay that “lets words collide” running through all Robert’s work.</div>
<div>
<br /></div>
<div>
In 1966 Robert helped to form Soft Machine, with David Allen (from Herne Bay, and introduced to Robert as “someone else who has grown their hair long”), Kevin Ayers and Mike Ratledge. Soft Machine never managed to break through to the rock or pop mainstream, but were hugely influential, first for their gentle English surrealism, and later for their cool, stripped down, fusion jazz rock. They cut their teeth – and sharpened up their act – playing support to Jimi Hendrix in a long 1968 tour of the USA. Robert observes: when you're in front of 5,000 young Texans waiting to hear Hendrix, you don't mess about … it also helps if you don't have a guitarist in the band! This tour brought them American success, which was mirrored on the continent, where their jazz style found a sympathetic ear. In the UK, they became the first rock group to play at the Proms.</div>
<div>
<br /></div>
<div>
Soft Machine broke up, reformed to make their landmark “Third” album – 2 LPs with four side-long pieces – and finally Robert left them for good. He formed a larger group – Matching Mole, a bilingual pun on “machine molle” / soft machine (groan!) – but not long after that, in 1973, an accidental fall from a window left him with injuries that mean he now uses a wheelchair. This took him – in his words – into just another way to be, and it meant that he had time to slow down and think about how to sing. His first record after that – Rock Bottom – set the pattern for his work since: slow songs, striking tunes and rhythms, usually based on keyboard melodies. A reworking of the Neil Diamond song I'm a Believer – first sung by the Monkees – gave him his first hit, but that is only one side of his work: he's a great musical collaborator, and played with a huge range of people through the 1970s and 80s, from free jazzers like Carla Bley to avant-gardists and more traditional rock stars, often with a political message. His most remembered song from that time is his version of Elvis Costello's Shipbuilding: an elegiac meditation on the Falklands War and its effect on the parts of the country that had been hit hardest by the Thatcher government's policies.</div>
<div>
<br /></div>
<div>
Looking back over Robert's musical career – which started at around the same time as the University of Kent, and shares its 50th anniversary – it is hard to think of more than a handful of musicians who have been able to keep their music as vital and original as it was when they began. His recent work has seen collaborations with the Brodsky Quartet – also honorary Kent graduates – and Brian Eno, as well as a re-recording of a set of songs by the electronica group Hot Chip.</div>
<div>
<br /></div>
<div>
Recognition for his work has come in many forms: he has been a guest editor of Radio 4's Today programme, he is an honorary doctor of the University of Liège, and he is a <i>petit fils ubu</i> of the <i>Collège de Pataphysique</i>. Locally, he is celebrated in a life-size stencil by the street artist Stewey in Canterbury's Dover Street, on the site of the Beehive club where he played at the start of his career.</div>
<div>
<br /></div>
<div>
In its turn, the University of Kent would like to record its appreciation of Robert's work. For his musical achievements and influence over the last fifty years, most honourable deputy vice-chancellor, to you and to the whole university I present Robert Wyatt to be admitted to the Degree of Doctor of Music, honoris causa.</div>
<div>
<br /></div>
<div>
[Text of the oration for Robert Wyatt's honorary degree award, Canterbury, 18-7-14].</div>
<h3>Fifteen facts about EPSRC (3 July 2014)</h3>
Thanks very much to Alex Hulkes for his visit and presentation about EPSRC in general and their ICT programmes in particular. Here are fifteen things that I learned.<br />
<ol>
<li>EPSRC has about £2.5bn worth of live projects at any time.</li>
<li>“We have to do applied research …” because it is part of <a href="http://www.epsrc.ac.uk/about/history/royalcharter1994/">EPSRC's Royal Charter.</a></li>
<li>Terminology: EPSRC has two kinds of theme: “capabilities” correspond to specific research areas, while “challenges” are cross-cutting themes like energy or global uncertainty.</li>
<li>Terminology (2): for EPSRC “interdisciplinary” means belonging to more than one of their research areas.</li>
<li>75% of PGR funding is DTP plus CASE, and so not subject to particular shaping or direction.</li>
<li>Pathways to impact: either say how you <i>will</i> achieve impact, or say that it doesn’t make sense for your research to have impact (at this point). </li>
<li>It’s good if you can say how your research project fits in with what EPSRC are currently funding, as long as it’s not a straight duplication of work that’s already funded.</li>
<li>Developing leaders (fellowships): while it’s important to have a good research project, that’s a necessary rather than a sufficient condition: you need to be able to convince the panel that you are a research leader.</li>
<li>These schemes are less popular than earlier fellowships schemes, perhaps because of the difficulty of getting evidence of leadership potential down on paper.</li>
<li>In ICT, EPSRC wants to keep developing and encouraging new research areas. It also wants greater collaboration with other areas.</li>
<li>It’s also keen to get <i>ambitious </i>proposals: of the funding for responsive – rather than directed – research, some 50% of the cash is in longer/larger grants.</li>
<li>Proposals will get at least 3 reviews for panel.</li>
<li>There's no correlation between the number of reviewers and the success/failure of the proposal.</li>
<li>The PI response is important: the factual response has an effect when (and only when!) you can <i>provide evidence</i> that shows that an objection doesn't hold.</li>
<li>Success rates (at least for ICT) are constant across areas set to grow, stay constant or shrink. On the other hand, the “grow” area has received more applications.</li>
</ol>
<h3>Review of "Certified Programming with Dependent Types" (17 June 2014)</h3>
<br />
<div>
Chlipala's text <i><a href="http://adam.chlipala.net/cpdt/">Certified Programming with Dependent Types</a></i> is an outstanding introduction to how programs can be guaranteed to be correct, by means of the Coq theorem prover – programs that are, in his terminology, “certified”. While machine-assisted proof has been possible for more than a quarter of a century, it is only in the last five years that a substantial body of fully-formal proofs has been delivered. These include mathematical results – such as Gonthier's proof of the four colour theorem – and also those in the domain of theoretical computer science. Indeed, the “POPLmark challenge” <a href="http://www.seas.upenn.edu/~plclub/poplmark/">http://www.seas.upenn.edu/~plclub/poplmark/</a> has set a benchmark for proof mechanisation in language metatheory. So, this text is timely in providing an accessible introduction to the area; but what is it that makes it stand out?</div>
<div>
<br /></div>
<div>
Firstly, Chlipala gives an excellent introduction to the area, explaining the background of the different approaches to theorem proving in systems such as ACL2 and PVS among others, as well as the Coq system that is the subject of the book. Complementing this is a discussion of the logics implemented by the various systems. He argues cogently for a theory that supports dependent types, under which the result types of functions can depend upon the values of inputs. Dependently typed systems support a dual interpretation: objects can be seen as values belonging to types, or equivalently as proofs of propositions – the so-called ‘Curry–Howard’ isomorphism. Allowing the interpretations to co-exist gives a type system that can express program pre-conditions, or alternatively a logic in which a functional programming language can be used to build proofs. Coq implements a powerful dependently typed theory that has a functional programming foundation, proof checking in a secure kernel (according to the ‘de Bruijn principle’), advanced proof automation through the tactic language Ltac, and a principle of reflection.</div>
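The flavour of dependent typing can be hinted at even in Haskell – a sketch of mine, not Coq, and much weaker than the full theory the book uses: indexing a vector type by its length lets the type checker reject unsafe calls outright, a small proposition proved by a type.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers, used to index vectors by their length.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A head function whose type rules out the empty vector, so the
-- partiality of the ordinary 'head' simply cannot arise.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) (VCons 2 VNil)))
-- 'vhead VNil' would be rejected at compile time, not fail at run time.
```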
<div>
<br /></div>
<div>
Secondly, Chlipala opens the exposition in Chapter 2 with a worked set of examples that concentrate on language evaluators and compilers. Rather than covering all the necessary material first, he plunges into the exposition, giving a survey of what is possible, and saying that “it is expected that most readers will not understand what exactly is going on here”. Does this approach work? Anyone reading the chapter clearly needs to know a language like Haskell or ML, but with that knowledge it is possible to gain a good sense of how the system is used in practice, and so I would personally endorse the approach. After all, it's possible to skip on to Chapter 3 and follow a sequential approach if this proves too forbidding.</div>
<div>
<br /></div>
<div>
Thirdly, in contrast to some introductions, the book promises a “pragmatic” approach to proof construction or engineering. This is welcome, since Chlipala acknowledges that Coq is a large system that has grown in complexity over the last twenty years. Does he deliver on his promise? The book is divided into four main sections: the first two cover the fundamental technical material, namely “basic programming and proving” and “programming with dependent types”, in some 250 pages, but the remaining two sections (130pp in total) cover “proof engineering” and “the big picture”, and it is in these two that he is able to cover the pragmatics of proof. He covers not only common approaches, like logic programming in proof search, but also the problems of larger-scale proof, such as the evolvability and robustness of proofs. This is forcefully communicated through a set of “anti-patterns” that militate against well-structured and evolvable proofs, together with strategies for avoiding them.</div>
<div>
<br /></div>
<div>
The book doesn’t contain exercises, but these can be found on the book's website, contributed by readers. The <a href="http://adam.chlipala.net/cpdt/">website</a> also provides access to the full text of the book and the Coq code for all the chapters, together with supporting libraries of code and tactics. The book is very clearly written, and the author has a direct, approachable style. If you want to find out more about using the Coq system for building realistic, large-scale proofs, particularly for certifying programs, then I recommend this highly.</div>
<h3>PROWESS mid-term workshop (10 May 2014)</h3>
I'm just back from a trip to Borås in Sweden, where we held a one-day workshop for the <a href="http://www.prowessproject.eu/">PROWESS project</a> to showcase what it has achieved, just over halfway through the project. The aim of PROWESS is to bring the benefits of <a href="http://blog.jessitron.com/2013/04/property-based-testing-what-is-it.html">property-based testing</a> to testing <a href="http://en.wikipedia.org/wiki/Web_service">web services</a>. We're a consortium of universities from the UK (Sheffield and Kent), Sweden (Chalmers) and Spain (A Coruña, UPM); the Swedish research organisation SP, and SMEs from those three countries too: Erlang Solutions (UK), Quviq (Sweden) and Interoud (Spain).<br />
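For readers new to the idea, property-based testing can be sketched in a few lines of Haskell. Real tools such as QuickCheck generate random inputs and shrink counterexamples; this illustrative driver of my own just enumerates small cases, but the central idea – stating laws rather than examples – is the same.

```haskell
import Data.List (sort)

-- Properties: laws that should hold for *every* input, not just a
-- handful of hand-picked unit-test examples.
prop_sortIdempotent :: [Int] -> Bool
prop_sortIdempotent xs = sort (sort xs) == sort xs

prop_sortKeepsLength :: [Int] -> Bool
prop_sortKeepsLength xs = length (sort xs) == length xs

-- A toy "generator": all lists over a small alphabet, up to length n.
smallLists :: Int -> [[Int]]
smallLists 0 = [[]]
smallLists n = smallLists (n - 1)
            ++ [ x : xs | x <- [-2 .. 2], xs <- smallLists (n - 1) ]

main :: IO ()
main
  | all prop_sortIdempotent cases && all prop_sortKeepsLength cases =
      putStrLn "both properties hold on all small cases"
  | otherwise = putStrLn "property falsified"
  where cases = smallLists 3
```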
<br />
As well as attendees from the project we had more than twenty others from companies in Sweden, Denmark, the UK and the Netherlands and universities in Denmark, Sweden, the UK and Brazil. The day went very well, with project members like myself being pleased to see how the separate strands of the work are coming together, and that the interactions we're having informally are turning into practical tools.<br />
<br />
More importantly, we got positive reports back from the external attendees too, who were able to give us really helpful suggestions about how we could extend and build on what we had done. The prize for the coolest talk has to go to Benjamin Vedder, whose demo of fault injection for a quadcopter using a QuickCheck model stole the show [OK, the fault injection only happened in a simulation – to the relief of the front row – but it was still fun to see QuickCheck in Erlang used to inject faults into C++ embedded systems!]<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-G78uXJdg3co/U25XyfeDjDI/AAAAAAAAALY/s_DpLfPcoyI/s1600/IMG_5947.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-G78uXJdg3co/U25XyfeDjDI/AAAAAAAAALY/s_DpLfPcoyI/s1600/IMG_5947.jpg" height="320" width="240" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Benjamin's quadcopter</td></tr>
</tbody></table>
<br />
The morning programme gave an overview of the project and property-based testing in QuickCheck, as well as introducing the VoDKATV platform from <a href="http://interoud.com/">Interoud</a>, which provided a case study running through the more detailed presentations in the afternoon:<br />
<ul>
<li>Overview of the project – John Derrick, <a href="http://staffwww.dcs.shef.ac.uk/people/J.Derrick/">University of Sheffield</a></li>
<li>Case study: a web service for administrating the VoDKATV platform – Miguel Á. Francisco, <a href="http://interoud.com/">Interoud</a></li>
<li>Property based testing for web services: an introduction – John Hughes, <a href="http://www.quviq.com/index.html">Quviq</a> and <a href="http://www.cse.chalmers.se/~rjmh/">Chalmers University</a>.</li>
</ul>
The afternoon gave participants a chance to talk in more detail about specific technical innovations in the project:<br />
<ul>
<li>Inference of state machines from QuickCheck traces – Kirill Bogdanov, <a href="http://staffwww.dcs.shef.ac.uk/people/K.Bogdanov/">University of Sheffield</a></li>
<li>Automating Property-based Testing of Evolving Web Services – Huiqing Li, <a href="http://www.cs.kent.ac.uk/people/staff/hl/">University of Kent</a> and Laura Castro, <a href="https://sites.google.com/a/madsgroup.org/laura-castro/">University of A Coruña</a>.</li>
<li>Fault injection – Benjamin Vedder, <a href="http://www.sp.se/en/index/research/dependable_systems/prowess/Sidor/default.aspx">SP</a>.</li>
<li>More-bugs -- how to not find the same bug over and over again – Ulf Norell, <a href="http://www.quviq.com/about.html">Quviq</a>.</li>
<li>A Property-based Load Testing Framework – Diana Corbacho, <a href="http://www.linkedin.com/pub/diana-parra-corbacho/42/a78/b55">Erlang Solutions Ltd</a> and Clara Benac Earle, <a href="http://babel.ls.fi.upm.es/~cbenac/">UPM, Madrid</a>.</li>
<li>Smother: Extended code coverage metrics for Erlang – Ramsay Taylor, University of Sheffield</li>
<li>Automatic complexity analysis – Nick Smallbone, <a href="http://www.cse.chalmers.se/~nicsma/">Chalmers University</a></li>
</ul>
The full programme for the day, including talk abstracts, is <a href="http://www.prowessproject.eu/ai1ec_event/prowess-midterm-review-meeting/?instance_id=2725">here</a>; and the slides for the presentations are <a href="http://www.prowessproject.eu/prowess-mid-term-workshop-2/">here</a>.<br />
<br />
I'd recommend this kind of open day event to anyone planning a complex research project, as it gives you all a chance to get some invaluable external insights into the work that you're doing. It's also a real fillip to the project to see the tangible progress that has been made, and also to enthuse project members to do even better in the second half of the project.<br />
<div>
<br /></div>
<br />
<h3>Writing your thesis and preparing for your viva (25 March 2014)</h3>
<h3>
Writing your thesis</h3>
<div>
<br />
The first question to ask yourself is "<i>who am I writing for?</i>" OK, you're writing for yourself and your supervisor, but you both know the work too well. Think instead of the examiners, and others who will read it now and in the future.</div>
<div>
<br /></div>
<div>
Sticking with the examiners for the moment, you should realise that the examiners want to pass you: you make it simpler for them if they can find all the evidence they need as easily as possible. In fact, this is just as applicable when you're writing papers in general, and particularly when you're drafting a research grant application. So, here are some specific points:</div>
<div>
<ul>
<li>write it as you would say it: that means keep the language straightforward … just because it's written, there's no need to use longer words, or more complicated constructions;</li>
<li>if in doubt, include it: it's not a problem to read something that you know (within reason) but it <b>is</b> a problem to come across a TLA that you don't understand;</li>
<li>give a clear statement of the contribution of the work: what have you done that goes beyond the state of the art (which you will explain in a "related work" chapter)? … it's also worth saying where to find those contributions in the thesis (by giving section cross-references);</li>
<li>if some of the work has already been published in jointly authored papers, it's very good for you to give a clear statement of <b>your</b> contribution to the papers;</li>
<li>make it easy to read the thesis by including the right <b>signposting</b>, which means that anyone reading the thesis knows precisely what you do, where; in particular, I'd advise you to include:</li>
<ul>
<li>an overview of the whole thing in the introduction</li>
<li>a summary at the beginning of each chapter, section by section</li>
<li>and a conclusion at the end summarising the contribution of the chapter</li>
</ul>
</ul>
</div>
<div>
To get another perspective on what to aim for in writing, you can look at the criteria for the <a href="http://academy.bcs.org/content/distinguished-dissertations">CPHC/BCS Distinguished Dissertations</a> competition, which aims to reward the best theses in CS in the UK. This prize looks for</div>
<div>
<ul>
<li>whether the thesis makes a noteworthy contribution to the subject, and what that contribution is;</li>
<li>the standard of exposition;</li>
<li>how well it places its results in the context of computer science as a whole; and</li>
<li>how easily a computer scientist with significantly different interests would be able to grasp its essentials.</li>
</ul>
</div>
<div>
As you're writing there are people who will give you feedback. At Kent, someone from your supervision panel will help, as well as your supervisor; it's worth checking early on who from your panel will do this.</div>
<div>
<br /></div>
<div>
A cardinal rule in getting feedback is to ask for it only on things that are final, albeit final first drafts. There's nothing worse than giving feedback on something and then being told "oh yes, I was going to do that" … it should have been done (preferably), or at least there should be a note to the effect that it's still to do. Why do I say this? The first reading is always the most effective … a second reading is inevitably shaped by the results of the first.</div>
<div>
<br /></div>
<div>
Finally, make sure that you've fixed the problems pointed out first time before you ask someone to read it again!<br />
<br /></div>
<h3>
Preparing for your viva</h3>
<div>
<br />
First, make sure you check when it is, and keep in touch with the internal examiner and supervisor about what they need you to do specifically to prepare for the viva.</div>
<div>
<br /></div>
<div>
Candidates are often (usually?) asked to give a summary of the main contributions of the thesis at the start of the viva: this could be a 10-minute oral presentation (no slides) or (sometimes) one with slides … either way, it's worth being prepared, so check with your supervisor about this.</div>
<div>
<br /></div>
<div>
Read the thesis again in the days before the viva, so that you're familiar with what is where … it will probably be a few months since you submitted it, and so it will have gone out of your mind. While you're reading through, it makes sense to keep track of typos that you find, so that you can fix them in the final version. You can go armed with errata lists, either personal or to share with the examiners, that you compile while reading it.</div>
<div>
<br /></div>
<div>
Will your supervisor be present? The Kent rules are that they are only present with your explicit permission, and this is entirely up to you. At other universities the rules might be different, and other people might be there – this might be an independent chair of the proceedings, for example – or the viva might be recorded. It may be that you would like to invite your supervisor to be present after the viva is over, if there are corrections to be discussed once the decision has been made.<br />
<br />
Another way of preparing is to give a seminar to your research group: that will help you think about what your contribution is, and also how the parts of the thesis fit together, something that may well be a question in the viva itself. I've heard of people having a mock viva, but I'm not sure that's such a good idea … each viva will be different, and preparing by reading and giving a seminar should be enough.</div>
<div>
<br /></div>
<div>
Wear something comfortable … and take some water with you.</div>
<div>
<br /></div>
<div>
Probably you'll have access to a whiteboard, but just in case not it's useful to take along some paper if you want to draw diagrams or otherwise explain things visually or symbolically. </div>
<div>
<br /></div>
<div>
Once the viva gets going,</div>
<div>
<ul>
<li>think before you answer … it's no problem to consider what you want to say before you say it;</li>
<li>if you don't understand a question, ask for clarification … if it's not clear to you then it may well not be clear to the other examiner either, and</li>
<li>if things seem to get stuck into a discussion of a particular point, politely ask to move on. </li>
</ul>
<div>
Finally, some common questions that get asked,</div>
</div>
<div>
<br />
<ul>
<li>as I said earlier, examiners often begin by asking you to give a summary of the contribution that your work makes,</li>
<li>and this might be followed by going into some more depth about how the bits of the thesis fit together – it is supposed to be a "body of work" and it's worthwhile thinking about how you would explain that in your case;</li>
<li>other questions might ask about your knowledge of work by others in this field,</li>
<li>or get you to talk in detail about "your argument on p34" or "your proof on p31" … it's not all high-level and context, but will engage with the details of what you have written, and how you justify it.</li>
</ul>
</div>
<div>
So, good luck with the writing and the viva itself!</div>
<h3>The ideal graduate (24 March 2014)</h3>
The School of Computing has a stakeholder panel, made up of representatives from companies and other organisations that we work with. Its job is to give the school strategic advice on its teaching, research, enterprise and public-facing activities in general. In a recent meeting we discussed what it is that panel members look for in the interns, placement-year students and graduates that they recruit.<br />
<br />
One of the distinguishing features of computer science at Kent is that about eighty percent of students - over 100 this year - take a year's industrial placement between their second and final years. It's therefore not a hollow promise to say that their degree overall is a single package, delivered jointly by the school and its industrial partners. We teach what it makes sense to learn in the classroom, while our partners are able to give students the experience that can only come from working on real projects for real customers.<br />
<br />
So, what is it that this selection of our graduate employers look for?<br />
<br />
<ul>
<li>First, it goes without saying that <b>strong technical skills</b> are required: in particular, our students need to be able to program. Not all CS graduates are <b>competent programmers</b> (!) - and many employers will test applicants' programming competence. Theory and maths were discussed too. But what other attributes do they need?</li>
<li><b>Curiosity and the drive to learn.</b> It's clear that all our graduates will be working in a quickly changing environment, and so will have to keep up with new technologies and applications. When presented with a situation, the best graduates will ask "why?", and try to find out the answer. They will likely have an agenda of what it is they want to learn, and what skills they want to develop. As a school, we therefore need to make sure that we help our students to be independent learners, both individually and together in groups.</li>
<li><b>Approach.</b> It's not enough to roll up your sleeves and start coding … nor is a system finished when it compiles without errors and passes a handful of tests. Those who plan their work with pencil and paper before getting into the details are likely to perform much better. Similarly, systems need to be unit tested, but whole applications also need to be exposed to potential users, and tested to destruction. This leads on to …</li>
<li><b>Empathy.</b> Testing is just one aspect of this. There are end users who'll need to interact with what a developer does, and having some sense of their needs, and how they will use the system: listening skills, and understanding the context of work - what does drive this: money? user engagement? - isn't an optional extra. Others may well work with challenging groups, for example clients of social care, with a particular approach and needs. </li>
<li>The flip-side of this is being able to <b>tell a story</b>, or have a "value proposition". We can all suffer from this - I certainly have colleagues who can write research applications that are all about a great solution, when it's not clear what problem, if anything, it is solving. So, being able to appreciate context, and to explain an idea from a different point of view, is key.</li>
<li>More traditional, I guess, are <b>project skills</b>: team working, appreciating the lifecycle (there's testing again), agile processes (our students, and students in general, seem to cling to waterfall models), and general infrastructure skills. A nice angle on testing was "what will be the impact of this software on the system when I integrate it, in terms of functionality, performance and scalability?" - it's not just a matter of passing those few unit tests.</li>
<li>Many employers see students come in wanting to be developers, while ultimately they are looking for wider ambition. Employers are also looking for diversity in their recruits, not just in terms of gender, but across all dimensions of difference.</li>
</ul>
Looking at this list it's easy to see that some of these are skills which students will acquire through their placement year, or in the <a href="http://www.kitc-solutions.co.uk/">Kent IT Consultancy</a>, but there's a clear challenge there for academic departments in how they develop students as independent learners, curious about the world in general and tech in particular. Experiencing failure - in a "safe" environment - would also really help, as can immersion in a topic: a one-week intensive MSc project some years ago was one of the most positive teaching experiences I have had … that also reminds me of teaching Open University summer schools - a year's worth of university experience in a week!Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com1tag:blogger.com,1999:blog-6712179876742604415.post-64592672017064154472014-03-13T08:35:00.002+00:002014-03-26T20:14:09.428+00:00Monadic Functional Reactive Programming<div>
<span style="font-family: inherit;">Reactive programming - writing programs that interact with the "outside world" - presents a problem to the pure functional programmer: how to program in the traditional style in the absence of updateable state? But problems can also be opportunities, and in this case the opportunity is to rethink the problem from a different perspective. The result is "functional reactive programming", often abbreviated to FRP, originated by Conal Elliott and Paul Hudak.</span></div>
<div style="min-height: 13px;">
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="font-family: inherit;">The essence of FRP is to think declaratively, or in other words to think about how to make a mathematical model of the problem domain. The key insight is then to see that the functional world offers just the right set of abstractions to do this. A continuously varying quantity - such as the position of an object in an animation (or indeed in a robot world) - is given by a function from time to coordinates. Continuous quantities can trigger discrete events, for example when a position exceeds a certain value, and conversely events can trigger changes in continuous quantities - such as the rebound of a (simulated) billiard ball from a (simulated) cushion. So, FRP gives very general - and declarative - models of hybrid systems.</span></div>
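To make this concrete, here is a minimal Haskell sketch of the two classic FRP abstractions - the type and function names here are hypothetical, and the real libraries differ in detail: a behaviour is a function from time, and an event is a list of time-stamped occurrences.

```haskell
type Time = Double

-- A behaviour: a value varying continuously over time.
newtype Behavior a = Behavior (Time -> a)

-- An event: a sequence of time-stamped discrete occurrences.
newtype Event a = Event [(Time, a)]

-- Sample a behaviour at a point in time.
at :: Behavior a -> Time -> a
at (Behavior f) t = f t

-- The height of a falling object, as a behaviour of time.
position :: Behavior Double
position = Behavior (\t -> 100 - 0.5 * 9.8 * t * t)

-- A continuous quantity triggering a discrete event: occurrences at the
-- sample points where the height has reached the ground (a crude
-- discretisation - the naive implementation mentioned below).
hitsGround :: [Time] -> Event Time
hitsGround ts = Event [ (t, t) | t <- ts, position `at` t <= 0 ]
```

Sampling at whole seconds, `hitsGround [0,1..10]` first fires at t = 5, just after the true root of 100 − 4.9t² at roughly t ≈ 4.52 - which already hints at the tension between the continuous model and any discrete implementation.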
<div style="min-height: 13px;">
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="font-family: inherit;">So far, so good. We have a very powerful model of a very general set of systems, but how is this to be implemented? We are in a functional language, so we can describe a naive implementation, but sadly that is desperately inefficient, and so the history of the last decade or so has been one of searching for implementations that are sufficiently efficient without losing too much of the original power and elegance of the framework. One approach is to make everything discrete, which is the approach of the Lustre language, but aiming to keep as close to true continuity as possible, at least as an abstraction, has proved to be more of a challenge. Elliott has provided a number of implementation ideas, as has Nilsson; the paper here provides a new approach, in which a monadic interface is designed for FRP.</span></div>
<div style="min-height: 13px;">
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="font-family: inherit;">Under the monadic approach, continuous behaviours are allowed to terminate (and also to begin) and so there is a natural sequencing operation between them. As well as giving a more familiar API to the programmer, the author claims that this allows a clean - i.e. declarative - yet efficient implementation of the system. While it is too early to give a definitive judgement on this - we should wait to see what the take-up is - it may be that finally FRP has come into its own with this work. If so, we can look forward to a whole set of new libraries for high-level reactive programming in interface design, graphics, web programming and more. Watch this space …</span></div>
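As a rough illustration of why termination gives a monad - this is my own sketch, not the API of the paper under review - a terminating behaviour can be modelled as something that consumes part of an input stream and hands the remainder on to whatever runs next; sequencing is then exactly monadic bind.

```haskell
import Control.Monad (ap)

-- A terminating behaviour (hypothetical model): it consumes samples from
-- an input stream until it finishes with a result, returning the
-- unconsumed remainder of the stream for the next behaviour.
newtype Sig i a = Sig { runSig :: [i] -> (a, [i]) }

instance Functor (Sig i) where
  fmap f (Sig g) = Sig (\xs -> let (a, rest) = g xs in (f a, rest))

instance Applicative (Sig i) where
  pure x = Sig (\xs -> (x, xs))
  (<*>)  = ap

instance Monad (Sig i) where
  -- Run the first behaviour to termination, then start the second
  -- on whatever input remains.
  Sig g >>= k = Sig (\xs -> let (a, rest) = g xs in runSig (k a) rest)

-- A behaviour that runs until a predicate fires, yielding that sample.
waitFor :: (i -> Bool) -> Sig i i
waitFor p = Sig go
  where
    go (x:xs) | p x       = (x, xs)
              | otherwise = go xs
    go []                 = error "input ended"

-- Sequencing two phases of behaviour reads naturally in do-notation:
twoPhases :: Sig Double (Double, Double)
twoPhases = do
  up   <- waitFor (> 1.0)   -- first wait for the signal to rise
  down <- waitFor (< 0.0)   -- then wait for it to fall below zero
  pure (up, down)
```

For example, `runSig twoPhases [0.5, 1.5, 0.2, -0.3, 1.0]` yields `(1.5, -0.3)` with `[1.0]` left over - each phase begins where the previous one ended, which is the "natural sequencing" described above.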
<div>
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="font-family: inherit;">This review of <a href="http://dl.acm.org/citation.cfm?id=2503783">Monadic Functional Reactive Programming, from the 2013 Haskell Symposium</a>, was drafted for Computing Reviews, but too long for their liking. So, I have posted it here instead :-(</span></div>
Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com0tag:blogger.com,1999:blog-6712179876742604415.post-9738308051023718682014-02-18T18:02:00.001+00:002014-02-18T18:08:38.783+00:00Why I lectureSometimes it's a real joy to give a lecture: the topic is right, and you have managed to get the slides to say what you want without clutter or confusion. This happened to me today, in giving my final lecture for a while in our “Functional and Concurrent Programming” course at Kent. I wanted to pull together all that we'd covered about functional programming, and do this by treating a “meaty” example. I chose language processing: <a href="https://player.kent.ac.uk/Panopto/Pages/Viewer/Default.aspx?id=43784792-3d33-4e10-bd75-7deb5baf1a64">here's the lecture recording</a> and <a href="http://www.cs.kent.ac.uk/~sjt/co545/Lecture10.pdf">here are the slides</a>.<br />
<br />
Why choose language processing? It’s central to a CS course: even if our graduates are unlikely to write a compiler, there’s a much higher chance of writing something that will manipulate XML, or a domain-specific language, or build scripts, or test code … the list goes on. It's also a place where the combination of immutable data, expressive structured types, a highly developed lists library, and an interactive development loop works really well together. Anonymoushttp://www.blogger.com/profile/02841876350884552299noreply@blogger.com0
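A flavour of why these features fit language processing so well - a tiny hypothetical example, not the one from the lecture: an algebraic datatype captures an expression grammar directly, and both evaluation and a constant-folding pass are short structural recursions over it.

```haskell
-- A tiny expression language: the datatype *is* the abstract grammar.
data Expr = Lit Int
          | Add Expr Expr
          | Mul Expr Expr
          | Neg Expr
  deriving (Eq, Show)

-- Evaluation: one equation per construct, by structural recursion.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add l r) = eval l + eval r
eval (Mul l r) = eval l * eval r
eval (Neg e)   = negate (eval e)

-- A source-to-source pass - constant folding - follows the same shape,
-- and immutability means the input tree is never accidentally changed.
simplify :: Expr -> Expr
simplify (Add l r) = case (simplify l, simplify r) of
                       (Lit a, Lit b) -> Lit (a + b)
                       (l', r')       -> Add l' r'
simplify (Mul l r) = case (simplify l, simplify r) of
                       (Lit a, Lit b) -> Lit (a * b)
                       (l', r')       -> Mul l' r'
simplify (Neg e)   = case simplify e of
                       Lit a -> Lit (negate a)
                       e'    -> Neg e'
simplify e         = e
```

Loading this into an interactive session and trying `eval (Add (Lit 2) (Mul (Lit 3) (Lit 4)))` or `simplify (Mul (Lit 2) (Add (Lit 1) (Lit 3)))` shows the tight develop-and-test loop the post describes.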