Psychology of Programming
Interest Group

Newsletter: November 2005

PPIG 2005 Workshop Review Edgar Chaparro describes what went on at the last PPIG workshop held in Brighton, UK.
Work in progress workshop Andree Woodcock provides us with information about a forthcoming work-in-progress workshop
PPIG 06 Some preliminary information about the next PPIG workshop.
Book and Journal Reviews - Chris Douce reviews Handbook of Mathematical Cognition edited by Jamie Campbell
What should we be doing? Ruven Brooks and Carl Hardeman give their points of view on areas and topics which may be of interest to the psychology of programming community
Newsletter Interview: Professor Jorma Sajaniemi has agreed to be interviewed for this edition of the newsletter
Conferences, Workshops and Call for Papers
A brief overview of theories of learning to program is a short article by Steven C. Shaffer
Spotlight on PPIGers where you are and what you are doing
EUSES Consortium Margaret Burnett tells us the latest news about the End Users Shaping Effective Software Consortium
Doddery Fodder  a busy Frank Wales tells us of a number of new TLAs
Light tones and connected links  Chris Douce introduces a class of paradigm busting URLs

Editor: Chris Douce


Welcome to the Autumn edition of the Psychology of Programming Interest Group newsletter. This issue, possibly the biggest yet, is packed with a varied array (no pun intended) of interesting and informative articles. I hope you like what you find. If you have any comments about the articles found here, or if you would like to contribute something for the next edition, please do get in touch.

Edgar Chaparro writes a review of the last workshop held in Brighton, UK. The workshop was organised by Pablo Romero and his associates at Sussex University. Many thanks are extended to the Sussex team for organising a fun and informative event.

Ruven Brooks and Carl Hardeman both answered the rather vague question 'what do you think we should be looking at?'. Many thanks are extended to Ruven and Carl for their considered replies, which will give many of us food for thought.

This issue also contains a review of a particularly interesting book, A Handbook of Mathematical Cognition - the first 'psychology' rather than 'computing' book to be reviewed for some time.

New to this issue is the PPIG interview. Many thanks to Professor Jorma Sajaniemi from the University of Joensuu, Finland for agreeing to be interviewed for this edition of the newsletter.

Thanks are also extended to Frank Wales, who returns with his light-hearted look at the eclectic and thought-provoking.

As mentioned in earlier newsletters, we're crying out for book, journal or paper reviews. Remember, please feel free to submit articles at any time. The newsletter is intended to serve you: researchers and practitioners who have an interest in the human aspects of software development and computer programming.

Chris Douce

PPIG 2005 Workshop Review

by Edgar Chaparro

Workshop Review

The seventeenth annual workshop of the Psychology of Programming Interest Group took place in Brighton, a lovely city in the south-east of England. The papers ranged from keynote talks through technical papers to 'work in progress' papers that described a great range of exciting projects.

Ken Kahn gave the first keynote: a very interactive talk which discussed the trade-offs of concretising computational abstractions in children's programming environments.

The first session was about Collaborative Programming. A group from Sussex (myself included) tried to uncover some of the factors that could affect pair programming. Later, Pablo Romero did a great job presenting Sallyann's paper, which discussed discrepancies in the literature about rating expertise in collaborative software development. For those who don't know, she couldn't come because she gave birth to a handsome boy called Jacob, maybe a future PPIG member. Rosen showed us the importance of interpersonal relationships in software development teams. To close this first session, Cheng et al. presented a work in progress paper about an exploratory pair programming study which is being carried out at Stanford University.

For the first social event of the workshop we went to Brighton Pier, where our friends from overseas could try the famous English fish and chips. Later, a walk around the city centre ended up in a nice bar called 'Zoot Street', where Chris and Pablo showed us a little of their entertainer's side with a very funny pub quiz. I must say that the pub quiz has strengthened my knowledge of Finnish culture.

The second day started with a session on graphical visualisations. Markku Tukiainen et al. presented a very interesting empirical study of gaze behaviour during a dynamic program animation; the non-intrusive eye tracking used for the study was very interesting. Later Pablo et al. discussed the use of multiple sources of information in software debugging environments.

Both studies gave great insights into expertise, or should I say experience. The next paper, presented by Seppo Nevalainen, was a study of the short-term effects of graphical versus textual visualisations of variables on program perception. Closing this session, Philip O'Brien argued in favour of exploring, from a theoretical perspective, the use of spatial cognition during program comprehension.

A session entitled Programming, Creativity and Creative Arts contained several exciting presentations. Greg Turner, a great guy from Australia, was the first speaker. I met him the day before, during the doctoral consortium, and it is hard not to like his idea: how can we make programming more comprehensible to artists? Later, Alan Blackwell gave a great presentation! Indeed his paper was a resounding success. We should have more Live Coding sessions!

His presentation gave us some thought-provoking ideas about programming environments of the future - something to be explored in future workshops. Finally, Ronald Leach and Caprice Ayers showed their work in progress paper that intends to establish an agenda for the study of creativity in programming.

The third session of the day was about professional software development. In this session Jorma Sajaniemi presented a paper which explores the role of variables in experts' programming knowledge. Later, Pamela O'Shea and Chris Exton argued about the role of source code within object-oriented Java program summaries describing maintenance activities. John Sung from Sussex presented a cognitive approach to software engineering. Deirdre Carew et al. closed this session, presenting their study investigating the comprehensibility of requirements specifications.

The design and tool session brought three work in progress papers. Andree Woodcock presented a pilot study that strengthened the idea of studying programming as a design activity. John Sturdy intends to build a tool that could support programmers' memory; I will definitely need such a tool. Finally, Luke Church introduced #Dasher, an integrated development environment based on continuous gestures.

The last session of the day was about Methodologies. Laura Beckwith presented a methodology used in a study conducted in cooperation between Oregon State University and Drexel University. It was a qualitative investigation aiming to uncover the impact of gender on problem-solving software features. A very interesting paper, it gave me much food for thought; it is interesting to consider how gender affects collaborative programming. Closing the day, Enda Dunican presented a framework for evaluating qualitative research methods in computer programming education.

On the second night of the workshop we had our social dinner, trying the best of Japanese culinary tradition at Moshi Moshi. It was a lot of fun, and I have to acknowledge Marc Eisenstadt's effort in finding a camera to record it. He found one! So take a look at his [off-site] blog; maybe you are there.

The last day of the workshop was an extended session about teaching programming. There were very interesting papers. Daniel Farkas and Narayan Murthy showed the attitudes of students toward introductory programming courses for non-majors. They aim to investigate the reasons contributing to the decline in enrolment in computing programs.

Pauli Byckling and Jorma Sajaniemi presented a study about the roles of variables in teaching programming. Susan Bergin and Ronan Reilly talked about the influence of motivation and comfort-level in a first year object-oriented programming course. Finally, Jim Ivins and Michele Poy-Suan Ong presented a psychometric study of computing undergraduates.

Marc Eisenstadt closed the 17th PPIG Workshop with an illuminating keynote presentation in which he discussed the relevance of the topics covered and outlined some of his ideas about social software. He talked about blogs, group wikis, RSS news feeds and so on, raising a lot of questions about how we can use these new technologies for our benefit. Finally, he tried to predict what a PPIG workshop could be like 5 years from now.

I would like to finish this with a personal note. It was my first PPIG workshop and I found it a great opportunity to meet people with similar interests. Everyone was very kind and helpful which helped to create an amazing environment. I believe that the format used for the event, where works in progress were presented together with full papers, encouraged valuable feedback for all participants, in particular the younger ones.

Finally, thanks to Pablo and his team for organising everything so well. They managed to create a successful event that I hope will continue growing, but always within such a friendly environment.


Work in Progress PPIG Workshop

Unroll Your Ideas: a work-in-progress meeting of the Psychology of Programming Interest Group

January 12-13 2006
Coventry School of Art and Design, Coventry University, UK

PPIG aims to bring together people working in a variety of disciplines and to break down cross-disciplinary barriers.

Despite its name PPIG entertains a broad spectrum of research approaches, from theoretical perspectives drawing on psychological theory to empirical perspectives grounded in real-world experience, and is equally concerned with all aspects of programming and software engineering, from the design of programming languages to communication issues in software teams, and from computing education to high-performance professional practice.

Besides an annual workshop series, PPIG also organises occasional meetings such as this one.


This informal workshop is intended to foster exchange of ideas and constructive suggestions for research in progress. Doctoral students and more experienced researchers will be equally welcome. The intention (depending on submissions) is to use mornings for short presentations and afternoons for discussion.

In order to allow ample discussion in round-table style, numbers will be limited.


Intending participants are requested to register their interest as soon as possible, preferably by supplying a short (half-page) abstract, to be followed by an extended abstract of 2-4 pages.

The extended abstract should be submitted one month before the workshop to allow time for review - we shall aim to be inclusive but we may have to limit numbers of acceptances. Accepted abstracts will be posted on the web in advance of the meeting to allow participants to read and reflect on them.

Please email abstracts in PDF format to:

Technical Committee

Maria Kutar, University of Salford
Thomas Green, University of Leeds
Marian Petre, Open University
Andree Woodcock, Coventry University


PPIG 06 - Preliminary Details

The PPIG 2006 workshop will be co-located with the ACM Symposium on Software Visualization (Softvis 06) and the IEEE Symposium on Visual Languages and Human-Centric Computing in Brighton, UK. More information about Softvis can be found in the conference and call for papers section of this newsletter.

Aims and Scope

The annual PPIG workshop is a forum in which researchers concerned with cognitive factors in software engineering can present and discuss recent results, findings and developments.

A feature of the PPIG workshops has been their openness to a wide spectrum of concerns related to programming and software engineering, from the design of programming languages to communication issues in software teams, and from computing education to high-performance professional practice.

Similarly, PPIG entertains a broad spectrum of research approaches, from theoretical perspectives drawing on psychological theory to empirical perspectives grounded in real-world experience. PPIG aims to bring together people working in a variety of disciplines and to break down cross-disciplinary barriers.

More information relating to paper submission deadlines will be distributed through the PPIG announce forum. More details will be available in the next newsletter.


Book and Journal Reviews

by Chris Douce

Handbook of Mathematical Cognition

by Jamie ID Campbell
Psychology Press, 2005
ISBN 1841694118

When I was a youngster I remember talking to a wise elder relative about my interest in computers. 'Ooh, you have to be good at maths to work with computers' came the reply. 'Is this really the case?' I asked myself. Shrugging my shoulders, I continued what I was doing: typing 'LOAD' then pressing play on the cassette recorder.

Several years later when I was doing some work experience, messing around with an implementation of Basic on a Compaq 'portable' (the type that hospitalised many a keen executive), I asked the IT man a question. I asked him: 'do you really have to be good at maths to be good at computers?' (Meaning, at the time, to have a career in IT; to be a software developer). Of course, my question was profoundly simplistic. The reply that I was given was a pragmatic one and was administered with a sagely nod: 'I think it might help'.

At its most fundamental level software is, of course, entirely numerical. When writing the simplest of programs you are immediately reminded of the connection between programming and maths. You are faced with expressions which comprise conditional operators, have to construct functions that return values, and make data types that are built from different kinds of number: integer, real and boolean.
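The point can be made concrete with a few lines of Python (a hypothetical snippet, not drawn from the book): even a trivial function juggles all three kinds of number.

```python
def classify(x):
    """Describe x using a conditional expression and three kinds of number."""
    is_positive = x > 0        # boolean: the result of a comparison
    magnitude = abs(x)         # real: a floating-point value
    rounded = int(magnitude)   # integer: the real truncated
    sign = "positive" if is_positive else "non-positive"
    return f"{sign}, magnitude about {rounded}"

print(classify(42.5))   # prints "positive, magnitude about 42"
print(classify(-3))     # prints "non-positive, magnitude about 3"
```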

I have been aware for some time of a subfield of cognitive psychology which tries to understand how we deal with mathematical concepts and ideas. The publication of a book called 'Handbook of Mathematical Cognition' has allowed me to see how this topic is profoundly connected to our own.

Handbook of Mathematical Cognition, a collection of papers representing the state of the art, is divided into five parts. Part one is entitled 'cognitive representations for numbers and mathematics'. Unsurprisingly, this first part will be of interest to those researchers who study the notational aspects of programming languages and systems (whether programming languages or design tools).

The second part is entitled, 'learning and development of numerical skills'. The third part is, 'learning and performance disabilities in math and number processing' followed by 'calculation and cognition' and the part that I found particularly fascinating, 'neuropsychology of number processing and calculation'.

A key question addressed is which faculties are used to solve mathematical problems - how is 'number' represented and manipulated? Does language have an effect on the representation of numbers (in terms of how numerical values are presented in natural language)?

It's always fun to look beyond your own immediate discipline, to see what kinds of experiments other researchers are coming up with and whether certain approaches are adopted by others. One experiment described shows that it takes more time to decide which of two numbers is larger when the numbers are close together (e.g., 5 and 6 versus 2 and 8).

Here we find further parallels between the psychology of computer programming and the psychology of mathematics (as a programmer I am continually telling a machine to discriminate between different sizes of numbers). In both computer programming and mathematics we are obviously constrained by bounded rationality. Only when we try to understand what tasks and concepts are difficult can we begin to understand how we can improve our educational approaches or develop tools that offer useful support.

I do have my favourite papers. Development of Numerical Estimation by Siegler and Booth is one. Stereotypes and Maths Performance by Ben-Zeev, Duncan and Forbes explores an issue common to mathematics and computing: the gender imbalance. Mathematical Cognition and Working Memory by LeFevre, DeStefano, Coleman and Shanahan is also good fun. I also recommend Spatial Representation of Numbers by Fias and Fischer.

Strategy selection and usage appears to be an important subject. In the psychology of programming, strategy and the adoption of strategy during comprehension (as well as production) is a perennial topic. Since the domain of mathematics is particularly precise (it sidesteps the notorious confound of problem-domain knowledge), studying the literature regarding strategy selection in mathematical performance may yield interesting insights into programmer performance.

Other papers illustrate how learners apply different strategies as expertise increases. Mathematical operations progress from simple finger-based counting, to verbal counting strategies, to memory-based strategies in which mathematical facts are recalled from long-term memory, freeing working memory to attend to more complex operations. Of course, there is much similarity with work performed in the area of programming expertise. Interestingly, there is also an exploration into the performance of exceptional performers.

There also exists a parallel with our own attempts to create models of programming activities and action. This is clearly evident in the paper Architectures for Arithmetic by Campbell and Epp.

The Handbook of Mathematical Cognition is not an introductory text. The papers demand a degree of familiarity with the domain, but the literature review papers will help you to get oriented. It's more like a reference book - something to dip into from time to time to find out what others are working on. What is really great about this book is the referencing - it gives you a whole new set of jumping-off points into the literature which may inspire some interesting ideas.

But do you have to be good at maths to be a good programmer? Of course, the answer to this question can never be a simple yes or no. I find myself agreeing with the IT man who I asked the same question many years ago, 'I think it may help.' My conclusion? Certainly recommended. (But ask your library to get it since it's fairly expensive!)

Do you know of a journal that may be of interest to fellow PPIG members? If so, please tell us about it. Interested in writing a review, or perhaps you are an editor and would like to introduce your journal to us? Please feel free to send a message to chrisd(at)


What should we be looking at?

Through the discussion list I asked the PPIG community what issues we should be studying. I took the opportunity to put this question to Ruven Brooks, a long-standing contributor to the psychology of programming community. Many thanks are extended to Ruven Brooks and Carl Hardeman for their considered replies.

Ruven Brooks

The phrase 'programming in the large' was introduced in 1975 by Frank DeRemer and Hans Kron to emphasize the difference in tasks and activities between software development done by one or two people and software development done by larger groups.

One widely circulated model of effort allocation in large scale development claims that as little as 20% of the effort may be devoted to writing code that will be part of the delivered product. Since most psychology of programming work has focused on the coding phase, a useful guide for research directions may be to look at psychological aspects of those activities that take up the remaining 80% of the resources.

One way to view all of these other activities is that programming is a problem solving process that begins with a problem to be solved. At the beginning, this problem is not the problem of writing a program, but rather is some need from outside the programming domain: 'give me an easier way to manage my schedule than a paper calendar.'

As with all problems, not just 'ill-defined' ones, a problem elaboration activity takes place. This is referred to as requirements gathering or requirements analysis. As the features of the problem emerge, design solutions at various levels are proposed for them. Eventually, the design becomes refined to a detailed enough level to permit coding in an existing programming language.

What parts of this process are well supported by research and which ones have been neglected?

The upstream problem elaboration process has been an area of active research; topics such as task analysis and user modeling have a large research literature. The design process has been the subject of substantial study, with concepts such as 'opportunistic design' being used to explain design behavior.

In software, there is a large body of work on design and specification notations, although little of the work is focused on behavioral issues. Where the light of investigation seems dimmer is at the boundary between specification and coding.

How much specification is effective? Can too much specification actually reduce coding performance? What specification notations work best? Are more formal ones really better in terms of the coding product produced? Is it useful to present the same information in more than one format? How much role does general domain knowledge play in interpreting specifications? Is the design affected by the order in which a specification is presented? These are all questions which could benefit from more investigation.

Although the body of work on coding is large (though probably not large enough), nearly all of it has been focused on general purpose programming languages, particularly those taught in academic environments to novices.

In commercial software production, though, there is a great deal of work done in scripting languages, particularly those associated with an application. For example, there are install script languages, build scripting languages (MAKE) and test scripting languages. In particular, for those selling packaged software, install scripts are critical, since if a product will not install, or worse still, messes up already installed products, the customer is far more likely to make a support line call or return the product than if there are bugs in the code once the product is installed.

In addition to working with different objects than general purpose languages, these scripting languages often have very different syntax and control flow. It may well turn out to be the case that what is known about general purpose languages applies to these languages as well, but the question is still entirely open.

In software development environments which produce software to be sold externally, testing is a major use of software development resources. The number of testers may equal or exceed the number of programmers, and even in methodologies such as Extreme Programming which focus on small programming teams, programmers spend significant amounts of time on test code.

Among the questions to be answered are: what factors affect a person's choice of which tests are to be performed? How good are programmers at testing their own code? Are they better at testing other people's code than their own? What are the individual differences in testing? How do you train novices to do testing? The research areas mentioned earlier also interact with testing; there are test script languages, and 'black box' testing starts with the specifications, so script language and specification research questions have their testing aspects.

Alas, my perception is that most software companies are currently far more interested in transferring work to lower wage countries than they are in understanding software development and providing better tools; nevertheless, I suspect that research in the areas outlined is more likely to be seen as relevant and, at least, a starting point for interaction, than the past areas of psychology of programming.

As well as being a seasoned industry professional, Ruven published his first psychology of programming paper in the International Journal of Man-Machine Studies back in 1977. He is widely known for his work on 'program beacons', which continues to inspire and direct empirical research exploring strategies of program comprehension.

Carl Hardeman

It is clear to all but management, professional project managers and most developers that the problems with getting software correct stem from:

  1. Inherent complexity. Stop here if you do not understand that the number of business cases and test cases explodes exponentially (like a travelling salesman's route-optimization problem) and therefore testing can never be more than sampling. Design must be simplified and abstracted, just like outlining a chapter of a book in Ms Thistlebaum's English Literature class.
  2. Constant change. Design for change.
  3. Extreme requirements for performance, reliability, volume, etc.
  4. Maintenance of conceptual integrity (from Fred Brooks of Brooks' Law fame).

So the questions I suggest PPIG address are:

  1. Do developers fail to recognize complexity or shun it based on their own confidence?
  2. Why do developers detest engineering, particularly measurement and statistical process control, and how can we overcome that? This is a sine qua non for getting software right.
  3. What definitive objective statements can be made as to when a software product is ready for production use?
  4. Why do we fail to recognize a big ball of mud for which the only solution is redesign? [off-site] Big Ball of Mud
  5. Why do we fail to understand that the business model changes of the past few years (from an after-the-fact damage-control model to a realtime state-event process-control model) require a rearchitecting of software rather than an extensive patch-up?
  6. Why are there massive projects and failures? One could argue there are no massive projects, only large-scale integration of smaller projects - none of which should be so large that intellectual control over it is lost. For instance, the US Treasury Department income tax system was a failure. Could they have written a small system which handled only the simple 1040EZ forms, which would have had the large majority of taxpayers up and running on a new system in a short time, and then developed separate systems to handle the more complex cases?
    Yes, they could have, but they worked on a failed monolithic model.

Failing to recognize basic complexity, and to handle it with engineering methods, is the problem with software quality. It seems most cannot see it, any more than they can see Snuffleupagus or the Emperor's new clothes.

I once knew of a system with 100+ variables which could affect the outcome. Assuming they are bimodal (and many were multi-modal), that's 2 to the 100 minus 1 cases - more cases than there are stars in the observable universe. And that ignores further complexity, such as the variables having to occur in specific sequences. That makes DNA look easy. Yet I imagine the failed new US IRS system had at least that complex a situation.
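Hardeman's arithmetic is easy to check. A quick sketch (the star count is an assumption: astronomers' estimates run to roughly 10**22-10**24 stars in the observable universe):

```python
# 100 bimodal variables give 2**100 - 1 non-trivial combinations of settings.
cases = 2 ** 100 - 1
stars = 10 ** 24          # assumed generous upper estimate of stars

print(f"{cases:.3e} cases")   # about 1.268e+30
print(cases > stars)          # True: more cases than stars
```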

That should help with the understanding that

  1. you cannot test when you have a combinatorial explosion of cases - you can only sample
  2. therefore you can only use statistics (e.g. standard error of the estimate and Markov chain analysis) to make statements about being ready for production, and
  3. these problems cannot be overpowered with human intellect alone and require engineering e.g. Cleanroom Software Engineering using sequence enumeration for specification and statistical sampling for verification.
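Point 2 can be sketched in a few lines. Here `run_case` is a hypothetical stand-in for a real test harness, and the standard error is the usual binomial one; real statistical verification (e.g. Cleanroom's usage-model-based testing) is considerably richer.

```python
import math
import random

def estimate_failure_rate(run_case, n_samples, seed=0):
    """Randomly sample n_samples test cases; return (p_hat, standard_error)."""
    rng = random.Random(seed)
    failures = sum(run_case(rng.random()) for _ in range(n_samples))
    p_hat = failures / n_samples
    se = math.sqrt(p_hat * (1 - p_hat) / n_samples)
    return p_hat, se

# Hypothetical harness: the product fails on roughly 2% of its input space.
flaky = lambda x: x < 0.02

p, se = estimate_failure_rate(flaky, 10_000)
print(f"estimated failure rate: {p:.3f} +/- {1.96 * se:.3f} (95% CI)")
```

Sampling can only ever bound the failure rate statistically; it can never prove the absence of failures across a combinatorially exploding case space.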

Having rambled through all that: how can PPIG contribute to the improvement of software quality? Clearly coding is a minor issue. I am positive your members can frame the proper questions once they appreciate the correct problem.

As a Master Gardener, one grows to understand gardening is about the soil as the plants will generally take care of themselves once planted in a nice prepared bed of soil.

Carl Wayne Hardeman has 40+ years of experience as a developer and 25+ years as adjunct faculty in Computer Science.


Newsletter Interview: Professor Jorma Sajaniemi

There is one group of researchers that has been attending PPIG workshops with reassuring regularity.

Hailing from the University of Joensuu, Finland, the Finnish Group have carried out psychology of programming research using a number of different approaches and methods. They have adopted the engineering approach by developing tools to support programmers, have explored the sometimes controversial issue of programmer testing and entered the world of program understanding by carrying out experiments using eye-tracking systems.

More recently they have innovated in the area of computer science and programming education by introducing and evaluating the concept of variable roles, continuing earlier work in the area of program animation.

Professor Jorma Sajaniemi has agreed to be interviewed for this edition of the PPIG newsletter. As well as disseminating the fruits of his research through the PPIG series of workshops Professor Sajaniemi has also published his work in a number of other conferences and journals that will be recognisable to many: OOPSLA, International Workshop of Program Comprehension, Computer Science Education, ITiCSE.

Professor Sajaniemi, you can, without a doubt, be considered one of the PPIG regulars. Can you tell us what drew you to this field, the intersection between software engineering and human factors? Why is this area of continued interest to you and your group?

It all began in the mid-80s when I was a young Associate Professor of Computer Science at the University of Joensuu. I realized that students coming to the university with a background in high-school programming courses had severe problems in acquiring the principles of structured programming - an innovative programming technique at that time! The reason was 'obvious' to me: the BASIC programming language of those days.

It lacked control structures and led to spaghetti code with a lot of GOTO statements. So I wrote a letter to the editor of the main Finnish newspaper Helsingin Sanomat; the title was 'BASIC spoils brains'.

The next day my phone rang, and it was Pertti Saariluoma, now a Professor of Cognitive Science at the University of Jyväskylä, then an Assistant of Psychology at Joensuu. He said that my conclusion was correct but the reasoning was wrong. What a nice way to meet a new person!

So we started to discuss the psychology of programming. At first we didn't even realize that many words like 'memory' and 'icon' meant totally different things to the two of us. But we overcame these problems and eventually started to investigate the psychology of spreadsheet calculation. His interest was to understand human cognition; my interest was to help the poor spreadsheet author.

In the 90s I escaped university for a while and went to work in industry. After almost ruining a large software project, I soon found out that what was missing from the university curriculum was software engineering. Escaping back to academia, I introduced software engineering to our teaching.

I also realized that the problems in industry could not be solved by developing new tools and techniques in the standard way, i.e., by devising fancy things that are technically possible, but the limitations of human cognition must be recognized when designing tools and techniques.

Since then I have tried to help the poor programmer and software engineer by trying to find out ways to tailor tools and techniques to human cognition. Seeing the vast number of problems and the small number of researchers and research results - even worldwide - keeps me on this track.

When you were working in industry, can you think of any particular problems or situations that caused you to reflect on the practices you adopted as a part of your work?

One of the major problems was change management. It was not only hard to keep track of all changes but making even trivial changes was becoming almost impossible. Each change affected a number of program files, test cases, documents, training materials etc.

There were several variants of the system because of alternative database engines, operating systems, etc. A change could have different effects in different variants, and there were a number of special cases to be considered with each change that arose. It soon became very hard to remember which files and which special cases should be checked with each change. So we had a clear example of the limitations of the human memory system!

As a solution I developed the notion of 'interactive check lists', one for each change type, that automatically opened the relevant files for inspection, propagated changes from one file to others as far as possible, and reminded us of the special cases to be thought of - a tool to overcome the limitations of human cognition! This tool did not use the newest technological innovations. However, it was one of the things that saved the project.
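Ed: The 'interactive check list' idea maps naturally onto a small script. The sketch below is my own invention (the change type, file names and reminders are all hypothetical), not Sajaniemi's original tool; it is only meant to show the shape of a per-change-type checklist that opens affected files and surfaces special-case reminders.

```python
# Hypothetical sketch of an 'interactive check list' for change management.
# Each change type has its own list of actions: files to open for
# inspection and special cases to be reminded of. All names are invented.

CHECKLISTS = {
    "add-db-field": [
        ("open", "schema.sql"),
        ("open", "reports/summary.rpt"),
        ("remind", "Variant B uses a different database engine - check its DDL"),
        ("remind", "Update the screenshots in the training material"),
    ],
}

def run_checklist(change_type, open_file, notify):
    """Walk the checklist for one change type, opening each affected
    file for inspection and surfacing the special-case reminders."""
    for action, target in CHECKLISTS[change_type]:
        if action == "open":
            open_file(target)
        else:  # "remind"
            notify(target)

# Collect the actions instead of really opening an editor or popping up
# a dialog, so the sketch stays self-contained.
opened, reminders = [], []
run_checklist("add-db-field", opened.append, reminders.append)
```

The point is not the code but the principle: the tool, rather than the engineer's memory, keeps track of which files and special cases belong to each change.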

It seems that you consider the development of tools (to assist the programmer) to be an activity that goes hand in hand with the development of appropriate processes. Do you think process (and process evaluation) is just as important as the development of useful tools? Programmer education is, of course, one of the key topics within PPIG. As a part of your teaching activities do you present different types of software engineering process?

Process control seems to be the only way to ensure product quality. If we want to avoid omissions, detect errors, stay out of update clashes, etc., we must follow approved working habits and rules, i.e., we must have a well-defined process. Process improvement is hard in practice because there are so many contributing factors, ranging from resistance to change to a lack of appropriate tools.

Process development is also hard to approach scientifically because empirical evaluation of new process proposals is very expensive and still only indicative. Covering all of this in software engineering courses requires much more time than is usually available.

By tools I do not mean special-purpose computer programs only; a tool can be a notation (like UML), a method (like responsibility-driven design) or a process definition (like XP). Tools are supposed to help programmers and software engineers in their work or to ensure that processes are followed. Programmer education should give students understanding of the principles behind these tools so that they can adopt new tools easily.

One of the tools that I use as a part of a software development process is the spreadsheet, not only as a way of trying to calculate how long certain activities may take, but also to track bugs. Before your work on variables, you carried out some interesting work looking at how people work with spreadsheets. Can you tell us a little about this work?

My work on spreadsheets has two tracks. The work with Pertti Saariluoma concentrated on the structure of cognition, and spreadsheets were just a test-bed for the experiments. Consequently, the results were interesting for cognitive psychologists rather than directly usable for helping spreadsheet authors.

An example of the results was the finding that people use visual imagery in programming work - an important result for understanding cognition but very hard to utilize in devising better tools. The other track was the development of structured spreadsheets that tried to introduce application data structures into the realm of spreadsheets. The intention was to make the mapping between program structures and application domain structures simpler. The work resulted in several prototypes but this kind of development would require a commercial producer who would take care of making a real product.

In the psychology of programming research, I see (at least) three very different foci: structure of cognition and cognitive processes; structure of knowledge representations; and contents of knowledge.

Psychologists are mostly interested in the first two categories and research has mainly concentrated on them. Computer scientists have been more interested in the second category. For example, the work of Elliot Soloway in the 1980s revealed something about the structure of plan knowledge, but he studied the exact contents of that knowledge only superficially.

Our research on the roles of variables belongs to the third category: we have extracted expert programmers' knowledge in detail and taught it explicitly to novices, with success. I think that this third category - contents of knowledge - is most important when building tools for programmers and deserves much more attention in future.
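Ed: For readers new to the roles of variables, a role names the pattern formed by a variable's successive values. The short example below is mine, not from the interview; the role names in the comments follow the published roles-of-variables taxonomy (stepper, most-recent holder, most-wanted holder, gatherer, and so on).

```python
# An ordinary loop, annotated with variable roles.

def max_and_total(values):
    total = 0            # gatherer: accumulates all values seen so far
    largest = values[0]  # most-wanted holder: best value encountered so far
    for v in values:     # v: most-recent holder; the implicit loop index
                         # that 'for' advances acts as a stepper
        total += v
        if v > largest:
            largest = v
    return largest, total

print(max_and_total([3, 9, 4]))  # (9, 16)
```

Teaching novices to recognise such roles explicitly is exactly the knowledge-content approach described above.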

Your work on the roles of variables, an approach to help students understand how and where they can be used, is becoming increasingly influential. Is this an area that you are continuing to explore? More specifically, are you working on any ideas that may interest delegates for PPIG 06?

Having studied roles for four years now, we are little by little starting to understand what they really are about. However, we do not yet know what features of role-based animation make it so powerful for learning, nor do we know why animation works so well. We are continuing our research in these areas, and we hope to find answers that can be generalized to other animation environments as well. However, this type of research takes a lot of time and I cannot give any schedule for it!

As another line of research, we are currently planning some classroom experiments with other universities and high schools, including some in other countries. Our intent is to find out the applicability of roles in various contexts, such as object-oriented teaching. In addition to our own research, I would also like to see independent studies of using roles in teaching.

I have always found PPIG Workshops to be a great place to present on-going work (and even non-going work!) as well as final reports of quality research. As the PPIG Workshop is the only world-wide annual event in the psychology of programming, its proceedings should, however, be published in a more organized way. Of course, the variation among PPIG papers is huge - which is not a bad thing at all. During the last few years there have been some steps towards recognizing this variation in the publication process, but I think that there is still a need for a more permanent solution.

To find out more about Professor Sajaniemi's work, you can visit his [off-site] website.

[ top ]

Conferences, Workshops and Call for Papers

SIGCSE (Computer Science Education)

1-5 March 2006
Houston, Texas, USA.
[off-site] SIGCSE Conference website

Conference on Software Engineering Education and Training (CSEET)

April 19-21 2006
[off-site] CSEET Conference website

International Conference on Program Comprehension (ICPC)

June 14-16 2006
Athens, Greece
[off-site] ICPC Conference website

ICPC was previously known as the International Workshop on Program Comprehension (IWPC). The history of the IWPC event can be found by visiting the [off-site] IWPC event summary page.

Ed: I don't know whether this is a coincidence or not, but the dates seem to fit with the event below.

International Congress of Applied Psychology (ICAP)

July 16-21
Athens, Greece
[off-site] ICAP Conference website

Innovation and Technology in Computer Science Education (ITiCSE)

26-28 June 2006
University of Bologna, Italy
[off-site] ITiCSE Conference website

ACM Symposium on Software Visualization (SOFTVIS)

September 4-5, 2006
Brighton, UK

Software visualization encompasses the development and evaluation of methods for graphically representing different aspects of software, including its structure, its abstract and concrete execution, and its evolution.

The goal of this symposium is to provide a forum for researchers from different backgrounds (HCI, software engineering, programming languages, visualization, computer science education) to discuss and present original research on software visualization.

SOFTVIS has become the premier venue for presenting all types of research on software visualization. After SOFTVIS'03 in San Diego and SOFTVIS'05 in St. Louis, the third iteration of this conference series will be co-located with the [off-site] IEEE Symposium on Visual Languages and Human-Centric Computing and with the Psychology of Programming Interest Group (PPIG) Workshop in Brighton, UK.

We seek theoretical as well as practical papers on applications, techniques, tools and case studies. Topics of interest include, but are not restricted to, the following:

More details on the submission process will be published on the [off-site] SOFTVIS'06 website.

22nd International Conference on Software Maintenance (ICSM)

24-27 September 2006
Philadelphia, Pennsylvania, USA
[off-site] ICSM Conference website
There is also an associated [off-site] ICSM doctoral track.

Generative Programming and Component Engineering (GPCE'06)

November 22-26, 2006
Portland, Oregon, USA
[off-site] GPCE Conference website
GPCE'06 will be co-located with OOPSLA. I also recommend that you visit the [off-site] conference and workshop list.

[ top ]

A brief overview of theories of learning to program

By Steven C. Shaffer


There is a wide variation in the ability of students to learn to program, and 'there is a need to understand the fundamental causes that make [advanced] learners different from others, and the same applies for those students who fail to understand even the basics of the subject' (Mancy & Reid, 2004, p. ii).

This paper reviews the literature on the mechanisms of learning to program and the attributes of students with regard to their ability to acquire this skill.

Mancy, R. and Reid, N. (2004) Aspects of Cognitive Style and Programming. Proceedings of 16th Annual Workshop of the Psychology of Programming Interest Group (PPIG), Institute of Technology, Carlow, Ireland, 5-7 April 2004, 1-9.

How is programming learned?

The most common theory of how programming is learned is probably the 'schema accretion' model, wherein the student builds up a 'toolbox' of approaches for solving particular classes of problems. However, Davies (1994) argues for the importance of knowledge restructuring in the development of expertise, offering empirical evidence for the changes in the structure of programming knowledge that develop with expertise. Another theory is that expert programmers have more complete schemata than novices (not just more schemata, but fuller ones).

Novices tend to focus on the key words in the problem statement rather than the deep structure of the problem (Davies, 1994, p. 706). Novices also tend to work in the order of the schema that they are using (Davies, 1994, p. 707).

Davies, S. (1994) Knowledge restructuring and the acquisition of programming expertise. International Journal of Human-Computer Studies. 40, 703-725.

Mayer (1985) proposes that learning to program includes 'acquiring a mental model of the underlying (computer) system ... a mental model is a metaphor consisting of components and operating rules which are analogous to the components and operating rules of the system.' (p. 90)

Mayer, R. (1985). Learning in complex domains: a cognitive analysis of computer programming. The Psychology of Learning and Motivation, vol. 19, Academic Press.

In apparent contrast to those who theorize programming in a knowledge-centric manner, some see learning to program from the standpoint of procedural planning. For example, Mayer, Dyck & Vilberg (1986) found that training in appropriate procedural models may enhance students' ability to learn programming: 'Students who were given pretraining in the output of English in procedures learned [programming] much faster than those with no pretraining ... a straightforward conclusion is that procedure comprehension is a component skill in learning [to program], and this can be taught to novices' (p. 609).

Mayer, R., Dyck, J. and Vilberg, W. (1986). Learning to program and learning to think: what's the connection? Communications of the ACM, July 1986, vol. 29, no. 7.

Student attributes

Several studies, mostly performed during the late 1980s and early 1990s, have identified various individual student attributes as significant with regard to learning to program.

An empiricist view of learning to program says that the student must have appropriate previous experiences in order to learn to program; a nativist view says that the students differ innately in their ability to learn to program (Mayer, 1985, p. 121).

The following are some of the conclusions that researchers have reached:

  • Students' ability in math word problems is a significant indicator of the ability to learn to program (Mayer, Dyck & Vilberg, 1986, p. 608)

  • Motivation, background, interests and programming experience all are strong indicators of success in programming classes (Prabhakararao, 2003, p. 281).

  • A student's ability to program is significantly related to his/her scores on generic tests of translating a problem statement into a formal language (e.g., as in algebra word problems) and a test of ability to comprehend a procedure (Mayer, 1985, p. 125).

  • Working memory and prior achievement in mathematics were sufficient predictors of programming ability (Lehrer, Guckenberg & Sancilio, 1988, p. 102).

  • Adoption of strategies may be a function of efficiently using working memory; this also indicates that the development environment and the language selection will influence programmer behavior (Davies, 1994, p. 706).

Prabhakararao, S. (2003). Bringing educational theory to end-user programming. Proceedings of the 2003 IEEE Symposium on Human Centric Computing Languages and Environments, Oct. 28-31, 2003, pp. 281-282.

Lehrer, R., Guckenberg, T. and Sancilio, L. (1988). Influences of LOGO on children's intellectual development. In Teaching and Learning Computer Programming: Multiple Research Perspectives, R. E. Mayer (ed.), Lawrence Erlbaum Associates, Hillsdale, NJ.

The ability to learn to program is highly correlated with field independence, which is the ability for learners to 'restructure material in their own way, applying internally generated rules arising from prior experience or developed from cues in the material' (Mancy & Reid, 2004, p. iii).

The task of programming is not monolithic, however; different types of programming require different attributes. For example, learning modern object oriented programming requires different cognitive tasks than the older, procedural, programming approach (Romero, Lutz, Cox, and du Boulay, 2002).

At least since the 1970s there have been tests commercially available which purport to measure a person's aptitude for programming; however, these have not been widely used. Instead, a self-selection process has been the norm: if a student was able to persevere through a programming-oriented curriculum, the expectation has been that s/he was able to program reasonably well. However, this self-selection process loses effectiveness when grade inflation pressures allow untalented students to obtain good grades in a programming class.

Romero, P., Lutz, R., Cox, R., du Boulay, B. (2002). Co-ordination of multiple external representations during Java program debugging. Proceedings of the IEEE 2002 Symposia on Human Centric Computing Languages and Environments, Arlington, VA, USA.

Process attributes

Several aspects of the process of learning to program have come to light through research. A reasonable synopsis of this research is as follows: students learn to program through a consecutive series of assignments which help to engender in them a mental model, consisting of either a knowledge structure, a series of plans, or both, which is then used to solve more complex problems. A 'good' mental model is one that helps novices learn to program better (Mayer, 1985, p. 91).

The environment in which the learning occurs can either aid or hinder learning, based on familiarity and cognitive load requirements. Important environmental variables include the programming language used, the user interface, context, and the lab environment. For example, novices are often confused by the varying contexts in which the same programming language statement can be used (Mayer, 1985, p. 98).

Mayer (1985) found some evidence that 'the particular formalism used for expressing procedures may affect the relative difficulty of encoding particular types of transactions' (p. 120), and Ko & Myers (2005) report a similar finding. A student must master the semantics of a programming language before s/he will be able to predict the outcome of a program and therefore effectively write programs (Fay & Mayer, 1988, p. 63). These results may point toward a programming language version of the linguistic relativity hypothesis (see Shaffer (1996) and Shaffer (1997)).

The user interface used to develop programs during a course can have a significant effect on learning. A balance needs to be struck between two competing priorities: (1) using an interface which enables the student to learn without undue cognitive strictures, versus (2) using an ecologically valid interface which will enable the student (especially the vocational student) to actually perform programming tasks in a non-educational environment.

As already mentioned, certain user interfaces can be shown to be too mentally taxing for a novice programmer (Davies, 1994, p. 706): 'Research has demonstrated that many students have difficulty acquiring the semantic structure of the programming environment' (Fay & Mayer, 1988, p. 63), although repeated exposure to the environment lessens this difficulty (i.e., the student can eventually 'get used to' the user interface).

Finally, some (McBreen, 2001) propose that programming is best learned via a practical apprenticeship, which Jakovljevic (2003) describes as 'based on three phases: observation, scaffolding and increasingly independent practice' (p. 309). Thus, programming is looked upon as a task to be performed more so than as a field to be learned.

However, caution is required so that we do not fall into a belief that programming can be taught in a strict behavioral fashion; there is definitely a cognitive content to computer programming which must be nurtured in order to develop real mastery. For example, Mayer (1985) demonstrates that having an appropriate mental model of the computer enables students to transfer what they have learned to new programming situations.

Ko, A. and Myers, B. (2005). A framework and methodology for studying the causes of software errors in programming systems. Journal of Visual Languages & Computing, 16, pp. 41-84.

Fay, A. and Mayer, R. (1988). Learning LOGO: A cognitive analysis. In Teaching and Learning Computer Programming: Multiple Research Perspectives, R. E. Mayer (ed.), Lawrence Erlbaum Associates, Hillsdale, NJ.

Shaffer, S. (1996). [off-site] Resurrecting the linguistic relativity hypothesis. Unpublished paper.

Shaffer, S. (1997) Try cross-cultural training. Information Week, September 29, 1997.

McBreen, P. (2001). Software Craftsmanship: The new imperative. Addison-Wesley Professional, New York, USA.

Jakovljevic, M. (2003). Concept mapping and appropriate instructional strategies in promoting programming skills of holistic learners. Proceedings of SAICSIT, Gauteng, South Africa, pp. 308-315.

Summary & Conclusions

Exactly how students learn to program is not well understood, and thus designing training in programming is driven at the moment primarily by tradition. This question is strongly related to the psychology of programming; as the latter field deepens, perhaps the pedagogical questions will be answered as well.

Steven C. Shaffer works as a computer science lecturer and instructor at The Pennsylvania State University.

[ top ]

Spotlight on PPIGers

Would you like to tell other PPIGers where you are and what you are doing through the newsletter? If so, please e-mail chrisd(at)

Enda Dunican

I am conducting research entitled 'Grounded Theory Study of Novice Programming Students in Irish Third-Level Educational Institutions'

I am engaging in a qualitative study of novice computer programming using the Grounded Theory research method. Data collection will comprise three methods:

  1. Open-ended interviews
  2. Participant observation in programming labs
  3. Focus groups

Any feedback or comments PPIG colleagues could give me would be gratefully received, particularly from those with experience of using participant observation in computer programming labs or of engaging in focus group discussions with programmers or programming students. Furthermore, any references to qualitative data collection in this area would be very useful. I may be contacted at: enda.dunican (at)

Christian Holmboe

I finished my PhD earlier this year. The thesis, entitled 'Language, and the Learning of Data Modelling', should be of general interest to the PPIG community and can be [off-site] downloaded in PDF format.


This thesis belongs to the domain of computer science education research, but also deals with issues relevant for general educational science, socio-linguistics, and linguistic philosophy.

It addresses three aspects of the relationship between language and the learning of data modelling:

  1. scientific concept building;
  2. choice and use of natural language terms as labels for elements of the data model;
  3. discourse as mediating tool in collaborative learning environments.

The thesis is written from a situated cognition perspective focusing both on individual and distributed forms of knowledge. The data material comprises tape recordings of classroom interaction and students' written explanations of five scientific concepts from the domain of data modelling. Both high school students and first year university students were studied.

Against a Vygotskyan background, students' scientific concept building processes are described as a trajectory from initial hunches to holistic knowledge, influenced in parallel by definitional and practical knowledge.

Informed by the theories of Wittgenstein, it is shown how discursive practices have a strong influence on the conceptual understanding of the students, who seem to form consistent scientific language communities inside the classroom.

Focusing on the different language games coexisting in the discursive practices of data modelling, it is demonstrated that object-oriented modelling may not be as close to everyday reasoning as assumed, and that this accounts for some of the problems faced by students.

Other problems are related to the distinction between natural and artificial languages, which plays an important role both for scientific concept building and for labelling of attributes and entities. Furthermore, a framework is developed for analysing the students' discursive shifts between different semiotic systems and abstraction levels. Proficiency is characterized by the ability to manoeuvre seamlessly across this framework.

The work was carried out at the Department of Teacher Education and School Development at the University of Oslo. Please note that the four individual research papers included in the thesis have all been accepted/published in international journals and are, of course, subject to copyright and distribution restrictions.

Derek Jones

Derek Jones, a past PPIG delegate and discussion list 'rabblerouser', has boldly released his book, The New C Standard: An economic and cultural commentary, making it available to anyone with an internet connection.

It was released with a fanfare on the news site Slashdot with an interesting comment: 'one major new angle is using the results from studies in cognitive psychology to try and figure out how developers comprehend code' (!)

The Slashdot news item can be found [off-site] here.

A related article can be found on the [off-site] Inquirer.

The book can be downloaded by following this link: [off-site] The New C Standard: An economic and cultural commentary

I feel this superhuman effort is certainly worth a look. The New C Standard is incredibly well referenced and anyone who is familiar with some of the psychology of programming literature will no doubt recognise many of the papers that he has taken the time to explore.

Derek Jones is an ex-compiler writer who is now an accomplished technical writer.

James Vickers

I have just completed my MSc ILT thesis, entitled 'Selecting students for first or national diploma based on problem solving diagnostics', which was the culmination of 3 years' worth of research into aptitude testing of computer programmers.

As a teacher working in an English further education college (mainly with students aged 16-19), I have been dismayed in recent years by how many students fail software development courses.

My management set me the task of finding out whether or not we could test to see if students were 'capable' of programming or not. I did some research into this area, and found that many of the existing tests were either commercial (and for that matter, aimed at Industry selection, not education) or inapplicable.

To cut a long story short, I devised my own test which has proven to be a predictor of general ability (about 90% accurate for 16-year-olds) and a direct indicator of performance by the end of the course: for example, a student who scored 51% on my test would gain a bare pass, while a student who scored 80% would gain a pass with distinction.

James can be contacted through his [off-site] website.

[ top ]

End Users Shaping Effective Software Consortium

The EUSES Consortium (End Users Shaping Effective Software) has developed a number of resources and activities for people interested in dependability issues that arise in end-user programming. These are publicly available, and we invite PPIGers to make use of them.

1. Errors resource

The EUSES Consortium has started a resource for collecting information about errors that occur in end-user programming. Currently, it features some particularly notable error anecdotes, and has links to papers on the subject as well as to other resources on errors, including EUSPRIG's excellent collection of spreadsheet errors.

It can be reached at: [off-site] EUSES end user programming errors

We are also always interested in receiving more material for the errors resources page.

If you have something to contribute to this resource, please send it to: eusesconsortium (at)

2. Special Interest Groups and Workshops on end-user software engineering

At CHI'05 (ACM Conference on Human Factors in Computing Systems) in Portland, Oregon, EUSES members Brad Myers, Margaret Burnett, Mary Beth Rosson, and Susan Wiedenbeck held a SIG on the topic of end users creating effective software.

This was a follow-up to the SIG held at CHI'04 in Vienna. The CHI'05 SIG was attended by about 60 people. The attendees each introduced their work, and then discussions took place on collaboration possibilities and items for a possible CHI'06 workshop.

Notes taken during the SIG are [off-site] available online.

We are now moving to a set of workshops, planned for ICSE, CHI, and other appropriate venues. The First Workshop on End-User Software Engineering (WEUSE I), organized by EUSES members Gregg Rothermel and Sebastian Elbaum, was held in conjunction with this year's ICSE, on May 21, 2005, in St. Louis.

The program for the WEUSE I was structured around four themes. These themes corresponded to the major topics presented in the papers that were accepted to the workshop. Each theme was introduced by a lead speaker who set forth his or her vision in the theme area, and was followed by a discussion session.

The four themes and their associated lead speakers were:

Proceedings from WEUSE I are available from the [off-site] WEUSE I website.

A direct link to the [off-site] proceedings in PDF form is also available.

Slides presented by the lead speakers for each of these themes are also available at the WEUSE I site.

And, stay tuned for announcements of WEUSE II.

Margaret Burnett, Project Director, EUSES Consortium

[ top ]

Doddery Fodder: Better late than never

by Frank Wales

I'm sorry I couldn't write sooner, but I've been too busy trying to Get things Done to get anything done.

What's that you say? Why did I capitalize 'G' and 'D', but not 't' in "Get things Done"? It's funny you should ask me that. You see, there's this new way of working that's been spreading through geek circles faster than the combined fears of avian flu and broken iPod screens: 'Getting things Done'. It's a way of organizing all the projects in your life that's been formulated by [off-site] David Allen, and isn't remotely like a cult (although I, for one, welcome our Next Action Overlord).

Alpha-geeks and the plain text fetish

Now, I'm sufficiently bad at organizing myself that I require professional psychiatric help, so I'm a sucker for anything that might boost my personal productivity. Oddly enough, GtD (as it's known among aficionados) actually seems to work for me, and does so without the need for expensive paper supplies or fancy software packages.

Over the last year or so, many web sites and mailing lists have arisen to feed the GtD frenzy, of which I list a piffling amount:

Those of us who have been programming computers since before PCs existed, and who have somehow managed to avoid having our [off-site] brains addled by fancy development environments, still value very simple, architecture-neutral ways of handling information: plain-text files, hand-written notes, yelling. So it's not really a surprise to discover that many so-called [off-site] alpha-geeks (as well as beta-geeks, and even released geeks) still rely on paper, big text files and other quaintly old-fashioned ways to manage personal work flow.

A Curious Coincidence

"That's all well and good", I somehow hear you say, "but how does this relate to programming?" Once again, I'm glad you asked.

Those who've dunked their head into the eXtreme Programming bucket will no doubt be aware of the XP notion of "stories" as a way to chop the endless serpent of new features into bite-sized chunks of billable work. One recommended way to manage the size of a "story" is to write it in your neatest hand-writing on a 3-inch-by-5-inch index card. Such a raw, physical constraint on the amount of writing (even when not done in crayon) helps to limit a customer's natural tendency to expand any particular "story" into Harry Potter and the Half-Baked Program.

Oddly enough, the GtD horde has independently discovered the virtue of using 3x5 index cards as the basis for tracking Next Actions associated with projects. In fact, they've even gone so far as to invent the [off-site] Hipster PDA, a card-and-clip alternative to the moribund Palm-type PDA.

Consequently, it seems just too obvious to use 3x5 cards with simple pieces of software functionality described on them, combined into GtD-like projects and managed according to GtD principles, as the basis for personal management of software development.
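As a hedged sketch of the idea (the function names and the 200-character budget are my own inventions, standing in for the physical limit of a card, and are no part of any official GsD canon), one card-per-feature could be modelled in a few lines of Javascript:

```javascript
// Hypothetical 'Getting software Done' card: one piece of software
// functionality per card, with a character budget standing in for the
// physical capacity of a 3x5 index card (200 characters is an assumption).
var CARD_LIMIT = 200;

function makeCard(story) {
  if (story.length > CARD_LIMIT) {
    throw new Error('Story too big for one card: split it into two.');
  }
  return { story: story, done: false };
}

function completeCard(card) {
  card.done = true;   // Getting software Done, one card at a time
  return card;
}
```

The hard constraint does the same job as the neat handwriting: an oversized "story" is simply rejected until the customer splits it.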


While he was head of HP Labs, Joel Birnbaum once gave a talk about how the fixed, simple standard of the mains wall socket has enabled startling innovation on either side of it. Using the miracle of Inappropriate Metaphor Transfer, beloved of desktop application designers everywhere, I have therefore decided that the 3x5 index card is the mains socket of stability that connects the 230 volts of eXtreme Programming with the Cuisinart of GtD. The result: something I've given the rather catchy name of "Getting software Done".

That's right, I'm trying to Get software Done using nothing but 3x5 cards, paper clips, cardboard files, little magnets, paper clips, clear plastic sleeves, lots of paper clips, and my very own labelling machine. Oh, and paper clips.

Lest you fear that I'm some kind of maniac for thinking like this, let me reassure you that I'm not alone. At least [off-site] one other person on the face of the Earth is doing software management this way, and I bow before this [off-site] awesome index card majesty.

A Theory of Testing

"But surely", I imagine you objecting in a tone of exasperated effrontery, "if GsD is to be of genuine value, it needs to be scientifically verified as being effective. Mankind cannot merely rely on the witty and erudite writing of one charismatic genius to be convinced. Nor, for that matter, on anything you might say." Well, now you're just being insulting.

The problem, of course, is that I can't redevelop the same software under identical circumstances using a wholly different method, in order to compare the outcomes. Fortunately, theatrical cosmology (also known as Star Trek) offers a solution. According to its "many worlds" theory, there are other mes in parallel universes already using other methods on the same software projects. So, all that all of me has to do is find a way to send notes inter-universally (perhaps on 3"x5"x7"x9" superhyper-index cards), and we'll discover which is best.

Given that I've thought of this in our universe, there must logically be a version of me that is more fired up about solving this problem than finishing this article. So, I can continue writing, safe in the knowledge that some-me else is working to complete this study, and communicate the results to all the rest of me. Hence, I can ignore the problem and wait for me to answer it anyway.

The only way this won't work is if it turns out that certain episodes of Star Trek are impossible: therefore, proving that GsD is valid becomes a special case of proving that Star Trek is completely possible, the so-called ST-complete theorem. (This is distinct from the ST-incomplete theorem, which posits that there is still at least one unmade Star Trek episode worth watching; unfortunately, [off-site] Star Trek: Nemesis is an astonishing proof that the ST-incomplete theorem is false.)

Second-system Syndrome

To demonstrate that I'm not only in the GtD groove, but also Web 2.0-aware, I have put pictures of my GsD set-up on [off-site] Flickr, and I've added the 'ppig' tag to some of my [off-site] bookmarks (which you can get as an [off-site] RSS feed too). Unless enough of you complain, I shall also be forced to create a podcast version.

"Web 2.0?!" comes your plaintive cry. And you thought the existing Web wasn't even out of beta-testing for version 1.0 yet. Well, listen up. Silicon Valley's [off-site] cash hydrants are once again being loosened, in preparation for dousing dangerously [off-site] inane and trivial ideas with suffocating amounts of filthy lucre. And this time, the danger has been identified as [off-site] Web 2.0.

In short, it's like Web 1.0, but [off-site] doubled.

Helpfully, [off-site] Wired has a lengthy article that is even more hubristic and self-important than usual. But a simple way of imagining [off-site] web 2.0 is to remove anything from web pages that isn't computer-crunchable data, label it at random with misspelled words, and then let other people convert it into live video widgets, creating so-called "Ajaxified Tagsonomy Mash-Up Streams" (ATMUS). Worrying about what that means for society is called 'ATMUS fear', which is something we need more of around here.

Still, don't get your hopes up for a trendy programming job with Aerons and lattes, since Paul Graham thinks [off-site] hiring is obsolete, so you're going to have to [off-site] lose your own money this time. (Unless you [off-site] disagree with Paul Graham, of course.)

And have some pity for Ted Nelson, who invented 'hypertext', but who [off-site] seems to be as far from achieving his visions as ever. (Nelson also [off-site] advocates creating software according to a cinematic model, with a visionary director in charge. Much as I'd be morbidly curious to see a spreadsheet by Quentin Tarantino, I don't think I'd trust my taxes to it.)

Jason Ajax: the crime-busting programmer with over-organized hair

Web 2.0's calling card is AJAX, a term invented by [off-site] Adaptive Path's Jesse James Garrett to explain to management what we indignant programmers now whine that we've been doing for years anyway. (A valuable side-effect of the buzz around 'AJAX' is that many Dutch football fans have been driven into apoplexy upon discovering that 'ajax' is one of the hottest search terms online, leading them to worry that their favourite football club [off-site] Ajax was in trouble.)

AJAX is short for 'Asynchronous Javascript and XML', but really refers to anything that tarts up the user interface by communicating with web servers without having to refresh the whole page. This decreases the time before some other web site you claim you didn't visit starts displaying pornography anyway, thus leading to the other kind of tarting up.
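For the curious, the core of the trick is tiny. Here is a minimal sketch (the function name and URL are hypothetical; it assumes the browser provides XMLHttpRequest, and omits the ActiveXObject fallback that older Internet Explorer versions need):

```javascript
// Minimal AJAX sketch: ask the server for a fragment in the background,
// then hand the response text to a callback -- no full page refresh.
// Assumes the environment provides XMLHttpRequest.
function fetchFragment(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);                 // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText);               // e.g. update part of the page
    }
  };
  xhr.send(null);
}
```

In a page you might call `fetchFragment('/latest-news', function (html) { document.getElementById('news').innerHTML = html; })` to tart up just one corner of the document while the rest stays put.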

The poster boy for AJAX is [off-site] Google Maps, although the AJAX part of that is actually pretty straightforward compared with the back-end system that serves up tileable, scalable maps of the entire civilized world plus [off-site] Kansas.

But, as in the movies, it's the surface gloss that attracts the attention and the babes, so get ready to start [off-site] polishing.

Those who have a constitutional dislike of XML-anything can instead consider using [off-site] JSON (pronounced 'Jason'). 'Javascript Object Notation', to give it its fairly dowdy name, is a way of bundling data as a little Javascript program that gets executed to set variables.
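A hedged illustration of that idea (the data object is invented for the example): the server sends text in Javascript literal syntax, and the period-typical client-side trick is to run it through eval, which of course trusts the sender completely.

```javascript
// JSON: data expressed as Javascript literal syntax. A server might send
// the text below; the client executes it to get an object back. eval
// runs whatever it is given, so only do this with a server you trust.
var jsonText = '{ "story": "buy paper clips", "done": false }';
var card = eval('(' + jsonText + ')');  // parens make it parse as an expression
// card.story is now 'buy paper clips'
```

No XML parser, no DOM tree to walk: the notation and the language are the same thing, which is exactly why the XML-averse find it soothing.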

There is now a [off-site] kerfuffle of companies announcing 'Web 2.0' products and services, including many that seem like they're re-trying online business models that failed back in the last millennium. But with every new buzzword comes new money, new programmers and the potential for new Superbowl commercials.

With all this, we also get the chance to write software for the alleged [off-site] Web 2.0 platform, with its partially debugged, subtly incompatible, and often wholly absent features. But, of course, we'll be egged on by those clueless oldbies who [off-site] feel a storm coming.

Programming for people with short attention...oh, look, a giraffe!

It seems that, with the incredible fragmentation of software creation that Web 2.0 represents, the traditional notion of 'application development' is a bust. Instead, we now have to consider the merits and difficulties of progressively assembling shards of software on an undulating environment that is distributed, ever-changing, and (thanks to [off-site] Greasemonkey) [off-site] completely unpredictable. It's almost as if we want to encourage programmers to ignore the big picture, and just do incremental, little stuff, as a way of keeping development problems within the limits of human comprehension. We also get to slap the label 'beta' on all our newly web-enabled systems as the universal excuse for why they're still not done yet.

Scripting now dominates programming at both ends of the web, but with languages based on quite different models: in the browser Javascript is prototype-based, while on the server Python and Ruby are class-based, and PHP and Perl are drugs-based. Moreover, there is no shared data-representation syntax, so we must talk in [off-site] poxy XML-mediated Chinese whispers. Meanwhile, [off-site] RESTafarians are inadvertently working to migrate transactions into the client where they belong, until you want them to work.
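To make the browser half of that contrast concrete, here is a small sketch of Javascript's prototype-based style (the Task example is my own invention): instances pick up behaviour from a shared prototype object at call time, rather than from a class declaration.

```javascript
// Prototype-based objects: no 'class' keyword, just a constructor
// function plus a shared prototype object that instances delegate to.
function Task(name) {
  this.name = name;
}
Task.prototype.describe = function () {
  return 'Next action: ' + this.name;
};

var t = new Task('buy paper clips');
// t has no describe() of its own; the call is delegated to Task.prototype,
// so adding a method to the prototype later is visible to t immediately.
```

In a class-based language like Python or Ruby the equivalent behaviour would live in a class definition; here the prototype is itself an ordinary object that can be poked at run time, for better or (usually) worse.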

Web 2.0 involves publishing, transforming and merging previously disconnected services in hitherto unexpected ways, with hilarious consequences. This routinely involves writing programs that write programs in other programming languages, generally with incompatible quoting conventions, no useful visualization tools, and sufficiently loose interpretive semantics as to guarantee debugging opportunities until retirement or legal action, whichever comes sooner. And once you get all the web stuff working, then you get to ponder the relative semantics to physicists, programmers and biologists of a term like 'vector', before deciding to become a lion tamer instead.

More research required

Importantly for PPIGlets, Web 2.0 offers lots of opportunities for relevant psychology of programming research. It does this by being fertile ground for the kind of highly entertaining, large-scale software disaster that only copiously over-funded naivete can create. Just think, this could be your once-in-a-career chance to get away from doing research on those student web pages that you happened to find lying around the campus.

If all this frightens you into thinking that using your computer in our evolving, connected world is like driving across some endless metaphorical bridge while it's being re-built by us crazy programmers, don't worry. As long as you drive fast enough, you'll probably be okay. Just wave to us as we argue about which chisel we should use to hammer in the screws that hold the road together.

Of course, this shows the limits of metaphor; everyone knows that you don't hammer in screws with a chisel: you remove them with a chisel - you hammer them in with a screwdriver. In my next article, I will therefore show you how to build a robust metaphor out of index cards, which you can use to explain away the unexpected success of almost any project.

Frank Wales


Light tones and Connected Links

by Chris Douce


The issue of software patents continues to feature in the media. Recently, the EU software patent directive was defeated. I have found two interesting articles by Richard Stallman on this topic, both of which have been published in the Guardian, a popular UK newspaper.

[off-site] Patent absurdity - June 23 2005

[off-site] Comment: Soft Sell - August 2 2005

[off-site] A mediaWiki entry describing the term patent


As Frank has already mentioned, software development seems to have another fashion at the moment called 'Ajax'.

The Ajax process seems to be spawning many threads:

Here's the [off-site] bounder credited with coining the term

[off-site] Ajax wikidefinition (a sensible one this time)

[off-site] Microsoft Atlas

A comment on comments

The debate on whether or not to comment continues to rage. I have gone from liberally peppering my code with useful human-readable pointers to asking myself, 'is it really necessary?' One view is that, once added, comments become a maintenance burden. Looked at this way, I have to agree.

I've pulled together a couple of views relating to this hotly debated topic.

[off-site] Successful Strategies for commenting

The following link takes on a more controversial tone:

[off-site] Comments are more important than code

On this point, I have to disagree. (Comments alone do not allow you to receive e-mail or browse the web.)

On an inspired hunch I searched for a paper entitled 'comments considered harmful'. I was not disappointed. There were at least two:

After finding 'polymorphism considered harmful' I threw caution to the wind and discovered several other variations:

The ACM portal can provide hours of programming-related entertainment, often at the expense of one's own code. (I also believe that there is a paper entitled 'considered harmful considered harmful', but I have yet to find it.)

Returning to an earlier ramble, I performed a [off-site] related search which made me recall the phrase, 'there's no such thing as an original thought'.

If anyone is interested in collaborating on a 'considered harmful literature review', please feel free to send me an e-mail.

Cosmic Programmers

Whilst reading the Handbook of Mathematical Cognition and finding a chapter on exceptional performance, I came across an interesting article by software writer and developer Joel Spolsky:

[off-site] Good versus Average Programmers

The article describes a study carried out by Professor Stanley Eisenstat at Yale. Here I shamelessly plagiarise Joel's piece (but since I attribute it to him, it should be okay):

Programming and Art (Revisited)

The last PPIG workshop contained some presentations that were rooted as much in the humanities as in technology. I'm referring to Alan Blackwell's presentation entitled 'the programming language as a musical instrument' and Greg Turner's paper entitled 'Attuning: A Social and Technical Study of Artist-Programmer Collaborations'.

Whether programming is art is something that both programmers and artists explore.

Here is another attempt, written by John Littler:

[off-site] Art and Computer programming

[off-site] Hacking: Art or Science



Many thanks go to the illustrious reviewers of this edition of the newsletter (Frank Wales in particular). Thanks are also extended to all contributors. Your words of wisdom are appreciated.