Psychology of Programming
Editor: Chris Douce
Welcome to the Winter 2008 edition of the PPIG newsletter.
My apologies to all readers for the length of time it has taken to collate this issue of the newsletter. I hope that you find this edition interesting. This issue contains an excellent summary of last year's 2007 workshop by Johanna Hunt. Johanna will also be playing a central role in helping to organise the forthcoming work-in-progress workshop, which will be held at the University of Sussex.
This newsletter also contains some preliminary information about the forthcoming main PPIG workshop which will be held at Lancaster University.
Please find enclosed the regular collection of bits and pieces: a book review section, a spotlight on PPIGers, and a set of calls for papers which I hope you will find useful. As always, if you have any ideas or suggestions for articles, please feel free to get in contact.
PPIG 2008 is due to take place at Lancaster University, UK in the autumn of 2008. An announcement about the date and the first call for papers will be made in early 2008.
Lancaster is a small, historic city in the North West of the UK. Attractions include an imposing medieval castle, one of the finest parks in the UK, and beautiful countryside and coastline. Lancaster is within easy reach of Manchester to the south and the Lake District (home of Beatrix Potter, and romantic poets such as Wordsworth and Coleridge) to the north.
PPIG 2008 website
City of Lancaster
We look forward to seeing you there!
21-22 February, 2008
Department of Informatics, University of Sussex, UK
Paper submissions are now invited for the PPIG work-in-progress workshop.
The Psychology of Programming Interest Group are holding the 4th work-in-progress workshop at the University of Sussex on 21-22 February 2008.
The PPIG work-in-progress workshop is a forum in which researchers at all levels can present and discuss current work and recent results, findings and developments concerned with psychological aspects of software development. A feature of the PPIG workshops has been their openness to a wide spectrum of concerns related to programming and software engineering, from the design of programming languages to communication issues in software teams, and from computing education to high-performance professional practice. Similarly, PPIG entertains a broad spectrum of research approaches, from theoretical perspectives drawing on psychological theory to empirical perspectives grounded in real-world experience.
This informal workshop is intended to foster exchange of ideas and constructive suggestions for research in progress. Doctoral students and more experienced researchers will be equally welcome.
PPIG aims to bring together people working in a variety of disciplines and to break down cross-disciplinary barriers. If you have any queries as to whether your topic falls within the remit of PPIG, then please do not hesitate to contact the Local Chair, Johanna Hunt at J.M.Hunt@sussex.ac.uk
Please submit papers by email to J.M.Hunt (at) sussex.ac.uk.
All accepted papers will be circulated to workshop participants prior to the workshop, and will appear on the PPIG web site.
Further information: PPIG-WIP website
We look forward to seeing you there!
Article prepared by Johanna Hunt
by Johanna Hunt
The nineteenth Annual Workshop of the Psychology of Programming Interest Group took place in Joensuu, in the lake district of Finland, on 2-6 July 2007. The programme was the most ambitious yet, featuring keynote talks, technical papers, a doctoral consortium, tutorials, and a wide range of evening activities. There was also a PPIG Challenge, which dared people to formulate the best improvement to an existing system or notation based on psychological principles, and a presentation challenge that had everyone revising their slides to enter.
The workshop opened quietly on the Monday with the Doctoral Consortium, allowing us early-career researchers the chance to chat about our research and build up our confidence for the main conference.
All but the first talk were focussed on the same topic, programmer education. Amongst the speakers: Essi Lahtinen spoke about cognitive skills in the use of visualisations on the level of basic programming. Jussi Kasurinen discussed Python and teaching the fundamentals of programming. Cecilia Vargas spoke about how BlueJ could help or hinder novices in learning OOP. Andres Moreno talked about visualisation with Jeliot 3 and how to adapt program visualisation to learners with different knowledge and aptitude/attitude.
Discussions focussed on research methods and approaches, and Marian Petre and Jorma Sajaniemi asked detailed and interesting questions after each presentation. Most importantly, we were reminded never to start with a research method, but instead with a theory or question that needs to be evaluated: What's my question? What evidence will satisfy it? How will I collect the evidence? And the interesting questions: What do people say they do? What do they actually do?
A scene from the first day of the workshop
The afternoon session was an informal tutorial on eye-tracking and its potential applications in programming research, led by Roman Bednarik and Anne Jansen. We were taught about visual attention and foveal vision, the history of eye-tracking from the first systems of 1898 through to modern ones, and the potential and challenges of such systems in the context of studies of programming.
The main focus of the tutorial was a hands-on experience using an eye-tracking system to conduct research. This was highly interactive and great fun.
We closed the day with socialising and enjoyable drinks in the Pub Palaveri, attached to the Hotel Atrium.
The main workshop opened on the Tuesday with an introduction from Jorma Sajaniemi on the two challenges for the conference. The presentation challenge dared each presenter to compete for the best example of an incident from childhood, a reference to the Viivi & Wagner comic strips, and a 'green touch'.
Marian Petre kicked off with a keynote talk on expert strategies for dealing with complex and intractable problems. Several strategies were defined and discussed: simplification, transformation, re-segmentation, relaxing constraints, analogy, abstraction, re-shaping the problem space, seeking insight and nibbling. Expert programmers are seen to be reflective practitioners, using reflection, correction and re-assessment. Her slides are available.
The first paper session came after the coffee break. Entitled '6*3=18 a.k.a. Moods of Analysis' (session chair: Justus Randolph), it comprised three research presentations. Iftikhar Ahmed Khan spoke about how moods affect creativity, reasoning, behaviour and programmer performance, and discussed a study on the impact of mood on debugging performance. Seppo Nevalainen spoke about visual attention in relation to PlanAni (a program animator based on the idea of roles of variables). Finally, Essi Lahtinen discussed a cluster analysis study of novice programmers. She identified six clusters of student types: Competent Students, Unprepared Students, Practical Students, Theoretical Students, Memorising Students, and Indifferent Students.
After a lovely lunch in the canteen came the paper session on Assisting Software Engineering, led by Pauli Byckling. Brendan Cleary presented the first of the three papers, on 'Assisting Concept Location in Software Comprehension'; he was followed by Rozilawati Razali on the usability of UML-B, and Jim Buckley made a strong bid for the presentation challenge with anecdotes and a touch of green while presenting a meta-modelling process prototype called ESCAPE.
Jim Buckley led the final session of the day, entitled 'From Past to Future', with two papers. In 'From Procedures to Objects: What Have We (Not) Done?', Jorma Sajaniemi considered what we should study in the future for programmer education, based on the shift to OO programming. Petri Gerdt followed with 'Introducing Learning into Automatic Program Comprehension', considering machine learning methods for automatic program comprehension.
The evening reception event at the Educa-building of the University of Joensuu was an interesting chance to socialise, admire the impressive wood sculptures and listen to some beautiful traditional music and song.
We listened to some traditional music and song
Close up of a wooden sculpture
Wednesday brought a busy but interesting day. Stuart Wray led the first session of the day on 'Tools of Learning'. Juha Sorva, with a lovely entry into the Viivi and Wagner competition, talked about a roles-based approach to variable-oriented programming, focussing on follower and gatherer roles and how role-based programming could help students form variable-related schemas and learn key programming skills. This was a really interesting presentation and I was entertained by the speculative language presented, ROTFL (Role-Oriented Titillating but Fictional Language). The next paper was a work-in-progress paper on 'Student Attitude Towards Automatic and Manual Exercise and Evaluation Systems' by Teemu Tokola and others. We were treated to a Tolkien-based anecdote, and to early-stage research on automatically tested pre-exercises for programming students.
The next session was on Research Methodology chaired by Enda Dunican. Stephan Salinger and Laura Plonka were at PPIG to discuss their work on developing a general-purpose coding scheme for the analysis of pair programming using a refined version of grounded theory (introducing four practices to support the data analysis), and exhibited some lovely green in their arrows thus entering the presentation challenge. Roman Bednarik followed with a green-topped presentation, leading on nicely from the tutorial on the first day, discussing analysing quantitative eye-tracking data studying visual attention when conducting debugging tasks.
The final paper in this session was 'An Experiential Report on the Limitations of Experimentation as a Means of Empirical Investigation' advising us not to rush empirical studies with software organisations, and to remember that the controlled experiment route is not always optimal. We were encouraged to consider how such data can still be rigorous and potentially useful.
After lunch we had a group discussion led by Marian on 'Children's mental/operational models of programming—Do children's programming tools miss something?' We discussed the activities performed by children, mostly on the internet, which involve programming activity without being explicitly recognised as such; parameter tweaking, optimisation, variation and composition of components (e.g. skins), and the creation of simulations, animations and games. A variety of questions were approached: Is it programming if it is fun, simple, socially-led? Are there generational differences in the mental models being developed? Do these 'play' experiences generalise or lead to correct abstractions for developing programming skills? Where does algorithmic theory fit in these experiences? If there is a difference what does it look like and what are the implications? Should more formal programming be taught earlier in the curriculum?
After that interesting discussion we finished early to gather ourselves for the exciting trip out to the Conference dinner. We travelled by coach (a longer journey than originally planned, as unfortunately our coach broke down en route) to the Orthodox Monastery of New Valamo, where we had an interesting tour of the attractive monastery grounds and learned the history of the monks who live there. The Byzantine conference dinner was a lovely affair and I admit to being very taken by the Valamo-made berry wines, which were delicious.
PPIG dinners are always memorable!
Monastery grounds visited during trip
Photograph of the Monastery tower
The final day of research papers started with a keynote from Françoise Détienne, 'A multidimensional framework for analysing collaborative design: emergence and balance of roles.' Having never seen Françoise talk about her work, I was very interested in hearing more about this. Roles along several dimensions were discussed: the social dimension (direct and indirect participation, influence and power); the cognitive dimension (problem solving); and the interactive dimension (discursive and interactional). Examples were given from data. Questions were raised: do socio-technical environments constrain or enable roles? Do they enable role emergence and role balance, and in this way create an enabling environment for participants? What are the characteristics of enabling environments? Her slides are available.
Roman Bednarik was session chair for the subsequent paper session on Learning Programming. Olga Timcenko led with the interesting 'Example of Using Narratives in Teaching Programming: Roles of Variables' and her entertaining presentation of her experiences with Lego programming. Sylvia da Rosa followed with 'The Learning of Recursive Algorithms from a Psychogenetic Perspective' and Anabela Gomes followed with a discussion of problem solving in programming. For the latter I was convinced that she included many photos of her beautiful hometown just to inspire envy.
The day was broken for lunch and then the PPIG Business Meeting, focussing on the future of PPIG; potentials, future research, forgotten history and marketing. The final paper session of the workshop was on Programming Aptitude (chair Marja Kuittinen).
The first paper was Sue Jones presenting on 'Spatial Ability and Learning to Program', presenting her research on mental rotational ability and gender in relation to self-efficacy and comfort. Her results suggested that more experienced students have higher mental rotation ability and she wondered whether mental rotation could be improved. The final talk of the workshop was from Stuart Wray with 'SQ Minus EQ can Predict Programming Aptitude' – presenting an analysis of SQ and EQ (Systemizing and Emotional Quotients) correlated against programmer aptitude.
Second day presentations
The Most Invisible Session Chair Prize
The results of the PPIG Challenge and the presentation challenge followed, an entertaining event. As often happens, the awards had multiplied into a number of humorous categories.
After the closing ceremony everyone was invited to experience a real Finnish sauna by Lake Pyhäselkä, with games and a BBQ. Sadly I was not able to join everyone for this, but I heard some wonderful reports about the experience and the fun everyone had.
The final day of the expansive and varied workshop was filled with two tutorial sessions: Jorma Sajaniemi and Pauli Byckling on 'Roles of Variables and their Use in Programming Education' and Enda Dunican and Ioanna Stamouli on 'Qualitative Research: Grounded Theory and Phenomenography, what, how, and when to use them.'
I attended the latter session, curious whether my knowledge of other qualitative analysis techniques would carry over to phenomenography (about which I knew very little). Grounded theory and phenomenography were introduced and defined in terms of their theoretical and methodological aspects. By examples it was possible to gain an insight into how they worked in practice. This was a useful and practical session as we all had an opportunity to conduct analysis on a small set of data with both approaches, thus gaining practical experience and understanding. This was an interesting and useful tutorial which really clarified some issues about variance across qualitative research methods.
All in all PPIG 2007 was a wonderfully organised and varied workshop. There was no shortage of interesting things planned, which helped to create an amazing environment. My apologies if I have spelled any names incorrectly!
I would like to thank Jorma Sajaniemi and the rest of the organising committee for putting together such a smooth-running, successful-yet-ambitious workshop. I'll fondly remember the Finnish hospitality I experienced (especially after my bags were delayed en route), have developed a fondness for berry wine thanks to the Valamo monks, and will likely never forget Wagner the pig. I would also like to thank PPIG for providing me with the opportunity to attend such a great event.
by Chris Douce
edited by Andy Oram and Greg Wilson
I think I first discovered the idea of beauty, or at least aesthetics, in software when I was told about that famous book The Art of Computer Programming by Donald Knuth. In my undergraduate days I was astonished by the elegance of the parsing algorithms presented in my compiler classes, and astonished again when I finally 'got' the quicksort algorithm. Ever since, I have had an interest in the elegance of software – not necessarily in the algorithmic sense, or in the sense of embedded systems where every instruction may yield a saving, but in how code can communicate to both reader and machine.
When I saw this book advertised, I put in an order straight away – its title struck a chord. But no sooner had the book arrived in the university library for collection than some bounder took it out!
There is no other way to say this: Beautiful Code is a big book. At just shy of five hundred and fifty pages and thirty-three chapters, there is a lot to get through. As a result I have to confess that I have been selective in my reading. There is something here for Perl scripters, accessibility developers, testing fanatics, debuggers, biologists, developers working with concurrency, embedded systems designers and those working with mathematical algorithms.
Kernighan kicks off on page 3 by writing 'solving the right problem is a big step toward creating a beautiful program'. This is later echoed by Kolawa, who writes 'code should accurately and efficiently perform the task that it was designed to complete, in such a way that there are no ambiguities as to how it should behave'.
Jon Bentley quotes Antoine de Saint-Exupéry who said "a designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away", p.29, a sentiment that is, of course, efficiently reflected in the abbreviation DRY.
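That 'nothing left to take away' sentiment is easy to illustrate with a small sketch. The example below is mine, not from the book, and the pricing rule is entirely hypothetical; the point is only that duplicated logic is something that can be taken away:

```python
# Duplicated logic: the same discount rule appears twice, so a change
# to one copy can silently miss the other.
def book_price(base):
    return round(base * 0.9, 2)

def film_price(base):
    return round(base * 0.9, 2)

# DRY: the rule is written once and given a name; there is
# nothing left to take away.
def discounted(base, rate=0.9):
    """Apply the standard discount exactly once, in one place."""
    return round(base * rate, 2)
```

Collapsing the two functions into `discounted` means the rule can now change in exactly one place.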
Beauty and elegance can also be found during testing: 'the main purpose of tests is to instill, reinforce, or reconfirm our confidence that the code works properly and efficiently', Savola, p.86. Tests should be clear to developers other than the tester as well as building confidence and understanding.
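A test of that kind reads as a plain statement of intended behaviour, so any developer, not just its author, can see what is being promised. A minimal sketch in Python (the function and its tests are my own invention, for illustration only):

```python
def leading_word(text):
    """Return the first whitespace-separated word, or '' if there is none."""
    parts = text.split()
    return parts[0] if parts else ""

def test_leading_word():
    # Each assertion documents one piece of intended behaviour.
    assert leading_word("beautiful code") == "beautiful"
    assert leading_word("   padded   ") == "padded"
    assert leading_word("") == ""  # the edge case is made explicit

test_leading_word()
```

The edge-case assertion does double duty: it checks the code and records a design decision for the next reader.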
There are examples of programming in the large too. The book is also complete with forays into the world of service-oriented architecture and concurrency (anyone remember Occam?).
There is much that resonates. For example, 'ideally, everything that you need to understand a piece of code should fit on a single screen. If not, the reader of the code will be forced at best to hop around from screen to screen in the hopes of understanding the code', p.255. 'I believe that beautiful code is short code, and I find that lengthy, complicated code is generally quite ugly', p.261. Gulhati, p.163, is after my own heart when he writes, '.. success and beauty in an economic sense depends directly on the code being flexible enough to evolve over time and meet the requirements of its users, and do it again and again and again over the course of many years'.
As I have hinted at earlier, for me, programming beauty should be a compromise that expresses ideas elegantly for both the machine and for the reader. I strongly believe that programmers should be kind to their peers. Helping others to understand your own work will in turn benefit the organisation that you work for. Aspiring to (and hopefully achieving) beauty can save money.
Some aspects of beauty are ultimately intertwined with common sense: use local scope, minimise side effects, don't use global variables. These sentiments are echoed throughout the book.
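The contrast those rules of thumb point at can be sketched in a few lines of Python (my example, not the book's):

```python
# Side-effecting version: each call mutates shared global state,
# so the result depends on call order and on everything else
# that touches `total`.
total = 0

def add_to_total(x):
    global total
    total += x
    return total

# Local, side-effect-free version: everything the function needs
# comes in as an argument, everything it produces goes out as the
# return value, and repeated calls with the same input agree.
def running_total(values):
    total = 0  # local; shadows nothing the caller can see
    for v in values:
        total += v
    return total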
I like Beautiful Code a lot. I can see myself returning to some chapters again and again. This said, I think a firmer and clearer chapter on databases would have been welcome. Correctly designed databases are a joy to the eye of the beholder! The whole book might have been stronger had it been subjected to slightly more rigorous editing.
This said, I recommend Beautiful Code. There are some cracking chapters. Plus, all royalties are said to go to Amnesty International. You can't get fairer than that.
Interview with the authors
O'Reilly book description
As ever, Wikipedia has a number of cracking pages that are a treat for the browser:
Aesthetics and Information Technology
Visser, W. (2006). The Cognitive Artifacts of Designing. Mahwah, NJ: Lawrence Erlbaum Associates
In The Cognitive Artifacts of Designing, Willemien Visser presents her view of design from a cognitive perspective. In order to render the essential and specific qualities of design, she characterizes it as a construction of representations. This presentation is preceded by a critical review of the research performed over the past 30 years in the domain of cognitive design studies.
The Cognitive Artifacts of Designing presents a new perspective in cognitive design research: design is most appropriately characterized as the construction of representations (internal and external). This viewpoint constitutes an alternative to today's main theoretical approaches, i.e. the classical cognitive-psychology viewpoint (represented by Simon's symbolic information processing model) and the situativity standpoint (which, in design studies, generally takes the form of Schön's reflective practice framework). With respect to methodology, breaking with the classical cognitive-psychology approach, where research is mostly conducted in artificially restricted conditions, we claim that design must be characterized on the basis of data collected on designers' actual working activity in professional design projects.
We characterize the different representational structures and the activities operating on them; an outline is sketched of directions regarding functional linkages between these structures and activities. We discuss different aspects of the representational structures - e.g., their form and function - and their variations according to the phases of the design process: representations at the source of a design project (requirements or "design problems"), intermediate representations, and representations at the end of a design project (specifications or "design solutions").
The construction of representations is a high-level cognitive activity, implemented through three main types of activity: generation, transformation, and evaluation of representations. These activities draw in turn on other activities and operations, such as interpretation, association, integration, exploration, inference, restructuring, combining, hypothesizing, and also drawing (sketching and other forms) and gesturing (pointing, delimiting, tracing, and other forms).
We defend an augmented cognitively oriented "generic-design hypothesis." There are both significant similarities between the design activities implemented in different situations and crucial differences between these and other cognitive activities; yet, characteristics of a design situation (related to the designers, the artifact, and other task variables influencing these two) introduce specificities in the cognitive activities and structures that are used. We propose some candidates for dimensions underlying differences between such forms of design.
More information about the book can be found from the Routledge website.
by Chris Douce
After quite a few months without programming, I found it necessary to perform a stint of debugging. This was not source code debugging, but installation debugging. I needed to get a software development framework up and running, including a web server and database. It wasn't too long before I was again happily fiddling with configuration files and faced with the need to look at system logs and error files to try to understand more about why my setup was not doing the things that I expected it to.
Debugging work within the psychology of programming has often focussed on the lower level: looking at and considering how programmers and software engineers comprehend and assess their source code. I would be delighted to learn about studies addressing the interesting intersection between the practice of system administration and source code programming.
During my most recent foray into the need to debug systems, I turned to the most useful tool in my armory: the internet search engine. I can't remember the exact time when I first tried using a search engine to help me debug code, but I think it was after I had exhausted all avenues of internal inspiration and was looking to others for help. I think I copied and pasted an error message into Google, with the question, 'I wonder if other people have seen this?' The answer was obvious: of course they had, and I was duly inundated with hits.
How do I make a decision about what to click on? I base my decisions on the origin of the link: is it a discussion forum? Does the 'search slice' match the context of my problem? Have I seen the site before, and has it been useful? Is there the possibility that this link will lead to an inspirational dead end, or will it yield interesting tips about where to look?
I have many questions about this practice of searching for error messages: do other developers use this same strategy? (I suspect that they do.) Do they look for the same information as I do when they are making decisions about which link to follow? Finally, is there a better approach to the presentation of error messages that could make them more useful to the developer? One very welcome web-browsing feature that I make extensive use of is the change of link colour which shows me which nodes of a 'web search tree' I have visited previously, helping me to hone and refine my keyword set.
There is a happy ending to this story. The answer was, of course, obvious. To complete my installation and get rid of those final pesky error messages, all I had to do was make a tiny edit to one of the configuration files. Of course, there is always more to do. I now have a couple of ideas about where I should start looking to improve my understanding of my tools.
Even though I solved my problem, I have little recollection of how I made sense of the search results I was presented with. I have no firm understanding of what rules or heuristics I applied, but I do wonder whether I could have found an answer more quickly had the results been presented in a different format.
University of Glasgow, UK, 31st March 2008
The UK Higher Education Academy Information and Computer Sciences (ICS) is again seeking keen and enthusiastic people who have novel ideas (tried and tested) for teaching introductory programming to first year computing students to speak at the 8th One-Day Conference on the Teaching of Programming, to be held at the University of Glasgow on 31st March 2008.
If you are interested in participating, please email a 1000-word description (text or PDF) of what you wish to share with the audience, and an indication of how you wish to present it, to J.E.Carter (at) kent.ac.uk by 21st January 2008. Notification of acceptance will be by 4th February 2008.
Further information about this event can be obtained by contacting Karen Fraser [email@example.com] or by visiting the UK Higher Education Academy ICS website.
Editors: Monika Buscher, Jacki O'Neill, John Rooksby
Submission deadline: 18 April 2008
When we think of diagnostic work, often the first domain to come to mind is healthcare. However, practices of noticing and categorising trouble and of defining the scope for remedial action span many domains. For example, diagnostic work also takes place in software and hardware troubleshooting, engineering, emergency work, detective work, coaching, hospitality work, piano tuning, and quality control. Broadening the analytical focus can leverage important insights for the design and use of CSCW technologies.
Although frequently conceived of as a 'moment' of individual cognition, diagnosis is often a material, collaborative process. It requires careful sensory and sensitive engagement with other people (e.g. in healthcare, teaching, policing or customer service), resourceful and iterative probing of information technology (e.g. debugging code, playing a video game) and manipulation of material objects (e.g. fixing a printer jam). Some activities involve rational everyday knowledge, others demand scientific practices, representation and calculation, and some call for emotional and intuitive ways of knowing. Moreover, technology use pervades diagnostic work, mediating or facilitating it. Increasingly, technologies are used in remote diagnostic practices, for example, for bomb disposal, environmental monitoring, healthcare, or for customer support from one of a myriad of call centres. And local diagnosis also often relies on technological support, for example to alert people to problems, to help assess their nature, to locate solutions, to communicate diagnostic reasoning and so on.
Diagnostic practices are a pervasive and important feature of contemporary life. They matter, not least because it is through diagnostic work that different perspectives (e.g. novices and experts, users, developers and designers) meet. Technologies meant to support diagnostic work can interfere with the everyday practices, organizational structures and skills involved, both positively and negatively. For this Special Issue of the Journal of Computer Supported Cooperative Work we invite contributions that explore key dimensions of this dynamic relationship to inform the design and use of CSCW technologies, including questions around:
Collaboration: Diagnosing is often a collaborative endeavour. How is collaboration organised and sustained? Is it made visible or invisible? How do participants 'calibrate' for varying degrees of competence? What technologies are used? How could technologies support collaboration?
Human-matter engagement: Engagement with physiological or material agencies entails skills of human-matter 'communication'. People use technologies that translate, amplify, or otherwise document material activities. They use thresholds, patterns and alarms. How do (or don't) such technologies help people in making matter 'speak'? How do they 'sit' with the collaborative dynamic of diagnostic work?
Human-technology engagement: The states and processes of many of the technologies meant to support diagnostic work themselves are hard to notice, inspect, 'diagnose', let alone 'debug'. How do people understand and make the most of these technologies? How do they notice and exploit affordances and address breakdown?
In this special issue of the Journal of Computer Supported Cooperative Work we seek to analyze the collaborative practical accomplishment of technologically mediated or facilitated diagnostic work. We particularly invite studies of domains outside of healthcare. Regardless of the domain studied, authors must clearly address what constitutes diagnostic work within the context of their study, clearly describe the collaborative nature of that work, and discuss the opportunities and challenges that technologies in general, and CSCW technologies in particular, raise. Papers may focus on:
Find out more from the CSCW Journal website.
Co-located with ICSE 2008 Leipzig, Germany May 12, 2008
End-user programmers far outnumber professional programmers, and are using a wide range of programming languages and environments to create software. Unfortunately, evidence suggests that there is a high incidence of errors in applications developed by end users for a wide variety of purposes. Some of these errors have a high impact on individuals and organizations. This has motivated researchers to explore new ways to help end users develop dependable software. Approaches and tools traditionally developed for professional programmers cannot be brought directly to end users, primarily because end users have different backgrounds, training, and motivations than professional programmers. Therefore, current research in the area of end-user software engineering involves specialists in software engineering, programming languages, human-computer interaction, empirical studies, education, and cognitive psychology.
The Fourth Workshop on End-User Software Engineering is a one-day workshop which will focus on the challenges faced by researchers working on helping end users create dependable software. The primary goal of the workshop is to bring together researchers working in this research space.
Brief presentations will kick off the various sessions of the workshop. The rest of the time will be devoted to group discussions. The overall structure of the workshop will be flexible, including at least one open session aimed at fostering research collaborations.
Topics of interest include (but are not limited to) the following:
Further information can be found on the ICSE website
Publication date: July/August 2008
Many of the recent advances in science have been dependent on software. Because of the complex nature of the science underlying the software, much scientific software is written either by scientists themselves or by multidisciplinary teams of software engineers and scientists.
In the former case, scientists face the challenge of knowing little about software engineering beyond coding. In addition, they often work within a culture in which the skills and knowledge required to develop software are devalued. They thus fall into the category of "professional end-user developers." In the latter, the multidisciplinary teams face the challenges of different cultures (science and software development) and communication.
The aim of this issue is to explore the particular challenges facing scientific software development and the ways by which these challenges might be addressed.
Topics of interest include, but are not limited to:
For further information about this special issue, please feel free to contact Judith Segal (j.a.segal (at) open.ac.uk)
Herrsching am Ammersee, Germany, 16-20 September 2008
The IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) is the premier international forum for researchers and industrial practitioners to discuss the theory, applications and evaluation of technologies, visual and otherwise, that enhance the role of humans in the computing process.
Established in 1984, the mission of the IEEE Symposium on Visual Languages and Human-Centric Computing ("VL/HCC") is to support the design, formalization, implementation, and evaluation of computing languages that are easier to learn, easier to use, and easier to understand by a broader group of people.
This includes all research aimed at the above mission, regardless of whether it uses entirely visual technology, text, or instead uses sound, taste, virtual reality, the web, or any other technologies. Examples of research addressing this problem include, but are not limited to, language/environmental design aspects, theory that supports the many media used toward this goal, implementation aspects, empirical work, software comprehension aspects (including software visualization), and software modeling and/or software engineering aspects.
We solicit original, unpublished research papers that focus on one or more aspects of human-centric computing technology, for instance visual programming or interaction, text, sound, virtual reality, the Web, or other multimedia technologies.
Research papers may address cognitive and design aspects, underlying theories, formal methods, taxonomies, implementation efforts, tool support, and empirical studies. We also solicit short papers that present work in progress or demonstrations of tools. Areas of interest include, but are not limited to, the following:
Accepted papers will appear in the Proceedings of VL/HCC'08, published by the IEEE Computer Society.
The conference is also inviting submissions for workshops and tutorials to be held in conjunction with the symposium; more information about these submission types can be found on the VL/HCC'08 web site.
Authors of the best papers accepted for the conference will be asked to submit revised versions of their work for a special issue of the Journal of Visual Languages and Computing.
More information is available on the symposium website
October 9th and 10th, 2008, Kaiserslautern, Germany
The objective of the International Symposium on Empirical Software Engineering and Measurement (ESEM) is to provide a forum where researchers and practitioners can report and discuss recent research results in the area of empirical software engineering and metrics.
This conference encourages the exchange of ideas that help understand, from an empirical viewpoint, the strengths and weaknesses of software engineering technologies. The conference focuses on the processes, design and structure of empirical studies, and the results of specific studies. These studies may vary from controlled experiments to field studies and from quantitative to qualitative studies.
The best papers in the symposium will be published in a special issue of the Journal of Empirical Software Engineering.
[ top ]
Would you like to tell other PPIGers how you are and what you are doing through the newsletter? If so, please e-mail chrisd(at)fdbk.co.uk.
Jorge Aranda, from the University of Toronto, recently told us that he presented a co-authored paper at the Requirements Engineering 07 conference. In a study of seven small companies, he and his colleagues found, among other things, that the companies' practices seemed to depend considerably on the context in which they operated, and they concluded that practices should not be prescribed generally, but with context in mind.
Requirements in the wild: How small companies do it
Jorge tells us more: Besides continuing our work with small companies, we are studying software development in two other contexts. The first one is scientific research groups. Most of today's relevant scientific research requires the development of some software component - in fact in many cases it depends entirely on such software. However, it is usually developed by small teams of people that, though highly knowledgeable in their areas of expertise, do not have proper software development skills.
We're studying several scientific research groups to find out how they work around software development and project management challenges, and what problems they consider particularly important in their programming experience.
The second domain that we are studying is much larger software companies: in particular, which techniques and organizational structures they use to address the problem of information overload, so that their teams reach a shared understanding of their project's requirements and status.
I presented the paper at the ITiCSE conference the week before the PPIG 07 workshop, so the details may read better as follows:
Sue is currently a Medici Fellow at the University of Nottingham, UK. This is a one year fellowship concerned with technology transfer, focusing on the commercialisation of research outputs from CS, learning sciences and the business school.... while still trying to complete her PhD thesis - so keeping her somewhat occupied!
Sue presented her paper, "Spatial skills and navigation of source code", at the ITiCSE conference in Dundee at the end of June 2007. Her study found relationships between spatial ability and patterns of source-code navigation during a code comprehension exercise. A week later, Sue presented a paper at the last PPIG workshop, discussing the influence of various individual differences, including spatial ability, on programming performance:
Jones, S., and Burnett, G.E. (2007). Spatial skills and learning to program. Paper presented at PPIG07, and available from the PPIG website
Micheal, who works at the University of Limerick, has successfully defended his PhD thesis. The abstract is presented below. Congratulations!
Evolving a Model of the Information-Seeking Behaviour of Industrial Programmers
Several authors have proposed information-seeking as an appropriate perspective for studying software maintenance activities. However, there is little research in the literature describing holistic information-seeking models in this context. Instead, researchers have concentrated on the related fields of software comprehension and software tool development. The work in software comprehension, while providing a cognitive basis for describing software maintenance activities, is abstract in nature and cannot provide strong guidance to those who aim to support software maintenance engineers. In addition, work in this area has been marked by a distinct lack of empirical studies of programmers' actual information needs during their real-world maintenance tasks. Correspondingly, the work on software tool development, which largely depends on this work, also suffers.
This thesis focuses on maintenance programmers' information-seeking behaviour to address this gap and makes three core contributions to the field. Firstly, it proposes a holistic model of programmers' information-seeking behaviour, derived from related information-seeking research in other domains. Secondly, it derives and presents an analysis schema (Coding Manual) that allows programmers' talk-aloud to be characterised and subsequently analysed in the context of this model. Thirdly, it presents eight empirical studies that serve to evaluate and refine the proposed preliminary information-seeking model for programmers involved in software maintenance activities (using this schema). This evaluation largely validated the model but also suggested several important refinements. Indeed, the results are highly consistent with the 'Concept Location' research of Rajlich et al. and with Marchionini's information-seeking work.
The case studies, their results, and their impact on the proposed information-seeking model are discussed in this thesis along with recommendations for further research based on its findings.
John Rooksby, one of the organisers of the forthcoming PPIG 2008 workshop, also organised a small workshop at Lancaster University entitled Testing Socio-Technical Systems on 20-21 September 2007, following on from the earlier successful 'Ethnographies of Code' workshop.
Testing Socio-Technical Systems
This workshop focused upon socio-technical issues in systems testing. Despite a growing interest in human and organizational issues in the development and use of technology, research on testing has remained stubbornly technical. Whilst this technical work has made great advances and has debunked the idea that testing has to be merely a person spotting errors, it has not removed, and cannot remove, the fact that testing takes place within organizational constraints and must meet organizational demands.
[ top ]
The Natural Programming Project is working on making programming languages and environments easier to learn, more effective, and less error prone. We are taking a human-centered approach, by first studying how people perform their tasks, and then designing languages and environments that take into account people's natural tendencies.
Early work focused on designing languages for novices based on how people think about expressing algorithms and tasks. Current work focuses on programming environments and libraries. We studied novice and expert programmers working on everyday bugs, and found that they are continually asking "Why" and "Why Not" questions, so we developed the "WhyLine" debugging tool, which allows programmers to ask these questions of their programs directly and get a visualization of the answers. The WhyLine decreased debugging time by a factor of 8 and increased programmer productivity by 40%.
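To give a flavour of the idea behind such "why" questions, here is a toy sketch of answering one over a recorded execution history. This is purely illustrative: the function names, the event format, and the whole approach are assumptions for the example, not the actual WhyLine design (which works over a real execution trace and presents answers visually).

```python
# Toy sketch: log assignment events during a run, then answer a simple
# "why does this variable have its value?" question from the log.
# All names here (record, why, events) are invented for illustration.

events = []  # (step, variable, value, cause) records, in execution order


def record(var, value, cause):
    """Log an assignment event and pass the value through unchanged."""
    events.append((len(events), var, value, cause))
    return value


# A hypothetical program under observation
x = record("x", 3, "literal 3")
y = record("y", x * 2, "x * 2")
z = record("z", y + 1, "y + 1")


def why(var):
    """Find the most recent event that set `var` and report its cause."""
    for step, v, value, cause in reversed(events):
        if v == var:
            return f"{var} = {value} because of {cause} (step {step})"
    return f"{var} was never assigned"


print(why("z"))  # -> z = 7 because of y + 1 (step 2)
```

A real whyline additionally answers "why not" questions by searching the trace for the code that could have produced the missing behaviour, which is considerably harder than the lookup shown here.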
We also studied typical maintenance tasks and discovered that programmers spend about 38% of their time navigating around code, so we are designing a new tool to help eliminate this overhead. When learning how to use new libraries, we observed that programmers tend to adapt examples, so we developed techniques to make reuse of example code easier. For code editing, our studies show that people do not require the full flexibility of text editing, so we designed a prototype environment that provides more support.
More information about the Natural Programming Project can be found on the CMU Natural Programming project website
Brad Myers talks about the project on a Google video:
Google Video: Update on the Natural Programming Project
[ top ]
by Chris Douce
This edition of loosely collected links relating to the discipline of programming is truly eclectic. We begin with a link back to last year's newsletter book review of Dreaming in Code, with an interview with Scott Rosenberg.
What Makes Software Development So Hard?
When working in the trenches, tools can make a huge difference, and knowing your tools well gives you a productivity boost. There is an interesting piece on The Register about the origins of the vi editor. I haven't used it for a couple of years now, but when faced with a command prompt on a random Unix box that doesn't have a GUI, you know that you have power at your fingertips.
The Birth of vi
When faced with having to battle with toolsets and the inherent complexities of software development, the vibrancy of open source communities always astonishes me. It's a full-time job keeping up with the development of the packages that you run on your laptop. Here's an interesting question: why do people get involved with open source? (A survey study that explored this point was linked to in an earlier issue of this newsletter.) Here's a blog posting that asks: Boredom drives open source developers?
But what are the factors that influence the success of an open source project? One question is: How can open source projects survive poisonous people? This Google video is truly interesting and worth a watch. I hadn't heard the phrase 'lowering the bus-factor' before, but I like it!
An underlying issue is the need to hire the right people: a topic that is of continual interest: A Guide to Hiring Programmers: The High Cost of Low Quality
Increasingly I find on-line videos a useful resource for getting to grips with new languages. Not so long ago I started to look at different PHP development frameworks. I didn't get very far (there were way too many of them!), but what I did find made me chuckle: Maintaining PHP code
Days after viewing this video, I received notification of another related development: Perl on Rails.
On the topic of Perl, I recently discovered an article called: Programming is Hard, Let's Go Scripting... otherwise known as the Perl 'state of the onion'.
As a programmer, I like Perl, but as a maintainer, I don't! Still, it's always fun to go Code Scavenging, no matter what language you're using (or learning).
Earlier on this year, I stumbled across a fantastic headline: Free tool offers 'easy' coding. How could I resist?
The resulting story had strong parallels with an earlier discussion on the PPIG mailing list which began with the question: Have any of you tried to teach your young kids to program?
This tied in nicely with the anniversary of the Logo programming language.
Finally, I conclude with a programming-related headline that featured in the New York Times: Faster Chips Are Leaving Programmers in Their Dust, which led on to a corresponding discussion on Slashdot entitled Is Parallel Programming Just Too Hard? This reminded me of another news announcement: the 80-core processor. I can see that some of us who work within the craft of programming will have to write some truly beautiful code to get the best from this beast!
[ top ]
Many thanks go to the reviewers of this edition of the newsletter for their efforts. Your comments and words of wisdom are always appreciated. Special thanks must also go to Johanna Hunt: your contributions have made this edition something special.