Thursday 29 August 2013

David Willetts and 'Educational Equilibrium' theory (and what it means for Widening Participation)

Widening Participation is usually presented as a good thing - particularly by people who, like me, work for institutions whose mission it is to provide higher education opportunities to those who would otherwise be excluded from the system. I continue to care about this deeply, and I have found that working with this group of students is at once more challenging and potentially more rewarding than anything else we might do in education. In the 'exclusive' institutions (I have plenty of dealings with them too), there are clever people with many qualifications talking about changing the world with complicated words. Educating people in a Widening Participation institution IS changing the world in a real way that matters - much more than educating middle class kids who will succeed whatever happens.

Having said that, there are some worries I have about our uncritical acceptance of widening participation. The problem stems from whose "agenda" it really is - the student's agenda? society's? the government's? or individual institutions'? I've always considered that "widening participation" is an agenda for people disenfranchised by education - particularly those whose experiences of education by the time they reach 18 have been dismal, or mature learners thinking about what to do next. Opportunities are available to them that were not there 20 years ago, and that should be a good thing. But what exactly do we mean by "opportunities"?

The widening participation agenda has gone hand-in-hand with the expansion of HE and government-driven frameworks which are meant to guarantee quality standards between an ever-increasing array of institutions and course opportunities. Despite the fact that a UEL degree (say) is not the same as a Cambridge degree, the quality frameworks do at least create norms of expectation between different parts of the sector. There are, of course, many good things about having a degree from UEL, and having a degree may well be better than not having a degree; having a degree from Cambridge however is by the reckoning of most people - rightly or wrongly - 'better'. But the problem is that when we talk about 'opportunity' we talk as if the students being presented with the range of opportunities are in some fundamental ways identical. This is clearly not the case. The more challenging the personal and educational backgrounds of students, the more diversity there is in the specific challenges they face in exploiting any opportunity. Presenting courses which are moulded from a slightly adapted traditional model of education as 'opportunities' to this group of people is only feasible if they are in a position to exploit the opportunity; but those who are best equipped to exploit the "opportunity" of widening participation (if they so choose) are still those who will probably go on to enter elite institutions.

This is really a problem about 'information' and economic equilibrium. As Friedrich Hayek pointed out a long time ago, information is not evenly distributed in a society. Equilibrium theory holds that prices in an economy are held in equilibrium by the balance of supply and demand and as a result of individuals making rational decisions - but with the implicit assumption that people have the same information. In pointing out that people do not have the same information (indeed, different people are informed differently even in the light of the same information), Hayek argued that equilibrium theory couldn't be right, in the process placing the focus of economics on epistemology (which was his most significant contribution). Currently we have a similar kind of 'equilibrium theory' of education, promoted principally by David Willetts. Willetts is so keen to promote a 'market' in Higher Education that he overlooks the complexity of individual differences amongst learners/consumers and the different ways in which individuals will interpret information. Despite presenting himself as a man of letters, Willetts really wants to be a man of action. Beyond a certain point, the letters just get in the way!

Information is of crucial importance to Willetts, because like all good capitalists, he believes his market in HE will only work if people are properly informed about their consumer choices. So we have 'Key Information Sets' that every University has to compile, providing data about employability, average earnings, fees, retention, etc. in addition to 'student satisfaction' data. Willetts's idea is that learners will peruse this data rationally, making decisions according to economic cost-benefit. If everybody did this, then we might see how 'educational equilibrium' works. Clever students choose to attend more prestigious institutions whose KIS data indicates better life-chances after following a degree course. Those institutions grow. Institutions whose statistics are less attractive start to lose applications. Some of these latter institutions might seek to gain competitive advantage by competing on price through increasing efficiencies (reducing staffing). Others might seek to compete in the market with increasingly innovative course offerings designed to appeal to those students unable to get into more prestigious institutions. All in all, 'educational equilibrium' is achieved by balancing consumer choice supported by rich information with increasing diversity as institutions find their place in the market (or go under). But, as Roger Brown has recently argued (see http://www.hepi.ac.uk/files/TheInformationFallacy-RogerBrown.pdf), this is predicated on an information model where students all behave rationally in the light of the same information.

Brown refers to [Lord] "Browne's Paradox": the hope - which Willetts shares - that market reforms will lead to increasing diversity in the sector. Brown argues that the Browne reforms will in fact produce the conditions that lead to the reverse. In order to understand this, we have to understand the ways Vice Chancellors and students make decisions. The information that the KIS conveys will depend on the ways in which different students interpret it. Whilst there is a tendency to think of information as a kind of 'positive' force in decision-making (that's probably how Willetts sees it), it is perhaps better seen as a 'constraint' within which decisions occur. League table information, for example, makes studying at certain institutions 'unthinkable' for those students who have the choice not to. Equally, for those students who don't have that luxury, league tables simply reinforce the message of social exclusion. The same goes for the economic data about earnings: it means different things depending on your place in the social pecking-order. We might look at a statistic that says "80% of our students are employed within 6 months of graduating" but then think "but as what?", "what about the others?". The extent to which it matters to a student depends on the circumstances, life history and confidence of that student. For Vice Chancellors, the job is to produce information that looks good, and their choices for doing this are constrained by the information that's there. What emerges? There is positive feedback around information about 'successful courses', and so naturally Vice Chancellors close those courses deemed 'unsuccessful', losing sight of the differences between institutional cultures (masked by the information) and often losing sight of their own local priorities, because the information they have to hand doesn't allow them to defensibly do anything different. Consequently, there is a reduction in the diversity of offerings.

The information sets of this 'educational equilibrium' experiment are not liberating student choice. They are, in fact, reinforcing social divisions. They embody and emphasise the uneven constraints that are imposed on individual learners who by accidents of birth do not come from the kind of backgrounds which would give them real access to the opportunities of the elite.

But then there is worse. Those institutions which find their 'credit rating' approaching 'junk status' (i.e. poor KIS stats, bottom of the league tables, etc.) see that the way forward for them is to ape the established elites. "We too can be like Cambridge!" they cry, as they attempt to import young, cheap PhD graduates from around the world to boost their research ratings (irrespective of whether they can teach or not!) and attempt to raise their profile in the hope of impressing the "widening participation" student market. Unfortunately, this is a market of people who didn't have much choice about where they went anyway, and even where the marketing gimmicks have a modicum of success, the effort that went into the marketing is rarely reflected in course designs, which are often a poor fit for the kind of diverse challenges presented by this group of students (and this is not to mention the misery within so many of these institutions as a result of the savage cuts imposed as they try to keep their heads above water).

Now we should ask "Where's the opportunity?" Students who have few choices find themselves dealing with their part of the sector which is not only contracting, but trying to compete for different kinds of students from the ones that they actually recruit. Instead of having their needs met, these students risk finding themselves being trapped by the institutional bureaucracies of repeated modules, refers, failures, etc whilst finding their institutions strangely distracted by dreams of "being somewhere else". That's not an opportunity I would wish on anyone!

Facing up to these issues is important because the result of the government's educational reforms is really that people ask "is widening participation worth it? - these people shouldn't be at University, they should be in apprenticeships, or work..." The problem is that there was a point where widening participation was indeed an opportunity, and much good came from it. The changes to HE funding mean that its "opportunity" status for the student is now under question. Widening participation students face the seductions of universities' marketing departments, just as they face the seductions of loan sharks, with a high risk that they become trapped in expensive educational bureaucracy when they could have pursued other options to give themselves more freedom in the economy. However, the information about the 'other options' isn't there. Indeed, there is no way an informed rational decision can be made that takes into account all the options, and there is nobody who can help them find their way through it.

Widening participation shouldn't be a prison for students, and it certainly shouldn't be a means to an end for institutional survival. Yet both these things are natural consequences of Willetts's crazy theory. The fundamental issue ought to be fairness and justice. The "education debate" masks the fundamental inequalities that are unfolding in front of us. It is because we lack a holistic way of thinking about the relationship between education and society, and of monitoring that relationship, that we may be sleepwalking into a social disaster.

Wednesday 28 August 2013

Altered States: Will the Forthcoming wave of Cheap Virtual Reality Headsets Change EVERYTHING?

I've just ordered an Oculus Rift development kit (see http://www.oculusvr.com/). Why? Because there's something exciting about it, and at a time of pretty deep depression, we need exciting things (which aren't too expensive). And I've a feeling that, whilst Virtual Reality has been around for a long time, suddenly it will become accessible and desirable for everyone. That, I think, will change things - in education, business, social life and personal entertainment. I want to speculate on what these changes might mean.

It's never just one thing that changes everything. Usually it's a number of things that have been gradually growing and suddenly come together. In the 1980s, the computer-on-a-PCB technology came together with a growing movement towards personalisation of entertainment and computer gaming, with greater ease of use of credit cards for relatively large-scale consumer purchases. All of this created the market opportunities for Sinclair, Acorn, Commodore, Atari and others.

What's coming together now? I think there are three important things 'on the boil'. Firstly, there is a profound inquiry going on into the nature of information and the way that our lives, and our decisions, are being shaped by the information environment. Big data, and the analytic work in education and elsewhere, are aspects of this, but there are deeper things going on, where people are looking at the nature of the interfaces between humans, machines, data and decisions. This work has many aspects, including the remarkable work going on in semantics in economics and technology. At the ASC conference at the University of Bolton, Bill Seaman from Duke University presented some of his work on Neosentience. I think this is an important emerging area where people are experimenting with new kinds of interface: 'poetic' and aesthetic interfaces change the relationship between ourselves and our computers (see Bill's book: http://www.amazon.co.uk/Neosentience-Benevolence-Engine-Bill-Seaman/dp/1841504041/ref=sr_1_1?ie=UTF8&qid=1377721454&sr=8-1&keywords=neosentience). That will impact on the way we make decisions.

Secondly, the data revolution which we have all been part of is reaching a particular stage of maturity where we are becoming more aware of the "ecology of data" and the way that politics, freedom, knowledge and technology are tied up together. Most profound is the fact that a tool now embodies two means-ends relationships. There is a means to an end for the user (the tool performs a useful function); but there is also a means to an end for the tool provider (the data captured from the user's use of the tool). That's another way of saying that tools have become 'parasitical' on their users.

Thirdly, there is a biological revolution. It's not just the DNA sequencing (although that has given us many of the technologies used for the semantic analysis!), but most profoundly a deepening awareness of morphogenesis - both at a biological level and at a social level. This relates strongly to the first point: it is where a deeper understanding of our biological ontogeny meets the material and informational constraints within which we live and develop that powerful critiques are emerging about the nature of that information environment, our conception of what a 'computer' is, the way economic systems work, and the possibilities for redesigning the whole thing.

So how does this relate to VR? I must confess I didn't really take this seriously until I saw Jennifer Kanary's wonderful "Psychosis Simulator" (see http://www.labyrinthpsychotica.org/Labyrinth_Psychotica/psychosis_simulation.html) at the ASC conference. She used a VR headset and wearable laptop running software which transformed images in real-time (I'm guessing Max/MSP with Jitter) to create what was in effect 'virtual LSD'. The effect was fascinating, not just for the person wearing it, but for everyone else witnessing it. This was a kind of 'instant theatre'. The wearer of the device had their perceptual state transformed to the point that their utterances reflected this state, which a volunteer from the audience had to respond to. What I found impressive here was the conversation they had, and the profound effect the state transformation created by the technology had on the communications.

Virtual Reality is about 'Altered States' - and this may be what we need right now. When we have become so logically-driven down the 'hard' road of information, targets and box-ticking by computer technologies which mislead us into thinking that there is some separation between human and machine, the 'altered states' of VR drive home the message that we are one with the world. The Neosentience that Seaman and others talk about is really concerned with re-imagining computers as a way of getting a better grasp on our relationship with information. It may be that things that only philosophers talk about at the moment will become blatantly apparent to everyone. Questions about information, education, learning, love, sex (of course porn in VR will be the early driver for innovation!), empathy and conviviality will be presented to us in a stark way that has never before been possible. This has important implications for everything we do in education: MOOCs may be deathly dull at the moment, but a networked immersive VR exploration coordinated through the MOOC platforms? Why not!

When the questions we ask about the world change, then there is a true moment of transformation. I wonder if this is it...

Tuesday 27 August 2013

Ethical and Critical Challenges to Cybernetics

Heinz Von Foerster articulated his ethical position as "I always act so as to increase the number of choices." This is presented as a core principle of ethical behaviour within second-order cybernetics, which Von Foerster more eloquently described in the context of Wittgenstein's position on Ethics (from the Tractatus). [Von Foerster's paper on ethics is available here: http://www.stanford.edu/group/SHR/4-2/text/foerster.html]

In philosophy, "normative ethics" is the branch that examines moral behaviour to determine the grounds for 'rightness' and 'wrongness'. Von Foerster's position, with its emphasis on action, seems to fall into this camp. Within normative ethics, there are some basic categories of position which can be adopted. They are:

1. Deontology - the position that right action is to be judged against a set of universal rules. The word 'deon' means 'duty', and so the deontological position is one where the dutiful intention of the agent is what matters, not the consequence of the action. Some deontologists argue that religious law is fundamentally the ultimate yardstick for goodness, and so 'goodness' becomes dutiful observance of natural law. This doesn't appear to be the position that Von Foerster adopts. He might suggest that deontology suffers an 'observer' problem: the first question is "whose rule?" Having said this, there is a point at which deontological positions become more like 'virtue ethics' (see below) - where the question of the nature of the relationship between "good will" and "good character" can be addressed.

2. Consequentialism - this is the position that right action is judged by its consequences. Bentham's utilitarian position is often cited as an example of this. I think, if Von Foerster's ethical position is to 'increase the number of choices', then his position is most closely associated with consequentialism. There are, as with all these positions, grey areas around consequentialism, where it blends into deontology through seeing consequences as inseparable from human rights, or indeed where "consequences" are seen as constituted by observations within a society (again, there is an observer problem here).

3. Virtue Theory - this is originally an Aristotelian/Platonic position which places focus not on the behaviour itself, or its consequences, but on what is revealed about the person through their behaviour. There is a tendency in virtue theory not to reduce persons to behaviour, and to regard persons with their dispositions as the fundamental building blocks of a good society. MacIntyre points out that 'ethics' implies 'ethos' - that the social constitution of individual persons and the personal constitution of societies are intrinsically linked. This, I think, provides an important critique of Von Foerster's position, since Von Foerster appears to reduce people to an analytical perspective on actions.

There are other positions in normative ethics, although these three summarise the broad thrust of the debate amongst ethicists. Apart from virtue theory, there is a tendency to reduce people to actions or consequences, although deontology potentially provides irreducible "laws". Fundamentally, any ethical situation is addressed by a (particular) person, and yet ethical theorising occurs from some vantage point which is abstracted away from any individual person.

In Kohlberg's "developmental ethics" this tendency towards abstraction is particularly noticeable. The stages of ethical reasoning involve, according to Kohlberg, a progression from somewhat "autistic" questions (he was influenced by Piagetian developmental stages, and his approach is consistent with cybernetics) like "what's in it for me?" to a balancing of social norms and expectations, to the deontological acknowledgement of ethical principles, duties and contracts. But in the final analysis, what emerges of the 'person'? What of the impact of the attachments they have or had in their upbringing? Whilst Kohlberg's interest is in moral reasoning, there is an inherent cognitivism in his picture of it which reduces the individual to communicative mechanisms.

Vladimir Lefebvre has directed his "ethical reduction" to national characters, producing what he calls "first and second ethical systems" (see Stuart Umpleby's slides on Lefebvre here: http://www.gwu.edu/~umpleby/mgt216/Mgt%20216%20Lefebvre.ppt). According to this system, there are conflicts between what he calls "system 1" and "system 2" ethics as to whether ends justify means, and whether conflict between means and ends is a problem. He presents a characterisation of "saints", "heroes", "philistines" and "dissemblers" according to each of these ethical systems, associating each with different national ethical preconceptions. This work is interesting because it provides a way of identifying what Isaiah Berlin called "value pluralism", and certainly provides a useful metric for identifying where international conflict might arise. However, here again, we are dealing not with real people, but with abstractions.

Dealing with these problems of abstraction and ethics requires an approach that takes account of the person who might wish to do the abstracting. Von Foerster seems strangely blind to the person behind the theory - ironic for the advocate of a philosophy of personal reflectivity. He reduced himself to a 'variety processing' mechanism.

To unpick this, I think the approach is necessarily critical. I say that because I think the principal issue with ethical reasoning is 'fear'; criticality is fundamentally a way of dealing with fear.

This is one of the reasons why George Bataille's economic theory has interested me recently. Bataille's main point is that our rational reasoning about economic behaviour is grounded in fears surrounding taboos. Rationality emerges as a human construct from the swamp of things that cannot be talked about, rather than being constitutive of an inherently logical universe. That's where cybernetics' problem is: whilst it tries to engage with the problems of the swamp, it inherently argues that through its rational abstractions, the swamp's dynamics can be articulated. Deep analysis of the person and deep critique need to go hand-in-hand with the remarkable constructions (both ideational and technological) which we can create from disciplines like cybernetics.

Sunday 25 August 2013

Four Climaxes and a Theory (a musical question about information)

I've written before about musical climaxes - particularly the Liebestod at the end of Tristan and Isolde (see http://dailyimprovisation.blogspot.co.uk/2012/03/musical-climax-and-musical-essentialism.html and http://dailyimprovisation.blogspot.co.uk/2010/12/in-what-way-is-it-all-about-sex.html). What's interesting is that at these extraordinary moments, there is a feeling (in me at least - and I don't think I'm alone) of realisation - I say to myself "now this is what the last 3 hours or so have been all about...". It's the 'aboutness' of the moment that's interesting, because 'aboutness' is a fundamental aspect of how we make sense of the world more generally. It is fundamental to information, it is fundamental to our concepts of education, curriculum, competency and the dreaded 'learning outcomes' (if only they were in the slightest bit 'climactic'!!), as well as underpinning human relationships. "What is this really about?" is a powerful question.

There are other climaxes to consider in music. It is largely a 'romantic period' thing, although I would count Purcell's "Hear my prayer" as containing a wonderful pre-romantic example. The classical style doesn't really lend itself to climaxes, being more dialectical, with contrast and balance providing the structural force. In the romantic and modern period, I've been thinking about the end of Mahler's 2nd Symphony as another example, and, a bit later, the 'Libera Me' from Britten's War Requiem. These four examples share some common features.

The first feature is an increase in redundancies: motifs pile on motifs, often becoming shorter and shorter (a process sometimes referred to as 'liquidation'). In the Britten (and in the Purcell) this motivic 'piling on' is done contrapuntally - different voices echo the motifs after one another. In Wagner, the liquidation is done in one voice (the lyrical accompaniment to Isolde), gradually reducing to the 3-note rising chromatic, with occasional rhythmic variations. What is also important for Wagner is, of course, the harmonic background, which gradually winds itself up the cycle of 5ths. For Vincent D'Indy (see http://dailyimprovisation.blogspot.co.uk/2012/01/vincent-dindy-and-breath-of-music.html), this winding up is a means of increasing tension - it sets up the inevitable release of tension through falling back (moving down, going flat) along the upward tonal path. The tonality plays an even more fundamental role in the Mahler, where the initially soft prayer-like homophonic chorus subtly transforms the tonal landscape of the last movement, leading to the most 'scrunchy' and ecstatic dominant chord in music for the moment of resolution (and revelation).

Music analysts, like information analysts, tend to overlook the redundancies. But I think this is the most important thing. As Loet Leydesdorff is saying very clearly now (see http://arxiv.org/abs/1301.6849), redundancies have an autocatalytic effect, which works on both the manifestation and expectation of information. But the question is, in music, what is autocatalysed? How does this autocatalysis lead to the experience of climax? In what way is the impression "this is what it's about" created?
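
The information-theoretic sense of 'redundancy' can be made concrete with a toy calculation. The following sketch (in Python; the motif encoding is entirely invented for illustration, not an analysis of any real score) computes the Shannon redundancy R = 1 - H/Hmax of a sequence of motif labels, and shows how the 'piling on' of a single motif towards a climax pushes redundancy up:

    from collections import Counter
    from math import log2

    def redundancy(symbols):
        # R = 1 - H/H_max, where H is the entropy of the observed
        # distribution and H_max assumes all symbols equally likely.
        counts = Counter(symbols)
        n = len(symbols)
        if len(counts) < 2:
            return 1.0  # a single repeated symbol is fully redundant
        h = -sum((c / n) * log2(c / n) for c in counts.values())
        return 1 - h / log2(len(counts))

    exposition = list("abcabca")        # three motifs in rough balance
    climax = list("abcabca" + "a" * 8)  # motif 'a' piles on itself
    print(redundancy(exposition))       # ~0.02: little redundancy
    print(redundancy(climax))           # ~0.30: repetition dominates

Of course, this says nothing about tonality or expectation - it only illustrates what it means, in information-theoretic terms, for repetition to increase redundancy.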

There are two things to say here. First, the motivic redundancy and the tonal movements are both varieties of redundancy. Understanding tonal shifts as redundancy is perhaps not obvious. However, tonal shifts are accompanied by motivic repetition. There is a shifting ground of expectation which causes increased redundancy and expanded expectations. Secondly, there is a point beyond which expectation cannot be managed. I think this is because expectations depend on a form of double-contingency (after Parsons). Behind this double-contingency is the principle of engagement with music that Boris Asafiev calls 'intonation'. At some point we might wish to 'sing' (at least in our head), and this is the expression of our expectation. But might this expression of expectation be dependent on what we might expect others to intone to the music? I think it might. The expression of the 'universal' in music underpins its fundamental character, and this universality is not an individual experience.

The climax builds around expectations which are created through double-contingencies in an imagined environment of ourselves and others. As the redundancies pile on, autocatalysis increases the complexity of this environment. It becomes harder and harder to hold on to the double-contingencies. Notionally, it becomes harder and harder to hold on to the things which maintain the identity which we made for ourselves (through listening to the piece until this point). At some point, it has to fall apart. In everyday life, we might say "sod it!"; in music, we climax. I think what might then happen is that the structure of our expectations atomises. All we are left with is redundancy, and gradually we begin to build something new.

This process is clearly what happens in the Britten. In Wagner, the liquidation is simply the end of the piece. In Mahler, there is a kind of apotheosis. In Purcell, the thing is a bit more controlled, and determined by the logic of the vocal lines, but there is little doubt that the 'cry' in Purcell has brought us to a different place by the end.

Why this is important is another question. But musical climaxes are 'playful' in the sense of not being real. Yet the transformation they effect can carry over into 'normal life', even if it is merely down to the value of the catharsis.

Friday 23 August 2013

From Learning Outcomes to Personal Corpus tools

I wonder what future historians will make of our current ideas about education. I wouldn't be surprised if they pay particular attention to our obsession with 'learning outcomes'. It is this rather ill-grounded innovation which has dominated the educational landscape in most institutions, underpinning regimes of quality, consistency, management and assessment. Indeed, without learning outcomes, it is hard to see that the current phase of massification of education (and the concomitant commodification) could have taken place. Massification requires coordinating principles, and learning outcomes provide the principle.

The idea that 'learning' (which is a contested concept) can have a measurable outcome (which is also contested) doesn't stand up to close scrutiny. Deep down, it's a bit of a ruse. It provides a way of dealing with the complexity of education - particularly personalised education - such that a variety of learners' practices can be measured and meaningful assessments reached. It is because of this embrace of variety of practice, and variety of assessment, whilst appearing to maintain objectivity, that learning outcomes have been placed at the centre of the massification/commodification process. Whilst the drivers might have been pedagogical and well-intentioned, the implications are commercial.

Is it time to rethink? The massification of education has gone hand in hand with the growth of an imperious education industry. The meeting of outcomes becomes the prison that hooks learners into programmes all over the world, whilst, not infrequently, actually meeting the outcomes appears to do students little good. Furthermore, for staff, the coordination of quality regimes around learning outcomes has become the dominant discourse in institutions whose purpose had previously been to pursue truth; now they pursue managerial diktats revolving around assessment regimes and validation procedures.

The business of learning outcomes is related to the issue of 'aboutness'. It is the process of saying that what a student can do is 'about' something, codified by outcome criteria. In today's education, defending the 'aboutness' of a student's competency is usually the domain of the teacher, and typically, since the teacher will have a vested interest in getting the student through their assessment, there is a strong temptation to say "yes, I think this is evidence of ..." even if it's a bit of a stretch to say it.

This encourages a mentality in which individuals sum up their knowledge with a tick-list of achievements, verified by assessment, but often not demonstrable outside the educational setting. Learning Outcomes have led teachers and learners to believe that the 'aboutness' of education is what matters: having found the 'evidence', the ticked box indicates that the thing that the box is about has been acquired.

Understanding the topic of learning is important. To this extent aboutness matters - but only to the point that understanding the topic is generative of stories and actions within the individual that others can coordinate around, agreeing that the learner's actions are indeed 'about' the same topic. Asserting the aboutness is not the point. The point is to demonstrate fluent patterns of behaviour which others can judge for themselves. Moreover, it is for each individual to find their own stories and to relate those stories to particular topics. The path of learning should be a path of growing personal confidence and freedom.

The problem with Learning Outcomes is that, in their emphasis on the aboutness of learning and their concern for making defensible statements backed up with 'evidence', the performative and personal coherence of individual learning is not developed. (There's an excellent piece on 'evidence-based policy' here: http://www.iea.org.uk/in-the-media/press-release/%E2%80%98evidence-based%E2%80%99-policies-are-damaging-uk-policymaking - and I would echo many of the criticisms with regard to education.) Often, the rigidity of ticking boxes leads to a kind of educational alienation (see http://dailyimprovisation.blogspot.co.uk/2012/05/education-and-alienation.html) where learners and teachers find themselves in inauthentic activities, underpinned by fear of failure to comply with institutional requirements. Under these conditions, confidence is the first victim - which is a real scandal!

How do we get out of this? We need a way in which learners can discover what is within them, and teachers can guide and develop them in ways that relate to learners' interests. Technology can help. The corpus analytical tools that are now so developed can be used by individuals to explore the totality of what they write. These are tools for asking powerful questions: what does a learner write about? who else writes about that? how do those others express their ideas? what have they learnt? where did they learn it from? what are they interested in but never write about? why don't they write about it? and so on... Can corpus analytics show a gradual tuning-in to established discourses and social networks? I think there is a possibility. And the way the tuning happens, the stages it goes through, are a learning journey. Is it too much to state that even in medicine (which is the classic counter-example in inquiry-based learning), there can be a process of tuning into discourses - which can only occur through practical experience?
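
As a hint of what the first of those questions might look like in practice, here is a minimal sketch (Python with scikit-learn; the folder of writing is hypothetical) that pulls out the highest-weighted TF-IDF terms from everything a learner has written - a crude first answer to "what does this learner write about?":

    from pathlib import Path
    from sklearn.feature_extraction.text import TfidfVectorizer

    # A hypothetical folder holding one learner's essays, posts, emails...
    docs = [p.read_text(encoding="utf-8") for p in Path("my_writing").glob("*.txt")]

    vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
    tfidf = vectorizer.fit_transform(docs)

    # Average each term's weight across the corpus and list the top 20.
    weights = tfidf.mean(axis=0).A1
    terms = vectorizer.get_feature_names_out()
    for term, w in sorted(zip(terms, weights), key=lambda t: -t[1])[:20]:
        print(term, round(w, 3))

Nothing here is beyond an afternoon's work; the interesting questions (what have they learnt? why don't they write about X?) begin where sketches like this leave off.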

The other advantage of a corpus-oriented approach is that within institutions themselves, management becomes oriented not around assessment points, tick-boxes, etc., but around learning progress measured in terms of 'fit' with established discourses. Even the discourses within the institution itself can be analysed, so that the learning journeys of students and the learning processes within the institution become more closely harmonised. In this way, Universities might listen more attentively to their students and embrace the learning of everyone within a broad social ecology.

Thursday 15 August 2013

About "Aboutness": The Business Corpus

I'm in the process of writing a number of papers about the TRAILER project. TRAILER is an attempt to relate thinking about self-certified competencies and informal learning to the organisational needs and reflexive processes of businesses. It's an ambitious project because there is so much conceptual and technological territory to cover. Unsurprisingly for a relatively small-scale Lifelong Learning project, some of the interventions have been a bit hit-and-miss. However, something important is, I think, being uncovered - not so much in the project's successes, but in its challenges.

A big problem within the tooling of the TRAILER system is the necessity for users to make selections from enormous lists of competencies. This isn't TRAILER's fault! It's really the fault of an EU Commission obsession with taxonomies of what are pretty meaningless terms as a means of coordinating professional development. Unfortunately, I think a powerful group of academically ambitious semantic-web software engineers rather oversold the potential of their technologies as they insisted on the creation of job 'ontologies': would that there had been some decent philosophers inspecting things - they would have pointed out that these so-called 'ontologies' were really 'epistemologies', and bad ones at that!

Worse has been the fact that ineffective taxonomies attracted bureaucrats because they appeared to make highly complex educational domains manipulable by managers. Consequently, the 'competency' idea caught on - most recently Poland was proudly presenting its own competency frameworks, drawing heavily on models in other EU nations.

The problem with competency as it is conceived by the Commission is very deep. Fundamentally, the problem is about "aboutness". A competency statement is a statement 'about' a flow of experience: either the experience of performing a particular skill, or the experience of watching someone performing a skill. We see this every day, and we say informally, "what I'm seeing is really about...". However, how I decide what something is about is a mystery. How it relates to the flow of experiences is poorly understood. Yet this lack of understanding doesn't stop the competency taxonomists labelling experiences everywhere, saying what they are about, and then treating their statements of 'aboutness' as a kind of code for employability.

I believe the way to tackle this is to look more deeply at the nature of 'aboutness'. To say something is 'about' something is, in the final analysis, a decision to make a particular utterance. Something in the flow of experience has contributed to the conditions within which that decision can be made. This utterance is an expression of an expectation which is in some way maintained by the flow of experience, in the same way that a melody is maintained by its accompanying harmony and figuration. There are mechanisms (particularly in biology) that we might draw on to explain this. One of the principal mechanisms may be the way that the flow of experience acts as a 'catalyst' for the formation of particular expectations over time (in the way that certain enzymes catalyse cellular growth). I think the equivalent of enzymes in human communication are 'redundancies' - all the parts of a communication which are not directly relevant to the aboutness of a thing.

If we understand how we arrive at an understanding of aboutness, we don't need to create taxonomies. We can instead infer the aboutness of something analytically. Through processes of data mining and simple data requests from users ("say briefly what you think this is about in your own words") we can not only get a grasp of somebody's competency, but also an indication of the way individuals see themselves, and the extent to which professional skills and personal values are integrated. All these are things that we intuitively 'read' when we meet someone, and with data mining it might be possible to access a rich enough dataset of information in order to at least have a much deeper dialogue with somebody.

What this requires to work is a corpus of information. I talked the other day about the 'personal corpus'. But what of the 'business corpus'? The Business Corpus is a collection of all the documents, minutes of meetings, strategies, tenders, legal documents, etc. that a business produces. All businesses continually ask themselves what they are about, how they relate to the world (their customers), what skills they require, what products to develop, etc. The answers to these questions lie in between an examination of what they have already done, and an examination of what's happening in the world and what's changed. On looking at the Business Corpus, and on being asked "what matters right now?", an analysis can be performed which examines the relation between what is already known and what might be possible. Throw in the profiles generated through individuals' 'personal corpuses', and sophisticated matching can show where things might become possible.
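
To gesture at what such matching might involve, here is a deliberately crude sketch (Python, standard library only; the two corpus files are hypothetical) that compares the characteristic vocabulary of a business corpus with that of a personal corpus using a simple Jaccard overlap of their most frequent terms - a stand-in for the 'sophisticated matching' above:

    from collections import Counter
    import re

    STOPWORDS = {"the", "and", "that", "for", "with", "this", "are", "was"}

    def top_terms(text, n=50):
        # The n most frequent non-trivial words in a body of text.
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if len(w) > 3 and w not in STOPWORDS)
        return {w for w, _ in counts.most_common(n)}

    def jaccard(a, b):
        # Overlap between two term sets: |A & B| / |A | B|.
        return len(a & b) / len(a | b) if (a | b) else 0.0

    business = open("business_corpus.txt").read()  # minutes, strategies, tenders...
    personal = open("personal_corpus.txt").read()  # one person's writing

    print(jaccard(top_terms(business), top_terms(personal)))

A single overlap score is obviously far too blunt for real decisions; the point is only that once both corpora exist as text, the matching question becomes computable at all.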

This is what TRAILER is really about. But so far TRAILER has attempted it with taxonomic competencies, not data mining. Exploring the alternatives is now a priority, because there are clearly problems with taxonomies. More importantly, there are now technical opportunities to do the data mining, and open source tools to do it with, which were not there before. Furthermore, connecting our understanding of how the 'aboutness' of something arises over time with this analytical effort can result in both individuals and businesses seeing what is meaningful to them without the burden of making choices that fit into a bureaucratic machine.

Wednesday 14 August 2013

Raspberries, Human Agency and Music

As academics, we have a tendency to think too much. There are pathologies of thought - the main one manifesting itself as a lack of humour, where all the earnestness and seriousness becomes a form of narcissism. The only defence is to pop our own bubble. That moment of revolt is a moment of freedom. It is where we step outside our abstractions (or someone else's abstraction that we got caught up in) and say "this is a load of wank, isn't it?" It is the 'wank' moment (or the 'bollocks' moment if you prefer) that is the real trigger for revolution: when leaders become figures of fun, the emperor is seen to have no clothes (clothes made out of piezoelectric fibre indeed!), and we all look at each other and laugh. Aletheia is closely related to catharsis.

There is a reason, I think, that the classical period in music stands out above all others. It is the period that subsumed humour as a structural principle. From Haydn's 'surprises' and jokes to Beethoven's Bagatelles and Mozart's farces, always in this music there is anticipation of the wise fool disrupting the proceedings, and in the process, setting everyone free. Of course, in literature, humour is there from the beginning, but assimilating it into music was a special moment because music is so much more elemental, and easily tempts us into worlds which are beautiful but otherworldly. The classical moment was the moment humans could situate themselves between the sublime beauty of other-worldliness and the decisive moments of comic intervention which would make everyone sit up.

I've been making a lot of "ambient" electronic music recently. I find myself getting caught up in the mellifluous fluctuations of sonority which seem to only be possible with electronics. Strangely, once started, these sonic environments are difficult to escape. "What would Beethoven do?" I ask myself. Well, he would want to break the flow; he would want to do something different. Because fundamentally, as human beings, that's what we are really about - doing something different. It is also the principle of intellectual endeavour.

I remind myself of this as I look at the unfolding disaster of the UK Higher Education landscape, and particularly anxiously at what's happening in science. The dominance of big data cranking in almost every science is seductive in the same way as ambient music: everything becomes process. Who's going to blow the raspberry? Who's going to say the emperor hasn't any clothes? Who's going to say "if we carry on doing this, we'll forget what science is about"... (actually, Bill Amos has said that recently: http://www.timeshighereducation.co.uk/comment/opinion/big-science-big-hype-big-mistake/2005124.article)

Maybe it would help if we could refind a way to blow raspberries in ambient music! Most contemporary music is 'process music' - motoric rhythms, subtle changes of texture, clever transformations, inflections, timbres, etc. are par for the course. I've always felt Michael Tippett was ahead of his time because the spikiness of his music doesn't fit this pattern at all. He understood Beethoven better than anyone, and like Beethoven, he breaks movement with out-of-place outbursts. But that is the human moment - and Tippett's music speaks it more clearly than that of many other 20th century composers (maybe Shostakovich also knew about this). So Tippett may be my model.

It's a model that's more broadly applicable though. When the education system has itself become subsumed into a process that nobody seems to be able to determine or control, and where everyone is in a kind of spaced-out confusion, we need to look for the raspberry blowers. Education needs its human moment quite urgently.

Tuesday 13 August 2013

Was Gordon Pask Right about Conversation?

Pask's conversation theory is a rich and rather dense piece of work that few people outside the cybernetics community - within which he was a central figure - know. Thanks to Diana Laurillard, however, most people know about the 'conversation model' - the model which situates learning as a coordination of communications between the teacher and the learner, where teachers judge their actions according to the feedback (or 'teach-back') given by learners. At a basic level, this appears defensible. However, the concept that does the most work in the conversation model, and in Pask's theory in general, is the concept of 'agreement'. I came across this video where Pask talks about 'agreement' in the context of his 'entailment meshes' (basically, the relationships between concepts).


The problem with agreement is that a mechanistic approach that sees it as a coordination of speech acts, or other forms of agency, has many explanatory gaps when faced with real teaching and learning situations. Where is boredom? Where is novelty? Where is the feeling that either a learner or teacher might get that "despite all the words being right, something's not connecting"? Where is empathy (there appears precious little of that in his presentation here!)? Where is enthusiasm? Where is passion for the subject? And perhaps most importantly, where is the explanation for our wanting an explanation in the first place?! Like a lot of cybernetic theories, there are no real people in Pask's model; only abstractions of people. That most human of attributes, agreement, is subsumed into an inhuman context.

The problem with this is that an abstraction of a person is basically the description of a process. Concepts fly into concepts, bound by some mysterious force which negotiates some kind of attraction or repulsion between them (Pask was very fond of physical metaphors). But it all sounds a bit like Newtonian "hard and massy" particles. Concepts become related to fundamental mechanisms which by their operation alone can constitute the full richness of a person. But what of the people (Pask and friends) who want this to be true? To you and me they are awkward, funny, real (but dead, of course!) people - not constituted by fundamental mechanisms of 'P-individuals' and 'M-individuals'. Those technical terms are just the terms by which a real person (the real Gordon Pask) wanted to know himself. But if he wanted to know himself in such a constrained way, what does that tell the rest of us about him?

Of course, I'm being a bit unfair. This isn't just about Pask. It's about EVERY lunatic social and psychological theorist who believes they can explain what a person is, whilst failing to see the person they are in wanting to do this! (I obviously include myself in this). Surely it can't be feasible to reduce a person to an abstraction, or (worse) to a process?

The physical analogies are interesting though, because one thing physics tells us is that whilst we may see mechanisms of hard and massy particles knocking into each other, there's a hell of a lot we can't see which nevertheless seems to play a big role in the process. It's not just 'dark matter', but the full gamut of unimaginable causal influences on things that happen.

In physics and in learning, there is a common rule which I cannot sum up more simply than by saying: "The thing that's missing is the thing that's missing."

Most importantly, that's not to say that the "thing that's missing" isn't causal. Dark matter is causal, but we only know the cause by its effects. In fact, as Hume pointed out centuries ago, we only know any cause by its effects (or at least, the regularity of its effects). The problem with "the thing that's missing" is that regularity is hard to come by - there can be no regularity with something that is missing.

Pask was wrong because, in his eagerness to overlook his own desire to explain conversation, he also overlooked the "thing that's missing": he failed to consider it as a fundamental part of conversation, and a fundamental part of agreement. I'm not writing this out of some clash of concepts in my head. I'm writing it because I sense something missing. There's another way of saying this: I am not writing this as some logical consequence of what I already know or conversations I have had; I am writing it as the result of critical inspection. Being logical and being critical are different intellectual attitudes. The logical approach seeks to concretise ideas and form coherent structures out of them; the critical approach seeks to overcome the fears that sit behind the desire for the logical approach.

Pask has a logical model of conversation and of agreement. In criticising it, I am advocating a critical approach to conversation instead. The coordinating forces in conversation are not coordinations around concepts, but coordinations around fears. Frightened teachers teach worst, because fear typically leads them to take an authoritarian stance towards students, so that they protect themselves from awkward questions. They could still be doing exactly what Pask says in his model, but the positioning between teacher and student would make the experience very different from that with an unafraid teacher.

By saying we coordinate around fear, what I'm really saying is that the coordination is around 'what's missing'. The driver for learning - indeed, the driver for agreement - is critique, but what's missing emerges in the flow of experience, which includes conversation. Much of experience just passes us by without any impact until a particular point when suddenly we realise what it was about. The way our expectations and realisations arise floating on a sea of redundant information is the great mystery of human experience. The irony is that since most of what passes is redundant, we take no notice of it. Yet it may be the most important thing: a melody without accompaniment is a pale shadow of the melody with its accompaniment.

This is where Pask went wrong. He took the 'aboutness' of things - the topics - and tried to create a logical mechanism where aboutnesses interacted (in entailment meshes, agreements, and other paraphernalia). He lost sight of the things the aboutness was about. He lost sight of the redundancy upon which aboutness arises. He lost sight of what he'd lost sight of.

Monday 12 August 2013

The Personal Corpus

Text mining tools and algorithms are becoming increasingly sophisticated. Most people are unaware of what can be revealed from the data that they post to Facebook, or the data they submit to Google through searching and using Gmail. We are now in a position where the global internet corporations know more about each of us than those closest to us do; indeed, with their analytic tools, they may know more about us than we do ourselves.

But text mining tools, whilst complex in their algorithms, are not rocket science. It wouldn't take much to provide these kinds of tools to ordinary learners and teachers. I'm finding the idea of empowering everyday users with sophisticated data mining tools increasingly attractive as a means of gaining greater personal autonomy in the face of global forces which are harvesting personal data for their own ends. So let's start with the idea of a "personal corpus".

A Personal Corpus is the sum total of the text you ever write. Emails, essays, tweets, etc. Everything goes in, and your data analytic tools can pick over it. You can do a particular kind of search with this sort of setup. Rather than saying what you are looking for, you say what you think is the most important thing at a particular time: the "topic" of the moment. A "topic" is really a compression of a lot of stories in a flow of information. What the analytic tools do is examine your "topic". They might look for occurrences of your topic in your corpus. But more importantly, they might look on the internet for other 'stories' relating to your topic. What emerges is a search corpus (drawn from the internet). The match between the search corpus and the personal corpus can then be calculated. It may be that the "topic" is something new; something you've never thought before. In this case, a process of recursive search can reveal sub-topics that might lead you from your chosen topic to the topics identified in your personal corpus. A path between your topic and the topics already in your corpus can be calculated.

So, for example, suppose you think the most important thing at the moment is "data mining". I have a personal corpus (this blog!) which I can search for this. But a keyword search is less revealing than a search where I compare all the expanded definitions and stories around 'data mining' with the narratives I already have in my personal corpus. Here I can look at the depth of matching, identify associated terms, and explore the links between those associated terms and my corpus. So I can identify, for example, that in 2010 I was talking about something like this, and maybe I would want to revisit some of this work. To me, that is valuable because I've been redirected to look not at some resource on the internet, but at something that was already within me.
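
A minimal sketch of the matching step (Python with scikit-learn; the posts and the topic string are stand-ins for a real corpus) might rank the documents of the personal corpus by their cosine similarity to a statement of the current topic:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    posts = {  # hypothetical personal corpus: date -> text
        "2010-06-01": "early notes on data mining my own texts and what they reveal",
        "2013-08-12": "improvisation, music and the experience of redundancy",
    }
    topic = "data mining: extracting patterns and meaning from personal text"

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(list(posts.values()) + [topic])

    # Similarity of each post to the topic (the topic is the last row).
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    for (date, _), score in sorted(zip(posts.items(), scores), key=lambda x: -x[1]):
        print(date, round(score, 3))

The recursive search described above would then repeat this against documents fetched from the internet, walking from the new topic back towards the topics the corpus already contains.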

With a Personal Corpus, the relationship between the user and social software is reversed. Most social software tools are used for 'sharing' documents - social software serves as a repository. With the Personal Corpus, the internet and social software tools are used as a resource for data extraction; corpus data is not intended to be shared, but stored (maybe) locally in order to be analysed.

With the Personal Corpus, individual users can determine the likely impact of particular social messages, whilst at the same time being able to get an insight into the value that companies like Google might extract from that data. But more importantly, it provides greater personal autonomy through allowing users to explore the likely impact of different kinds of intervention.

The Personal Corpus might be seen as an extension to e-portfolio tools (which never really took off, did they?!) or to the Personal Learning Environment (which was hijacked by the axe-grinding bloggerati!). It might provide a way of really giving learners some useful tools which give them something back that might have some meaning for them...

Thursday 8 August 2013

From Meaning to Communicative Ecology

Understanding the meaningfulness of communications in an organisation is the first step to understanding their ecology. An ecology requires a continuous stimulus for meaning generation, and for this to occur, sufficiently different types of communication are necessary. Because of this, typical managerial interventions can upset the balance of ecological communications. The way to destroy any ecology (whether biological or communicative) is for one individual's will to assert itself over all the others. It doesn't matter if the will is to drill a deep-sea oil well, or whether it is to sack half the staff; both the cause and the effect are the same: the cause is 'lack of listening' and the effect is catastrophe.

We need to find a way of measuring communicative ecology. We now have sophisticated ways of measuring biological ecology, and I want to explore ways in which those techniques (and others) might be leveraged towards managing institutions better. The challenges are significant. Technology, particularly now we have 'big data' (for small minds!), can be leveraged by powerful people to justify any hare-brained scheme, giving little room for opposition in the face of 'evidence' that a decision is the right one. The problem lies in the poisonous combination of computer technology (as we know it) and cognitivism. It is cognitivism which encourages individuals (managers) to believe that they alone can compute the solution to the institution's problems, by virtue of the fact that they alone have a better computer (their brain) and are privy to all the necessary information from their computers.

To really deal with managerial pathologies, we have to deal with the problem of cognitivism, and in order to do that the fundamental metaphor that underpins it needs to be dismantled and re-assembled. This is the metaphor of the computer - or rather, the Von Neumann/Turing computer, which separates memory from processing. One of the really exciting things that emerged from the ASC conference was the interest shown in new conceptions of the computer: drawing on the earlier work of Beer and Pask, electro-chemical and biological computing appears to be exciting a lot of interest. Most important in this work is the lack of separation between the human being and the 'machine'. In this universe machines are sentient, and attachment relations - not just between a single human and the machine, but between individuals - become fundamental to the computation process. In this configuration, there can be no separation between man and machine, and no separation between processing and memory. All is structure. Because of this, no single individual is capable of computing anything alone. There is no 'alone'; we need each other to think.

Which is where we get back to communicative ecology. The biological connection between structure, processing and memory, between man and (sentient) machine becomes a social structure. Understanding and analysing how that social structure is performing is likely to be the bread and butter of managing social ecologies.
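
One very crude first measurement can be sketched (the message log below is invented): if the diagnosis of a dying ecology is 'lack of listening', then the balance of sending and receiving in the communication structure is a place to start:

    from collections import Counter

    # An invented log of who communicated to whom.
    log = [("manager", "staff_a"), ("manager", "staff_b"),
           ("manager", "staff_c"), ("staff_a", "staff_b"),
           ("staff_b", "staff_a"), ("manager", "staff_a")]

    sent = Counter(src for src, _ in log)
    received = Counter(dst for _, dst in log)

    for person in sorted(set(sent) | set(received)):
        print(f"{person}: sent {sent[person]}, received {received[person]}")

A node that sends a great deal and receives very little looks rather like the individual will asserting itself over all the others.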

If we understood better how we really work, then there would be sensible things to measure. In particular, we need to understand how it is we make a decision. Increasingly, I am convinced that it is not the measure of information that matters in the making of a decision, but the measure of redundancy. Redundancy has an autocatalytic effect on thought (another key issue emerging from the ASC conference, which was full of redundancy). Managing social ecologies may be about managing the generation of redundancies.
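
Shannon's notion of redundancy, R = 1 - H/Hmax, gives one crude way of putting a number on this (the message streams below are invented for illustration):

    import math
    from collections import Counter

    def entropy(counts):
        total = sum(counts)
        return -sum((c / total) * math.log2(c / total) for c in counts if c)

    def redundancy(messages):
        """Shannon redundancy: R = 1 - H/Hmax over the observed topics."""
        counts = list(Counter(messages).values())
        if len(counts) <= 1:
            return 1.0  # a single topic only: maximally redundant
        return 1 - entropy(counts) / math.log2(len(counts))

    # A conversation circling round the same theme scores above zero...
    print(redundancy(["acting", "acting", "acting", "acting", "learning"]))
    # ...while a stream where every message differs scores exactly zero.
    print(redundancy(["a", "b", "c", "d", "e"]))

Whether this particular measure captures what Leydesdorff means by redundancy is another matter; it is only the simplest place to start.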

Generating redundancy doesn't come easily in a society that is drilled for efficiency. But the efficiency drive can also be traced back to cognitivist myths. We are back to the challenge of challenging the received metaphors of intelligence, capability, thought, merit and productivity. Getting technical about reimagining computers may not seem like the game-changer that is required in the difficult circumstances we find ourselves in. But there are currently computer scientists playing with things, saying to themselves "this will change everything"... They've been right before.

Sunday 4 August 2013

American Society for Cybernetics Conference at the University of Bolton

The ASC's discussion conference at the University of Bolton has just finished (see http://asc-cybernetics.org/2013/). What a week it's been! Right now, I feel very proud of my University and its town. Bolton doesn't often see this kind of international influx of Americans plus a good number of other nationalities, and for a whole gang of them to descend onto the town's bars and restaurants and have a really great time was very heart-warming. Their presence lit up the University: in the foyer in the mornings, groups of people with strange accents were discussing interactive art, improvisation, sentient computing, big data, psychosis, education, learning, economics, music and sociology. This is the discussion that cybernetics is.

It was the most intense conference I've ever been to. I'm only glad that I didn't have the jet-lag that most of the delegates had. It was also the most creative. The ASC's discussion conferences began in 2010 in Troy, NY, at Rensselaer Polytechnic Institute. That conference was a remarkable experiment. In the presence of Ernst von Glasersfeld, three days were devoted to discussing Mathematics, Art and Design, with artistic performances and paper presentations arranged around the discussions in the evenings. Many of the group at Rensselaer also came to Bolton: the feeling was that the dynamic creative energy from upstate New York had been transported to this little part of industrial northern England. I have to confess, before the conference I didn't think it would be possible.

When the conference started, things began to fall into place. Risks that were taken in designing tasks for the participants (like "make your own musical instrument") turned into wonderfully rich expressions of creativity that set the tone for the discussions which followed. Then, having been given the (pretty difficult!) task of asking how acting, learning and understanding can be distinguished and how they are related, groups set about exploring the issues, often getting into deep water and finding their way out of it through heightened creativity and playful performances. The plenary sessions, where each group presented its findings, were a driver for real innovation in the ways ideas were expressed.

In the evening of the first day, there were artistic performances by some of the delegates, including Bill Seaman (http://billseaman.com/) and Graham Clarke (http://www.grahamviolin.com/), a fascinatingly theatrical demonstration of Jennifer Kanary's psychosis simulator (http://www.labyrinthpsychotica.org/Labyrinth_Psychotica/Home.html), and Ranulph Glanville's electronic piece 'Blind'.

Discussions continued the following day, with groups changing round and the focus shifting from Understanding to Acting. In the evening, papers were given: Loet Leydesdorff presented his work with Inga Ivanova on information redundancy and meaning (work which has fascinated me for a long time); Jerome Carson gave a revealing presentation about workplace stress; Faisal Kadri talked about artificial intelligence and emotion; Narayana Mandaleeka from TATA spoke about value and quality improvement processes; and Tirumala Vinacotta (also from TATA) spoke of holistic thinking in business.

There was a balance struck between the discussions and the academic content. Not everyone knew cybernetics to the same extent (there were tutorials on the day before the main conference). The overall impression was of a kind of retreat where cyberneticians could talk with each other in depth, and with seriousness, about some very difficult topics. The difficulty of the topics meant that many discussions kept coming round to the same issues, but this repetition - rather than becoming boring - inspired greater creativity and playfulness. I found this the most interesting feature of the conference.

Most conferences generate variety: lots of different papers, lots of topics - delegates have to attenuate the information by selecting what they are interested in. This conference generated redundancy: an extended (and sometimes repeated) conversation about a single topic - delegates had to be creative to deal with the redundancy. My hunch (drawing on Leydesdorff's thinking) is that redundancy is in some way autocatalytic: it creates the conditions where growth can occur. It may be too early to tell, but there was both personal growth and intellectual growth within the discipline occurring over the three days.

A post-conference visit to Blackpool sealed what had been a very special time for those who were there. The University of Bolton will be fondly remembered by this group of remarkable thinkers as the place where many of their new ideas took root.