Thursday 27 August 2015

Beyond Paul Mason's Postcapitalism

The extraordinary economic crisis, which has now been with us for seven years, has at the very least been a crisis in economics. Deep down, however, it is a crisis in epistemology. We have been shaken in our certainties about how the world works, what each of us might expect out of life, the role of the state, the provision of social care, the maintenance of the roof over our heads, and the rewards of education, learning – even technical innovation. With each turn of the neoliberal screw, another certainty drops off the world. Because it’s a crisis in epistemology – knowledge – one would expect education to play a central role in understanding how to get out of it. Unfortunately, neoliberalism messes with education too, inflicting a market-oriented commodification of knowledge, certification, status, journals, H-indexes and so on: neoliberalism has attempted to uphold the scarcity of education, which disenfranchises those whose experiences do not fit its rigid models and its outdated pedagogies.
Paul Mason’s central argument in his new book “Postcapitalism” is that the dominance of information and technology in society means that the scarcity-preserving ways of capitalism in education, intellectual property and so on are in conflict with the forces of openness and abundance. It’s a kind of stand-off which is leading to stagnation, and which will eventually usher in an abundant “postcapitalist” world where free exchange is the rule (a kind of economic Wikipedia), not the maintenance of scarcity. It’s a bold and well-argued thesis: what I’ve admired most is his exploration of a lot of off-the-beaten-track economic insight, as well as some endorsements of classic economic theory not usually associated with a progressive world-view. In the latter category, his approval of Austrian economics (the root of neoliberalism!), and particularly of von Mises and Hayek’s opposition to planning, will upset Marxists. But what Mason is saying is that within von Mises’s analysis (which is fundamentally rooted in the study of economic exchange – it was von Mises who first revived the idea of ‘catallaxy’, the science of exchange, which Hayek would go on to develop) there is a seed for getting a better grasp of what information and technology are doing to our economy, and of the importance of the ‘gift economy’ in building a new future. The problem for economists, however, is stepping outside the paradigm of mainstream economics whilst fashioning progressive arguments in an old language. To suggest that Mason is still caught in the paradigm of neoclassical econometric models is not to snipe unnecessarily; it is to seek a way of articulating what his instincts are telling him without becoming beholden to the bad econometric habits which took us to this terrible situation in the first place.
Mason picks up on the need of capitalism to maintain scarcity in the digital world, creating technological monopolies around intellectual property: not just Apple and Microsoft, but equally academic journals, degrees and so on. Central to his argument, however, is the role of capital imbalances in triggering economic cycles, in which technological innovations and the opening of new markets are effects, not causes. He enthuses about Kondratieff’s ‘long wave’ cycles (I learnt a lot about Kondratieff), arguing for the veracity of their macroeconomic mechanisms and lamenting Kondratieff’s low profile, whilst criticising Schumpeter’s similar cycles, which privilege technical innovation as the cause of an economic wave. The Schumpeterian dominance has led to all sorts of policy initiatives to drive investment in new technologies, from biotech to graphene, in the hope of revitalising capitalism. Mason questions this, considering whether the information revolution is creating a world where there are no new old-fashioned capitalist markets to be opened up unless scarcity is enforced in ways that will increase inequality further. In championing Kondratieff, Mason is arguing that we have to understand capital imbalances as the triggers for a new wave. He explains:
“a long wave takes place because large amounts of cheap capital have been accumulated, centralized and mobilized in the financial system, usually accompanied by a rise in the supply of money, which is needed to fund the investment boom. Grandiose investments are begun – canals and factories in the late eighteenth century, railways and urban infrastructures in the mid-nineteenth century. New technology is deployed and new business models created – leading to a struggle for new markets – which stimulates the intensification of wars as rivalries over colonial settlements increase. New social groups associated with the rising industries and technologies clash with the old elites, producing social unrest.”
Here I feel a weakness emerging from Mason’s economic orthodoxy: he doesn’t ask ontological questions. He doesn’t ask “what is capital?”. He does attempt to ask “what is money?”, giving a good description of fiat money and the importance of confidence in a bank’s promises, although of course this raises more ontological questions than he has space to address. Instead, he points to the history of financialisation with a fascinating quote from Fernand Braudel, commenting on the fall of the Netherlands economy in the 17th century, that
“Every capitalist development of this order seems, by reaching the stage of financial capitalism, to have in some sense announced its maturity: it is a sign of autumn”.
Seeing this against the rapid financialisation of western economies is enlightening: markets become direct sources of finance for companies, banks pursue consumer debt to generate profits, and financial instruments become ever more complex and ever more dependent on sophisticated, super-fast algorithmic trading (Mason doesn’t mention this, but the dominance of BlackRock’s Aladdin on trading floors across the world is a huge problem). However, there are deeper questions to ask about human-human relations and human-world relations, within which the questions about capital and money have to be situated.
Marx’s ontology is a good starting point for addressing these issues. Drawing on the labour theory of value, and pointing out that capitalist investment in the means of production necessarily creates the conditions for continual investment in order to maintain the machines, Mason draws attention to Marx’s idea of the “free machine”. Marx speculated as to whether a machine which required no further capital investment to keep it going would ultimately drive the marginal cost of production down to zero. Mason argues that the world of information technology and the internet is such a machine: once made, it keeps going forever. This drives Mason to the conclusion that the open-source gift economy will produce a new social order. The problem is that without a deeper analysis of the ontological issues, it is hard to determine whether he is right. There may be free machines, but the deeper human-human and human-world issues are fundamental.
Having said this, Mason’s analysis of the present crisis is, I think, pretty much spot-on: there is a stand-off. Capitalism attempts to maintain scarcity in the face of forces that seek to produce abundance. Everything we see today, from Apple and Google to the TTIP agreement, DRM, elite education, branding and pay-walls, shows at the very least where the battle lines are drawn; the result is increasing inequality as capital amasses with nowhere to go. Mason argues for a political settlement to address the stand-off, including a universal basic income and government intervention in monopolies. I think at this point he reveals the idealism which underpins his arguments. Yes, there is a stand-off – but much of it is a confidence trick, particularly in education. Crises of epistemology are matters of misplaced faith.
Education is important because it is relational: it is about between-ness, whilst the human exchange of capital, goods and services (which so fascinated von Mises) is also an in-between thing. The difference between the free stuff of the free machine that Mason thinks is the future and the paid stuff which results from producing scarcity is that the free stuff emphasises relations, whilst the paid stuff allows itself to be objectified, rationalised, counted, econometricised, and so on. More importantly, the marketization of education has emphasised the close relation between money and education. Money is also relational – as we are currently seeing in the woes of the Chinese stock market. But the system prefers to objectify it.
Then there is a problem with our technology. Mason’s picture of the future is driven by big data and agent-based simulations which keep track of the gift economy. But what is the creation of such models, and their deployment for coordinating society, if not a huge exercise in scarcity production? The stages of thinking that led him here are stages of thinking about capital, labour and technologies as “things”. But these things are also relations. In fact, they are relations of expectations which become codified in the way people communicate and in the technologies they use. Change expectations and you change everything.
There is a stand-off. But all stand-offs need to be negotiated: they need to be exposed as relations, and those relations unpicked and their expectations explored. The technological issues which sit at the heart of the stand-off need to be argued over: technology needs to be politicised. The inequality problem and the creation of scarcity have produced a situation where the stand-off cannot be exposed as relational, because those defending their positions all speak with the same voice: it is a state of ecological imbalance. Fundamentally, it is a pathology of management, which early on used its communications technologies to keep everyone ‘on message’. This has meant that policy in corporations, universities and government is now determined by people who are pretty much the same as one another.

In the end, the only way to address a crisis of epistemology is to work to understand each other better: our technologies need to become better intersubjective technologies. That way, in the words of von Foerster, we can create technologies that address extant social problems, rather than asking our extant technology to create problems it can solve.

Thursday 20 August 2015

A Research Plan: Informal Learning, Technology and Cybernetic Science

All learning occurs within constraints: some constraints are codified within formal educational systems; others are often unarticulated, emergent in personal life, or codified in aspects of life unrelated to education (e.g. economics or politics). Whilst informal learning falls into this latter category, the constraints surrounding formal and informal learning are nevertheless entwined. For example, the formal education system characterises the learning of students as constrained by the conventional regulatory mechanisms of the education system (classrooms, curriculum, timetable, assessments, etc.), whilst the learning of teachers and managers, together with much of the learning of the students within the system, is subject to constraints which extend into society at large. What are the constraints of learning beyond the institution? How can we investigate them? What are their dynamics? What measures might governments take to manipulate them to improve society?
My research approach to informal learning has been focused on identifying and understanding the dynamics of the various constraints within which learning occurs. This has been a practical, theoretical, technological and pedagogical project, involving experiments in teaching and learning as much as theory-building: informal learning presents issues which challenge the separation between teaching and research, and more fundamentally, issues of social equity, status, polity and the nature of the relationship between education and society. Since universities are tied up in mechanisms of status and equality, my research into informal learning has trodden a difficult line between the formal constraints of academic discourse, curricula and publishers, and active pedagogical engagement. This presents epistemological and methodological challenges which have formed a major part of my research.
My practical engagement ranges from attempts to increase flexibility, self-organisation and personalisation in formal assessment in Bolton’s Computing department, through innovative engagements with students to exploit their informal learning to enhance employability, to a Masters framework we established at Bolton (IDIBL: Inter-Disciplinary Inquiry-Based Learning), based on the Ultralab Ultraversity model, which provided an opportunity for exploring the constraints between ideals and reality in bridging the gap between formal and informal education. In addition, I have delivered (semi-formal!) seminars on informal learning and technology to teachers, learners, businesses and artists in the UK and Europe as part of projects funded by the EU and JISC. Throughout, my methodological approach has been to use cybernetic modelling techniques to generate ideas, explanations and interventions, and then to explore the constraints which are exposed when some ideas work and others don’t.
Informal Learning Research to date: Personal Learning Environments
The theoretical foundation for the study of constraint is explicitly a cybernetic concern, distinct from the more conventional focus on the identification of causal mechanisms in the light of statistically-produced event regularities in education. Cybernetics is a discipline of model-building. Although positivist approaches to modelling envisage models being used to explain or predict phenomena such as the impact of policy interventions (it is almost certainly impossible to ‘simulate’ the reality of education, although I have usefully experimented with agent-based modelling techniques in the past), models can simply be viewed as ways of generating ideas, theories, explanations and possibilities for intervention, each of which can be explored in reality.
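To give a flavour of what this means in practice, here is a minimal agent-based sketch (a toy written for this post, not one of the models from the projects described below; the adoption rule, contact numbers and thresholds are all invented). The model generates an idea worth testing: that the informal adoption of a practice might stall completely below a critical level of initial ‘seeding’.

```python
import random

random.seed(42)

def simulate(n=100, k=4, threshold=2, seeds=5, rounds=20):
    """Toy agent-based model of informal learning: each learner adopts a
    practice once at least `threshold` of their k random contacts have."""
    contacts = {i: random.sample([j for j in range(n) if j != i], k)
                for i in range(n)}
    adopted = set(random.sample(range(n), seeds))
    for _ in range(rounds):
        adopted |= {i for i in range(n)
                    if sum(j in adopted for j in contacts[i]) >= threshold}
    return len(adopted)

# The idea the model generates: adoption may stall below a critical seeding level
for seeds in (2, 5, 10, 20):
    print(seeds, "initial adopters ->", simulate(seeds=seeds), "eventual adopters")
```

Whether such a threshold exists among real learners is exactly what the model cannot say: the model generates the possibility, and reality exposes the constraints.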
My early research using modelling approaches involved personal technologies to bridge the gap between informal and formal learning. Principal amongst these was the Personal Learning Environment (PLE), a development which grew from the emergence of technological conditions that permitted learner-centred control of technology (as opposed to institution-centred control) with consequent new possibilities for individuals to coordinate informal learning, personal and social communications, and formal learning commitments. These developments attracted attention from government and EU policy makers who made considerable investments in exploring the potential of the new technology. In total, Bolton attracted over 10 large-scale projects from the EU and JISC which explored the ramifications of the technologies between 2004 and 2014.
Theoretically and methodologically, the technical transformations presented significant challenges. Technologies are constraining of practice (and can be a barrier to learning) whilst also providing new opportunities for informal access to resources and communities. The principal question across all this work was: “How can we understand the social dynamics of these constraints?” In the PLE project (funded by JISC), a cybernetic model, the “Viable System Model” (VSM) (Beer, 1971), was used both to critique existing organisational and technological constraints, and to generate new possibilities for the organisation of technology and the personal coordination of individual learning. The VSM is a powerful generative idea which articulates the way “viable systems” coordinate the different activities that maintain their viability: since adaptation is essential to viability, when applied to individuals some of these activities relate to what is clearly “informal learning”, some to formal learning, whilst others address more basic requirements for viability.
The PLE project was explicitly focused on generating ideas and exploring new technologies: social software was only just beginning to be important at the time (2005), and we produced a prototype system to illustrate the concepts and explore their implications. To explore the extent to which the ideas generated by the PLE models were to be found in reality, I managed a further project called SPLICE (Social Practices, Learning and Interoperability in Connected Environments), which focused on taking these ideas into the outside world. The outcomes from SPLICE helped identify constraints, informing us about what did and didn’t work, which in turn led to theoretical refinement of the models. Most significant amongst the findings was the importance of intersubjectivity in informal engagements with technology. Existing cybernetic models tended to view social dynamics as a kind of symbolic processing (Pask’s conversation theory is particularly notable for this), and there was an explanatory gap concerning the less tangible aspects of inter-personal engagement (identity, friendship, alienation, empathy, fear, love, charisma). Strong evidence for the importance of intersubjectivity included the effectiveness of “champions” of informal learning practices. Fundamentally, both the PLE and SPLICE projects sought to create conditions for the transformation of informal learning. Outcomes from SPLICE helped inform the technology strategy at the University of Bolton, which I co-authored in 2009.
The importance of the cybernetic focus on constraint has been to support an approach which is informed by failures as much as by successes at the interface between theory and the reality of the lifeworld of education. However, cybernetics is recursive and reflexive, and constraints exist not just between reality and theory, but between theory and method, between method and reality, and between the competing discourses about education. Policy itself is a constraint which can lead to the separation of theory and method: for example, where technological developments depart from their theoretical foundations. Informal learning was a major theme of subsequent projects (2009–2014) which built on PLE ideas, notably the EU TRAILER project (Tagging, Recognition and Acknowledgment of Informal Learning Experiences) and the large-scale school-based iTEC project. The experience of these projects reconfirmed the importance of understanding intersubjectivity in informal learning, but also raised issues concerning the relationship between informal learning and the “institution of education”, social status, and the aims of policy-making.
Scientific Method and the Philosophical Background behind the Exploration of Constraints in Learning
The work around the PLE articulates a research methodology which sits at the heart of epistemological and ontological debates about methods in the social sciences. These have historic origins in the philosophy of science, and particularly in the arguments put forward by Hume concerning event regularities and the social construction of causes. In my PhD of 2011 (on “Educational Technology and Value in Higher Education”), I adopted a Critical Realist position: that Hume was wrong to be sceptical about the reality of causes, and that there are discoverable mechanisms in the social world (what Bhaskar calls ‘generative mechanisms’). This was helpful in identifying ‘demi-regularities’ (Lawson) among the phenomena produced by the PLE, and in generating possible explanations for their production. However, it failed to go beyond a “theoretical melange” towards more concrete and defensible knowledge which could reliably inform policy. More recently, my work has become critical of the implicit foundationalism in Critical Realism, and I have turned once again to Hume to consider in what ways his radical scepticism about nature might be justified, supported by arguments from ‘speculative realism’, feminist science studies, and socio-materiality. At the same time I have looked at the scientific approaches of early cybernetics – notably that of Ross Ashby, whose understanding of learning as self-adaptation was inherent in his attempts to create an artificial brain in the 1950s. Ashby’s method, as he articulated it, was to be informed in research by identifying the constraints in nature which prevent some theoretically-imagined possibilities from being realised. In essence, Ashby’s method is to use cybernetic ideas to generate theories, explanations and new practices, which are then tested in the environment we share (the Husserlian lifeworld). Between theories, practices and experience lie the constraints which the scientific process reveals. I’ve found this mutual constraint most effectively illustrated by the trefoil knot (itself a feature of cybernetic theory) in Figure 1 below:
https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTpUITPF0P8uhFfs7VUcV0ogWfplXNg-vPKgE8oReplhvx10SHhTJWKsJfVcCdcjFDMkEfkklDBdzl7pm3X7HiM677QXOhj4amh0QcS4jgbS_Ju3_hyuIZS3BwuRAYtVbOlJu1lg-ZmLkY/s1600/trefoil.jpg
Complex social phenomena like ‘informal learning’ present constraints not just between observed phenomena and available explanations. Constraints occur in the discourse about explanations (e.g. between Kuhnian paradigms, or Burrell and Morgan’s ‘sociological paradigms’), in the organisation of institutions, academic discourses, journals and so on; and they occur at the boundary between underlying generative theory, empirical practices and technological developments. Constraints produce “surprises”: irregularities between expectations and experience which demand explanation. In the study of such phenomena, different sociological paradigms (e.g. functionalism, phenomenology, critical theory) produce different kinds of approaches and different kinds of results which are often incompatible (i.e. produce “surprises” at the constraint boundaries). “Surprise” drives the creation of new interventions and methods to expose the constraints that produce it, whether these result from theoretical, methodological or practical inconsistency. Indeed, Hume’s regularity theory can usefully be seen through the lens of “surprise” and “confirmation”. Furthermore, the phenomenology of learning processes themselves can also be characterised by “surprise” and “confirmation”, insofar as there is perturbation and adaptation.
The cybernetic analogue of “surprise” and “confirmation” is contained in the way information is understood, and in recent years this has been my focus, both in developing new methods and in generating new ideas and interventions. Shannon’s information theory provides a calculus for the ‘surprisingness’ within communications. I have worked with Loet Leydesdorff at the University of Amsterdam on understanding the relationship between the analysis of surprisingness in communication and the development of meaning (this has been Leydesdorff’s main work in understanding evolutionary economics, and it has been highly influential in economic and social policy-making across the world).
The focus on information theory builds on the cybernetic models behind the PLE, but deepens them with more refined approaches to method and a richer array of empirical possibilities. Most critically, surprises occur against a background, and it is the study of this background which has proved most fruitful. The background is variously called ‘constraint’ or ‘absence’; in information theory, it is called ‘redundancy’. The dynamics of redundancy appear promisingly powerful for analysing the kinds of highly heterogeneous data that can emerge in informal learning.
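To make the terms concrete: ‘surprisingness’ (entropy) and its background (redundancy) can be computed directly from any stream of symbols. The sketch below is a toy of my own, not code from the work with Leydesdorff:

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy H = -sum p(x)*log2 p(x), in bits per symbol:
    the average 'surprisingness' of the stream."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def redundancy(symbols):
    """Redundancy R = 1 - H/H_max, where H_max = log2(alphabet size).
    High redundancy = a repetitive, pattern-rich, unsurprising stream."""
    h_max = math.log2(len(set(symbols)))
    return 1 - entropy(symbols) / h_max if h_max > 0 else 1.0

print(redundancy(list("yes yes yes ok yes")))  # phatic repetition: high redundancy
print(redundancy(list("abcdefg")))             # every symbol new: zero redundancy
```

The analytical interest, as described above, lies in the second quantity: the background against which surprises register.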
The Research Plan: The Enrichment of Theoretical Models with Information Theory
Dealing with the complexity and heterogeneity of data in informal learning forms the cornerstone of my research plan. In the socio-material investigations of Suchman’s human-machine interaction, or in the educational interventions of Sugata Mitra, we witness occasional ‘surprising’ moments against a background of tentative habitual practices in which little happens. Even the coal miner with a copy of Marx’s “Capital” (a classic example of informal learning) would have had a few moments of insight against a background of exhausting labour, managing social commitments, conversation and rest. Whilst the sociomaterial view championed by Suchman, Orlikowski or Barad presents ‘entanglement’ as the principal focus for investigation, from a cybernetic perspective entanglements can be seen as interactions of constraint. Surprising moments appear as an ecological synergy between different processes which individually might not appear to do very much: rather like the interplay of rhythm, counterpoint, harmony, melody and accompaniment in music. The question is whether these surprising moments of discovery and development are investigable within coherent theoretical contexts. The approach is to see the relationship between different sources of data as synergistic, rather than to attempt to ‘triangulate’ data.
The synergistic approach to data in informal learning research focuses on the redundant ‘background’ to surprising moments. Support for this comes from a number of developments in systems theory, ecology and biology, including:
  1. Recent work in ecology, where the dynamics of ecosystems have been studied for the balance they strike between system rigidity and flexibility, expressed in terms of information metrics (Ulanowicz, 2009)
  2. The synergetics of Hermann Haken (2015), which has focused on the dynamics of redundant information in processes of social growth and development
  3. The Triple Helix of Leydesdorff (2006), which has examined the conditions for innovation by exploring mutual redundancies in discourse dynamics
  4. Emerging systems-biological theories of growth, particularly those of Deacon (2012), but also Kauffman (2000), Brier (2008) and Hoffmeyer (2008), which focus on constraint and absence as a driving force for biological development and epigenesis.
Of these influences, the one I am closest to is that of Leydesdorff, with whom I have co-authored a few papers. The synergistic approach to data analysis can best be explained by comparing ‘additive’ and ‘subtractive’ synthesis, illustrated in Figure 2. The right-hand image represents triangulation as the subtraction of data which is not shared between the three dimensions. By contrast, additive synthesis combines the three data sources synergistically. Redundancies may be aggregated in this synergistic way since overlap is a principle of redundancy, and this avoids the problem of the “mereological fallacy” (the confusion of parts for wholes) because the analytical focus is on constraint, not on specific causes or features. Consequently, the technique is suited to examining the large number of dimensions of data which can be associated with informal learning.
Figure 2: Synergistic 'additive' data synthesis (LEFT) vs Subtractive 'triangulating' synthesis (RIGHT) (Diagram adapted from Leydesdorff)
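A simplified way to see what aggregating redundancies might mean computationally is the three-way interaction information (‘co-information’), one of the quantities underlying Leydesdorff’s mutual-redundancy work (his formalisation handles the signs more carefully than this toy does). The streams and codes below are invented for illustration; under the convention used here, a positive three-way term indicates overlap (shared redundancy) among the sources, and a negative one indicates synergy:

```python
import math
from collections import Counter

def H(*streams):
    """Joint Shannon entropy (bits) over one or more aligned symbol streams."""
    joint = Counter(zip(*streams))
    total = sum(joint.values())
    return -sum((n / total) * math.log2(n / total) for n in joint.values())

def co_information(x, y, z):
    """Co-information: H(x)+H(y)+H(z) - H(xy) - H(xz) - H(yz) + H(xyz).
    Positive = redundancy shared by all three sources; negative = synergy.
    (Sign conventions vary across the literature.)"""
    return (H(x) + H(y) + H(z)
            - H(x, y) - H(x, z) - H(y, z)
            + H(x, y, z))

# Three hypothetical aligned codings of one learning episode
x = list("aabbabab")  # e.g. codes derived from chat logs
y = list("aabbbbab")  # e.g. codes derived from video
z = list("aabbabaa")  # e.g. codes derived from interviews
print(round(co_information(x, y, z), 3))  # positive: the sources overlap
```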


This approach extends cybernetic models by bringing together their underlying theoretical principles with a deeper analysis of constraints. In the cybernetic models of learning which treated communication as codified messages exchanged between teachers and learners – Pask’s conversation theory (later developed by Laurillard), Parsons’s ‘double contingency’ of communication, Luhmann’s social systems theory – there were problems in bridging the gap between the model and ‘real people’, and a failure to adapt the model in the light of reality. With a synergistic approach, issues such as empathy, tacit knowledge and the less tangible intersubjective aspects of human communication and learning become available for analysis. Bringing Alfred Schutz’s concept of ‘tuning in’ to the inner world of each other into an analytical frame also presents new perspectives on Vygotskian ZPD dynamics, as well as on Freirian critical pedagogy.
Potential of the Research: the identification of constraints in informal learning and the generation of new possibilities, projects and papers
The study of information in education described above is finer-grained than the cybernetic models previously used in education (notably Pask/Laurillard conversation theory, and Beer’s Viable System Model). Having said this, it is also commensurable with those previous models. At the very least, this theoretical re-orientation generates new areas of investigation:

  1. “Intersubjectivity in informal learning”: Schutz’s concept of intersubjectivity (which he developed from Husserl, and which influenced a number of educational theorists including Goffman and Bruner) places emphasis on what he calls “mutual tuning-in” to one another – a process which occurs over time. Social relationships which convey little information but contain much mutual redundancy between the parties are good examples of the kind of thing Schutz was interested in (he made particularly acute observations about the way music communicates). Informal communications using social media also often convey little information but much redundancy (e.g. Twitter, Facebook, Dubsmash). They would seem to be ideal opportunities for investigating the link between mutual redundancy and intersubjective understanding. Vygotskian ZPD dynamics and critical pedagogy are also closely related to issues of intersubjectivity.
  2. “Intersubjectivity in formal learning”: Analysis of the constraints around informal learning and social relations can provide new ways of thinking about the relations between teachers and students in formal education, the effective design of learning activities, new forms of assessment and so on.
  3. “Informal learning in new settings”: Technologies provide new ways of creating material constraints within which informal learning behaviour can be studied. For example, the mainstreaming of virtual-reality environments will provide opportunities to create rich data streams about learner curiosity, emotions, likes and dislikes. It would be particularly interesting to combine such approaches with research on intersubjectivity. Other material contexts could exploit interactive public artworks using sensors to record data about engagement patterns. The site of such experiments also matters: use in libraries, museums, galleries, pubs and so on can provide powerful new ways of understanding how human curiosity and adaptation work in different socio-material environments.
  4. “Informal learning, inquiry and accreditation”: In inquiry-based learning contexts, the constraints of the curriculum are reduced, and assessment is conducted through techniques like Winter’s “patchwork text”. Researching the ways in which learners might analyse and externalise their own learning dynamics in everyday life can suggest new assessment strategies: ways in which intellectual growth and development might be measured and accredited through changes in patterns of communication between students, teachers, the workplace, technological practices and so on.
  5. “Analysing narratives of informal learning”: Narrative approaches to educational research, particularly Peter Clough’s “narrative fictions”, are consistent with an approach to analysing constraint. Narratives feel ‘real’ because they reproduce constraints we know from reality. In informal learning research, reality can get lost behind rhetoric; analysing the constraints inherent in realistic narrative accounts can be a fruitful way of exposing the actual dynamics of constraint in informal learning.
  6. “Status and informal learning”: The social dynamics of status have become a major issue of inquiry, particularly in research into Higher Education (see Roger Brown). I have recently used John Searle’s social ontology and his concept of the “status function” as a way of analysing the PLE, and the dynamics of educational projects within educational institutions more generally. Searle’s linguistic understanding of status is powerful, but I believe it currently misses its relationship to scarcity (status and scarcity are related, as he acknowledged to me at a talk he gave in Cambridge in May). Education’s production of formal certification is a manufacture of scarcity, very much related to current processes of marketization. Informal education is technically ‘abundant’ rather than scarce, and (it seems) does not carry the same status. This is a problem concerning the separation between education and society, with implications for social equality, and it deserves much deeper investigation. I believe a deeper understanding of intersubjectivity can enrich Searle’s model whilst providing pointers towards more equitable policy.

Saturday 15 August 2015

The practice of exposing constraints: Bartok, Schoenberg, Wittgenstein and Cao Fei

I've been having a wonderful family holiday (not words that often come together!) with a tour of Slovakia, Hungary and Austria. There are discoveries to be made everywhere, and among all the sights and sounds, something gets pieced together which brings ideas into focus. In beautiful green Budapest, we ventured into the suburbs to find Bartok's house (which is now a museum - unfortunately closed until late August). Bartok has always fascinated me because he was a scientist as much as an artist: one who was particularly focused on musical form as natural form. His music abounds with Fibonacci numbers and golden sections, and the life in his irregular rhythms has always suggested to me that he was right to make these comparisons. The maths helped generate the notes, which generated a feeling which (one might imagine) could also be proportionally studied (although I have never come across anything that does this). Bartok homed in on the constraints between mathematical ideas, natural form and aesthetic experience.

Then to Vienna, and to the world of Arnold Schoenberg (What if they played Schoenberg in Viennese hotel lobbies rather than Beethoven and Mozart?). Like Bartok, Schoenberg too used mathematical techniques to generate ideas. Some feel that he let the maths override the aesthetics, although personally I find his music quite beautiful. His musical oeuvre seems to get more impressive as history gives greater distance. But like Bartok, Schoenberg was also exposing the constraint between aesthetic experience and the logic of construction. Of course, to some extent all artists do this. But Schoenberg and Bartok stand out as two examples where very different formal ideas are used to generate different sorts of possibilities. It's very cybernetic really.

Whilst in Vienna, I ventured to the annual Wittgenstein symposium in Kirchberg am Wechsel. Wittgenstein lived in the area between 1920 and 1922, being a (rather terrible) primary school teacher in nearby Trattenbach. There were a lot of philosophers at the conference. Mostly they talked about ideas - ways of generating possibilities - and mostly I saw little inclination to explore the possibilities those ideas generate in practice. However, Wittgenstein himself was very adept at exploring his ideas practically, by thinking about the way language is used in everyday reality. In "On Certainty", which featured quite heavily in the conference, Wittgenstein exposes fundamental questions about knowledge and reality by asking almost child-like questions and thinking through the consequences. It's wonderful stuff:


  • 600. What kind of grounds have I for trusting text-books of experimental physics? I have no grounds for not trusting them. And I trust them. I know how such books are produced - or rather, I believe I know. I have some evidence, but it does not go very far and is of a very scattered nature. I have heard, seen and read various things.
  • 601. There is always the danger of wanting to find an expression's meaning by contemplating the expression itself, and the frame of mind in which one uses it, instead of always thinking of the practice. That is why one repeats the expression to oneself so often, because it is as if one must see what one is looking for in the expression and in the feeling it gives one.
  • 602. Should I say "I believe in physics", or "I know that physics is true"?
  • 603. I am taught that under such circumstances this happens. It has been discovered by making the experiment a few times. Not that that would prove anything to us, if it weren't that this experience was surrounded by others which combine with it to form a system. Thus, people did not make experiments just about falling bodies but also about air resistance and all sorts of other things. But in the end I rely on these experiences, or on the reports of them, I feel no scruples about ordering my own activities in accordance with them. - But hasn't this trust also proved itself? So far as I can judge - yes.
  • 604. In a court of law the statement of a physicist that water boils at about 100°C would be accepted unconditionally as truth. If I mistrusted this statement what could I do to undermine it? Set up experiments myself? What would they prove?
  • 605. But what if the physicist's statement were superstition and it were just as absurd to go by it in reaching a verdict as to rely on ordeal by fire?
Here he is grappling with constraint. His logic generates a variety of possibilities, which he explores through commonsense language. Some of the possibilities are absurd; some open out into deeper questions. It's not unlike Schoenberg's method: among the various configurations of musical material generated by his technique, some work and some don't, and the music sits at the boundary between the two (the parallels between them have been studied by James Wright in his "Schoenberg, Wittgenstein and the Vienna Circle").

Finally, a connection back to Manchester (obviously there's also a Wittgenstein connection there too). At the Whitworth Art Gallery at the moment there is an exhibition of contemporary Chinese art. The piece that struck me most powerfully was a video by a young Chinese artist called Cao Fei: her piece in Manchester is called "Utopia", a moving and witty portrayal of industrialisation and dehumanisation. In the Secession Building in Vienna, she has an exhibition including a number of film installations. She's like a Chinese Luis Buñuel: very funny, poignant, vicious, and spot-on in her observations of early 21st-century China. She films brilliant stunts, including an industrial truck transporting rubble disguised as Thomas the Tank Engine, and wonderfully surreal montages where the everyday suddenly becomes choreographed and the inner lives of ordinary people take on a poetic form. Her films sit precisely at the boundary between dreams and reality. I see this most clearly in the piece in the Whitworth: at the end of the film, which is shot in a light-bulb factory, she gets the workers to pose still in front of the camera. This is so powerful because these people are never still. And as eyes blink and legs wobble slightly, the discomfort of actually being still - the weirdness of stopping - is communicated to the viewer (who is, in the art gallery, static in front of the screen). Her boundaries are moving things: they wobble about and shift with time. And perhaps that's the way with all boundaries.

Tuesday 11 August 2015

Theory, Method and Reality in Education: A case for Cybernetic Science

All scientific and artistic activity exists at the interface between theory, practices and the experience of the lifeworld. These are three mutually-constraining dimensions. However, I don't think they are separable: they 'flow' into each other in a continuous stream. Perhaps a useful metaphor for expressing their relationship is the trefoil knot:


Theories are the concepts, mechanisms or models which are discursively produced among scientists, grounded in a variety of foundations (many of which are incompatible with one another). Practice involves methods and performances which are also discursively produced, from the normative practices for the acceptable investigation of theories, acceptable art, and the properties of apparatus, instruments, materials and measuring techniques. The set of available and acceptable theories is constrained by the set of available and acceptable practices, and vice-versa. Both direct their attention to the uncovering of the nature of the 'lifeworld', which may be thought of as representing all possible events in nature. Knowledge of the lifeworld is constrained by available theories and available methods, and the lifeworld itself constrains theories and methods. The lifeworld of education provides an example: what is possible in education is constrained by conventional theories and conventional research methods, yet shifts in constraints which admit new possibilities of method or theory can in turn reveal new possibilities in education. Scientific progress is always a revealing of the contours of mutual constraint. Scientific and social pathology is a reinforcing of constraints in ignorance or misunderstanding of how they are produced.

Both Hume and Kuhn acknowledged the role of discourse in the production of scientific knowledge. However, Hume's restriction of the production of scientific knowledge to the identification of empirical event regularities in closed-system experiments (whilst it worked for physics, where event regularities were unproblematic prior to Einstein and Bohr) doesn't work in the social sciences; and Kuhn tells us that paradigm-shifts even in the physical sciences implicate rich arrays of social phenomena beyond event regularities which are responsible for the establishment of 'acceptable' theories: much of this has to do with the normative institutional structures of science and the academy. At the same time, Popper's 'negative' approach to scientific development - the significance of falsification - presents challenges to scientists which go beyond the rationalism of their theories, touching basic psychological issues of ego and socio-economic issues of the status of scientists, universities and corporations. Even those who acknowledge falsification as a fundamental criterion of scientific knowledge find it extremely hard to practise in reality - particularly in the social sciences. After all, what criteria would be used to falsify a theory of education?

If we examine the three mutually-constraining domains, it is possible to see the discoveries of both physical and social scientists in a new light. In Hume's account, 'causes' are identified in discourse in the light of event regularities. Typically, causes are expressed as descriptions of mechanisms - for example, in Newton's laws of motion or Boyle's gas law. Hume's regularity theory can be seen as a theory about "surprising" events and "unsurprising" or "confirming" events. An experiment might produce a surprising event. The experiment itself, and its apparatus, is the product of a creative generation of possibilities emerging from available theories and material properties. The surprisingness of a new empirical event is relative to the available theories and models which can or cannot predict it. In the light of a surprise, new methods can be devised to reproduce it. If the surprise becomes unsurprising with the application of new methods, then there is sufficient ground for the adaptation of existing theory. What emerges in a new theory is a redrawing of the boundary between surprising and unsurprising events, as a relation between theory, method and the lifeworld.
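The loop can be caricatured in a few lines of code: a toy of my own, not a model of any particular science (the 'theory', the events and the threshold are all invented). A model assigns probabilities to events; an observation whose surprisal exceeds a threshold counts as a surprise; and the theory is then re-fitted, redrawing the boundary between surprising and unsurprising:

```python
import math

def surprisal(p):
    """Shannon surprisal (bits) of an event the current theory assigns probability p."""
    return -math.log2(p)

theory = {"falls": 0.9, "floats": 0.1}   # the current 'theory': predicted probabilities
THRESHOLD = 3.0                          # bits: above this, an observation is a surprise

observations = ["falls"] * 8 + ["floats"] * 4   # the lifeworld misbehaves
counts = {"falls": 0, "floats": 0}

for event in observations:
    counts[event] += 1
    if surprisal(theory[event]) > THRESHOLD:
        # A surprise: adapt the theory to the record of events so far,
        # redrawing the boundary between surprising and unsurprising
        total = sum(counts.values())
        theory = {k: v / total for k, v in counts.items()}
        print(f"surprised by '{event}'; revised theory: {theory}")
```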

In the social sciences, there are similarly surprising and unsurprising events, but these are relative to individual epistemologies as well as the broader discourse. In fact, even in the physical sciences, individual epistemologies (identities, power relations, gender, etc) are also important at the boundary between the surprising and the unsurprising. Empirical practice in the social sciences pursues a different path from that in the physical sciences, and yet fundamentally its objective can remain the same in revealing the boundaries of the surprising and the unsurprising. In the way that physical science theories generate new apparatus for experiment, so social theories generate new ways of investigating society: for example, new technologies might be created. Cybernetic theories offer rich and powerful ways of generating diverse possibilities for the investigation of the lifeworld.

In the social world, some things are more surprising than others: few phenomena are completely unsurprising. Even terrorist attacks and revolutions exist within the realm of expected possibilities, however unlikely or unwelcome we may feel them to be. Revealing the contours of constraint between the lifeworld, practices and theories has a different character in the social sciences to the physical sciences. The determination of the boundary between those theoretically generated ideas which can be found in nature and those which can't entails repeated re-examination of theories and practices in the light of experiences. This is as true of the arts as it is of the social sciences. However, this reflexivity does not mean that the social sciences are not scientific in the same way as the physical sciences; it means that the contours of constraint between the theoretical, practical and experiential are more explicitly entwined than they are in the physical sciences (where at a deep level, they are also tightly integrated).

The pathologies of science result from ignorance of mutual constraint. The physical sciences make such ignorance possible because the regularities they focus on suggest that the human constraints of theories and practices can be bracketed out. This results in positivism. When similar assumptions about bracketing out theoretical, practical or material constraints occur in social science (as they do in much technological development in education), the result is functionalism. Managerialism is pathological because it assumes that its negative consequences of cruelty and injustice can be bracketed out in favour of 'reliable' financial accounting: managerialism, like all pathological science, identifies contours of "reality" and blinds itself to those indicators which suggest that the assumed contour is false. Art's reflexivity, by contrast, is focused on getting at the contours which deeply confirm experience at many levels. Art's weakness is that it mystifies its practices.

Cybernetics is a scientific practice which embraces both a theoretical generation of possibilities and a practical exploration of those possibilities in nature. Illustrative of this process is Ross Ashby's "Design for a Brain". Ashby's aim was to use a body of theory about self-regulating mechanisms to reproduce the complexity of the brain. His question was: "what kind of mechanism is self-adapting?" (in other words, a learning mechanism). His theory generated possibilities of hierarchies of mechanisms which would regulate each other. At each stage of his investigation, Ashby asks himself: "this is the mechanism I have so far. Now, is that what's going on?" In answering the question, he is interested in the ways in which experience (gained through creating machines based on his theory) does not match the mechanism: in this way he exposes the constraints he has not yet considered. So he thinks again, generating more possibilities and new technological experiments in the light of a modified theory. Stafford Beer and Gordon Pask operated in much the same way when they were experimenting with biological computers.
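Ashby's procedure can be miniaturised in code. The following is a toy of my own in the spirit of the homeostat's 'ultrastability' (all the numbers are invented): when the essential variable strays outside its viable bounds, the machine randomly re-steps its own feedback parameter, discarding configurations that fail until one survives.

```python
import random

random.seed(1)

def run_homeostat(steps=300, bound=10.0):
    """Toy ultrastability in the spirit of Ashby's homeostat: the essential
    variable x evolves under feedback parameter a; whenever x leaves its
    viable bounds, a 'step-mechanism' re-randomises a. The machine adapts
    by discarding parameter settings that fail."""
    x, a = 1.0, 1.6  # start with a destabilising feedback setting
    resets = 0
    for _ in range(steps):
        x = a * x + random.uniform(-0.5, 0.5)  # feedback loop plus disturbance
        if abs(x) > bound:                     # essential variable out of range:
            a = random.uniform(-2.0, 2.0)      # randomly re-step the parameter
            x = random.uniform(-1.0, 1.0)
            resets += 1
    return a, resets

a, resets = run_homeostat()
print(f"settled on feedback parameter a = {a:.2f} after {resets} resets")
```

The stable region (|a| < 1) is nowhere written into the program; it is a constraint of the environment which the machine discovers by failing, which is exactly the negative, constraint-revealing method described above.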

Knowledge in cybernetic science emerges from recursively exploring the pattern of constraints between theory, practice and experience. It is gained when new constraints are revealed: when the body of current theory is at some level falsified. Cybernetics itself has become rather focused on its theories at the expense of its practice. This has been unfortunate. The need to revisit its origins lies in the fundamental scientific problems which face us today in the physical, social and artistic domains.

Thursday 6 August 2015

The Problem with Cybernetics and the Cybernetics still worth hoping for

I'm wrestling with a paper about cybernetics in education at the moment. The problem I find with the cybernetic territory is that there's so much of it, yet mostly it rehearses the same set of ideas. However, the personalities involved would (and still do) argue fiercely among themselves over the finer points on which they might be shown to be correct in some detail or other. Essentially, whilst certain ideas are common between them, there are differences about foundations: and since foundations are really important to a transdisciplinary theory, people can get pretty upset. Personally, I've given up on foundations: it's all conjecture. This is a real problem in the social sciences generally. Alexander Wendt complains in his recent "Quantum Mind and Social Science" that
"In contrast to physical sciences like chemistry or geology, where there is broad agreement on the nature of reality and how we should study it, in the social sciences there is no such consensus. As a result scientific theories rarely die, and if they do, like zombies they inevitably come back to life later." (Wendt, 2015)
As I wrote yesterday, the tension in cybernetics is that, as a transdisciplinary science of everything (and so necessitating common foundations, just as earlier cosmologies did), cybernetics is caught between being a late outpost of German idealism (a Naturphilosophie, or a kind of neo-enlightenment mechanical philosophy) and being a radical scientific method, as it was embodied in the practice of Ross Ashby. It's this latter 'cybernetics as method' which I'm interested in. The foundationalism serves a function: it is generative of ideas; but then we need to look at what actually happens, examining the constraints that apply to the ideal cases of cybernetic models and reflecting on the knowledge about the world that those constraints reveal. A number of cybernetic thinkers get this: Bateson's 'pattern' ('that connects') is another word for constraint; another name for it is "redundancy"... and 'absence' too. Watzlawick comments that:

"The search for pattern is the basis of all scientific investigation. Where there is pattern there is significance - this epistemological maxim also holds for the study of human interactions."
But what this actually means is that the process is essentially negative. We are not informed by what works; we are informed by what doesn't work. The things that surprise us do so because constraints are in operation of which we were previously ignorant. Having got the signal of new constraints, science proceeds to comprehend the new pattern. Except that a lot of the time it doesn't - especially in educational technology. I've always thought of engagements with educational technology innovation as like using a 'torch' to shine a new light onto ancient practices. I've encountered many educational technologists who don't bother to look at what shows up, and instead blame the places where the torch inconveniently shines!

Most cybernetically-inspired interventions fail. If not at an immediate operational level, then through deeper systemic problems to do with power relationships, sustainability, or unpleasant unforeseen consequences (all the pathologies of functionalism). Wiener was quite sceptical about the prospects for cybernetics:
As we have seen, there are those who hope that the good of a better understanding of man and society which is offered by this new field of work may anticipate and outweigh the incidental contribution we are making to the concentration of power (which is always concentrated, by its very conditions of existence, in the hands of the most unscrupulous). I write in 1947, and I am compelled to say that it is a very slight hope.
What we learn through intervention is how the social world constrains the ideals of our models. But the constraints are much harder to describe than the original models, and a major constraint may be contained within the thinking that produced the theory in the first place (in which case the theory is wrong). Nobody likes to think their theory is wrong, and so educational technologists tend to use theories, models and technologies to create problems they believe they can explain. Whether we look at the Pask/Laurillard conversation model, Downes/Siemens connectivism, Vygotskian ZPDs, Activity Theory, dialogic learning, etc., the story is the same: each creates the problems it can explain. But reality presents the problems it cannot explain. These are the interesting bits!

So what do we do in the light of something that doesn't work? Firstly, few theorists spell out in detail what they might actually expect or wish to see in the light of their theory. Pask was far too abstruse to do this. But the late Gary Boyd at Concordia University did do it. He gave the following scenario to illustrate conversation theory:


"A is a medical student and B is an engineering student. The modeling facility they have to work with might be Pask’s CASTE (Course Assembly System and Tutorial Environment, Pask,1975); equally possibly now one might prefer STELLA or prepared workspaces based on Maple, MathCad, or Jaworski’s j-Maps. The recording and playback system may conveniently be on the same computers as the modeling facility, and can keep track of everything done and said, very systematically. (If not those parts of a CASTE system, a version of Pask’s tutorial recorder THOUGHTSTICKER (Pask, 1984) could well be used).
Level 0–Both participants are doing some actions in, say, CASTE (or, say, STELLA™), and observing results (with, say, THOUGHTSTICKER) all the while noting the actions and the results.
Level 1—The participants are naming and stating WHAT action is being done, and what is observed, to each other (and to THOUGHTSTICKER, possibly positioned as a computer mediated communication interface between them).
Level 2—They are asking and explaining WHY to each other, learning why it works.
Level 3—Methodological discussion about why particular explanatory/predictive models were and are chosen, why particular simulation parameters are changed, etc..
Level 4—When necessary the participants are trying to figure out WHY unexpected results actually occurred, by consulting (THOUGHTSTICKER and) each other to debug their own thinking.

The actual conversation might go as follows. In reply to some question by A such as, “HOW do engineers make closed loop control work without ‘hunting’?” B acts on the modelling facility to choose a model and set it running as a simulation. At the same time B explains to A how B is doing this. They both observe what is going on and what the graph of the systems behaviour over time looks like. A asks B, “WHY does it oscillate like that?” B explains to A, “BECAUSE of the negative feedback loop parameters we put in.” Then from the other perspective B asks A, “How do you model locomotor ataxia?” A sets up a model of that in STELLA and explains How A chose the variables used. After running simulations on that model, A and B discuss WHY it works that way, and HOW it is similar to the engineering example, and HOW and WHY they differ. And so on and on until they both agree about what generates the activity, and why, and what everything should be called." 

Boyd is wonderfully specific about the detail of the different levels of engagement which are inherent in Pask's model. However, this scenario doesn't feel very real. So here is a less successful scenario to balance things out. It is a parody of conversation theory, in which I have tried to stick to the different levels and their meaning:

The medical student A and the engineering student B have been told to engage in this exercise as part of their assessment. The modeling facility is unfamiliar to A but familiar to B. A has already spent some time trying to log in. Slightly stressed, A is not happy about the recording facility, since he feels quite vulnerable in the environment. He requests that the recording facility be turned off.
Level 0– B is playing with stuff in the environment. A is staring at the screen tentatively clicking things without knowing what he is doing. There is a marked difference in their behaviours in response to stimuli.
Level 1—B names processes illustrated on the screen. A stares at the screen listening, rather perplexed. He tries to give the impression he understands by repeating the words B uses, and saying "yes" a lot.
Level 2—B attempts to explain dynamic processes in the environment. A tries to understand, transfixed by the animations, but really isn't interested.
Level 3—A considers WHY they are doing this. B explains the assessment schedule. A worries about his assessment and reckons this is WHY he ought to pay some attention.
Level 4—A is trying to figure out WHY he is in this ridiculous situation. Finally he asks B "what do I need to do to pass the assignment?" B tries to figure out WHY he is asking the question.

The levels of conversation in my parody, as in Boyd's original, are meant to be simultaneous regulating mechanisms. The problem is that in the parody there is little regulation occurring in the way that was anticipated. In fact, such communication as does occur is about the broader educational context, not the topic at hand. So we can ask of the parody: what is the constraint that causes the deviation from the ideal situation that Boyd presented?

Constraints in the Parody

The first constraint is that A isn't interested in the topic. There is a fundamental question within conversation theory as to where an "interest" lies, and how it is articulated. Pask assumes (along with every MOOC) that an interest lies within the individual mind, is conceptual in focus, and is expressed in language. Why should interests be like this? Why can't they themselves be relational or intersubjective? Are they really only articulated in language? What about childhood experiences? What about play?

A second constraint is that the technology gets in the way of A. The physical presence of technologies is not a neutral medium for the transmission of messages. The technologies encode expectations of behaviour, power relationships and so on. These affect A and B asymmetrically.

A third constraint is the set of institutional regulations that produce the situation in which A and B find themselves. The interesting thing about these constraints is that they are shared between A and B. Notably, in the parody the institutional regulations surface at the higher levels, while the crisis is in the lower levels.

A fourth constraint is B's communication skills. B struggles to understand A's situation, and A knows that they are not understood. A attempts to address this by asking "what's it all about?", because the question really being asked is "What's constraining you to behave like this?"

Watzlawick on Communication

Reflecting on this parody raises the question as to what a 'cybernetic method' would do if it truly considered the constraints revealed in implementation. First of all, it would reconsider the constraints in the formulation of the theory: there is something wrong with conversation theory. Watzlawick explains that:
"Once it is realized that statements cannot always be taken at face value, least of all in the presence of psychopathology - that people can very well say something and mean something else - and, [...] that there are questions the answers to which may be totally outside their awareness, then the need for different approaches becomes obvious."
Saying and meaning are very different things. The latter has to do with expectations and is a much deeper-level function. Watzlawick quotes Bateson saying:
"as we go up the scale of orders of learning, we come into regions of more and more abstract patterning, which are less and less subject to conscious inspection. The more abstract - the more general and formal the premises upon which we put our patterns together - the more deeply sunk these are in the neurological or psychological levels and the less accessible they are to conscious control."
Secondly, the parody does at least identify a moment of communication when A and B identify a common constraint (the assessment). Might refinements be made to the model that embrace the idea of common constraints as a counterbalance to the exchange of messages? Might we further explore the relationship between common constraints and messages exchanged? Could this be measured?

Is time a constraint? Were A and B under pressure to hold their discussion? What about the flow of time in their interactions?

Such questions result from "becoming informed" about constraints in the real world. With greater information, the reflexive processes of theory creation and model generation can be enhanced and new interventions developed.
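How might 'could this be measured?' begin to be answered? Here is a deliberately crude sketch in Python - entirely my own, with an invented hand-coding of the parody's utterances - which simply counts how much of the talk addresses the nominal topic and how much addresses shared constraints:

from collections import Counter
# Hypothetical hand-coding of the parody's utterances: each is tagged as
# being about the nominal TOPIC or about a shared constraint. The tags
# and the transcript are invented for illustration only.
transcript = [
    ("B", "TOPIC"),       # B names processes on the screen
    ("A", "TOPIC"),       # A repeats B's words and says "yes"
    ("B", "TOPIC"),       # B explains the dynamic processes
    ("A", "TECHNOLOGY"),  # A asks for the recorder to be turned off
    ("A", "ASSESSMENT"),  # "what do I need to do to pass the assignment?"
    ("B", "ASSESSMENT"),  # B explains the assessment schedule
]
tags = Counter(tag for _, tag in transcript)
constraint_talk = sum(n for tag, n in tags.items() if tag != "TOPIC")
print(f"constraint talk: {constraint_talk}/{len(transcript)} utterances"
      f" ({constraint_talk / len(transcript):.0%})")

A more serious treatment might estimate the statistical association between constraint-states and messages exchanged, but even counting makes the parody's pathology visible: most of the genuine communication concerns the constraints.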

For reasons which have to do with deeply ingrained constraints in the education system, we seem to have got stuck with educational technology.

Wednesday 5 August 2015

Varieties of Transdisciplinarity in Educational Theory

A coherent theory of education has to be transdisciplinary. Yet to say this is, firstly, to invite questioning about the nature of disciplines. Secondly, if a “coherent theory” were possible, it would search for a common foundation upon which the edifices of psychology, philosophy, biology, economics, anthropology, chemistry, politics, mathematics and every other subject have created different ways of explaining phenomena, including education. Thirdly, a transdisciplinary theory would itself amount to a new discipline and in turn must not only account for itself, but also for the nature of theorising itself! We might conclude that a theory of education has to be a “theory of theory”. Finally, even in possession of a coherent and defensible transdisciplinary theory, it would be absurd to suppose that we would have no educational problems; we have to ask in what way a coherent theory of education would be an adequate guide to addressing them. In the face of such intractable questions, tortuous arguments, and a body of educational theory which is largely incoherent, it’s surprising that any teaching and learning gets done at all!


The relationship between theory and practice in education is one of entanglements. Bodies, ideas, ethics, politics, institutions, histories, technologies and matter collide in every aspect of social life. In the entanglements are a variety of grandiose “theories of everything”: imagined foundations concealing their imperfections and inviting the imaginations of teachers to experiment with their practices – sometimes with wonderful, if not universal, results. Transdisciplinary foundations are not new - in history, they reflect the zeitgeist and scientific practice of the age. The “theory of everything” of the late eighteenth century was German Naturphilosophie, which drew on the scientific discoveries of the day, as Beiser points out with regard to Hegel:
“the organic concept of nature of Naturphilosophie seemed to be the best scientific worldview, the only theory to explain the facts. It seemed confirmed by all the latest empirical research into living matter, electricity, magnetism and chemistry.” (Beiser, F (2005) Hegel)  

For all its deficiencies, Naturphilosophie served the function which all transdisciplinary foundations serve: as a source of inspiration for new ideas. For the followers of Hegel and Schelling, this was the expression of scientific metaphysics. Beethoven, who was not generally given to theoretical statements, caught the ‘energetic’ and ‘electrical’ spirit of his time:
“The grain of seed, tightly sealed as it is, needs the damp, electric warm soil in order to sprout, to think, to express itself. Music is the electric soil in which the spirit thinks, lives and invents […] All that is electrical stimulates the mind to flowing, surging, musical creation. I am electrical by nature.”


The shift from Naturphilosophie to the more recent "systemic" transdisciplinary ideas within which new thinking about education and learning emerged was a gradual process involving the intersection of two separate intellectual developments: General Systems Theory and Cybernetics. Developments in biology placed increasing emphasis on circular relationships, feedback and the concept of ‘system’ in the natural world. Bertalanffy’s General Systems Theory, first expressed in the 1920s, was
“the scientific exploration of “wholes” and “wholeness” which, not so long ago, were considered metaphysical notions transcending the boundaries of science. Hierarchic structure, stability, teleology, differentiation, approach to and maintenance of steady states, goal-directedness – these are a few of such general system properties; and novel conceptions and mathematical fields have been developed to deal with them: dynamic system theory, automata theory, system analysis by set, net, graph theory and others. This interdisciplinary endeavour, of elaborating principles and models applying to “systems” in general, at the same time produces a possible approach toward unification of science.” (in Laszlo, “Introduction to Systems Philosophy”, p. xviii)


This new ‘systems’ foundation served to stimulate new ideas, new scientific practices, new ways of looking at the world. Among those new ideas were new approaches to ‘learning’. (Piaget’s educational thought owed a great debt to Bertalanffy).

By the time of the Second World War, systems theory was well-established - a point Bertalanffy was keen to emphasise because Norbert Wiener's post-war "cybernetics" was often mistakenly seen as the beginnings of transdisciplinary systems thinking. Cybernetics itself resulted from the huge technological investment in that war, and in particular the creation of machines with feedback. Wiener explained that he had been “working on the many ramifications of the theory of messages.” Cybernetics saw the study of “communication and control” as its transdisciplinary foundation. Wiener goes on to say
“Besides the electrical engineering theory of the transmission of messages, there is a larger field which includes not only the study of language but the study of messages as a means of controlling machinery and society, the development of computing machines and other such automata, certain reflections upon psychology and the nervous system, and a tentative new theory of scientific method.”

Bertalanffy argued that whilst many of the concepts are shared, Wiener's cybernetics formed a subset of what he saw as the broader ambitions of General Systems Theory. Among the areas of concern which overlapped was an interest in learning – particularly the relationship between the "learning" behaviours of humans and machines. Wiener discusses the contribution of the psychiatrist Ross Ashby, whose work on learning
“is probably the greatest modern contribution to this subject [learning] insofar as it concerns the analogies between living organisms and machines. Learning, like more primitive forms of feedback, is a process which reads differently forward and backward in time. The whole conception of the apparently purposive organism, whether it is mechanical, biological, or social, is that of an arrow with a particular direction in the stream of time rather than that of a line segment facing both ways which we may regard as going in either direction. The creature that learns is not the mythical amphisbaena of the ancients, with a head at each end and no concern with where it is going. It moves ahead from a known past into an unknown future and this future is not interchangeable with that past.”
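Ashby made this time-asymmetry concrete in his homeostat. A drastically simplified sketch of its principle of 'ultrastability' - one unit where Ashby had four, and with parameters of my own invention - might look like this in Python:

import random

def run_homeostat(limit=10.0, steps=2000, seed=1):
    # An "essential variable" x must stay within limits. When it strays,
    # the unit randomly re-draws its feedback weight (Ashby's uniselector)
    # and tries again; whatever configuration keeps x viable persists.
    rng = random.Random(seed)
    weight = rng.uniform(-1.5, 1.5)   # may be unstable (|weight| > 1)
    x, changes = 1.0, 0
    for _ in range(steps):
        x = weight * x + rng.uniform(-0.1, 0.1)
        if abs(x) > limit:                    # out of bounds: reconfigure
            weight = rng.uniform(-1.5, 1.5)   # step the uniselector
            x, changes = 1.0, changes + 1
    return weight, changes

weight, changes = run_homeostat()
print(f"settled at weight {weight:+.2f} after {changes} re-configurations")

The run is Wiener's 'arrow with a particular direction in the stream of time': which configuration survives depends on the path of failures that preceded it, and the trajectory cannot be read backwards.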


Wiener’s transdisciplinary foundation of cybernetics carried with it a mathematical armoury which supported new ways of thinking about electronic communications through the ‘mathematical information theory’ of Claude Shannon, new ways of thinking about computers in the work of Von Neumann, and (as a consequence of computers) new ways of thinking about human cognitive processes and artificial intelligence. Each of these new ideas made its claim for transdisciplinary explanatory power: information theory became the foundation for the internet and today’s approaches to “big data”; cognitivism absorbed information theory and the nascent computer science in its claim to unmask metaphysics, produce computational accounts of consciousness, and test human intelligence reliably. As with all transdisciplinary approaches, the acolytes of each became adept at masking their deficiencies.

In what ways were the new totalising philosophies different from earlier incarnations? Beethoven, commenting on the relation between philosophy and music, remarked that philosophy's "indigence, which desires to found everything upon a single principle, is relieved by music". So too might we say that the poverty of educational theory is relieved by teaching and learning itself. In other words, it's not enough to have a theory; one must be continually engaged in exploring its possibilities and actualities practically. And the most practically-engaged were the cyberneticians.

The cyberneticians were doing science in a new way, making machines to address practical and theoretical problems, examining their results and adjusting their approaches. Ashby argued that the reflexive processes of the imagination in “generating all possible systems” were tightly coupled (or entangled) with practical engagement with the world. Where other transdisciplinary approaches reflexively generated unifying theories and sought empirical evidence in support of them, cybernetics – at least for Ashby – sought to discover the boundaries between what the human imagination was capable of inventing and what nature would produce. Wiener’s cybernetics was an acknowledgement that a new transdisciplinary theory wasn’t enough; what was needed was a new approach to science. Cybernetics appeared to promise a new era of scientific investigation of educational interaction in which technologies and humans would interact in new ways, where ultimately, progress would be made in the elimination of social pathologies. However, in the light of these hopes and their historical context, contemporary use of technologies in education appears to have fallen short of the promise.

The 21st Century Perspective

In a passage in Diana Laurillard’s book “Teaching as a Design Science”, she states:

“The promise of learning technologies is that they appear to provide what the theorists are calling for. Because they are interactive, communicative, user-controlled technologies, they fit well with the requirement for social-constructivist, active learning. They have had little critique from educational design theorists. On the other hand, the empirical work on what is actually happening in education now that technology is widespread has shown that the reality falls far short of the promise.”

She then goes on to cite various studies which indicate causes for this 'falling short'. These include Larry Cuban's study, which found that:
  1. Teachers have too little time to find and evaluate software
  2. They do not have appropriate training and development opportunities
  3. It is too soon – we need decades to learn how to use new technology
  4. Educational institutions are organized around traditional practices
She goes on to echo these findings by stating:
"While we cannot expect that a revolution in the quality and effectiveness of education will necessarily result from the wider use of technology, we should expect the education system to be able to discover how to exploit its potential more effectively. It has to be teachers and lecturers who lead the way on this. No-one else can do it. But they need much more support than they are getting."
Laurillard established her reputation in educational technology by adopting a cybernetic model of learning from Gordon Pask, which was in turn a development of the logic of Ashby's work, applied to the field of human interaction. Pask called it his "interaction of actors" theory, and like Ashby, created a variety of machines and experiments to explore the possibilities generated by his theory and discover what could and couldn't be found in the world around him. Laurillard argued that her 'conversational model' of learning could subsume existing learning theories, including those of Piaget, Kolb, Vygotsky, and others. It could, in other words, be a transdisciplinary foundation for educational processes.
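To see what such a subsuming model commits to structurally, here is a minimal sketch - my own schematisation in Python, not Laurillard's or Pask's notation - assuming only the two levels common to their accounts: a discursive level of conceptions and an interactive level of observable actions, with learning as the revision of conceptions in the light of exchanged descriptions:

from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    conceptions: dict = field(default_factory=dict)  # discursive level
    actions: list = field(default_factory=list)      # interactive level
    def describe(self, topic):
        # Offer a description of a conception ("teach-back").
        return self.conceptions.get(topic, "no conception yet")
    def act(self, task):
        # Act observably on the shared environment.
        self.actions.append(task)
        return f"{self.name} attempts {task}"
    def revise(self, topic, description):
        # Adapt a conception in the light of the other's description.
        self.conceptions[topic] = description

teacher = Participant("teacher", {"feedback": "a closed causal loop"})
learner = Participant("learner")
learner.revise("feedback", teacher.describe("feedback"))
print(learner.act("building a feedback model"), "-", learner.describe("feedback"))

What the sketch leaves out is telling: nothing in it represents the institutional conditions under which real conversations take place.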
Here emerges a central question which has plagued the use of cybernetic and systems thinking not only in education but elsewhere, creating what is generally called 'functionalism' or 'regulative sociology'. There are two fundamental dimensions to this problem. On the one hand, cybernetic theories are seen to encompass earlier 'systems theories' in a way that obscures the provenance of the constituent theories, and what emerges is a new variety of Naturphilosophie. On the other, cybernetic theories become divorced from cybernetic empirical practice. Allusions to 'cybernetics', 'complexity', 'systems' and so forth carry with them a scientistic 'aura' but belie the essentially practical and entangled nature of cybernetic experiment. Instead they become transcendental metaphysical props, poorly inspected and poorly understood. More seriously, they are easily hijacked by those in society whose intentions are to exercise control, subvert justice, and increase inequality to their own advantage.
The problem is what to do with one's theory in the light of disappointing results. The complexities of the education system include the complexities of individual egos, status and power, let alone the economics of personal job security, research funding and student fees within the university system. All theories are incomplete, and some are deeply deficient. However, in an educational system where academic careers are made or broken on the success of theories rather than the authenticity of practice, the disincentives for discarding a talked-about theory are significant - particularly if its continued presence in discourse is instrumental in securing continued research funding. Yet these conditions are themselves the result of deficient theories and technologies which ought to have been refined or discarded.