Purpose

Bateson's model classifying different types of learning will be analyzed from a logical and technical point of view. While Learning 0 has been realized in chess-playing computers, Learning I turns out to be the basic concept behind today's artificial neural nets (ANNs). All ANN models are essentially (non-linear) data filters, which is the idea behind simple, behavioristic input-output models.
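The claim that an ANN is a non-linear input-output filter can be made concrete with a minimal sketch (not from the paper itself, purely illustrative): a one-hidden-layer network whose weights, once fixed, define a mapping that never changes by the system's own effort.

```python
import math

def neural_net(x, w1, b1, w2, b2):
    """A one-hidden-layer network as a fixed non-linear data filter.

    x  : input vector
    w1 : hidden-layer weight matrix (list of rows), b1: hidden biases
    w2 : output weights, b2: output bias

    Once the weights are fixed, the input-output mapping is frozen --
    the system cannot alter its own algorithm (Learning I, not II).
    """
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2
```

Whatever training procedure produced `w1`..`b2`, the deployed artifact is a pure filter: identical inputs always yield identical outputs.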

Design/methodology/approach

The paper will discuss technical systems designed around the concepts of Learning 0 and Learning I, and it will demonstrate that these models do not have an environment, i.e. they are non-cognitive and therefore "non-learning" systems.

Findings

Models based on Bateson's category of Learning II differ fundamentally from those based on Learning 0 and Learning I. They can no longer be modeled on the basis of classical (mono-contextural) logics. Technical artifacts belonging to this category must be able to change their algorithms (behavior) by their own effort. Learning II turns out to be a process that cannot be described or modeled on a sequential time axis; it belongs to the category of (parallel, interwoven) heterarchical-hierarchical process-structures.
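The distinction between changing a parameter (Learning I) and changing the behavior-generating algorithm itself (Learning II) can be sketched with a toy agent. This is an illustrative assumption, not the paper's polycontextural formalism: the agent named `LearningIIAgent`, its rules, and the switching threshold are all invented for the example.

```python
class LearningIIAgent:
    """Toy illustration: an agent that can replace its own learning rule.

    Updating `estimate` is Learning I (adjusting a parameter within a
    fixed algorithm); replacing `self.rule` is a caricature of Learning II
    (the system changes its algorithm by its own effort).
    """

    def __init__(self):
        self.estimate = 0.0
        self.rule = self.small_steps  # the current behavior-generating rule

    def small_steps(self, error):
        # Learning I: cautious parameter adjustment
        return self.estimate + 0.1 * error

    def proportional(self, error):
        # An alternative algorithm the agent may adopt
        return self.estimate + 0.5 * error

    def observe(self, target):
        error = target - self.estimate
        self.estimate = self.rule(error)
        # Learning II (toy version): if progress is too slow,
        # the agent swaps out its own update rule.
        if abs(error) > 1.0 and self.rule == self.small_steps:
            self.rule = self.proportional
```

Note that in a mono-contextural, sequential setting this switch is still just another pre-programmed branch; the paper's point is precisely that genuine Learning II resists such a description, which is why polycontextural theory is invoked.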

Originality/value

In order to model this kind of process-structure, polycontextural theory has to be used – a theory introduced by the German-American philosopher and logician Gotthard Günther (1900-1984) and further developed by Rudolf Kaehr and others.
