
Integrating the Manifest and the Postulational Images at the time of pervasive computational technologies
Dr Silvia Mollicchi 1, *
1 : Ada Lovelace Institute
28 Bedford Square - United Kingdom
* : Corresponding author

The development of computational technologies in the contemporary age offers multiple examples that we could articulate through the lens of Sellars' writing on the Manifest and the Scientific Images. This presentation will argue that the pervasive use of computation as a support for, or even a replacement of, our cognizing of reality renders the integration of the theoretical image, as it currently obtains, into the manifest image that we observe increasingly challenging, if not impossible, as the case of deep learning neural networks shows.

The paper will begin with a brief exposition of Sellars' manifest and scientific images as sites in which we use language according to an observational and a theoretical or 'postulational' mode respectively. We will look at these two modes as they are presented in the context of Sellars' writing on language acquisition (for instance 'Language as Thought and Communication', 'Meaning as Functional Classification' and 'Actions and Events'). This will enable us to emphasise the continuity between the Sellarsian theory of language and mind and his epistemology, where the account of the two images is usually presented. By way of this preface, we will look at how the notions of observational and theoretical modes of language cut across the distinction between the manifest and scientific images of man in the Sellarsian literature. We will also emphasise how Sellars' writing suggests not only that the two images need to be held together in a synoptic vision, but also that we can go from postulating entities that we cannot observe but only infer to, after sufficient training in the relevant specialised vocabulary, observing their behaviour in a non-inferential way.

In other words, our capacity to integrate aspects of the scientific image into the manifest one hinges on the double function of natural language mentioned above. We use language to postulate the existence of entities that we cannot directly observe and, by virtue of this postulation, we infer expected behaviour, which we may be able to observe. Eventually postulation and inference are no longer necessary and a theoretical element is 'integrated' into the manifest image we observe. Notably, it is because theoretical discourse is defined in a manner that already relates theoretical entities to observational ones (which have ostensive links with the world) that we can eventually integrate what we know about the former into what we know about the latter. Here, we will offer some considerations on this integration and on how it affects our observational capacities. This will require a brief treatment of what we mean by 'observation', borrowing from Brandom's exposition of the topic, which will link back to the considerations made on language acquisition and to how Sellars' themes of knowing-that and knowing-how intersect the development of the two images and their synoptic continuum.

We will continue by highlighting how any equation of the 'algorithmic image' with the scientific (or postulational) one would be mistaken. Computation at large is an analytic mode of grasping reality, not a mode of postulating theoretical entities that we may or may not eventually integrate into the manifest image. It is nonetheless striking that specific computational technologies, themselves based on the possibility of parsing reality according to specific categories (pre-determined by programmers, or emergent as in the case of deep learning artificial intelligence), operate or contribute to decision-making whose logic is increasingly elusive. The difference in 'processing capacities', which we will briefly consider, results in our increasingly limited ability to translate the inferences made by an algorithm into inferences that a human agent could plausibly make.

Yet, when our vision of the world is supported by a computational logic that does not feed back into the structuring of our phenomenological experience in a manner that we can represent to ourselves, our capacity to integrate what we know and how we know it (inferential reasoning and the reasoning we do without being aware of inferring) is curtailed. This seems increasingly to be the case with the development of artificial intelligence in the form, for instance, of deep learning neural networks.

The paper will look at two scenarios of algorithmic decision-making. The first comes from the social sciences and shows the localised difficulties (though, we will argue, not impossibilities) of integrating the algorithm's mode of computing into our observational capacities. In this context, we will advance hypotheses as to what is necessary to facilitate the kind of integration described above. The second scenario will reference an example of deep learning neural networks applied to medical diagnosis, together with a recently piloted application for scientific discovery. In this context, we will reflect on the supposed impossibility of integrating the components of the interim scientific image, to which algorithms may contribute, into our manifest image of man.

