The Seminar on Cognitive Science and Artificial Intelligence is (since the winter semester of 2015/2016) a continuation of the joint seminar on artificial intelligence organized by the Institute of Applied Informatics of FIIT STU (prof. Kvasnička) and the Department of Applied Informatics of FMFI UK (prof. Farkaš, and before him doc. Šefránek). In the winter semester the seminar is intended mainly for students of cognitive science, to give them a chance to get oriented in the research going on around us; in the summer semester its scope broadens to artificial intelligence as well. Throughout the year, however, everyone interested in hearing the offered lectures is welcome.
The seminar is organized by prof. Igor Farkaš.
Time and place of the seminar: Tuesday 16:30-17:45, room I-9 (informatics pavilion), FMFI UK.
Various disciplines suggest that our brain functions predominantly in a generative, predictive manner. Neural encodings develop and neural activities unfold in an anticipatory fashion for the purpose of generating homeostasis-oriented, flexible, adaptive behavioral control. Behavior here includes not only bodily motions but also the control of attention, social interactions, communication, versatile planning, and reasoning. Critical to mastering such higher-level cognitive abilities, and thus to enabling the imagination of past, future, and hypothetical scenes and events, is the development of generalized and abstract encodings. Accordingly, I will sketch out an integrative, event-predictive theory of cognition. I will show results from behavioral psychological experiments, which highlight that our mind indeed focuses on minimizing event-predictive uncertainty while acting in a goal-directed manner. Moreover, I will give a short overview of our computational models, which learn hierarchical, event-predictive encodings from sensorimotor experiences for the purpose of optimizing flexible and highly adaptive, interactive goal-directed behavior.
The amount and complexity of information is growing steadily; however, so is the human tendency to believe in pseudoscientific theories or deny science altogether, and to be subject to cognitive biases, which has a negative impact on the lives of individuals (refusal of vaccination, financing ineffective “treatment” methods, ignoring medical recommendations, believing in conspiracies, etc.). Lack of comprehension and acceptance of expert information is caused by (1) cognitive limitations of the information recipient (e.g., confirmation bias, lack of scientific reasoning, receptivity to bullshit, etc.) and (2) the way information is phrased and presented (e.g., using percentages instead of natural frequencies, format-related biases in the cognitive processing of probabilities and risks, etc.). By studying these limitations we should be able to design and test possibilities for efficient communication of information, aimed at debiasing and at the optimization of decisions.
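The percentages-versus-natural-frequencies point can be made concrete with the classic base-rate problem from the risk-communication literature (a standard textbook illustration, not an example taken from the talk; all numbers are illustrative):

```python
# Percent format: prevalence 1%, test sensitivity 80%,
# false-positive rate 9.6%. What is P(disease | positive test)?
p_disease = 0.01
sensitivity = 0.80
fp_rate = 0.096

# Bayes' rule on the percent format (hard for most readers):
p_pos = sensitivity * p_disease + fp_rate * (1 - p_disease)
p_disease_given_pos = sensitivity * p_disease / p_pos

# Natural-frequency format (much easier): of 1000 people, 10 have the
# disease and 8 of them test positive; of the 990 healthy, about 95
# test positive, so roughly 8 / (8 + 95) of positives are true.
print(round(p_disease_given_pos, 3))  # → 0.078
```

Both formats encode the same probability; the natural-frequency phrasing merely makes the base rate visible, which is the format effect the abstract refers to.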
The basic premise of economics is the assumption of rational behavior in decision-making processes, i.e., the ability to choose the means that lead to one's targets. Decision theory aims to objectify the results of decisions. One well-known tool for doing so is subjective expected utility theory, introduced by mathematicians such as Frank Ramsey, John von Neumann, and Leonard Savage, who also attempted to axiomatize it into a general theory of utility. A prominent critic of this theory was Daniel Ellsberg, who devised several tests showing that people behave differently than the theory predicts, and who described the ambiguity effect bias. At present, Ellsberg's test is considered very well standardized, so we used it as a basis for further testing of the mutual influence of various biases, such as the anchoring bias and the bandwagon effect. Testing was carried out on a sample of 480 students (so far) without reaching any clear conclusion, but it revealed some possibilities for quantifying the impact of various biases on decision-making.
From uncritical preference for default options to epistemically suspect beliefs, biased reasoning and decision making lead to costly errors and severe consequences across all life domains. Cognitive biases are well documented, and we already know a lot about their causes. Yet finding effective interventions remains a huge scientific challenge, since early attempts met with little success. We will discuss several promising ways to enhance rationality, such as mental simulation, choice architecture, or thinking in a foreign language.
Sleep is a continuous heterogeneous process consisting of a finite number of sleep stages during a night. Its quality, length, and structure influence our health, mood, and daily behaviour. Polysomnography (PSG) is a comprehensive recording of the biophysiological changes that occur during sleep; it helps to objectively quantify sleep, both in clinical practice and in research. PSG recordings form the basis of most sleep models - either the traditionally used Rechtschaffen and Kales sleep scoring or probabilistic sleep modelling, a novel approach. In addition to revealing important events pointing towards sleep disorders, these models can provide objective biomarkers reflecting the quality of sleep. The presentation gives an overview of methods for analyzing the correspondence between objective measurement and subjective assessment of sleep quality, and discusses their advantages and disadvantages.
In order to interact intelligently with the world, embodied robots must acquire a number of abilities, one of them being their own body schema. We will present several examples of simple models along these lines, based on artificial neural networks and taking advantage of known paradigms such as supervised, unsupervised, and reinforcement learning. The ideas will be presented via selected tasks of motor learning, learning the body schema, and mapping between reference frames in a simulated humanoid robot. These examples will serve as motivation for potential research projects - theoretical, or based on computational modelling.
Disrupted lexical-semantic processing is a core symptom of several neuropsychiatric disorders and has often been attributed to compromised functions of the prefrontal and temporoparietal cortex. In this presentation, we provide a short report of our recent empirical research using transcranial electrical stimulation (tES) of the respective brain areas to enhance lexical-semantic functions and processing (e.g., response initiation, response inhibition, and response flexibility during retrieval from semantic memory). Our preliminary findings indicate that tES over the prefrontal brain regions can modulate such functions and processes in healthy adults. This intervention may be potentially useful in the treatment of functionally related neuropsychiatric conditions.
The phenomenon of artificial intelligence, as the complex offspring of cognitive science and information technology (and inherently associated with the "internetization", "digitalization", and "virtualization" of reality, "robotization", etc.), is generally recognized as a key phenomenon of the contemporary human world, having the potential not only to change the very nature of humanity itself but also to bring about one of its most serious existential risks (Open Letter on AI, 2015). For understanding the nature and value of AI, it is crucial to understand such concepts as human agency, intelligence (natural, general, weak, strong), consciousness, mind, thinking, free will, life, machine, technology, culture, etc., which fall under the competence of philosophy. Philosophy is the key to further development of AI (D. Deutsch). The philosophy of AI has been developing (in parallel with cognitive science) since the historic 1956 Dartmouth conference, but the first academic synthesis was provided by a textbook only some 15 years ago (Russell & Norvig, 2002). Today, two groups of philosophical issues of AI can be distinguished: "internal" (stemming from Turing's problem of thinking machines and ending with Bostrom's concept of superintelligence) and "external" (starting from Minsky's "society of mind" and ending with the ethical question of "good AI").
Autism spectrum disorder (ASD) is a neurodevelopmental disability characterized by impairments in communication and social interaction, restricted interests, and repetitive behaviour. The aetiology of autism is poorly understood. ASD is diagnosed four times more frequently in males than in females. Previous studies have suggested that autism may arise as a result of exposure to high concentrations of prenatal testosterone. The ratio of the second to the fourth digit (2D:4D) is usually used as a proxy for prenatal testosterone. Our research findings on children with autism diagnosed at the Academic Research Centre for Autism are discussed with reference to the "extreme male brain" theory of autism.
Mental imagery is a phenomenon that has been given various theoretical accounts in cognitive science. We present an enactive approach to visuospatial mental imagery, implemented in a hybrid computational model using so-called perceptual actions. The model consists of a forward model and an inverse model, both implemented as neural networks, and a memory/controller module that grounds simple mental concepts, such as a triangle and a square, in perceptual actions and is able to reimagine these objects by performing the necessary perceptual actions in a simulated humanoid robot. We tested the model on three tasks – salience-based object recognition, imagination-based object recognition, and object imagination – and achieved very good results, showing, as a proof of concept, that perceptual actions are a viable candidate for grounding visuospatial mental concepts as well as a credible substrate of visuospatial mental imagery. (The core of this work was done with students of cognitive science - Jan Jug, Tine Kolenik and André Ofner - during their mobility semester.)
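The forward/inverse pairing mentioned in the abstract can be sketched in a few lines. This is not the authors' model: linear least-squares maps stand in for the neural networks, and the "world" dynamics, data sizes, and variable names are invented purely to show the data flow (forward model: percept + action → next percept; inverse model: percept + desired percept → action).

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.normal(size=(4, 4))  # invented "world": next = s@A + a@B
B = 0.1 * rng.normal(size=(2, 4))

S = rng.normal(size=(200, 4))                  # random percepts
U = rng.normal(size=(200, 2))                  # random perceptual actions
S_next = S @ A + U @ B                         # observed outcomes

# Forward model: (percept, action) -> next percept.
Wf, *_ = np.linalg.lstsq(np.hstack([S, U]), S_next, rcond=None)

# Inverse model: (percept, desired next percept) -> action.
Wi, *_ = np.linalg.lstsq(np.hstack([S, S_next]), U, rcond=None)

s, goal = S[0], S_next[0]
a_hat = np.hstack([s, goal]) @ Wi   # inverse model proposes an action
s_hat = np.hstack([s, a_hat]) @ Wf  # forward model predicts its outcome
print(np.allclose(s_hat, goal, atol=1e-6))
```

The point of the pairing is exactly this loop: the inverse model proposes the action needed to reach a desired percept, and the forward model lets the agent simulate ("imagine") the result without executing it.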
Standard error backpropagation is still the most prominent algorithm for supervised training of artificial neural networks, although it has been claimed to be biologically implausible. Algorithms based on local activation differences, such as GeneRec (O'Reilly, 1996), were designed as an alternative to error backpropagation. Continuing our previous research (Malinovská/Rebrová and Farkaš, 2013), we present a model based on GeneRec that learns heteroassociative mappings. We show how our Universal Bidirectional Activation-based Learning algorithm can learn various other tasks if different parameters are used, and that its performance in canonical neural network tasks, such as the 4-2-4 encoder or XOR, as well as in machine learning benchmarks such as the MNIST dataset, is comparable with a standard error backpropagation model.
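For readers unfamiliar with the 4-2-4 encoder benchmark mentioned above, here is a minimal sketch of that task trained with plain error backpropagation, i.e., the baseline the talk compares against (this is not the GeneRec/UBAL code; architecture sizes come from the task name, while the learning rate, epoch count, and initialization are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 4-2-4 encoder: reproduce a one-hot input through a 2-unit bottleneck.
X = np.eye(4)                      # inputs double as targets
W1 = rng.normal(0, 0.5, (4, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 0.5, (2, 4)); b2 = np.zeros(4)

lr = 0.5                           # illustrative hyperparameters
losses = []
for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)       # hidden (bottleneck) layer
    y = sigmoid(h @ W2 + b2)       # output layer
    err = y - X
    losses.append(float((err ** 2).mean()))
    # backpropagate the error through both sigmoid layers
    dy = err * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The interest of the benchmark is the bottleneck: the network must invent a 2-bit hidden code for the four patterns, which is why it is a standard sanity check for any backprop alternative such as GeneRec or UBAL.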
Since the mid-1970s, connections between music and language have been widely studied in cognitive science. The first part of this talk will present the main directions such studies have taken in recent years (structural, neurobiological, evolutionary, and semantic). In particular, I will discuss major breakthroughs from linguistics that made an impact on music cognition and then reflected back on the study of language. In the second part, I will provide a précis of four recent studies by my group. The first of these looks into the relative weights of perceptual cues for constructing well-formed metrical and melodic patterns in music; the second suggests that seemingly disparate cross-linguistic conceptualizations of music-theoretic constructs, such as scales ''moving upward'' or ''getting thinner'', do not necessarily support the strong case for linguistic relativity, but may rather emerge from more universal, higher-order schematic invariants; the third study revisits the referential power of program music, suggesting that in constructing musical meaning participants are highly sensitive to contextual priming; the fourth segment integrates such results into a larger-scale theoretical program on the nature of musical meaning, vouching for ''multi-level grounding'' in (musical) semantics. Altogether, I hope to suggest that some constructs originally defined in linguistics and then taken over by music cognition can help shed additional light on some major dilemmas of cognitive science in general.
Many vehicles already drive on our roads today with various electronic systems that inform the driver of imminent danger and take over control when the driver does not act. Vehicles with artificial intelligence are gradually appearing as well, to which the driver can deliberately hand over control and then devote themselves to another activity. The vehicle's decision-making requires not only the ability to operate the vehicle and knowledge of traffic regulations and road signs, but also handling unspecified circumstances, which the vehicle must manage at least at the level of an average human driver.
We investigate the role of perceptual similarity in the process of visual metaphor comprehension. In visual metaphors, perceptual features of the source and the target are objectively present as images. To determine perceptual similarity, we use an image-based search system that computes similarity based on low-level perceptual features. We hypothesize that perceptual similarity between the source and the target image - at the level of color, shape, texture, orientation, and the like - facilitates metaphorical comprehension and aids creative interpretation. We present three experiments, two of which are eye-movement studies, to demonstrate that in the interpretation and generation of visual metaphors, perceptual similarity between the two images is recognized at a subconscious level and facilitates the search for creative conceptual associations in terms of emergent features. We argue that the capacity to recognize perceptual similarity - considered to be a hallmark of creativity - plays a major role in the creative understanding of metaphors.
Capturing the movement of a human body is a challenging problem. The surface of the body deforms in a complex way that is hard to describe precisely. Therefore, in computer graphics, this complex deformation is usually approximated by a so-called animation skeleton. The skeleton segments the body surface into a set of several rigid areas, and the surface deformation of the body is approximated by rigid transformations of the skeleton. The topic becomes even more challenging if we want to capture not only the movement but also the point-cloud frames of the body, and later reconstruct the whole 3D model of the subject. Since the body can be in a different pose in each frame, so-called non-rigid surface fusion has to be performed. This step consists of transforming each point cloud into a normalized pose, where the individual surface sheets need to be stitched together. Finally, we would like to reconstruct the 3D model of the subject and transform it into a desired pose. With only a single camera, the visible surface area is limited, and thus the deformations of the body parts facing away from the camera need to be approximated. This can be either computed using physical simulation, which is very time- and power-consuming, or approximated using machine learning and statistical models.
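The idea of approximating surface deformation by rigid transformations of a skeleton can be sketched with linear blend skinning, the standard formulation in computer graphics (the abstract does not say which skinning scheme is used, so treat this as a generic illustration; the bone poses and weights below are made up):

```python
import numpy as np

def skin(points, bone_transforms, weights):
    """Linear blend skinning: each point is a weighted blend of the
    rigid (R, t) transforms of the bones it is attached to.
    points: (N, 3); bone_transforms: list of (R, t); weights: (N, B)."""
    out = np.zeros_like(points)
    for b, (R, t) in enumerate(bone_transforms):
        out += weights[:, b:b + 1] * (points @ R.T + t)
    return out

# Two bones: identity, and a 90-degree rotation about z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
bones = [(np.eye(3), np.zeros(3)), (Rz, np.array([0.0, 0.0, 1.0]))]

pts = np.array([[1.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
w = np.array([[1.0, 0.0],    # rigidly attached to bone 0 -> unchanged
              [0.5, 0.5]])   # near a joint -> blend of both bones
deformed = skin(pts, bones, w)
print(deformed)
```

Points with a single nonzero weight move rigidly with their bone; points near joints get blended transforms, which is exactly the cheap approximation of the complex surface deformation described above.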
Artificial neural networks have recently become the most successful approach in machine learning, with many successful applications in various domains such as image recognition, machine translation, reinforcement learning, or generative modeling. The majority of artificial neural network models are currently trained using the gradient descent method with an error backpropagation mechanism. However, the main disadvantage of these models is the large number of training examples required to achieve reasonable performance. This limits their applications to domains offering an abundance of labelled training examples or, in the case of reinforcement learning, domains where a large number of interactions with the environment makes it possible to learn the task. In this talk, we will explore few-shot learning, which requires only a few labelled examples, and review various few-shot learning methods based on artificial neural networks. We will also propose a novel few-shot learning approach called Categorical Siamese neural networks.
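To make the Siamese idea behind such few-shot methods concrete, here is a minimal sketch of one-shot classification with a generic Siamese embedding: the same network embeds both the query and one labelled support example per class, and the query takes the label of the nearest support embedding. The embedding here is an untrained random projection purely to show the mechanism; it is not the Categorical Siamese model proposed in the talk, whose details the abstract does not give.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))    # shared embedding weights (untrained)

def embed(x):
    return np.tanh(x @ W)       # the SAME network is applied to both inputs

def classify(query, support, labels):
    """Assign the query the label of the nearest embedded support example."""
    q = embed(query)
    dists = [np.linalg.norm(q - embed(s)) for s in support]
    return labels[int(np.argmin(dists))]

support = [rng.normal(size=16) for _ in range(3)]  # one shot per class
labels = ["a", "b", "c"]
# A query identical to a support example is at distance 0 from it,
# so it gets that example's class.
print(classify(support[1], support, labels))  # → "b"
```

In actual few-shot training the shared weights are learned so that same-class pairs land close and different-class pairs land far apart; new classes can then be recognized from a single labelled example without retraining.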