The Seminar on Cognitive Science and Artificial Intelligence has been (since the winter semester of 2015/2016) a continuation of the joint seminar on artificial intelligence organized by the Institute of Applied Informatics at FIIT STU (prof. Kvasnička) and the Department of Applied Informatics at FMFI UK (prof. Farkaš, and before him doc. Šefránek). In the winter semester the seminar is intended mainly for students of cognitive science, giving them the opportunity to get oriented in the research going on around us; in the summer semester its scope broadens to artificial intelligence as well. Throughout the year, however, everyone interested in hearing the offered lectures is welcome.
The seminar is organized by prof. Igor Farkaš.
Time and place: Tuesdays 16:30-17:45, room I-23 (informatics pavilion), FMFI UK.
Humans are remarkably adept at object handling. For a very long time, industrial robots performed only simple, hard-coded manipulation. High-quality 3D scanning has enabled us to localize objects precisely and to plan their picking and movement so as to avoid collisions. Localization and picking are tasks that we currently solve with both analytical and machine-learning approaches. In the talk, we discuss both approaches and compare their advantages and disadvantages.
Relations stand for the links between entities (e.g., COMPUTER is unlocked by PASSWORD). On the one hand, the same relation can hold between different entities ("is an instrument of" works for "artist brush", "tailor needle", "hairdresser scissors", "fisherman fishing", etc.) and, in turn, may be accessed from different instances (Popov & Hristova, 2015), even when it is irrelevant and can disrupt the task at hand (Hristova, 2009). Furthermore, an active relation may mislead memory, as suggested by the relational luring effect (RLE) (Popov, Hristova, & Anders, 2017): (1) word pairs were falsely recognised as studied if they were instances of learned relations ("snail shell" instead of "bear den"), and (2) correct-rejection RTs slowed down significantly with the number of trials since a previous instance of the same relation. On the other hand, since any two entities can be linked through different relations ("artist brush" can be meaningfully related via "is an instrument of", "broke", "hide", "toss", etc.), the winning relations may be the most contextually or goal-relevant ones, but also the most typical ones. This encoding priority may indicate that some relations are more contextually relevant (Hristova, 2009), but also more typical for the perceived entities. Indeed, it turned out that the typicality of the instance, but not of the role fillers, predicts the RLE (Popov, Hristova, & Pavlova, in prep.). Hence, relational similarity, rather than semantic or role similarity, may explain the RLE and, respectively, the heightened readiness of the long-term relational representations it attests to.
Speech data mining (represented by transcription in this talk) is an important discipline of machine learning. In addition to classical ML and computer science components, it also draws on other sciences such as physiology, lexicography, and phonetics, making it a fascinating interdisciplinary domain. Similarly to other ML sub-fields, it has been turned upside down in recent years by the massive use of neural networks. Although their use in speech dates back to 2000, it was only around 2010 that they gained ground and started to dominate the field. The Brno speech group has been through all these changes, sometimes following others, sometimes shaping the field's history. The talk will cover these developments as well as some new research trends.
The past decade has witnessed enormous growth in research and applications of machine learning techniques. Modern deep learning techniques have beaten humans on several important benchmarks, yet it is clear that the intelligence available in today's systems lags behind that of even the simplest animals. Part of the lag is theoretical, since there is little agreement on how to build connectionist systems. We would like to draw inspiration from biology, but there is still only fragmentary understanding of learning mechanisms in animal nervous systems. The other critical hurdle that needs to be cleared is what hardware should be used to implement future artificial intelligence systems. In our talk we will present our work on memristors, a promising nanotechnology device that may serve to emulate synapses in the next generation of artificial neural networks. Our results will be framed within the context of the competition between deep learning and neuromorphic approaches to artificial intelligence.
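The synapse-emulation idea can be made concrete with a toy simulation. Below is a minimal sketch of the widely used linear ion-drift memristor model (after Strukov et al.'s formulation), showing the key property that makes a memristor synapse-like: its resistance depends on the history of the applied voltage. All parameter values are illustrative assumptions, not data from the speakers' devices.

```python
import numpy as np

# Linear ion-drift memristor model (toy sketch, illustrative parameters).
R_ON, R_OFF = 100.0, 16e3   # resistance of fully doped / undoped device (ohms)
D = 10e-9                    # device thickness (m)
MU = 1e-14                   # dopant mobility (m^2 V^-1 s^-1)

def simulate(voltage, dt, w0=0.1):
    """Integrate the normalized doped-region width x = w/D under a voltage waveform
    and return the memristance trace."""
    x = w0
    resistances = []
    for v in voltage:
        m = R_ON * x + R_OFF * (1.0 - x)    # memristance at current state
        i = v / m                            # instantaneous current
        x += MU * R_ON / D**2 * i * dt       # linear ion-drift state update
        x = min(max(x, 0.0), 1.0)            # dopant front stays inside the device
        resistances.append(m)
    return np.array(resistances)

# A positive pulse drives the device toward R_ON (lower resistance, "potentiation");
# a negative pulse drives it toward R_OFF ("depression") - analogous to weight updates.
t = np.arange(0.0, 1e-3, 1e-6)
r_pot = simulate(np.full_like(t, 1.0), dt=1e-6)
r_dep = simulate(np.full_like(t, -1.0), dt=1e-6, w0=0.9)
print(r_pot[0] > r_pot[-1])   # resistance decreases under positive bias
print(r_dep[0] < r_dep[-1])   # resistance increases under negative bias
```

Because the state variable accumulates the integral of the current, the device "remembers" past activity the way a synaptic weight accumulates updates.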
Various disciplines suggest that our brain functions predominantly in a generative, predictive manner. Neural encodings develop and neural activities unfold in an anticipatory fashion for the purpose of generating homeostasis-oriented, flexible, adaptive behavioral control. Behavior here includes not only bodily motions but also the control of attention, social interactions, communication, versatile planning, and reasoning. Critical to mastering such higher-level cognitive abilities, and thus to enabling the imagination of past, future, and hypothetical scenes and events, is the development of generalized and abstract encodings. Accordingly, I will sketch out an integrative, event-predictive theory of cognition. I will show results from behavioral psychological experiments, which highlight that our mind indeed focuses on minimizing event-predictive uncertainty while acting in a goal-directed manner. Moreover, I will give a short overview of our computational models, which learn hierarchical, event-predictive encodings from sensorimotor experience for the purpose of optimizing flexible and highly adaptive, interactive goal-directed behavior.
The amount and complexity of information is growing steadily; however, the human tendency to believe in pseudoscientific theories or to deny science altogether, and to be subject to cognitive biases, is growing as well, which has a negative impact on the lives of individuals (refusal of vaccination, financing of ineffective "treatment" methods, ignoring medical recommendations, believing in conspiracies, etc.). Lack of comprehension and acceptance of expert information is caused by (1) cognitive limitations of the information recipient (e.g., confirmation bias, lack of scientific reasoning, receptivity to bullshit, etc.) and (2) the way information is phrased and presented (e.g., using percentages instead of natural frequencies, format-related biases in the cognitive processing of probabilities and risks, etc.). By studying these limitations we should be able to design and test possibilities for efficient communication of information, for debiasing, and for the optimization of decisions.
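The percentages-versus-natural-frequencies point can be illustrated with the classic screening-test example (the numbers below are standard textbook illustrations, not data from these studies). Both phrasings encode the same Bayesian computation, but the natural-frequency framing is typically much easier for people to follow.

```python
# Probability format: P(disease | positive test) via Bayes' theorem.
prevalence, sensitivity, false_pos = 0.01, 0.90, 0.09
p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
p_disease_given_pos = sensitivity * prevalence / p_positive

# Natural-frequency format: "imagine 1000 people" instead of percentages.
n = 1000
sick = n * prevalence                        # 10 people have the disease
sick_and_pos = sick * sensitivity            # 9 of them test positive
healthy_and_pos = (n - sick) * false_pos     # ~89 healthy people also test positive
nf_estimate = sick_and_pos / (sick_and_pos + healthy_and_pos)

print(round(p_disease_given_pos, 3))   # ~0.092: fewer than 1 in 10 positives are sick
print(round(nf_estimate, 3))           # the same number, reached via counting
```

The counting version makes the dominance of false positives at low prevalence visible at a glance, which is exactly the format effect the abstract refers to.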
The basic premise of economics is the assumption of rational behavior in decision-making processes: the ability to choose the means that lead to one's targets. Decision making aims to objectify the results of decisions. One of the known tools for doing so is subjective expected utility theory, introduced by mathematicians such as Frank Ramsey, John von Neumann and Leonard Savage, who also attempted to axiomatize it into a general theory of utility. A prominent critic of this theory was Daniel Ellsberg, who devised several tests demonstrating that people behave differently than the theory predicts, and who introduced the ambiguity-effect bias. At present, Ellsberg's test is considered very well standardized, so we used it as a base for further testing of the mutual influence of various biases, such as the anchoring bias and the bandwagon effect. Testing was conducted on a sample of 480 students (so far) without yielding any clear conclusion, but it showed some possibilities for quantifying the impact of various biases on decision-making.
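The conflict Ellsberg demonstrated can be checked mechanically. The sketch below uses his classic one-urn setup (30 red balls, 60 black-or-yellow balls in unknown proportion, unit payoff for a winning bet; these specifics are the textbook version of the test, assumed here for illustration). Subjects typically prefer betting on red over black, yet prefer betting on black-or-yellow over red-or-yellow; enumerating every possible subjective probability of black shows no single assignment makes both preferences consistent with expected utility.

```python
from fractions import Fraction

# Ellsberg one-urn experiment: 30 red, 60 black+yellow in unknown proportion.
p_red = Fraction(30, 90)
consistent = []                      # assumed black counts that rationalize both choices
for b in range(0, 61):               # b = assumed number of black balls
    p_black = Fraction(b, 90)
    p_yellow = Fraction(60 - b, 90)
    eu_red = p_red                   # expected utility of "bet on red" (payoff 1 or 0)
    eu_black = p_black
    eu_red_or_yellow = p_red + p_yellow
    eu_black_or_yellow = p_black + p_yellow
    # Observed modal pattern: red over black AND black-or-yellow over red-or-yellow.
    if eu_red > eu_black and eu_black_or_yellow > eu_red_or_yellow:
        consistent.append(b)

print(consistent)   # [] - no prior rationalizes the typical preference pattern
```

The first preference requires fewer than 30 black balls, the second requires more than 30, hence the empty list: this is the ambiguity effect in its starkest form.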
From uncritical preference for default options to epistemically suspect beliefs, biased reasoning and decision making lead to costly errors and severe consequences across all life domains. Cognitive biases are well documented, and we already know a lot about their causes. Yet finding effective interventions remains a huge scientific challenge, since early attempts have met with little success. We will discuss several promising ways to enhance rationality, such as mental simulation, choice architecture, or thinking in a foreign language.
Sleep is a continuous heterogeneous process consisting of a finite number of sleep stages during a night. Its quality, length and structure influence our health, mood and daily behaviour. Polysomnography (PSG) is a comprehensive recording of the biophysiological changes that occur during sleep; it helps to quantify sleep objectively, both in clinical practice and in research. PSG recordings form the basis of the majority of sleep models, whether for the traditionally used Rechtschaffen and Kales sleep scoring or for probabilistic sleep modeling, a novel approach. In addition to revealing important events pointing towards sleep disorders, these models can provide objective biomarkers reflecting the quality of sleep. The presentation gives an overview of methods for analyzing the correspondence between objective measurement and subjective assessment of sleep quality, and discusses their advantages and disadvantages.
In order to interact intelligently with the world, embodied robots must acquire a number of abilities, one of them being their own body schema. We will present several examples of simple models in this direction, based on artificial neural networks and taking advantage of known paradigms such as supervised, unsupervised and reinforcement learning. The ideas will be presented via selected tasks of motor learning, body-schema learning, and mapping between reference frames in a simulated humanoid robot. These examples will serve as motivation for potential research projects, whether theoretical or based on computational modelling.
Disrupted lexical-semantic processing is a core symptom of several neuropsychiatric disorders and has often been attributed to compromised functions of the prefrontal and temporoparietal cortex. In this presentation, we provide a short report of our recent empirical research using transcranial electrical stimulation (tES) of the respective brain areas to enhance lexical-semantic functions and processing (e.g., response initiation, response inhibition, and response flexibility during retrieval from semantic memory). Our preliminary findings indicate that tES over the prefrontal brain regions can modulate such functions and processes in healthy adults. This intervention may be potentially useful in the treatment of functionally related neuropsychiatric conditions.
The phenomenon of artificial intelligence, as the complex offspring of cognitive science and information technology (and inherently associated with the "internetization", "digitalization", and "virtualization" of reality, with "robotization", etc.), is generally recognized as a key phenomenon of the contemporary human world, having the potential not only to change the very nature of humanity itself but also to bring about one of its most serious existential risks (Open Letter on AI, 2015). For understanding the nature and value of AI, it is crucial to understand such concepts as human agency, intelligence (natural, general, weak, strong), consciousness, mind, thinking, free will, life, machine, technology, culture, etc., which fall under the competence of philosophy. Philosophy is key to the further development of AI (D. Deutsch). The area of the philosophy of AI has been developing (in parallel with cognitive science) since the historic 1956 Dartmouth conference, but the first academic synthesis was provided by a textbook only some 15 years ago (Russell & Norvig, 2002). Today, the discussion of two groups of philosophical issues of AI can be distinguished: "internal" (stemming from Turing's problem of thinking machines and ending with Bostrom's concept of super-intelligence) and "external" (starting from Minsky's "society of mind" and ending with the ethical question of "good AI").
Autism spectrum disorder (ASD) is a neurodevelopmental disability characterized by impairments in communication and social interaction, restricted interests, and repetitive behaviour. The aetiology of autism is poorly understood. ASD is diagnosed four times more frequently in males than in females. Previous studies have suggested that autism may arise as a result of exposure to high concentrations of prenatal testosterone. The ratio of the lengths of the second and fourth digits (2D:4D) is usually used as a proxy for prenatal testosterone. Our research findings on children with autism diagnosed at the Academic Research Centre for Autism are discussed with reference to the "extreme male brain" theory of autism.