Högskolan i Skövde

his.se Publications
1 - 49 of 49
  • 1.
    Bengtsson, Björn
    University of Skövde, School of Informatics.
Dynamisk Kollisionsundvikande I Twin Stick shooter: Hastighetshinder och partikelseparation, 2019. Independent thesis, Basic level (university diploma), 20 credits / 30 HE credits. Student thesis.
Abstract [sv, translated to en]

    This thesis compares collision avoidance and time efficiency between the two methods velocity obstacles and particle separation in the twin stick shooter game genre. The work attempts to answer the question: How do collision avoidance and time efficiency differ between the methods velocity obstacles and particle separation in the twin stick shooter genre with flocking behaviour? To answer the question, an artefact was created. In the artefact, agents chase a player while avoiding collisions with other agents; the agents do, however, strive to collide with the player. Different experiments are run in the artefact based on configured parameters. Each experiment runs for a fixed time, and all data on collisions and execution time for each method is saved to a text file. The results of the experiments indicate that particle separation is better suited for twin stick shooters. Velocity obstacles produces fewer collisions, but its computation time is too high and scales poorly with the number of agents; this does not fit the twin stick shooter genre, where there are usually many agents on screen. The collision avoidance methods also have applications in radio-controlled cars and robots, as well as in crowd simulation.

Download full text (pdf)
  • 2.
    Bevilacqua, Fernando
    et al.
    Federal University of Fronteira Sul, Chapecó, Brazil.
    Backlund, Per
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Engström, Henrik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
Proposal for Non-contact Analysis of Multimodal Inputs to Measure Stress Level in Serious Games, 2015. In: VS-Games 2015: 7th International Conference on Games and Virtual Worlds for Serious Applications / [ed] Per Backlund; Henrik Engström; Fotis Liarokapis, Red Hook, NY: IEEE Computer Society, 2015, p. 171-174. Conference paper (Refereed).
    Abstract [en]

    The process of monitoring user emotions in serious games or human-computer interaction is usually obtrusive. The work-flow is typically based on sensors that are physically attached to the user. Sometimes those sensors completely disturb the user experience, such as finger sensors that prevent the use of keyboard/mouse. This short paper presents techniques used to remotely measure different signals produced by a person, e.g. heart rate, through the use of a camera and computer vision techniques. The analysis of a combination of such signals (multimodal input) can be used in a variety of applications such as emotion assessment and measurement of cognitive stress. We present a research proposal for measurement of player’s stress level based on a non-contact analysis of multimodal user inputs. Our main contribution is a survey of commonly used methods to remotely measure user input signals related to stress assessment.

  • 3.
    Billing, Erik
Umeå University, Department of Computing Science, Sweden.
Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior, 2012. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well established approach where the robot is to learn from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

    The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many interesting properties applied as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an on-going behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.

    In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of other parts. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce more complex behaviors as well.

Download full text (pdf)
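The abstract above describes PSL as a variable-order Markov model that progressively builds up the ability to predict the next sensory-motor event given a history of past events. The following is a minimal illustrative sketch of that general idea, not the dissertation's implementation; the class name, data structures, and longest-match back-off strategy are assumptions made for the example:

```python
from collections import defaultdict, Counter

class VariableOrderPredictor:
    """Minimal variable-order sequence predictor (illustrative sketch).

    During training, next-symbol counts are stored for every context
    up to max_order symbols long. Prediction uses the longest context
    that was seen in training, backing off to shorter ones.
    """

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context tuple -> Counter of next symbols

    def train(self, sequence):
        for i in range(1, len(sequence)):
            for order in range(1, self.max_order + 1):
                if i - order < 0:
                    break
                context = tuple(sequence[i - order:i])
                self.counts[context][sequence[i]] += 1

    def predict(self, history):
        # Try the longest known context first, then back off.
        for order in range(min(self.max_order, len(history)), 0, -1):
            context = tuple(history[-order:])
            if context in self.counts:
                return self.counts[context].most_common(1)[0][0]
        return None  # no matching context seen in training

p = VariableOrderPredictor(max_order=3)
p.train(list("abcabcabd"))
```

After training on the toy sequence, the history "ab" is most often followed by "c", so the predictor backs that context. A real LFD setting would use sensory-motor events rather than characters.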
  • 4.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Balkenius, Christian
    Lund University Cognitive Science, Lund, Sweden.
Modeling the Interplay between Conditioning and Attention in a Humanoid Robot: Habituation and Attentional Blocking, 2014. In: IEEE ICDL-EPIROB 2014: The Fourth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, October 13-16, 2014, Palazzo Ducale, Genoa, Italy, IEEE conference proceedings, 2014, p. 41-47. Conference paper (Refereed).
    Abstract [en]

A novel model of the role of conditioning in attention is presented and evaluated on a Nao humanoid robot. The model implements conditioning and habituation in interaction with a dynamic neural field where different stimuli compete for activation. The model can be seen as a demonstration of how stimulus-selection and action-selection can be combined and illustrates how positive and negative reinforcement have different effects on attention and action. Attention is directed toward both rewarding and punishing stimuli, but appetitive actions are only directed toward positive stimuli. We present experiments where the model is used to control a Nao robot in a task where it can select between two objects. The model demonstrates some emergent effects also observed in similar experiments with humans and animals, including attentional blocking and latent inhibition.

  • 5.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Sweden.
Robot learning from demonstration using predictive sequence learning, 2012. In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2012, p. 235-250. Chapter in book (Refereed).
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

  • 6.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Rosén, Julia
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Lamb, Maurice
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
Language Models for Human-Robot Interaction, 2023. In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM Digital Library, 2023, p. 905-906. Conference paper (Refereed).
    Abstract [en]

    Recent advances in large scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have a great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference and the source code of this integration is shared with the hope that it will serve the community in designing and evaluating new dialogue systems for robots.

Download full text (pdf)
  • 7.
    Boström, Henrik
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
Maximizing the Area under the ROC Curve with Decision Lists and Rule Sets, 2007. In: Proceedings of the 7th SIAM International Conference on Data Mining / [ed] C. Apte, B. Liu, S. Parthasarathy, D. Skillicorn, Society for Industrial and Applied Mathematics, 2007, p. 27-34. Conference paper (Refereed).
    Abstract [en]

Decision lists (or ordered rule sets) have two attractive properties compared to unordered rule sets: they require a simpler classification procedure and they allow for a more compact representation. However, it is an open question what effect these properties have on the area under the ROC curve (AUC). Two ways of forming decision lists are considered in this study: by generating a sequence of rules, with a default rule for one of the classes, and by imposing an order upon rules that have been generated for all classes. An empirical investigation shows that the latter method gives a significantly higher AUC than the former, demonstrating that the compactness obtained by using one of the classes as a default is indeed associated with a cost. Furthermore, by using all applicable rules rather than the first in an ordered set, an even further significant improvement in AUC is obtained, demonstrating that the simple classification procedure is also associated with a cost. The observed gains in AUC for unordered rule sets compared to decision lists can be explained by the fact that learning rules for all classes, as well as combining multiple rules, allows examples to be ranked on a more fine-grained scale than when rules are applied in a fixed order with a default rule for one of the classes.
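The abstract's point about ranking granularity can be illustrated with a small sketch (not the paper's code). AUC can be computed as the probability that a randomly chosen positive example is scored above a randomly chosen negative one, with ties counted as one half; the example scores below are made up to show how coarse scores, like those from a fixed rule order with a default class, create ties that cap the attainable AUC:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(random positive scored above random negative),
    with ties counted as 1/2. Illustrative sketch only."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Fine-grained scores can separate every positive from every negative pair:
fine = auc([0.9, 0.8, 0.6], [0.7, 0.4])
# Coarse scores (one rule fires or it does not) tie positives with a negative:
coarse = auc([1, 1, 1], [1, 0])
```

Here the coarse scorer cannot exceed 0.75 on this toy data because three of its six positive/negative pairs are tied, which mirrors the abstract's explanation of why finer-grained ranking helps AUC.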

  • 8.
    Cai, Haibin
    et al.
    School of Computing, University of Portsmouth, U.K..
    Fang, Yinfeng
    School of Computing, University of Portsmouth, U.K..
    Ju, Zhaojie
    School of Computing, University of Portsmouth, U.K..
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Computer and Information Science, Linkoping University, Sweden.
    Thill, Serge
    University of Plymouth, U.K..
    Belpaeme, Tony
    University of Plymouth, U.K..
    Vanderborght, Bram
    Vrije Universiteit Brussel and Flanders Make, Belgium.
    Vernon, David
    Carnegie Mellon University Africa, Rwanda.
    Richardson, Kathleen
    De Montfort University, U.K..
    Liu, Honghai
    School of Computing, University of Portsmouth, U.K..
Sensing-enhanced Therapy System for Assessing Children with Autism Spectrum Disorders: A Feasibility Study, 2019. In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 19, no 4, p. 1508-1518. Article in journal (Refereed).
    Abstract [en]

It is evident that recently reported robot-assisted therapy systems for assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, further assessing the children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system is capable of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and to enhance the quality of the digital description of children with ASD. The research outcomes pave the way to a feasible machine-assisted system for their behaviour assessment.

  • 9.
    Durán, Boris
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lee, Gauss
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
Learning a DFT-based sequence with reinforcement learning: A NAO implementation, 2012. In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 3, no 4, p. 181-187. Article in journal (Refereed).
  • 10.
    Ericson, Stefan
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, Björn
    Halmstad University, Sweden.
Algorithms for visual odometry in outdoor field environment, 2007. In: RA '07: Proceedings of the 13th IASTED International Conference on Robotics and Applications / [ed] Klaus Schilling, ACTA Press, 2007, p. 414-419. Conference paper (Refereed).
  • 11.
    Eriksson, Patric
    et al.
    University of Skövde, Department of Engineering Science.
    Moore, Philip
    Mechatronics Research group, School of Engineering and Manufacture, De Montfort University, Leicester, UK.
A role for 'sensor simulation' and 'pre-emptive learning' in computer aided robotics, 1995. In: 26th International Symposium on Industrial Robots, Symposium Proceedings: Competitive automation: new frontiers, new opportunities, Mechanical Engineering Publ., 1995, p. 135-140. Conference paper (Refereed).
    Abstract [en]

Sensor simulation in Computer Aided Robotics (CAR) can enhance the capabilities of such systems to enable off-line generation of programmes for sensor driven robots. However, such sensor simulation is not commonly supported in current computer aided robotic environments. A generic sensor object model for the simulation of sensors in graphical environments is described in this paper. Such a model can be used to simulate a variety of sensors, for example photoelectric, proximity and ultrasonic sensors. Test results presented here show that this generic sensor model can be customised to emulate the characteristics of real sensors. The preliminary findings from the first off-line trained mobile robot are presented. The results indicate that sensor simulation within CAR can be used to train robots to adapt to changing environments.

  • 12.
    Hamed, Omar
    University of Skövde, School of Informatics.
Pedestrian Intention Recognition: Fusion of Handcrafted Features in a Deep Learning Approach, 2020. Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

The safety of vulnerable road users (VRU) is a major concern for both advanced driver assistance systems (ADAS) and autonomous vehicle manufacturers. To guarantee people's safety on roads, autonomous vehicles must be able to detect the presence of pedestrians, track them, and predict their intention to cross the road. Most of the earlier work on pedestrian intention recognition focused on using either handcrafted features or an end-to-end deep learning approach. In this project, we investigate the impact of fusing handcrafted features with auto learned features by using a two-stream deep neural network architecture. Our results show that the combined approach improves the performance. Furthermore, the proposed method achieved very good results on the JAAD dataset. Depending on whether we considered only the immediate image frames before the crossing or image frames half a second before it, we received prediction accuracies of 90% and 84%, respectively.

  • 13.
    Hamed, Omar
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Steinhauer, H. Joe
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
Pedestrian’s Intention Recognition, Fusion of Handcrafted Features in a Deep Learning Approach, 2021. In: AAAI-21 / IAAI-21 / EAAI-21 Proceedings: A virtual conference February 2-9, 2021: Thirty-Fifth AAAI Conference on Artificial Intelligence, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, The Eleventh Symposium on Educational Advances in Artificial Intelligence, Palo Alto: AAAI Press, 2021, p. 15795-15796. Conference paper (Refereed).
    Abstract [en]

The safety of vulnerable road users (VRU) is a major concern for both advanced driver assistance systems (ADAS) and autonomous vehicle manufacturers. To guarantee people's safety on roads, autonomous vehicles must be able to detect the presence of pedestrians, track them, and predict their intention to cross the road. Most of the earlier work on pedestrian intention recognition focused on using either handcrafted features or an end-to-end deep learning approach. In this project, we investigate the impact of fusing handcrafted features with auto learned features by using a two-stream neural network architecture. Our results show that the combined approach improves the performance. Furthermore, the proposed method achieved very good results on the JAAD dataset. Depending on whether we considered the immediate frames before the crossing or only half a second before that point, we received prediction accuracies of 91% and 84%, respectively.

  • 14.
    Hansson, Andreas
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
Using Segmentation to Control the Retrieval of Data, 2006. In: The 2006 IEEE International Joint Conference on Neural Network Proceedings, IEEE, 2006, p. 1764-1769. Conference paper (Other academic).
    Abstract [en]

One problem when storing sequential data in recurrent neural networks is that it is hard to preserve long-term dependencies: only the most recently stored data tend to be accurately recalled. One approach to reducing this recency effect has been to divide the data into segments and store the segments separately. This approach has provided promising results in prediction and classification domains. This paper analyzes how recall of the stored data is affected by segmentation. It is concluded that segmentation enables control over which data can be recalled; the problem of preserving long-term dependencies in recurrent neural networks can therefore be reduced.

  • 15.
    Hedenberg, Klas
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, Björn
    Halmstad University, Sweden.
Obstacle detection for thin horizontal structures, 2008. In: Proceedings of the World Congress on Engineering and Computer Science 2008: WCECS 2008, October 22 - 24, 2008, San Francisco, USA / [ed] S. I. Ao, Craig Douglas, W. S. Grundfest, Lee Schruben, Jon Burgstone, Hong Kong: Newswood, 2008, p. 689-693. Conference paper (Refereed).
    Abstract [en]

Many vision-based approaches to obstacle detection state that thin vertical structures, e.g. poles and trees, are important to detect. However, there are also problems in detecting thin horizontal structures. In an industrial setting there are horizontal objects, e.g. cables and fork lifts, and slanting objects, e.g. ladders, that also have to be detected. This paper focuses on the problem of detecting thin horizontal structures. The system uses three cameras, arranged as one horizontal pair and one vertical pair, which makes it possible to also detect thin horizontal structures. A comparison is made between a sparse disparity map based on edges and a dense disparity map with a column and row filter. Both methods use the Sum of Absolute Differences to compute the disparity maps. Special interest has been paid to scenes with thin horizontal objects. Tests show that a trinocular system with the sparse, edge-based method using the Canny detector works better for the environments we have tested.
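The Sum of Absolute Differences matching mentioned in the abstract can be sketched for a single image scanline. This is a simplified illustration, not the paper's trinocular system; the window size, search range, and the toy intensity profile below are made up for the example:

```python
def sad(a, b):
    """Sum of Absolute Differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity_scanline(left, right, window=3, max_disp=4):
    """For each window position on the left scanline, find the horizontal
    shift (disparity) into the right scanline that minimises the SAD cost.
    Illustrative sketch of SAD block matching, not the paper's code."""
    half = window // 2
    disparities = []
    for x in range(half, len(left) - half):
        ref = left[x - half:x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if x - half - d < 0:
                break  # candidate window would fall off the image
            cand = right[x - half - d:x + half + 1 - d]
            cost = sad(ref, cand)
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities.append(best_d)
    return disparities

# Toy data: the same intensity bump appears 2 pixels to the left
# in the right image, i.e. a true disparity of 2 around the bump.
left  = [0, 0, 0, 10, 20, 10, 0, 0, 0, 0]
right = [0, 10, 20, 10, 0, 0, 0, 0, 0, 0]
ds = disparity_scanline(left, right, window=3, max_disp=4)
```

Windows covering the bump recover disparity 2, while featureless regions default to 0, which is why the paper's edge-based sparse variant restricts matching to textured (edge) pixels.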

  • 16.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
Översikt: AI: nuläget och vart är vi på väg?, 2019. In: NOD: forum för tro, kultur och samhälle, ISSN 1652-6066, no 3. Article, review/survey (Other (popular science, discussion, etc.)).
Download full text (pdf)
  • 17.
    Hemeren, Paul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Rybarczyk, Yves
    Dalarna University, Falun, Sweden.
The Visual Perception of Biological Motion in Adults, 2020. In: Modelling Human Motion: From Human Perception to Robot Design / [ed] Nicoletta Noceti, Alessandra Sciutti, Francesco Rea, Cham: Springer, 2020, p. 53-71. Chapter in book (Refereed).
    Abstract [en]

This chapter presents research about the roles of different levels of visual processing and motor control on our ability to perceive biological motion produced by humans and by robots. The levels of visual processing addressed include high-level semantic processing of action prototypes based on global features as well as lower-level local processing based on kinematic features. A further important aspect concerns the interaction between these two levels of processing and the interaction between our own movement patterns and their impact on our visual perception of biological motion. The authors' results describe the conditions under which semantic and kinematic features influence one another in our understanding of human actions. In addition, results are presented to illustrate the claim that motor control and different levels of the visual perception of biological motion have clear consequences for human-robot interaction. Understanding the movement of robots is greatly facilitated by movement that is consistent with the psychophysical constraints of Fitts' law, minimum jerk and the two-thirds power law.

  • 18.
    Högnäs, Jerry
    University of Skövde, School of Humanities and Informatics.
Från 2D till 3D: Reflektioner kring arbetsprocessen bakom filmtrailern till Gabriel Glömmer, 2008. Independent thesis, Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis.
Abstract [sv, translated to en]

    This is a reflective report on the working method behind the creation of a film trailer marketing the fictitious film about Gabriel Glömmer, based on a book by Ulf Löfgren. The book is about a withdrawn boy with a lively imagination who dreams himself into a fairy-tale world where he experiences adventures and meets new friends. The text focuses on the creation of environments in 3D, with the goal of preserving the graphical style of the 2D illustrations in the book. The result of the project is a film trailer with a length of 1 minute and 40 seconds. The trailer's images are composed to resemble the illustrations found in the book.

Download full text (pdf)
  • 19.
    Kiryazov, Kiril
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, The Informatics Research Centre. University of Skövde, School of Humanities and Informatics.
The role of arousal in embodying the cue-deficit model in multi-resource human-robot interaction, 2013. In: Advances in Artificial Life: ECAL 2013: 2-6 September 2013, Taormina, Italy: Proceedings of the twelfth European Conference on the Synthesis and Simulation of Living Systems / [ed] Pietro Liò, Orazio Miglino, Giuseppe Nicosia, Stefano Nolfi, Mario Pavone, MIT Press, 2013, p. 19-26. Conference paper (Refereed).
  • 20.
    Kiryazov, Kiril
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Becker-Asano, Christian
    Albert-Ludwigs-Universität Freiburg.
    Randazzo, Marco
    Istittuto Italiano di Tecnologia, Genoa.
The role of arousal in two-resource problem tasks for humanoid service robots, 2013. In: RO-MAN, 2013 IEEE, IEEE conference proceedings, 2013, p. 62-69. Conference paper (Refereed).
  • 21.
    Kleinhans, Ashley
    et al.
    CSIR, Pretoria, South Africa.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Rosman, Benjamin
    CSIR, Pretoria, South Africa.
    Detry, Renaud
    University of Liège, Belgium.
    Tripp, Bryan
    University of Waterloo, Canada.
Modelling primate control of grasping for robotics applications, 2015. In: Computer Vision - ECCV 2014 Workshops: Zurich, Switzerland, September 6-7 and 12, 2014, Revised Selected Papers, Part II / [ed] Lourdes Agapito; Michael M. Bronstein; Carsten Rother, Springer International Publishing Switzerland, 2015, 1, p. 438-447. Chapter in book (Refereed).
    Abstract [en]

    The neural circuits that control grasping and perform related visual processing have been studied extensively in macaque monkeys. We are developing a computational model of this system, in order to better understand its function, and to explore applications to robotics. We recently modelled the neural representation of three-dimensional object shapes, and are currently extending the model to produce hand postures so that it can be tested on a robot. To train the extended model, we are developing a large database of object shapes and corresponding feasible grasps. Finally, further extensions are needed to account for the influence of higher-level goals on hand posture. This is essential because often the same object must be grasped in different ways for different purposes. The present paper focuses on a method of incorporating such higher-level goals. A proof-of-concept exhibits several important behaviours, such as choosing from multiple approaches to the same goal. Finally, we discuss a neural representation of objects that supports fast searching for analogous objects.

  • 22.
    Kusetogullari, Huseyin
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. Department of Computer Science, Blekinge Institute of Technology, Karlskrona.
    Yavariabdi, Amir
    Department of Mechatronics Engineering, KTO Karatay University, Konya, Turkey.
    Hall, Johan
    Arkiv Digital, Växjö, Sweden.
    Lavesson, Niklas
    Department of Computer Science, School of Engineering, Jönköping University, Sweden.
DIGITNET: A Deep Handwritten Digit Detection and Recognition Method Using a New Historical Handwritten Digit Dataset, 2021. In: Big Data Research, ISSN 2214-5796, E-ISSN 2214-580X, Vol. 23, article id 100182. Article in journal (Refereed).
    Abstract [en]

This paper introduces a novel deep learning architecture, named DIGITNET, and a large-scale digit dataset, named DIDA, to detect and recognize handwritten digits in historical document images written in the nineteenth century. To generate the DIDA dataset, digit images are collected from 100,000 Swedish handwritten historical document images, which were written by different priests with different handwriting styles. This dataset contains three sub-datasets including single digit, large-scale bounding box annotated multi-digit, and digit string with 250,000, 25,000, and 200,000 samples in Red-Green-Blue (RGB) color spaces, respectively. Moreover, DIDA is used to train the DIGITNET network, which consists of two deep learning architectures, called DIGITNET-dect and DIGITNET-rec, to isolate digits and recognize digit strings in historical handwritten documents. In the DIGITNET-dect architecture, features are extracted from digits by three residual units, where each residual unit has three convolutional neural network structures, and a detection strategy based on the You Only Look Once (YOLO) algorithm is then employed to detect handwritten digits at two different scales. In DIGITNET-rec, the detected isolated digits are passed through three differently designed Convolutional Neural Network (CNN) architectures, and the classification results of the three CNNs are combined using a voting scheme to recognize digit strings. The proposed model is also trained with various existing handwritten digit datasets and then validated on historical handwritten digit strings. The experimental results show that the proposed architecture trained with DIDA (publicly available from: https://didadataset.github.io/DIDA/) outperforms the state-of-the-art methods.

    Download full text (pdf)
    fulltext
  • 23.
    Li, Cai
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Computer and Information Science, Linköping University, Sweden.
    A Novel Approach to Locomotion Learning: Actor-Critic Architecture using Central Pattern Generators and Dynamic Motor Primitives2014In: Frontiers in Neurorobotics, ISSN 1662-5218, Vol. 8, article id 23Article in journal (Refereed)
  • 24.
    Li, Cai
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Crawling Posture Learning in Humanoid Robots using a Natural-Actor Critic CPG Architecture2013In: Advances in Artificial Life, ECAL 2013: Proceedings of the twelfth European Conference on the Synthesis and Simulation of Living Systems, 2013, p. 1182-1190Conference paper (Refereed)
  • 25.
    Li, Cai
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Humanoids learning to crawl based on Natural CPG-Actor-Critic and Motor Primitives2013In: Proceedings of the IROS 2013 Workshopon Neuroscience and Robotics: Towards a robot-enabled,Neuroscience-guided healthy society, 2013, p. 7-15Conference paper (Refereed)
  • 26.
    Lowe, Robert
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Designing for Emergent Ultrastable Behaviour in Complex Artificial Systems: The Quest for Minimizing Heteronomous Constraints2013In: Constructivist Foundations, ISSN 1782-348X, Vol. 9, no 1, p. 105-107Article in journal (Refereed)
  • 27.
    Lowe, Robert
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Göteborgs Universitet, Tillämpad IT.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training2017In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 25, no 1, p. 5-23Article in journal (Refereed)
    Abstract [en]

    In this article we present a novel neural network implementation of Associative Two-Process (ATP) theory based on an Actor–Critic-like architecture. Our implementation emphasizes the affective components of differential reward magnitude and reward omission expectation, and thus we model Affective-Associative Two-Process theory (Aff-ATP). ATP has been used to explain the findings of differential outcomes training (DOT) procedures, which emphasize learning differentially valuated outcomes for cueing actions previously associated with those outcomes. ATP hypothesizes the existence of a ‘prospective’ memory route through which outcome expectations can be brought to bear on decision making and can even substitute for decision making based on the ‘retrospective’ inputs of standard working memory. While DOT procedures are well recognized in the animal learning literature, they have not previously been computationally modelled. The model presented in this article helps clarify the role of ATP computationally by capturing empirical data based on DOT. Our Aff-ATP model illuminates the different roles that prospective and retrospective memory can have in decision making (combining inputs to action selection functions). In specific cases, the model’s prospective route allows for adaptive switching (correct action selection prior to learning) following changes in the stimulus–response–outcome contingencies.
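    The Actor–Critic machinery underlying such a model can be illustrated with a generic tabular sketch. This shows standard actor-critic learning on a two-cue task with differential reward magnitudes; it is not the authors' Aff-ATP network, and the prospective-memory route is not modelled here. All cue, action, and reward values are made up for illustration.

```python
import math, random

# Generic tabular actor-critic on a two-cue, two-action task with
# differential reward magnitudes (illustrative values only).
random.seed(0)
cues = ["S1", "S2"]
actions = ["R1", "R2"]
reward = {("S1", "R1"): 1.0, ("S1", "R2"): 0.0,
          ("S2", "R1"): 0.0, ("S2", "R2"): 0.2}
value = {c: 0.0 for c in cues}                       # critic: expected reward per cue
pref = {(c, a): 0.0 for c in cues for a in actions}  # actor: action preferences

def choose(cue):
    """Softmax action selection over the actor's preferences."""
    exps = [math.exp(pref[(cue, a)]) for a in actions]
    r = random.random() * sum(exps)
    for a, e in zip(actions, exps):
        r -= e
        if r <= 0:
            return a
    return actions[-1]

for _ in range(2000):
    cue = random.choice(cues)
    act = choose(cue)
    td_error = reward[(cue, act)] - value[cue]  # outcome vs. expectation
    value[cue] += 0.1 * td_error                # critic update
    pref[(cue, act)] += 0.1 * td_error          # actor update

# After training, the actor prefers the higher-magnitude outcome for each cue.
print(pref[("S1", "R1")] > pref[("S1", "R2")],
      pref[("S2", "R2")] > pref[("S2", "R1")])  # → True True
```

    The same TD error drives both updates: the critic's expectation improves, and actions whose outcomes beat that expectation are preferred more.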

  • 28.
    Lowe, Robert
    et al.
    University of Skövde, The Informatics Research Centre. University of Skövde, School of Informatics.
    Kiryazov, Kiril
    University of Skövde, The Informatics Research Centre. University of Skövde, School of Informatics.
    Utilizing Emotions in Autonomous Robots: An Enactive Approach2014In: Emotion Modeling: Towards Pragmatic Computational Models of Affective Processes / [ed] Tibor Bosse; Joost Broekens; João Dias; Janneke van der Zwaan, Springer International Publishing Switzerland , 2014, 1, p. 76-98Chapter in book (Refereed)
    Abstract [en]

    In this chapter, we present a minimalist approach to utilizing the computational principles of affective processes and emotions for autonomous robotics applications. The focus of this paper is on the presentation of this framework in reference to preservation of agent autonomy across levels of cognitive-affective competences. This approach views autonomy in reference to (i) embodied (e.g. homeostatic), and (ii) dynamic (e.g. neural-dynamic) processes, required to render adaptive such cognitive-affective competences. We hereby focus on bridging bottom-up (standard autonomous robotics) and top-down (psychology-based dimensional theoretic) modelling approaches. Our enactive approach we characterize according to bi-directional grounding (inter-dependent bottom-up and top-down regulation). As such, from an emotions theory perspective, ‘enaction’ is best understood as an embodied and dynamic appraisal perspective. We attempt to clarify our approach with relevant case studies and comparison to other existing approaches in the modelling literature.

  • 29.
    Mahmoud, Sara
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Svensson, Henrik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Thill, Serge
    Donders Institute for Brain,Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands.
    How to train a self-driving vehicle: On the added value (or lack thereof) of curriculum learning and replay buffers2023In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 6, article id 1098982Article in journal (Refereed)
    Abstract [en]

    Learning from only real-world collected data can be unrealistic and time-consuming in many scenarios. One alternative is to use synthetic data as learning environments to learn rare situations, and replay buffers to speed up the learning. In this work, we examine how the creation of the environment affects the training of a reinforcement learning agent, through auto-generated environment mechanisms. We take the autonomous vehicle as an application. We compare the effect of two approaches to generate training data for artificial cognitive agents. We consider the added value of curriculum learning—just as in human learning—as a way to structure novel training data that the agent has not seen before, as well as that of using a replay buffer to train further on data the agent has seen before. In other words, the focus of this paper is on the characteristics of the training data rather than on learning algorithms. We therefore use two tasks that are commonly trained early on in autonomous vehicle research: lane keeping and pedestrian avoidance. Our main results show that curriculum learning indeed offers an additional benefit over a vanilla reinforcement learning approach (using Deep-Q Learning), but the replay buffer actually has a detrimental effect in most (but not all) combinations of data generation approaches we considered here. The benefit of curriculum learning does depend on the existence of a well-defined difficulty metric with which various training scenarios can be ordered. In the lane-keeping task, we can define it as a function of the curvature of the road: the steeper and more frequent the curves, the more difficult the task. Defining such a difficulty metric in other scenarios is not always trivial. In general, the results of this paper emphasize both the importance of considering data characterization, such as curriculum learning, and the importance of defining an appropriate metric for the task.
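    The core of such a curriculum is simply ordering scenarios by a difficulty metric before training. A minimal sketch for the lane-keeping case, where difficulty grows with how sharp and how frequent the curves are (the scenario records and the exact metric are hypothetical, not the paper's code):

```python
# Order lane-keeping scenarios from easy to hard using a curvature-based
# difficulty metric (fields and values are illustrative).

def difficulty(scenario):
    # Steeper and more frequent curves -> harder scenario.
    return scenario["max_curvature"] * scenario["num_curves"]

scenarios = [
    {"name": "hairpins", "max_curvature": 0.9, "num_curves": 6},
    {"name": "straight", "max_curvature": 0.0, "num_curves": 0},
    {"name": "gentle",   "max_curvature": 0.2, "num_curves": 2},
]

curriculum = sorted(scenarios, key=difficulty)
print([s["name"] for s in curriculum])  # → ['straight', 'gentle', 'hairpins']
```

    The agent would then be trained on each scenario in this order, only advancing once the easier ones are mastered.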

    Download full text (pdf)
    fulltext
  • 30.
    Montebelli, Alberto
    et al.
    Department of Automation and Systems Technology, Aalto University, Finland.
    Lowe, Robert
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Toward Metabolic Robotics: Insights from Modeling Embodied Cognition in a Biomechatronic Symbiont2013In: Artificial Life, ISSN 1064-5462, E-ISSN 1530-9185, Vol. 19, no 3-4, p. 299-315Article in journal (Refereed)
  • 31.
    Rosén, Julia
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Expectations: Approaching Social Robots2021Report (Other academic)
    Abstract [en]

    The development of robots that are able to interact socially with humans is still in its early stages. Since the beginning of the 2000s, these so-called social robots have started to emerge in a variety of settings. Along with the emergence of social robots, there has been a parallel interest in studying the different aspects of having humans interact with robots socially. There are several motivations behind developing and studying social robots: social robots may be used as test beds to study human behavior, as tools for humans to achieve certain tasks in specific contexts, or as interaction partners and thus viewed as social agents. These three perspectives often draw on the assumption that human-robot interaction (HRI) is similar to human-human interaction. Thus, humans tend to expect human-like abilities in social robots, often mismatching the robots' actual capabilities.

    In this thesis proposal, expectations of social robots are the focal point. Expectations are, in any aspect of life and not just in HRI, underlying and ever-present mechanisms of human behavior. Expectations are defined as believed probabilities of future events that set the stage for the human belief system, which guides our behavior, hopes, and intentions. Expectations are based on direct experience, other people, and beliefs. Once an expectation is set, it is accompanied by either positive or negative affect, which turns into behavior and performance. Thus, expectations are crucial in human behavior, including when interacting with social robots. What makes social robots rarer than other technical artifacts, such as computers, is that many humans lack personal experience with them. High expectations arise especially with social robots, as they are purposely designed to look and behave like humans, which creates ethical implications since this can be considered deceptive design. Expectations are therefore usually built on beliefs based on the portrayal of social robots in media. When humans interact with social robots, they will usually have high expectations, which ultimately affects how successful the interaction will be. This creates a gap between what is expected and what the robots are actually capable of.

    Expectations are thus an underlying factor in interaction with any artifact, and there is a need for a deeper understanding of how these expectations affect HRI. Once we have gained a richer understanding of how expectations affect HRI, we can narrow the expectation gap and create more successful interactions between humans and social robots in society. With this in mind, the aim of my PhD work is to investigate the role expectations play when interacting socially with robots, including the subsequent ethical implications of such expectations. My four objectives are to (1) theoretically identify existing research on expectations in HRI, (2) empirically investigate expectations in HRI, (3) synthesize the findings from objectives 1 and 2 to create an interdisciplinary theoretical framework of expectations in HRI, and (4) address the ethical implications of expectations in HRI. In this thesis proposal, I present what I have done so far to reach these objectives, as well as my research plan moving forward towards my dissertation. The intended contribution of my PhD work is to theoretically and empirically characterize the role and relevance of humans' expectations when interacting with social robots, with the goal of narrowing the social robot expectation gap.

  • 32.
    Rosén, Julia
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    What did you expect?: A human-centered approach to investigating and reducing the social robot expectation gap2024Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    We live in a complex world where we proactively plan and execute various behaviors by forming expectations in real time. Expectations are beliefs regarding the future state of affairs, and they play an integral part in our perception, attention, and behavior. Over time, our expectations become more accurate as we interact with the world and others around us. People interact socially with other people by inferring others' purposes, intentions, preferences, beliefs, emotions, thoughts, and goals. Similar inferences may occur when we interact with social robots. Through anthropomorphic design, these robots are made to mimic people physically and behaviorally. As a result, users predominantly infer agency in social robots, often leading to mismatched expectations of the robots' capabilities, which ultimately influences the user experience.

    In this thesis, the role and relevance of users' expectations in first-hand social human-robot interaction (sHRI) was investigated. There are two major findings. First, in order to study expectations in sHRI, the social robot expectation gap evaluation framework was developed. This framework supports the systematic study and evaluation of expectations over time, considering the unique context where the interaction is unfolding. Use of the framework can inform sHRI researchers and designers on how to manage users' expectations, not only in the design, but also during evaluation and presentation of social robots. Expectations can be managed by identifying what kinds of expectations users have and aligning these through design and dissemination, which ultimately creates more transparent and successful interactions and collaborations. The framework is a tool for achieving this goal. Second, results show that previous experience has a strong impact on users' expectations. People have different expectations of social robots and view social robots as both human-like and as machines. Expectations of social robots can vary according to the source of the expectation, with those who had previous direct experience of robots having different expectations than those who relied on indirect experiences.

    One consequence of these results is that expectations can be a confounding variable in sHRI research. Previous experience with social robots can prime users in future interactions with social robots. These findings highlight the unique experiences users have, even when faced with the same robot. Users' expectations, and how they change over time, shape the users' individual needs and preferences and should therefore be considered in the interpretation of sHRI. In doing so, the social robot expectation gap can be reduced.

    Download full text (pdf)
    fulltext
  • 33.
    Rydin, Daniel
    University of Skövde, School of Humanities and Informatics.
    Från ord till handling: reflektioner kring arbetsprocessen bakom trailern ”Gabriel Glömmer”2008Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [sv]

    This report presents my reflections on the project work, both on a personal level and regarding the collaboration between the two of us who carried out the project. What worked well, and what worked less well, during the project period? From 2D to 3D is a common thread throughout the report, as we went from the 2D reference images in the children's book Gabriel Glömmer to rendered 3D sequences. I go through what I did and what others did during the project, since we had help from several different people. These people are briefly introduced, as are the book and my relation to it. The result of the project was that we achieved the goals we had set before the project started.

    Download full text (pdf)
    FULLTEXT01
  • 34.
    Sandini, Giulio
    et al.
    Istituto Italiano di Tecnologia, Genova.
    Vernon, David
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    The Hows and Whys of Effective Interdisciplinarity2014In: IEEE AMD Newsletter, ISSN 1550-1914, Vol. 6, no 2, p. 6-7Article, review/survey (Other academic)
    Download full text (pdf)
    fulltext
  • 35.
    Senington, Richard
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    The Multiple Uses of Monte-Carlo Tree Search2022In: SPS2022: Proceedings of the 10th Swedish Production Symposium / [ed] Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm, Amsterdam; Berlin; Washington, DC: IOS Press, 2022, p. 713-724Conference paper (Refereed)
    Abstract [en]

    Modern production processes are continuing to move towards more flexible and dynamic conditions, most clearly exemplified by mass customization, but this flexibility can also be seen in technologies such as Human-Robot Collaboration, Automated Guided Vehicle fleets for just-in-time delivery of parts within factories, and reconfigurable manufacturing. Currently, these technologies are developing independently of one another, and the supporting industrial software tools, such as line balancing optimisation tools, Machine Execution Systems, and fleet management tools, are similarly developing independently. An alternative to developing individual technologies for each problem is the use of a shared algorithmic framework that can support all of these problem types and future research into general smart factory technology. Monte Carlo Tree Search is a relatively recent Artificial Intelligence algorithm, sometimes described as a general-purpose heuristic, that has been found to be very effective in several theoretical and game-related problems. This paper reviews the current growth in research into possible industrial applications of this algorithm and how a framework utilising this algorithm can help to realise the aims of the smart factory vision.
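    The algorithm surveyed above can be sketched compactly. This is a minimal Monte Carlo Tree Search with UCT selection on a toy sequential problem (not a production scheduler): each state is the tuple of choices made so far, each step picks 0 or 1, and after three steps the reward is the fraction of 1s chosen, so the best policy always picks 1.

```python
import math, random

random.seed(1)
DEPTH, ACTIONS = 3, (0, 1)

def rollout(state):
    """Random playout from a partial state; reward is the fraction of 1s."""
    s = list(state)
    while len(s) < DEPTH:
        s.append(random.choice(ACTIONS))
    return sum(s) / DEPTH

def mcts(root=(), iterations=500, c=1.4):
    visits, total = {root: 0}, {root: 0.0}
    for _ in range(iterations):
        path, state = [root], root
        while len(state) < DEPTH:                      # selection / expansion
            children = [state + (a,) for a in ACTIONS]
            unvisited = [ch for ch in children if ch not in visits]
            if unvisited:                              # expand one new child
                state = random.choice(unvisited)
                visits[state], total[state] = 0, 0.0
                path.append(state)
                break
            state = max(children, key=lambda ch:       # UCT selection
                        total[ch] / visits[ch]
                        + c * math.sqrt(math.log(visits[path[-1]]) / visits[ch]))
            path.append(state)
        reward = rollout(state)                        # simulation
        for node in path:                              # backpropagation
            visits[node] += 1
            total[node] += reward
    # Recommend the most-visited first action.
    return max(ACTIONS, key=lambda a: visits.get(root + (a,), 0))

print(mcts())  # the search concentrates visits on the higher-reward action
```

    The same four phases (selection, expansion, simulation, backpropagation) carry over unchanged when states encode machine and worker actions instead of toy choices; only the state transitions and the reward function differ.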

    Download full text (pdf)
    fulltext
  • 36.
    Senington, Richard
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Schmidt, Bernard
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Syberfeldt, Anna
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Monte Carlo Tree Search for online decision making in smart industrial production2021In: Computers in industry (Print), ISSN 0166-3615, E-ISSN 1872-6194, Vol. 128, p. 1-10, article id 103433Article in journal (Refereed)
    Abstract [en]

    This paper considers the issue of rapid automated decision making in changing factory environments, including situations such as human-robot collaboration, mass customisation, and the need to rapidly adapt activities to new conditions. The approach taken is to adapt the Monte Carlo Tree Search (MCTS) algorithm to provide online choices for the possible actions of machines and workers, interleaving them dynamically in response to the changing conditions of the production process. This paper describes how the MCTS algorithm has been adapted for use in production environments; the proposed method is then illustrated by two examples of the system in use, one simulated and one in a physical test cell.

    Download full text (pdf)
    fulltext
  • 37.
    Shao, Bing
    et al.
    Kaiserslautern Intelligent Manufacturing School, Shanghai Dianji University, China.
    Hou, Yichen
    Kaiserslautern Intelligent Manufacturing School, Shanghai Dianji University, China.
    Huang, Nianquing
    Kaiserslautern Intelligent Manufacturing School, Shanghai Dianji University, China.
    Wang, Wei
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Lu, Xin
    Department of Computing and Informatics, Bournemouth University, UK.
    Jing, Yanguo
    Faculty of Business Computing and Digital Industries, Leeds Trinity University, UK.
    Deep Learning based Coffee Beans Quality Screening2022In: Proceedings 2022 IEEE International Conference on e-Business Engineering ICEBE 2022: 14–16 October 2022 Bournemouth, United Kingdom, IEEE, 2022, p. 271-275Conference paper (Refereed)
    Abstract [en]

    Coffee bean quality screening is time-consuming work, and its workload increases abruptly with the rapid development of the coffee beverage consumer market. In this work, a CNN-based classifier is developed to categorize coffee beans into sour, black, broken, moldy, shell, insect-damaged, and good beans. The screening test results show that the screening accuracy reaches more than 90% for all bean types except shell beans (88%). Therefore, the proposed method is feasible and promising. Moreover, a cost-effective automatic coffee bean screening system using the developed classifier has been manufactured and implemented for a local company.
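    The per-class screening accuracy reported above is simply, for each true class, the share of its samples the classifier labels correctly. A quick sketch on toy predictions (the sample labels are made up, not the paper's data):

```python
from collections import defaultdict

def per_class_accuracy(pairs):
    """pairs: iterable of (true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    seen = defaultdict(int)
    for true, pred in pairs:
        seen[true] += 1
        correct[true] += (true == pred)
    return {c: correct[c] / seen[c] for c in seen}

# Toy example: both 'good' beans classified correctly, one 'shell' bean missed.
toy = [("good", "good"), ("good", "good"), ("shell", "broken"), ("shell", "shell")]
print(per_class_accuracy(toy))  # → {'good': 1.0, 'shell': 0.5}
```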

  • 38.
    Sivanantham, V.
    et al.
    Department of Computer Science, Periyar University Constituent College of Arts and Science, Pappireddipatti Campus, Periyar University, Salem, Tamil Nadu, India.
    Sangeetha, V.
    Department of Computer Science, Periyar University Constituent College of Arts and Science, Pappireddipatti Campus, Periyar University, Salem, Tamil Nadu, India.
    Alnuaim, Abeer Ali
    Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, Riyadh, Saudi Arabia.
    Hatamleh, Wesam Atef
    Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.
    Anilkumar, Chunduru
    Department of Information Technology, GMR Institute of Technology, Rajam, Srikakulam, Andhra Pradesh, India.
    Hatamleh, Ashraf Atef
    Department of Botany and Microbiology, College of Science, King Saud University, Riyadh, Saudi Arabia.
    Sweidan, Dirar
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Quantile correlative deep feedforward multilayer perceptron for crop yield prediction2022In: Computers & electrical engineering, ISSN 0045-7906, E-ISSN 1879-0755, Vol. 98, article id 107696Article in journal (Refereed)
    Abstract [en]

    Crop yield prediction is an essential task in agriculture. Crop protection is the science and practice of handling plant diseases, weeds, and other pests. Accurate information regarding crop yield history is essential for making decisions regarding agricultural risk management. Many research studies have been undertaken to identify crop productivity using various data mining techniques. However, the prediction accuracy of crop yields has not been improved with minimum time consumption. To overcome these issues, a novel Quantile Regressive Empirical correlative Functioned Deep FeedForward Multilayer Perceptron Classification (QRECF-DFFMPC) method is proposed for crop yield prediction. The QRECF-DFFMPC method comprises an input layer and an output layer, with one or more hidden layers in between. The input layer of the deep neural network receives several features and data from the dataset and sends them to hidden layer 1. In that layer, an Empirical Orthogonal Function is used to select the relevant features with the help of orthogonal basis functions. After that, quantile regression is used in hidden layer 2 to analyze the features and produce a regression value for every data point. The regression values of the data points are then sent to the output layer, improving prediction accuracy and reducing time complexity. Experimental evaluation is carried out on factors such as prediction accuracy, precision, and prediction time for varying numbers of data points and features. The results show that the proposed technique enhances prediction accuracy and precision by 6% and 9%, respectively, and reduces prediction time by 32% compared to existing works.
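    The quantile regression step rests on the pinball (quantile) loss, shown here as a generic formula sketch; the paper's QRECF-DFFMPC layers are not reproduced. For a quantile tau, under-predictions are weighted by tau and over-predictions by (1 - tau), so minimising the loss fits the tau-th conditional quantile rather than the mean.

```python
def pinball_loss(y_true, y_pred, tau):
    """Mean pinball loss for quantile tau over paired observations/predictions."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        diff = y - q
        # Under-prediction (diff > 0) costs tau per unit;
        # over-prediction (diff < 0) costs (1 - tau) per unit.
        total += tau * diff if diff >= 0 else (tau - 1) * diff
    return total / len(y_true)

# With tau = 0.9, under-predicting is penalised 9x more than over-predicting.
print(round(pinball_loss([10.0], [8.0], 0.9), 3))   # → 1.8
print(round(pinball_loss([10.0], [12.0], 0.9), 3))  # → 0.2
```

    The asymmetry is what pushes the fitted value towards the chosen quantile of the target distribution.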

  • 39.
    Ståhl, Niclas
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. Jönköping Artificial Intelligence Lab, Jönköping University, Sweden.
    Weimann, Lisa
    County Administrative Board of Jönköping, Sweden.
    Identifying wetland areas in historical maps using deep convolutional neural networks2022In: Ecological Informatics, ISSN 1574-9541, E-ISSN 1878-0512, Vol. 68, article id 101557Article in journal (Refereed)
    Abstract [en]

    The local environment and land usage have changed considerably during the past one hundred years. Historical documents and materials are crucial for understanding and following these changes, and are therefore an important piece in understanding the impact and consequences of land-usage change. This, in turn, is important in the search for restoration projects that can be conducted to reverse and reduce harmful and unsustainable effects originating from changes in land usage. This work extracts information on the historical location and geographical distribution of wetlands from hand-drawn maps. This is achieved by using deep learning (DL), and more specifically a convolutional neural network (CNN). The CNN model is trained on a manually pre-labelled dataset of historical wetlands in the area of Jönköping county in Sweden, all extracted from the historical map called “Generalstabskartan”. The presented CNN performs well and achieves an F1-score of 0.886 when evaluated using 10-fold cross-validation over the data. The trained models are additionally used to generate a GIS layer of the presumed historical geographical distribution of wetlands for the area depicted in the southern collection of Generalstabskartan, which covers the southern half of Sweden. This GIS layer is released as an open resource and can be freely used. To summarise, the presented results show that CNNs can be a useful tool in the extraction and digitalisation of non-textual information in historical documents, such as historical maps. A modern GIS material that can be used to further understand past land-usage change is produced within this research. Previously, no material of this detail and extent has been available, due to the large effort needed to create such material manually. However, with the presented resource, better quantifications and estimations of historical wetlands that have been lost can be made.
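    The F1-score used to evaluate the classifier above is the harmonic mean of precision and recall. A quick formula sketch on toy counts (the true/false positive and false negative counts here are invented, not the study's):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)  # share of predicted positives that are correct
    recall = tp / (tp + fn)     # share of actual positives that are found
    return 2 * precision * recall / (precision + recall)

# Toy counts: 8 true positives, 2 false positives, 2 false negatives.
print(round(f1_score(8, 2, 2), 3))  # → 0.8
```

    Because it is a harmonic mean, a high F1 requires both precision and recall to be high at once, which is why it is a common single-number summary for segmentation-style tasks.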

    Download full text (pdf)
    fulltext
  • 40.
    Sun, Jiong
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Redyuk, Sergey
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Tactile Interaction and Social Touch: Classifying Human Touch using a Soft Tactile Sensor2017In: HAI '17: Proceedings of the 5th International Conference on Human Agent Interaction, New York: Association for Computing Machinery (ACM), 2017, p. 523-526Conference paper (Refereed)
    Abstract [en]

    This paper presents an ongoing study on affective human-robot interaction. In our previous research, touch type was shown to be informative for communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot, and six machine learning methods, including CNN, RNN, and C3D, are implemented to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate of 95% by C3D for the classified touch types, which provides stable classification results for developing social touch technology.

    Download full text (pdf)
    fulltext
  • 41.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. University of Plymouth, United Kingdom.
    What we need from an embodied cognitive architecture2019In: Cognitive Architectures / [ed] Maria Isabel Aldinhas Ferreira, João Silva Sequeira, Rodrigo Ventura, Cham: Springer, 2019, p. 43-57Chapter in book (Refereed)
    Abstract [en]

    Given that the original purpose of cognitive architectures was to lead to a unified theory of cognition, this chapter considers the possible contributions that cognitive architectures can make to embodied theories of cognition in particular. This is not a trivial question, since the field remains very much divided about what embodied cognition actually means, and we will see some example positions in this chapter. It is then argued that a useful embodied cognitive architecture would be one that can demonstrate (a) what precisely the role of the body in cognition actually is, and (b) whether a body is constitutively needed at all for some (or all) cognitive processes. It is proposed that such questions can be investigated if the cognitive architecture is designed so that the consequences of varying the precise embodiment on higher cognitive mechanisms can be explored. This is in contrast with, for example, those cognitive architectures in robotics that are designed for specific bodies first, or architectures in cognitive science that implement embodiment as an add-on to an existing framework (because then, that framework is by definition not constitutively shaped by the embodiment). The chapter concludes that the so-called semantic pointer architecture by Eliasmith and colleagues may be one framework that satisfies our desiderata and may be well-suited for studying theories of embodied cognition further.

  • 42.
    Thorvald, Peter
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Kolbeinsson, Ari
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Fogelberg, Emmie
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    A Review on Communicative Mechanisms of External HMIs in Human-Technology Interaction2022In: 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), IEEE, 2022Conference paper (Refereed)
    Abstract [en]

    The Operator 4.0 typology depicts the collaborative operator as one of eight working scenarios for operators in Industry 4.0. It signifies collaborative robot applications and the interaction between humans and robots working collaboratively or cooperatively towards a common goal. For this collaboration to run seamlessly and effortlessly, human-robot communication is essential. We briefly discuss what trust, predictability, and intentions are, before investigating the communicative features of both self-driving cars and collaborative robots. We found that although communicative external HMIs could arguably provide some benefits in both domains, an abundance of clues to what an autonomous car or a robot is about to do is easily accessible through the environment or could be created simply by understanding and designing legible motions.

    Download full text (pdf)
    fulltext
  • 43.
    Vellenga, Koen
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. Department of Data Analytics and Engineering, R&D, Volvo Car Corporation, Sweden.
    Steinhauer, H. Joe
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Falkman, Göran
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Björklund, Tomas
    Department of Data Analytics and Engineering, R&D, Volvo Car Corporation, Sweden.
    Evaluation of Video Masked Autoencoders' Performance and Uncertainty Estimations for Driver Action and Intention Recognition2024In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), IEEE, 2024, p. 7429-7437Conference paper (Refereed)
    Abstract [en]

    Traffic fatalities remain among the leading causes of death worldwide. To reduce this figure, car safety is listed as one of the most important factors. To actively support human drivers, it is essential for advanced driving assistance systems to be able to recognize the driver's actions and intentions. Prior studies have demonstrated various approaches to recognizing driving actions and intentions based on in-cabin and external video footage. Given the performance of self-supervised video pre-trained (SSVP) Video Masked Autoencoders (VMAEs) on multiple action recognition datasets, we evaluate the performance of SSVP VMAEs on the Honda Research Institute Driving Dataset for driver action recognition (DAR) and on the Brain4Cars dataset for driver intention recognition (DIR). Beyond performance, an artificial intelligence system applied in a safety-critical environment must be capable of expressing when it is uncertain about the produced results. Therefore, we also analyze uncertainty estimations produced by Bayes-by-Backprop last-layer (BBB-LL) and Monte-Carlo (MC) dropout variants of a VMAE. Our experiments show that a VMAE achieves higher overall performance for both offline DAR and end-to-end DIR compared to the state of the art. The analysis of the BBB-LL and MC dropout models shows higher uncertainty estimates for incorrectly classified test instances compared to correctly predicted ones.

    Download full text (pdf)
    fulltext
  • 44.
    Vernon, David
    University of Skövde, The Informatics Research Centre. University of Skövde, School of Humanities and Informatics.
    Goal-directed Action and Eligible Forms of Embodiment2013In: Constructivist Foundations, ISSN 1782-348X, Vol. 9, no 1, p. 85-86Article, review/survey (Refereed)
  • 45.
    Vernon, David
    University of Skövde, The Informatics Research Centre. University of Skövde, School of Humanities and Informatics.
    Interpreting Ashby - But which One?2013In: Constructivist Foundations, ISSN 1782-348X, Vol. 9, no 1, p. 111-113Article, review/survey (Refereed)
  • 46.
    Vernon, David
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Reconciling Constitutive and Behavioural Autonomy: The Challenge of Modelling Development in Enactive Cognition2016In: Intellectica, ISSN 0769-4113, Vol. 65, p. 63-79Article in journal (Refereed)
    Abstract [en]

    In the enactive paradigm of cognitive science, development plays a crucial role in the realization of cognition. This position runs counter to the computational functionalism upon which cognitivist and classical artificial intelligence systems are founded, especially the assumption that cognition can be achieved by embedding pre-formed knowledge. The enactive stance involves a progressive phased transition from cognitive capacity to cognitive capability, highlighting the role of development in extending the timescale of a cognitive agent’s prospective abilities and in expanding its repertoire of effective action. We briefly review some necessary conditions for cognitive development, drawing on examples from developmental psychology, illustrating the ideas by looking at the ontogenesis of instrumental helping and collaboration in infants, and identifying some of the essential elements of a developmental cognitive architecture. We then focus on the fact that enactive systems are operationally closed, autonomous, and self-maintaining. Consequently, there are organizational constitutive processes at play as well as behavioural ones. Reconciling these complementary processes poses a significant challenge for the creation of a complete model of development, which must show how constitutive autonomy is compatible with, and may even give rise to, behavioural autonomy. We conclude by drawing attention to recent research which could provide a way of addressing this challenge.

  • 47.
    Vuoluterä, Fredrik
    University of Skövde, School of Engineering Science.
    Quality inspection of multiple product variants using neural network modules2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Maintaining quality outcomes is an essential task for any manufacturing organization. Visual inspections have long been an avenue to detect defects in manufactured products, and recent advances within the field of deep learning have led to a surge of research into how technologies like convolutional neural networks can be used to perform these quality inspections automatically. An alternative to these often large and deep network structures is the modular neural network, which can instead divide a classification task into several sub-tasks to decrease the overall complexity of a problem. To investigate how these two approaches to image classification compare in a quality inspection task, a case study was performed at AR Packaging, a manufacturer of food containers. The many different colors, prints, and geometries present in the AR Packaging product family served as a natural source of complexity for the quality classification task. A modular network was designed, formed by one routing module that classifies the variant type and then delegates the quality classification to an expert module trained for that specific variant. An image dataset was manually generated from within the production environment, portraying a range of product variants in both defective and non-defective form. An image processing algorithm was developed to minimize image background and align the products in the pictures. To evaluate the adaptability of the two approaches, the networks were initially trained on the same data from five variants, and then retrained with added data from a sixth variant. The modular networks were found to be overall less accurate and slower in their classification than the conventional single networks. However, the modular networks were more than six times smaller and required less time to train initially, though the retraining times were roughly equivalent in both approaches. Retraining the single network also caused some fluctuation in predictive accuracy, something not noted in the modular network.

    Download full text (pdf)
    fulltext
  • 48.
    Zhu, Xiaomeng
    et al.
    Scania CV AB, Sweden ; KTH Royal Institute of Technology, Sweden.
    Bilal, Talha
    Scania CV AB, Sweden.
    Mårtensson, Pär
    Scania CV AB, Sweden.
    Hanson, Lars
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Björkman, Mårten
    KTH Royal Institute of Technology, Sweden.
    Maki, Atsuto
    KTH Royal Institute of Technology, Sweden.
    Towards Sim-to-Real Industrial Parts Classification with Synthetic Dataset2023In: Proceedings, 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: Vancouver, Canada 18 – 22 June 2023, IEEE, 2023, p. 4454-4463Conference paper (Refereed)
    Abstract [en]

    This paper is about effectively utilizing synthetic data for training deep neural networks for industrial parts classification, in particular by taking into account the domain gap against real-world images. To this end, we introduce a synthetic dataset that may serve as a preliminary testbed for the Sim-to-Real challenge; it contains 17 objects from six industrial use cases, including isolated and assembled parts. A few subsets of objects exhibit large similarities in shape and albedo to reflect challenging cases of industrial parts. All the sample images come with and without random backgrounds and post-processing for evaluating the importance of domain randomization. We call it the Synthetic Industrial Parts dataset (SIP-17). We study the usefulness of SIP-17 by benchmarking the performance of five state-of-the-art deep network models, supervised and self-supervised, trained only on the synthetic data while testing them on real data. By analyzing the results, we deduce some insights on the feasibility and challenges of using synthetic data for industrial parts classification and for further developing larger-scale synthetic datasets. Our dataset and code are publicly available.

  • 49.
    Zhu, Xiaomeng
    et al.
    Scania CV AB (publ), Södertälje, Sweden ; KTH Royal Institute of Technology, Stockholm, Sweden.
    Björkman, Mårten
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Maki, Atsuto
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Hanson, Lars
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Mårtensson, Pär
    Scania CV AB (publ), Södertälje, Sweden.
    Surface Defect Detection with Limited Training Data: A Case Study on Crown Wheel Surface Inspection2023In: Procedia CIRP, ISSN 2212-8271, E-ISSN 2212-8271, Vol. 120, p. 1333-1338Article in journal (Refereed)
    Abstract [en]

    This paper presents an approach to automatic surface defect detection using a deep learning-based object detection method, particularly in challenging scenarios where defects are rare, i.e., with limited training data. We base our approach on the object detection model YOLOv8, preceded by a few steps: 1) filtering out irrelevant information, 2) enhancing the visibility of defects, namely through brightness contrast, and 3) increasing the diversity of the training data through data augmentation. We evaluated the method in an industrial case study of crown wheel surface inspection, detecting Unclean Gear as well as Deburring defects, resulting in promising performance. With the combination of the three preprocessing steps, we improved the detection accuracy by 22.2% and 37.5%, respectively, for those two defects. We believe that the proposed approach is also adaptable to various applications of surface defect detection in other industrial environments, as the employed techniques, such as image segmentation, are available off the shelf.

    Download full text (pdf)
    fulltext