his.se Publications
1 - 6 of 6
  • 1.
    Ericson, Stefan
    et al.
    University of Skövde, School of Technology and Society.
    Hedenberg, Klas
    University of Skövde, School of Technology and Society.
    Johansson, Ronnie
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Information Fusion for Autonomous Robotic Weeding. 2009. In: INFORMATIK 2009: Im Focus das Leben / [ed] Stefan Fischer, Erik Maehle, Rüdiger Reischuk, Köllen Druck + Verlag GmbH, 2009, p. 2461-2473. Conference paper (Refereed)
    Abstract [en]

    Information fusion has a potential applicability to a multitude of different applications. Still, the JDL model is mostly used to describe defense applications. This paper describes the information fusion process for a robot removing weed in a field. We analyze the robotic system by relating it to the JDL model functions. The civilian application we consider here has some properties which differ from the typical defense applications: (1) an indifferent environment and (2) a predictable and structured process to achieve its objectives. As a consequence, situation estimates tend to deal with internal properties of the robot and its mission progress (through mission state transitions) rather than external entities and their relations. Nevertheless, the JDL model appears useful for describing the fusion activities of the weeding robot system. We provide an example of how state transitions may be detected and exploited using information fusion and report on some initial results. An additional finding is that process refinement for this type of application can be expressed in terms of a finite state machine.

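The abstract's closing finding, that process refinement for this kind of mission can be expressed as a finite state machine, lends itself to a short sketch. The states and events below are purely illustrative assumptions, not taken from the paper:

```python
class MissionFSM:
    """Minimal finite state machine tracking a weeding robot's mission
    progress. Each fused observation is reduced to an event that may
    trigger a state transition."""

    # (state, event) -> next state; states and events are hypothetical
    TRANSITIONS = {
        ("idle", "row_detected"): "following_row",
        ("following_row", "weed_detected"): "removing_weed",
        ("removing_weed", "weed_removed"): "following_row",
        ("following_row", "row_end"): "turning",
        ("turning", "row_detected"): "following_row",
        ("turning", "field_done"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def step(self, event):
        """Advance the FSM; events with no defined transition are ignored."""
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state


fsm = MissionFSM()
for ev in ["row_detected", "weed_detected", "weed_removed", "row_end"]:
    fsm.step(ev)
print(fsm.state)  # -> turning
```

Process refinement then amounts to choosing sensing and fusion behavior per state, e.g. weed classification only while following a row.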
  • 2.
    Hedenberg, Klas
    et al.
    University of Skövde, School of Technology and Society.
    Baerveldt, Albert-Jan
    Halmstad University.
    Stereo vision-based collision avoidance. 2004. In: Conference proceedings - the 9th Mechatronics Forum international conference: August 30 - September 1, 2004, Ankara, Turkey / [ed] Abdulkadir Erden, Bülent E. Platin, Memis Acar, 2004, p. 259-270. Conference paper (Other academic)
    Abstract [en]

    This paper investigates whether a stereo vision system based on points of interest is robust enough to detect obstacles for applications like a mobile robot in an industrial environment and for the visually impaired. Points of interest are extracted with a known method, called KLT. Two algorithms to solve the correspondence problem (Sum of Squared Difference and Variance Normalized Correlation) are used and evaluated as well as a combination of the two. An improvement is made if the two algorithms are combined. The tests show that stereo vision based on points of interest only can be used robustly for obstacle detection if there is enough texture on the obstacle. Otherwise too few points of interest on the object are detected and a reliable estimation of the distance to the object cannot be made.

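The two correspondence measures the abstract names, Sum of Squared Differences and Variance Normalized Correlation, can be sketched as patch-matching costs. This is a generic illustration of the two measures, not the paper's implementation; patch contents are arbitrary:

```python
import numpy as np


def ssd(patch_l, patch_r):
    """Sum of Squared Differences: lower means a better match.
    Sensitive to brightness differences between the two views."""
    d = patch_l.astype(float) - patch_r.astype(float)
    return float(np.sum(d * d))


def vnc(patch_l, patch_r):
    """Variance Normalized Correlation: higher means a better match.
    Subtracting each patch's mean and dividing by its standard deviation
    makes the score invariant to affine brightness changes, which is why
    combining it with SSD can improve robustness."""
    a = patch_l.astype(float).ravel()
    b = patch_r.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))
```

In a points-of-interest pipeline, each interest point in the left image would be scored with these measures against candidate patches along the corresponding epipolar line in the right image.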
  • 3.
    Hedenberg, Klas
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Åstrand, Bjorn
    School of Information Technology, Halmstad University, Halmstad, Sweden.
    3D Sensors on Driverless Trucks for Detection of Overhanging Objects in the Pathway. 2016. In: Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor / [ed] Roger Bostelman, Elena Messina, West Conshohocken, PA: ASTM International, 2016, p. 41-56. Chapter in book (Refereed)
    Abstract [en]

    Human-operated and driverless trucks often collaborate in a mixed work space in industries and warehouses. This is more efficient and flexible than using only one kind of truck. However, because driverless trucks need to give way to driven trucks, a reliable detection system is required. Several challenges exist in the development of such a system. The first is to select interesting situations and objects. Overhanging objects are often found in industrial environments (e.g., tines on a forklift). Second is choosing a system that has the ability to detect those situations. (The traditional laser scanner situated two decimetres above the floor does not detect overhanging objects.) Third is to ensure that the perception system is reliable. A solution used on trucks today is to mount a two-dimensional laser scanner on top and tilt the scanner toward the floor. However, objects at the top of the truck will be detected too late, and a collision cannot always be avoided. Our aim is to replace the upper two-dimensional laser scanner with a three-dimensional camera, structural light, or time-of-flight (TOF) camera. It is important to maximize the field of view in the desired detection volume. Hence, the sensor placement is important. We conducted laboratory experiments to check and compare the various sensors' capabilities for different colors, using tines and a model of a tine in a controlled industrial environment. We also conducted field experiments in a warehouse. Our conclusion is that both the tested structural light and TOF sensors have problems detecting black items that are non-perpendicular to the sensor. It is important to optimize the light economy—meaning the illumination power, field of view, and exposure time—in order to detect as many different objects as possible.

  • 4.
    Hedenberg, Klas
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, Björn
    University of Halmstad.
    A Trinocular Stereo System for Detection of Thin Horizontal Structures. 2009. In: Advances in Electrical and Electronics Engineering - IAENG Special Edition of the World Congress on Engineering and Computer Science 2008, IEEE Computer Society, 2009, p. 211-218. Conference paper (Refereed)
    Abstract [en]

    Many vision-based approaches for obstacle detection state that thin vertical structures, e.g. poles and trees, are of importance. However, there are also problems in detecting thin horizontal structures. In an industrial setting there are horizontal objects, e.g. cables and fork lifts, and slanting objects, e.g. ladders, that also have to be detected. This paper focuses on the problem of detecting thin horizontal structures. We introduce a test apparatus for thin objects as a complement to the test pieces for human safety described in the European standard EN 1525 Safety of industrial trucks – Driverless trucks and their systems. The system uses three cameras, arranged as a horizontal pair and a vertical pair, which makes it possible to also detect thin horizontal structures. A sparse disparity map based on edges and a dense disparity map are used to identify problems with a trinocular system. Both methods use the Sum of Absolute Differences to compute the disparity maps. Tests show that the proposed trinocular system detects all objects on the test apparatus; whether a sparse or a dense method is used is not critical. Further work will implement the algorithm in real time and verify it on a final system in many types of scenery.

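The Sum of Absolute Differences cost that both disparity methods rely on can be sketched for a single rectified scanline pair. This is a minimal illustration of SAD-based disparity search, not the paper's trinocular system; window size and disparity range are arbitrary assumptions:

```python
import numpy as np


def sad(a, b):
    """Sum of Absolute Differences between two image patches."""
    return float(np.sum(np.abs(a.astype(float) - b.astype(float))))


def best_disparity(left, right, row, col, half=2, max_disp=16):
    """Return the disparity minimizing SAD for the left-image pixel
    (row, col), searching leftward along the same row of the right
    image (standard rectified-stereo geometry: x_right = x_left - d)."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    best_cost, best_d = None, 0
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:          # candidate window would leave the image
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        cost = sad(ref, cand)
        if best_cost is None or cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

A sparse method evaluates this only at edge pixels (cheap, but needs texture), while a dense method evaluates it at every pixel, which is the trade-off the abstract compares.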
  • 5.
    Hedenberg, Klas
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, Björn
    Halmstad University.
    Obstacle detection for thin horizontal structures. 2008. In: World Congress on Engineering and Computer Science 2008, IAENG, 2008, p. 689-693. Conference paper (Refereed)
    Abstract [en]

    Many vision-based approaches for obstacle detection state that thin vertical structures, e.g. poles and trees, are of importance. However, there are also problems in detecting thin horizontal structures. In an industrial setting there are horizontal objects, e.g. cables and fork lifts, and slanting objects, e.g. ladders, that also have to be detected. This paper focuses on the problem of detecting thin horizontal structures. The system uses three cameras, arranged as a horizontal pair and a vertical pair, which makes it possible to also detect thin horizontal structures. A comparison is made between a sparse disparity map based on edges and a dense disparity map with a column and row filter. Both methods use the Sum of Absolute Differences to compute the disparity maps. Special interest has been given to scenes with thin horizontal objects. Tests show that a trinocular system with the sparse method based on the Canny edge detector works better for the environments we have tested.


  • 6. Khammari, Leila
    et al.
    De Vin, Leo
    University of Skövde, School of Technology and Society.
    Hedenberg, Klas
    University of Skövde, School of Technology and Society.
    Change detection algorithms for vision supported navigation of AVGs. 2004. Conference paper (Other academic)