his.se Publications
1 - 9 of 9
  • 1.
    Ericson, Stefan
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Vision-Based Perception for Localization of Autonomous Agricultural Robots, 2017, Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, Stefan investigates how cameras can be used for localization of an agricultural mobile robot. He focuses on relative measurements that can be used to determine where a weeding tool is operating relative to a weed detection sensor. The work incorporates downward-facing perspective cameras, forward-facing perspective cameras and omnidirectional cameras. Stefan shows how the camera’s ego-motion can be estimated to obtain not only the position in 3D but also the orientation. He also shows how line structures in the field can be used to navigate a robot along the rows.

  • 2.
    Ericson, Stefan
    et al.
    University of Skövde, School of Technology and Society.
    Hedenberg, Klas
    University of Skövde, School of Technology and Society.
    Johansson, Ronnie
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Information Fusion for Autonomous Robotic Weeding, 2009, In: INFORMATIK 2009: Im Focus das Leben / [ed] Stefan Fischer, Erik Maehle, Rüdiger Reischuk, Köllen Druck + Verlag GmbH, 2009, p. 2461-2473, Conference paper (Refereed)
    Abstract [en]

    Information fusion has a potential applicability to a multitude of different applications. Still, the JDL model is mostly used to describe defense applications. This paper describes the information fusion process for a robot removing weed in a field. We analyze the robotic system by relating it to the JDL model functions. The civilian application we consider here has some properties which differ from the typical defense applications: (1) indifferent environment and (2) a predictable and structured process to achieve its objectives. As a consequence, situation estimates tend to deal with internal properties of the robot and its mission progress (through mission state transition) rather than external entities and their relations. Nevertheless, the JDL model appears useful for describing the fusion activities of the weeding robot system. We provide an example of how state transitions may be detected and exploited using information fusion and report on some initial results. An additional finding is that process refinement for this type of application can be expressed in terms of a finite state machine.
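    (A minimal finite-state-machine sketch of this mission-state idea appears after the result list.)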

  • 3.
    Ericson, Stefan K.
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Åstrand, Björn S.
    School of Information Science, Computer and Electrical Engineering, Halmstad University, Halmstad, Sweden.
    Analysis of two visual odometry systems for use in an agricultural field environment, 2018, In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 166, p. 116-125, Article in journal (Refereed)
    Abstract [en]

    This paper analyses two visual odometry systems for use in an agricultural field environment. The impact of various design parameters and camera setups is evaluated in a simulation environment. Four real field experiments were conducted using a mobile robot operating in an agricultural field. The robot was controlled to travel in a regular back-and-forth pattern with headland turns. The experimental runs were 1.8–3.1 km long and consisted of 32–63,000 frames. The results indicate that a camera angle of 75° gives the best results with the least error. An increased camera resolution only improves the result slightly. The algorithm must be able to reduce error accumulation by adapting the frame rate. The results also illustrate the difficulties of estimating roll and pitch using a downward-facing camera. The best results for full 6-DOF position estimation were obtained on a 1.8-km run using 6680 frames captured from the forward-facing cameras. The translation error (x, y, z) is 3.76% and the rotational error (i.e., roll, pitch, and yaw) is 0.0482 deg m⁻¹. The main contributions of this paper are an analysis of design option impacts on visual odometry results and a comparison of two state-of-the-art visual odometry algorithms, applied to agricultural field data.

  • 4.
    Ericson, Stefan
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, B.
    Halmstad University.
    A vision-guided mobile robot for precision agriculture, 2009, In: Proceedings of 7th European Conference on Precision Agriculture / [ed] Eldert J. van Henten, D. Goense and C. Lokhorst, Wageningen Academic Publishers, 2009, p. 623-630, Conference paper (Refereed)
    Abstract [en]

    In this paper we present a mobile robot that is able to perform crop-scale operations using vision as its only sensor. The system consists of a row-following system and a visual odometry system. The row-following system captures images from a forward-looking camera on the robot, and the crop rows are extracted using the Hough transform. Both the distance to the rows and the heading angle are provided, and both are used to control the steering. The visual odometry system uses two cameras in a stereo setup pointing perpendicular to the ground. This system measures the travelled distance by measuring the ground movement and compensates for height variation. Experiments were performed on an artificial field due to the season. The results show that the visual odometry has an accuracy better than 2.1% of the travelled distance.
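    (An illustrative Hough-transform row-detection sketch appears after the result list.)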

  • 5.
    Ericson, Stefan
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, B
    Halmstad University, Box 823, 30118 Halmstad, Sweden.
    Proceedings of the 13th IASTED International Conference on Robotics and Applications, RA 2007 and Proceedings of the IASTED International Conference on Telematics, 2007, In: Robotics and Applications and Telematics, ACTA Press, 2007, p. 287-292, Conference paper (Refereed)
  • 6.
    Ericson, Stefan
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, Björn
    School of Information Science, Computer and Electrical Engineering, Halmstad University, Halmstad, Sweden.
    Row-detection on an agricultural field using omnidirectional camera, 2010, In: The IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010): Conference Proceedings, IEEE conference proceedings, 2010, p. 4982-4987, Conference paper (Refereed)
    Abstract [en]

    This paper describes a method of detecting parallel rows on an agricultural field using an omnidirectional camera. The method works both on cameras with a fisheye lens and cameras with a catadioptric lens. A combination of an edge-based method and a Hough transform method is suggested to find the rows. The vanishing point of several parallel rows is estimated using a second Hough transform. The method is evaluated on synthetic images generated with calibration data from real lenses. Scenes with several rows are produced, where each plant is positioned with a specified error. Experiments are performed on these synthetic images and on real field images. The results show that good accuracy is obtained for the vanishing point once it is detected correctly. Further, they show that the edge-based method works best when the rows consist of solid lines, and the Hough method works best when the rows consist of individual plants. The experiments also show that the combined method provides better detection than using the methods separately.
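    (The Hough-transform row-detection sketch after the result list also illustrates the vanishing-point step described here.)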

  • 7.
    Ericson, Stefan
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, Björn
    Halmstad University.
    Stereo Visual Odometry for Mobile Robots on Uneven Terrain, 2008, In: Advances in Electrical and Electronics Engineering - IAENG Special Edition of the World Congress on Engineering and Computer Science 2008 / [ed] Sio-Iong Ao, IEEE Computer Society, 2008, p. 150-157, Conference paper (Refereed)
    Abstract [en]

    In this paper we present a stereo visual odometry system for mobile robots that is not sensitive to uneven terrain. Two cameras are mounted perpendicular to the ground, and height and traveled distance are calculated using normalized cross correlation. A method for evaluating the system is developed, where flower boxes containing representative surfaces are placed in a metal-working lathe. The cameras are mounted on the carriage, which can be positioned manually with 0.1 mm accuracy. Images are captured every 10 mm over 700 mm. The tests are performed on eight different surfaces representing real-world situations. The resulting error is less than 0.6% of traveled distance on surfaces where the maximum height variation is measured to be 96 mm. The variance over eight test runs, 5.6 m in total, is measured to be 0.040 mm. This accuracy is sufficient for crop-scale agricultural operations.

  • 8.
    Ericson, Stefan
    et al.
    University of Skövde, School of Technology and Society.
    Åstrand, Björn
    Halmstad University.
    Visual Odometry System for Agricultural Field Robots, 2008, In: World Congress on Engineering and Computer Science 2008, IAENG, 2008, p. 619-624, Conference paper (Refereed)
    Abstract [en]

    In this paper we present a visual odometry system for agricultural field robots that is not sensitive to uneven terrain. A stereo camera system is mounted perpendicular to the ground, and height and traveled distance are calculated using normalized cross correlation. A method for evaluating the system is developed, where flower boxes containing representative surfaces are placed in a metal-working lathe. The cameras are mounted on the carriage, which can be positioned manually with 0.1 mm accuracy. Images are captured every 10 mm over 700 mm. The tests are performed on eight different surfaces representing real-world situations. The resulting error is less than 0.6% of traveled distance on surfaces where the maximum height variation is measured to be 96 mm. The variance over eight test runs, 5.6 m in total, is measured to be 0.040 mm. This accuracy is sufficient for crop-scale agricultural operations.
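    (A minimal normalized-cross-correlation displacement sketch, covering the measurement step in entries 7 and 8, appears after the result list.)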

  • 9.
    Huang, Rui
    et al.
    School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, China.
    Ericson, Stefan
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    An Efficient Way to Estimate the Focus of Expansion, 2018, In: 2018 3rd IEEE International Conference on Image, Vision and Computing (ICIVC 2018), IEEE, 2018, p. 691-695, Conference paper (Refereed)
    Abstract [en]

    Detecting independent motion from a single camera is a difficult task in computer vision, because the captured image sequences are combinations of the objects' movements and the camera's ego-motion. One major branch of work instead takes finding the focus of expansion (FOE) as the goal. This is ideal for the situation commonly seen in a UAV's camera system, where translation is dominant in the camera's motion while the rotation is relatively small. To separate the ego-motion and the scene structure, many researchers have used the directional flow as the theoretical basis and extracted its properties related to the FOE. In this paper, we formulate finding the FOE as an optimization problem: the position of the FOE has the minimal standard deviation of the directional flow in all directions, subject to an introduced constraint. The experiments show the proposed methods outperform the previous method.
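    (A least-squares FOE sketch, illustrating the general idea of estimating the FOE from flow vectors, appears after the result list.)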

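The sketches below illustrate, in rough Python, some of the techniques described in the entries above. They are hedged reconstructions built only from the abstracts; names, thresholds, and file names are assumptions, not the authors' implementations.

Entry 2 notes that process refinement for the weeding robot can be expressed as a finite state machine over mission states. A minimal sketch of that idea; the state and event names are hypothetical, since the abstract does not give the actual state set:

# Minimal mission-state machine for a weeding robot (sketch only).
# State and event names are hypothetical illustrations, not taken from the paper.
TRANSITIONS = {
    ("FOLLOW_ROW", "weed_detected"):       "WEEDING",
    ("WEEDING", "weed_removed"):           "FOLLOW_ROW",
    ("FOLLOW_ROW", "end_of_row_detected"): "HEADLAND_TURN",
    ("HEADLAND_TURN", "new_row_acquired"): "FOLLOW_ROW",
    ("FOLLOW_ROW", "field_completed"):     "DONE",
}

def step(state, event):
    """Return the next mission state; stay in place if the event is unexpected."""
    return TRANSITIONS.get((state, event), state)

if __name__ == "__main__":
    state = "FOLLOW_ROW"
    for event in ("weed_detected", "weed_removed",
                  "end_of_row_detected", "new_row_acquired", "field_completed"):
        state = step(state, event)
        print(event, "->", state)

Detecting the transition events themselves is where the paper's information fusion comes in; the sketch only shows the state bookkeeping.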
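Entries 4 and 6 extract crop rows with a Hough transform and, in entry 6, estimate the vanishing point of the parallel rows. A sketch of such a pipeline for an ordinary perspective image, assuming OpenCV is available; the excess-green segmentation, the thresholds, and the least-squares vanishing-point step are illustrative choices, not the exact methods of the papers:

# Hough-transform crop-row detection with a vanishing-point estimate (sketch).
# Segmentation, thresholds, and the file name are assumptions, not the papers' values.
import cv2
import numpy as np

def detect_rows(bgr_image, hough_threshold=120):
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    excess_green = 2 * g - r - b                        # simple vegetation index
    mask = (excess_green > 20).astype(np.uint8) * 255   # crude plant mask
    lines = cv2.HoughLines(mask, 1, np.pi / 180, hough_threshold)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (rho, theta) pairs

def vanishing_point(lines):
    """Least-squares point closest to all lines x*cos(theta) + y*sin(theta) = rho."""
    A = np.array([[np.cos(t), np.sin(t)] for _, t in lines])
    b = np.array([rho for rho, _ in lines])
    (x, y), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x, y

if __name__ == "__main__":
    img = cv2.imread("field.png")      # hypothetical test image
    rows = detect_rows(img)
    if len(rows) >= 2:
        print("vanishing point (pixels):", vanishing_point(rows))

The distance to a row and the heading angle used for steering in entry 4 can be read off the dominant (rho, theta) line; entry 6 instead applies a second Hough transform over the detected lines to find the vanishing point.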
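Entries 4, 7, and 8 measure traveled distance with ground-facing cameras by tracking ground texture between consecutive frames using normalized cross correlation. A sketch of the per-frame displacement measurement, assuming OpenCV; the template size, search margin, metric scale, and file names are placeholders, and the stereo height compensation described in the papers is omitted:

# Frame-to-frame ground displacement via normalized cross correlation (sketch).
# Sizes, the mm-per-pixel scale, and file names are placeholders; the papers'
# stereo height compensation is not included here.
import cv2
import numpy as np

def ncc_displacement(prev_gray, curr_gray, template_size=64, search_margin=32):
    h, w = prev_gray.shape
    cy, cx = h // 2, w // 2
    t = template_size // 2
    template = prev_gray[cy - t:cy + t, cx - t:cx + t]      # patch from previous frame
    s = t + search_margin
    search = curr_gray[cy - s:cy + s, cx - s:cx + s]        # search window in current frame
    response = cv2.matchTemplate(search, template, cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(response)              # location of the best match
    dx = max_loc[0] - search_margin                         # pixel shift in x
    dy = max_loc[1] - search_margin                         # pixel shift in y
    return dx, dy

if __name__ == "__main__":
    MM_PER_PIXEL = 0.5                                      # hypothetical ground resolution
    prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    dx, dy = ncc_displacement(prev, curr)
    print("travelled %.1f mm" % (MM_PER_PIXEL * np.hypot(dx, dy)))

Summing these per-frame displacements gives the traveled distance; in the papers a stereo pair additionally estimates the camera height so that the pixel-to-millimetre scale can be corrected on uneven terrain.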
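Entry 9 formulates FOE estimation as an optimization over the directional flow. The sketch below instead uses the classic least-squares formulation, in which flow vectors under pure translation radiate from the FOE; it is not the standard-deviation criterion the paper proposes, only an illustration of estimating the FOE from flow vectors:

# Least-squares focus-of-expansion (FOE) estimate from sparse optical flow (sketch).
# This is the classic radial-flow formulation, not the paper's optimization criterion.
import numpy as np

def estimate_foe(points, flows):
    """points: (N, 2) pixel positions; flows: (N, 2) flow vectors (u, v).

    Under pure camera translation each flow vector is parallel to the ray from
    the FOE to its pixel, so v*(x - fx) - u*(y - fy) = 0 for every point.
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])          # coefficients of (fx, fy)
    b = x * v - y * u
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe                            # (fx, fy) in pixels

if __name__ == "__main__":
    # Synthetic check: noisy flow radiating from a known FOE at (320, 240).
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 640, size=(200, 2))
    flw = 0.05 * (pts - np.array([320.0, 240.0])) + rng.normal(0, 0.1, size=(200, 2))
    print("estimated FOE:", estimate_foe(pts, flw))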