Högskolan i Skövde

his.se Publications
Publications (10 of 11)
Darwish, A., Ericson, S., Ghasemi, R., Andersson, T., Lönn, D., Andersson Lassila, A. & Salomonsson, K. (2024). Investigating the ability of deep learning to predict welding depth and pore volume in hairpin welding. Journal of Laser Applications, 36(4), Article ID 042010.
Investigating the ability of deep learning to predict welding depth and pore volume in hairpin welding
2024 (English) In: Journal of Laser Applications, ISSN 1042-346X, Vol. 36, no 4, article id 042010. Article in journal (Refereed) Published
Abstract [en]

To advance quality assurance in the welding process, this study presents a deep learning (DL) model that enables the prediction of two critical weld key performance characteristics (KPCs): welding depth and average pore volume. In the proposed approach, a wide range of laser welding key input characteristics (KICs) is utilized, including welding beam geometries, welding feed rates, path repetitions for weld beam geometries, and bright light weld ratios for all paths, all of which were obtained from hairpin welding experiments. Two DL networks are employed with multiple hidden dense layers and linear activation functions to investigate the capabilities of deep neural networks in capturing the complex nonlinear relationships between the welding input and output variables (KICs and KPCs). Applying the DL networks to the small numerical dataset from the hairpin welding experiments has shown promising results, achieving mean absolute error values of 0.1079 for predicting welding depth and 0.0641 for average pore volume. This, in turn, promises significant advantages in controlling welding outcomes, moving beyond the current trend in weld monitoring of relying only on defect classification, toward capturing the correlation between the weld parameters and the weld geometries.
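As a rough illustration of the kind of network the abstract describes (multiple hidden dense layers with linear activations, trained to minimise mean absolute error on tabular weld inputs), the sketch below uses PyTorch. The layer widths, optimiser, learning rate, and number of input features are illustrative assumptions, not values reported in the paper.

```python
# Illustrative sketch only: a small dense regression network in the spirit of
# the abstract, predicting welding depth and average pore volume from tabular
# weld inputs.  All sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class WeldRegressor(nn.Module):
    def __init__(self, n_inputs: int):
        super().__init__()
        # Hidden dense layers with linear activations, as the abstract describes.
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64),
            nn.Linear(64, 32),
            nn.Linear(32, 2),  # outputs: [welding_depth, avg_pore_volume]
        )

    def forward(self, x):
        return self.net(x)

model = WeldRegressor(n_inputs=8)        # e.g. beam geometry, feed rate, ...
loss_fn = nn.L1Loss()                    # mean absolute error, as reported
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 8)                   # placeholder batch of weld inputs
y = torch.randn(16, 2)                   # placeholder targets
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```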

Place, publisher, year, edition, pages
AIP Publishing, 2024
National Category
Manufacturing, Surface and Joining Technology; Computer Sciences
Research subject
Virtual Manufacturing Processes
Identifiers
urn:nbn:se:his:diva-24525 (URN)10.2351/7.0001509 (DOI)001313856500003 ()2-s2.0-85210744287 (Scopus ID)
Funder
Vinnova, 2021-03693
Note

Author to whom correspondence should be addressed; electronic mail: amena.darwish@his.se

AIP Publishing is a wholly owned not-for-profit subsidiary of the American Institute of Physics (AIP).

Paper published as part of the special topic on Laser Manufacturing for Future Mobility

Available from: 2024-09-17 Created: 2024-09-17 Last updated: 2024-12-12. Bibliographically approved
Lidell, A., Ericson, S. & Ng, A. H. C. (2022). The Current and Future Challenges for Virtual Commissioning and Digital Twins of Production Lines. In: Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm (Ed.), SPS2022: Proceedings of the 10th Swedish Production Symposium. Paper presented at 10th Swedish Production Symposium (SPS2022), Skövde, April 26–29 2022 (pp. 508-519). Amsterdam; Berlin; Washington, DC: IOS Press
The Current and Future Challenges for Virtual Commissioning and Digital Twins of Production Lines
2022 (English) In: SPS2022: Proceedings of the 10th Swedish Production Symposium / [ed] Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm, Amsterdam; Berlin; Washington, DC: IOS Press, 2022, p. 508-519. Conference paper, Published paper (Refereed)
Abstract [en]

The use of virtual commissioning has increased in the last decade, but challenges remain before this software code validation method is in widespread use. One extension of virtual commissioning is digital twin technology, which allows for further improved accuracy. The aim of this paper is to review existing standards and approaches to developing virtual commissioning, through a literature review and interviews with experts in the industry. First, the definitions and classifications related to virtual commissioning and digital twins are reviewed; then, the approaches to developing virtual commissioning and digital twins reported in the literature are explored. Next, in three interviews with experts of varying backgrounds and competencies, the views on these virtual technologies are assessed to provide new insight for the industry. The findings of the literature review and interviews include the apparent need for standardisation in the field and the fact that a sought-after standard, in the form of ISO 23247-1, is underway. The key finding of this paper is that the digital twin is a concept with a promising future in combination with other Industry 4.0 technologies. We also outline the challenges and possibilities of virtual commissioning and the digital twin, which could be used as a starting point for further research into standardisation and improvements arising from the new standard.

Place, publisher, year, edition, pages
Amsterdam; Berlin; Washington, DC: IOS Press, 2022
Series
Advances in Transdisciplinary Engineering, ISSN 2352-751X, E-ISSN 2352-7528 ; 21
Keywords
Virtual commissioning, digital twin, simulation, production system, literature review, interview
National Category
Production Engineering, Human Work Science and Ergonomics; Robotics and automation; Computer Systems
Research subject
Production and Automation Engineering; VF-KDO
Identifiers
urn:nbn:se:his:diva-21108 (URN)10.3233/ATDE220169 (DOI)001191233200043 ()2-s2.0-85132830658 (Scopus ID)978-1-64368-268-6 (ISBN)978-1-64368-269-3 (ISBN)
Conference
10th Swedish Production Symposium (SPS2022), Skövde, April 26–29 2022
Note

CC BY-NC 4.0

Corresponding Author, Anton Lidell, University of Skövde, Sweden, E-mail: antonlidell@live.se

Available from: 2022-05-02 Created: 2022-05-02 Last updated: 2025-02-05. Bibliographically approved
Huang, R. & Ericson, S. (2018). An Efficient Way to Estimate the Focus of Expansion. In: 2018 3rd IEEE International Conference on Image, Vision and Computing (ICIVC 2018): . Paper presented at 2018 3rd IEEE International Conference on Image, Vision and Computing (ICIVC 2018), Chongqing, China, June 27-29, 2018 (pp. 691-695). IEEE
An Efficient Way to Estimate the Focus of Expansion
2018 (English) In: 2018 3rd IEEE International Conference on Image, Vision and Computing (ICIVC 2018), IEEE, 2018, p. 691-695. Conference paper, Published paper (Refereed)
Abstract [en]

Detecting independent motion from a single camera is a difficult task in computer vision, because the captured image sequences combine the objects' movements with the camera's ego-motion. One major branch of work instead takes finding the focus of expansion (FOE) as the goal. This is well suited to the situation commonly seen in a UAV's camera system, where translation dominates the camera's motion while rotation is relatively small. To separate ego-motion from scene structure, many researchers have used directional flow as the theoretical basis and extracted its properties related to the FOE. In this paper, we formulate finding the FOE as an optimization problem: the position of the FOE minimizes the standard deviation of the directional flow over all directions, subject to the introduced constraint. The experiments show that the proposed methods outperform the previous method.
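A minimal numpy sketch of the stated idea, treating the FOE as the point that minimises the spread of the directional flow, is given below. The grid search and the use of the angular deviation between each flow vector and the radial direction from the candidate FOE are assumptions made for illustration; they are not the authors' exact formulation or constraint.

```python
# Illustrative sketch: pick the candidate FOE whose radial directions best
# agree with the measured optical flow (minimal spread of the deviations).
import numpy as np

def foe_cost(candidate, points, flows):
    """Spread of flow directions relative to radial directions from a candidate FOE."""
    radial = points - candidate                       # expected expansion directions
    ang_flow = np.arctan2(flows[:, 1], flows[:, 0])
    ang_rad = np.arctan2(radial[:, 1], radial[:, 0])
    dev = np.angle(np.exp(1j * (ang_flow - ang_rad)))  # wrap differences to [-pi, pi]
    return np.std(dev)

def estimate_foe(points, flows, grid):
    costs = [foe_cost(c, points, flows) for c in grid]
    return grid[int(np.argmin(costs))]

# Synthetic usage: purely expanding flow around a known FOE.
points = np.random.rand(100, 2) * 640
true_foe = np.array([320.0, 240.0])
flows = (points - true_foe) * 0.05
xs, ys = np.meshgrid(np.arange(0, 640, 16), np.arange(0, 480, 16))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
print(estimate_foe(points, flows, grid))              # should be near (320, 240)
```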

Place, publisher, year, edition, pages
IEEE, 2018
Keywords
focus of expansion, directional flow, independent motion detection
National Category
Robotics and automation
Research subject
Production and Automation Engineering; INF201 Virtual Production Development
Identifiers
urn:nbn:se:his:diva-16399 (URN)10.1109/ICIVC.2018.8492881 (DOI)000448170000136 ()2-s2.0-85056554769 (Scopus ID)978-1-5386-4992-3 (ISBN)978-1-5386-4991-6 (ISBN)978-1-5386-4990-9 (ISBN)
Conference
2018 3rd IEEE International Conference on Image, Vision and Computing (ICIVC 2018), Chongqing, China, June 27-29, 2018
Available from: 2018-11-15 Created: 2018-11-15 Last updated: 2025-02-09. Bibliographically approved
Ericson, S. K. & Åstrand, B. S. (2018). Analysis of two visual odometry systems for use in an agricultural field environment. Biosystems Engineering, 166, 116-125
Analysis of two visual odometry systems for use in an agricultural field environment
2018 (English) In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 166, p. 116-125. Article in journal (Refereed) Published
Abstract [en]

This paper analyses two visual odometry systems for use in an agricultural field environment. The impact of various design parameters and camera setups is evaluated in a simulation environment. Four real field experiments were conducted using a mobile robot operating in an agricultural field. The robot was controlled to travel in a regular back-and-forth pattern with headland turns. The experimental runs were 1.8–3.1 km long and consisted of 32–63,000 frames. The results indicate that a camera angle of 75° gives the best results with the least error. An increased camera resolution only improves the result slightly. The algorithm must be able to reduce error accumulation by adapting the frame rate to minimise error. The results also illustrate the difficulties of estimating roll and pitch using a downward-facing camera. The best results for full 6-DOF position estimation were obtained on a 1.8-km run using 6680 frames captured from the forward-facing cameras. The translation error (x, y, z) is 3.76% and the rotational error (i.e., roll, pitch, and yaw) is 0.0482 deg m⁻¹. The main contributions of this paper are an analysis of design option impacts on visual odometry results and a comparison of two state-of-the-art visual odometry algorithms, applied to agricultural field data.
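For readers who want to reproduce the two error measures quoted above, the sketch below shows one plausible way to compute translation error as a percentage of travelled distance and rotation error in degrees per metre from an estimated trajectory and ground truth. The simple end-point evaluation and the variable names are assumptions; the paper may use a different, segment-based protocol.

```python
# Illustrative only: end-point drift expressed as % of path length and
# heading drift in deg/m; the paper's exact evaluation protocol may differ.
import numpy as np

def trajectory_errors(gt_xyz, est_xyz, gt_yaw_rad, est_yaw_rad):
    path_len = np.sum(np.linalg.norm(np.diff(gt_xyz, axis=0), axis=1))
    trans_err_pct = np.linalg.norm(gt_xyz[-1] - est_xyz[-1]) / path_len * 100.0
    rot_err_deg_per_m = np.degrees(abs(gt_yaw_rad[-1] - est_yaw_rad[-1])) / path_len
    return trans_err_pct, rot_err_deg_per_m
```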

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Visual odometry, Agricultural field robots, Visual navigation
National Category
Robotics and automation
Research subject
Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-14585 (URN)10.1016/j.biosystemseng.2017.11.009 (DOI)000424726400009 ()2-s2.0-85037985130 (Scopus ID)
Note

Available online 14 December 2017, Version of Record 14 December 2017

The authors would like to thank Mariestad Municipality for providing access to the agricultural test fields, and Anna Syberfeldt and Richard Senington for their constructive comments and suggestions on this work.

Available from: 2017-12-15 Created: 2017-12-15 Last updated: 2025-02-09. Bibliographically approved
Ericson, S. (2017). Vision-Based Perception for Localization of Autonomous Agricultural Robots. (Doctoral dissertation). Skövde: University of Skövde
Vision-Based Perception for Localization of Autonomous Agricultural Robots
2017 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In this thesis, Stefan investigates how cameras can be used for localization of an agricultural mobile robot. He focuses on relative measurements that can be used to determine where a weeding tool is operating relative to a weed detection sensor. The work incorporates downward-facing perspective cameras, forward-facing perspective cameras and omnidirectional cameras. Stefan shows how the camera's ego-motion can be estimated to obtain not only the 3D position but also the orientation. He also shows how line structures in the field can be used to navigate a robot along the rows.

Place, publisher, year, edition, pages
Skövde: University of Skövde, 2017. p. 164
Series
Dissertation Series ; 16 (2017)
National Category
Robotics and automation
Research subject
Production and Automation Engineering; INF201 Virtual Production Development
Identifiers
urn:nbn:se:his:diva-13408 (URN)978-91-982690-7-9 (ISBN)
Available from: 2017-02-28 Created: 2017-02-28 Last updated: 2025-02-09. Bibliographically approved
Ericson, S. & Åstrand, B. (2010). Row-detection on an agricultural field using omnidirectional camera. In: The IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010): Conference Proceedings. Paper presented at 23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010; Taipei; 18 October 2010 through 22 October 2010 (pp. 4982-4987). IEEE conference proceedings
Row-detection on an agricultural field using omnidirectional camera
2010 (English) In: The IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010): Conference Proceedings, IEEE conference proceedings, 2010, p. 4982-4987. Conference paper, Published paper (Refereed)
Abstract [en]

This paper describes a method of detecting parallel rows on an agricultural field using an omnidirectional camera. The method works both on cameras with a fisheye lens and on cameras with a catadioptric lens. A combination of an edge-based method and a Hough transform method is suggested to find the rows. The vanishing point of several parallel rows is estimated using a second Hough transform. The method is evaluated on synthetic images generated with calibration data from real lenses. Scenes with several rows are produced, where each plant is positioned with a specified error. Experiments are performed on these synthetic images and on real field images. The results show that good accuracy is obtained on the vanishing point once it is detected correctly. Further, they show that the edge-based method works best when the rows consist of solid lines, and the Hough method works best when the rows consist of individual plants. The experiments also show that the combined method provides better detection than using the methods separately.
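The sketch below illustrates the generic Hough-transform part of such a pipeline in OpenCV: edges are detected, candidate row lines are found with a Hough transform, and a common vanishing point is estimated. A least-squares intersection is used here in place of the paper's second Hough transform, and all thresholds are illustrative assumptions rather than the authors' settings; the handling of fisheye and catadioptric geometry is omitted.

```python
# Illustrative sketch: Canny edges + Hough lines, then a least-squares
# intersection of the detected lines as a stand-in vanishing-point estimate.
import cv2
import numpy as np

def detect_rows_and_vanishing_point(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None or len(lines) < 2:
        return None, None
    # Each line is (rho, theta): x*cos(theta) + y*sin(theta) = rho.
    params = lines[:, 0]
    A = np.array([[np.cos(t), np.sin(t)] for _, t in params])
    b = np.array([r for r, _ in params])
    vp, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares intersection
    return params, vp
```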

Place, publisher, year, edition, pages
IEEE conference proceedings, 2010
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858
National Category
Engineering and Technology
Research subject
Technology
Identifiers
urn:nbn:se:his:diva-4597 (URN)10.1109/IROS.2010.5650964 (DOI)000287672004089 ()2-s2.0-78651477189 (Scopus ID)978-1-4244-6676-4 (ISBN)978-1-4244-6675-7 (ISBN)978-1-4244-6674-0 (ISBN)
Conference
23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010; Taipei; 18 October 2010 through 22 October 2010
Available from: 2011-01-20 Created: 2011-01-20 Last updated: 2024-09-13. Bibliographically approved
Ericson, S. & Åstrand, B. (2009). A vision-guided mobile robot for precision agriculture. In: Eldert J. van Henten, D. Goense and C. Lokhorst (Ed.), Proceedings of 7th European Conference on Precision Agriculture: . Paper presented at Precision agriculture '09 : papers presented at the 7th European Conference on Precision Agriculture, Wageningen, the Netherlands, 6 - 8 June 2009 (pp. 623-630). Wageningen Academic Publishers
A vision-guided mobile robot for precision agriculture
2009 (English) In: Proceedings of 7th European Conference on Precision Agriculture / [ed] Eldert J. van Henten, D. Goense and C. Lokhorst, Wageningen Academic Publishers, 2009, p. 623-630. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a mobile robot that is able to perform crop-scale operations using vision as its only sensor. The system consists of a row-following system and a visual odometry system. The row-following system captures images from a forward-looking camera on the robot, and the crop rows are extracted using a Hough transform. Both the distance to the rows and the heading angle are provided, and both are used to control the steering. The visual odometry system uses two cameras in a stereo setup pointing perpendicular to the ground. This system measures the travelled distance by measuring the ground movement and compensates for height variation. Experiments are performed on an artificial field owing to the season. The results show that the visual odometry has an accuracy better than 2.1% of the travelled distance.
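The abstract does not specify the steering controller, but a common way to use both the lateral offset and the heading angle is a simple proportional law, sketched below. The gains and the saturation limit are invented for illustration and are not taken from the paper.

```python
# Hypothetical steering law: both the lateral offset to the row and the
# heading error contribute to the steering angle.  Gains are placeholders.
def steering_command(lateral_offset_m, heading_error_rad,
                     k_offset=0.8, k_heading=1.5, max_steer_rad=0.5):
    raw = -(k_offset * lateral_offset_m + k_heading * heading_error_rad)
    return max(-max_steer_rad, min(max_steer_rad, raw))

# e.g. 5 cm lateral offset and about 2 degrees of heading error
print(steering_command(0.05, 0.035))
```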

Place, publisher, year, edition, pages
Wageningen Academic Publishers, 2009
Keywords
visual odometry, row following
National Category
Engineering and Technology
Research subject
Technology
Identifiers
urn:nbn:se:his:diva-3427 (URN)2-s2.0-84893371202 (Scopus ID)978-90-8686-113-2 (ISBN)978-90-8686-664-9 (ISBN)
Conference
Precision agriculture '09 : papers presented at the 7th European Conference on Precision Agriculture, Wageningen, the Netherlands, 6 - 8 June 2009
Available from: 2009-10-15 Created: 2009-10-15 Last updated: 2024-09-13. Bibliographically approved
Ericson, S., Hedenberg, K. & Johansson, R. (2009). Information Fusion for Autonomous Robotic Weeding. In: Stefan Fischer, Erik Maehle, Rüdiger Reischuk (Ed.), INFORMATIK 2009: Im Focus das Leben. Paper presented at 39th Jahrestagung der Gesellschaft fur Informatik e.V. (GI): Im Focus das Leben, INFORMATIK 2009. 39th Annual Meeting of the German Informatics Society (GI): Focus on Life, INFORMATIK 2009, Lübeck, Germany, 28 September 2009 through 2 October 2009 (pp. 2461-2473). Köllen Druck + Verlag GmbH
Information Fusion for Autonomous Robotic Weeding
2009 (English) In: INFORMATIK 2009: Im Focus das Leben / [ed] Stefan Fischer, Erik Maehle, Rüdiger Reischuk, Köllen Druck + Verlag GmbH, 2009, p. 2461-2473. Conference paper, Published paper (Refereed)
Abstract [en]

Information fusion has potential applicability to a multitude of different applications. Still, the JDL model is mostly used to describe defense applications. This paper describes the information fusion process for a robot removing weeds in a field. We analyze the robotic system by relating it to the JDL model functions. The civilian application we consider here has some properties which differ from the typical defense applications: (1) an indifferent environment and (2) a predictable and structured process to achieve its objectives. As a consequence, situation estimates tend to deal with internal properties of the robot and its mission progress (through mission state transitions) rather than external entities and their relations. Nevertheless, the JDL model appears useful for describing the fusion activities of the weeding robot system. We provide an example of how state transitions may be detected and exploited using information fusion and report on some initial results. An additional finding is that process refinement for this type of application can be expressed in terms of a finite state machine.
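To make the last point concrete, the sketch below expresses mission-level process refinement as a small finite state machine. The state and event names are invented for illustration and are not taken from the paper.

```python
# Illustrative finite state machine for a weeding robot's mission progress;
# states and events are hypothetical, not the authors' actual model.
from enum import Enum, auto

class State(Enum):
    FOLLOW_ROW = auto()
    HEADLAND_TURN = auto()
    WEEDING = auto()
    IDLE = auto()

TRANSITIONS = {
    (State.FOLLOW_ROW, "row_end_detected"): State.HEADLAND_TURN,
    (State.HEADLAND_TURN, "row_acquired"): State.FOLLOW_ROW,
    (State.FOLLOW_ROW, "weed_detected"): State.WEEDING,
    (State.WEEDING, "weed_removed"): State.FOLLOW_ROW,
    (State.FOLLOW_ROW, "mission_complete"): State.IDLE,
}

def step(state, event):
    """Advance the mission state given a fused event estimate."""
    return TRANSITIONS.get((state, event), state)

print(step(State.FOLLOW_ROW, "weed_detected"))   # State.WEEDING
```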

Place, publisher, year, edition, pages
Köllen Druck + Verlag GmbH, 2009
Series
Lecture Notes in Informatics, ISSN 1617-5468 ; Vol. P-154
National Category
Computer and Information Sciences
Research subject
Technology
Identifiers
urn:nbn:se:his:diva-3525 (URN)2-s2.0-84874333160 (Scopus ID)978-3-88579-248-2 (ISBN)
Conference
39th Jahrestagung der Gesellschaft fur Informatik e.V. (GI): Im Focus das Leben, INFORMATIK 2009. 39th Annual Meeting of the German Informatics Society (GI): Focus on Life, INFORMATIK 2009, Lübeck, Germany, 28 September 2009 through 2 October 2009
Available from: 2009-12-08 Created: 2009-12-08 Last updated: 2024-09-13. Bibliographically approved
Ericson, S. & Åstrand, B. (2008). Stereo Visual Odometry for Mobile Robots on Uneven Terrain. In: Sio-Iong Ao (Ed.), Advances in Electrical and Electronics Engineering - IAENG Special Edition of the World Congress on Engineering and Computer Science 2008: . Paper presented at Advances in Electrical and Electronics Engineering - IAENG Special Edition of the World Congress on Engineering and Computer Science 2008, WCECS 2008, 22-24 October 2008, San Francisco, California, USA (pp. 150-157). IEEE Computer Society
Stereo Visual Odometry for Mobile Robots on Uneven Terrain
2008 (English) In: Advances in Electrical and Electronics Engineering - IAENG Special Edition of the World Congress on Engineering and Computer Science 2008 / [ed] Sio-Iong Ao, IEEE Computer Society, 2008, p. 150-157. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a stereo visual odometry system for mobile robots that is not sensitive to uneven terrain. Two cameras are mounted perpendicular to the ground, and height and traveled distance are calculated using normalized cross correlation. A method for evaluating the system is developed, where flower boxes containing representative surfaces are placed in a metal-working lathe. The cameras are mounted on the carriage, which can be positioned manually with 0.1 mm accuracy. Images are captured every 10 mm over 700 mm. The tests are performed on eight different surfaces representing real-world situations. The resulting error is less than 0.6% of the traveled distance on surfaces where the maximum height variation is measured at 96 mm. The variance over eight test runs, totaling 5.6 m, is measured at 0.040 mm. This accuracy is sufficient for crop-scale agricultural operations.
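A hedged sketch of the core measurement, estimating the ground displacement between two downward-facing frames with normalized cross correlation (here via OpenCV template matching), is shown below. The patch size, the correlation variant, and the pixel-to-millimetre scaling are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: track the centre patch of the previous frame in the
# current frame with a normalized correlation score to get ground motion.
import cv2

def ground_displacement(prev_gray, curr_gray, patch_size=64):
    """Return (dx, dy) in pixels between two grayscale downward-facing frames."""
    h, w = prev_gray.shape
    y0, x0 = (h - patch_size) // 2, (w - patch_size) // 2
    template = prev_gray[y0:y0 + patch_size, x0:x0 + patch_size]
    # Normalized correlation of the previous centre patch over the new frame.
    response = cv2.matchTemplate(curr_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(response)
    dx = max_loc[0] - x0
    dy = max_loc[1] - y0
    return dx, dy   # scale by the ground resolution (mm/px) to get distance
```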

Place, publisher, year, edition, pages
IEEE Computer Society, 2008
Keywords
Agricultural applications, Image processing, Mobile robot localization, Visual odometry
Research subject
Technology
Identifiers
urn:nbn:se:his:diva-3950 (URN)10.1109/WCECS.2008.26 (DOI)000275915300018 ()2-s2.0-70350527326 (Scopus ID)978-0-7695-3555-5 (ISBN)978-1-4244-3545-6 (ISBN)
Conference
Advances in Electrical and Electronics Engineering - IAENG Special Edition of the World Congress on Engineering and Computer Science 2008, WCECS 2008, 22-24 October 2008, San Francisco, California, USA
Available from: 2010-05-20 Created: 2010-05-20 Last updated: 2024-09-13. Bibliographically approved
Ericson, S. & Åstrand, B. (2008). Visual Odometry System for Agricultural Field Robots. In: S. I. Ao; Craig Douglas; W. S. Grundfest; Lee Schruben; Jon Burgstone (Ed.), Proceedings of the World Congress on Engineering and Computer Science 2008: WCECS 2008, October 22 - 24, 2008, San Francisco, USA. Paper presented at World Congress on Engineering and Computer Science (WCECS 2008), October 22 - 24, 2008, San Francisco, USA (pp. 619-624). International Association of Engineers
Visual Odometry System for Agricultural Field Robots
2008 (English) In: Proceedings of the World Congress on Engineering and Computer Science 2008: WCECS 2008, October 22 - 24, 2008, San Francisco, USA / [ed] S. I. Ao; Craig Douglas; W. S. Grundfest; Lee Schruben; Jon Burgstone, International Association of Engineers, 2008, p. 619-624. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a visual odometry system for agricultural field robots that is not sensitive to uneven terrain. A stereo camera system is mounted perpendicular to the ground, and height and traveled distance are calculated using normalized cross correlation. A method for evaluating the system is developed, where flower boxes containing representative surfaces are placed in a metal-working lathe. The cameras are mounted on the carriage, which can be positioned manually with 0.1 mm accuracy. Images are captured every 10 mm over 700 mm. The tests are performed on eight different surfaces representing real-world situations. The resulting error is less than 0.6% of the traveled distance on surfaces where the maximum height variation is measured at 96 mm. The variance over eight test runs, totaling 5.6 m, is measured at 0.040 mm. This accuracy is sufficient for crop-scale agricultural operations.
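Complementing the correlation-based displacement sketch given for the companion paper above, the snippet below shows the standard stereo relation that can recover the ground height mentioned in the abstract from the disparity between the two cameras. The focal length and baseline values are placeholders, not calibration data from the paper.

```python
# Illustrative only: depth = f * B / d for a calibrated stereo pair; the
# focal length (pixels) and baseline (metres) below are placeholder values.
def ground_height(disparity_px, focal_length_px=1200.0, baseline_m=0.10):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A disparity of 150 px with this assumed calibration gives 0.8 m ground height.
print(ground_height(150.0))
```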
Place, publisher, year, edition, pages
International Association of Engineers, 2008
Series
Lecture Notes in Engineering and Computer Science, ISSN 2078-0958, E-ISSN 2078-0966
Keywords
Agricultural applications, Image processing, Mobile robot localization, Visual odometry
Identifiers
urn:nbn:se:his:diva-2410 (URN)000263417100117 ()978-988-98671-0-2 (ISBN)
Conference
World Congress on Engineering and Computer Science (WCECS 2008), October 22 - 24, 2008, San Francisco, USA
Available from: 2008-12-05 Created: 2008-12-02 Last updated: 2024-09-13. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0009-0004-2331-9900
