his.se Publications
1 - 50 of 77
  • 1.
    Alklind Taylor, Anna-Sofia
    et al.
    University of Skövde, School of Humanities and Informatics.
    Backlund, Per
    University of Skövde, School of Humanities and Informatics.
    Bergman, Maria Elena
    University of Skövde, School of Humanities and Informatics.
    Carlén, Urban
    University of Skövde, School of Humanities and Informatics.
    Engström, Henrik
    University of Skövde, School of Humanities and Informatics.
    Johannesson, Mikael
    University of Skövde, School of Humanities and Informatics.
    Lebram, Mikael
    University of Skövde, School of Humanities and Informatics.
    Toftedahl, Marcus
    University of Skövde, School of Humanities and Informatics.
    Spelbaserad simulering för insatsutbildning: Slutrapport (2012), Report (Other academic)
    Abstract [sv, translated to English]

    This report documents the project Spelbaserad simulering för insatsutbildning (game-based simulation for emergency response training). The project aims to:

    • Study how serious games can strengthen the learning environment in training and education
    • Test and analyse the use of game technology in practice, in order to produce recommendations for the construction of training simulators
    • Create a basis for developing Räddningsverket's (the Swedish Rescue Services Agency's) training methods through collaboration between researchers and practitioners

    Serious games and game-based simulator training are seen as an opportunity to further develop the teaching and training environment within the rescue services. The term serious games is defined as using games and game technology for purposes beyond pure entertainment. Exploiting the area's full potential requires developing and adapting the technology to the purpose at hand, for example by using the possibilities modern game technology offers for logging user behaviour and results. In addition, serious games involve a game-design component: using the possibilities games offer to create a motivating and engaging learning environment, for example through competitive elements and scoring systems that encourage repeated use. The project has made use of both the technology and the game-design components.

    The project's aims have been met by producing and evaluating a prototype game for response training, together with a model for how serious games can be used in training and education. The main results are a prototype of a web-based game for training decision-making at the tactical level, and a pedagogical model for game-based training. The prototype and the model were tested in a distance course for incident commander training run by the Swedish Civil Contingencies Agency (MSB). The evaluation shows good results with respect to the system's usability. The pedagogical potential could not be fully evaluated, since the prototype was not a sufficiently integrated part of the course in which it was evaluated.

    The project shows that game-based training can be an opportunity for pedagogical development with respect to both technology and pedagogical context. In this connection it is important to stress the value of carrying out and evaluating pedagogical adaptations in conjunction with game-based training. Furthermore, the project has collaborated with different constellations of teachers and course participants at MSB. An important lesson is that clear resources and organisational commitment need to be in place in this type of co-produced research project.

  • 2.
    Andler, Sten F.
    University of Skövde, School of Humanities and Informatics.
    Information Fusion from Databases, Sensors and Simulations: Annual Report 2005 (2006), Report (Other academic)
  • 3.
    Andler, Sten F.
    et al.
    University of Skövde, School of Humanities and Informatics.
    Brohede, Marcus
    University of Skövde, School of Humanities and Informatics.
    Information Fusion from Databases, Sensors and Simulations: Annual Report 2006 (2007), Report (Other academic)
  • 4.
    Andler, Sten F.
    et al.
    University of Skövde, School of Humanities and Informatics.
    Brohede, Marcus
    University of Skövde, School of Humanities and Informatics.
    Information Fusion from Databases, Sensors and Simulations: Annual Report 2007 (2008), Report (Other academic)
  • 5.
    Andler, Sten F.
    et al.
    University of Skövde, School of Humanities and Informatics.
    Brohede, Marcus
    University of Skövde, School of Humanities and Informatics.
    Information Fusion from Databases, Sensors and Simulations: Annual Report 2008 (2009), Report (Other academic)
  • 6.
    Backlund, Per
    University of Skövde, The Informatics Research Centre. University of Skövde, School of Humanities and Informatics.
    Ambulansträningscenter: Förstudie prehospitalt tränings- och simuleringscenter för Västra Götaland (2013), Report (Other academic)
    Abstract [sv, translated to English]

    This feasibility study presents the conditions and vision for an ambulance training centre in Skövde. The study was carried out in collaboration between the University of Skövde (Institutionen för kommunikation och information and Institutionen för vård och natur) and the ambulance services staff unit at Skaraborgs sjukhus. The vision for Ambulansträningscenter Skövde is a simulator training centre focused on prehospital care. The training concept integrates the care chain from treatment at the accident site to handover at the emergency department, so that the entire process is trained. In addition, several aspects of the response are integrated, so that emergency response driving, communication, medical treatment, nursing and teamwork are trained simultaneously.

  • 7.
    Bergfeldt, Niclas
    et al.
    University of Skövde, School of Humanities and Informatics.
    Hansson, Andreas
    University of Skövde, School of Humanities and Informatics.
    Evolutionary pressure on developing simple languages (2004), Report (Other academic)
    Abstract [en]

    The interest in studying the origin and development of language has increased greatly in the last decades. For a language to develop, its production and understanding must co-evolve; otherwise the users will not be able to understand each other. Here, we show that the co-evolution of language production and understanding promotes the development of an efficient language, where efficiency is measured in terms of the number of symbols needed to transmit a message and distinguish it from other possible messages. We also show how agents evolve a very simple language in order to solve the task at hand, even though this simplicity is never enforced in any way.

  • 8.
    Boström, Henrik
    et al.
    University of Skövde, School of Humanities and Informatics.
    Andler, Sten F.
    University of Skövde, School of Humanities and Informatics.
    Brohede, Marcus
    University of Skövde, School of Humanities and Informatics.
    Johansson, Ronnie
    University of Skövde, School of Humanities and Informatics.
    Karlsson, Alexander
    University of Skövde, School of Humanities and Informatics.
    van Laere, Joeri
    University of Skövde, School of Humanities and Informatics.
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics.
    Nilsson, Marie
    University of Skövde, School of Humanities and Informatics.
    Persson, Anne
    University of Skövde, School of Humanities and Informatics.
    Ziemke, Tom
    University of Skövde, School of Humanities and Informatics.
    On the Definition of Information Fusion as a Field of Research (2007), Report (Other academic)
    Abstract [en]

    A more precise definition of the field of information fusion can be of benefit to researchers within the field, who may use such a definition when motivating their own work and evaluating the contributions of others. Moreover, it can enable researchers and practitioners outside the field to more easily relate their own work to the field and more easily understand the scope of the techniques and methods developed in the field. Previous definitions of information fusion are reviewed from that perspective, including definitions of data and sensor fusion, and their appropriateness as definitions for the entire research field is discussed. Based on strengths and weaknesses of existing definitions, a novel definition is proposed, which is argued to effectively fulfill the requirements that can be put on a definition of information fusion as a field of research.

  • 9.
    Brohede, Marcus
    University of Skövde, School of Humanities and Informatics.
    Bounded recovery in distributed discrete real-time simulations (2006), Report (Other (popular scientific, debate etc.))
    Abstract [en]

    This thesis proposal defines the problem of recovery in distributed discrete real-time simulations with external actions, i.e. real-time simulations whose actions take effect in the real world. A problem that these simulations encounter is that they cannot rely on rollback-based recovery (use of checkpoints) for two reasons. First, some actions in the "real world" cannot be undone, and second, the time allowed for recovery tends to be short and bounded. As a result there is a need for some form of error masking for this category of simulations. We propose an infrastructure for these simulations with external actions based on an active distributed real-time database that features replication of the distributed simulation. The degree of replication is based on the dependability requirements of the individual nodes in the simulation.

    A guideline for how to decompose a distributed real-time simulation into parts with different requirements on the replication protocol is also defined as an interesting topic to investigate further. We introduce the simulation infrastructure "Simulation DeeDS" featuring a replication-based recovery strategy for the category of simulations mentioned. We also show that some information fusion applications are indeed examples of applications that need real-time simulation with external actions and as such can benefit from the proposed infrastructure.

  • 10.
    Engström, Henrik
    et al.
    University of Skövde, Department of Computer Science.
    Berndtsson, Mikael
    University of Skövde, Department of Computer Science.
    Lings, Brian
    University of Skövde, Department of Computer Science.
    ACOOD Essentials (1997), Report (Other academic)
    Abstract [en]

    This paper describes the active object-oriented database system ACOOD developed at the universities of Skövde and Exeter. ACOOD adds active functionality on top of the commercially available Ontos DB. The active behaviour is modelled by using Event-Condition-Action (ECA) rules. ACOOD offers all essential functionality associated with an active database. The semantics and user interface have been clearly defined in order to produce a prototype that can be used to develop database applications. The historical background of active databases and the development of ACOOD are covered in the paper together with a detailed description of the latest, redesigned version of the system. There is also a discussion of experience gained through the work with ACOOD and a comparison with similar systems.
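    The Event-Condition-Action pattern described above pairs a triggering event with a guard condition and an action to execute. A minimal sketch of the pattern in Python (the class names and rule shown are illustrative, not ACOOD's actual API):

    ```python
    # Minimal Event-Condition-Action (ECA) rule engine sketch.
    # Names are illustrative; ACOOD's real interface differs.

    class Rule:
        def __init__(self, event, condition, action):
            self.event = event          # event name that triggers the rule
            self.condition = condition  # predicate over the event payload
            self.action = action        # side effect run when condition holds

    class RuleEngine:
        def __init__(self):
            self.rules = []
            self.log = []

        def register(self, rule):
            self.rules.append(rule)

        def raise_event(self, event, payload):
            # On each event, evaluate the conditions of all matching rules
            # and execute the actions of those whose condition holds.
            for rule in self.rules:
                if rule.event == event and rule.condition(payload):
                    rule.action(payload, self.log)

    engine = RuleEngine()
    engine.register(Rule(
        event="update",
        condition=lambda p: p["balance"] < 0,
        action=lambda p, log: log.append(f"overdraft on {p['account']}"),
    ))
    engine.raise_event("update", {"account": "A1", "balance": -50})
    engine.raise_event("update", {"account": "A2", "balance": 120})
    print(engine.log)  # only A1 satisfies the condition
    ```

    In an active database such as ACOOD the events would be database operations (insert, update, delete) rather than explicit calls, but the rule structure is the same.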

  • 11.
    Engström, Henrik
    et al.
    University of Skövde, Department of Computer Science.
    Chakravarthy, Sharma
    The University of Texas at Arlington, USA.
    Lings, Brian
    University of Exeter, UK.
    A Holistic Approach to the Evaluation of Data Warehouse Maintenance Policies (2000), Report (Other academic)
    Abstract [en]

    The research community is addressing a number of issues in response to increased reliance of organisations on data warehousing. Most work addresses individual aspects related to incremental view maintenance, propagation algorithms, consistency requirements, performance of OLAP queries etc. There remains a need to consolidate relevant results into a cohesive framework for data warehouse maintenance. Although data propagation policies, source database characteristics, and user requirements have been addressed individually, their co-dependencies and relationships have not been explored. In this paper, we present a comprehensive, cost-based framework for evaluating data propagation policies against data warehouse requirements and source database characteristics. We formalize data warehouse specification along the dimensions of freshness (or staleness), response time, storage, and computation cost and classify source databases according to their data propagation capabilities. A detailed cost model is presented for a representative set of policies. A prototype implementation has allowed an exploration of the various trade-offs. The results presented in this paper are for a single source, but the approach and the framework are extensible. Current work is addressing a broader class of sources and a more detailed data warehouse specification that includes multiple sources.

  • 12.
    Engström, Henrik
    et al.
    University of Skövde, Department of Computer Science.
    Chakravarthy, Sharma
    The University of Texas at Arlington, USA.
    Lings, Brian
    University of Exeter, UK.
    Data Integration in Heterogeneous Environments: Multi-Source Policies, Cost Model and Implementation (2002), Report (Other academic)
    Abstract [en]

    The research community is addressing a number of issues in response to an increased reliance of organisations on data warehousing. Most work addresses aspects related to the internal operation of a data warehouse server, such as selection of views to materialise, maintenance of aggregate views and performance of OLAP queries. Issues related to data warehouse maintenance, i.e. how changes to autonomous sources should be detected and propagated to a warehouse, have been addressed in a fragmented manner.

    We have shown earlier that a number of maintenance policies based on source characteristics and timing are relevant and meaningful to single source views. In this report we detail how this work has been extended for multiple sources. We focus on exploring policies for data integration from heterogeneous sources. As the number of policies is very large, we first analyse their behaviour intuitively with respect to broader source and policy characteristics. Further, we extend the single source cost model to these policies and incorporate it into a Policy Analyser for Multiple sources (PAM). We use this to analyse the effect of source characteristics and join alternatives on various policies. We have developed a Testbed for Maintenance of Integrated Data (TMID). We report on experiments conducted to validate the policies that are recommended by the tool, and confirm our initial analysis. Finally, we distil a set of heuristics for the selection of multi-source policies based on quality of service and other requirements.
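    The trade-off such maintenance policies navigate can be illustrated with a toy calculation (the numbers and formulas here are illustrative, not the report's actual cost model): an immediate policy propagates every source change, while a periodic policy batches changes, trading staleness for propagation cost.

    ```python
    def policy_costs(num_changes, horizon, period, cost_per_propagation):
        """Compare an immediate and a periodic data propagation policy.

        Illustrative only: the report's cost model also covers response
        time, storage and computation cost, and per-source capabilities.
        """
        immediate = {
            "propagations": num_changes,
            "cost": num_changes * cost_per_propagation,
            "max_staleness": 0,   # every change propagated at once
        }
        periodic = {
            "propagations": horizon // period,
            "cost": (horizon // period) * cost_per_propagation,
            "max_staleness": period,   # a change may wait a full period
        }
        return immediate, periodic

    # 100 source changes over a 60-minute horizon, 10-minute refresh period:
    imm, per = policy_costs(num_changes=100, horizon=60, period=10,
                            cost_per_propagation=2)
    print(imm["cost"], imm["max_staleness"])  # 200 0
    print(per["cost"], per["max_staleness"])  # 12 10
    ```

    Even this toy model shows why no single policy dominates: the right choice depends on how much staleness the warehouse users can tolerate.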

  • 13.
    Engström, Henrik
    et al.
    University of Skövde, Department of Computer Science.
    Gelati, Gionata
    University of Modena and Reggio Emilia, Italy.
    Lings, Brian
    University of Exeter, UK.
    A Benchmark Comparison of Maintenance Policies in a Data Warehouse Environment (2001), Report (Other academic)
    Abstract [en]

    A data warehouse contains data originating from autonomous sources. Various maintenance policies have been suggested which specify when and how changes to a source should be propagated to the data warehouse. Engström et al. (HS-IDA-TR-00-001) present a cost-based model which makes it possible to compare and select policies based on quality of service as well as system properties. This paper presents a simulation environment for benchmarking maintenance policies. The main aim is to compare benchmark results with predictions from the cost-model. We report results from a set of experiments which all have a close correspondence with the cost-model predictions. The process of developing the simulation environment and conducting experiments has, in addition, given us valuable insights into the maintenance problem, which are reported in the paper.

  • 14.
    Ericsson, AnnMarie
    University of Skövde, School of Humanities and Informatics.
    Enabling Tool Support for Formal Analysis of Predictable sets of ECA Rules (2006), Report (Other academic)
    Abstract [en]

    This thesis proposal addresses support for utilizing an existing formal analysis tool when predictable rule-based systems are developed. One of the main problems of the rule-based paradigm is that it is hard to analyze the behavior of rule sets, which conflicts with the high predictability requirements typically associated with real-time systems. Further, analysis tools developed for rule-based systems typically address a specific platform or a specific part of the development chain.

    In our approach, rules are initially specified in a high-level language. We enable a powerful analysis tool, not designed for rule-based development, to be utilized for analyzing the rule base. This is done by transforming the set of rules, with maintained semantics, to a representation suitable for the target analysis tool. Our approach provides non-experts in formal methods with the ability to formally analyze a set of rules.

  • 15.
    Ericsson, AnnMarie
    University of Skövde, School of Humanities and Informatics.
    Verifying an industrial system using REX (2008), Report (Other academic)
    Abstract [en]

    Formal methods for enhancing software quality are still not used to their full potential in industry. We argue that seamless support in a high-level specification tool is a viable way to provide industrial system designers with complex and powerful formal verification techniques.

    The REX tool supports specification of applications constructed as a set of rules and complex events. REX provides seamless support for specifying and verifying application-specific requirement properties in the timed automata model-checking tool Uppaal. The rules, events and requirements of an application design are automatically transformed to a timed automaton representation and verified in the Uppaal tool.

    In order to validate the applicability of our approach, we present experimental results from a case-study of an industrial system. Based on the case-study results, we conclude that complex applications can be efficiently verified using our approach.

  • 16.
    Eriksson, Anders
    University of Skövde, School of Humanities and Informatics.
    Research Proposal: Strategy for Platform Independent Testing (2012), Report (Other academic)
    Abstract [en]

    This work addresses problems associated with software testing in a Model Driven Development (MDD) environment. Today, it is possible to create platform independent models that can be executed and therefore dynamically tested. However, when developing safety-critical software systems there is a requirement to show that the set of test cases covers the structure of the implementation. Since the structure of the implementation might vary depending on, e.g., compiler and target language, this is normally done by transforming the design model to code, which is compiled and executed by tests until full coverage of the code structure is reached. The problem with such an approach is that testing becomes platform dependent. Moving the system from one platform to another becomes time-consuming, since the test activities to a large extent must start again for the new platform. To meet the goals of MDD, we need methods that allow us to perform structural coverage analysis on platform independent models in a way that covers as much as possible of the structure of any implementation. Moreover, such a method must enable us to trace specific test artifacts between the platform independent model and the generated code. Without such a trace, a complete analysis must be done at code level and much of the advantage of MDD is lost.

    We propose a framework for structural coverage analysis at a platform independent level. The framework includes: (i) functionality for generation of test requirements, (ii) creation of structural variants with respect to the translation to code, and (iii) traceability between test artifacts at different design levels. The proposed framework uses a separate representation for structural constructs involved in coverage criteria for software in safety-critical systems. The representation makes it possible to create variants of structural constructs already at the top design level. These variants represent potential differences in the structure at lower design levels, e.g., target language or executable object code. Test requirements are then generated for all variants, thus covering the structure of different implementations. Test suites created to satisfy these test requirements are therefore robust to different implementations.

  • 17.
    Farmer, Robert
    University of Skövde, School of Humanities and Informatics.
    Lockande och användarvänligt?: Är det möjligt att skapa en lanseringswebbsida för ett TV-/dataspel som både är användarvänlig och lever upp till målgruppens förväntningar? (2010), Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits, Student thesis
    Abstract [sv, translated to English]

    Is it possible to create a launch website for a video/computer game that is both user-friendly and lives up to the target group's expectations? With the help of Tommy Sundström's usability theory and other aids, a fictitious launch site is built and tested on a target group. First I collected a sample of game websites, which I then narrowed down to three. I analysed these three game sites according to the usability theory, examining their typography, navigation and speed. Based on this analysis, and with the help of the usability theory, I built the site Bravocharlie. The fictitious launch site was tried out in two different studies. The construction of the site and the studies carried out gave me answers to my research questions: 1. Is it possible to combine a high level of usability with a site that also delivers on the user's expectations of the product? 2. How receptive is the target group for video/computer games to a site that confirms or fulfils the user's expectations of the product while remaining user-friendly?

  • 18.
    Gamalielson, Jonas
    University of Skövde, School of Humanities and Informatics.
    Developing Semantic Pathway Alignment Algorithms for Systems Biology (2006), Report (Other academic)
    Abstract [en]

    Systems biology is an emerging multi-disciplinary field in which the behaviour of complex biological systems is studied by considering the interaction of all cellular and molecular constituents rather than using a "traditional" reductionist approach where constituents are studied individually. Systems are often studied over time with the ultimate goal of developing models which can be used to predict and understand complex biological processes, such as human diseases. To support systems biology, a large number of biological pathways are being derived for many different organisms, and these are stored in various databases. There is a lack of and need for algorithms for analysis of biological pathways. Here, a thesis is proposed where three related methods are developed for semantic analysis of biological pathways utilising the Gene Ontology. It is believed that the methods will be useful to biologists in order to assess the biological plausibility of derived pathways, compare different pathways for semantic similarities, and to derive hypothetical pathways that are semantically similar to documented biological pathways. To our knowledge, all methods are novel, and will therefore extend the bioinformatics toolbox that biologists can use to make new biological discoveries.

  • 19.
    Gamalielson, Jonas
    University of Skövde, School of Humanities and Informatics.
    Methods for Assessing the Interestingness of Rules Induced from Microarray Gene Expression Data (2003), Report (Other academic)
    Abstract [en]

    Microarray technology makes it possible to simultaneously measure the expression of thousands of genes. Gene expression data can be analysed in many different ways to produce putative knowledge on for example co-regulated genes, differentially expressed genes and how genes interact with each other. One way to derive gene interactions is to use rule induction algorithms such as association rule discovery algorithms or decision trees. The application of such algorithms to gene expression data sets typically generates a large set of rules serving as hypotheses of how genes interact. It is necessary to apply different measures to assess the interestingness of the rule hypotheses. There are well known domain independent objective measures, but there is a lack of domain specific interestingness measures tailored for microarray gene expression data. Without domain specific interestingness measures it is impossible to know if the hypotheses are interesting from a biological perspective, without resorting to time consuming manual evaluation of every single rule. The aim and contribution of this work is to develop a method for assessing the interestingness of rules induced from microarray gene expression data using a combination of objective and domain specific measures.

  • 20.
    Gamalielson, Jonas
    et al.
    University of Skövde, School of Humanities and Informatics.
    Olsson, Björn
    University of Skövde, School of Humanities and Informatics.
    On the Robustness of Algorithms for Clustering of Gene Expression Data (2003), Report (Other academic)
    Abstract [en]

    The progress in microarray technology is evident and huge amounts of gene expression data are currently being produced. A complicating matter is that there are various sources of uncertainty in microarray experiments, as well as in the analysis of expression data. This problem has generated an increased interest in the validation of methods for analysis of expression data. Clustering algorithms have been found particularly useful for the study of coexpressed genes, and this paper therefore concerns the robustness of partitional clustering algorithms. These algorithms use a predefined number of clusters and assign each gene to exactly one cluster. The effect of repeated clustering using identical algorithm parameters and input data is investigated for the self-organizing map (SOM) and the k-means algorithm. The susceptibility to measurement noise is also studied. A reproducibility measure is proposed and used to assess the results from the performed clustering experiments. Well-known publicly available datasets are used. Results show that clusterings are not necessarily reproducible even when identical algorithm parameters are used, and that the problems are aggravated when measurement noise is introduced.
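    The paper's own reproducibility measure is not reproduced here, but a standard pair-counting measure in the same spirit, the Rand index between two partitions of the same genes, sketches the idea:

    ```python
    from itertools import combinations

    def rand_index(labels_a, labels_b):
        """Pair-counting agreement between two clusterings of the same items.

        For every pair of items, the clusterings agree if both place the
        pair in the same cluster, or both place it in different clusters.
        Returns a value in [0, 1]; 1 means fully reproducible partitions.
        """
        assert len(labels_a) == len(labels_b)
        agree = total = 0
        for i, j in combinations(range(len(labels_a)), 2):
            same_a = labels_a[i] == labels_a[j]
            same_b = labels_b[i] == labels_b[j]
            agree += same_a == same_b
            total += 1
        return agree / total

    # Two k-means runs on the same expression data may yield different
    # partitions, e.g. due to random initialisation:
    run1 = [0, 0, 1, 1, 2, 2]
    run2 = [1, 1, 0, 0, 0, 2]   # clusters relabelled and one gene moved
    print(rand_index(run1, run1))  # → 1.0 (identical runs)
    print(rand_index(run1, run2))  # → 0.8
    ```

    Because the measure only counts pairs, it is insensitive to cluster relabelling, which is essential when comparing independent runs of k-means or SOM.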

  • 21.
    Gamalielsson, Jonas
    University of Skövde, School of Humanities and Informatics.
    Thesis Methods: Assessing the Biological Plausibility of Regulatory Hypotheses (2005), Report (Other academic)
    Abstract [en]

    Many algorithms that derive gene regulatory networks from microarray gene expression data have been proposed in the literature. The performance of such an algorithm is often measured by how well a genetic network can recreate the gene expression data that the network was derived from. However, this kind of performance does not necessarily mean that the regulatory hypotheses in the network are biologically plausible. We have therefore proposed a Gene Ontology based method for assessing the biological plausibility of regulatory hypotheses at the gene product level using prior biological knowledge in the form of Gene Ontology (GO) annotation of gene products and regulatory pathway databases (Gamalielsson et al. 2005). Templates were designed to encode general knowledge, derived by generalizing from known interactions to typical properties of interacting gene product pairs. By matching regulatory hypotheses to templates, the plausible hypotheses can be separated from implausible ones. This document elaborates on how the present method can be improved and extended.

  • 22.
    Gamalielsson, Jonas
    et al.
    University of Skövde, School of Humanities and Informatics.
    Olsson, Björn
    University of Skövde, School of Humanities and Informatics.
    GOSAP: Gene Ontology Based Semantic Alignment of Biological Pathways (2005), Report (Other academic)
    Abstract [en]

    A large number of biological pathways have been assembled in recent years, and are being stored in databases. Hence, the need for methods to analyse these pathways has emerged. One class of methods compares pathways, in order to discover parts that are evolutionarily conserved between species or to discover intra-species similarities. Most previous work has focused on methods targeted at metabolic pathways utilising the EC enzyme hierarchy. Here, we propose a Gene Ontology (GO) based approach for finding semantic local alignments when comparing paths in biological pathways where the nodes are gene products. The method takes advantage of all three sub-ontologies, and uses a measure of semantic similarity to calculate a match score between gene products. Our proposed method is applicable to all types of biological pathways where nodes are gene products, e.g. regulatory pathways, signalling pathways and metabolic enzyme-to-enzyme pathways. It would also be possible to extend the method to work with other types of nodes, as long as there is an ontology or abstraction hierarchy available for categorising the nodes. We demonstrate that the method is useful for studying protein regulatory pathways in S. cerevisiae, as well as metabolic pathways for the same organism.
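    One simple way to score semantic similarity between two ontology terms, used here as a stand-in for whatever measure GOSAP actually employs, is Wu–Palmer-style similarity: terms whose lowest common ancestor lies deep in the hierarchy score higher. A sketch over a toy GO-like hierarchy (the term names are invented; real GO is a DAG and its identifiers differ):

    ```python
    # Toy GO-style hierarchy: child -> parent (a tree for simplicity;
    # real GO is a DAG and real similarity measures are more involved).
    PARENT = {
        "biological_process": None,
        "metabolic_process": "biological_process",
        "catabolic_process": "metabolic_process",
        "biosynthetic_process": "metabolic_process",
        "signaling": "biological_process",
    }

    def path_to_root(term):
        # Walk upward: term, its parent, ..., the root.
        path = []
        while term is not None:
            path.append(term)
            term = PARENT[term]
        return path

    def wu_palmer(a, b):
        """Wu-Palmer-style similarity: deeper common ancestors score higher."""
        ancestors_a = path_to_root(a)          # ordered lowest-to-root
        ancestors_b = set(path_to_root(b))
        lca = next(t for t in ancestors_a if t in ancestors_b)
        depth = lambda t: len(path_to_root(t)) - 1
        return 2 * depth(lca) / (depth(a) + depth(b))

    # Siblings share the deep ancestor metabolic_process:
    print(wu_palmer("catabolic_process", "biosynthetic_process"))  # → 0.5
    # These two only share the root, so similarity is zero:
    print(wu_palmer("catabolic_process", "signaling"))             # → 0.0
    ```

    A match score like this, computed per gene-product pair, is the kind of ingredient a local alignment of two pathways can maximise.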

  • 23.
    Gamalielsson, Jonas
    et al.
    University of Skövde, School of Humanities and Informatics.
    Olsson, Björn
    University of Skövde, School of Humanities and Informatics.
    Nilsson, Patric
    University of Skövde, School of Humanities and Informatics.
    A Gene Ontology based Method for Assessing the Biological Plausibility of Regulatory Hypotheses (2005), Report (Other academic)
    Abstract [en]

    Many algorithms that derive gene regulatory networks from microarray gene expression data have been proposed in the literature. The performance of such an algorithm is often measured by how well a genetic network can recreate the gene expression data that the network was derived from. However, this kind of performance does not necessarily mean that the regulatory hypotheses in the network are biologically plausible. We therefore propose a Gene Ontology based method for assessing the biological plausibility of regulatory hypotheses at the gene product level using prior biological knowledge in the form of Gene Ontology annotation of gene products and regulatory pathway databases. Templates are designed to encode general knowledge, derived by generalizing from known interactions to typical properties of interacting gene product pairs. By matching regulatory hypotheses to templates, the plausible hypotheses can be separated from implausible ones. In a cross-validation test we verify that the templates reliably identify interactions which have not been used in the template creation process, thereby confirming the generality of the approach. The method also proves useful when applied to an example network reconstruction problem, where a Bayesian approach is used to create hypothetical relations which are evaluated for biological plausibility. The cell cycle pathway and the MAPK signaling pathway for S. cerevisiae and H. sapiens are used in the experiments.

  • 24.
    Grindal, Mats
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Thesis Proposal: Evaluation of Combination Strategies for Practical Testing2004Report (Other academic)
    Abstract [en]

    A number of combination strategies have been proposed during the last fifteen years. Combination strategies are test case selection methods where test cases are identified by combining interesting values of the test object's input parameters. Although some results, achieved from small isolated experiments and investigations, point in the direction that these methods are useful in practical testing, few attempts have been made to investigate these methods under realistic testing conditions. We outline a thesis proposal that is an attempt to determine if combination strategies are feasible alternatives to the currently used test case selection methods in practical testing.

    For combination strategies to be feasible alternatives to use in practical testing we require two things. Firstly, the combination strategies need to be effective in finding faults, at least as effective as currently used methods. Secondly, the cost per fault found when using combination strategies should not exceed the corresponding cost for the currently used methods.

    To investigate the effectiveness and efficiency of combination strategies we need to establish a benchmark from practical testing and then compare that with how combination strategies perform in the same or similar situations.

    Further, we need a testing process targeted at the use of combination strategies to be able to assess the complete cost of using them. Thus, an important part of this research project is to develop a combination strategy testing process. In particular, we focus on the activities of selecting which combination strategies to use and of transforming the requirements on the test object into a format suitable for combination strategies. These activities are specific to combination strategies and not very well understood.

    The methods used to achieve our research goal include literature surveys, an investigation of the state of practice with respect to the test case selection methods used and the cost of testing, experiments, tool implementations, and a proof of concept in the form of a case study. In addition to the direct results of our investigations, we expect this research to result in detailed information about how to use the suggested test process. This information will include work instructions covering the manual parts. The process information will also include functional descriptions of the tools as well as interface descriptions of the input and output formats of each tool. These tool descriptions will make the test process generic in the sense that alternative tool implementations can be evaluated while keeping everything else constant.

  • 25.
    Grindal, Mats
    et al.
    University of Skövde, School of Humanities and Informatics.
    Lindström, Birgitta
    University of Skövde, School of Humanities and Informatics.
    Offutt, Jeff
    University of Skövde, School of Humanities and Informatics.
    Andler, Sten F
    University of Skövde, School of Humanities and Informatics.
    An Evaluation of Combination Strategies for Test Case Selection2003Report (Other academic)
    Abstract [en]

    In this report we present the results from a comparative evaluation of five combination strategies. Combination strategies are test case selection methods that combine interesting values of the input parameters of a test object to form test cases. One of the investigated combination strategies, namely the Each Choice strategy, satisfies 1-wise coverage, i.e., each interesting value of each parameter is represented at least once in the test suite. Two of the strategies, the Orthogonal Arrays and Heuristic Pair-Wise strategies, both satisfy pair-wise coverage, i.e., every possible pair of interesting values of any two parameters is included in the test suite. The fourth combination strategy, the All Values strategy, generates all possible combinations of the interesting values of the input parameters. The fifth and last combination strategy, the Base Choice combination strategy, satisfies 1-wise coverage but in addition makes use of some semantic information to construct the test cases.

    Except for the All Values strategy, which is only used as a reference point with respect to the number of test cases, the combination strategies are evaluated and compared with respect to number of test cases, number of faults found, test suite failure density, and achieved decision coverage in an experiment comprising five programs, similar to Unix commands, seeded with 131 faults. As expected, the Each Choice strategy finds the smallest number of faults among the evaluated combination strategies. Surprisingly, the Base Choice strategy performs as well, in terms of detecting faults, as the pair-wise combination strategies, despite using fewer test cases. Since the programs and faults in our experiment may not be representative of actual testing problems in an industrial setting, we cannot draw any general conclusions regarding the number of faults detected by the evaluated combination strategies. However, our analysis shows some properties of the combination strategies that appear significant in spite of the programs and faults not being representative. The two most important results are that the Each Choice strategy is unpredictable in terms of which faults will be detected, i.e., most faults found are found by chance, and that the Base Choice and the pair-wise combination strategies to some extent target different types of faults.
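    The coverage notions compared above can be illustrated in a few lines. This is a generic sketch with invented parameters, not the algorithms evaluated in the report: an Each Choice suite covers every value of every parameter at least once (1-wise), while pair-wise coverage additionally requires every value pair of every two parameters.

```python
from itertools import combinations, product

def each_choice(params):
    """1-wise suite: pad shorter value lists by repeating their first value."""
    width = max(len(v) for v in params.values())
    names = list(params)
    return [
        tuple(params[n][i] if i < len(params[n]) else params[n][0] for n in names)
        for i in range(width)
    ]

def pairwise_covered(params, suite):
    """Check whether a suite satisfies pair-wise coverage."""
    names = list(params)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        needed = set(product(params[a], params[b]))
        seen = {(t[i], t[j]) for t in suite}
        if needed - seen:
            return False
    return True

# Invented input parameter model:
params = {"os": ["linux", "win"], "browser": ["ff", "chrome"], "net": ["lan", "wifi"]}
suite = each_choice(params)
print(len(suite))                       # 2 test cases suffice for 1-wise
print(pairwise_covered(params, suite))  # False: 1-wise does not imply pair-wise
```

    The All Values strategy corresponds to `list(product(*params.values()))`, which trivially satisfies pair-wise coverage at the cost of the largest possible suite.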

  • 26.
    Grindal, Mats
    et al.
    University of Skövde, School of Humanities and Informatics.
    Offutt, Jeff
    Information and Software Engineering, George Mason University, Fairfax, VA, USA.
    Mellin, Jonas
    University of Skövde, School of Humanities and Informatics.
    Handling Constraints in the Input Space when Using Combination Strategies for Software Testing2006Report (Other academic)
    Abstract [en]

    This study compares seven different methods for handling constraints in input parameter models when using combination strategies to select test cases. Combination strategies are used to select test cases based on input parameter models. An input parameter model is a representation of the input space of the system under test via a set of parameters and values for these parameters. A test case is one specific combination of values for all the parameters. Sometimes the input parameter model may contain parameters that are not independent. Some sub-combinations of values of the dependent parameters may not be valid, i.e., these sub-combinations do not make sense. Combination strategies, in their basic forms, do not take into account any semantic information. Thus, invalid sub-combinations may be included in test cases in the test suite. This paper proposes four new constraint handling methods and compares these with three existing methods in an experiment in which the seven constraint handling methods are used to handle a number of different constraints in different sized input parameter models under three different coverage criteria. All in all, 2568 test suites with a total of 634,263 test cases have been generated within the scope of this experiment.
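    A simple baseline for the constraint-handling problem described above, sketched with an invented input parameter model and constraint (the seven methods compared in the report are more elaborate): generate value combinations, then filter out test cases containing invalid sub-combinations.

```python
from itertools import product

# Invented input parameter model with a dependency between two parameters.
params = {"payment": ["card", "invoice"], "amount": ["low", "high"], "guest": [True, False]}

def valid(tc: dict) -> bool:
    # Invented constraint: guest checkout may not use invoice payment.
    return not (tc["guest"] and tc["payment"] == "invoice")

names = list(params)
suite = [dict(zip(names, combo)) for combo in product(*params.values())]
filtered = [tc for tc in suite if valid(tc)]
print(len(suite), len(filtered))  # 8 combinations, 6 valid
```

    Generate-and-filter is easy to state but can discard many test cases; the report's experiment (2568 test suites, 634,263 test cases) is precisely about comparing such handling methods under different coverage criteria.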

  • 27.
    Gunnar, Ulrika
    et al.
    University of Skövde, School of Life Sciences.
    Lindman, Sahra
    University of Skövde, School of Life Sciences.
    Att leva med venösa bensår: en kvalitativ intervjustudie om patienters upplevelser2009Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Approximately 50 000 people in Sweden suffer from leg ulcers, defined as "wounds on the leg and/or foot below the knee that have not healed within 6 weeks"; about half of these are venous leg ulcers. Most people who suffer from leg ulcers are 65 years or older. The purpose of this study was to describe patients' experiences of living with venous leg ulcers. The study is based on a qualitative method with an inductive approach. Data were collected through interviews with six patients and analyzed using content analysis. The results are based on two categories: a limited and restricted life, and a desire to be seen. These categories formed the theme: to be whole but still not. Based on this study, staff who care for and treat leg ulcer patients can increase their knowledge of what patients feel it is like to live with venous leg ulcers. Given this, it would be desirable to establish effective and well-structured care practices designed to achieve holistic healthcare and treatment of patients with leg ulcers.

  • 28.
    Gustafsson, Marie
    University of Skövde, School of Humanities and Informatics. Chalmers University of Technology.
    Representing Knowledge in Oral Medicine: Remodeling Clinical Examinations Using OWL2006Report (Other academic)
    Abstract [en]

    This report describes the remodeling of the representation of clinical examinations in oral medicine, from the previous proprietary format used by the MedView project, to using the World Wide Web Consortium's recommendations Web Ontology Language (OWL) and Resource Description Framework (RDF). This includes the representation of (1) examination templates, (2) lists of values that can be included in individual examination records, and (3) aggregates of such values used for e.g., analyzing and visualizing data. It also includes the representation of (4) individual examination records. We describe how OWL and RDF are used to represent these different knowledge components of MedView, along with the design decisions made in the remodeling process. These design decisions are related to, among other things, whether or not to use the constructs of domain and range, appropriate naming in URIs, the level of detail to initially aim for, and appropriate use of classes and individuals. A description of how these new representations are used in the previous applications and code base is also given, as well as their use in the Swedish Oral Medicine Web (SOMWeb) online community. We found that OWL and RDF can be used to address most, but not all, of the requirements we compiled based on the limitations of the MedView knowledge model. Our experience in using OWL and RDF is that, while there is much useful support material available, there is some lack of support for important design decisions and best practice guidelines are still under development. At the same time, using OWL gives us access to a potentially beneficial array of externally developed tools and the ability to come back and refine the knowledge model after initial deployment.

  • 29.
    Hajjar, Elie
    University of Skövde, School of Humanities and Informatics.
    Estetisk och teknisk karaktärsdesign: Hur en kvinnlig antagonist skapas med konventioner utifrån etablerade spel .2009Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The main focus of this degree project is how a female antagonist is created from both aesthetic and practical premises. For the aesthetic and technical norm, I took the game Devil May Cry 4 (Capcom, 2008) as my starting point. Based on the game's models, a study could be carried out on individual characters, in which the technical limitations and the aesthetic design could be established. I first created a concept based on the aesthetic choices, and from the study I obtained the information that allowed me, using 3D software employed in the games industry, to produce a character of my own, in this case a female antagonist. I wanted to create a female game character based on conventions that are at the same time challenged. The female antagonist is based on aesthetic choices and themes that emphasise the character's personality. My research questions are: How can codes and conventions be used to create a strong and prominent female character? And how can the chosen aesthetic options be applied with today's technical methods?

  • 30.
    Helldin, Tove
    et al.
    University of Skövde, School of Humanities and Informatics.
    Erlandsson, Tina
    University of Skövde, School of Humanities and Informatics.
    Decision support system in the fighter aircraft domain: the first steps2011Report (Other academic)
  • 31.
    Hemeren, Paul
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Johannesson, Mikael
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lebram, Mikael
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Eriksson, Fredrik
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Ekman, Kristoffer
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Systems Biology Research Centre.
    Veto, Peter
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    URBANIST: Signaler som används för att avläsa cyklisters intentioner i trafiken2013Report (Other academic)
    Abstract [sv]

    By observing a small number of specific signals, cyclists' behaviour can be predicted with good accuracy, which indicates that the identified signals are meaningful. Knowledge of these signals can, among other things, be put to practical use in developing simple aids, such as the deliberate placement of fluorescent or reflective material on joints and/or the introduction of helmets with differently coloured sides. Such aids can be expected to reinforce the communication of important signals. The knowledge can also be used to train inexperienced drivers. Both uses can, in the long run, provide a safer traffic environment for vulnerable road users.

  • 32.
    Ilves, Peter
    University of Skövde, School of Humanities and Informatics.
    Karaktärsdesign och spelbalans i actionstrategispel: Skapandet av karaktärer med fokus på balansering i spelet Bloodline Champions2009Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE creditsStudent thesis
    Abstract [sv]

    This work is a reflexive report that describes the practical work behind the creation of new characters, with a focus on balancing, in the game Bloodline Champions. First, the problems and the existing theories around the subject are presented. The game is then presented and a detailed description of the game mechanics is given. Finally, the work process is presented, together with a discussion of how the work was carried out and of its results.

    The work resulted in four playable characters for the game Bloodline Champions, a set of guidelines for how characters are developed in Bloodline Champions, and a balancing tool named Statsviewer.

  • 33.
    Jacobsson, Henrik
    University of Skövde, School of Humanities and Informatics.
    Rule Extraction from Recurrent Neural Networks: A Taxonomy and Review2004Report (Other academic)
    Abstract [en]

    Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding models of the underlying RNN, typically in the form of finite state machines, that mimic the network to a satisfactory degree. RE from RNNs can be argued to allow a deeper and more profound form of analysis of RNNs than other, more or less ad hoc methods. RE may give us an understanding of RNNs at the intermediate levels between quite abstract theoretical knowledge of RNNs as a class of computing devices and quantitative performance evaluations of RNN instantiations. The development of techniques for extraction of rules from RNNs has been an active field since the early nineties. In this paper, the progress of this development is reviewed and analysed in detail. In order to structure the survey and to evaluate the techniques, a taxonomy, specifically designed for this purpose, has been developed. Moreover, important open research issues are identified that, if addressed properly, could give the field a significant push forward.
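    The core idea, extracting a finite state machine that mimics the network, can be sketched in miniature. Everything here is invented for illustration: a one-dimensional hand-made update rule stands in for a trained RNN, and quantization of the continuous state gives the FSM states.

```python
from collections import deque

def rnn_step(state: float, symbol: str) -> float:
    # Invented update rule standing in for a trained RNN's state transition.
    return 0.5 * state + (0.5 if symbol == "a" else 0.0)

def quantize(state: float, bins: int = 2) -> int:
    # Map the continuous state to one of `bins` discrete FSM states.
    return min(int(state * bins), bins - 1)

def extract_fsm(alphabet, start=0.0, steps=50):
    """Breadth-first exploration of quantized states -> FSM transition table."""
    transitions, frontier, seen = {}, deque([start]), set()
    while frontier and steps > 0:
        s = frontier.popleft()
        q = quantize(s)
        for sym in alphabet:
            nxt = rnn_step(s, sym)
            transitions[(q, sym)] = quantize(nxt)
            if quantize(nxt) not in seen:
                seen.add(quantize(nxt))
                frontier.append(nxt)
        steps -= 1
    return transitions

print(extract_fsm(["a", "b"]))  # transition table over 2 states and symbols a/b
```

    Real extraction techniques differ mainly in how the state space is partitioned and explored, which is exactly the axis along which the paper's taxonomy organizes them.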

  • 34.
    Jacobsson, Henrik
    et al.
    University of Skövde, School of Humanities and Informatics.
    Ziemke, Tom
    University of Skövde, School of Humanities and Informatics.
    Reducing Complexity of Rule Extraction from Prediction RNNs through Domain Interaction2003Report (Other academic)
    Abstract [en]

    This paper presents a quantitative investigation of the differences between rule extraction through breadth-first search and rule extraction through sampling the states of the RNN in interaction with its domain. We show that for an RNN trained to predict symbol sequences in formal grammar domains, breadth-first search is especially inefficient for languages sharing properties with realistic real-world domains. We also identify some important research issues that need to be resolved to ensure further development in the field of rule extraction from RNNs.

  • 35.
    Karlsson, Alexander
    University of Skövde, School of Humanities and Informatics.
    Dependable and generic high-level information fusion: methods and algorithms for uncertainty management2007Report (Other academic)
    Abstract [en]

    The main goal of information fusion can be seen as exploiting diversities in information to improve decision making. The research field of information fusion can be divided into two parts: low-level information fusion and high-level information fusion. Most of the research so far has concerned the lower levels, e.g., signal processing and multisensor data fusion, while high-level information fusion, e.g., clustering of entities, has been relatively uncharted. High-level information fusion aims at providing decision support (human or automatic) concerning situations. A crucial issue for decision making based on such support is trust, defined as “accepted dependence”, where dependence or dependability is an overall term for other concepts, e.g., reliability. Dependability requirements in high-level information fusion refer to properties of belief measures and hypotheses regarding situations. Even though meeting such requirements is considered to be a precondition for trust in fusion-based decision-making, research in high-level information fusion that addresses this issue is scarce. Since most of the research in high-level information fusion relates to defense applications, another important issue is to generalize existing terminology, methods, and algorithms, in order to allow researchers in other domains to more easily adopt such results. In this report, it is argued that more research is needed on these issues, and a set of research questions for future research is presented.

  • 36.
    Karlsson, Alexander
    University of Skövde, School of Humanities and Informatics.
    Evaluating Credal Set Theory as a Belief Framework in High-Level Information Fusion for Automated Decision-Making2008Report (Other academic)
    Abstract [en]

    The goal of high-level information fusion is to provide effective decision support regarding situations, e.g., relations between events. One of the main ways that has been proposed in order to achieve this is to reduce uncertainty regarding the situation by utilizing multiple sources of information. There exist two types of uncertainty: aleatory and epistemic. Aleatory uncertainty, also known as uncertainty due to chance, cannot be reduced regardless of the amount of information. Epistemic uncertainty, on the other hand, also known as uncertainty due to lack of information, can be reduced if more information becomes available. Since the goal of high-level information fusion states that we want to reduce uncertainty by utilizing information, we conclude that the type of uncertainty referred to is epistemic in nature. Uncertainty in high-level information fusion is most often expressed via a belief framework. The most common such framework in high-level information fusion is precise Bayesian theory. In this thesis proposal we argue that precise Bayesian theory cannot adequately represent epistemic uncertainty and that there exists another belief framework, referred to as credal set theory, that possesses this ability. This can be demonstrated by something as simple as tossing a coin. In precise Bayesian theory, assuming no prior information about the coin, the same probability of "Head" can be adopted as the belief before any information is available, as a prior, as well as later when a large amount of information is available, as a posterior. By utilizing credal set theory, where a credal set is defined as a closed convex set of probability measures, this case amounts to representing the prior of "Head" as a probability interval, and the posterior as a smaller interval. The idea is that when a large amount of information is available, the interval converges to a point, i.e., the length of the interval, or degree of imprecision, reflects the degree of epistemic uncertainty. In precise Bayesian theory, a common automated decision-making strategy is to decide on the action that maximizes the expected utility with respect to a utility function and a probability measure. Since the probability measure cannot adequately reflect the amount of information on which it is based, this approach does not take epistemic uncertainty into consideration, i.e., it is possible to decide on an action based on a high degree of epistemic uncertainty, and not even be aware of it. By utilizing credal set theory, epistemic uncertainty is reflected by imprecision in both probabilities and expected utilities. The main problem addressed in this thesis proposal is to decide whether better automated decisions can be made by utilizing credal set theory as a belief framework in high-level information fusion, in comparison to precise Bayesian theory. The research question addressed is whether it is possible to characterize, in terms of degree of epistemic uncertainty, when and why one framework is better suited than the other for this purpose.
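    The coin example can be made concrete with the imprecise Dirichlet model, one standard way to realize a credal set over a Bernoulli parameter (the proposal itself argues at the level of credal set theory, not this particular model): with prior strength s, the posterior probability of "Head" after h heads in n tosses lies in the interval [h/(n+s), (h+s)/(n+s)].

```python
# Sketch of the coin example via the imprecise Dirichlet model: the interval
# for P("Head") shrinks as evidence accumulates, so its width reflects the
# remaining epistemic uncertainty. Hyperparameter s = 2.0 is a common choice.

def head_interval(h: int, n: int, s: float = 2.0):
    """Lower and upper posterior probability of 'Head' after h heads in n tosses."""
    return (h / (n + s), (h + s) / (n + s))

print(head_interval(0, 0))       # prior: the vacuous interval (0.0, 1.0)
print(head_interval(5, 10))      # some evidence: the interval shrinks
print(head_interval(500, 1000))  # much evidence: the interval is nearly a point
```

    A precise Bayesian posterior would be a single number in all three cases, which is exactly the inability to reflect the amount of underlying information that the proposal criticizes.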

  • 37.
    Lindström, Birgitta
    et al.
    University of Skövde, School of Humanities and Informatics.
    Nilsson, Robert
    University of Skövde, School of Humanities and Informatics.
    Ericsson, AnnMarie
    University of Skövde, School of Humanities and Informatics.
    Grindal, Mats
    University of Skövde, School of Humanities and Informatics.
    Andler, Sten F.
    University of Skövde, School of Humanities and Informatics.
    Eftring, Bengt
    University of Skövde, School of Humanities and Informatics.
    Offutt, Jeff
    George Mason University, Fairfax, VA, USA.
    Six Issues in Testing Event-Triggered Real-Time Systems2007Report (Other academic)
    Abstract [en]

    Verification of real-time systems is a complex task, with problems coming from issues like concurrency. A previous paper suggested dealing with these problems by using a time-triggered design, which gives good support both for testing and formal analysis. However, a time-triggered solution is not always feasible and an event-triggered design is needed. Event-triggered systems are far more difficult to test than time-triggered systems.

    This paper revisits previously identified testing problems from a new perspective and identifies additional problems for event-triggered systems. The paper also presents an approach to deal with these problems. The TETReS project assumes a model-driven development process. We combine research within three different fields: (i) transformation of rule sets between timed automata specifications and ECA rules with maintained semantics, (ii) increasing testability in event-triggered systems, and (iii) development of test case generation methods for event-triggered systems.

  • 38.
    Lindström, Birgitta
    et al.
    University of Skövde, School of Humanities and Informatics.
    Pettersson, Paul
    Uppsala University.
    Model-Checking with Insufficient Memory Resources2006Report (Other academic)
    Abstract [en]

    Resource limitation is a major problem in model checking. The space and time requirements of model-checking algorithms grow exponentially with the number of variables and parallel automata of the analyzed model. We present a method that is the result of experiences from a case study. It has enabled us to analyze models with much larger state spaces than was possible without our method. The basic idea is to build partitions of the state space of an analyzed system by iterative invocations of a model checker. In each iteration the partitions are extended to represent a larger part of the state space, and if needed the partitions are further partitioned. Thereby the analysis problem is divided into a set of subproblems that can be analyzed independently of each other. We present how the method, implemented as a meta-algorithm on top of the Uppaal tool, has been applied in the case study.

  • 39.
    Lubovac, Zelmina
    University of Skövde, School of Humanities and Informatics.
    Thesis Materials: Knowledge-based Methods for Identification of Functional Modules in Protein Interaction Networks2006Report (Other academic)
    Abstract [en]

    The majority of the current methods for identifying modules in protein interaction networks are based solely on analysing topological features of the networks. In contrast, the main idea that underpins the planned thesis is that combining topological information with knowledge about protein function will result in more biologically plausible modules than using approaches based solely on topology. We here propose approaches that use a combination of domain-specific knowledge, derived from Gene Ontology, and topological properties, to generate functional modules from protein interaction networks. By using yeast two-hybrid (Y2H) interactions from S. cerevisiae and knowledge in terms of Gene Ontology (GO) annotations, we have elucidated functional modules of interacting proteins.

    In this report, a summary of the proposed approaches is presented. The methods, which share the same rationale but have slightly different designs, have been implemented, tested and evaluated. The first approach, where we combine clusters of proteins based on their mutual-neighbour profiles with the corresponding clusters based on GO semantic similarity profiles, treats each of the aspects (functional knowledge and topology) separately to obtain functional clusters, and thereafter merges the clusters into one single structure. In contrast, the other approaches integrate both aspects from the beginning. They are two versions of a method named SWEMODE (Semantic WEights for MODule Elucidation), which uses a knowledge-based clustering coefficient to identify network modules. The first version uses the original protein interaction graph, and the second is a recently designed extension of SWEMODE where the k-cores of the graph are emphasised. We demonstrate that all three methods are able to identify the key functional modules in protein interaction networks.

    The first method was applied to smaller, well-studied networks that are known to contain modules of signalling pathways, while SWEMODE was applied to a large network containing 2 231 proteins and 6 379 interactions. The methods were also used to study inter-module connections, which is a step towards revealing a higher-order hierarchy between modules.

    In this report, we describe and discuss the proposed approaches, along with their strengths and weaknesses. We also propose further extensions and improvements of the proposed methods, some of which may be attempted as the final steps in the implementation phase of the dissertation.
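    A knowledge-based clustering coefficient of the kind SWEMODE builds on can be sketched as follows. This is an illustrative stand-in, not the report's algorithm: edge weights represent GO semantic similarity between interacting proteins, and a node's coefficient is the mean weight of the edges present among its neighbours, over all possible neighbour pairs.

```python
from itertools import combinations

def weighted_clustering(node, edges):
    """edges: dict mapping frozenset({u, v}) -> similarity weight in [0, 1]."""
    # Neighbours of `node` are all partners it shares an edge with.
    neighbours = {u for e in edges for u in e if node in e} - {node}
    pairs = list(combinations(sorted(neighbours), 2))
    if not pairs:
        return 0.0
    # Missing neighbour-neighbour edges contribute weight 0.
    total = sum(edges.get(frozenset(p), 0.0) for p in pairs)
    return total / len(pairs)

# Invented toy network with semantic-similarity edge weights:
edges = {
    frozenset({"A", "B"}): 0.9,
    frozenset({"A", "C"}): 0.8,
    frozenset({"B", "C"}): 0.6,  # A's neighbours are themselves connected
}
print(weighted_clustering("A", edges))  # → 0.6
```

    With binary weights this reduces to the ordinary clustering coefficient; using semantic similarity as the weight is what makes the module score knowledge-based rather than purely topological.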

  • 40.
    Lubovac, Zelmina
    et al.
    University of Skövde, School of Humanities and Informatics.
    Olsson, Björn
    University of Skövde, School of Humanities and Informatics.
    Towards Reverse Engineering of Genetic Regulatory Networks2003Report (Other academic)
    Abstract [en]

    The major goal of computational biology is to derive regulatory interactions between genes from large-scale gene expression data and other biological sources. There have been many attempts to reach this goal, but the field needs more research before we can claim that we have reached a complete understanding of reverse engineering of regulatory networks. One of the aspects that have not been considered to a great extent in the development of reverse engineering approaches is combinatorial regulation. Combinatorial regulation can be obtained by the presence of modular architectures in regulation, where multiple binding sites for multiple transcription factors are combined into modular units.

    When modelling regulatory networks, genes are often considered as "black boxes", where gene expression level is an input signal and changed level of expression is the output. We need to shed light on reverse engineering of regulatory networks by modelling the gene "boxes" at a more detailed level of information, e.g., by using regulatory elements as input to gene boxes as a complement to expression levels. Another problem in the context of inferring regulatory networks is the difficulty of validating inferred interactions because it is practically impossible to test and experimentally confirm hundreds to thousands of predicted interactions. Therefore, we need to develop an artificial network to evaluate the developed method for reverse engineering. One of the major research questions that will be proposed in this work is: Can we reverse engineer the cis-regulatory logic controlling the network organised by modular units?

    This work aims to give an overview of possible research directions in this field, as well as the chosen direction for future work where more research is needed. It also gives a theoretical foundation for the reverse engineering problem, where key aspects are reviewed.

  • 41.
    Mathiason, Gunnar
    University of Skövde, School of Humanities and Informatics.
    A Simulation Approach for Evaluating Scalability of a Virtually Fully Replicated Real-time Database2006Report (Other academic)
    Abstract [en]

    We use a simulation approach to evaluate large scale resource usage in a distributed real-time database. Scalability is often limited because resource usage grows by more than the resources added when a system is scaled up. Our approach of Virtual Full Replication (VFR) makes resource usage scalable, which allows large scale real-time databases. In this paper we simulate a large scale distributed real-time database with VFR, and we compare it to a fully replicated database (FR) for a selected set of system parameters used as independent variables. Both VFR and FR support local timeliness of transactions by ensuring local availability of the data objects accessed by transactions. The difference is that VFR has scalable resource usage due to lower bandwidth usage for data update replication. The simulation shows that a simulator has several advantages for studying large scale distributed real-time databases and for studying scalability of resource usage in such systems.

  • 42.
    Mathiason, Gunnar
    University of Skövde, School of Humanities and Informatics.
    Virtual Full Replication for Scalable Distributed Real-Time Databases2006Report (Other academic)
    Abstract [en]

    Distributed real-time systems increase in size and complexity, and the nodes in such systems become difficult to implement and test. In particular, communication for synchronization of shared information in groups of nodes becomes complex to manage. Several authors have proposed using a distributed database as a communication subsystem, to off-load database applications from explicit communication. This leaves the task of information dissemination to the replication mechanisms of the database. With increasingly larger systems, however, there is a need to manage the scalability of such a database approach. Furthermore, timeliness for database clients requires predictable resource usage, and scalability requires bounded resource usage in the database system. Thus, predictable resource management is an essential function for realizing timeliness in a large scale setting.

    We discuss scalability problems and methods for distributed real-time databases in the context of the DeeDS database prototype. Here, all transactions can be executed timely at the local node due to main memory residence, full replication, and detached replication of updates. Full replication contributes to timeliness and availability, but has a high cost in excessive usage of bandwidth, storage, and processing, since all updates are sent to all nodes regardless of whether they will be used there. In particular, unbounded resource usage is an obstacle for building large scale distributed databases. For many application scenarios it can be assumed that most of the database is shared by only a limited number of nodes. Under this assumption it is reasonable to believe that the degree of replication can be bounded, so that a bound can also be set on resource usage.

    The thesis proposal identifies and elaborates research problems for bounding resource usage in large scale distributed real-time databases. One objective is to bound resource usage by taking advantage of pre-specified data needs, but also by detecting unspecified data needs and adapting resource management accordingly. We elaborate and evaluate the concept of virtual full replication, which provides an image of a fully replicated database to database clients. It makes data objects available where needed, while fulfilling timeliness and consistency requirements on the data.

    In the first part of our work, virtual full replication makes data available where needed by taking advantage of pre-specified data accesses to the distributed database. For hard real-time systems, the required data accesses are usually known, since such systems need to be well specified to guarantee timeliness. However, there are many applications where data accesses cannot be specified before execution. The second part of our work extends virtual full replication to such applications. By detecting new and changed data accesses during execution and adapting database replication accordingly, virtual full replication can continuously provide the image of full replication while preserving scalability.

    One objective of the thesis work is to quantify scalability in the database context, so that actual benefits and achievements can be evaluated. Further, we identify the conditions for setting bounds on resource usage for scalability, under both static and dynamic data requirements.

  • 43.
    Mathiason, Gunnar
    et al.
    University of Skövde, School of Humanities and Informatics.
    Amirijoo, Mehdi
    Linköping University.
    Real-time Communication Through a Distributed Resource Reservation Approach (2004). Report (Other academic)
    Abstract [en]

    Bandwidth reservation for real-time networks offers an approach to real-time networking in a switched Ethernet or IP setting. In a switched network, switches prevent nondeterministic back-off times at collisions, and bandwidth reservation limits send rates to protect against overallocation of link bandwidth. The work in this paper aims at avoiding the problems of a centralized bandwidth broker for bandwidth reservation in a throttled real-time network. A centralized broker (such as a 'GlobeThrottle') is a single point of failure in a distributed system and is also a hot-spot resource, which all nodes of the system use for registering new real-time channels. To avoid both these problems we propose a distributed algorithm to be used instead of a central bandwidth broker. Also, the GlobeThrottle approach uses TCP/IP communication for channel allocation, which gives nondeterministic channel allocation times; however, this problem is not addressed in this paper.

    In the proposed solution, Real-time Communication Through a Distributed Resource Reservation Approach (STRUTS), real-time channels are throttled and the throttle level for each sending node is agreed between all nodes. The agreement must be atomic to avoid transient bandwidth over-usage due to temporary inconsistencies between nodes ('mutual inconsistencies') in the throttling level information for different nodes. In this paper we use two-phase commit (2PC) for atomic node agreements, where changes in channel information become visible at the same time instant at all nodes. Thus, throttling based on the agreed channel information guarantees that the maximal bandwidth is not exceeded. Using a distributed agreement avoids the problems of a single point of failure and hot-spot behavior, and is thus scalable to some extent. However, 2PC incurs other scalability problems, since it requires that the network is not partitioned and that nodes are locked during the agreement process, which prevents other nodes from concurrently allocating channels for the same node. When using 2PC for agreement, deterministic channel allocation time is not possible when there are no guarantees on the maximum locking time for the channel data. Experimental results verify that STRUTS avoids overallocation of links.
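    The atomic agreement step can be sketched as a minimal in-memory 2PC simulation (class and function names here are illustrative, not taken from the STRUTS implementation):

```python
# Two-phase commit over channel allocations: a new channel is admitted
# only if every node votes yes in the prepare phase; otherwise all
# nodes abort, so the committed channel table stays consistent and the
# agreed bandwidth bound is never exceeded.

class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # link bandwidth budget at this node
        self.channels = {}        # committed channel -> bandwidth
        self.pending = None

    def prepare(self, channel, bw):
        # Vote yes only if the new channel keeps usage within capacity.
        if sum(self.channels.values()) + bw <= self.capacity:
            self.pending = (channel, bw)
            return True
        return False

    def commit(self):
        channel, bw = self.pending
        self.channels[channel] = bw
        self.pending = None

    def abort(self):
        self.pending = None

def allocate_channel(nodes, channel, bw):
    """All-or-nothing channel allocation via two-phase commit."""
    if all(n.prepare(channel, bw) for n in nodes):
        for n in nodes:
            n.commit()
        return True
    for n in nodes:
        n.abort()
    return False

nodes = [Node(f"n{i}", capacity=100) for i in range(3)]
print(allocate_channel(nodes, "ch1", 60))  # True: fits at every node
print(allocate_channel(nodes, "ch2", 60))  # False: would exceed capacity
```

    The sketch also hints at the scalability caveat noted above: while the agreement runs, channel state is effectively locked, so concurrent allocations must wait.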

  • 44.
    Mellin, Jonas
    University of Skövde, School of Humanities and Informatics.
    Event Monitoring & Detection in Distributed Real Time Systems (1996). Report (Other academic)
    Abstract [en]

    This report is a survey of monitoring and event detection in distributed fault-tolerant real-time systems, as used primarily in active database systems, for testing and debugging purposes. It contains a brief overview of monitoring in general, with examples of how software systems can be instrumented in a distributed environment, and of the active database area, with the additional constraints of real-time systems discussed. The main part is a survey of event monitoring, mostly drawn from the active database area, with additional discussion of distribution and fault tolerance. Similarities between testing and debugging distributed real-time systems are described.

  • 45.
    Niklasson, Lars
    et al.
    University of Skövde, Department of Computer Science.
    Sharkey, Noel E.
    University of Sheffield, UK.
    Systematicity and Generalisation in Connectionist Compositional Representation (1993). Report (Other academic)
    Abstract [en]

    It has been argued that models that are claimed to be models of the mind have to exhibit behaviour closely related to human thought. This includes dealing with the issues of compositionality, systematicity and productivity. This paper starts by describing a non-concatenative mode of combination for connectionist patterns of neural activation. We then turn to the issue of systematicity, i.e. structure-sensitive processes. We explore this issue in some detail, e.g. the importance of choosing the 'right' type of representation and how the construction of the training set can result in different types of systematicity.

  • 46.
    Nilsson, Maria
    University of Skövde, School of Humanities and Informatics.
    Human decision making and information fusion: Extending the concept of decision support (2007). Report (Other academic)
    Abstract [en]

    Decision making is one of the most important human abilities. We utilise it from the very small decision of choosing a lunch restaurant to much more complex decision situations involving heterogeneous groups of people cooperating towards a common goal. Still, the problem is the same: we perceive a lot of information from many different sources all around us, giving us access to a large quantity of information, yet somehow we have difficulty making a decision from that information. Not surprisingly, this problem will not magically disappear or become any easier, considering the current development of advanced technology, which enables us to access even more information, in real time. This report addresses the new challenges put on decision making when one has to handle and make sense of a vast array of information from different sources. The report explores the use of information fusion to support this specific issue, identifies the limitations of current research concerning the intersection between decision making, decision support and information fusion, and makes suggestions for future research.

  • 47.
    Nilsson, Maria
    et al.
    University of Skövde, School of Humanities and Informatics.
    Riveiro, Maria
    University of Skövde, School of Humanities and Informatics.
    Ziemke, Tom
    University of Skövde, School of Humanities and Informatics.
    Investigating human-computer interaction issues in information-fusion-based decision support (2008). Report (Other academic)
    Abstract [en]

    Information fusion is a research area which focuses on how to combine information from many different sources to support decision making. Commonly used information fusion systems are often complex and are used in military and crisis management domains. The focus of information fusion research so far has been mainly on technological aspects; there is still a lack of understanding of the relevant user aspects that affect information fusion systems as a whole. This paper presents a framework of HCI issues which considers users as embedded in the context of information fusion systems. The framework aims at providing insights regarding factors that affect user interaction, to inform the development of future information fusion systems. Design considerations are presented together with a heuristic evaluation of an information fusion prototype.

  • 48.
    Nilsson, Robert
    University of Skövde, School of Humanities and Informatics.
    Automated Timeliness Testing of Dynamic Real-Time systems (2003). Report (Other scientific)
    Abstract [en]

    We address problems associated with testing real-time systems with on-line scheduling, where no exact estimations of worst-case execution times or load patterns can be acquired. Under these circumstances, testing the timeliness of a real-time system is imperative for gaining confidence in its correctness. In such real-time systems a huge effort is associated with testing, due to the nondeterminism of the execution environment. A framework for testing is proposed, which includes an approach for test case generation, testing criteria for timeliness testing, and methods for automating the test-case execution process. The suggested framework uses a formalized model for specifying the execution environment and applications, so that relevant execution orders of tasks can be selected. Test data is then produced to demonstrate that critical execution orders do not cause timing constraints to be violated.

  • 49.
    Nilsson, Robert
    et al.
    University of Skövde, School of Humanities and Informatics.
    Offutt, Jeff
    University of Skövde, School of Humanities and Informatics.
    Mellin, Jonas
    University of Skövde, School of Humanities and Informatics.
    Test case generation for testing of timeliness: Extended version (2005). Report (Other academic)
    Abstract [en]

    Temporal correctness is crucial for real-time systems. There are few methods to test temporal correctness, and most methods used in practice are ad hoc. A problem with testing real-time applications is that response times depend on the execution order of concurrent tasks. Execution orders in turn depend on scheduling protocols, task execution times, and the use of mutually exclusive resources, apart from the points in time when stimuli are injected. Model-based mutation testing has previously been proposed to determine the execution orders that need to be tested to increase confidence in timeliness. An effective way to automatically generate such test cases for dynamic real-time systems is still needed. This paper presents a method using heuristic-driven simulation for the generation of such test cases.
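    As a toy illustration of simulation-driven test-case search (greatly simplified relative to the paper's method; the task model, parameters, and random search below are assumptions), one can vary stimulus injection times and keep the input that maximises the response time of a target task under preemptive fixed-priority scheduling:

```python
# Discrete-time simulation of one activation per task on one CPU under
# preemptive fixed-priority scheduling; a random search over release
# offsets looks for an execution order that stresses the target task.
import random

def response_time(offsets, tasks):
    """tasks: list of (priority, execution_time); lower number = higher
    priority. offsets: release time per task. Returns the response time
    of the last (lowest-priority) task, taken here as the target."""
    remaining = [c for _, c in tasks]
    finish = [None] * len(tasks)
    t = 0
    while any(f is None for f in finish):
        ready = [i for i in range(len(tasks))
                 if offsets[i] <= t and finish[i] is None]
        if ready:
            i = min(ready, key=lambda i: tasks[i][0])  # highest priority
            remaining[i] -= 1
            if remaining[i] == 0:
                finish[i] = t + 1
        t += 1
    target = len(tasks) - 1
    return finish[target] - offsets[target]

tasks = [(0, 3), (1, 4), (2, 5)]  # (priority, execution time)
random.seed(1)
candidates = (tuple(random.randint(0, 10) for _ in tasks)
              for _ in range(200))
worst = max(candidates, key=lambda off: response_time(list(off), tasks))
print(worst, response_time(list(worst), tasks))
```

    The worst case found (full preemption by both higher-priority tasks) is the kind of critical execution order such a test would then check against the task's deadline; the paper's heuristics aim to find these orders far more effectively than blind random search.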

  • 50.
    Olsson, Björn
    University of Skövde, School of Humanities and Informatics.
    Optimization Using a Host-Parasite Model with Variable-Size Distributed Populations (1996). Report (Other academic)
    Abstract [en]

    This paper presents a model of coevolution between two variable-size, spatially distributed populations, evolving in an environment with a flow of resources. The two populations have a host-parasite relationship, where the host species depends on the uptake of resources from the environment for its reproduction. The parasites seek to "infect" host organisms and parasitize their resources to produce parasite offspring. We show how the approach can be used for optimization tasks by using instances of the problem task to determine the outcome of each interaction between a host and a parasite organism. As an initial test of the model, we apply it to the problem of designing sorting networks for several problem sizes: 6, 7, 8, and 9 inputs.
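    The evaluation of a candidate sorting network against problem instances can be sketched as follows (a hypothetical example; the 4-input network and the use of the zero-one principle are assumptions for illustration, not details from the paper):

```python
# A sorting network is a fixed sequence of compare-exchange gates.
# By the zero-one principle, a network that sorts every 0/1 input of
# length n sorts all inputs of length n, so exhaustive 0/1 testing
# can score a candidate in a host-parasite interaction.
from itertools import product

def apply_network(network, values):
    v = list(values)
    for i, j in network:  # compare-exchange: ensure v[i] <= v[j]
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def sorts_everything(network, n):
    """Check the network against all 2^n binary inputs."""
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product([0, 1], repeat=n))

# A correct 5-gate network for n=4 (the classical minimal construction).
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(sorts_everything(net4, 4))       # True
print(sorts_everything(net4[:-1], 4))  # False: the last gate is needed
```

    In a coevolutionary setting, parasites would present hard test inputs rather than the full exhaustive set, rewarding hosts that sort them and parasites that defeat hosts.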
