his.se Publications
Publications (10 of 15)
Said, A. & Torra, V. (2019). Data Science: An Introduction. In: Alan Said, Vicenç Torra (Ed.), Data Science in Practice (pp. 1-6). Springer
Data Science: An Introduction
2019 (English) In: Data Science in Practice / [ed] Alan Said, Vicenç Torra, Springer, 2019, p. 1-6. Chapter in book (Refereed)
Abstract [en]

This chapter gives a general introduction to data science as a concept and to the topics covered in this book. First, we present a rough definition of data science, and point out how it relates to the areas of statistics, machine learning and big data technologies. Then, we review some of the most relevant tools that can be used in data science ranging from optimization to software. We also discuss the relevance of building models from data. The chapter ends with a detailed review of the structure of the book.

Place, publisher, year, edition, pages
Springer, 2019
Series
Studies in Big Data, ISSN 2197-6503, E-ISSN 2197-6511 ; 46
National Category
Computer and Information Sciences Other Computer and Information Science
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-16778 (URN); 10.1007/978-3-319-97556-6_1 (DOI); 000464719500002 (); 978-3-319-97556-6 (ISBN); 978-3-319-97555-9 (ISBN)
Available from: 2019-04-15 Created: 2019-04-15 Last updated: 2019-09-30. Bibliographically approved
Holst, A., Bouguelia, M.-R., Görnerup, O., Pashami, S., Al-Shishtawy, A., Falkman, G., . . . Soliman, A. (2019). Eliciting structure in data. In: Christoph Trattner, Denis Parra, Nathalie Riche (Ed.), CEUR Workshop Proceedings. Paper presented at 2019 Joint ACM IUI Workshops, ACMIUI-WS 2019, Los Angeles, United States, 20 March 2019. CEUR-WS, 2327
Eliciting structure in data
2019 (English) In: CEUR Workshop Proceedings / [ed] Christoph Trattner, Denis Parra, Nathalie Riche, CEUR-WS, 2019, Vol. 2327. Conference paper, Published paper (Refereed)
Abstract [en]

This paper demonstrates how to explore and visualize different types of structure in data, including clusters, anomalies, causal relations, and higher-order relations. The methods are developed with the goal of being as automatic as possible and applicable to massive, streaming, and distributed data. Finally, a decentralized learning scheme is discussed, enabling structure to be found in the data without collecting the data centrally.

Place, publisher, year, edition, pages
CEUR-WS, 2019
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 2327
Keywords
Anomaly detection, Causal inference, Clustering, Distributed analytics, Higher-order structure, Information visualization, Information systems, User interfaces, Causal inferences, Data acquisition
National Category
Computer Sciences Human Computer Interaction
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-16748 (URN); 2-s2.0-85063227224 (Scopus ID)
Conference
2019 Joint ACM IUI Workshops, ACMIUI-WS 2019, Los Angeles, United States, 20 March 2019
Available from: 2019-04-05 Created: 2019-04-05 Last updated: 2019-09-30. Bibliographically approved
Said, A., Bae, J., Parra, D. & Pashami, S. (2019). IDM-WSDM 2019: Workshop on interactive data mining. In: WSDM 2019 - Proceedings of the 12th ACM International Conference on Web Search and Data Mining. Paper presented at 12th ACM International Conference on Web Search and Data Mining, WSDM 2019, 11 February 2019 through 15 February 2019 (pp. 846-847). Association for Computing Machinery (ACM)
IDM-WSDM 2019: Workshop on interactive data mining
2019 (English) In: WSDM 2019 - Proceedings of the 12th ACM International Conference on Web Search and Data Mining, Association for Computing Machinery (ACM), 2019, p. 846-847. Conference paper, Published paper (Refereed)
Abstract [en]

The first Workshop on Interactive Data Mining is held in Melbourne, Australia, on February 15, 2019, and is co-located with the 12th ACM International Conference on Web Search and Data Mining (WSDM 2019). The goal of this workshop is to share and discuss research and projects that focus on interaction with, and the interactivity of, data mining systems. The program includes an invited speaker, presentations of research papers, and a discussion session.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Keywords
Data mining, Human-in-the-loop, Interactive classification and clustering, Interactive dashboards, Visual modeling, Information retrieval, Websites, Data mining system, Interactive classification, Interactive data mining, Melbourne, Australia, Research papers, Visual model
National Category
Other Computer and Information Science Information Systems Interaction Technologies
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-16671 (URN); 10.1145/3289600.3291376 (DOI); 000482120400120 (); 2-s2.0-85061736320 (Scopus ID); 978-1-4503-5940-5 (ISBN)
Conference
12th ACM International Conference on Web Search and Data Mining, WSDM 2019, 11 February 2019 through 15 February 2019
Available from: 2019-03-01 Created: 2019-03-01 Last updated: 2019-09-12. Bibliographically approved
Bellogín, A. & Said, A. (2019). Information Retrieval and Recommender Systems. In: Alan Said, Vicenç Torra (Ed.), Data Science in Practice (pp. 79-96). Springer
Information Retrieval and Recommender Systems
2019 (English) In: Data Science in Practice / [ed] Alan Said, Vicenç Torra, Springer, 2019, p. 79-96. Chapter in book (Refereed)
Abstract [en]

This chapter gives a brief introduction to what artificial intelligence is. We begin by discussing some of the alternative definitions of artificial intelligence and introduce the four major areas of the field. In subsequent sections we then present these areas: problem solving and search, knowledge representation and knowledge-based systems, machine learning, and distributed artificial intelligence. The chapter continues with a discussion of some ethical dilemmas we find in relation to artificial intelligence. A summary closes the chapter.

Place, publisher, year, edition, pages
Springer, 2019
Series
Studies in Big Data, ISSN 2197-6503, E-ISSN 2197-6511 ; 46
National Category
Computer and Information Sciences Computer Sciences Philosophy
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-16809 (URN); 10.1007/978-3-319-97556-6_5 (DOI); 000464719500006 (); 978-3-319-97556-6 (ISBN); 978-3-319-97555-9 (ISBN)
Available from: 2019-04-24 Created: 2019-04-24 Last updated: 2019-09-30. Bibliographically approved
Bogers, T., Koolen, M., Mobasher, B., Said, A. & Petersen, C. (2018). 2nd Workshop on Recommendation in Complex Scenarios (ComplexRec 2018). In: RecSys 2018 - 12th ACM Conference on Recommender Systems. Paper presented at 2nd Workshop on Recommendation in Complex Scenarios (ComplexRec 2018), 12th ACM Conference on Recommender Systems, Vancouver, Canada, 2nd-7th October 2018 (pp. 510-511). Association for Computing Machinery (ACM)
2nd Workshop on Recommendation in Complex Scenarios (ComplexRec 2018)
2018 (English) In: RecSys 2018 - 12th ACM Conference on Recommender Systems, Association for Computing Machinery (ACM), 2018, p. 510-511. Conference paper, Published paper (Refereed)
Abstract [en]

Over the past decade, recommendation algorithms for rating prediction and item ranking have steadily matured. However, these state-of-the-art algorithms are typically applied in relatively straightforward scenarios. In reality, recommendation is often a more complex problem: it is usually just a single step in addressing the user's more complex background need. These background needs can often place a variety of constraints on which recommendations are interesting to the user and when they are appropriate. However, relatively little research has been done on these complex recommendation scenarios. The ComplexRec 2018 workshop addresses this by providing an interactive venue for discussing approaches to recommendation in complex scenarios that have no simple one-size-fits-all solution.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2018
Keywords
Complex recommendation, Constraint-based recommendation, Context-aware recommendation, Feature-driven recommendation, Query-driven recommendation, Task-based recommendation, Constraint-based, Context-aware recommendations, Task-based, Recommender systems
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-16465 (URN); 10.1145/3240323.3240332 (DOI); 000458675100093 (); 2-s2.0-85056774362 (Scopus ID); 9781450359016 (ISBN)
Conference
2nd Workshop on Recommendation in Complex Scenarios (ComplexRec 2018), 12th ACM Conference on Recommender Systems, Vancouver, Canada, 2nd-7th October 2018
Available from: 2019-01-30 Created: 2019-01-30 Last updated: 2019-07-10. Bibliographically approved
Said, A. & Bellogín, A. (2018). Coherence and inconsistencies in rating behavior: estimating the magic barrier of recommender systems. User Modeling and User-Adapted Interaction, 28(2), 97-125
Coherence and inconsistencies in rating behavior: estimating the magic barrier of recommender systems
2018 (English) In: User Modeling and User-Adapted Interaction, ISSN 0924-1868, E-ISSN 1573-1391, Vol. 28, no 2, p. 97-125. Article in journal (Refereed), Published
Abstract [en]

Recommender Systems have to deal with a wide variety of users and user types that express their preferences in different ways. This difference in user behavior can have a profound impact on the performance of the recommender system. Users receive better (or worse) recommendations depending on the quantity and the quality of the information the system knows about them. Specifically, the inconsistencies in users' preferences impose a lower bound on the error the system may achieve when predicting ratings for one particular user -- this is referred to as the magic barrier.

In this work, we present a mathematical characterization of the magic barrier based on the assumption that user ratings are afflicted with inconsistencies -- noise. Furthermore, we propose a measure of the consistency of user ratings (rating coherence) that predicts the performance of recommendation methods. More specifically, we show that user coherence is correlated with the magic barrier; we exploit this correlation to discriminate between easy users (those with a lower magic barrier) and difficult ones (those with a higher magic barrier). We report experiments where the recommendation error for the more coherent users is lower than that of the less coherent ones. We further validate these results by using two public datasets, where the necessary data to identify the magic barrier is not available, in which we obtain similar performance improvements.

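The magic-barrier idea in the abstract above can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's derivation or data: it assumes observed ratings are true preferences plus zero-mean Gaussian noise with a made-up standard deviation, and shows that even a predictor that knows the true preference cannot push RMSE below the noise level.

```python
import math
import random

random.seed(0)

n = 100_000
sigma = 0.5  # assumed rating-noise standard deviation (illustrative)

# Observed ratings = true preference + zero-mean noise.
true_prefs = [random.uniform(1.0, 5.0) for _ in range(n)]
observed = [t + random.gauss(0.0, sigma) for t in true_prefs]

# Even a perfect predictor that outputs the true preference is evaluated
# against the noisy observed ratings, so its RMSE approaches sigma --
# a lower bound (the "magic barrier") no algorithm can beat on this data.
rmse = math.sqrt(sum((o - t) ** 2 for o, t in zip(observed, true_prefs)) / n)
print(round(rmse, 2))
```

Under these assumptions, users with noisier ratings (a larger sigma) have a higher barrier, which matches the paper's intuition for distinguishing easy users from difficult ones.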
Place, publisher, year, edition, pages
Springer, 2018
Keywords
benchmarking, context, evaluation, evaluation metrics, magic barrier, noise, rating coherence, ratings, recommender systems, user behavior
National Category
Other Computer and Information Science
Research subject
Skövde Artificial Intelligence Lab (SAIL); INF301 Data Science
Identifiers
urn:nbn:se:his:diva-15038 (URN); 10.1007/s11257-018-9202-0 (DOI); 000433165000001 (); 2-s2.0-85045270728 (Scopus ID)
Available from: 2018-04-12 Created: 2018-04-12 Last updated: 2019-11-19. Bibliographically approved
Trattner, C., Said, A., Boratto, L. & Felfernig, A. (2018). Evaluating Group Recommender Systems. In: Alexander Felfernig, Ludovico Boratto, Martin Stettinger, Marko Tkalčič (Ed.), Group Recommender Systems: An Introduction (pp. 59-71). Springer
Evaluating Group Recommender Systems
2018 (English) In: Group Recommender Systems: An Introduction / [ed] Alexander Felfernig, Ludovico Boratto, Martin Stettinger, Marko Tkalčič, Springer, 2018, p. 59-71. Chapter in book (Refereed)
Abstract [en]

In the previous chapters, we have learned how to design group recommender systems but did not explicitly discuss how to evaluate them. The evaluation techniques for group recommender systems are often the same or similar to those that are used for single user recommenders. We show how to apply these techniques on the basis of examples and introduce evaluation approaches that are specifically useful in group recommendation scenarios.

Place, publisher, year, edition, pages
Springer, 2018
National Category
Other Computer and Information Science
Research subject
Skövde Artificial Intelligence Lab (SAIL); INF301 Data Science
Identifiers
urn:nbn:se:his:diva-15037 (URN); 978-3-319-75067-5 (ISBN); 978-3-319-75066-8 (ISBN)
Available from: 2018-04-12 Created: 2018-04-12 Last updated: 2018-09-03. Bibliographically approved
Bellogín, A. & Said, A. (2018). Recommender Systems Evaluation (2nd ed.). In: Reda Alhajj, Jon Rokne (Ed.), Encyclopedia of Social Network Analysis and Mining. Springer
Recommender Systems Evaluation
2018 (English) In: Encyclopedia of Social Network Analysis and Mining / [ed] Reda Alhajj, Jon Rokne, Springer, 2018, 2nd ed. Chapter in book (Refereed)
Place, publisher, year, edition, pages
Springer, 2018, Edition: 2
National Category
Other Computer and Information Science
Research subject
Skövde Artificial Intelligence Lab (SAIL); INF301 Data Science
Identifiers
urn:nbn:se:his:diva-15039 (URN); 10.1007/978-1-4939-7131-2_110162 (DOI); 978-1-4939-7130-5 (ISBN); 978-1-4939-7131-2 (ISBN); 978-1-4939-7132-9 (ISBN)
Note

The evaluation of RSs has been, and still is, the object of active research in the field. Since the advent of the first RS, recommendation performance has usually been equated with the accuracy of rating prediction: estimated ratings are compared against actual ratings, and the differences between them are computed by means of the MAE and RMSE metrics. In terms of the effective utility of recommendations for users, there is however an increasing realization that the quality (precision) of a ranking of recommended items can be more important than the accuracy in predicting specific rating values. As a result, precision-oriented metrics are being increasingly considered in the field, and a large amount of recent work has focused on evaluating top-N ranked recommendation lists with the above type of metrics. Besides that, other dimensions apart from accuracy – such as coverage, diversity, novelty, and serendipity – have recently been taken into account and analyzed when considering what makes a good recommendation (Said et al, 2014b; Cremonesi et al, 2011; McNee et al, 2006; Bellogín and de Vries, 2013; Bollen et al, 2010). So, what makes a good evaluation? The realization that high prediction accuracy might not translate to higher perceived performance from the users has brought a plethora of novel metrics and methods focusing on other aspects of recommendation (Said et al, 2013a; Castells et al, 2015; Vargas and Castells, 2014). Recent trends in evaluation methodologies point towards a shift away from traditional methods based solely on statistical analyses of static data, i.e., raising the precision performance of algorithms on offline data (Ekstrand et al, 2011b) – offline data in this case being recorded user interactions such as movie ratings or product purchases. Evaluation is the key to identifying how well an algorithm or a system works.
Deploying a new algorithm in a system will have an effect on the overall performance of that system, in terms of accuracy and other types of metrics. Both prior to deploying the algorithm and after deployment, it is important to evaluate the system's performance. It is in the evaluation of an RS that one needs to decide what should be sought for, e.g., depending on whether the evaluation is to be performed from the users' perspective (accuracy, serendipity, novelty), the vendor's perspective (catalog, profit, churn), or even from the technical perspective of the system running the RS (CPU load, training time, adaptability). Given the context of the system, there might be other perspectives as well; in summary, what is important is to define the Key Performance Indicator (KPI) that one wants to measure. Let us imagine an online marketplace where customers buy various goods: an improved recommendation algorithm could result in, e.g., increased numbers of sold goods, more expensive goods sold, more goods sold from a specific section of the catalog, customers returning to the marketplace more often, etc. When evaluating a system like this, one needs to decide what is to be evaluated – what the sought-for quality is – and how it is going to be measured.

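The rating-prediction and ranking metrics named in the note above (MAE, RMSE, and precision over top-N lists) can be sketched as follows. The rating values and ranked list are made-up illustrative data, not taken from the chapter.

```python
import math

def mae(predicted, actual):
    # Mean Absolute Error over paired rating predictions
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def rmse(predicted, actual):
    # Root Mean Squared Error; penalizes large errors more strongly than MAE
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def precision_at_n(ranked_items, relevant_items, n):
    # Fraction of the top-N recommended items that are relevant to the user
    return sum(1 for item in ranked_items[:n] if item in relevant_items) / n

predicted = [3.5, 4.0, 2.0, 5.0]
actual = [4.0, 4.0, 3.0, 4.5]
print(mae(predicted, actual))
print(rmse(predicted, actual))

ranked = ["a", "b", "c", "d", "e"]
relevant = {"a", "c", "f"}
print(precision_at_n(ranked, relevant, 3))
```

A KPI-driven evaluation, as the note suggests, would choose among metrics like these (or business measures such as revenue per session) depending on whose perspective is being measured.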
Available from: 2018-04-12 Created: 2018-04-12 Last updated: 2019-02-14. Bibliographically approved
Elsweiler, D., Schäfer, H., Ludwig, B., Torkamaan, H., Said, A. & Trattner, C. (2018). Third international workshop on health recommender systems (HealthRecSys 2018). In: RecSys 2018 - 12th ACM Conference on Recommender Systems. Paper presented at 12th ACM Conference on Recommender Systems, Vancouver, Canada, 2nd-7th October 2018 (pp. 517-518). Association for Computing Machinery (ACM)
Third international workshop on health recommender systems (HealthRecSys 2018)
2018 (English) In: RecSys 2018 - 12th ACM Conference on Recommender Systems, Association for Computing Machinery (ACM), 2018, p. 517-518. Conference paper, Published paper (Refereed)
Abstract [en]

The 3rd International Workshop on Health Recommender Systems was held in conjunction with the 2018 ACM Conference on Recommender Systems in Vancouver, Canada. Following the two prior workshops in 2016 [4] and 2017 [2], the focus of this workshop is to deepen the discussion on health promotion and health care, as well as health-related methods. This workshop also aims to strengthen the HealthRecSys community, to engage representatives of other health domains in cross-domain collaborations, and to exchange and share infrastructure.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2018
Keywords
Health Recommender System, Health-aware computing, Health-aware information systems, Recommender Systems, Well-being, Cross domain collaboration, Health promotion, International workshops, Vancouver, Canada, Well being
National Category
Information Systems, Social aspects Health Care Service and Management, Health Policy and Services and Health Economy
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-16464 (URN); 10.1145/3240323.3240336 (DOI); 000458675100097 (); 2-s2.0-85056773170 (Scopus ID); 9781450359016 (ISBN)
Conference
12th ACM Conference on Recommender Systems, Vancouver, Canada, 2nd-7th October 2018
Available from: 2019-01-30 Created: 2019-01-30 Last updated: 2019-07-10. Bibliographically approved
Ventocilla, E., Bae, J., Riveiro, M. & Said, A. (2017). A Billiard Metaphor for Exploring Complex Graphs. In: Marijn Koolen, Jaap Kamps, Toine Bogers, Nick Belkin, Diane Kelly, Emine Yilmaz (Ed.), Second Workshop on Supporting Complex Search Tasks. Paper presented at Second Workshop on Supporting Complex Search Tasks co-located with the ACM SIGIR Conference on Human Information Interaction & Retrieval (CHIIR 2017), Oslo, Norway, March 11, 2017 (pp. 37-40), 1798
A Billiard Metaphor for Exploring Complex Graphs
2017 (English) In: Second Workshop on Supporting Complex Search Tasks / [ed] Marijn Koolen, Jaap Kamps, Toine Bogers, Nick Belkin, Diane Kelly, Emine Yilmaz, 2017, Vol. 1798, p. 37-40. Conference paper, Published paper (Refereed)
Abstract [en]

Exploring and revealing relations between elements is a frequent task in exploratory analysis and search. Examples include that of correlations of attributes in complex data sets, or faceted search. Common visual representations for such relations are directed graphs or correlation matrices. These types of visual encodings are often - if not always - fully constructed before being shown to the user. This can be thought of as a top-down approach, where users are presented with a full picture for them to interpret and understand. Such a way of presenting data could lead to a visual overload, especially when it results in complex graphs with high degrees of nodes and edges. We propose a bottom-up alternative called Billiard where few elements are presented at first and from which a user can interactively construct the rest based on what s/he finds of interest. The concept is based on a billiard metaphor where a cue ball (node) has an effect on other elements (associated nodes) when struck against them.

Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 1798
Keywords
Visualization, interaction, correlation
National Category
Computer Systems
Research subject
Skövde Artificial Intelligence Lab (SAIL); INF301 Data Science
Identifiers
urn:nbn:se:his:diva-14775 (URN); 2-s2.0-85019592292 (Scopus ID)
Conference
Second Workshop on Supporting Complex Search Tasks co-located with the ACM SIGIR Conference on Human Information Interaction & Retrieval (CHIIR 2017), Oslo, Norway, March 11, 2017
Available from: 2018-02-27 Created: 2018-02-27 Last updated: 2018-09-24. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2929-0529
