
in: S. Ramanna, L. C. Jain and R. J. Howlett (Eds.) Emerging Paradigms in Machine Learning, pp. 249-275, Springer Verlag.

We present a general scheme of interaction and discuss the role of interactions in the modeling of perception processes. We also discuss the role of information systems in interactive computing used for perception modeling. In particular, we illustrate the use of information systems for the representation of actions or plans and their pre- and post-conditions (predicted as well as real). These information systems create a starting point for perception modeling, i.e., modeling of the process of understanding sensory measurements.

in: L. Czaja (Ed.) Proceedings of the Concurrency, Specification and Programming’2003 Workshop, vol. 2, University of Warsaw Press.

The paper aims at comparing Rough Set Theory (RST) and Formal Concept Analysis (FCA) with respect to the algebraic structures of concepts appearing in both theories, namely algebras of definable sets and concept lattices. The paper also presents the basic ideas and concepts of RST and FCA, together with some set-theoretical concepts connected with set spaces which can serve as a convenient platform for a comparison of RST and FCA. The last section gives necessary and sufficient conditions for families of definable sets and concept extents determined by the same formal context to be equal, which in the finite case is equivalent to an isomorphism of the respective structures and, in general, reflects a very specific situation in which both theories yield the same conceptual hierarchies.
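The contrast between the two families of concepts can be seen on a toy formal context. The sketch below (hypothetical data, not an example from the paper) computes concept extents via the FCA derivation operators and definable sets as unions of indiscernibility classes, and shows that the two families generally differ:

```python
from itertools import chain, combinations

# Toy formal context (hypothetical data): object -> attributes it has.
context = {"a": {"m1", "m2"}, "b": {"m1"}, "c": {"m2"}}
objects = frozenset(context)
attributes = set().union(*context.values())

def intent(A):
    """Derivation A': attributes common to all objects in A."""
    return set(attributes) if not A else set.intersection(*(context[g] for g in A))

def extent(B):
    """Derivation B': objects possessing every attribute in B."""
    return frozenset(g for g in objects if B <= context[g])

# FCA: concept extents are the closures A'' over all subsets of objects.
powerset = chain.from_iterable(combinations(objects, r) for r in range(len(objects) + 1))
concept_extents = {extent(intent(set(s))) for s in powerset}

# RST: definable sets are unions of indiscernibility classes
# (objects are indiscernible iff they have exactly the same attributes).
by_description = {}
for g in objects:
    by_description.setdefault(frozenset(context[g]), set()).add(g)
classes = [frozenset(c) for c in by_description.values()]
definable = {frozenset().union(*comb)
             for r in range(len(classes) + 1)
             for comb in combinations(classes, r)}

# Every object has a distinct description, so all 8 subsets are definable,
# but only 4 of them are concept extents: the families do not coincide.
```

In this context every concept extent happens to be definable but not conversely, illustrating why equality of the two families is a special situation requiring extra conditions.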

Fundamenta Informaticae, 127(1-5), 2013, pp. 529-544

We present a method for improving the detection of outlying Fire Service reports based on domain knowledge and dialogue with Fire & Rescue domain experts. An outlying report is an element which is significantly different from the remaining data. We follow the position of Professor Andrzej Skowron that effective algorithms in data mining and knowledge discovery in big data should interact with domain experts and/or be domain oriented. Outliers are defined and searched for on the basis of domain knowledge and dialogue with experts. We face the problem of reducing high data dimensionality without losing the specificity and real complexity of the reported incidents. We solve this problem by introducing a knowledge-based generalization level mediating between the analyzed data and the experts' domain knowledge. In our approach we use Formal Concept Analysis methods both to generate appropriate categories from the data and as tools supporting communication with domain experts. We conducted two experiments in finding two types of outliers, in which outlier detection was supported by domain experts.

Theoretical Computer Science, 454, 2012, pp. 240-260

We discuss the role of interactions in the modeling of perception processes. Interactive information systems, introduced in this paper, play an important role in this modeling. Moreover, the proposed approach opens a new research direction in rough set theory. In this approach, the partial information about the environment used for the approximation of concepts changes dynamically in a network of interacting information systems, in contrast to the static information systems typically used in rough set theory so far. In particular, we illustrate the use of such information systems for the representation of actions or plans and their (time-varying) pre- and post-conditions. These information systems create a starting point for perception modeling, i.e., modeling of the process of understanding sensory measurements. We also propose interactive grammars as a tool for modeling interactive computations in perception based computing.

Journal of Advanced Mathematics and Applications, 1, 2012, pp. 63-75

Risk management is of great importance for the success of the behaviors of individuals, groups, societies, as well as the whole civilization. The aim of this paper is to present a step toward building risk management systems based on computational models for interactive systems. Computations in such systems are performed in integrated distributed environments on objects of different kinds of complexity, called here information granules, and on some parts of matter (or hunks, for short). The computations progress by interactions among information granules and physical objects. We distinguish global and local computations. The former are performed by the environment (the nature), while the local computations are, in a sense, projections of the global computations on local systems; they represent information on global computations perceived by local systems. We assume that the laws of the nature are only partially known to the local systems. The approach is based on the interactive rough-granular approach. In particular, one can consider local computations relative to a given agent or a given society of agents. In the discussed approach, risk management tasks are considered as control tasks aiming at achieving satisfactory performance of (societies of) agents. The novelty of the approach is the use of complex vague concepts as the guards of control actions. These vague concepts are represented in domain ontologies. The rough set approach is used for the approximation of the vague concepts relative to attributes (features) available to the risk management systems. In the presented computing models, a mixture of reasoning based on deduction and induction is used. This approach also seems to be of some importance for developing computing models in different areas such as natural computing (e.g., computing models for meta-heuristics or computation models for complex processes in molecular biology), computing in distributed environments under uncertainty realized by multi-agent systems, modeling of computations for feature extraction (constructive induction) for the approximation of complex vague concepts, hierarchical learning, discovery of planning strategies or strategies for coalition formation by intelligent systems, as well as for approximate reasoning about interactive computations based on such computing models.

Information Sciences, 195, 2012, pp. 211-225

This article considers the origins, theoretical aspects and applications of tolerance spaces. In terms of the origin of tolerance spaces, this article calls attention to the seminal work by J.H. Poincaré (1854–1912) and E.C. Zeeman (1925–) on establishing the foundations for tolerance spaces. During the period from 1895 to 1912, Poincaré introduced sets of sensations and sequences of almost the same sensations as a means of characterizing the physical spectrum. The perception of physical objects that are almost the same leads to a tolerance space view of visual perception as well as other forms of perception such as touch and sound. Roughly 60 years later (in 1962), Zeeman formally introduced the notion of a tolerance space as a useful means of investigating a geometry of visual perception. In addition to the general theory of tolerance spaces, this article also carries forward earlier work on perceptual tolerance relations and considers the resemblance (nearness) between tolerance spaces. From an information systems point of view, it can be observed that tolerance spaces have proved to be fruitful in a number of research areas. Evidence of the utility of tolerance spaces in information systems can be seen in the introduction of tolerance rough sets, tolerance approximation spaces, and tolerance near sets. The contribution of this article is an overview of tolerance spaces considered in the context of visual perception and a presentation of a formal basis for the study of perceptual tolerance spaces.
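Zeeman's defining observation, that perceptual indistinguishability is reflexive and symmetric but not transitive, can be illustrated with a minimal numeric sketch (the readings and the threshold `eps` are illustrative assumptions, not data from the article):

```python
# Tolerance relation on sensor readings: x and y are indistinguishable
# when they differ by at most eps. The relation is reflexive and
# symmetric, but a chain of indistinguishable readings can drift
# past the threshold, so it is not transitive.
def tolerant(x, y, eps=1.0):
    return abs(x - y) <= eps

a, b, c = 0.0, 0.8, 1.6
assert tolerant(a, a)                                   # reflexive
assert tolerant(a, b) == tolerant(b, a)                 # symmetric
assert tolerant(a, b) and tolerant(b, c)                # a ~ b and b ~ c
assert not tolerant(a, c)                               # but not a ~ c
```

The failure of transitivity is exactly what distinguishes a tolerance space from an equivalence-based (Pawlak) approximation space.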

Theoretical Computer Science, 412, 2011, pp. 5939–5959

In this paper, we discuss the importance of information systems in modeling interactive computations performed on (complex) granules, and we propose a formal approach to interactive computations based on generalized information systems and rough sets which can be combined with other soft computing paradigms such as fuzzy sets or evolutionary computing, but also with machine learning and data mining techniques. Information systems are treated as dynamic granules used for representing the results of the interaction of attributes with the environment. Two kinds of attributes are distinguished, namely, the perception attributes, including sensory attributes, and the action attributes. Sensory attributes are the basic perception attributes; other perception attributes are constructed on the basis of the sensory ones. Actions are activated when their guards, which are often complex and vague concepts, are satisfied to a satisfactory degree. The guards can be approximated on the basis of measurements performed by sensory attributes rather than defined exactly. Satisfiability degrees for guards are results of reasoning called adaptive judgment. The approximations are induced using hierarchical modeling. We show that information systems can be used for modeling more advanced forms of interactions in hierarchical modeling. The role of hierarchical interactions is emphasized in the modeling of interactive computations. Some illustrative examples of interactions used in the ACT-R 6.0 system are reported. ACT-R 6.0 is based on a cognitive architecture and can be treated as an example of a highly interactive complex granule which can be involved in hierarchical interactions. For modeling interactive computations, we propose much more general information systems than the dynamic information systems studied so far (see, e.g., Ciucci (2010) [8] and Pałasiński and Pancerz (2010) [32]). For example, dynamic information systems make it possible to consider incremental changes in information systems. However, they do not contain the perception and action attributes necessary for modeling interactive computations, in particular for modeling intrastep interactions.

Transactions on Rough Sets XIII, journal subline of Lecture Notes in Computer Science, 6499, 2011, pp. 159-174

This paper elaborates on the introduction of the perceptual tolerance intersection of sets as an example of a near set operation. Such operations are motivated by the need to consider similarities between digital images viewed as disjoint sets of points. The proposed approach is in keeping with work by E.C. Zeeman on tolerance spaces and visual perception and work by J.H. Poincaré on sets of similar sensations used to define representative spaces (aka tolerance spaces) such as visual, tactile and motile spaces. The perceptual tolerance intersection of sets is a direct consequence of recent work on near sets. The theory of perceptual set intersection has many practical applications, such as a solution to the problem of how one goes about measuring the closeness of digital images. The main contribution of this article is a description-based approach to formulating perceptual set intersections between disjoint sets that resemble each other. A practical application of the proposed approach is the discovery of resemblances between sets of points in digital image regions that represent tolerance rough sets.

Control & Cybernetics, 40(2), 2011, pp. 1-23

This paper introduces an approach to the foundations of information science considered in the context of near sets. Perceptual information systems (or, more concisely, perceptual systems) provide stepping stones leading to nearness relations, near sets and a framework for classifying perceptual objects. This work has been motivated by an interest in finding a solution to the problem of how one goes about discovering affinities between perceptual granules such as images. Near set theory provides a formal basis for the observation, comparison and classification of perceptual granules. This is made clear in this article by considering various nearness relations that define coverings of sets of perceptual objects that are near each other. In the near set approach, every perceptual granule is a set of objects that have their origin in the physical world. Objects that have affinities to some degree, i.e., objects with similar descriptions, are considered perceptually near each other. This article includes a comparison of near sets with other approaches to approximate knowledge representation and a sample application in image analysis. The main contribution of this article is the introduction of a formal foundation for near sets and a demonstration that the family of near sets is a Grätzer slash lattice.

in: A. Skowron, Z. Suraj (Eds.) Rough Sets and Intelligent Systems - Professor Zdzisław Pawlak in Memoriam, pp. 577-600, Springer Verlag.

Pseudometric spaces are presented from the point of view of their connections with approximation spaces. A special way of determining equivalence relations by pseudometric spaces is considered, and open sets in pseudometric spaces are studied. Investigations focus on the class of pseudometric spaces which are lower bounded at each point, since open sets in these spaces coincide with the definable sets of some prescribed approximation spaces. It is also shown that all equivalence and non-transitive tolerance relations can be determined by pseudometric spaces in specified ways.

in: R. Bembenik, Ł. Skonieczny, H. Rybiński, M. Niezgódka (Eds.) Intelligent Tools for Building a Scientific Information Platform, pp. 107-120, Springer Verlag.

The paper discusses the fundamentals of the semantic evaluation of information retrieval systems. Semantic evaluation is understood in two ways. Semantic evaluation sensu stricto consists of automatic global methods of information retrieval evaluation which are based on knowledge representation systems. Semantic evaluation sensu largo also includes the evaluation of retrieved results presented using new methods and their comparison to previously used methods, which evaluated unordered sets of documents or lists of ranked documents. Semantic information retrieval methods can be treated as storing the meaning of words, which are the basic building blocks of retrieved texts. In the paper, ontologies are taken as systems which represent knowledge and meaning. Ontologies serve as a basis for the semantic modeling of information needs, which are modeled as families of concepts. Semantic modeling also depends on algorithmic methods of assigning concepts to documents. Some algebraic and partially ordered set methods in semantic modeling are proposed, leading to different types of semantic modeling. Then the semantic value of a document is discussed; it is relativized to a family of concepts and essentially depends on the ontology used. The paper focuses on the semantic relevance of documents, both binary and graded, together with the semantic ranking of documents. Various types of semantic value and semantic relevance are proposed, and some semantic versions of information retrieval evaluation measures are also given.

in: A.E. Hassanien, Z. Suraj, D. Ślęzak, P. Lingras (Eds.) Rough Computing. Theories, Technologies and Applications, pp. 1 – 37, Information Science Reference.

We present three types of knowledge which can be specified according to Rough Set theory. Then, we present three corresponding types of algebraic structures appearing in Rough Set theory. This leads to the following three types of vagueness: crispness, classical vagueness, and a new concept of "intermediate" vagueness. We also propose two classifications of information systems and approximation spaces. Based on them, we differentiate between information and knowledge.

Lecture Notes in Computer Science 8171, Springer 2013

in: L. Popova-Zeugmann (Ed.) Proceedings of 23rd International Workshop on Concurrency, Specification and Programming (CS&P 2014), pp. 269-280, Humboldt-Universität.

We present our research on acquiring domain knowledge related to urban vehicular traffic by means of interaction with experts. Such knowledge is needed in knowledge discovery and data mining for the approximation of complex vague concepts from road traffic. According to the perception based computing paradigm, this can be done by constructing hierarchical classifiers supported with expert knowledge. We treat traffic, especially urban traffic, as a complex process having a hierarchical structure. The complexity of this process makes traffic data massive and complex, which makes domain oriented hierarchical classifiers indispensable here. We propose a method of traffic domain knowledge acquisition by interaction with experts, aimed at the construction of such classifiers.

in: D. Ślęzak, G. Schaefer, S.T. Vuong, Y-S. Kim (Eds.) Proceedings of 10th International Conference Active Media Technology (AMT 2014), Lecture Notes in Computer Science, 8610, pp. 525-536.

In the paper we propose an adaptive system for intelligent traffic management in smart cities. We argue why traffic is a difficult and complex phenomenon, why such traffic management systems are necessary for the smart city, what the state of the art in traffic science and traffic management is, and how to improve existing solutions using methods that we develop, based on the Perception Based Computing paradigm.

in: P. Cellier, F. Distel, B. Ganter (Eds.): Contributions to the 11th International Conference on Formal Concept Analysis, ICFCA’2013, pp. 35-50.

We present a methodology for improving the detection of outlying Fire Service reports based on domain knowledge and dialogue with Fire & Rescue domain experts. An outlying report is an element which is significantly different from the remaining data. Outliers are defined and searched for on the basis of domain knowledge and dialogue with experts. We face the problem of reducing high data dimensionality without losing the specificity and real complexity of the reported incidents. We solve this problem by introducing a knowledge-based generalization level mediating between the analysed data and the experts' domain knowledge. In the methodology we use Formal Concept Analysis methods both to generate appropriate categories from the data and as tools supporting communication with domain experts. We conducted two experiments in finding two types of outliers, in which outlier detection was supported by domain experts.

in: S. Shimizu, T. Bossomaier (Eds.) Proceedings of COGNITIVE 2013, pp. 120-125, IARIA.

In the paper, we outline our research on obtaining domain knowledge related to vehicular traffic in cities using interaction with experts. The goal of acquiring such knowledge is to construct hierarchical domain oriented classifiers for the approximation of complex vague concepts related to road traffic. Interaction with experts in the construction of hierarchical classifiers is supported by software for the agent-based simulation of vehicular traffic in cities, the Traffic Simulation Framework, developed by the first author.

in: T. Li, H.S. Nguyen, G. Wang, J. Grzymala-Busse, R. Janicki, A.E. Hassanien, H. Yu (Eds.) Proceedings of 7th International Conference Rough Sets and Knowledge Technology, (RSKT 2012), Lecture Notes in Artificial Intelligence, 7414, 1-10.

Rough separability in topology is discussed through its connections with pseudometric spaces and rough sets. Pseudometric spaces are presented from the point of view of their connections with approximation spaces. A special way of determining equivalence relations by pseudometric spaces is considered, and open sets in pseudometric spaces are studied. Investigations focus on the class of pseudometric spaces which are lower bounded at each point, since open sets in these spaces coincide with the definable sets of some prescribed approximation spaces. It is also shown that all equivalence and non-transitive tolerance relations can be determined by pseudometric spaces in specified ways.

in: W. Kinsner (Ed.) Proceedings of CEEA'2012, pp. 51-62, University of Manitoba

Diverse attempts are being made to develop new computers, machines and systems that could act not only autonomously, but also in an increasingly intelligent, perceptual and cognitive manner. This paper discusses some of the educational challenges stemming from this emerging modelling and design paradigm, including teaching appropriate subjects to undergraduate and graduate students in university engineering programs.

in: L. Popova-Zeugmann (Ed.) Proceedings of Concurrency, Specification and Programming'2012, 358-369, Humboldt-Universität.

The aim of this paper is to present a step toward building computational models for interactive systems. Such computations are performed in integrated distributed environments on objects of different kinds of complexity, called here information granules. The computations progress by interactions among information granules and physical objects. We distinguish global and local computations. The former are performed by the environment (the nature), while the local computations are, in a sense, projections of the global computations on local systems; they represent information on global computations perceived by local systems. We assume that the laws of the nature are only partially known to the local systems. This approach seems to be of some importance for developing computing models in different areas such as natural computing (e.g., computing models for meta-heuristics or computation models for complex processes in molecular biology), computing in distributed environments under uncertainty realized by multi-agent systems, modeling of computations for feature extraction (constructive induction) for the approximation of complex vague concepts, hierarchical learning, discovery of planning strategies or strategies for coalition formation by intelligent systems, as well as for approximate reasoning about interactive computations based on such computing models. In the presented computing models, a mixture of reasoning based on deduction and induction is used.

in: T. Li, H.S. Nguyen, G. Wang, J. Grzymala-Busse, R. Janicki, A.E. Hassanien, H. Yu (Eds.) Proceedings of 7th International Conference Rough Sets and Knowledge Technology, (RSKT 2012), Lecture Notes in Artificial Intelligence, 7414, 416-421.

This article is focused on the recognition and prediction of blockages in fire stations using a granular computing approach. A blockage refers to the situation when all fire units are out and a new incident occurs. The core of the method is an estimation of the expected return times for the fire brigades based on the granularisation of the source data. This estimation, along with some other considerations, allows for an evaluation of the probability of a blockage.

in: J. Watada, T. Watanabe, G. Phillips-Wren, R.J. Howlett, L.C. Jain (Eds.) Intelligent Decision Technologies, Smart Innovation, Systems and Technologies vol. 16, pp. 391-402, Springer Verlag.

We present a general scheme of interaction and discuss the role of interactions in the modeling of perception processes. We use information systems as a starting point for perception modeling, i.e., modeling of the process of understanding sensory measurements. The novelty of the paper is an attempt to present the perception process by means of interactive grammars.

in: T.-h. Kim, Y.-h. Lee, B.-H. Kang, D. Ślęzak (Eds.) Proceedings of 2nd International Conference on Future Generation Information Technology (FGIT 2010), Lecture Notes in Computer Science, 6485, pp. 12–25.

We discuss basic notions of Perception Based Computing (PBC). Perception is characterized by sensory measurements and the ability to apply them to reason about the satisfiability of complex vague concepts used, e.g., as guards for actions or invariants to be preserved by agents. Such reasoning is often referred to as adaptive judgment. Vague concepts can be approximated on the basis of sensory attributes rather than defined exactly. Approximations usually need to be induced by using hierarchical modeling. Computations require interactions between granules of different complexity, such as elementary sensory granules, granules representing components of agent states, or complex granules representing classifiers that approximate concepts. We base our approach to interactive computations on generalized information systems and rough sets. We show that such systems can be used for modeling advanced forms of interactions in hierarchical modeling. Unfortunately, the discovery of structures for hierarchical modeling is still a challenge. On the other hand, it is often possible to acquire or approximate them from domain knowledge. Given appropriate hierarchical structures, it becomes feasible to perform adaptive judgment, starting from sensory measurements and ending with conclusions about the satisfiability degrees of vague target guards. Thus, our main claim is that PBC should enable users (experts, researchers, students) to submit domain knowledge by means of a dialog. It should also be possible to submit hypotheses about domain knowledge to be checked semi-automatically. PBC systems should be designed more like laboratories helping users in their research than like fully automatic data mining or knowledge discovery toolkits. In particular, further progress in understanding visual perception, as a special area of PBC, will be possible if it becomes more open to cooperation with experts from neuroscience, psychology or cognitive science. In general, we believe that PBC will soon become a necessity in many research areas.

in: M. Szczuka, M. Kryszkiewicz, R. Jensen, Q. Hu (Eds.) Proceedings of 7th International Conference on Rough Sets and Current Trends in Computing (RSCTC’2010) Lecture Notes in Artificial Intelligence 6068, pp 277-286.

This paper introduces the perceptual tolerance intersection of sets as an example of a near set operation. Such operations are motivated by the need to consider similarities between digital images viewed as disjoint sets of points. The proposed approach is in keeping with work by E.C. Zeeman on tolerance spaces and visual perception, and by J.H. Poincaré on sets of similar sensations used to define representative (aka tolerance) spaces such as visual, tactile and motile spaces. The perceptual tolerance intersection of sets is a direct consequence of recent work on near sets and a solution to the problem of how one goes about discovering affinities between digital images. The main contribution of this article is a description-based approach to assessing the resemblances between digital images.
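The description-based idea can be sketched in a few lines (an illustrative reading of the operation, with hypothetical descriptions and threshold, not the paper's formal definition): two disjoint point sets can still "intersect" perceptually when descriptions of points match across the sets within a tolerance.

```python
import math

# Hypothetical point descriptions: (average grey level, edge strength).
# Two disjoint "images", each a set of (label, description) pairs.
X = {("x1", (0.20, 0.9)), ("x2", (0.80, 0.1))}
Y = {("y1", (0.22, 0.9)), ("y2", (0.50, 0.5))}

def close(d1, d2, eps=0.05):
    """Descriptions within eps of each other are perceptually tolerant."""
    return math.dist(d1, d2) <= eps

# Perceptual tolerance intersection (one plausible reading): points of
# either set whose description matches some description from the other
# set, even though X and Y share no points.
pti = ({label for label, desc in X if any(close(desc, e) for _, e in Y)}
       | {label for label, desc in Y if any(close(desc, e) for _, e in X)})
# x1 and y1 resemble each other; x2 and y2 match nothing across sets.
```

The set-theoretic intersection of `X` and `Y` is empty, yet `pti` is not, which is the resemblance-between-disjoint-images phenomenon the paper studies.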

in: M. Szczuka, M. Kryszkiewicz, R. Jensen, Q. Hu (Eds.) Proceedings of 7th International Conference on Rough Sets and Current Trends in Computing (RSCTC’2010) Lecture Notes in Artificial Intelligence 6068, pp 277-286.

In this paper we discuss the importance of information systems in modeling interactive computations performed on (complex) granules, and propose a formal approach to interactive computations based on information systems. The basic concepts of information systems and rough sets are interpreted in the framework of interactive computations. We also show that information systems can be used for modeling more advanced forms of interactions, such as hierarchical ones. The role of hierarchical interactions is emphasized in modeling interactive computations. Some illustrative examples of interactions used in the hierarchical multimodal classification method as well as in the ACT-R 6.0 system are reported.

in: A. Wakulicz-Deja (Ed.) Proceedings of Decision System Support Conference, 4, 2009, pp. 33–45, Silesian University Press, Katowice.

In the paper we discuss the importance of information systems in modeling interactive computations performed on (complex) granules, and propose a formal approach to interactive computations based on information systems. The basic concepts of information systems and rough sets are interpreted in the framework of interactive computations. We also show that information systems can be used for modeling more advanced forms of interactions, such as hierarchical ones. The role of hierarchical interactions is emphasized in modeling interactive computations. Some illustrative examples of interactions used in the hierarchical multimodal classification method as well as in the ACT-R system are reported.

in: A. An, J. Stefanowski, S. Ramanna, C.J. Butz, W. Pedrycz, G. Wang (Eds.) Proceedings of 11th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing, Lecture Notes in Artificial Intelligence 448, pp. 435-442.

A novel approach to extending the notions of definability and rough set approximations in information systems with non-equivalence relations is proposed. The upper approximation is defined as the set-theoretic complement of the negative region of a given concept; therefore, it need not be definable. Fundamental properties of the new approximation operators are compared with previous ones reported in the literature. The proposed idea is illustrated within tolerance approximation spaces. In particular, granulation based on maximal preclasses is considered.
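The construction can be sketched in a tolerance approximation space (illustrative universe, threshold and concept, not taken from the paper): the negative region collects points whose tolerance neighbourhood misses the concept entirely, and the upper approximation is its complement, which in general need not be a union of neighbourhoods, i.e., need not be definable.

```python
# Universe of integer readings; tolerance: values differing by at most 1.
U = set(range(8))

def n(x):
    """Tolerance neighbourhood of x (reflexive, symmetric, not transitive)."""
    return {y for y in U if abs(x - y) <= 1}

X = {2, 3, 4}  # the concept to approximate

lower = {x for x in U if n(x) <= X}            # neighbourhood inside X
negative = {x for x in U if not (n(x) & X)}    # neighbourhood misses X
upper = U - negative                           # complement of negative region
# lower == {3}, negative == {0, 6, 7}, upper == {1, 2, 3, 4, 5}
```

The familiar sandwich `lower <= X <= upper` still holds, but because the neighbourhoods overlap rather than partition the universe, definability of the resulting regions is no longer automatic.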

in: D. Ślęzak, M. Szczuka, I. Duentsch, Y. Yao (Eds.) Proceedings of 10th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing, Lecture Notes in Artificial Intelligence 3641, pp. 114 – 123.

The aim of this paper is to compare concept lattices and approximation spaces. For this purpose, general approximation spaces are introduced. It is shown that formal contexts and information systems on the one hand, and general approximation spaces on the other, can be mutually represented: e.g., for every information system there exists a general approximation space such that both structures determine the same indiscernibility relation. A close relationship between Pawlak's approximation spaces and general approximation spaces also holds: for each approximation space there exists a general approximation space such that both spaces determine the same definable sets. It is shown on the basis of these relationships that the extent of every formal concept is a definable set in some Pawlak approximation space. The problem of when concept lattices are isomorphic to algebras of definable sets in approximation spaces is also investigated.

in: The Barcelona Conference on Set Theory, Centre de Recerca Matematica, Bellaterra, 2003.

in: Volume of Abstracts, 12th International Congress of Logic and Philosophy of Science, Oviedo 2003.

The Bulletin of Symbolic Logic, 31(1)

in: VIth Barcelona Logic Meeting, Centre de Recerca Matematica, Barcelona 2000.

in: M. Kryszkiewicz, S. Bandyopadhyay, H. Rybinski, S. K. Pal (Eds.) Proceedings of 6th International Conference on Pattern Recognition and Machine Intelligence (PReMI'2015), Lecture Notes in Computer Science, 9124, pp. 303-313

This paper is a preliminary step towards proposing a scheme for the synthesis of a concept out of a set of concepts, focusing on the following aspects. The first is that the semantics of a set of simple (or independent) concepts would be understood in terms of its prototypes and counterexamples, where these instances of positive and negative cases may vary with the change of the context, i.e., a set of situations which works as a precursor of an information system. Secondly, based on the classification of a concept in terms of the situations where it strictly applies and where it does not, a degree of application of the concept to some new situation/world would be determined. This layer of reasoning is named the logic of prototypes and counterexamples. In the next layer, the method of concept synthesis would be designed as a graded concept, based on the already developed degree-based approach for the logic of prototypes and counterexamples.

in: Z. Suraj, L. Czaja (Eds.) Proceedings of 24th International Workshop on Concurrency, Specification and Programming (CS&P 2015), pp. 234-236

in: Z. Suraj, L. Czaja (Eds.) Proceedings of 24th International Workshop on Concurrency, Specification and Programming (CS&P 2015), pp. 126-133