<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://hdl.handle.net/10174/29618">
    <title>DSpace Collection:</title>
    <link>http://hdl.handle.net/10174/29618</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://hdl.handle.net/10174/39300" />
        <rdf:li rdf:resource="http://hdl.handle.net/10174/33387" />
        <rdf:li rdf:resource="http://hdl.handle.net/10174/33362" />
        <rdf:li rdf:resource="http://hdl.handle.net/10174/31972" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T10:51:58Z</dc:date>
  </channel>
  <item rdf:about="http://hdl.handle.net/10174/39300">
    <title>Translating Natural Language Questions into CIDOC-CRM SPARQL Queries to Access Cultural Heritage Knowledge Bases</title>
    <link>http://hdl.handle.net/10174/39300</link>
    <description>Title: Translating Natural Language Questions into CIDOC-CRM SPARQL Queries to Access Cultural Heritage Knowledge Bases
Authors: Varagnolo, Davide; Melo, Dora; Pimenta Rodrigues, Irene
Editors: Rodriguez Echavarria, Karina
Abstract: To explore information on the semantic web, SPARQL queries or DL-queries are suitable tools. However, users interested in exploring the content of such knowledge bases often find it challenging to employ formal query languages, as this requires familiarity with the target domain’s representation model. To address these challenges, we present a question-answering system that automatically translates natural language questions into SPARQL queries over the Smithsonian American Art Museum CIDOC-CRM representation. The proposed approach uses an ontology, named the Query Ontology, defined to represent the natural language concepts and relations specific to the question’s domain. The system’s architecture follows a traditional symbolic natural language processing approach, with a pipeline of modules for syntactic, semantic, and pragmatic analysis. An evaluation of the proposed system is presented and shows very promising results.</description>
    <dc:date>2025-04-18T23:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10174/33387">
    <title>Fifty Years of Prolog and Beyond</title>
    <link>http://hdl.handle.net/10174/33387</link>
    <description>Title: Fifty Years of Prolog and Beyond
Authors: Koerner, Philipp; Leuschel, Michael; Barbosa, João; Santos Costa, Vitor; Dahl, Verónica; Hermenegildo, Manuel; Morales, José; Wielemaker, Jan; Diaz, Daniel; Abreu, Salvador; Ciatto, Giovanni
Abstract: Both logic programming in general and Prolog in particular have a long and fascinating history, intermingled with that of many disciplines they inherited from or catalyzed. A large body of research has been gathered over the last 50 years, supported by many Prolog implementations. Many implementations are still actively developed, while new ones keep appearing. Often, the features added by different systems were motivated by the interdisciplinary needs of programmers and implementors, yielding systems that, while sharing the “classic” core language, in particular the main aspects of the ISO-Prolog standard, also depart from each other in other aspects. This obviously poses challenges for code portability. The field has also inspired many related, but quite different languages that have created their own communities. This article aims at integrating and applying the main lessons learned in the process of evolution of Prolog. It is structured into three major parts. First, we overview the evolution of Prolog systems and the community approximately up to the ISO standard, considering both the main historic developments and the motivations behind several Prolog implementations, as well as other logic programming languages influenced by Prolog. Then, we discuss the Prolog implementations that are most active after the appearance of the standard: their visions, goals, commonalities, and incompatibilities. Finally, we perform a SWOT analysis in order to better identify the potential of Prolog and propose future directions along which Prolog might continue to add useful features, interfaces, libraries, and tools, while at the same time improving compatibility between implementations.</description>
    <dc:date>2022-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10174/33362">
    <title>Intelligent Decision Support for Cybersecurity Incident Response Teams: Autonomic Architecture and Mitigation Search</title>
    <link>http://hdl.handle.net/10174/33362</link>
    <description>Title: Intelligent Decision Support for Cybersecurity Incident Response Teams: Autonomic Architecture and Mitigation Search
Authors: Correa, Camilo; Robin, Jacques; Mazo, Raul; Abreu, Salvador
Abstract: Critical infrastructures must be able to mitigate, at runtime, suspected ongoing cyberattacks that have eluded preventive security measures. To tackle this issue, we first propose an autonomic computing architecture for a Cyber-Security Incident Response Team Intelligent Decision Support System (CSIRT-IDSS) with a precise set of technologies for each of its components. We then zoom in on the component responsible for proposing to the CSIRT automatically ranked sets of runtime actions to mitigate suspected ongoing cyberattacks. We formalize its task as a Constraint Optimization Problem (COP). We then propose to implement it as a Constraint Object-Oriented Logic Program (COOLP) deployed as a containerized web service through the integration of three orthogonal extensions of Logic Programming (LP): Web Service Oriented LP (WSOLP), Constraint LP (CLP), and Object-Oriented LP (OOLP). This integration supports seamlessly reusing platform- and task-independent cybersecurity ontological knowledge to dynamically build a mitigation action search COP that is customized to an input suspected cyberattack action set. This customization then allows the COP to be solved by a generic CLP engine efficiently enough to propose mitigation actions to the CSIRT team while they can still be effective. To validate this approach, we implemented a prototype called CARMAS (Cyber Attack Runtime Mitigation Action Search) and ran scalability tests on simulated attacks with various COP construction strategies.</description>
    <dc:date>2022-04-08T23:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/10174/31972">
    <title>NanoSen-AQM: From Sensors to Users</title>
    <link>http://hdl.handle.net/10174/31972</link>
    <description>Title: NanoSen-AQM: From Sensors to Users
Authors: Lucas, Pedro; Silva, Jorge; Araujo, Filipe; Silva, Catarina; Gil, Paulo; Arrais, Joel P.; Coutinho, Daniel; Salgueiro, Pedro; Saias, José; Nogueira, Vítor; Rato, Luís
Abstract: With rising environmental concerns regarding pollution, interest in monitoring air quality is increasing. However, air pollution data mostly originates from a limited number of government-owned sensors, which can only capture a small fraction of reality. Improving air quality coverage involves reducing the cost of sensors and making data widely available to the public. To this end, the NanoSen-AQM project proposes the usage of low-cost nano-sensors as the basis for an air quality monitoring platform, capable of collecting, aggregating, processing, storing, and displaying air quality data. Being an end-to-end system, the platform allows sensor owners to manage their sensors, as well as define calibration functions that can improve data reliability. The public can visualize sensor data on a map, define specific clusters (groups of sensors) as favorites, and set alerts in the event of bad air quality in certain sensors. The NanoSen-AQM platform provides easy access to air quality data, with the aim of improving public health.</description>
    <dc:date>2020-04-03T23:00:00Z</dc:date>
  </item>
</rdf:RDF>

