Tuesday 10 December 2019

Power Integrated: Large-Scale Experiments for IoT Security & Trust - ARMOUR


The ARMOUR project is focused on carrying out large-scale experimentally-driven research. To achieve this, a proper experimentation methodology will be implemented, the technologies subject to experimentation will be benchmarked, and a new certification scheme will be designed in order to provide a quality label for the proposed solutions. Also, the security and trust of large-scale IoT applications will be studied to establish design guidance for developing applications that are secure and trusted in the large-scale IoT. Finally, data and benchmarks from experiments will be properly handled, preserved and made available via the FIESTA IoT/Cloud infrastructure. For this purpose, this facility will be adapted and configured to hold the ARMOUR experimentation data and benchmarks. In this way, research data are properly preserved and made available to the research communities, making it possible to compare results of experiments performed in different testbeds and to compare results of disparate security and trust technologies.

A. Experimentation methodology 
The ARMOUR project adopts a large-scale, experimentally-driven research approach because of IoT complexity (high dimensionality, multi-level interdependencies and interactions, non-linear highly-dynamic behavior, etc.). This approach makes it possible to experiment with and validate technological research solutions under large-scale conditions, very close to real-life environments. Moreover, it follows a well-established methodology for conducting good experiments that are reproducible, extensible, applicable and revisable. The methodology thus checks repeatability, reproducibility and reliability conditions to ensure that experimental results generalize, and verifies their credibility. It consists of four phases, described below.

The first phase, Experimentation Definition & Support, marks the start of the experimentation process and involves:
• Definition of the IoT security and trust experiments (testing scenarios, required conditions, analysis dimensions) and of the technological architecture for ARMOUR experimentation;
• Research and development of the ARMOUR technological experimentation suite and of the benchmarking methodology for executing, managing and benchmarking large-scale security and trust IoT experiments;
• Analysis of the FIT IoT-LAB testbed [9] and the FIESTA IoT/Cloud platform to evaluate their composition, support and services from the perspective of ARMOUR experimentation.

In the next phase, Testbeds Preparation & Experimentation Set-up, the conditions for conducting the ARMOUR experimentation on the selected testbeds are established and the IoT security and trust experiments are prepared.
This phase involves:
• Extending, adapting and configuring the FIT IoT-LAB testbed with the ARMOUR experimentation suite to enable IoT large-scale security and trust experiments from the FIT IoT-LAB and FIESTA IoT/Cloud testbeds;
• Preparing the FIESTA IoT/Cloud platform to hold data sets from ARMOUR experiments, enabling researchers to perform security- and trust-oriented experiments, generate new data sets and perform benchmarks;
• Setting up and preparing the ARMOUR experiments by specifying the security and trust test patterns that will be used to execute and manage them.

The third phase, Experiments Execution, Analysis and Benchmark, represents the research core of the ARMOUR project: achieving proven security and trust solutions for the large-scale IoT. It comprises the following sub-phases, performed iteratively:
• Configure. Install and configure the scenarios of the IoT large-scale security and trust experiments.
• Measure. Take measurements and collect the data from experiments.
• Pre-process. Pre-process the stored experimentation data (e.g. data cleaning) and organize them (e.g. transformations, semantic annotations).
• Analyse. Analyse the experimentation data (prove hypotheses), benchmark the experiments and compare performance.
• Report. Report experiment results and, where possible, publish them for project dissemination.

The last phase of this methodology, Certification/Labelling & Applications Framework, focuses on the creation of the certification label and the establishment of a framework for secure and trusted IoT applications. It involves:
• Developing a new labelling scheme for large-scale IoT security and trust solutions that provides the user and market confidence needed for their deployment, adoption and use;
• Defining a framework that specifies how the different security and trust solutions can be used to support the design and deployment of secure and trusted applications for the large-scale IoT.

B. Experimentation execution 
The ARMOUR experiments will be executed on a large-scale IoT facility: the FIT IoT-LAB testbed. This facility has been enhanced to support secure and trusted experimentation and offers a testbed of more than 2700 wireless IoT nodes for developing and deploying large-scale IoT experiments. In addition, services already deployed, such as monitoring or sniffing, together with advanced services developed in the ARMOUR context, such as replay or denial-traffic injection, will make it possible to carry out large-scale IoT experiments in the security and trust domain. Examples of experiments that could be performed include:
1. Testing end-to-end connectivity between IoT nodes using protocols such as IPsec, CoAP [10] / DTLS [11] or 6LoWPAN [12], and testing secure solutions based on standard IETF protocols on different hardware/OS combinations.
2. Testing a secure MAC layer or low-level cryptographic mechanisms in a very stressful environment where an attacker can inject or replay certain data traffic. The sniffer service could also be used intensively in this test.
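The replay-injection experiment in item 2 hinges on the receiver being able to tell a fresh frame from a re-injected one. The following minimal Python sketch (the pre-shared key, frame layout and truncated-MAC length are illustrative assumptions, not part of the ARMOUR suite) shows one common countermeasure: authenticating a monotonically increasing sequence number so that a replayed frame is rejected.

```python
import hmac, hashlib

SHARED_KEY = b"experiment-psk"  # hypothetical pre-shared key for the testbed run

def make_message(seq: int, payload: bytes) -> bytes:
    """Build a frame whose MAC covers a monotonically increasing sequence number."""
    body = seq.to_bytes(4, "big") + payload
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()[:8]
    return body + tag

def accept(frame: bytes, last_seq: int) -> tuple[bool, int]:
    """Reject frames with a bad MAC or a non-increasing sequence number (replay)."""
    body, tag = frame[:-8], frame[-8:]
    if not hmac.compare_digest(tag, hmac.new(SHARED_KEY, body, hashlib.sha256).digest()[:8]):
        return False, last_seq
    seq = int.from_bytes(body[:4], "big")
    if seq <= last_seq:
        return False, last_seq          # replayed or reordered frame
    return True, seq

fresh = make_message(1, b"temp=21")
ok1, last = accept(fresh, 0)            # genuine frame: accepted
ok2, last = accept(fresh, last)         # same frame injected again: rejected
print(ok1, ok2)
```

Under stress testing, an attacker node would simply re-send captured frames; the sequence check above is what the experiment would exercise at scale.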

C. Secured and trusted applications for large-scale IoT

Nowadays, there are many areas of everyday life to which IoT applications can be extended. Within the ARMOUR project, the following application domains will be considered. The first is “remote healthcare”, in which the status of a patient is continuously monitored and the data are provided to a remote healthcare centre, so that doctors can apply corrective actions when the patient's status changes. In this context, privacy risks should be investigated to ensure that confidential data are not distributed or eavesdropped on by malicious entities. To address these risks, IoT applications must implement solutions such as defining different roles with different security levels for data access (e.g. RBAC) or applying cryptographic mechanisms, to prevent security risks from becoming safety risks. Another domain is “business from home (teleworking)”, in which workers connect to various remote centres and exchange information for business purposes. Here, IoT applications must provide appropriate authentication mechanisms and ensure confidentiality, so that business data are protected and only authenticated and authorized remote users can access business services. The “integration with the Smart City” is also considered and involves applications such as Intelligent Transport Systems (ITS) or e-Government that could be integrated into a Smart Home environment; the integration of multiple heterogeneous systems and the definition of different levels of security are the main concerns in this context. Finally, “smart mobile devices” are taken into account and include applications related to the deployment of IoT mobile devices in various contexts (e.g. shopping centres, logistics centres), such as robotic systems for cleaning or maintenance and production systems. In this case, it is necessary to prevent certain hazards and the disclosure of confidential information, and to ensure that coordinated operations cannot be disrupted by malicious attackers.

D. Experimentation data and benchmarks 
In ARMOUR, experimentation research data are mainly represented as data sets. These data sets may be preserved in different ways, each of which may impose different data access or usage policies. ARMOUR foresees the use of data repositories, and especially the FIESTA-IoT platform, to store, archive and preserve these data sets. FIESTA-IoT differs from typical data repositories by allowing experiments to be executed over the stored data sets. Moreover, FIESTA-IoT serves as a channel that gives experimenters and researchers access to the data, whether they come from the FIRE (Future Internet Research and Experimentation) community or from other research communities.
ARMOUR data sets will endow FIESTA-IoT with the capability to perform security-oriented experiments. At the same time, experimenters can use the platform's capabilities to run specific data-processing algorithms that transform data sets and generate new ARMOUR data sets, and to establish benchmarks that rank and compare results from different experiments and from different IoT security and trust technological solutions.

EXPERIMENTS’ DESCRIPTION 
Once the main aspects of the ARMOUR project have been detailed, this section presents the design of the proposed experiments, on which several tests will be carried out in order to verify security and trust in the IoT context. Specifically, three experiments are proposed, corresponding to the bootstrapping, group sharing and software programming stages, in order to solve or mitigate the threats that may arise in each of these stages. The first experiment focuses on providing a mechanism that allows devices to authenticate and to request authorization to publish information on an IoT platform. The second experiment defines a mechanism for secure information exchange between groups of devices through the platform. Finally, the third experiment focuses on the security mechanisms that ensure that both the programmed device and the programming entity are legitimate.
Before presenting the design of the experiments, it is appropriate to briefly describe the entities that will appear in them. These entities model different types of devices that are part of the IoT world.
• Smart Object or Device. A device with constrained capabilities in terms of processing power, memory and communication bandwidth (sensors, actuators, etc.). In the bootstrapping stage, a Smart Object acts both as a PANA Client (PaC) [13] and as a Data Producer, i.e., it tries to obtain network access in order to subsequently publish certain information on an IoT platform. In the group sharing stage, it can act as a Data Producer (publishing data on the IoT platform) or as a Data Consumer (receiving data from the IoT platform).
• Gateway. It allows Smart Objects to communicate with the Internet. In the bootstrapping stage, it provides the functionality of a PANA Authentication Agent (PAA), being responsible for authenticating and authorizing network access for the different Smart Objects acting as PaCs.
• AAA Server. Its functionality is to manage a very large number of devices in terms of authentication and authorization. In particular, in the bootstrapping stage it is responsible for managing the Smart Objects that need to be authenticated and authorized to access the IoT network.
• Policy Decision Point (PDP). The component of the policy-based access control system that determines whether or not to authorize a subject to perform an action over a resource by evaluating its access control policies. In the bootstrapping stage, it is responsible for deciding whether a Smart Object acting as a Data Producer is allowed to publish information on an IoT platform.
• Capability Manager. It generates authorization tokens for the different Smart Objects. In the bootstrapping stage, it interacts with the PDP to obtain authorization decisions and generates capability tokens accordingly.
• Attribute Authority. It issues and manages private keys. In the group sharing stage, it calculates the CP-ABE key [14] associated with the set of attributes of the Smart Object that requests it.
• Pub/Sub Server. It allows the registration, update and querying of data, as well as the sending of notifications when the registered information changes. In both stages (bootstrapping and group sharing), the Pub/Sub Server receives publications from a given Data Producer and sends notifications with updated data to the set of subscribed Data Consumers.
• Provisioning Server (PS). A repository that stores the software images meant to be installed on devices. This server is also responsible for validating the identity of the devices that request software updates.

A. Bootstrapping experiment

Figure 1 shows the bootstrapping experiment.

It begins with the network access phase, in which the Smart Object, acting as PaC, and the AAA Server exchange EAP messages [15] in order to verify the identity of the former. To do this, a RADIUS-based AAA infrastructure with a pass-through configuration is used. Note that the number of EAP messages exchanged between these two entities depends on the authentication method used (EAP-TLS, EAP-MD5, EAP-AKA, etc.). Depending on the result of the authentication (Accept/Reject), the Gateway, acting as PAA, will allow or deny network access to that Smart Object (PaC).
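As a rough illustration of the pass-through exchange above, the following Python sketch simulates an EAP-MD5-style challenge/response check as the AAA Server might perform it. The identities, credential store and helper names are invented for the example; a real deployment carries EAP over PANA and RADIUS rather than direct function calls.

```python
import hashlib, os

CREDENTIALS = {"smart-object-42": b"s3cret"}   # hypothetical AAA user database

def aaa_challenge() -> bytes:
    """The AAA Server issues a fresh random challenge."""
    return os.urandom(16)

def pac_response(password: bytes, challenge: bytes, ident: int = 1) -> bytes:
    # EAP-MD5 style response: MD5(identifier || password || challenge)
    return hashlib.md5(bytes([ident]) + password + challenge).digest()

def aaa_verify(identity: str, challenge: bytes, response: bytes, ident: int = 1) -> str:
    """Return the EAP result the PAA would forward to the PaC."""
    password = CREDENTIALS.get(identity)
    if password is None:
        return "Reject"
    expected = hashlib.md5(bytes([ident]) + password + challenge).digest()
    return "Accept" if response == expected else "Reject"

ch = aaa_challenge()
resp = pac_response(b"s3cret", ch)
print(aaa_verify("smart-object-42", ch, resp))   # the PAA grants access on "Accept"
```

In the real experiment the Gateway (PAA) only relays these messages; the decision logic stays on the AAA Server, exactly as in the pass-through configuration described above.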

Once the network access phase has been successfully completed, the phase for obtaining the capability token starts. The Smart Object initiates communication with the Capability Manager to request the token. The latter, before generating it, needs to know whether the Smart Object (subject) may publish (action) on the platform (resource). Thus, the Capability Manager sends an XACML request [16] to the PDP, which evaluates its access control policies and takes an authorization decision. If the decision is PERMIT, the Capability Manager generates the capability token and sends it to the Smart Object, allowing the latter to publish data on the IoT platform. When the Smart Object is in possession of the capability token, the data publication phase begins. In this phase, the Smart Object, acting as Data Producer, communicates with the Pub/Sub Server in order to publish information on the IoT platform, attaching the token obtained in the previous phase. On receiving the request, the Pub/Sub Server verifies the capability token and, if the verification is successful, the information is published.
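The capability-token flow above can be sketched as follows. This is an illustrative Python mock-up, not the ARMOUR implementation: the policy store, token fields and HMAC signing are assumptions standing in for the real PDP, XACML evaluation and capability-token format.

```python
import hmac, hashlib, json, time

CM_KEY = b"capability-manager-key"   # hypothetical Capability Manager signing key

POLICIES = [  # toy PDP policy store: subject/action/resource triples
    {"subject": "smart-object-42", "action": "publish",
     "resource": "pubsub/temperature", "effect": "PERMIT"},
]

def pdp_decide(subject: str, action: str, resource: str) -> str:
    """Evaluate the XACML-style request against the policy store (deny by default)."""
    for p in POLICIES:
        if (p["subject"], p["action"], p["resource"]) == (subject, action, resource):
            return p["effect"]
    return "DENY"

def issue_capability_token(subject, action, resource, ttl=300):
    """Capability Manager: ask the PDP, then sign the token claims."""
    if pdp_decide(subject, action, resource) != "PERMIT":
        return None
    claims = {"sub": subject, "act": action, "res": resource,
              "exp": int(time.time()) + ttl}
    body = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "sig": hmac.new(CM_KEY, body, hashlib.sha256).hexdigest()}

def pubsub_verify(token, action, resource) -> bool:
    """Pub/Sub Server: check signature, scope and expiry before publishing."""
    body = json.dumps(token["claims"], sort_keys=True).encode()
    if not hmac.compare_digest(token["sig"],
                               hmac.new(CM_KEY, body, hashlib.sha256).hexdigest()):
        return False
    c = token["claims"]
    return c["act"] == action and c["res"] == resource and c["exp"] > time.time()

tok = issue_capability_token("smart-object-42", "publish", "pubsub/temperature")
print(pubsub_verify(tok, "publish", "pubsub/temperature"))
```

Note the design choice mirrored from the text: the PDP only decides, the Capability Manager only issues, and the Pub/Sub Server verifies locally without contacting either of them.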

B. Group sharing experiment 
The experiment for the group sharing stage is shown in Figure 2. As observed, the experiment begins with the phase in which Smart Objects obtain their private keys. In this phase, the different Smart Objects that will act as Data Consumers request the corresponding CP-ABE key from the Attribute Authority. Note that, in that request, Data Consumers include their certificate (X.509 or attribute certificate) so that the Attribute Authority can extract from it the set of attributes of each Data Consumer and generate the corresponding CP-ABE keys. The subscription phase starts once the keys have been received by the Data Consumers. In this phase, the Data Consumers send subscription requests to the IoT platform for a given topic, such as humidity or temperature. Thus, when a Smart Object acting as Data Producer changes the topic value, the subscribed Data Consumers will receive a notification with the updated information. Next, the Data Producer initiates the publication and notification phase. Before publishing the information on the platform, the Data Producer uses the CP-ABE cryptographic scheme to encrypt the data under a policy of identity attributes (e.g. atr1 || (atr2 && atr3)). Once the information is encrypted, the Data Producer sends it to the Pub/Sub Server to publish it under a given topic. When the Pub/Sub Server receives the message, it checks whether Data Consumers have registered subscriptions for that topic. Then, for each subscribed Data Consumer, the Pub/Sub Server sends a notification with the new encrypted information associated with that topic. Finally, when a Data Consumer receives the data, it tries to decrypt them using its CP-ABE private key (obtained in phase 1), which is associated with its set of attributes. If this set of attributes satisfies the policy used in the encryption process, the information is revealed.
Note that all Data Consumers able to decrypt the data form a group with dynamic and ephemeral trust relationships between such entities.
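The decryption condition described above, an attribute set satisfying the policy fixed at encryption time, can be illustrated without implementing CP-ABE itself. The sketch below evaluates only the policy-satisfaction check; the tuple-based policy encoding is an assumption made for the example, and in the real scheme this check happens implicitly inside the cryptography.

```python
def satisfies(attrs: set[str], policy) -> bool:
    """policy is a nested tuple tree: ("OR", a, b), ("AND", a, b), or an attribute name."""
    if isinstance(policy, str):
        return policy in attrs
    op, *children = policy
    results = [satisfies(attrs, c) for c in children]
    return any(results) if op == "OR" else all(results)

# The example policy from the text: atr1 || (atr2 && atr3)
policy = ("OR", "atr1", ("AND", "atr2", "atr3"))

print(satisfies({"atr2", "atr3"}, policy))  # this Data Consumer could decrypt
print(satisfies({"atr2"}, policy))          # this one could not
```

The set of consumers for which `satisfies` returns true is exactly the dynamic, ephemeral group mentioned above: membership changes the moment the Data Producer encrypts under a different policy.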

C. Software programming experiment 
Figure 3 depicts the interactions between the entities that take part in the software update stage experiment. Here, the Provisioning Server (PS) receives a new software image to be executed by devices. The possible security issues related to this step are out of the scope of this testing. After receiving a new software image, the PS announces it to all devices in the network by means of broadcast/multicast messages. When a device receives the announcement, it may decide to request a software update and send a Software Access Request (step 3), starting the authentication process. This authentication protocol is strongly inspired by the Extensible Authentication Protocol (EAP) and includes some adaptations to improve efficiency under the conditions of the testing scenario. In the Software Access Request message, the device sends identification information that allows the receiver to identify its hardware and the current version of the software running on the device. When the PS receives this message, it tries to validate the Device ID, for instance by consulting registry tables or running validation algorithms over the ID. If the Device ID is considered valid, the PS sends a challenge to the device. An exchange of challenges then takes place, allowing the PS to prove its identity to the device and the latter to prove that the software it is running is legitimate. To achieve this, a Keyed-Hashing for Message Authentication (HMAC) mechanism is used, in which challenge expressions and software fingerprint information are combined using a hashing algorithm. Software fingerprints can be obtained from the execution of specific functions that collect and process metrics from the software, or can consist of a message/code embedded within the software that can be obtained using specific methods.
Software fingerprints are meant to be used to verify the authenticity of the software, because fingerprints are supposed to change whenever the software changes. Therefore, only the entities that know the fingerprint information of the software version being used will be able to produce the hash code. When the device informs the PS which software version it is running, the PS can search its database for the fingerprint to use in the challenge process.
If the challenge process is successful, a secure connection can be established and the software image is transferred from the PS to the device through an encrypted channel. At no point in the process should the software or the fingerprints be exchanged over unprotected communication channels. Different encryption algorithms, hashing functions, and software fingerprinting methods will be tested in order to identify the one(s) that provide the best security and energy performance.
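One direction of the challenge exchange described above can be sketched as follows. This is a simplified illustration (the fingerprint function, database layout and message fields are assumptions): the PS verifies that the device's HMAC over a fresh challenge was keyed with the fingerprint of a legitimate software image.

```python
import hmac, hashlib, os

# Hypothetical PS fingerprint database keyed by (device model, software version)
FINGERPRINTS = {("model-A", "v1.2"): hashlib.sha256(b"firmware-image-v1.2").digest()}

def device_fingerprint(image: bytes) -> bytes:
    """A trivial fingerprint function: any change to the image changes the digest."""
    return hashlib.sha256(image).digest()

def answer_challenge(fingerprint: bytes, challenge: bytes) -> bytes:
    # HMAC combines the shared fingerprint with the fresh challenge
    return hmac.new(fingerprint, challenge, hashlib.sha256).digest()

def ps_verify(model: str, version: str, challenge: bytes, response: bytes) -> bool:
    """PS side: look up the expected fingerprint and check the device's response."""
    fp = FINGERPRINTS.get((model, version))
    if fp is None:
        return False
    return hmac.compare_digest(response, hmac.new(fp, challenge, hashlib.sha256).digest())

ch = os.urandom(16)
genuine = answer_challenge(device_fingerprint(b"firmware-image-v1.2"), ch)
tampered = answer_challenge(device_fingerprint(b"firmware-image-v1.2-trojan"), ch)
print(ps_verify("model-A", "v1.2", ch, genuine), ps_verify("model-A", "v1.2", ch, tampered))
```

The symmetric direction (the device challenging the PS) works the same way with the roles swapped, which is what lets both sides authenticate before the image transfer begins.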


EDOT - Requirements for Knowledge Base Gain and Technological Upgrade in Industry Working in Train Parts Processing and Testing

This paper proposes a preliminary project for a technology upgrade in a case study concerning an industry working on the processing and machining of train parts. The proposed facilities relate to the traceability of working activities, to technological improvements of testing machines, and to electronic solutions providing new controls for wagons assigned to railway infrastructure works. The proposed study is useful for understanding how specifications for an industrial research project can be formulated following research and development principles based on knowledge gain and prototype implementations. The proposed work concerns a preliminary case study of technologies suitable for improving the activities of a company operating in the railway sector, with particular know-how in processing train parts. To date, the company has a warehouse of special pieces, of used parts and of unobtainable parts, whose processing and adaptation represent the company's real added value. Another important asset of the industry is its experience in carrying out specified tests. In this context, the company increasingly seeks to combine, structure and use existing scientific and technological capabilities in order to produce new processes linked to the automation of current processing procedures. The process automation to be implemented is mainly oriented towards warehouse traceability and towards new tests that improve existing processes, achieving efficiency in terms of work quality and work reliability. Furthermore, the automation is planned to be implemented by means of prototypal mechanical components and sensors adapted to the current processing and testing systems. In this context, the proposed study is mainly oriented towards acquiring new knowledge for engineering the processes linked to the introduction of technologies supporting processing.
Following the Frascati guidelines [1], the defined specifications make it possible to gain knowledge by engineering the processes using traceability and mechanical/pneumatic/mechatronic prototypes and measurement systems that are not present on the market (adaptive mechanical systems, particular assemblies of components following a functional logic, etc.). Specifically, the traceability system is an integral part of the Research and Development (R&D) topics, and is used for industrial engineering oriented towards process innovation. Furthermore, the proposed prototypes, which will be analyzed in the project, are defined as prototypal plants and as potentially patentable pilot plants. The process innovation to be achieved follows the functional logic of Industry 4.0 regarding the digitization of information associated with worker activities. For the definition of the project specifications, a preliminary feasibility study was carried out by analyzing the following state of the art in science and industrial research. These studies serve to define the scenario of the research project, and to introduce suitable technologies and processes to meet the needs emerging from preliminary inspections. In [2] researchers have shown how the Internet of Things (IoT) can constitute an important evolution of the enabling technologies of Industry 4.0. In this context it is therefore important to design processes able to integrate technologies that can be interfaced with Internet facilities, enabling the digitization of production process information, the automation of data acquisition, and digital links to the production sites.
In Industry 4.0, traceability plays an important role [3], and information digitization represents the first step towards process automation [4]. An element of novelty is the use of traceability in this specific case study, where it is adopted to find pieces in the warehouse while tracing their processing history at the same time. Each upgrade of the company's information flow will therefore correspond to a specific process model. The modeling and engineering of traceability processes in "manufacturing control systems" can be performed with the different models proposed in [5]. Among the versatile technologies that could potentially be used for the development of traceability processes are the barcode and the QR code [6]. The QR code technology can also be used in the marketing field [7], and can encode data in a "two-dimensional" way [8], thus increasing the knowledge associated with a web page. In [9] and [11] researchers have analyzed the differences between barcode and QR code systems, finding greater versatility in the latter. The use of the QR code can therefore potentially bring various benefits to the companies that adopt it [10], generating at the same time a competitive advantage due to the higher quality of the processes. Barcode and QR code technologies are suitable for whole supply chain management [12],[13]. According to the state of the art, the QR code can provide richer information about origin, piece processing, and piece history. Traceability is also important for risk management processes [14], and can potentially reduce the risks related to misinformation about the processing history of mechanical components, as in this case study. The traceability concept can also be applied to test procedures [15], following the steps indicated in [16]. A good traceability tool must satisfy specific requirements such as those indicated in [17].
In order to improve production traceability, it is almost always necessary to carry out a technological upgrade by adding Internet of Things (IoT) functionalities [18]. The concept of automatic process control, which is the basis of IoT systems, follows a data-flow model that integrates data storage and data processing from multiple systems [19]. The innovation proposed in the paper also concerns the implementation of IoT prototypes able to measure and optimize mechanical processing and testing by measuring the processing precision and the parameters to be controlled. Another important aspect is the mapping of processes, which can be performed with the Fishbone Diagram, the PDCA (Plan-Do-Check-Act) cycle and Xm-R charts [20]-[23]. According to recent works in the literature, alternative approaches to process mapping are the enhanced DMAIC (eDMAIC) model [24], machine learning oriented towards production quality [25], and artificial neural networks enabling predictive maintenance in Industry 4.0 [26]. All these methodologies can be applied to map the processes of the proposed prototypes and all the technological upgrades, in order to scientifically improve process mapping while defining an innovative model to predict processing inefficiencies. The goal of the paper is to show how a scientific industrial research project can be formulated by combining industry needs and the suggested requirements.
PRELIMINARY INDUSTRY PROJECT SPECIFICATIONS

The main criteria and phases adopted for the requirements definition are the following:
- mapping of the current industry process (process mapping “As Is”);
- identifying needs for quality process improvements;
- proposing new technologies able to improve production processes (process mapping “To Be”);
- formulating a preliminary project integrating new prototypes and new analysis methodologies.

This section lists the main project specifications based on the state of the art and on the needs of the company:

A. Traceability system

The production team has to manage a warehouse of special pieces for trains, which will be processed and tested in order to become finished installable products. Each piece must be catalogued by assigning it a unique ID and labelled with a QR code. The QR code will contain a link to a specific web page holding all the information related to the component to be processed, which will become a prototype as a modified piece used to adapt inoperative mechanical systems. A piece can be found in the following four states:
• Semi-finished;
• Processed;
• Tested;
• Finished.
Once a work piece has been identified, it is processed through a number of processing steps that allow it to become a product to be installed on a train. At the end of the work, the testing phase is carried out to certify that the piece is suitable for the new installation. All performed tests must be stored in a dedicated database system containing all the information about processing traceability: at any time it is possible, by reading the QR code, to visualize the whole piece history (main data, works carried out, tests carried out, measurements, etc.). Fig. 1 illustrates the layout designed for the traceability system of the case study: the layout shows the network infrastructure with the classification of the areas subject to traceability (see also the block diagram of Fig. 2). The proposed network infrastructure represents a technological upgrade necessary for the definition of new processes linked to traceability. The proposed layout will comprise the following elements:
• 1 physical server to be placed at the offices;
• 1 connection PC;
• 1 labeller to be placed at the offices;
• 5 access points;
• 1 network switch;
• 2 mobile QR code readers.
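The four piece states and the QR-readable history described above suggest a simple state-machine record per piece. The Python sketch below is illustrative only; the class, URL scheme and field names are assumptions, not part of the proposed system.

```python
STATES = ["Semi-finished", "Processed", "Tested", "Finished"]

class Piece:
    """Hypothetical warehouse record; the QR label would encode the URL below."""
    def __init__(self, piece_id: str):
        self.piece_id = piece_id
        self.state = STATES[0]
        self.history = []           # processing history shown when the QR code is read

    def advance(self, note: str):
        """Move the piece to the next state, recording what was done."""
        idx = STATES.index(self.state)
        if idx == len(STATES) - 1:
            raise ValueError("piece already Finished")
        self.state = STATES[idx + 1]
        self.history.append((self.state, note))

    def qr_payload(self) -> str:
        # hypothetical URL scheme for the piece's web page
        return f"https://example.invalid/pieces/{self.piece_id}"

p = Piece("TRN-0042")
p.advance("milled to spec")
p.advance("pressure test passed")
print(p.state, p.qr_payload())
```

Forcing every transition through `advance` is what guarantees that the history shown on the QR-linked page is complete: a piece cannot reach Finished without a recorded processing and testing step.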

CONTEXT SERVICE DESCRIPTION LANGUAGE (CSDL)



This section presents a brief overview of existing service description languages and matchmaking approaches and highlights their limitations for describing and discovering context services. During the last two decades, a large body of research has been conducted on the definition and composition of semantic services, especially in the domain of Semantic Web Services (SWS). These efforts led to the development of several web service description languages, such as Semantic Markup for Web Services (OWL-S) [3], the Web Service Modelling Ontology (WSMO) [4], and Semantic Annotations for WSDL and XML Schema (SAWSDL) [5]. Most of the above-mentioned languages allow, to some extent, specifying services in terms of their signature (i.e., the inputs and outputs of the service), their behavioral specification (i.e., preconditions and effects) and their non-functional properties (NFPs). However, all of these languages suffer from the same limitation, a lack of support for describing the contextual characteristics of a service, which makes them insufficient for describing context services. To overcome these shortcomings, a number of different approaches have been proposed [6], [7]. However, they do not fully support the different types and aspects of context and lack an expressive language to represent them. In this regard, existing semantic service matchmaking techniques [8]–[10] suffer from a similar drawback, i.e., the lack of a mechanism for discovering services based on contextual characteristics and their associated entities.
In this section, we present the Context Service Description Language (CSDL). CSDL is a JSON-LD-based language that enables developers of context services to describe their services in terms of a semantic signature and a contextual behavioral specification. Further, CSDL allows developers to describe their services using a standard language (underpinned by semantic computing). CSDL enables the fast development of IoT applications that can discover and consume context services owned and operated by different individuals and organizations. For describing the semantics of context services, we adopt the Web Ontology Language for Services (OWL-S) [3], a W3C submission, as the basis of CSDL. OWL-S is an ontology language developed on top of the Web Ontology Language (OWL) to enable the automatic discovery, invocation, and composition of web services. However, as OWL-S was originally designed for describing web services and does not support the semantic description of context, we extended OWL-S by adding context descriptions of the entities associated with context services. Like OWL-S, CSDL consists of three main components: (i) Service Profile, (ii) Service Grounding, and (iii) Service Model. The Service Model gives a detailed description of the service signature, namely its inputs and outputs, and identifies the semantic vocabularies supported by the given service. The Service Grounding provides details on how to interact with a service. This component identifies which type of communication is needed to call the service (e.g., HTTP GET, XMPP, Google Cloud Messaging). Further, based on the type of communication, it provides the other information required to make the service invocation possible (e.g., the URI in the case of HTTP GET). Lastly, the Service Profile is used to make service advertising and discovery possible. This component indicates the type of entity that a service interacts with.
Further, it defines the context-aware behavior of the service.
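To make the three components concrete, the following is an illustrative sketch of what a CSDL document for a parking context service might look like, modeled here as a Python dictionary. The field names, the `@context` URI, and the example service URI are assumptions for illustration, not an official CSDL schema.

```python
# Hypothetical CSDL description of a parking context service.
# Field names are illustrative; they mirror the three OWL-S-style
# components described above (Profile, Model, Grounding).
csdl_doc = {
    "@context": "http://example.org/csdl",   # assumed JSON-LD context URI
    "@type": "ContextService",
    "serviceProfile": {
        "entityType": "ParkingFacility",     # entity the service interacts with
        "contextBehavior": "availability"    # context-aware behavior it exposes
    },
    "serviceModel": {
        "input": {"location": "geo:Point"},
        "output": {"vacancies": "xsd:integer"}
    },
    "serviceGrounding": {
        "communication": "HTTP-GET",
        "uri": "https://example.org/parking/api"   # hypothetical endpoint
    }
}

def validate_csdl(doc):
    """Minimal syntactic check: all three CSDL components present."""
    required = {"serviceProfile", "serviceModel", "serviceGrounding"}
    return required.issubset(doc)

print(validate_csdl(csdl_doc))  # True
```

A registry component could run a check like `validate_csdl` before accepting a description; a real validator would also check the semantics against the referenced vocabularies.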
COAAS ARCHITECTURE AND IMPLEMENTATION
The Communication Manager is responsible for handling all incoming and outgoing messages, i.e., context requests, service description updates, and service responses. The CQE has two main responsibilities: parsing incoming service queries expressed in the CDQL language [2] and producing the final query result. This component receives incoming queries, breaks them into several context requests, and determines each query's execution plan; it is also responsible for aggregating the responses to these context requests into the final query answer. The CSDS is in charge of finding the most appropriate services for an incoming request. The CSR registers new context services in the context service repository: it accepts incoming context service descriptions expressed in CSDL and registers them in the system after validating their syntax and semantics. Lastly, the CSI is responsible for invoking context services on the corresponding service providers to retrieve the required information. Based on the presented architecture, we implemented a prototype of the CoaaS framework. The prototype has a scalable, fault-tolerant, microservices-based design, making it well suited to cloud deployment; each of the components described in this section is implemented as a microservice using Java EE technologies.
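The split-dispatch-aggregate flow of the query engine can be sketched as follows. This is a toy in-memory model, not the actual CoaaS Java EE interfaces: the function names, the query shape, and the `resolvers` standing in for CSDS lookup plus CSI invocation are all assumptions.

```python
# Illustrative sketch of the CQE flow: a parsed query is broken into
# per-entity context requests, dispatched, and the responses are
# aggregated into a single answer.

def split_query(parsed_query):
    """One context request per entity referenced in the query."""
    return [{"entity": e, "attributes": attrs}
            for e, attrs in parsed_query["entities"].items()]

def dispatch(request, resolvers):
    """Stand-in for service discovery (CSDS) + invocation (CSI)."""
    return {request["entity"]: resolvers[request["entity"]](request["attributes"])}

def execute(parsed_query, resolvers):
    """Aggregate all context-request responses into the query answer."""
    result = {}
    for req in split_query(parsed_query):
        result.update(dispatch(req, resolvers))
    return result

# Toy resolvers standing in for registered context services
resolvers = {
    "carpark": lambda attrs: {"vacancies": 12},
    "weather": lambda attrs: {"condition": "rain"},
}
query = {"entities": {"carpark": ["vacancies"], "weather": ["condition"]}}
print(execute(query, resolvers))
```

In the real system the dispatch step is asynchronous across microservices; the sequential loop here only shows the data flow.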
SMART CARPARK RECOMMENDER USE CASE 
One of the most common IoT applications in smart cities is car park recommendation [14]. Developing an IoT application that utilizes IoT services to suggest the best available parking is a challenging task. First of all, it is essential to have access to live data, coming from context services, about the availability of different parking facilities. The fact that these services are owned by different providers (e.g., city administrators, building owners, and organizations) makes the process of service retrieval even more complex. Further, to provide personalized suggestions to users, it is important to obtain additional factors, such as user preferences, car specifications, and weather conditions, which might require reasoning and inference before they can be used. The query can become even more complex if the user wants to avoid places where snow cleaning or waste collection [15] is happening or other road problems [16] have been reported. The CoaaS framework enables retrieving all the above-mentioned information with a single query. To demonstrate a practical implementation of the use case, we developed an Android mobile application that automatically suggests available parking spaces to drivers using real-time data obtained from context services. To achieve this, we composed a parameterized context query in the CDQL language [2] that is triggered when the consumer's car approaches the user's destination. This query takes different contextual attributes, such as weather conditions, walking distance, required parking facilities, and cost, into account. The application also provides an interface for users to enter their parking-related preferences. Moreover, the application is connected via Bluetooth to an OBD-II device that reads sensory data (e.g., VIN, speed, and fuel level) from the car's CAN bus.
The application collects this information, includes it in the query, and makes an HTTP POST request to CoaaS. Based on the requirements of the scenario under consideration, we registered four different context services in CoaaS, namely the Monash Parking API, a VIN checker API, the Google Location API, and a Weather API, and described them using CSDL. The Monash Parking API was registered in CoaaS as the main parking context service: Monash University has 10 different parking areas on its Clayton campus that are equipped with occupancy sensors, and a private web API offers real-time vacancy information for these parking facilities. To retrieve the specifications of the consumer's car, we registered a context service that accepts a Vehicle Identification Number (VIN) as input and provides the make and model of the car as output, together with specifications such as height and width. Another service used in this scenario is the Google Location API, which was used for geocoding purposes to convert the destination address to coordinates. Lastly, to fetch information about the weather conditions, we registered a weather service that accepts location coordinates as input and returns the weather conditions as output.
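The way the mobile application assembles its parameterized query before the HTTP POST can be sketched as below. The pseudo-query string is not real CDQL syntax, and all field names, coordinates, and the VIN are made up for illustration.

```python
import json

# Sketch of assembling the query payload the app sends to CoaaS.
# The query text here is a placeholder, not actual CDQL; the
# parameters are the contextual attributes mentioned above.

def build_payload(query_template, params):
    """Substitute user/vehicle context into a parameterized query."""
    return {"query": query_template.format(**params), "format": "json"}

params = {
    "destination": "-37.9105,145.1347",   # example coordinates near Clayton
    "max_walk_m": 300,                    # user's walking-distance preference
    "vin": "WDB12345678901234",           # hypothetical VIN read over OBD-II
}
payload = build_payload(
    "select parking near {destination} walk<{max_walk_m} car={vin}", params)
print(json.dumps(payload))
```

The serialized `payload` would then be the body of the HTTP POST; in the real application the template is filled each time the car approaches the destination.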

EXAMPLES OF IOT PROJECTS


A. Server room monitoring system (use case 1)
The prototype was designed to monitor at least two key temperatures, humidity, and a flood sensor in order to detect malfunctioning of the cooling system in a computer room with servers running 24/7. The hardware prototype, assembled from basic hardware components, contains a Base module, a Power module, and a Core module with attached temperature tags, a humidity tag, and a flood sensor connected to the Core module via the Sensor module. The BigClown prototype was connected to a Raspberry Pi that simulates a small communication server and acts as a microcomputer to process and store data. InfluxDB (database) and Grafana (dashboard) were installed with their dependencies to retrieve and display data from the BigClown platform. Additionally, we installed the BigClown USB dongle on the Raspberry Pi to act as an MQTT gateway for the radio devices, which allows the implementation of a standalone wireless solution. Figure 6 shows a fully assembled wireless device with the battery module, temperature tags, humidity tag, and flood detector (right of the figure), and the wireless gateway module connected to the Raspberry Pi server with an additional USB dongle for RF communication (left of the figure).
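On the Raspberry Pi side, reacting to telemetry arriving through the MQTT gateway reduces to parsing topics and checking thresholds. The sketch below assumes a BigClown-style topic layout of `node/<id>/<module>/<channel>/<quantity>`; the exact topic names, the node id, and the 30 °C alert threshold are assumptions, not values from the deployed prototype.

```python
# Sketch of threshold alerting on BigClown MQTT telemetry.
# Topic layout and thresholds are illustrative assumptions.

TEMP_ALERT_C = 30.0   # assumed cooling-failure threshold

def handle_message(topic, payload):
    """Return an alert string for an out-of-range reading, else None."""
    parts = topic.split("/")
    if len(parts) < 5 or parts[0] != "node":
        return None
    node_id, module, quantity = parts[1], parts[2], parts[4]
    if quantity == "temperature" and float(payload) > TEMP_ALERT_C:
        return f"ALERT: node {node_id} temperature {payload} C"
    if module == "flood-detector" and quantity == "alarm" and payload == "true":
        return f"ALERT: node {node_id} flood detected"
    return None

print(handle_message("node/rack-1/thermometer/0:0/temperature", "34.5"))
```

In the actual setup such a handler would be registered as an MQTT subscriber callback, with normal readings passed on to InfluxDB rather than discarded.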


Since the BigClown platform works together with a Raspberry Pi running the Debian OS (Linux), we have access to multiple software solutions for saving and tracking the data. InfluxDB was chosen for storing data: it is a real-time database designed for monitoring and controlling sensors and devices and for storing metrics and events, which suits our application goals.
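InfluxDB ingests each reading as a line-protocol record of the form `measurement,tags fields timestamp`. The helper below formats a sensor reading that way; the measurement, tag, and field names are illustrative, not the ones used in the actual deployment.

```python
# Minimal sketch of formatting a sensor reading in InfluxDB line
# protocol before writing it to the database.

def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build one line-protocol record (nanosecond timestamp)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "environment",
    {"room": "server-room", "sensor": "bigclown-0"},
    {"temperature": 24.6, "humidity": 41.2},
    1575960000000000000,
)
print(line)
# environment,room=server-room,sensor=bigclown-0 humidity=41.2,temperature=24.6 1575960000000000000
```

In practice a client library would handle this encoding (and escaping of special characters, which this sketch omits); Grafana then queries the stored series directly.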
B. Seat occupancy monitoring in meeting rooms (use case 2)
This project is aimed at monitoring meeting room occupancy and evaluating room usage as compared to reservations. Additional technical requirements were specified as follows:
• The device should use an RF module for communication. Even though Bluetooth and Wi-Fi are very popular ways to connect devices, we avoided those protocols in order to reduce power consumption and improve connectivity range.
• The sensors will be heavily used (chairs are occupied daily for about 3-5 hours), so avoiding mechanical and direct-contact sensor components reduces the risk of mechanical failure or damage to components.
• The sensor unit should monitor each chair or seat to increase the accuracy of the collected data.
Many different sensors were considered and evaluated, including temperature sensors, strain gauge sensors, IR sensors, ultrasonic sensors, PIR sensors, lux sensors, piezoelectric sensors, and capacitive sensors. The BigClown platform was selected for its flexibility and because the kit provides a temperature module, a PIR module, and a lux meter module; most of the sensors provided traceable signals. Figure 8 shows an example of PIR sensor data within a short sitting session, illustrating that even small movements of a person sitting on the chair are detected.
The PIR sensor solution seemed optimal compared to the other sensors we evaluated. Unfortunately, the overall cost of this solution was not acceptable when using a separate sensor for each seating place (chair): the estimated price for several meeting rooms, with up to 10 chairs per room on average, became too high. We therefore decided to implement an inexpensive solution using custom-made capacitive sensor modules. Capacitive sensing is a technology based on capacitive coupling that can detect and measure anything that is conductive or has a dielectric constant different from air; it is commonly used in touch screens and trackpads.
1) Capacitive sensor module
The capacitive sensor's electrical signal passes through a high-value resistor whose output is connected to an input pin of the MCU (microcontroller unit). A "sensing area" is connected to the output side of the resistor; this area influences the time required to change the state of the MCU's input pin. Interestingly, the area does not require direct contact with the source of capacitance (i.e., the human body) [9]. The step-by-step assembly procedure is outlined in Figures 9 and 10.
2) Practical implementation 
A custom-made capacitive sensing module was placed on top of the BigClown modules in a similar manner to other native BigClown modules, and the seat cover contains an embedded sensing area. For reliable monitoring, it is important to keep the input pin discharged after each sensing cycle, so that preceding values do not influence later measurements, i.e., to avoid a situation where the sensor continuously sends high signal values indicating seat occupancy. The sensing algorithm alternates the switching of the input and output pin values (LOW and HIGH); it includes the delays needed to fully discharge the input pin and measures the time needed for the signal on the receive pin to change its state to HIGH.
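The timing measurement at the heart of this algorithm can be modeled in software: drive the send side HIGH, then count how long the receive pin, charging through the high-value resistor and the sensing area's capacitance, takes to read HIGH. The RC charging below is simulated, and all component values are illustrative; on the MCU the same loop would poll a GPIO register, with a discharge step (elided here) between measurements.

```python
# Software model of the capacitive sensing loop described above.
# Component values and the threshold are assumptions for illustration.

def measure(capacitance, resistance, threshold=0.6, dt=1e-6, limit=100000):
    """Return loop iterations until the simulated receive-pin voltage
    crosses the HIGH threshold (arbitrary units, as in the charts)."""
    v = 0.0
    for ticks in range(limit):
        if v >= threshold:
            return ticks
        # Euler step of RC charging toward the driven HIGH level (1.0)
        v += (1.0 - v) * dt / (resistance * capacitance)
    return limit

free = measure(capacitance=20e-12, resistance=1e6)       # empty seat
occupied = measure(capacitance=150e-12, resistance=1e6)  # body adds capacitance
print(free, occupied)
assert occupied > free   # larger capacitance -> longer charge -> higher reading
```

The readings are reported in these loop-tick units; seat occupancy shows up as the tick count jumping well above the empty-seat baseline.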
All values are displayed in arbitrary units that indicate the time needed for the receive pin on the MCU to change its state. As shown in Figure 11, at around 15:51 the sensor signal rises from ~600 arbitrary units to over 5000 arbitrary units, clearly indicating the moment when a person sat down on the seat. At around 16:00 the person got up, resulting in the sharp signal fall shown on the chart, with the signal returning to the baseline.
We can clearly identify the time when a person sits down on the chair (the value rises) and when a person leaves the chair (the value drops back to the baseline). On top of that, we have reduced the sensor price to a negligible amount and improved robustness and reliability. The installation within the seat cover allows testing of the solution with up to 32 channels prior to the eventual incorporation of the setup into the chairs.
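Turning the raw readings into sit/leave events then only requires thresholding with some hysteresis, so that small fluctuations near a single threshold do not produce spurious transitions. The threshold values below are illustrative, chosen only to match the scale of the ~600 to 5000 jump described above.

```python
# Sketch of deriving occupancy events from raw capacitive readings
# (arbitrary units). Hysteresis thresholds are illustrative.

HIGH_T = 3000   # above this: seat considered occupied
LOW_T = 1000    # below this: seat considered free again

def occupancy_events(samples):
    """Yield (index, 'sit' | 'leave') transitions with hysteresis."""
    occupied = False
    for i, value in enumerate(samples):
        if not occupied and value > HIGH_T:
            occupied = True
            yield i, "sit"
        elif occupied and value < LOW_T:
            occupied = False
            yield i, "leave"

readings = [600, 620, 5100, 5050, 4980, 640, 610]
print(list(occupancy_events(readings)))  # [(2, 'sit'), (5, 'leave')]
```

Per-chair event streams like this, rather than raw samples, are what a room-usage evaluation would compare against the reservation calendar.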