
n. Publishing insights


After data stored in the system has been processed into information of value to others, the question becomes how to expose it in a way that is secure, compatible, and easy to discover and consume. Some organizations want to make their data available to partners up and down the supply chain to realize efficiencies that lower costs and improve margins. Others are realizing that the data they hold can be monetized directly, as services available for consumption by individuals, corporations, and governments around the world. Beyond its stand-alone value, the data may also be valuable as an ingredient in other data services: data that seems uninteresting to those within the organization could in reality be a key input to any number of external applications or analytical recipes. For an in-depth discussion of data-publishing considerations, see the paper “Making Public Data Public” from Microsoft.87 The following sections discuss the main aspects of this topic.

Audience


The target audience for the data will have a significant impact on how it is published. Will it be used to enhance analysis of other data? Will it be consumed through data visualization tools, such as Power BI or Tableau? Will it be metered, with a price attached? Or will different partners see different views of the data at different price points?

Publishing format


The choice of publishing format will be influenced by the targeted audience and the type of information being published. Similar to the discussion earlier in this paper about the incoming message format, the most likely choices for publishing data are XML, JSON, and AtomPub. OData88 is a standardized protocol for creating and consuming data APIs. OData originated at Microsoft, but it has become well accepted in the industry. Because OData supports both JSON and AtomPub, it can be consumed by nearly all current tools and programming languages.
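As an illustration, the following minimal sketch queries a hypothetical OData feed and requests JSON via content negotiation. The service root and entity set are invented placeholders, but the $filter and $top query options and the Accept header follow standard OData conventions.

```python
import requests

# Hypothetical OData endpoint; any OData service follows the same shape.
SERVICE_ROOT = "https://example.org/odata/TelemetryService"

# Standard OData system query options: filter server-side, cap the page size.
params = {
    "$filter": "Temperature gt 30.0",
    "$top": "50",
}

# Request JSON instead of AtomPub via content negotiation.
response = requests.get(
    f"{SERVICE_ROOT}/Readings",
    params=params,
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()

# OData JSON responses carry the result set in the "value" array.
for reading in response.json()["value"]:
    print(reading)
```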

There are tools that can help scale, secure, and normalize the data-publishing task. The Microsoft Azure DataMarket89 is a global marketplace for data and applications that provides discoverability, interface normalization, and a monetization approach. Microsoft Azure API Management90 is a service that facilitates publishing APIs; it includes features for API translation, versioning, aggregation, discovery, authorization, caching, and quotas. Both Azure DataMarket and Azure API Management can be part of the publishing strategy: DataMarket for the broad exposure of large datasets, and API Management to expose APIs securely with usage metrics and management capabilities.
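A sketch of calling an API fronted by Azure API Management follows; the gateway host, path, and key are placeholders. The Ocp-Apim-Subscription-Key header is the conventional way API Management authorizes and meters calls against a subscription.

```python
import requests

# Placeholders: substitute your API Management gateway host, API path, and key.
GATEWAY = "https://contoso.azure-api.net"
SUBSCRIPTION_KEY = "<your-subscription-key>"

response = requests.get(
    f"{GATEWAY}/telemetry/v1/devices/42/latest",
    headers={
        # API Management uses this header to authorize and meter the call.
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```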


Cost modeling and estimation


Determining the cost of an Internet of Things (IoT) solution focused on predictive maintenance is generally a complex problem. This section outlines an initial approach that we have used with our customers to estimate the cost of the architecture supporting their predictive maintenance solutions. As with any such calculation, the result is highly scenario-specific; this model will not apply to every situation, nor is it complete.

Before we go into the specifics of determining the cost for a solution, we want to stress that cost modeling, like capacity planning, is an iterative exercise. The process repeats itself: performance testing and other gathered data will change the capacity distribution (for example, workloads with compatible load profiles could be combined in a single unit to save cost) and tune the model over time. In other words, the first cost estimate will not be perfect; it provides only an indicator of the cost of the solution.
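As a starting point, a first-pass estimate can be expressed as a simple function of tested unit capacity and unit cost. All figures below are invented placeholders, not actual Azure pricing; each performance-testing iteration replaces them with measured values and current prices.

```python
import math

# Illustrative placeholders only; replace with measured capacity and real pricing.
DEVICES = 250_000                 # projected device population
MSGS_PER_DEVICE_PER_HOUR = 60     # average telemetry rate per device
DEVICES_PER_UNIT = 30_000         # maximum tested devices per capacity unit
COST_PER_UNIT_MONTH = 4_500.0     # compute + messaging + storage, per month

units = math.ceil(DEVICES / DEVICES_PER_UNIT)
ingress_per_second = DEVICES * MSGS_PER_DEVICE_PER_HOUR / 3600
monthly_cost = units * COST_PER_UNIT_MONTH

print(f"{units} capacity units, ~{ingress_per_second:,.0f} msg/s ingress, "
      f"~${monthly_cost:,.0f}/month")
```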


o. A common architecture for IoT


From our work with customers, a reference architecture has surfaced that helps implement the Service Assisted Connectivity pattern by acting as the gateway mentioned earlier, although you need to verify that it satisfies your specific requirements. This architecture is built on top of Microsoft Azure Service Bus. Within Service Bus, it uses Event Hubs for the ingress (device-to-cloud) of data and topics for sending Command & Control messages as well as replies.

Event Hubs


Event Hubs is a new feature of Microsoft Azure Service Bus. It stands next to topics and queues as a Service Bus entity and provides a different type of queue, offering time-based retention, client-side cursors, publish/subscribe support, and high-scale stream ingestion. Although it could be argued that topics could satisfy the technical requirement for receiving data from devices, Event Hubs supports higher throughput and greater horizontal scale.
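For illustration, here is a minimal device-to-cloud send using the azure-eventhub Python package. The connection string and hub name are placeholders; in this architecture, each device would use its own provisioned, scoped credentials rather than a namespace-wide key.

```python
from azure.eventhub import EventHubProducerClient, EventData

# Placeholders: the provisioning service would hand each device its own
# scoped connection string and event hub name.
CONNECTION_STR = "<event-hubs-connection-string>"
EVENT_HUB_NAME = "telemetry"

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
)

with producer:
    # Batch events client-side, then ship the batch in a single call.
    batch = producer.create_batch()
    batch.add(EventData('{"deviceId": "dev-001", "temperature": 21.5}'))
    producer.send_batch(batch)
```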

Architectural details


Starting at the logical architecture level, the main architectural components are depicted in the following figure.



Figure . Reference architecture conceptual overview

The conceptual architecture figure includes four important components within the system:

1. The provisioning service, which takes in information on authorized devices, creates their configuration, and stores access keys (an illustrative record is sketched after this list).

2. Devices, which interact using either AMQP or HTTP directly against Service Bus, or through a component called the Custom Protocol Gateway Host, which hosts adapters for other protocols, such as MQTT and CoAP.

3. Telemetry, which is distributed by the router, using adapters to communicate with downstream storage and processing engines.

4. Commands, which are sent to devices through the notification/command router, surfaced internally through the Command API host.
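As an illustration of the first component, the sketch below shows the kind of record a provisioning service might create and store per device. All field names and values are assumptions made for illustration; they are not prescribed by the reference architecture.

```python
from dataclasses import dataclass

@dataclass
class DeviceRegistration:
    """Illustrative provisioning record; field names are assumptions."""
    device_id: str
    partition: str        # logical partition the device is assigned to
    event_hub: str        # ingress entity assigned to this partition
    command_topic: str    # Service Bus topic used for Command & Control
    sas_key_name: str     # per-device shared-access credential name
    sas_key: str          # per-device shared-access key

registration = DeviceRegistration(
    device_id="dev-001",
    partition="partition-07",
    event_hub="telemetry-p07",
    command_topic="commands-p07",
    sas_key_name="dev-001-send",
    sas_key="<generated-secret>",
)
```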

To ensure the architecture can support a large number of devices, it uses a partitioned model in which the device population is divided into manageable groups. This partition model is shown in the following figure.



Figure . Reference architecture details and partition overview

The figure details some important aspects of the reference architecture:



Master. A requirements assumption for the architecture is that solutions built on top of it will aim for a unified global, or at least regional, management model, independent of the technical scale limitations that might constrain how large a particular partition may grow.

This motivates an overarching architectural model with a common “Master” service, shown on the far left of the figure, that takes care of shared management and deployment tasks as well as device provisioning and placement, plus several parallel and independent deployments of “Partition” services that each take ownership of one or more logical system partitions.



Partition. Instead of looking at a population of millions of connected devices as a whole, the system divides the device population into smaller, more manageable partitions of large numbers of devices each.

Each resource in the distributed system has a throughput- and storage-capacity ceiling, limiting the number of devices associated with any single Service Bus ingress entity so that the events sent by the devices will not exceed that entity’s ingestion throughput capacity, and any message backlog that might temporarily build up does not exceed the entity’s storage capacity.

In order to allocate appropriate compute resources and not overload the storage backend with too many concurrent write operations, a relatively small set of resources with reasonably well-known performance characteristics is bundled into an autonomous and mostly isolated “scale-unit.”

Each scale-unit supports a maximum, tested number of devices, which is also important for limiting risk during a scalability ramp-up. The principle is that a production system should only be scaled up as far as it has been regularly scaled in testing.

A benefit of introducing scale-units is that they significantly reduce the risk of full system outages. If a system depends on a single data store and that store has availability issues, the whole system is affected. However, if the system consists of 10 scale-units that each maintain an independent store, issues in one store only affect 10 percent of the system.

The principle of running all traffic ingestion through asynchronous Service Bus messaging entities, instead of through a service edge that writes data straight to the database, is that Service Bus already provides a scaled-out and secure network gateway for messaging, and it is specifically designed to deal with bad network conditions, traffic bursts, and even sustained traffic peaks. The back-end datastore that is the target of the ingested data therefore does not have to be dimensioned for such bursts, such as vehicle telemetry during core European or U.S. East Coast rush hours.

The group called “partition” is a set of resources focused on handling data from a well-defined and known device population that has been assigned to and configured into the partition through provisioning. Cross-partition distribution of devices is based on your solution-specific logic (one common approach is sketched below), and allocation within the partition is handled by provisioning.
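The distribution logic itself can be very simple. Below is a sketch of one possible approach, a stable hash of the device identifier; the partition count and naming scheme are placeholders, not part of the reference architecture.

```python
import hashlib

PARTITION_COUNT = 16  # placeholder: matches the number of deployed partitions

def assign_partition(device_id: str) -> str:
    """Map a device to a partition using a stable, uniformly distributed hash."""
    digest = hashlib.sha256(device_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % PARTITION_COUNT
    return f"partition-{index:02d}"

print(assign_partition("dev-001"))  # prints the assigned partition name
```

Note that a plain modulo makes later repartitioning expensive; recording the assignment in the master's device repository at provisioning time keeps each device's placement stable even if the partition count changes.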

The “partition” group is the unit of scale. Through testing, the load specifications for the partition are determined and a so-called scale-unit can be defined: a group of resources that can effectively support a well-known load profile, so that replicating the scale-unit supports an extrapolation of that load profile. Within the “partition” group, there are two basic paths, ingestion (sending data from the device to the cloud) and egress (sending data from the cloud to the device). These paths accomplish the following:



Ingestion. A given device connects through its supported protocol and delivers messages to its specific Event Hub, using its assigned credentials.

Egress. Egress routes messages (replies, Command & Control) to their device destination (a minimal sketch follows this list).

Device Repo. The device repository contains configuration information about the registered devices for a given partition.
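The following sketch illustrates the egress path using the azure-servicebus Python package: the cloud side publishes a command to the partition's topic, tagged so that a subscription rule can route it to the target device only. The topic name, property name, and routing scheme are assumptions for illustration.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholders; the partition's topic would come from the device registration.
CONNECTION_STR = "<service-bus-connection-string>"
COMMAND_TOPIC = "commands-p07"

client = ServiceBusClient.from_connection_string(CONNECTION_STR)

with client:
    sender = client.get_topic_sender(topic_name=COMMAND_TOPIC)
    with sender:
        message = ServiceBusMessage(
            '{"command": "reboot"}',
            # A subscription rule filtering on this property delivers the
            # command only to the target device (assumed routing scheme).
            application_properties={"deviceId": "dev-001"},
        )
        sender.send_messages(message)
```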

