Guavus Plus SQLstream means Broad and Deep for IoT Data Science

## History

When Damian Black, founder of SQLstream, and Dr. Anukool Lakhina, founder of Guavus, first met almost a decade ago, the synergies and complementary nature of their visions were apparent to both of them. At the time, though, each chose his own path. Guavus used open source solutions to become a leader in big data and real-time analytics, firmly focused on telecommunications CSPs (Communications Service Providers) and the operational efficiency market. Meanwhile, SQLstream built on Eigenbase components to create one of the first true streaming analytics engines with strict compliance to SQL standards; on the business side, it found a niche in the burgeoning IoT market, especially in transportation, while remaining a horizontal solution.

Guavus was acquired by Thales in 2017. The Thales Group, a large international player in aerospace and defense with a significant presence in transportation, expressed interest in SQLstream about four years ago. It was at this point that Damian and Anukool realized that the solutions Guavus and SQLstream had developed since their earlier discussions had become even more strongly complementary: Guavus with its deep domain expertise in telecommunications, machine learning and data science, and SQLstream as a pioneer and leader in streaming analytics with a horizontal platform. In addition, Guavus is following Thales' lead in broadening its domain expertise into the Industrial Internet of Things, while SQLstream has had great success in transportation as well as in other sensor analytics ecosystems (SensAE). Guavus also recognizes the need to process the vast amount of telecom and IoT data closer to the source. In January of 2019, Guavus acquired SQLstream.

## Integration

Although the merger is only a month old, the two companies are already working as one to bring the strengths of each together for greater customer success. Over the next six to 12 months, the two will be integrated into a single platform with the ability to scale up to mind-numbingly large data flows, and to scale down to very finely tuned small aggregates where and as needed throughout the ecosystem. This will allow greater operational efficiency: separating signal from noise close to the source allows the data to be processed immediately, providing value in a timely and cost-effective manner. Data rates are growing, per Damian, by 50% as edge sources increase in importance, but data storage and management costs are only decreasing by 12-14%. Only by pushing the algorithms, the machine learning models, into the streaming pipeline will organizations be able to draw real value from this data. Guavus has some of the best data science expertise in the industry for its customers in telecom. As this domain experience grows to include transportation, and IIoT in general, companies growing in IoT maturity will be able to perform streaming analytics and machine-learning-augmented analytics on appropriately aggregated data throughout their ecosystems.

"With our integrated solutions, CSPs to IIoT customers will be able to take advantage of something that’s radically different as we deliver AI-powered analytics from the network edge to the network core. With this solution, our customers can now analyze their operational, customer, and business data anywhere in the network in real time, without manual intervention, so they can make better decisions, provide smarter new services, and reduce their costs." — Guavus Press Release

This matches well with what we have seen, and with what we present in our SensAE architecture: the ebb and flow of data throughout the ecosystem must allow for appropriate aggregation and analytics at each point within that ecosystem.

## Future

At MWC19, there has been a lot of interest in these specific solutions, and also in building trust throughout the ecosystem, with security and, as our research has shown, with the ability to select the desired levels of privacy and transparency. Responding to these industry concerns is already on the Thales/Guavus/SQLstream roadmap.

"The SQLstream products have the ability to analyze, filter, and aggregate data at the network edge in real-time and forward the information to the network core where the Guavus’ Reflex® platform can apply AI-powered analytics, giving customers a widely distributed and scalable architecture with better price/performance and total cost of ownership." — Guavus Press Release

The next few months are going to be exciting with SQLstream, Guavus and Thales bringing together their expertise in streaming analytics, data management, telecommunications, transportation, machine learning, data science, industrial needs and system engineering.

Did You See What Ockam.io Just DID

The W3C Decentralized Identifiers (DID) specification is a working draft from the W3C Credentials Community Group. While not yet a standard, a working group has been proposed to develop the current specification into a full W3C standard. Amazingly, this proposed working group has support from over 60 companies (15 is the minimum required, and 25 is considered very healthy). Building trust into interoperability is important for the Internet of Things to mature, allowing the development of sensor analytics ecosystems (SensAE). The W3C DID is not focused on things alone, but on people as well as things, providing the opportunity for every user of the Internet to have public and private identities adhering to the guidelines of Privacy by Design. We often talk about the need within SensAE for privacy, transparency, security and convenience to be provided in a flexible fashion. The W3C DID is an excellent first step in allowing this flexibility to be controlled by each individual and thing on the Internet, indeed for each data stream in human-to-human, human-to-machine, machine-to-human and machine-to-machine exchanges or transactions. But every specification and every architecture needs to be implemented to have value. Moreover, they need to be implemented in such a way that the complexity is abstracted away, both to increase adoption and to reduce compromising errors. This is where the recently announced Ockam.io open source (under the Apache 2 license) software development kit and the Ockam TestNetwork come into play.
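
To make the shape of a DID concrete, here is a minimal sketch in Go of parsing a DID document. The document itself is illustrative only: the did:example method, the key material and the exact field names are placeholders drawn from the style of the draft specification's examples, and the specification is still evolving.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DIDDocument captures only a few fields in the style of the draft W3C DID
// spec; real documents carry more (services, proofs, etc.).
type DIDDocument struct {
	Context   string `json:"@context"`
	ID        string `json:"id"`
	PublicKey []struct {
		ID              string `json:"id"`
		Type            string `json:"type"`
		PublicKeyBase58 string `json:"publicKeyBase58"`
	} `json:"publicKey"`
	Authentication []string `json:"authentication"`
}

func main() {
	// A hypothetical DID document for a sensor registered on a DID-compliant
	// network; the method name "example" and the key value are placeholders.
	raw := `{
	  "@context": "https://www.w3.org/ns/did/v1",
	  "id": "did:example:temp-sensor-0042",
	  "publicKey": [{
	    "id": "did:example:temp-sensor-0042#keys-1",
	    "type": "Ed25519VerificationKey2018",
	    "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV"
	  }],
	  "authentication": ["did:example:temp-sensor-0042#keys-1"]
	}`

	var doc DIDDocument
	if err := json.Unmarshal([]byte(raw), &doc); err != nil {
		panic(err)
	}
	fmt.Println("subject:", doc.ID, "keys:", len(doc.PublicKey))
}
```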

Currently, organizations entering IoT are either trying to build all of the pieces themselves or searching for full-stack IoT platforms. Either approach can limit interoperability. Think of a simple example, such as the Smart Home area, where device vendors need to choose among vendor-centric platforms such as those offered by Amazon, Apple, Google or Samsung, with no hope of easy interoperability. Such vendor lock-in limits adoption. This is also true of the industrial IoT platform vendors. Manufacturers that might want two-way traceability from the mine to the assembly line to user environments to the retirement of a product are stymied by the lack of interoperability and of secure means to share data among all the players in their supply chain, vendor, customer and environmental ecosystems. Standards can be confusing and can also cause lock-in. For example, there are two main standards bodies addressing Smart Grids, each with hundreds of standards and specifications that are not consistent from one body to the other and do not allow for secure data exchange among all involved parties.

The W3C DID specification seeks to support interoperability and trust, with individual control of data privacy and transparency. The overall specification requires specific DID method specifications to handle identity management while maintaining interoperability among all organizations adhering to the overall DID specification. This means that on-boarding IoT devices in a secure fashion, with integration among all the devices in an organization's ecosystem, can be done in seconds rather than months (or not at all). Even though, say, the OckamNetwork has never coordinated with Sovrin, or with some large corporation's Active Directory, one can register a DID claim for a device in the Ockam TestNetwork and instantly have trusted interoperability and exchanges with users of Sovrin or of any other DID-compliant organization. This means that an organization can move its IoT maturity immediately from simple connection to trusted communication. Let's look at an example from the Ockam.io SDK.

With just a few simple lines of code, a developer can register their device in such a way that the device is uniquely identified from original manufacture through any installations, incorporation into a system, deployments and changes in location, to reuse or retirement. Within the OckamNetwork, this means that life events, and the metadata from those life events, continuously build trust throughout the life of the device. As always, metadata in one situation is useful data in another, such that the DID document that defines the identity also defines and leverages the subject’s metadata. The developer is free within the W3C DID model to define the metadata as needed. This allows key management through a decentralized blockchain network of the developer's choosing, without creating silos. It also allows the end user of the device, or of the system that contains many devices, to trust that system, with reasonable confidence that it will not be compromised into a botnet.
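
As a rough illustration of what "a few simple lines of code" can accomplish, the sketch below uses only the Go standard library to generate a device keypair, derive a DID-style identifier and sign a life-event claim. It is not the Ockam SDK API: the did:ockam method name and the hash-based identifier derivation are assumptions made purely for illustration.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func main() {
	// Generate the device's keypair at manufacture time.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Derive a DID-style identifier from the public key. The "did:ockam:"
	// method name and the hash-based derivation are assumptions for
	// illustration, not the network's actual scheme.
	sum := sha256.Sum256(pub)
	did := "did:ockam:" + base64.RawURLEncoding.EncodeToString(sum[:16])

	// Sign a registration claim binding this identifier to a life event.
	claim := []byte(`{"subject":"` + did + `","event":"manufactured","batch":"A-77"}`)
	sig := ed25519.Sign(priv, claim)

	fmt.Println("device DID:  ", did)
	fmt.Println("claim sig ok:", ed25519.Verify(pub, claim, sig))
}
```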

To be successful, the W3C DID requires broad uptake. Major corporations are involved (beyond the 60 companies mentioned above); visit the Decentralized Identity Foundation to see the list.

Yes, the W3C DID and compliant networks such as the OckamNetwork use blockchain. Contrary to the common belief formed by the best-known uses of blockchain technology, currencies such as Bitcoin or Ethereum, where all the blocks or headers need to be tracked since genesis, the amount of data that needs to be exchanged to validate trust is not huge. This follows from the CAP (Brewer's) theorem, under which a distributed system can provide only two of consistency, availability and partition tolerance. The OckamNetwork is CP based. Because of this absolute consistency, with instant commit and 100% finality, one only needs to track the most recent block. Another interesting side effect of CP is that it allows for low-power and casual connectivity, two important features for IoT devices, which may need to conserve power and may connect only at need.
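
A toy sketch of why a CP chain with instant, final commits is friendly to constrained devices: the device remembers only the hash of the last block it accepted and checks that the next block chains directly from it. The Block type here is a stand-in, not the OckamNetwork's actual block format.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Block is a toy stand-in for a finalized block header: its identity is the
// hash of its payload chained to the previous block's hash.
type Block struct {
	PrevHash string
	Payload  string
}

func (b Block) Hash() string {
	h := sha256.Sum256([]byte(b.PrevHash + b.Payload))
	return hex.EncodeToString(h[:])
}

func main() {
	// A constrained device only remembers the hash of the last block it
	// accepted; with instant, final commits there is no fork to reconcile.
	latest := Block{PrevHash: "", Payload: "genesis"}
	remembered := latest.Hash()

	// When the device wakes up, it checks that the next block it is handed
	// chains directly from what it remembered, then moves its pointer forward.
	next := Block{PrevHash: remembered, Payload: `{"did":"did:ockam:abc","event":"calibrated"}`}
	if next.PrevHash == remembered {
		remembered = next.Hash()
		fmt.Println("accepted block, new head:", remembered[:16], "...")
	}
}
```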

Another interesting feature of the W3C DID is that the issuer of a claim can make claims about any subject, that is, any other entity in the network. This addresses a problem often seen in IoT, where a failed sensor is replaced with another sensor that, while in spec, has slightly different parameters than the previous one. What matters is that the data stream remains consistent, and that users of that data understand that the data are still about the same system or location, and that differences in the data are due to a change in sensor, not a change in the system. The W3C DID model of claims allows a graph model of sensor to location that ensures consistency of the data stream while ensuring trust in the source of the data, through a signer of the issuer of the claim that is proven by a two-thirds majority of provers in the chain. Thus, the state of the blockchain is modeled as a graph that consistently tracks the flow of data from all the different sensors and types of sensors, graphed to each location, to each subsystem, and to each system, whether that system is a vehicle or a city.
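
The sketch below illustrates the idea of such a claims graph with a deliberately simplified Claim type (signatures and validator attestation are omitted): two different sensors are bound by claims to the same location, so a consumer of the data stream can see that a shift in readings coincides with a sensor swap rather than a change in the monitored system. The DIDs and predicates are invented for illustration.

```go
package main

import "fmt"

// Claim is a simplified issuer-signed statement about a subject; proofs and
// validator attestation are omitted from this sketch.
type Claim struct {
	Issuer    string
	Subject   string
	Predicate string
	Object    string
}

func main() {
	claims := []Claim{
		{"did:ops:maintenance", "did:sensor:A1", "installed_at", "did:site:pump-7"},
		{"did:ops:maintenance", "did:sensor:A1", "retired", "2019-02-11"},
		{"did:ops:maintenance", "did:sensor:B9", "installed_at", "did:site:pump-7"},
	}

	// Walk the claim graph to find every sensor that has fed the data stream
	// for a given location, so consumers know a shift in readings came from a
	// sensor swap rather than a change in the system itself.
	site := "did:site:pump-7"
	for _, c := range claims {
		if c.Predicate == "installed_at" && c.Object == site {
			fmt.Println(site, "readings include source", c.Subject)
		}
	}
}
```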

The beauty of the Ockam.io SDK is that the developer using it does not need to know that there is a blockchain, and does not need to know how to implement cryptography; they can, but there is no requirement to do so. These functions are available to the developer while the complexity is abstracted away. With ten lines of code, one can join a blockchain and cryptographically sign the identity of a device, but the developer does not need to learn the complexities behind this to provide orders of magnitude better security for their IoT implementation. The whole SDK is built around interfaces, so that it can adapt to various hardware characteristics, for example a Raspberry Pi versus a microcontroller; all of this is handled by the layers of abstraction in the system. The hundreds of validator nodes in the system that maintain consensus need to process, and to repeat, every transaction. To maximize the throughput of the system, Ockam.io uses graph tools that are very lightweight. Thus, as the OckamNetwork matures, it will use open source graph libraries without needing the full capabilities of a complete graph database management or analytics system, such as Neo4j. This will also allow micronodes that don't have cryptographic capability to still leverage the blockchain by being represented on the chain by a more capable system. A low-power device with limited bandwidth only needs to wake up at need, transmit a few hundred bits of data and confirm that the data was written, either by staying awake for a few seconds or by confirming the next time it wakes up. Micronodes can take advantage of the Ockam Network by having a software guard extension (SGX) system represent the device on the chain. Another aspect is that, much like in older SOA systems, descriptive metadata enhances interoperability and self-discovery of devices that need to interoperate in a trusted fashion.
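
A small Go sketch of how such interface-based abstraction might look; the Signer interface and both implementations are hypothetical, not the Ockam SDK's actual types. A Raspberry Pi class device signs locally, while a micronode delegates to a more capable proxy such as an SGX-backed system.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// Signer is the kind of narrow interface an SDK can hide hardware differences
// behind; the name and shape here are assumptions, not Ockam's actual API.
type Signer interface {
	Sign(msg []byte) ([]byte, error)
}

// onDeviceSigner suits hardware that can do its own cryptography
// (a Raspberry Pi class device).
type onDeviceSigner struct{ priv ed25519.PrivateKey }

func (s onDeviceSigner) Sign(msg []byte) ([]byte, error) {
	return ed25519.Sign(s.priv, msg), nil
}

// proxySigner stands in for a micronode that cannot sign locally and is
// represented on the chain by a more capable system (e.g. an SGX enclave).
type proxySigner struct{ delegate Signer }

func (p proxySigner) Sign(msg []byte) ([]byte, error) { return p.delegate.Sign(msg) }

func main() {
	_, priv, _ := ed25519.GenerateKey(rand.Reader)
	pi := onDeviceSigner{priv: priv}
	micro := proxySigner{delegate: pi}

	sig, _ := micro.Sign([]byte(`{"event":"heartbeat"}`))
	fmt.Println("signature bytes:", len(sig))
}
```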

Beyond the technical, an important part of any open source project is community; there is no successful open source project without a successful community. Ockam.io is building a community through GitHub, Slack, open source contributions to other projects (such as a DID parser in GoLang), and IoT, security and blockchain meetups. There are also currently six pilots being implemented with early adopters of the Ockam.io SDK that are growing the community. The advisory board includes futurists and advisors who are both proselytizing and mentoring to make the community strong.

It is early days for the W3C DID, but with companies like Ockam.io building open source SDKs that implement the DID specification, the future for both DID and Ockam.io is bright, and will help overcome the siloed development that has limited IoT success. Ockam.io is not focused on any market vertical, but rather on use-case verticals. This is applicable to all the solution spaces that make up the IoT, from SmartHomes to SmartRegions, from supply chains to manufacturers, from oil rigs to SmartGrids, and from fitness trackers to personalized, predictive, preventative healthcare.

Developers who wish to provide strong identity management quickly and conveniently should check out the OckamNetwork architecture, download the repo from GitHub, and raise their IoT trust game by orders of magnitude.

An AI Powered 4D Printed Facial Tissue Drone

Imagine that you are in a future, augmented city. The sensors around you, through machine learning scoring and artificial narrow intelligence, realize that you are about to sneeze… even before you do. In response, a nearby 4D printer makes a handkerchief that feels as though it is made of the softest cotton-linen blend, and indeed those materials are part of the weave, but only a part. A variety of nano-materials make up the rest, incorporating soft sensors and various mechanical properties that allow the handkerchief to fly to you from the 4D printer. And indeed, this is 4D, as the material properties change from a flying bird shape with powerful wings to a soft facial tissue, landing in your hand just in time to capture your sneeze. Now, whether the sneeze was caused by some errant dust (this is, after all, an augmented city with integrated agriculture and green spaces) or an allergen, the handkerchief's sensors analyze the sputum and mucus that you sneezed into it, as secondary assurance that you aren't about to spread cold, flu, or a more serious viral or bacterial contamination around you. The handkerchief is fully reusable, recyclable and repurposable: it can be sterilized and become a face mask for you, to protect you from the dust or allergens, or to protect others from your disease vector, or to become something else altogether.

Machine learning scoring at the sensor package level – that is being done today by companies such as Simularity.

Machine learning and deep learning being incorporated into software to help guide augmented human decisions and autonomous machine decisions – a variety of companies, such as the ones we wrote about in our Data Grok posts over the past few years.

Artificial narrow intelligence – appearing in everything from chatbots to surgical robots, and being investigated by more companies than we can add to this post.

Soft sensors – currently being researched, mostly in the textile and fashion industries.

IoT Architecture that includes hardware, firmware and software from the sensor to the Fog and Edge, through multiple intermediate aggregation points into a distributed Core of on-premises and multi-Cloud infrastructures and services – not implemented anywhere that I know of, and our own development of this architecture is still nascent.

Completion of the 5Cs IoT Maturity Model that we helped to develop in 2014, and are still working on today – again, not that I know of.

Fully augmented smart cities – there are projects and megaprojects and conferences everywhere, but all siloed and incomplete to date.

A sensor analytics ecosystem that would allow this to occur, with proper provisioning of privacy, transparency, security and convenience while building trust through two-way accountability – not yet, and perhaps never, but something that we are working toward.

And finally, the framework of an ethical core, along with the cultural, regulatory, economic, political and environmental factors [in draft, coming soon] needed to bring such a sensor analytics ecosystem and augmented city into existence, must be understood.

A New Age for Data Quality

Once, most data quality issues came from human error and inadequate business processes. While these still exist, new data sources, such as sensor data and third-party data from social media, openData and the "wisdom of the crowd", introduce new sources of potential error. And yet the old ways of storing "data" in log books, engineering journals, paper notes and filing cabinets are still widely practiced. At the same time, data quality is more important than ever as organizations rely more on predictive algorithms, machine learning, deep learning, artificial intelligence and cognitive computing. The basics of data quality have remained the same, but the means by which we can assure data quality are changing.

## Data Quality Basics

Fundamentally, data quality is about trust; that the decisions made from the data are good decisions, based upon trustworthy data. To achieve this trust, data must be:

  1. correct
  2. valid
  3. accurate
  4. timely
  5. complete
  6. consistent
  7. singular (no duplications that affect counts, aggregates, etc.)
  8. unique
  9. [have] referential integrity
  10. [apply] domain integrity (data rules)
  11. [enforce] business rules

Now, these principles must be applied to all the new sources and uses of data, often as part of streaming or real-time decision support, automated decisions, or autonomous systems.
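
As a minimal sketch of what applying a few of these dimensions looks like in practice, the Go snippet below checks a hypothetical sensor reading for completeness, validity against a domain rule, and timeliness; the record layout and thresholds are illustrative assumptions, not a prescription.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Reading is a hypothetical record from a third-party or sensor feed.
type Reading struct {
	DeviceID  string
	Timestamp time.Time
	TempC     float64
}

// validate applies a few of the quality dimensions listed above: completeness,
// validity (domain rules), and timeliness. The thresholds are illustrative.
func validate(r Reading, now time.Time) []string {
	var issues []string
	if strings.TrimSpace(r.DeviceID) == "" {
		issues = append(issues, "incomplete: missing device id")
	}
	if r.TempC < -40 || r.TempC > 125 {
		issues = append(issues, "invalid: temperature outside the sensor's rated range")
	}
	if now.Sub(r.Timestamp) > 5*time.Minute {
		issues = append(issues, "untimely: reading too old for real-time scoring")
	}
	return issues
}

func main() {
	r := Reading{DeviceID: "pump-7/temp-1", Timestamp: time.Now().Add(-10 * time.Minute), TempC: 200}
	for _, issue := range validate(r, time.Now()) {
		fmt.Println(issue)
	}
}
```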

Moreover, the data rules and the business rules must reflect reality, including evolving cultural norms and regulatory requirements. For example, in many areas of the world, gender is no longer based simply on biology at birth, but includes gender identification that may be more than just male or female, and may change over time as an individual's self-awareness changes. As another example, regulations in some areas of the world are imposing stricter restrictions around individual privacy, such as the General Data Protection Regulation (GDPR) in the EU with full application coming in May of 2018.

## Data Verification

Third-party data verification tools have been around for decades; they are often purchased and installed on-premises, including their own databases of information. Today, data verification may be done through such tools, or through openData and openGov databases; modern data preparation tools may even recommend freely available data sources, such as demographic data, to enhance and verify the data that your organization has collected or generated. Other data, such as social media data, is also available to enhance your understanding of the customers, markets, culture, regulations and politics that might influence your decisions. Current third-party data is most often accessed through Application Programming Interfaces (APIs) that may be HTTP or RESTful, or might be proprietary. Use, or rather misuse, of these APIs has the potential to degrade, rather than enhance, your decision support process. Another issue is that you may not know how third-party data is governed according to the basics of data quality. Again, modern data preparation and API management tools can help with these issues, as can open architectures and specifications.
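
For illustration, here is a minimal Go sketch of calling a third-party verification API over HTTP. The host verify.example.com and its query parameters are placeholders for whichever service your organization actually subscribes to, and a production pipeline would add authentication, retries and governance checks on the returned data.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// "verify.example.com" is a placeholder for an address-verification or
	// demographic enrichment service; the query parameters are equally
	// hypothetical.
	endpoint := "https://verify.example.com/v1/address?" + url.Values{
		"line1":  {"1 Market St"},
		"city":   {"San Francisco"},
		"region": {"CA"},
	}.Encode()

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(endpoint)
	if err != nil {
		// Degrade gracefully rather than blocking the pipeline on a vendor outage.
		fmt.Println("verification call failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println("status:", resp.StatusCode, "payload:", string(body))
}
```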

Data from sensors and from sensor-actuator feedback loops aren't new. Data from connected sensors, actuators, feedback loops, and all kinds of things, from pills to diagnostic machines, from wearables to cars, from parking sensors to a city's complete transportation system, some of which may be available through openGov initiatives, are new. Many of the organizations using such IoT data have never used data like this before.

Now that we have taken a very brief look at data quality and these new opportunities, let's turn to the new tools we have for making use of them.

## Data Stewardship through AI

In the spirit of drinking one’s own champagne, many of the new uses of data, the output of data science, are being applied to data management itself. As software has consumed the world, machine learning is eating software; deep learning and artificial intelligence are rapidly becoming the top of this food chain. Once, a dozen or so source systems made for a good-sized data warehouse with nightly ETL updates. Now, organizations are streaming hundreds of sources into data lakes. The people, processes and technologies for data quality can only keep up when augmented by advanced analytic algorithms. Machine learning uses metadata to continuously update business catalogues as artificial intelligence augments the data stewards. Metadata is changing as well, to provide semantic layers within data management tools and to better describe the data sets coming from the IoT, social media, or open data initiatives.

The first players to apply these techniques to data management and analytics became our first "Data Grok" companies: companies whose software helps humans grok data and how that data can be used. Since then, the first companies to earn the DataGrok designation, Paxata and Ayasdi, have been joined by many others adding machine learning, deep learning and even artificial narrow intelligence (ANI) to provide recommendations and guardrails to data scientists, data stewards, business analysts, and any individual using organizational data to make decisions.

## Data Quality Relations

Data management development, through the execution of enterprise architecture, policies, practices and procedures, encompasses the interaction among data quality, data governance, and data integrity. Regulatory and process compliance are dependent upon all three. Ownership of each data set, data element and even datum is critical to assuring data quality and data integrity, and is the first step to providing data governance. Business metadata, technical metadata and object metadata come together through business, technical and operational ownership of the data to build data stewardship and data custodian policies. The architectural frameworks used for Enterprise, IoT and Data architectures result in specifications for each critical data element that provide an overarching view across all business, technical and operational functions.

Data governance interacts with architectural activities in an agile and continuous improvement process that allows standards and specifications to reflect changing organizational needs. The processes and people can assure that data specifications are applicable to the needs of each organizational unit while assuring that data standards are uniformly applied across the organization. The size and culture of an organization determines the formality and structure of data governance and may include a governing council, sponsorship at various organizational levels, executive sponsorship (at a minimum), data ownership, data stewardship, data custodianship, change control and monitoring. But even with all this, the goal of data governance must be to provide appropriate access to data, and not restrict the use of data…from any source.

## IT Must Adapt

Information Technology has often been seen as a bottleneck; many times in our consulting work, we have found ourselves in the position of arbiter between IT and the business. Self-service BI, analytics and data preparation mean IT must become an enabler of data usage, providing trustworthy data without restricting the users. The productionizing of data science again means that IT must be an enabler of data usage, including of the machine learning and other advanced analytics models that data science teams produce. As data science and data management & analytics tools come together, the need for IT to guide the use of data and tools without limiting that use becomes paramount. At the same time, privacy and security must be retained within data governance. Patient data must only be available to the patient and to those healthcare professionals and caregivers who require access to that data. Personally Identifiable Information (PII) must be controlled. Regulatory compliance, such as GDPR and PCI, must be adhered to.

There is also a need for two-way traceability from the datum to its end use in reports and analytics, training sets or scoring, and from the end use back to the source system, including the lineage of all transformations along the way. This lineage of source and use enables both regulatory compliance and collaboration. Such a transparent history also helps build trust in the data, and in what other users and IT data management professionals have done to the data.
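
A minimal sketch of what two-way lineage can look like as a data structure: each node records both its parents and its children, so the same graph can be walked downstream from a source or upstream from a report. The node names and kinds are invented for illustration.

```go
package main

import "fmt"

// LineageNode records one hop in a datum's journey; storing both parents and
// children is what makes the traceability two-way.
type LineageNode struct {
	ID       string
	Kind     string // "source", "transform", "report", "training-set", ...
	Parents  []string
	Children []string
}

// trace walks the graph either downstream (children) or upstream (parents).
func trace(graph map[string]LineageNode, start string, upstream bool) {
	node := graph[start]
	fmt.Println(node.Kind+":", node.ID)
	next := node.Children
	if upstream {
		next = node.Parents
	}
	for _, id := range next {
		trace(graph, id, upstream)
	}
}

func main() {
	graph := map[string]LineageNode{
		"crm.orders":    {ID: "crm.orders", Kind: "source", Children: []string{"dedupe_orders"}},
		"dedupe_orders": {ID: "dedupe_orders", Kind: "transform", Parents: []string{"crm.orders"}, Children: []string{"q3_revenue"}},
		"q3_revenue":    {ID: "q3_revenue", Kind: "report", Parents: []string{"dedupe_orders"}},
	}

	fmt.Println("-- downstream from the source --")
	trace(graph, "crm.orders", false)
	fmt.Println("-- upstream from the report --")
	trace(graph, "q3_revenue", true)
}
```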

## IT and OT Must Work Together

As connected products mature through the 5Cs of our IoT maturity model (connection, communication, collaboration, contextualization and cognition), information technology and operations technology, business systems and engineering systems, must share data under a unified architecture. Much of the promise of the IoT can only be achieved through IT and OT working together. Merging consumer and marketing information with supply chain and production quality information to build predictive models that allow just-in-time inventory control and agile, custom product delivery is only one example of changing consumer expectations, whether that consumer is another business, a government or an individual. Industries in every market, such as the energy sector, consumer packaged goods and pharmaceutical manufacturing, have reaped the benefits of IT and OT working together, of SCADA and historian data being integrated with cloud marketing and sales data or ERP data. But for this partnership between IT and OT to work, each must trust the data of the other, and that only happens through data governance and data quality efforts.

## Metadata and Master Data Management in DQ

Metadata and Master Data Management (MDM) are fundamental in ensuring data quality, and key to using trustworthy data throughout a modern data ecosystem from the most modern data sources and analytic requirements at the Edge to the most enduring legacy systems at the Core; from the droplets in the Fog to the globally distributed multi-Cloud and hybrid architectures. Metadata and MDM have been part of the solution all along, but now must be applied in new ways, both at the core and at the Edge, and distributed through multiple Cloud, hybrid architectures, on-premises, and out into the furthest reaches of the Fog, as all these resources elastically scale up and down at need.

## Sensor Data Makes for Interesting DQ

Some of us have been dealing with sensors, sensor-actuator feedback loops and the concepts of large, complex systems for all of our careers, but for many, the fundamentals of connected hardware will be new. Sensor data can be messy. Two sensors from the same manufacturer will be slightly different in the data sets they produce, even though they both meet specification; two sensors from different manufacturers will certainly be different in center point, range, precision and accuracy, and in how the data are packaged. Sensors drift over time and will need calibration against public standards. Sensors age and may be replaced, and both of these conditions affect all the previous points.
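
As a small illustration of the kind of normalization this implies, the Go sketch below applies a per-sensor offset and scale, determined against a reference standard, so that readings from two in-spec but slightly different sensors land on a common scale; the calibration values are invented for illustration.

```go
package main

import "fmt"

// Calibration captures a per-sensor offset and scale determined against a
// reference standard; the numbers below are invented for illustration.
type Calibration struct {
	Offset float64 // center-point shift relative to the reference
	Scale  float64 // gain correction
}

// normalize maps a raw reading onto the common engineering scale so that
// readings from two in-spec but slightly different sensors are comparable.
func normalize(raw float64, c Calibration) float64 {
	return (raw - c.Offset) * c.Scale
}

func main() {
	byDevice := map[string]Calibration{
		"vendorA/sn-0012": {Offset: 0.3, Scale: 0.98},
		"vendorB/sn-7741": {Offset: -0.1, Scale: 1.02},
	}

	// Both sensors observed roughly the same physical temperature (~21.0 C),
	// but reported different raw values.
	fmt.Printf("A: %.2f\n", normalize(21.7, byDevice["vendorA/sn-0012"]))
	fmt.Printf("B: %.2f\n", normalize(20.5, byDevice["vendorB/sn-7741"]))
}
```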

## Data Architecture and DQ

Having worked in systems engineering for aerospace, I return to Deming's definition of quality as conformance to specifications well suited to the customer; for data, those specifications come from the architecture.

Architecture abstracts out the organizational needs as a series of views representing the perspectives of the people, processes and technologies affected by, and effected through, a solution, system or ecosystem. A standalone quality solutions architecture is not a good idea, as quality must be pervasive throughout an architecture. However, adding quality as a view within an architecture assures that data quality, data governance and compliance are properly represented. {Though outside the scope of this post, I would also consider adding security as a separate view.} There are many architectural frameworks, and even controversy about their effectiveness; TOGAF, MIKE2, 4+1 and BOST are among the main frameworks. Architectural frameworks focus on enterprise, data and solution (application) architectures, with recent interest in Internet of Things (IoT) architecture. Adherence to a framework or method is not as important as ensuring that the process by which an architecture is created fits the culture and needs of the organization.

## Standards

For reference purposes, here is a list of data quality standards and methods that you might find useful:

  • ISO9001 Quality Management Family of Standards
  • ISO 8000 Data Quality Family of Standards
  • EFQM Quality Management Framework and Excellence Model
  • TOGAF The Open Group Architecture Framework for Data Architecture
  • BOST [PDF] An Introduction to the BOST Framework and Reference Models by Informatica
  • MIKE2 The Open Source Standard for Information Management
  • 4+1 Views [PDF] Architectural Blueprint by Philippe Kruchten
  • TDWI Data Improvement Documents

Informatica is First in Customer Loyalty, Again, AND Continues to Innovate

We began using Informatica in its very early days. By 1998, we were using it for an ambitious enterprise data warehouse project spanning three divisions of a Fortune 100 company, taking in transactional and operational data from over 40 operating companies. The days are long gone when we would implement complex data architectures and data flows using Informatica PowerCenter and PowerMart in hub-and-spoke arrangements. But the need to provide powerful data management for analytics around business processes has only grown, as sales, services and customer touch-points have grown. We now generate data every minute of the day, awake or asleep. We tweet, email, and post to social media, personal blogs, and photography and video sharing sites. The things that make the things we use, and all the things around us, have embedded computers, are sensor-enabled, and generate even more data. Because of this, the focus of data management has changed from simply extracting from common source systems, transforming so that all the data conformed to internal standards, and loading it into that mystical single source of truth [the ETL of old]. Today, our focus is on discovering and exploring data relevant to our organizational and individual needs, no matter the source. And yet all this data must be vetted; data quality and data governance are more important than ever. While the idea of a single source of truth is passé, trust in our data is not. Whether we are trying to improve our personal fitness, determine the impact of the latest marketing campaign, or bring the perpetrators of genocide to justice, we expect consistency in the answers to the questions we ask of all these sources of data.

Informatica has been amazingly innovative in expanding its capabilities for data management, and its solutions and products keep up with where the industry is going. Informatica was one of the first data management companies to realize the importance of the Internet of Things (IoT). Its development of the Intelligent Data Platform is seen as a hallmark in handling all these new sources of data. Its attention to metadata and master data management has also improved, even outpacing the industry. Informatica can still be deployed on-premises in one’s own data center, in private or hybrid clouds, or on public cloud platforms. Real-time data management and continuous event processing are also part of Informatica’s suite of products. All of this innovation has been rewarded again today: for the 11th year in a row, Informatica has been named #1 in customer loyalty for data integration, earning top marks in the annual Data Integration Customer Satisfaction Survey conducted by the independent research firm Kantar TNS.

To show that Informatica is not resting on its laurels, they have also announced today new and enhanced products and services:

  • Cloud Support Offerings
  • Business Critical Success Plan for On-Premises Deployments
  • New Big Data Support Accelerator

You can read more about the Customer Loyalty award and the Informatica announcements in their press release.
