Did You See What Ockam.io Just DID

The W3C Decentralized Identifiers (DID) specification is a working specification from the Credentials Community Group. While not yet a standard, a working group has been proposed to develop the current specification into a full W3C standard. Amazingly, this proposed working group has support from over 60 companies (15 is the minimum required, and 25 is considered very healthy). Building trust into interoperability is important for the Internet of Things to mature, allowing the development of sensor analytics ecosystems (SensAE). The W3C DID is not focused on things alone, but on people as well as things, providing the opportunity for every user of the Internet to have public and private identities adhering to the guidelines of Privacy by Design. We often talk about the need within SensAE for privacy, transparency, security and convenience to be provided in a flexible fashion. The W3C DID is an excellent first step in allowing this flexibility to be controlled by each individual and thing on the Internet, indeed by each data stream in human-to-human, human-to-machine, machine-to-human and machine-to-machine exchanges or transactions. But every specification and every architecture needs to be implemented to have value. Moreover, they need to be implemented in such a way that the complexity is abstracted away, both to increase adoption and to reduce compromising errors. This is where the recently announced Ockam.io open source software development kit (under the Apache 2 license) and the Ockam TestNetwork come into play.

Currently, organizations entering IoT are either trying to build all of the pieces themselves, or searching for full-stack IoT platforms. Either approach can limit interoperability. Think of a simple example in the Smart Home area, where device vendors need to choose among vendor-centric platforms such as those offered by Amazon, Apple, Google or Samsung, with no hope of easy interoperability. Such vendor lock-in limits adoption. The same is true of the industrial IoT platform vendors. Manufacturers that might want two-way traceability from the mine to the assembly line to user environments to retirement of a product are stymied by the lack of interoperability and of secure means to share data among all the players in their supply chain, vendor, customer and environmental ecosystems. Standards can be confusing and can also cause lock-in. For example, there are two main standards bodies addressing Smart Grids, each with hundreds of standards and specifications that are not consistent from one body to the other, and that do not allow for secure data exchange among all involved parties.

The W3C DID specification seeks to support interoperability and trust, with individual control of data privacy and transparency. The overall specification requires specific DID implementation specifications to ensure identity management while maintaining interoperability among all organizations adhering to the overall DID specification. This means that onboarding IoT devices in a secure fashion, with integration among all the devices in an organization's ecosystem, can be done in seconds rather than months (or not at all). Even though, say, the OckamNetwork has never coordinated with Sovrin, or with some large corporation's Active Directory, one can register a DID claim for a device in the Ockam TestNetwork and instantly have trusted interoperability and exchanges with users of Sovrin or any DID-compliant organization. This means that an organization can move its IoT maturity immediately from simple connection to trusted communication. Let's look at an example from the Ockam.io SDK.
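
The registration itself is only a few calls in the SDK. As a stand-in, here is a minimal conceptual sketch in Python of what those calls do under the hood; it is not the Ockam SDK's actual API, and the device serial, manufacturer and "did:example" identifier shown are made up for illustration. It requires the third-party cryptography package.

# Conceptual sketch only; this is NOT the Ockam SDK API, just an illustration of
# the steps such an SDK abstracts away: make a key pair, build a DID document
# carrying the public key and device metadata, and sign it to prove control.
import json
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def build_did_document(device_serial):
    """Generate a signing key and a W3C-DID-style document for one device."""
    key = Ed25519PrivateKey.generate()
    pub = key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    # Derive an identifier from the public key; real DID methods have their own rules.
    did = "did:example:" + hashlib.sha256(pub).hexdigest()[:32]
    document = {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": did,
        "publicKey": [{
            "id": did + "#key-1",
            "type": "Ed25519VerificationKey2018",
            "publicKeyHex": pub.hex(),
        }],
        # Metadata travels with the identity and accumulates over the device's life.
        "metadata": {"serial": device_serial, "manufacturer": "Acme Sensors"},
    }
    return document, key

def sign_document(document, key):
    """Sign the canonicalized document; a real SDK would then submit it to the network."""
    payload = json.dumps(document, sort_keys=True).encode()
    return key.sign(payload)

if __name__ == "__main__":
    doc, signing_key = build_did_document("SN-000123")
    signature = sign_document(doc, signing_key)
    print(doc["id"], signature.hex()[:16] + "...")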


With just a few simple lines of code, a developer can register their device in such a way that the device is uniquely identified from original manufacture through any installations, incorporation into a system, deployments and changes in location, to reuse or retirement. Within the OckamNetwork, this means that life events, and the metadata from those life events, are recorded continuously, building trust throughout the life of the device. As always, metadata in one situation is useful data in another, such that the DID document that defines the identity also defines and leverages the subject's metadata. The developer is free within the W3C DID model to define the metadata as needed. This allows key management through a decentralized blockchain network of the developer's choosing, without creating silos. It also allows the end user of the device, or of the system that contains many devices, to trust the system, with reasonable confidence that it will not be compromised into a botnet.

To be successful, the W3C DID requires broad uptake. Major corporations are involved (beyond the 60 companies mentioned above); visit the Decentralized Identity Foundation to see the list.

Yes, the W3C DID and compliant networks such as the OckamNetwork use blockchains. Contrary to the common belief formed by the best-known uses of blockchain technology, currencies such as Bitcoin or Ethereum, where all the blocks or headers need to be tracked since genesis, the amount of data that needs to be exchanged to validate trust is not huge. This follows from the CAP (Brewer's) Theorem, under which a distributed system such as a blockchain can guarantee at most two of Consistency, Availability and Partition-tolerance. The OckamNetwork is CP based. Because of this absolute consistency, with instant commits and 100% finality, one only needs to track the most recent block. Another interesting side effect of CP is that it allows for low-power and casual connectivity, two important features for IoT devices, which may need to conserve power and connect only when needed.
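
Here is a minimal sketch of the light-client idea that finality enables; it is my own illustration, not OckamNetwork code. Because each finalized header commits to its predecessor by hash, a device that holds one trusted header can verify the next without replaying the chain.

# Sketch: with instant, final commits a device only keeps the latest header.
import hashlib
import json

def header_hash(header):
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

class LightDevice:
    """Keeps a single finalized header instead of the whole chain."""
    def __init__(self, trusted_header):
        self.latest = trusted_header

    def accept(self, new_header):
        # The only check needed: does the new header commit to the one we trust?
        if new_header["prev_hash"] == header_hash(self.latest):
            self.latest = new_header      # replace, don't accumulate
            return True
        return False

genesis = {"height": 0, "prev_hash": "", "state_root": "abc"}
device = LightDevice(genesis)
nxt = {"height": 1, "prev_hash": header_hash(genesis), "state_root": "def"}
print(device.accept(nxt))   # True; the device now stores only block 1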

Another interesting feature of the W3C DID is that the issuer of a claim can make claims about any subject, that is, about any other entity in the network. This addresses a problem often seen in IoT: a failed sensor is replaced with another sensor that, while in-spec, has slightly different parameters than the previous one. What matters is that the data stream remains consistent, and that users of that data understand that the data are still about the same system or location, and that differences in the data are due to a change in sensor, not a change in the system. The W3C DID model of claims allows a graph model of sensor to location that ensures consistency of the data stream while ensuring trust in the source of the data, through the signature of the claim's issuer, proven by a two-thirds majority of provers in the chain. Thus, the state of the blockchain is modeled as a graph that consistently tracks the flow of data from all the different sensors and types of sensors, graphed to each location, to each subsystem, to each system, whether that system is a vehicle or a city.
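
As a rough sketch of the idea (my own illustration, not the W3C DID data model verbatim), claims can be kept as edges in a small graph keyed by subject, so that a location keeps one continuous data stream even as the sensors attached to it are replaced.

# Sketch: claims as a graph, so a replaced sensor doesn't break the data stream.
from collections import defaultdict

claims = defaultdict(list)   # subject -> list of (predicate, object, issuer)

def issue_claim(issuer, subject, predicate, obj):
    claims[subject].append((predicate, obj, issuer))

# The site operator attaches sensor A to pump-7, later retires it and adds sensor B.
issue_claim("did:ex:operator", "did:ex:pump-7", "hasSensor", "did:ex:sensor-A")
issue_claim("did:ex:operator", "did:ex:pump-7", "hasSensor", "did:ex:sensor-B")
issue_claim("did:ex:operator", "did:ex:sensor-A", "retired", "2019-05-01")

def current_sensor(location):
    """Consumers resolve the location, not the sensor, so the stream stays continuous."""
    active = [obj for pred, obj, _ in claims[location]
              if pred == "hasSensor"
              and not any(p == "retired" for p, _, _ in claims[obj])]
    return active[-1] if active else None

print(current_sensor("did:ex:pump-7"))   # did:ex:sensor-B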

The beauty of the Ockam.io SDK is that the developer using it does not need to know that there is a blockchain, and doesn't need to know how to implement cryptography; they can, but there is no requirement to do so. These functions are available to the developer while the complexity is abstracted away. With ten lines of code, one can join a blockchain and cryptographically sign the identity of the device, but the developer does not need to learn the complexities behind this to provide orders of magnitude better security for their IoT implementation. The whole SDK is built around interfaces, so that one can adapt to various hardware characteristics, for example a Raspberry Pi versus a microcontroller. All of this is handled by the layers of abstraction in the system. The hundreds of validator nodes that maintain consensus need to process, and repeat, every transaction. To maximize the throughput of the system, Ockam.io uses very lightweight graph tools. Thus, as the OckamNetwork matures, it will use open source graph libraries without needing the full capabilities of a graph database management or analytics system such as Neo4j. This will also allow micronodes that don't have cryptographic capability to still leverage the blockchain by being represented on the chain by another system, such as a software guard extension (SGX) enclave. A low-power device with limited bandwidth only needs to wake up when needed, transmit a few hundred bits of data and confirm that the data was written, either by staying awake for a few seconds or by confirming the next time it wakes up. Another aspect is that, much like older SOA systems, descriptive metadata enhances interoperability and self-discovery among devices that need to interoperate in a trusted fashion.

Beyond the technical, an important part of any open source project is community. There is no successful open source project that does not have a successful community. Ockam.io is building a community through GitHub, Slack, open source contributions to other projects (such as a DID parser in GoLang) and IoT, security and blockchain meetups. There are also currently six pilots being implemented with early adopters of the Ockam.io SDK that are growing the community. The advisory board includes futurists and advisors who are both proselytizing and mentoring to make the community strong.

It is early days for the W3C DID, but with companies like Ockam.io building open source SDKs that implement the DID specification, the future for both DID and Ockam.io is bright, and together they will help overcome the siloed development that has been limiting IoT success. Ockam.io is not focused on any market vertical, but on use-case verticals. This is applicable to all the solution spaces that make up the IoT, from SmartHomes to SmartRegions, from Supply-chains to Manufacturers, from oil rigs to SmartGrids, and from Fitness Trackers to Personalized, Predictive, Preventative Healthcare.

Developers who wish to provide strong identity management quickly and conveniently should check out the OckamNetwork architecture, download the repo from GitHub and up their IoT trust game by orders of magnitude.

Setting up the Server for OSS DSS

The first thing to do when setting up your server with open source solutions [OSS] for a decision support system [DSS] is to check all the dependencies and system requirements for the software that you're installing.

Generally, in our case, once you make sure that your software will work on the version of your operating system that you're running, the major dependency is Java. Some of the software that we're running may have trouble with OpenJDK, and other software may require the Java software development kit [JDK or Java SDK], not just the runtime environment [JRE]. For example, Hadoop 0.20.2 may have problems with OpenJDK, and versions before LucidDB 0.9.3 required the JDK. Once upon a time, two famous database companies would each issue system patches that were required for their RDBMS to run, but that would break the other, forcing customers to have only one system on a host. A true pain for development environments.

Since I don't know when you'll be reading this, or if you're planning to use different software than I'm using, I'm just going to suggest that you check very carefully that the system requirements and software dependencies are fulfilled by your server.

Now that we're sure that the *Nix or Microsoft operating system that we're using will support the software that we're using, the next step is to set up a system user for each software package. Here are examples for *Nix operating systems: those derived from the Linux 2.x kernel, and the BSD-derived MacOSX. I've tested this on Red Hat Enterprise Linux 5, OpenSUSE 11, MacOSX 10.5 [Leopard] and 10.6 [Snow Leopard].

On Linux, at the command line interface [CLI]:

useradd -c "name your software Server" -s /bin/bash -mr USERNAME
-c COMMENT is the comment field used as the user's full name
-s SHELL defines the login shell
-m create the home directory
-r create as a system user

Likely, you will need to run this command through sudo, and may need the full path:

/usr/sbin/useradd

Change the password

sudo passwd USERNAME

Here's one example, setting up the Pentaho system user.

poc@elf:~> sudo /usr/sbin/useradd -c "Pentaho BI Server" -s /bin/bash -mr pentaho
poc@elf:~> sudo passwd pentaho
root's password:
Changing password for pentaho.
New Password:
Reenter New Password:
Password changed.
poc@elf:~>

On the Mac, do the following:

vate:~ poc$ sudo dscl /Local/Default -create /Users/_pentaho RealName "PentahoCE BI Server" UserShell /bin/bash
vate:~ poc$ sudo passwd _pentaho
Changing password for _pentaho.
New Password:
Reenter New Password:
Password changed.
vate:~ poc$

On Windows you'll want to set up your server software as a service, after the installation.

If you haven't already done so, you'll want to download the software that you want to use from the appropriate place. In many cases this will be Sourceforge. Alternate sources might be the Enterprise Editions of Pentaho, the DynamoBI downloads for LucidDB, SQLstream, SpagoWorld, The R-Project, Hadoop, and many more possibilities.

Installing this software is no different than installing any other software on your particular operating system:

  • On any system you may need to unpack an archive indicated by a .zip, .rar, .gz or .tar file extension. On Windows & MacOSX you will likely just double-click the archive file to unpack it. On *Nix systems, including MacOSX and Linux, you may also use the CLI and a command such as gunzip, unzip, or tar xvzf.
  • On Windows, you'll likely double-click a .exe file and follow the instructions from the installer.
  • On MacOSX, you might double-click a .dmg file and drag the application into the Applications directory, or you'll do something more *Nix like.
  • On Linux systems, you might, at the CLI, execute the .bin file as the system user that you set up for this software.
  • On *Nix systems, you may wish to install the server-side somewhere other than a user-specific or local Applications directory, such as /usr/local/ or even in a web-root.

One thing to note is that most of the software that you'll use for an OSS DSS uses Java, and that the latest Pentaho includes the latest Java distribution. Most other software doesn't. Depending on your platform, and the supporting software that you have installed, you may wish to point [softwareNAME]_JAVA_HOME to the Pentaho Java installation, especially if the version of Java included with Pentaho meets the system requirements for other software that you want to use, and you don't have any other compatible Java on your system.

For both security and to avoid any confusion, you might want to change the ports used by the software you installed from their defaults.

You may need to change other configuration files from their defaults for various reasons as well, though I generally find the defaults to be satisfactory. You may also need to install components from one package into another package, for compatibility or interchange. For example, if you're trying out, or if you've purchased, Pentaho Enterprise Edition with Hadoop, Pentaho provides Java libraries [JAR files] and licenses to install on each Hadoop node, including code that Pentaho has contributed to the Hadoop project.

Also remember that Hadoop is a top-level Apache project, and not usable software in and of itself. It contains subprojects that make it useful:

  • Hadoop Common, containing the utilities that support all the rest
  • HDFS - the Hadoop Distributed File System
  • MapReduce - the software framework for distributed processing of data on clusters (a conceptual sketch of the model follows the next list)

You may also want one or more of the other Apache subprojects related to Hadoop:

  • Avro - a data serialization system
  • Chukwa - a data collection system
  • HBase - a distributed database management system for structured data
  • Hive - a data warehouse infrastructure
  • Mahout - a data mining library
  • Pig - a high-level data processing language for parallelization
  • Zookeeper - a coordination service for distributed applications
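
To make the MapReduce model concrete, here is a tiny single-process word count in the map/shuffle/reduce style; it is plain Python, not Hadoop, and only shows the shape of the programming model.

# Conceptual word count in the MapReduce style -- single process, not Hadoop.
from collections import defaultdict

def map_phase(record):
    # Emit (word, 1) for every word in one input record.
    for word in record.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # Fold all values for one key into a single result.
    return key, sum(values)

def run(records):
    groups = defaultdict(list)                 # the "shuffle" step
    for record in records:
        for key, value in map_phase(record):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

print(run(["the quick brown fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}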

Reading Pentaho Kettle Solutions

On a rainy day, there's nothing better than to be sitting by the stove, stirring a big kettle with a finely turned spoon. I might be cooking up a nice meal of Abruzzo Maccheroni alla Chitarra con Polpettine, but actually, I'm reading the ebook edition of Pentaho Kettle Solutions: Building Open Source ETL Solutions with Pentaho Data Integration on my iPhone.

Some of my notes made while reading Pentaho Kettle Solutions:

…45% of all ETL is still done by hand-coded programs/scripts… made sense when… tools have 6-figure price tags… Actually, some extractions and many transformations can't be done natively in high-priced tools like Informatica and Ab Initio.

Jobs, transformations, steps and hops are the basic building blocks of KETTLE processes

It's great to see the Agile Manifesto quoted at the beginning of the discussion of AgileBI.

BayAreaUseR October Special Event

Zhou Yu organized a great special event for the San Francisco Bay Area useR group, and has asked me to post the slide decks for download. Here they are:

No longer missing is the very interesting presentation by Yasemin Atalay showing the difference in plotting analysis using the Windermere Humic Aqueous Model for river water environmental factors, first without using R, and then the increase in variety and accuracy of analysis and plotting gained by using R.

Search Terms for Data Management & Analytics

Recently, for a prospective customer, I created a list of some search terms to provide them with some "late night" reading on data management & analytics. I've tried these terms out on Google, and as suspected, for most, the first hit is for Wikipedia. While most articles in Wikipedia need to be taken with a grain of salt, they will give you a good overview. [By the way, I use the "Talk" page on the articles to see the discussion and arguments about the article's content as an indicator of how big a grain of salt is needed for that article] ;) So plug these into your favorite search engine, and happy reading.

  • Reporting - top two hits on Google are Wikipedia, and, interestingly, Pentaho
  • Ad-hoc reporting
  • OLAP - one of the first page hits is for Julian Hyde's blog, creator of the open source tool for OLAP, Mondrian, as well as real-time analytics engine, SQLstream
  • Enterprise dashboard - interestingly, Wikipedia doesn't come up in the top hits for this term on Google, so here's a link for Wikipedia: http://en.wikipedia.org/wiki/Dashboards_(management_information_systems)
  • Analytics - isn't very useful as a search term, but the product page from SAS gives a nice overview
  • Advanced Analytics - is mostly marketing buzz, so be wary of anything that you find using this as search term

Often, Data Mining, Machine Learning and Predictives are used interchangeably. This isn't really correct, as you can see from the following five search terms…

  • Data Mining
  • Machine Learning
  • Predictive Analytics
  • Predictive Intelligence - is an earlier term for Predictives that has mostly been supplanted by Predictive Analytics. I actually prefer just "Predictives".
  • PMML - Predictive Model Markup Language - is a way of transporting predictive models from one software package to another. Few packages will both export and import PMML. The lack of that capability can lock you into a solution, making it expensive to change vendors. The first hit for PMML on Google today is the Data Mining Group, which is a great resource. One company listed, Zementis, is a start-up that is becoming a leader in running data mining and predictive models that have been created anywhere
  • R - the R statistical language, is difficult to search on Google. Go to http://www.r-project.org/ and http://www.rseek.org/ … instead. R is useful for writing applications for any type of statistical analysis, and is invaluable for creating new algorithms and predictive models
  • ETL - Extract, Transform & Load, is the most common way of getting information from source systems to analytic systems
  • ReSTful Web Services - Representational State Transfer - can expose data as a web service using the four verbs of the web
  • SOA
  • ADBMS - Analytic Database Management Systems doesn't work well as a search term. Start with the Eigenbase.org site and follow the links from the Eigenbase subproject, LucidDB. Also, check out AsterData
  • Bayes - The Reverend Thomas Bayes came up with this interesting approach to statistical analysis in the 1700s. I first started creating Bayesian statistical methods and algorithms for predicting reliability and risk associated with solid propellant rockets. You'll find good articles using Bayes as a search term in Google. A somewhat denser article can be found at http://www.scholarpedia.org/article/Bayesian_statistics And some interesting research using Bayes can be found at: Andrew Gelman's Blog. You're likely familiar with one common Bayesian algorithm, naïve Bayes, which is used by most anti-spam email programs (a tiny worked example follows this list). Other forms are objective Bayes with non-informative priors and the original subjective Bayes. I have an old aerospace joke about the Rand Corporation's Delphi method, based on subjective Bayes :-) I created my own methodology, and don't really care for naïve Bayes nor non-informative priors.
  • Sentiment Analysis - which is one of Seth Grimes' current areas of research
  • Decision Support Systems - in addition to searching on Google, you might find my recent OSS DSS Study Guide of interest
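
To ground the Bayes entry above, here is a tiny worked example of Bayes' theorem as a spam filter would use it; all of the rates are invented for illustration.

# Bayes' theorem with invented rates: P(spam | word) = P(word | spam) P(spam) / P(word)
p_spam = 0.40                 # prior: 40% of mail is spam (made-up figure)
p_word_given_spam = 0.25      # "free" appears in 25% of spam
p_word_given_ham = 0.02       # ...and in 2% of legitimate mail

p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))   # ~0.893: seeing "free" raises 0.40 to about 0.89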

Let me know if I missed your favorite search term for data management & analytics.

Data Artisan Smith or Scientist

Over the past few months, a debate has been proceeding on whether or not a new discipline, a new career path, is emerging from the tsunami of data bearing down on us. The need for a new type of Renaissance [Wo]Man to deal with the Big Data onslaught. To wit, Data Science.

I'm writing about this now, because last night, at an every-three-week get together devoted to cask beer and data analysis, the topic came up. [Yes, every-THREE-weeks - a month is too long to go without cask beer fueled discussions of Rstats, BigData, Streaming SQL, BI and more.] The statisticians in the group, including myself, strongly disagreed with the way the term is being used; the software/database types were either in favor or ambivalent. We all agreed that a new, interdisciplinary approach to Big Data is needed. Oh, and I'll stay on topic here, and not get into another debate as to the definition of "Big Data". ;)

This lively conversation reinforced my desire to write about Data Science that swelled up in me after reading "What is Data Science?" by Mike Loukides published on O'Reilly Radar, and a subsequent discussion on Twitter held the following weekend, concerning data analytics.

The term "Data Science" isn't new, but it is taking on new meanings. The Journal of Data Science published JDS volume 1, issue 1 in January of 2003. The Scope of the JDS is very clearly related to applied statistics

By "Data Science", we mean almost everything that has something to do with data: Collecting, analyzing, modeling...... yet the most important part is its applications --- all sorts of applications. This journal is devoted to applications of statistical methods at large.
-- About JDS, Scope, First Paragraph

There is also the CODATA Data Science Journal, which appears to have last been updated in August of 2007, and currently has no content, other than its self-description as

The Data Science Journal is a peer-reviewed electronic journal publishing papers on the management of data and databases in Science and Technology.

I think that two definitions can be derived from these two journals.

  1. Data Science is systematic study, through observation and experiment, of the collection, modeling, analysis, visualization, dissemination, and application of data.
  2. Data Science is the use of data and database technology within physical and natural sciences and engineering.

I can agree with the first, especially with the JDS Scope clearly stating that Data Science is applied statistics.

The New Oxford American Dictionary, on which the Apple Dictionary program is based, defines science as a noun

the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observations and experiments.

And a similar definition of science can be found on Dictionary.com.

In many ways, I like Mike Loukides' article "What is Data Science?" in how it highlights the need for this new discipline. I just don't like what he describes to be the new definition of "data science". Indeed, I very much disagree with this statement from the article.

Using data effectively requires something different from traditional statistics, where actuaries in business suits perform arcane but fairly well-defined kinds of analysis. What differentiates data science from statistics is that data science is a holistic approach. We're increasingly finding data in the wild, and data scientists are involved with gathering data, massaging it into a tractable form, making it tell its story, and presenting that story to others.

A statistician is not an actuary. They're very different roles. I know this because I worked for over a decade applying statistics to determining the reliability and risk associated with very large, complex systems such as rockets and space-borne astrophysics observatories. I once hired a Cal student as an intern because she feared that the only career open to her as a math major, was to be an actuary. I showed her a different path. So, yes, I know, from experience, that a statistician is not an actuary. Actually, the definition of a data scientist given, that is "gathering data, massaging it into a tractable form, making it tell its story, and presenting that story to others" is exactly what a statistician does.

I do however see the need for a new discipline, separate from applied statistics, or data science. The massive amount of data to come from an instrumented world with strongly interconnected people and machines, and real-time analysis, inference and prediction from those data, will require inter-disciplinary skills. But I see those skills coming together in a person who is more of a smith, or, as Julian Hyde put it last night, an artisan. Falling back on the old dictionary again, a smith is someone who is skilled in creating something with a specific material; an artisan is someone who is skilled in a craft, making things by hand.

Another reason that I don't like the term "data science" for this interdisciplinary role, stems from what Mike Loukides describes in his article "What is Data Science?" as the definition for this new discipline "Data science requires skills ranging from traditional computer science to mathematics to art". I agree that the new discipline requires these three things, and more, even softer skills. I disagree that these add up to data science.

I even prefer "data geek", as defined by Michael E. Driscoll in "The Three Sexy Skills of Data Geeks". Michael Driscoll's post of 2009 May 27 certainly agrees skill-wise with Mike Loukides' post of 2010 June 02.

  1. Skill #1: Statistics (Studying)
  2. Skill #2: Data Munging (Suffering)
  3. Skill #3: Visualization (Storytelling)

And I very much prefer "Data Munging" to "Computer Science" as one of the three skills.

I'll stick to the definition that I gave above for data science as "systematic study, through observation and experiment, of the collection, modeling, analysis, visualization, dissemination, and application of data". This is also applied statistics. So, what else is needed for this new discipline? Mike and Michael are correct: computer skills, especially data munging, and art. Yet any statistician today has computer skills, generally in one or more of SAS, SPSS, R, S-plus, Python, SQL, Stata, MatLab and other software packages, as well as familiarity with various data storage & management methods. Some statisticians are even artists, perhaps as storytellers, as evidenced by that rare great teacher or convincing expert witness, or perhaps as visualizers, creating statistically accurate animations to clearly describe the analysis, as evidenced by the career of that intern I hired so many years ago.

The data smith, the data artisan, must be comfortable with all forms of data:

  • structured,
  • unstructured and
  • semi-structured

Just as any other smith, someone following this new discipline might serve an apprenticeship creating new things from these forms of data such as a data warehouse or an OLAP cube, a sentiment analysis or a streaming SQL sensor web, or a recommendation engine or complex system predictives. The data smith must become very comfortable with putting all forms of data together in new ways, to come to new conclusions.

Just as a goldsmith will never make a piece of jewelry identical to the one finished days before, just as art can be forged but not duplicated, the data smith, the data artisan will glean new inferences every time they look at the data, will make new predictions with every new datum, and the story they tell, the picture they paint, will be different each time.

And perhaps then, the data smith becomes a master, an artisan.

PS: Here's a list of links to that Twitter conversation among some of the most respected people in the biz, on Data Analytics

  1. https://twitter.com/NeilRaden/status/15512935981
  2. https://twitter.com/NeilRaden/status/15513225191
  3. https://twitter.com/NeilRaden/status/15513275261
  4. https://twitter.com/NeilRaden/status/15513453916
  5. https://twitter.com/datachick/status/15513460384
  6. https://twitter.com/NeilRaden/status/15513488053
  7. https://twitter.com/datachick/status/15513677836
  8. https://twitter.com/CMastication/status/15513772446
  9. https://twitter.com/NeilRaden/status/15513821393
  10. https://twitter.com/NeilRaden/status/15513854916
  11. https://twitter.com/NeilRaden/status/15513915694
  12. https://twitter.com/alecsharp/status/15513980301
  13. https://twitter.com/NeilRaden/status/15514104372
  14. https://twitter.com/alecsharp/status/15514097194
  15. https://twitter.com/CMastication/status/15514374095
  16. https://twitter.com/estrenuo/status/15514634644
  17. https://twitter.com/NeilRaden/status/15515243453
  18. https://twitter.com/CMastication/status/15516185085
  19. https://twitter.com/annmariastat/status/15516321715
  20. https://twitter.com/NeilRaden/status/15519544709
  21. https://twitter.com/NeilRaden/status/15519597061
  22. https://twitter.com/NeilRaden/status/15519621974
  23. https://twitter.com/skemsley/status/15519932631
  24. https://twitter.com/aristippus303/status/15520146540
  25. https://twitter.com/NeilRaden/status/15520478566
  26. https://twitter.com/SethGrimes/status/15520765766
  27. https://twitter.com/SethGrimes/status/15520851678
  28. https://twitter.com/NeilRaden/status/15521050387
  29. https://twitter.com/NeilRaden/status/15521106901
  30. https://twitter.com/NeilRaden/status/15521133647
  31. https://twitter.com/NeilRaden/status/15521192977
  32. https://twitter.com/SethGrimes/status/15521579977
  33. https://twitter.com/ryanprociuk/status/15521637974
Technology for the OSS DSS Study Guide

'Tis been longer than intended, but we finally have the technology, time and resources to continue with our Open Source Solutions Decision Support System Study Guide (OSS DSS SG).

First, I want to thank SQLstream for allowing us to use SQLstream as a part of our solution. As mentioned in our "First DSS Study Guide" post, we were hoping to add a real-time component to our DSS. SQLstream is not open source, and not readily available for download. It is, however, a co-founder of and core contributor to the open source Eigenbase Project, and has incorporated Eigenbase technology into its product. So, what is SQLstream? To quote their web site, "SQLstream enables executives to make strategic decisions based on current data, in flight, from multiple, diverse sources". And that is why we are so interested in having SQLstream as a part of our DSS technology stack: to have the capability to capture and manipulate data as it is being generated.

Today, there are two very important classes of technologies that should belong to any DSS: data warehousing (DW) and business intelligence (BI). What actually comprises these technologies is still a matter of debate. To me, they are quite interrelated and provide the following capabilities.

  • The means of getting data from one or more sources to one or more target storage & analysis systems. Regardless of the details for the source(s) and the target(s), the traditional means in data warehousing is Extract from the source(s), Transform for consistency & correctness, and Load into the target(s), that is, ETL (a minimal sketch follows this list). Other means, such as using data services within a services oriented architecture (SOA), either using provider-consumer contracts & Web Service Definition Language (WSDL) or representational state transfer (ReST), are also possible.
  • Active storage over the long term of historic and near-current data. Active storage as opposed to static storage, such as a tape archive. This storage should be optimized for reporting and analysis through both its logical and physical data models, and through the database architecture and technologies implemented. Today we're seeing an amazing surge of data storage and management innovation, with column-store relational database management systems (RDBMS), map-reduce (M-R), key-value stores (KVS) and more, especially hybrids of one or several of old and new technologies. The innovation is coming so thick and fast, that the terminology is even more confused than in the rest of the BI world. NoSQL has become a popular term for all non-RDBMS, and even some RDBMS like column-store. But even here, what once meant No Structured Query Language now is often defined as Not only Structured Query Language, as if SQL was the only way to create an RDBMS (can someone say Progress and its proprietary 4GL).
  • Tools for reporting, including gathering the data, performing calculations, graphing or, perhaps more accurately, charting, formatting and disseminating.
  • Online Analytical Processing (OLAP) also known as "slice and dice", generally allowing forms of multi-dimensional or pivot analysis. Simply put, there are three underlying concepts for OLAP: the cube (a.k.a. hypercube, multi-dimensional database [MDDB] or OLAP engine), the measures (facts) & dimensions, and aggregation. OLAP provides much more flexibility than reporting, though the two often work hand-in-hand, especially for ad-hoc reporting and analysis.
  • Data Mining, including machine learning and the ability to discover correlations among disparate data sets.
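
To make the ETL item above concrete, here is a minimal sketch of my own, not tied to any particular tool: extract rows from a source, transform them for consistency, and load them into an analytic target.

# Minimal ETL sketch: extract from CSV text, transform for consistency, load into SQLite.
import csv, io, sqlite3

source = io.StringIO("id,region,amount\n1, west ,10.5\n2,EAST,7\n")  # stand-in for a source system

# Extract
rows = list(csv.DictReader(source))

# Transform: trim and standardize region names, coerce amounts to numbers
clean = [(int(r["id"]), r["region"].strip().title(), float(r["amount"])) for r in rows]

# Load into the target used for reporting and analysis
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)
print(db.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall())
# e.g. [('East', 7.0), ('West', 10.5)]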

For our purposes, an important question is whether or not there are open source, or at least open source based, solutions for all of these capabilities. The answer is yes. As a matter of fact, there are three complete open source BI Suites [there were four, but the first, written in PERL, the Bee Project from the Czech Republic, is no longer being updated]. Here's a brief overview of SpagoBI, JasperSoft, and Pentaho.

Capability    | SpagoBI            | JasperSoft                   | Pentaho
ETL           | Talend             | Talend, JasperETL            | KETTLE (PDI)
Included DBMS | HSQLDB             | MySQL                        | HSQLDB
Reporting     | BIRT, JasperReport | JasperReports, iReports      | jFreeReports
Analyzer      | jPivot, PaloPivot  | JasperServer, JasperAnalysis | jPivot, PAT
OLAP          | Mondrian           | Mondrian                     | Mondrian
Data Mining   | Weka               | None                         | Weka

We'll be using Pentaho, but you can use any of these, or any combination of the OSS projects that are used by these BI Suites, or pick and choose from the more than 60 projects in our OSS Linkblog, as shown in the sidebar to this blog. All of the OSS BI Suites have many more features than shown in the simple table above. For example, SpagoBI has good tools for geographic & location services. Also, the JasperSoft Professional and Enterprise Editions have many more features than their Community Edition, such as Ad Hoc Reporting and Dashboards. Pentaho has a different Analyzer in their Enterprise Edition than either jPivot or PAT, Pentaho Analyzer, based upon the SaaS ClearView from the now-defunct LucidEra, as well as ease-of-use tools such as an OLAP schema designer, and enterprise class security and administration tools.

Data warehousing using general purpose RDBMSs such as Oracle, EnterpriseDB, PostgreSQL or MySQL is gradually giving way to analytic database management systems (ADBMS), or, as we mentioned above, to the catch-all NoSQL data storage systems, or even to hybrid systems. For example, Oracle recently introduced hybrid column-row store features, and Aster Data has a column-store Massive Parallel Processing (MPP) DBMS / map-reduce hybrid [updated 20100616 per comment from Seth Grimes]. Pentaho supports Hadoop, as well as traditional general purpose RDBMSs and column-store ADBMSs. In the open source world, there are two columnar storage engines for MySQL, Infobright and Calpont InfiniDB, as well as one column-store ADBMS purpose-built for BI, LucidDB. We'll be using LucidDB, and just for fun, may throw some data into Hadoop.
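
As a rough illustration of why column stores suit analytic scans, consider the same table in row and column layouts; an aggregate over one column only has to touch that column's array. The sketch below is conceptual, not how LucidDB or Infobright is actually implemented.

# Row store vs. column store: the same table, two layouts.
rows = [
    {"id": 1, "region": "west", "amount": 10.5},
    {"id": 2, "region": "east", "amount": 7.0},
    {"id": 3, "region": "west", "amount": 3.25},
]

# Column layout: one contiguous array per column.
columns = {
    "id": [1, 2, 3],
    "region": ["west", "east", "west"],
    "amount": [10.5, 7.0, 3.25],
}

# SUM(amount): the row store walks every field of every row...
row_total = sum(r["amount"] for r in rows)
# ...while the column store reads only the one array it needs (and it compresses well).
col_total = sum(columns["amount"])
assert row_total == col_total == 20.75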

In addition, a modern DSS needs two more primary capabilities: Predictives, sometimes called predictive intelligence or predictive analytics (PA), which is the ability to go beyond inference and trend analysis by assigning a probability, with associated confidence, to the likelihood of an event occurring in the future; and full Statistical Analysis, which includes determining the probability density or distribution function that best describes the data. Of course, there are OSS projects for these as well, such as The R Project, the Apache Commons Math libraries, and other GNU projects that can be found in our Linkblog.

For statistical analysis and predictives, we'll be using the open source R statistical language and the open standard predictive model markup language (PMML), both of which are also supported by Pentaho.

We have all of these OSS projects installed on a Red Hat Enterprise Linux machine. The trick will be to get them all working together. The magic will be in modeling and analyzing the data to support good decisions. There are several areas of decision making that we're considering as examples. One is fairly prosaic, one is very interesting and far-reaching, and the others are somewhat in between.

  1. A fairly simple example would be to take our blog statistics, a real-time stream using SQLstream's Twitter API, and run experiments to determine whether or not, and possibly how, Twitter affects traffic to and interaction with our blogs. Possibly, we could get to the point where we can predict how our use of Twitter will affect our blog.
  2. A much more far-reaching idea was presented to me by Ken Winnick, via Twitter, and has created an ongoing Twitter conversation and hashtag, #BPgulfDB. Let's take crowd-sourced, government, and other publicly available data about the recent oil spill in the Gulf of Mexico, and analyze it.
  3. Another idea is to take historical home utility usage plus current smart meter usage data, and create a real-time dashboard, and even predictives, for reducing and managing energy usage.
  4. We also have the opportunity of using public data to enhance reporting and analytics for small, rural and research hospitals.

OSS DSS Formalization

The next step in our open source solutions (OSS) for decision support systems (DSS) study guide (SG), according to the syllabus, is to make our first decision: a formal definition of "Decision Support System". Next, and soon, will be a post listing the technologies that will contribute to our studies.

The first stop in looking for a definition of anything today, is Wikipedia. And indeed, Wikipedia does have a nice article on DSS. One of the things that I find most informative about Wikipedia articles, is the "Talk" page for an article. The DSS discussion is rather mild though, no ongoing debate as can be found on some other talk pages, such as the discussion about Business Intelligence. The talk pages also change more often, and provide insight into the thoughts that go into the main article.

And of course, the second stop is a Google search for Decision Support System; a search on DSS is not nearly as fruitful for our purposes. :)

Once upon a time, we might have gone to a library and thumbed through the card catalog to find some books on Decision Support Systems. A more popular approach today would be to search Amazon for Decision Support books. There are several books in my library that you might find interesting for different reasons:

  1. Pentaho Solutions: Business Intelligence and Data Warehousing with Pentaho and MySQL by Roland Bouman & Jos van Dongen provides a very good overview of data warehousing, business intelligence and data mining, all key components to a DSS, and does so within the context of the open source Pentaho suite
  2. Smart Enough Systems: How to Deliver Competitive Advantage by Automating Hidden Decisions by James Taylor & Neil Raden introduces business concepts for truly managing information and using decision support systems, as well as being a primer on data warehousing and business intelligence, but goes beyond this by automating the data flow and decision making processes
  3. Business Intelligence Roadmap: The Complete Project Lifecycle for Decision-Support Applications by Larissa T. Moss & Shaku Atre takes a business, program and project management approach to implementing DSS within a company, introducing fundamental concepts in a clear, though simplistic, manner
  4. Competing on Analytics: The New Science of Winning by Thomas H. Davenport & Jeanne G. Harris in many ways goes into the next generation of decision support by showing how data, statistical and quantitative analysis within a context specific processes, gives businesses a strong lead over their competition, albeit, it does so at a very simplistic, formulaic level

These books range from being technology focused to being general business books, but they all provide insight into how various components of DSS fit into a business, and different approaches to implementing them. None of them actually provide a complete DSS, and only the first focuses on OSS. If you followed the Amazon search link given previously, you might also have noticed that there are books that show Excel as a DSS, and there is a preponderance of books that focus on the biomedical/pharmaceutical/healthcare industry. Another focus area is in using geographic information systems (actually one of the first uses for multi-dimensional databases) for decision support. There are several books in this search that look good, but haven't made it into my library as yet. I would love to hear your recommendations (perhaps in the comments).

From all of this, and our experiences in implementing various DW, BI and DSS programs, I'm going to give a definition of DSS. From a previous post in this DSS SG, we have the following:

A DSS is a set of processes and technology that help an individual to make a better decision than they could without the DSS.
-- Questions and Commonality

As we stated, this is vague and generic. Now that we've done some reading, let's see if we can do better.

A DSS assists an individual in reaching the best possible conclusion, resolution or course of action in stand-alone, iterative or interdependent situations, by using historical and current structured and unstructured data, collaboration with colleagues, and personal knowledge to predict the outcome or infer the consequences.

I like that definition, but your comments will help to refine it.

Note that we make no mention of specific processes, nor any technology whatsoever. It reflects my bias that decisions are made by individuals, not groups (electoral systems notwithstanding). To be true to our "TeleInterActive Lifestyle" ;) I should point out that the DSS must be available when and where the individual needs to make the decision.

Any comments?

R the next Big Thing or Not

Recently, AnnMaria De Mars, PhD (multiple) and Dr. Peter Flom, PhD have stirred up a bit of a tempest in a tweet-pot, as well as in the statistical blogosphere, with comparisons of R and SAS, IBM/SPSS and the like. I've commented on both of their blogs, but decided to expand a bit here, as the choice of R is something that we planned to cover in a later post to our Open Source Solutions Decision Support Systems Study Guide. First, let me say that Dr. De Mars and Dr. Flom appear to have posted completely independently of each other, and further, that their posts have different goals.

In The Next Big Thing, Dr. De Mars is looking for the next big thing, both to keep her own career on track, and to guide students into areas of study that will survive in the job market in the coming decades. This is always difficult for mentors, as we can't always anticipate the "black swan" events that might change things drastically. The tempestuous nature of her post came from one little sentence:

Contrary to what some people seem to think, R is definitely not the next big thing, either. -- AnnMaria De Mars, The Next Big Thing, AnnMaria's Blog

In SAS vs. R, Introduction and Request, Dr. Flom starts a series comparing R and SAS from the standpoint of a statistician deciding upon tools to use.

There are several threads in Dr. De Mars' post. I agree with Dr. De Mars that two of the "next big things" in data management & analysis are data visualization and dealing with unstructured data. I'm of the opinion that there is a third area, related to the "Internet of Things" and the tsunami of data that will be generated by it. These are conceptual areas, however. Dr. De Mars quickly moves on to discussing the tools that might be a part of the solutions to these next big things. The concepts cited are neither software packages nor computing languages. The software packages SAS, IBM/SPSS, Stata, Pentaho and the like, and the computing language S, with its open source distribution R and its proprietary distribution S+, are none of them likely to be the next big thing in themselves; they are simply useful tools to know today.

I find it interesting that both Dr. De Mars and Dr. Flom, as well as the various commenters, tweeters, and other posters, are comparing software suites and applications with a computing language. I think that a bit more historical perspective might be needed in bringing these threads together.

In 1979, when I first sat down with a FORTRAN programmer to turn my Bayesian methodologies into practical applications to determine the reliability and risk associated with the STAR48 kick motor and associated Payload Assist Module (PAM), the statistical libraries for FORTRAN seemed amazing. The ease with which we were able to create the program and churn through decades of NASA data (after buying a 1MB memory box for the mainframe) was wondrous ;)

Today, there's not so much wonder from such a feat. The evolution of computing has drastically affected the way in which we apply mathematics and statistics today. Several of the comments to these posts argue both sides of the statement that anyone doing statistics today should be a programmer, or shouldn't. It's an interesting argument, that I've also seen reflected in chemistry, as fewer technicians are used in the lab, and the Ph.D.s work directly with the robots to prepare the samples and interpret the results.

Approximately 15 years ago, I moved from solving scientific and engineering problems directly with statistics, to solving business problems through vendor's software suites. The marketing names for this endeavor have gone through several changes: Decision Support Systems, Very Large Databases, Data Warehousing, Data Marts, Corporate Information Factory, Business Intelligence, and the like. Today, Data Mining, Data Visualization, Sentiment Analysis, "Big Data", SQL Streaming, and similar buzzwords reflect the new "big thing". Software applications, from new as well as established vendors, both open source and proprietary, are coming to the fore to handle these new areas that represent real problems.

So, one question to answer for students, is which, if any, of these software packages will best survive with and aid the growth of, their maturing careers. Will Tableau, LyzaSoft, QlikView or Viney@rd be in a better spot in 20 years, through growth or acquisition, than SAS or IBM/SPSS? Will the open source movement take down the proprietary vendors or be subsumed by them? Is Pentaho/Weka the BI & data mining solution for their career? Maybe, maybe not. But what about that other beast of which everyone speaks? Namely, R, the r-project, the R Statistical Language. What is it? Is it a worthy alternative to SAS or IBM/SPSS or Pentaho/Weka? Or is it a different genus altogether? That's a question I've been seeking to answer for myself, in my own career evolution. After 15 years, software such as SAP/Business Objects and IBM/Cognos, haven't evolved into anything that I like, with their pinnacle of statistical computation being the "average", the arithmetic mean. SAS and IBM/SPSS are certainly better, and with data mining, machine learning and predictives becoming important to business, certainly likely to be a good choice for the future. But are they really powerful enough? Are they flexible enough? Can they be used to solve the next generation of data problems?  They're very likely to evolve into software that can do so.  But how quickly?  And like all vendor software, they have limitations based upon the market studies and business decisions of the corporation.

How is R different?

Well, first, R is a computing language. Unlike SAP/Business Objects, IBM/Cognos, IBM/SPSS, SAS, Pentaho, JasperSoft, SpagoBI, or Oracle, it's not a company, nor a BI Suite, nor even a collection of software applications.  Second, R is an open source project. It's an open source implementation of S. Like C, and the other single-letter-named languages, S came out of Bell Labs, in 1976. The open source implementation, R, comes from R. Ihaka and R. Gentleman, and was first described in 1996 in the article "R: A language for data analysis and graphics", Journal of Computational and Graphical Statistics, 5:299–314; it is often associated with the Department of Statistics, University of Auckland.

While I'm not a software engineer, R is a very compelling statistical tool. As a language, it's very intuitive… for a statistician. It's an interactive, interpretive, functional, object oriented, statistical programming language. R itself is written in R, C, C++ and FORTRAN; it's powerful. As an open source project, it has attracted thousands upon thousands of users who have formed a strong community. There are thousands upon thousands of community contributed packages for R. It's flexible, and growing. One of the main goals of R was data visualization, and it has a wonderful new package for data visualization in ggplot2. It's ahead of the curve. There are packages for  parallel processing (some quite specific), for big data beyond in-memory capacity, for servers, and for embedding in a web site.  Get the idea?  If you think you need something in R, search CRAN, RForge, BioConductor or Omegahat.

As you can tell, I like R. :) However, in all honesty, I don't think that the SAS vs. R controversy is an either/or situation. SAS, IBM/SPSS and Pentaho complement R and vice-versa. Pentaho, IBM/SPSS and some SAS products support R. R can read data from SAS, IBM/SPSS, relational databases, Excel, mapReduce and more. The real question isn't whether one tool is better than another, but rather which is the best tool to answer a particular question.  That being said, I'm looking forward to Dr. Flom's comparison, as well as the continuing discussion on Dr. De Mars' blog.

For us, the question is building a decision support system or stack from open source components. It looks like we'll have a good time doing so.

OSS DSS Studies Introduction

First, let me say that we're talking about systems supporting the decisions that are made by human beings, not "expert systems" that automate decisions.  As an example, let's look at inventory management.  A human might use various components of a DSS to determine the amount of an item in stock, the demand for that item as a trend to determine when it might be out of stock, and predictives as to various factors (internal, external, environmental, political, etc.) that might affect supply, to come to a decision as to how much and when to order more of that item.  An expert system might be created that could also determine when and how much of an item to order, using neural networks, Bayesian nets or other algorithms.  The expert system might even draw from the same DSS components (or directly from their underlying data) as the human might.  One could even run the expert system in parallel with humans making the decisions, scoring or otherwise evaluating the two, until the expert system is comparable to or better than the human.  But we're not really interested in expert systems in this study guide.  We'll be focusing on systems that help humans to make better decisions, not on automated feedback and control loops.

To me, a technology doesn't matter very much if it's not supporting some process, or a step within a process.  That process may be for personal reasons or supporting work activities. For this study guide, let's begin by continuing the discussion that we began in the previous posts, about the process by which one makes a decision, the steps, the events, the triggers and the consequences of making a decision.

I have my own process in making decisions.  I've played in executive and management roles for many years, and have been responsible for 5 P/L centers.  But this is a study guide, and while I intend to offer my own opinions and interpretations, we need some objective sources to study.  Let's start with a Google search.  Of course, Wikipedia has an article.  A site of which I've not heard before has the first hit with their article on problem-solving and decision-making.  Science Daily has a timely article from 2010 March 13 on how we really make decisions, our brain activity during decision making.  I also like the map from The Institute for Strategic Clarity.  Mindtools sets out a list of techniques and tools for aiding in the decision making process, and provides an important caveat "Do remember, though, that the tools in this chapter exist only to assist your intelligence and common sense. These are your most important assets in good Decision Making".  Reading through various reviews, the one book on decision making that I want to add to my library is The Managerial Decision-Making Process, 5th ed. by E. Frank Harrison.  From the Glossary of Political Economy Terms, we have:


Where formal organizations are the setting in which decisions are made, the particular decisions or policies chosen by decision-makers can often be explained through reference to the organization's particular structure and procedural rules. Such explanations typically involve looking at the distribution of responsibilities among organizational sub-units, the activities of committees and ad hoc coordinating groups, meeting schedules, rules of order etc. The notion of fixed-in-advance standard operating procedures (SOPs) typically plays an important role in such explanations of individual decisions made. -- Organizational process models of decision-making


Let's revisit and expand upon the summary that we gave in the third post in this series.

  1. As an individual faced with making a decision, I may want input from others, I may want consensus, but in the end, it is an individual decision, and I will bear the fruits of having made that decision.
  2. I need to put the problem, and my decision making, into context.  I have a variety of resources at my disposal to do so:

    • historical data
    • current information
    • structured data from transactional systems, master data, metadata, data warehouse, and other possible sources
    • unstructured data from blogs, wikis, Zotero libraries, Evernote, searches, bookmarks and similar sources
    • email
    • non-electronic correspondence, notes and conversations
    • personal experience
    • the experience of others garnered through water cooler and hallway conversations, formal meetings, twitter, phone calls and the like
  3. Now I need to understand all of these facts, opinions and conjecture at my disposal.  Part of this is sifting all of it through my internal filters, using my "gut".  Part is using the various reporting and analytical tools at my disposal, and then filtering those through my gut.  And really, this and the next point will constitute the majority of this OSS DSS Study Guide - the tools we use.
  4. As I contemplate the various decisions that I might make from all of this, I want to understand the consequences of each potential decision: might this decision lead to a better product, more profit, less profit, broader market penetration, higher reliability, or even an alternate universe.
  5. As I make this decision, I'll want to collaborate with others.  Ideally, I'll want to collaborate within the context of my decision support system. Once upon a time we would do this by embedding the tools within a portal system; now we take a more master data management approach, and use a services oriented architecture with either web services description language (WSDL) or representational state transfer (ReST) application programming interfaces (APIs) to the collaborative environment, usually a wiki.

In summary, this introduction has set up a framework for a decision-making process in which an individual uses a decision support system.  The majority of this study guide will be to explore the actual decision support system, and the open source tools from which we can build such a system.


At the beginning, The Open Source Solutions Blog was a companion to the Open Source Solutions for Business Intelligence Research Project, and book. But back in 2005, we couldn't find a publisher. As Apache Hadoop and its family of open source projects proliferated, and in many ways, took over the OSS data management and analytics world, our interests became more focused on streaming data management and analytics for IoT, the architecture for people, processes and technology required to bring value from the IoT through Sensor Analytics Ecosystems, and the maturity model organizations will need to follow to achieve SAEIoT success. OSS is very important in this world too, for DMA, API and community development.
