The first thing to do when setting up your server with open source solutions [OSS] for a decision support system [DSS] is to check all the dependencies and system requirements for the software that you're installing.
Generally, in our case, once you make sure that your software will work on the version of your operating system that you're running, the major dependency is Java. Some of the software that we're running may have trouble with OpenJDK, and some requires the Java software development kit [JDK or Java SDK], not just the runtime environment [JRE]. For example, Hadoop 0.20.2 may have problems with OpenJDK, and versions of LucidDB before 0.9.3 required the JDK. Once upon a time, two famous database companies would each issue system patches that were required for their RDBMS to run, but that would break the other's, forcing customers to have only one system on a host. A true pain for development environments.
Since I don't know when you'll be reading this, or if you're planning to use different software than I'm using, I'm just going to suggest that you check very carefully that the system requirements and software dependencies are fulfilled by your server.
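As a first sanity check on a Linux host, a few commands will tell you what you're working with. This is a minimal sketch; the release file path varies by distribution:

java -version 2>&1 | head -n 1      # which Java is on the PATH, and is it Sun/Oracle or OpenJDK?
uname -mrs                          # kernel version and architecture
cat /etc/redhat-release             # distribution release, on Red Hat derived systems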
Now that we're sure that the *Nix or Microsoft operating system that we're using will support the software that we're using, the next step is to set up a system user for each software package. Here are examples for the *Nix operating systems: Linux (kernel 2.x derived) and the BSD-derived Mac OS X. I've tested this on Red Hat Enterprise Linux 5, OpenSUSE 11, and Mac OS X 10.5 [Leopard] and 10.6 [Snow Leopard].
On Linux, at the command line interface [CLI]:
useradd -c "name your software Server" -s /bin/bash -mr USERNAME
-c COMMENT: the comment field, used for the user's full name
-s SHELL: defines the login shell
-m: create the user's home directory
-r: create a system account
Likely, you will need to run this command through sudo, and may need the full path to useradd.
Then change the password:
sudo passwd USERNAME
Here's one example, setting up the Pentaho system user.
poc@elf:~> sudo /usr/sbin/useradd -c "Pentaho BI Server" -s /bin/bash -mr pentaho
poc@elf:~> sudo passwd pentaho
Changing password for pentaho.
Reenter New Password:
On the Mac, do the following:
vate:~ poc$ sudo dscl /Local/Default -create /Users/_pentaho RealName "PentahoCE BI Server"
vate:~ poc$ sudo dscl /Local/Default -create /Users/_pentaho UserShell /bin/bash
vate:~ poc$ sudo passwd _pentaho
Changing password for _pentaho.
Reenter New Password:
On Windows, you'll want to set up your server software as a service after the installation.
If you haven't already done so, you'll want to download the software that you want to use from the appropriate place. In many cases this will be SourceForge. Alternate sources might be the Enterprise Editions from Pentaho, the DynamoBI downloads for LucidDB, and the sites for SQLstream, SpagoWorld, The R Project, Hadoop, and many more.
Installing this software is no different than installing any other software on your particular operating system:
One thing to note is that most of the software that you'll use for an OSS DSS runs on Java, and that the latest Pentaho release includes the latest Java distribution; most other software doesn't. Depending on your platform and the supporting software that you have installed, you may wish to point [softwareNAME]_JAVA_HOME to the Pentaho Java installation, especially if the version of Java included with Pentaho meets the system requirements of the other software that you want to use and you don't have any other compatible Java on your system.
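On Linux, that amounts to something like the following. This is a sketch only: the path to Pentaho's bundled Java is an assumption, and it varies by Pentaho release.

export PENTAHO_JAVA_HOME=/home/pentaho/biserver-ce/java    # hypothetical location of Pentaho's bundled Java
export JAVA_HOME="$PENTAHO_JAVA_HOME"
export PATH="$JAVA_HOME/bin:$PATH"
java -version                                              # confirm which Java other packages will now find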
For both security and to avoid any confusion, you might want to change the ports used by the software you installed from their defaults.
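For example, the community edition of the Pentaho BI Server runs on Tomcat, which listens on port 8080 by default. Here's a hedged sketch of moving it; the install path is hypothetical, and GNU sed is assumed [on Mac OS X, use sed -i '']:

cd /home/pentaho/biserver-ce/tomcat/conf
cp server.xml server.xml.orig                              # keep a backup of the default configuration
sed -i 's/port="8080"/port="18080"/' server.xml            # move the HTTP connector to 18080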
You may need to change other configuration files from their defaults for various reasons as well, though I generally find the defaults to be satisfactory. You may need to install other software from one package into another package, for compatibility or interchange. For example, if you're trying out, or if you've purchased, Pentaho Enterprise Edition with Hadoop, Pentaho provides Java libraries [JAR files] and licenses to install on each Hadoop node, including code that Pentaho has contributed to the Hadoop project.
Also remember that Hadoop is a top-level Apache project, and not usable software in and of itself. It contains subprojects that make it useful, such as Hadoop Common, the Hadoop Distributed File System [HDFS] and the MapReduce framework.
You may also want one or more of the other Apache projects related to Hadoop, such as HBase, Hive, Pig or ZooKeeper.
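Once Hadoop is installed, a quick way to exercise HDFS and MapReduce together is to load a file into HDFS and run the bundled wordcount example. This is a minimal sketch, assuming Hadoop's bin directory is on your PATH; the examples JAR name varies by release:

hadoop fs -mkdir /user/pentaho/input                       # create a directory in HDFS
hadoop fs -put access.log /user/pentaho/input/             # copy a local file into HDFS
hadoop jar hadoop-*-examples.jar wordcount /user/pentaho/input /user/pentaho/output
hadoop fs -cat /user/pentaho/output/part-* | head          # peek at the results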
Zhou Yu organized a great special event for the San Francisco Bay Area useR group, and has asked me to post the slide decks for download. Here they are:
Among them is the very interesting presentation by Yasemin Atalay, showing plotting analysis using the Windermere Humic Aqueous Model for river water environmental factors, first without R, and then the increase in variety and accuracy of analysis and plotting gained by using R.
Recently, for a prospective customer, I created a list of some search terms to provide them with some "late night" reading on data management & analytics. I've tried these terms out on Google, and as suspected, for most of them the first hit is Wikipedia. While most articles in Wikipedia need to be taken with a grain of salt, they will give you a good overview. [By the way, I use the "Talk" page on an article, with its discussions and arguments about the article's content, as an indicator of how big a grain of salt is needed for that article.] So plug these into your favorite search engine, and happy reading.
Often, Data Mining, Machine Learning and Predictives are used interchangeably. This isn't really correct, as you can see from the following five search terms…
Let me know if I missed your favorite search term for data management & analytics.
Over the past few months, a debate has been proceeding on whether or not a new discipline, a new career path, is emerging from the tsunami of data bearing down on us. The need for a new type of Renaissance [Wo]Man to deal with the Big Data onslaught. To wit, Data Science.
I'm writing about this now because last night, at an every-three-week get-together devoted to cask beer and data analysis, the topic came up. [Yes, every THREE weeks; a month is too long to go without cask-beer-fueled discussions of Rstats, BigData, Streaming SQL, BI and more.] The statisticians in the group, including myself, strongly disagreed with the way the term is being used; the software/database types were either in favor or ambivalent. We all agreed that a new, interdisciplinary approach to Big Data is needed. Oh, and I'll stay on topic here, and not get into another debate as to the definition of "Big Data".
This lively conversation reinforced the desire to write about Data Science that had swelled up in me after reading "What is Data Science?" by Mike Loukides, published on O'Reilly Radar, and a subsequent discussion on Twitter held the following weekend, concerning data analytics.
The term "Data Science" isn't new, but it is taking on new meanings. The Journal of Data Science published JDS volume 1, issue 1 in January of 2003. The Scope of the JDS is very clearly related to applied statistics
By "Data Science", we mean almost everything that has something to do with data: Collecting, analyzing, modeling...... yet the most important part is its applications --- all sorts of applications. This journal is devoted to applications of statistical methods at large.
-- About JDS, Scope, First Paragraph
There is also the CODATA Data Science Journal, which appears to have last been updated in August of 2007, and currently has no content, other than its self-description as
The Data Science Journal is a peer-reviewed electronic journal publishing papers on the management of data and databases in Science and Technology.
I think that two definitions can be derived from these two journals.
I can agree with the first, especially with the JDS Scope clearly stating that Data Science is applied statistics.
The New Oxford American Dictionary, on which the Apple Dictionary program is based, defines science as a noun:
the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observations and experiments.
And a similar definition of science can be found on Dictionary.com.
In many ways, I like Mike Loukides' article "What is Data Science?" for how it highlights the need for this new discipline. I just don't like what he describes as the new definition of "data science". Indeed, I very much disagree with this statement from the article:
Using data effectively requires something different from traditional statistics, where actuaries in business suits perform arcane but fairly well-defined kinds of analysis. What differentiates data science from statistics is that data science is a holistic approach. We're increasingly finding data in the wild, and data scientists are involved with gathering data, massaging it into a tractable form, making it tell its story, and presenting that story to others.
A statistician is not an actuary. They're very different roles. I know this because I worked for over a decade applying statistics to determine the reliability and risk associated with very large, complex systems such as rockets and space-borne astrophysics observatories. I once hired a Cal student as an intern because she feared that the only career open to her as a math major was to be an actuary. I showed her a different path. So, yes, I know, from experience, that a statistician is not an actuary. Actually, the definition given for a data scientist, that is, "gathering data, massaging it into a tractable form, making it tell its story, and presenting that story to others", is exactly what a statistician does.
I do however see the need for a new discipline, separate from applied statistics, which is to say from data science. The massive amount of data to come from an instrumented world with strongly interconnected people and machines, and real-time analysis, inference and prediction from those data, will require inter-disciplinary skills. But I see those skills coming together in a person who is more of a smith, or, as Julian Hyde put it last night, an artisan. Falling back on the old dictionary again, a smith is someone who is skilled in creating something with a specific material; an artisan is someone who is skilled in a craft, making things by hand.
Another reason that I don't like the term "data science" for this interdisciplinary role stems from what Mike Loukides describes in his article "What is Data Science?" as the definition for this new discipline: "Data science requires skills ranging from traditional computer science to mathematics to art". I agree that the new discipline requires these three things, and more, even softer skills. I disagree that these add up to data science.
I even prefer "data geek", as defined by Michael E. Driscoll in "The Three Sexy Skills of Data Geeks". Michael Driscoll's post of 2009 May 27 certainly agrees skill-wise with Mike Loukides post of 2010 June 02.
And I very much prefer "Data Munging" to "Computer Science" as one of the three skills.
I'll stick to the definition that I gave above for data science as "systematic study, through observation and experiment, of the collection, modeling, analysis, visualization, dissemination, and application of data". This is also applied statistics. So, what else is needed for this new discipline? Well, Mike and Michael are correct: computer skills, especially data munging, and art. Any statistician today has computer skills, generally in one or more of SAS, SPSS, R, S-PLUS, Python, SQL, Stata, MATLAB and other software packages, as well as familiarity with various data storage & management methods. Some statisticians are even artists, perhaps as story tellers, as evidenced by that rare great teacher or convincing expert witness, perhaps as visualizers, creating statistically accurate animations to clearly describe the analysis, as evidenced by the career of that intern I hired so many years ago.
The data smith, the data artisan, must be comfortable with all forms of data:
Just as any other smith, someone following this new discipline might serve an apprenticeship creating new things from these forms of data such as a data warehouse or an OLAP cube, a sentiment analysis or a streaming SQL sensor web, or a recommendation engine or complex system predictives. The data smith must become very comfortable with putting all forms of data together in new ways, to come to new conclusions.
Just as a goldsmith will never make a piece of jewelry identical to the one finished days before, just as art can be forged but not duplicated, the data smith, the data artisan will glean new inferences every time they look at the data, will make new predictions with every new datum, and the story they tell, the picture they paint, will be different each time.
And perhaps then, the data smith becomes a master, an artisan.
It's been longer than intended, but we finally have the technology, time and resources to continue with our Open Source Solutions Decision Support System Study Guide (OSS DSS SG).
First, I want to thank SQLstream for allowing us to use SQLstream as a part of our solution. As mentioned in our "First DSS Study Guide" post, we were hoping to add a real-time component to our DSS. SQLstream is not open source, and not readily available for download. It is, however, a co-founder of and core contributor to the open source Eigenbase Project, and has incorporated Eigenbase technology into its product. So, what is SQLstream? To quote their web site, "SQLstream enables executives to make strategic decisions based on current data, in flight, from multiple, diverse sources". And that is why we are so interested in having SQLstream as a part of our DSS technology stack: to have the capability to capture and manipulate data as it is being generated.
Today, there are two very important classes of technologies that should belong to any DSS: data warehousing (DW) and business intelligence (BI). What actually comprises these technologies is still a matter of debate. To me, they are quite interrelated and provide the following capabilities.
For our purposes, an important question is whether or not there are open source, or at least open source based, solutions for all of these capabilities. The answer is yes. As a matter of fact, there are three complete open source BI Suites [there were four, but the first, the Perl-based Bee Project from the Czech Republic, is no longer being updated]. Here's a brief overview of SpagoBI, JasperSoft, and Pentaho.
We'll be using Pentaho, but you can use any of these, or any combination of the OSS projects that are used by these BI Suites, or pick and choose from the more than 60 projects in our OSS Linkblog, as shown in the sidebar to this blog. All of the OSS BI Suites have many more features than shown in the simple table above. For example, SpagoBI has good tools for geographic & location services. Also, JasperSoft Professional and Enterprise Editions have many more features than their Community Edition, such as Ad Hoc Reporting and Dashboards. Pentaho's Enterprise Edition has a different Analyzer than either jPivot or PAT: Pentaho Analyzer, based upon the SaaS ClearView from the now-defunct LucidEra, as well as ease-of-use tools such as an OLAP schema designer, and enterprise-class security and administration tools.
Data warehousing using general purpose RDBMSs such as Oracle, EnterpriseDB, PostgreSQL or MySQL is gradually giving way to analytic database management systems (ADBMS), to the catch-all NoSQL data storage systems we mentioned above, or even to hybrid systems. For example, Oracle recently introduced hybrid column-row store features, and Aster Data has a column-store, massively parallel processing (MPP) DBMS and MapReduce hybrid. Pentaho supports Hadoop, as well as traditional general purpose RDBMSs and column-store ADBMSs. In the open source world, there are two columnar storage engines for MySQL, Infobright and Calpont InfiniDB, as well as one column-store ADBMS purpose-built for BI, LucidDB. We'll be using LucidDB, and, just for fun, may throw some data into Hadoop.
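To give a flavor of LucidDB, here's a sketch of creating a schema and a table from its bundled sqlline client. The client script name, its location and the connection defaults are assumptions that vary by release, so treat this as illustrative rather than definitive:

./bin/sqllineClient <<'EOF'
-- tables in LucidDB are column-store by default
CREATE SCHEMA dss;
CREATE TABLE dss.sales (
    sale_id    INT,
    sale_date  DATE,
    amount     DECIMAL(10,2)
);
EOF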
In addition, a modern DSS needs two more primary capabilities: predictives, sometimes called predictive intelligence or predictive analytics (PA), which is the ability to go beyond inference and trend analysis to assign a probability, with associated confidence or likelihood, of an event occurring in the future; and full statistical analysis, which includes determining the probability density or distribution function that best describes the data. Of course, there are OSS projects for these as well, such as The R Project, the Apache Commons Math library, and other GNU projects that can be found in our Linkblog.
For statistical analysis and predictives, we'll be using the open source R statistical language and the open standard Predictive Model Markup Language (PMML), both of which are also supported by Pentaho.
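As a taste of how these pieces fit together, here's a minimal sketch that fits a toy model in R and exports it as PMML. It assumes the open source pmml package is installed [install.packages("pmml")]; the dataset, model and file name are purely illustrative:

R --vanilla --quiet <<'EOF'
library(pmml)                               # also attaches the XML package it depends upon
fit <- lm(dist ~ speed, data = cars)        # toy linear model on a built-in dataset
saveXML(pmml(fit), file = "cars-lm.pmml")   # write the model out as PMML
EOF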
We have all of these OSS projects installed on a Red Hat Enterprise Linux machine. The trick will be to get them all working together. The magic will be in modeling and analyzing the data to support good decisions. There are several areas of decision making that we're considering as examples. One is fairly prosaic, one is very interesting and far-reaching, and the others are somewhat in between.
The next step in our open source solutions (OSS) for decision support systems (DSS) study guide (SG), according to the syllabus, is to make our first decision: a formal definition of "Decision Support System". Next, and soon, will be a post listing the technologies that will contribute to our studies.
The first stop in looking for a definition of anything today is Wikipedia. And indeed, Wikipedia does have a nice article on DSS. One of the things that I find most informative about Wikipedia articles is the "Talk" page for an article. The DSS discussion is rather mild though; there's no ongoing debate such as can be found on some other talk pages, for example the discussion about Business Intelligence. The talk pages also change more often, and provide insight into the thoughts that go into the main article.
Once upon a time, we might have gone to a library and thumbed through the card catalog to find some books on Decision Support Systems. A more popular approach today would be to search Amazon for Decision Support books. There are several books in my library that you might find interesting for different reasons:
These books range from being technology focused to being general business books, but they all provide insight into how various components of DSS fit into a business, and different approaches to implementing them. None of them actually provide a complete DSS, and only the first focuses on OSS. If you followed the Amazon search link given previously, you might also have noticed that there are books that show Excel as a DSS, and there is a preponderance of books that focus on the biomedical/pharmaceutical/healthcare industry. Another focus area is in using geographic information systems (actually one of the first uses for multi-dimensional databases) for decision support. There are several books in this search that look good, but haven't made it into my library as yet. I would love to hear your recommendations (perhaps in the comments).
From all of this, and our experiences in implementing various DW, BI and DSS programs, I'm going to give a definition of DSS. From a previous post in this DSS SG, we have the following:
A DSS is a set of processes and technology that help an individual to make a better decision than they could without the DSS.
-- Questions and Commonality
As we stated, this is vague and generic. Now that we've done some reading, let's see if we can do better.
A DSS assists an individual in reaching the best possible conclusion, resolution or course of action in stand-alone, iterative or interdependent situations, by using historical and current structured and unstructured data, collaboration with colleagues, and personal knowledge to predict the outcome or infer the consequences.
I like that definition, but your comments will help to refine it.
Note that we make no mention of specific processes, nor any technology whatsoever. It reflects my bias that decisions are made by individuals, not groups (electoral systems notwithstanding). To be true to our "TeleInterActive Lifestyle", I should point out that the DSS must be available when and where the individual needs to make the decision.
Recently, AnnMaria De Mars, PhD (multiple) and Dr. Peter Flom, PhD have stirred up a bit of a tempest in a tweet-pot, as well as in the statistical blogosphere, with comparisons of R and SAS, IBM/SPSS and the like. I've commented on both of their blogs, but decided to expand a bit here, as the choice of R is something that we planned to cover in a later post to our Open Source Solutions Decision Support Systems Study Guide. First, let me say that Dr. De Mars and Dr. Flom appear to have posted completely independently of each other, and further, that their posts have different goals.
In The Next Big Thing, Dr. De Mars is looking for the next big thing, both to keep her own career on track, and to guide students into areas of study that will survive in the job market in the coming decades. This is always difficult for mentors, as we can't always anticipate the "black swan" events that might change things drastically. The tempestuous nature of her post came from one little sentence:
There are several threads in Dr. De Mars' post. I agree with Dr. De Mars that two of the "next big things" in data management & analysis are data visualization and dealing with unstructured data. I'm of the opinion that there is a third area, related to the "Internet of Things" and the tsunami of data that will be generated by it. These are conceptual areas, however. Dr. De Mars quickly moves on to discussing the tools that might be a part of the solutions to these next big things. The concepts cited are neither software packages nor computing languages. The software packages SAS, IBM/SPSS, Stata, Pentaho and the like, and the computing language S, with its open source distribution R and its proprietary distribution S+, are none of them likely to be the next big thing; they are tools that are currently useful to know.
I find it interesting that both Dr. De Mars and Dr. Flom, as well as the various commenters, tweeters, and other posters, are comparing software suites and applications with a computing language. I think that a bit more historical perspective might be needed in bringing these threads together.
In 1979, when I first sat down with a FORTRAN programmer to turn my Bayesian methodologies into practical applications to determine the reliability and risk associated with the STAR48 kick motor and associated Payload Assist Module (PAM), the statistical libraries for FORTRAN seemed amazing. The ease with which we were able to create the program and churn through decades of NASA data (after buying a 1MB memory box for the mainframe) was wondrous.
Today, there's not so much wonder in such a feat. The evolution of computing has drastically affected the way in which we apply mathematics and statistics today. Several of the comments to these posts argue both sides of the statement that anyone doing statistics today should, or shouldn't, be a programmer. It's an interesting argument that I've also seen reflected in chemistry, as fewer technicians are used in the lab, and the Ph.D.s work directly with the robots to prepare the samples and interpret the results.
Approximately 15 years ago, I moved from solving scientific and engineering problems directly with statistics, to solving business problems through vendors' software suites. The marketing names for this endeavor have gone through several changes: Decision Support Systems, Very Large Databases, Data Warehousing, Data Marts, Corporate Information Factory, Business Intelligence, and the like. Today, Data Mining, Data Visualization, Sentiment Analysis, "Big Data", SQL Streaming, and similar buzzwords reflect the new "big thing". Software applications, from new as well as established vendors, both open source and proprietary, are coming to the fore to handle these new areas that represent real problems.
So, one question to answer for students, is which, if any, of these software packages will best survive with, and aid the growth of, their maturing careers. Will Tableau, LyzaSoft, QlikView or Viney@rd be in a better spot in 20 years, through growth or acquisition, than SAS or IBM/SPSS? Will the open source movement take down the proprietary vendors or be subsumed by them? Is Pentaho/Weka the BI & data mining solution for their career? Maybe, maybe not. But what about that other beast of which everyone speaks? Namely, R, the r-project, the R Statistical Language. What is it? Is it a worthy alternative to SAS or IBM/SPSS or Pentaho/Weka? Or is it a different genus altogether? That's a question I've been seeking to answer for myself, in my own career evolution. After 15 years, software such as SAP/Business Objects and IBM/Cognos hasn't evolved into anything that I like, with their pinnacle of statistical computation being the "average", the arithmetic mean. SAS and IBM/SPSS are certainly better, and with data mining, machine learning and predictives becoming important to business, they're certainly likely to be a good choice for the future. But are they really powerful enough? Are they flexible enough? Can they be used to solve the next generation of data problems? They're very likely to evolve into software that can do so. But how quickly? And like all vendor software, they have limitations based upon the market studies and business decisions of the corporation.
How is R different?
Well, first, R is a computing language. Unlike SAP/Business Objects, IBM/Cognos, IBM/SPSS, SAS, Pentaho, JasperSoft, SpagoBI, or Oracle, it's not a company, nor a BI Suite, nor even a collection of software applications. Second, R is an open source project: an open source implementation of S. Like C and the other single-letter-named languages, S came out of Bell Labs, in 1976. The open source implementation, R, comes from R. Ihaka and R. Gentleman, and was first revealed in 1996 through the article "R: A language for data analysis and graphics", Journal of Computational and Graphical Statistics, 5:299–314; it is often associated with the Department of Statistics, University of Auckland.
While I'm not a software engineer, R is a very compelling statistical tool. As a language, it's very intuitive… for a statistician. It's an interactive, interpretive, functional, object-oriented, statistical programming language. R itself is written in R, C, C++ and FORTRAN; it's powerful. As an open source project, it has attracted thousands upon thousands of users who have formed a strong community. There are thousands upon thousands of community-contributed packages for R. It's flexible, and growing. One of the main goals of R was data visualization, and it has a wonderful new package for data visualization in ggplot2. It's ahead of the curve. There are packages for parallel processing (some quite specific), for big data beyond in-memory capacity, for servers, and for embedding in a web site. Get the idea? If you think you need something in R, search CRAN, RForge, BioConductor or Omegahat.
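For a small taste of the language, here's a sketch that plots a built-in dataset with a fitted smoother and saves it to disk, assuming the ggplot2 package is installed [install.packages("ggplot2")]:

R --vanilla --quiet <<'EOF'
library(ggplot2)
# scatter plot with a loess smoother layered on top
p <- ggplot(cars, aes(speed, dist)) + geom_point() + geom_smooth()
ggsave("cars.png", p, width = 6, height = 4)   # write the plot to a PNG file
EOF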
As you can tell, I like R. However, in all honesty, I don't think that the SAS vs. R controversy is an either/or situation. SAS, IBM/SPSS and Pentaho complement R, and vice-versa. Pentaho, IBM/SPSS and some SAS products support R. R can read data from SAS, IBM/SPSS, relational databases, Excel, mapReduce and more. The real question isn't whether one tool is better than another, but rather which tool best answers a particular question. That being said, I'm looking forward to Dr. Flom's comparison, as well as the continuing discussion on Dr. De Mars' blog.
For us, the question is building a decision support system or stack from open source components. It looks like we'll have a good time doing so.
First, let me say that we're talking about systems supporting the decisions that are made by human beings, not "expert systems" that automate decisions. As an example, let's look at inventory management. A human might use various components of a DSS to determine the amount of an item in stock, the demand for that item as a trend to determine when it might be out of stock, and predictives as to various factors (internal, external, environmental, political, etc.) that might affect supply, to come to a decision as to how much and when to order more of that item. An expert system might be created that could also determine when and how much of an item to order, using neural networks, Bayesian nets or other algorithms. The expert system might even take from the same DSS components (or directly from their underlying data) as the human might. One could even run the expert system in parallel with humans making the decisions, scoring or otherwise evaluating the two, until the expert system performs as well as or better than the humans. But we're not really interested in expert systems in this study guide. We'll be focusing on systems that help humans to make better decisions, not on automated feedback and control loops.
To me, a technology doesn't matter very much if it's not supporting some process, or a step within a process. That process may be for personal reasons or supporting work activities. For this study guide, let's begin by continuing the discussion that we began in the previous posts, about the process by which one makes a decision, the steps, the events, the triggers and the consequences of making a decision.
I have my own process for making decisions. I've played executive and management roles for many years, and have been responsible for five P&L centers. But this is a study guide, and while I intend to offer my own opinions and interpretations, we need some objective sources to study. Let's start with a Google search. Of course, Wikipedia has an article. A site I hadn't heard of before has the first hit, with its article on problem-solving and decision-making. Science Daily has a timely article from 2010 March 13 on how we really make decisions, our brain activity during decision making. I also like the map from The Institute for Strategic Clarity. Mindtools sets out a list of techniques and tools for aiding in the decision-making process, and provides an important caveat: "Do remember, though, that the tools in this chapter exist only to assist your intelligence and common sense. These are your most important assets in good Decision Making". Reading through various reviews, the one book on decision making that I want to add to my library is The Managerial Decision-Making Process, 5th ed., by E. Frank Harrison. From the Glossary of Political Economy Terms, we have:
Where formal organizations are the setting in which decisions are made, the particular decisions or policies chosen by decision-makers can often be explained through reference to the organization's particular structure and procedural rules. Such explanations typically involve looking at the distribution of responsibilities among organizational sub-units, the activities of committees and ad hoc coordinating groups, meeting schedules, rules of order etc. The notion of fixed-in-advance standard operating procedures (SOPs) typically plays an important role in such explanations of individual decisions made. -- Organizational process models of decision-making
Let's revisit and expand upon the summary that we gave in the third post in this series.
I need to put the problem, and my decision making, into context. I have a variety of resources at my disposal to do so:
In summary, this introduction has set up a framework for a decision-making process for an individual to use a decision support system. The majority of this study guide will be to explore the actual decision support system, and the open source tools from which we can build such a system.
As promised, here's the syllabus for our study guide to decision support systems using open source solutions. We'll start with a first draft on 2010-03-23, and update and change based on ideas, comments and lessons learned. So, please comment. The updates will be marked. Deletions will be marked with a strikethrough and not removed.
In the introduction to our open source solutions (OSS) for decision support systems (DSS) study guide (SG), I gave a variety of examples of activities that might be considered using a DSS. I asked some questions as to what common elements exist among these activities that might help us to define a modern platform for DSS, and whether or not we could build such a system using open source solutions.
In this post, let's examine the first of those questions, and see if we can start answering it. In the next post, we will lay out a syllabus of sorts for this OSS DSS SG.
The first common element is that in all cases, we have an individual doing the activity, not a machine nor a committee.
Secondly, the individual has some resources at their disposal. Those resources include current and historical information, structured and unstructured data, communiqués and opinions, and some amount of personal experience, augmented by the experience of others.
Thirdly, though not explicit, there's the idea of digesting these resources and performing formal or informal analyses.
Fourthly, though again not explicit, the concept of trying to predict what might happen next, or as a result of the decision, is inherent in all of the examples.
Finally, there's collaboration involved. Few of us can make good decisions in a vacuum.
Of course, since the examples are fictional, and created by us, they represent our biases. If you had fingered our domain server back in 1993, or read our .project and .plan files from that time, you would have seen that we were interested in sharing information and analyses, while providing a framework for making decisions using such tools as email, gopher and electronic bulletin boards. So, if you identify any other commonalities, or think anything is missing, please join the discussion in the comments.
From these commonalities, can we begin to answer the first question we had asked: "What does this term [DSS] really mean?". Let's try.
A DSS is a set of processes and technology that help an individual to make a better decision than they could without the DSS.
That's nice and vague; generic enough to be almost meaningless, but it provides some key points that will help us to bound the specifics as we go along. For example, if a process or technology doesn't help us to make a better decision, then it doesn't fit. If something allows us to make a better decision, but we can't define the process or identify the technology involved, it doesn't belong (e.g. "my gut tells me so").
Let's create a list from all of the above.
What do you think? Does a modern system to support decisions need to cover all of these elements and no others? Is this list complete and sufficient? The comments are open.