How to choose the right tool in IBM Watson Studio

Your choice of tool depends on the type of data you need to work with, the types of tasks you need to do, and how much automation you want.

Types of data:

  • Tabular data in delimited files or relational data in remote data sources
  • Image files
  • Textual data in documents

Types of tasks:

  • Prepare data: cleanse, shape, visualize, organize, and validate data.
  • Analyze data: identify patterns and relationships in data, and display insights.
  • Build models: build, train, test, and deploy models to classify data, make predictions, or optimize decisions.

Levels of automation:

  • Code editor tools: Use the Jupyter notebook editor or the RStudio IDE to write code that works with any type of data and does any type of task.
  • Graphical canvas tools: Use menus and drag-and-drop controls to program visually. Build dashboards to analyze data, or build multi-step flows to prepare data, analyze data, or build models.
  • Automatic builder tools: Build and train models with very limited user input.

Tools for tabular or relational data

Tools for tabular or relational data by task:

Tools for textual data

Tools for building a model that classifies textual data:

Tools for image data

Tools for building a model that classifies images:

Jupyter notebook editor

Use the Jupyter notebook editor to create a notebook in which you run code to prepare, visualize, and analyze data, or build and train a model.

  • Write code in Python, R, or Scala
  • Include rich text and media with your code
  • Work with any kind of data in any way you want
  • Use preinstalled open source and IBM libraries and packages, or install additional ones
  • Schedule runs of your code
  • Import a notebook from a file, a URL, or the Community
  • Share read-only copies of your notebook externally
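
For example, here is a minimal sketch of the kind of Python you might run in a notebook; the file name sales.csv and its columns are hypothetical, and pandas and matplotlib are among the preinstalled open source libraries:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Load a hypothetical tabular data asset
    df = pd.read_csv("sales.csv")

    # Simple preparation steps: drop missing rows, then profile the numeric columns
    df = df.dropna()
    print(df.describe())

    # Visualize an insight: total revenue by region
    df.groupby("region")["revenue"].sum().plot(kind="bar")
    plt.title("Revenue by region")
    plt.show()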

Data Refinery

Use Data Refinery to prepare and visualize tabular data with a graphical flow editor. You create and then run a Data Refinery flow as a set of ordered operations on data.

  • Cleanse, shape, and organize data with more than 60 operations
  • Save refined data as a new data set or update the original data
  • Annotate data with crowd annotation platforms
  • Profile data to validate it
  • Write R scripts to manipulate data
  • Schedule recurring operations on data

Streams flow editor

Use the streams flow editor to access and analyze streaming data. You can create a streams flow with a wizard or with a flow editor on a graphical canvas.

  • Ingest streaming data
  • Aggregate, filter, and process streaming data
  • Process streaming data for a model
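
The streams flow editor itself is graphical, but the following plain-Python sketch, using simulated sensor readings (an assumption for illustration only), shows the kind of windowed aggregation and filtering logic a streams flow expresses:

    import random
    import statistics
    from collections import deque

    def sensor_stream(n=50):
        """Simulate an incoming stream of temperature readings."""
        for tick in range(n):
            yield {"sensor": "s1", "tick": tick, "temp": 20 + random.random() * 15}

    window = deque(maxlen=10)              # sliding window for aggregation
    for reading in sensor_stream():
        window.append(reading["temp"])
        avg = statistics.mean(window)      # aggregate over the window
        if avg > 30:                       # filter: react only to hot windows
            print(f"tick {reading['tick']}: rolling average {avg:.1f} C")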

Dashboard editor

Use the Dashboard editor to create a set of visualizations of analytical results on a graphical canvas.

  • Create graphs without coding
  • Include text, media, web pages, images, and shapes in your dashboard
  • Share interactive dashboards externally

SPSS Modeler

Use SPSS Modeler to create a flow to prepare data and build and train a model with a flow editor on a graphical canvas.

  • Use automatic data preparation functions
  • Write SQL statements to manipulate data
  • Cleanse, shape, sample, sort, and derive data
  • Visualize data with over 40 graphs
  • Identify the natural language of a text field
  • Build predictive models
  • Choose from over 40 modeling algorithms
  • Use automatic modeling functions
  • Model time series or geospatial data
  • Classify textual data
  • Identify relationships between the concepts in textual data

Spark MLlib modeler

Use the Spark MLlib modeler to create a flow that prepares relational data and builds and trains a model, with a flow editor on a graphical canvas.

  • Transform data with SQL statements
  • Build predictive or classification models
  • Choose from 10 Spark MLlib modeling algorithms
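
As a rough illustration, the following PySpark sketch builds the kind of Spark MLlib pipeline such a flow represents; the file churn.csv, its column names, and the choice of logistic regression are assumptions:

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("mllib-flow-sketch").getOrCreate()

    # Load a hypothetical relational data set with a binary "label" column
    df = spark.read.csv("churn.csv", header=True, inferSchema=True)

    # Assemble feature columns into a single vector, then fit a classifier
    assembler = VectorAssembler(inputCols=["age", "tenure", "monthly_charges"],
                                outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")
    pipeline = Pipeline(stages=[assembler, lr])

    train, test = df.randomSplit([0.8, 0.2], seed=42)
    model = pipeline.fit(train)
    model.transform(test).select("label", "prediction").show(5)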

Neural network modeler

Use the Neural Network Modeler to design a neural network for text and image data with a flow editor on a graphical canvas.

  • Create a deep learning flow to design and run experiments without coding
  • Tune many hyperparameters
  • Standardize the components of a deep learning experiment for easier collaboration
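
The modeler designs the network graphically, but the result corresponds to a network definition like this Keras sketch; the layer sizes, the 28x28 grayscale input, and the ten output classes are illustrative assumptions:

    from tensorflow import keras
    from tensorflow.keras import layers

    # A small convolutional image classifier, of the kind a flow might describe
    model = keras.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),   # one output per class
    ])

    # The optimizer and learning rate are typical hyperparameters to tune
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()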

AutoAI tool

Use the AutoAI tool to automatically analyze your tabular data and generate candidate model pipelines customized for your predictive modeling problem.

  • Train a binary classification, multiclass classification, or regression model
  • View a tree infographic that shows the sequences of AutoAI training stages
  • Generate a leaderboard of model pipelines ranked by cross-validation scores
  • Save a pipeline as a model
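
AutoAI generates its pipelines automatically, so the following scikit-learn sketch is only an analogy, not the AutoAI API: it shows roughly what one candidate pipeline contains (imputation, scaling, and an estimator) and the kind of cross-validation score used to rank pipelines on the leaderboard:

    from sklearn.datasets import load_breast_cancer
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)  # stand-in tabular data set

    # One candidate pipeline: imputation + scaling + estimator
    candidate = Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
        ("estimator", LogisticRegression(max_iter=1000)),
    ])

    # Pipelines are ranked on the leaderboard by scores like this one
    scores = cross_val_score(candidate, X, y, cv=5, scoring="roc_auc")
    print("Mean cross-validation ROC AUC:", scores.mean())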

Synthesized Neural Network tool

Use the Synthesized Neural Network tool to fully automate the synthesis and training of a neural network with your image or text training data.

  • Create a deep learning flow to design and run experiments
  • Use built-in training data
  • Automatically test a series of algorithm and optimization options
  • Track, audit, and tune the model in production on a Watson OpenScale dashboard

Experiment builder

Use the Experiment builder to build deep learning experiments that run hundreds of training runs. You provide the code that defines each training run, then run, track, store, and compare the results in the Experiment builder's graphical interface and save the best configuration as a model.

  • Write Python code to specify metrics for training runs
  • Write a training definition in Python code
  • Define hyperparameter values yourself, or optimize them with the RBFOpt method or random search
  • Find the optimal values for large numbers of hyperparameters by running hundreds or thousands of training runs
  • Run distributed training on GPUs and other specialized, high-performance hardware and infrastructure
  • Compare the performance of training runs
  • Save a training run as a model
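
A training definition is ordinary Python. This sketch assumes Keras, a LEARNING_RATE value passed in as an environment variable, and a RESULT_DIR environment variable for writing metrics; treat the variable names as placeholders rather than the service's exact contract:

    import json
    import os
    from tensorflow import keras

    # Hyperparameter and output location supplied to the training run
    learning_rate = float(os.environ.get("LEARNING_RATE", "0.001"))
    result_dir = os.environ.get("RESULT_DIR", ".")

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_split=0.1)

    # Write the metric that the experiment compares across training runs
    _, accuracy = model.evaluate(x_test, y_test)
    with open(os.path.join(result_dir, "metrics.json"), "w") as f:
        json.dump({"accuracy": float(accuracy)}, f)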

Visual Recognition modeler

Use the Visual Recognition modeler to automatically train a model to classify images for scenes, objects, faces, and other content.

  • Collaborate to classify images
  • Use one of five built-in models
  • Test the model with sample images
  • Use Core ML to develop iOS apps
  • Provide as few as 10 images per class
  • Add or remove images to retrain the model
  • Use Watson Visual Recognition APIs in applications
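
For the last point, here is a hedged sketch of calling the service with the ibm-watson Python SDK; the API key, service URL, classifier ID, and image file are placeholders:

    import json
    from ibm_watson import VisualRecognitionV3
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Placeholder credentials for the Visual Recognition service instance
    authenticator = IAMAuthenticator("YOUR_API_KEY")
    visual_recognition = VisualRecognitionV3(version="2018-03-19",
                                             authenticator=authenticator)
    visual_recognition.set_service_url("YOUR_SERVICE_URL")

    # Classify an image with a custom classifier trained in the modeler
    with open("example.jpg", "rb") as images_file:
        result = visual_recognition.classify(
            images_file=images_file,
            classifier_ids=["YOUR_CLASSIFIER_ID"],
            threshold=0.6,
        ).get_result()

    print(json.dumps(result, indent=2))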

Natural Language Classifier modeler

Use the Natural Language Classifier modeler to automatically train a model to classify text according to classes you define.

  • Provide as few as 3 text samples per class
  • Collaborate to classify text samples
  • Test the model with sample text
  • Add or remove test data to retrain the model
  • Classify text in eight languages other than English
  • Use Watson Natural Language Classifier APIs in applications
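
Similarly, a hedged sketch of calling a trained classifier with the ibm-watson Python SDK; the API key, service URL, and classifier ID are placeholders:

    import json
    from ibm_watson import NaturalLanguageClassifierV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Placeholder credentials for the Natural Language Classifier instance
    authenticator = IAMAuthenticator("YOUR_API_KEY")
    classifier = NaturalLanguageClassifierV1(authenticator=authenticator)
    classifier.set_service_url("YOUR_SERVICE_URL")

    # Classify a piece of text with a classifier trained in the modeler
    result = classifier.classify(
        classifier_id="YOUR_CLASSIFIER_ID",
        text="How hot will it be tomorrow?",
    ).get_result()

    print(json.dumps(result, indent=2))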

RStudio IDE

Use the RStudio IDE to analyze data or create Shiny applications by writing R code.

  • Write code in R
  • Create Shiny apps
  • Use open source libraries and packages
  • Include rich text and media with your code
  • Prepare data
  • Visualize data
  • Discover insights from data
  • Build and train a model using open source libraries


Inge Halilovic

I’m a content strategist at IBM. I architect the documentation for Cloud Pak for Data as a Service.