:high_brightness: Android SDK to use the IBM Watson services.

IBM Watson Developer Cloud Android SDK

Android client library to assist with using the Watson services, a collection of REST APIs and SDKs that use cognitive computing to solve complex problems.

Download the aar here.

The minimum supported Android API level is 19. Now, you are ready to see some examples.
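As an alternative to downloading the aar manually, the SDK can be pulled in through Gradle. The snippet below is a sketch: the artifact coordinates and version placeholder are assumptions, so verify them against the project's latest release before copying.

```groovy
// Module-level build.gradle; coordinates are illustrative, and <version>
// must be replaced with the latest published release.
dependencies {
    implementation 'com.ibm.watson.developer_cloud:android-sdk:<version>'
}
```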


The examples below assume that you already have service credentials. If not, you will have to create a service in IBM Cloud.

Service Credentials

Getting the Credentials

  1. Sign up for an IBM Cloud account.
  2. Create an instance of the Watson service you want to use and get your credentials:
    • Go to the IBM Cloud catalog page and select the service you want.
    • Log in to your IBM Cloud account.
    • Click Create.
    • Click Show to view the service credentials.
    • Copy the apikey value, or copy the username and password
      values if your service instance doesn't provide an apikey.
    • Copy the url value.

Adding the Credentials

Once you've followed the instructions above to get credentials, add them to your app's credentials file so that the services can authenticate.
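For example, a simple properties file could hold the values you copied. The file name and key names below are hypothetical; use whatever naming your app's loading code expects.

```
# credentials.properties (hypothetical file name and keys)
speech_to_text_apikey=YOUR_APIKEY
speech_to_text_url=YOUR_SERVICE_URL
```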



If you are having difficulties using the APIs or have a question about the IBM Watson Services, please ask a question on dW Answers or Stack Overflow.

You can also check out the wiki for some additional information.


This SDK is built for use with the Watson Java SDK.

The examples below are specific for Android as they use the Microphone and Speaker; for actual services refer to the Java SDK. You can use the provided example app as a model for your own Android app using Watson services.


MicrophoneHelper

Provides simple microphone access within an activity.

MicrophoneHelper microphoneHelper = new MicrophoneHelper(this);

The MicrophoneHelper object allows you to create new MicrophoneInputStream objects and close them. The MicrophoneInputStream class is a convenience class for creating an InputStream from the device microphone. You can record raw PCM data or data encoded using the ogg codec.
// record PCM data without encoding
MicrophoneInputStream myInputStream = microphoneHelper.getInputStream(false);

// record PCM data and encode it with the ogg codec
MicrophoneInputStream myOggStream = microphoneHelper.getInputStream(true);
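A MicrophoneInputStream is a regular java.io.InputStream, so you can also capture the audio yourself instead of handing the stream to a service. The sketch below shows the standard chunked-read pattern; it uses a ByteArrayInputStream as a stand-in for the microphone so it runs anywhere, and the class name and buffer size are illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamDrain {

    // Read an InputStream to completion in fixed-size chunks -- the same
    // pattern you would use to capture audio from a MicrophoneInputStream.
    static byte[] drain(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] fakePcm = new byte[10000]; // stand-in for raw PCM from the mic
        byte[] captured = drain(new ByteArrayInputStream(fakePcm));
        System.out.println(captured.length); // prints 10000
    }
}
```

With a real MicrophoneInputStream, remember to close the stream when recording is finished.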

An example using a Watson Developer Cloud service would look like:

speechService.recognizeUsingWebSocket(new MicrophoneInputStream(),
    getRecognizeOptions(), new BaseRecognizeCallback() {
  @Override
  public void onTranscription(SpeechResults speechResults) {
    String text = speechResults.getResults().get(0).getAlternatives().get(0).getTranscript();
  }

  @Override
  public void onError(Exception e) { }

  @Override
  public void onDisconnected() { }
});
Be sure to take a look at the example app to get a working example of putting these all together.


StreamPlayer

Provides the ability to directly play an InputStream. Note: The InputStream must come from a PCM audio source, such as a WAV file or Audio/L16 data.
StreamPlayer player = new StreamPlayer();

Since this SDK is intended to be used with the Watson APIs, a typical use case for the StreamPlayer class is playing the output of a Watson Text to Speech call. In that case, you can specify the type of audio file you'd like to receive from the service to ensure it will be output properly by your Android device.
SynthesizeOptions synthesizeOptions = new SynthesizeOptions.Builder()
  .text("I love making Android apps")
  .accept(SynthesizeOptions.Accept.AUDIO_WAV) // specifying that we want a WAV file
  .build();
InputStream streamResult = textService.synthesize(synthesizeOptions).execute();

StreamPlayer player = new StreamPlayer();
player.playStream(streamResult); // should work like a charm

Another content type that works with the Text to Speech APIs is Audio/L16. For this you need to specify the sample rate, which you can do with the alternate version of the playStream method. The default sample rate used by the single-argument version is 22050 Hz.
SynthesizeOptions synthesizeOptions = new SynthesizeOptions.Builder()
  .text("I love making Android apps")
  .accept("audio/l16;rate=8000") // specifying our content type and sample rate
  .build();
InputStream streamResult = textService.synthesize(synthesizeOptions).execute();

StreamPlayer player = new StreamPlayer();
player.playStream(streamResult, 8000); // passing in the sample rate
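Because Audio/L16 is uncompressed 16-bit linear PCM, the sample rate you request fixes the byte rate the player has to consume. The quick arithmetic below (assuming mono audio; class and method names are illustrative) shows why an 8000 Hz stream is considerably lighter than the 22050 Hz default:

```java
public class L16Rate {

    // Bytes per second for mono Audio/L16: 2 bytes per 16-bit sample.
    static int bytesPerSecond(int sampleRate) {
        return sampleRate * 2;
    }

    public static void main(String[] args) {
        System.out.println(bytesPerSecond(8000));  // prints 16000
        System.out.println(bytesPerSecond(22050)); // prints 44100
    }
}
```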


CameraHelper

Provides simple camera access within an activity.

CameraHelper cameraHelper = new CameraHelper(this);

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
  super.onActivityResult(requestCode, resultCode, data);

  if (requestCode == CameraHelper.REQUEST_IMAGE_CAPTURE) {
    System.out.println(cameraHelper.getFile(resultCode));
  }
}

GalleryHelper

Like the CameraHelper, but allows for selection of images already on the device.

To open the gallery:

GalleryHelper galleryHelper = new GalleryHelper(this);

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
  super.onActivityResult(requestCode, resultCode, data);

  if (requestCode == GalleryHelper.PICK_IMAGE_REQUEST) {
    System.out.println(galleryHelper.getFile(resultCode, data));
  }
}

Testing

Testing in this SDK is accomplished with Espresso.

To run the tests in Android Studio:

Within the example package, right-click the androidTest/java folder and click Run 'All Tests'.

Build + Test

To build and test the project, use Gradle (version 1.x):

  $ cd android-sdk
  $ gradle test # run tests

Open Source @ IBM

Find more open source projects on the IBM GitHub page.


License

This library is licensed under Apache 2.0. The full license text is available in LICENSE.


