An implementation pattern for updating Android widgets using alarms

I recently added a widget to the Shush! app.  I hadn't implemented a widget before, and I found that there was some complexity involved that I didn't anticipate, so I thought I'd share the approach I ended up taking.

Some quick background - Shush! is a utility app that lets you set a "quiet time" for your ringer, restoring it after the duration you request.  I wanted a widget that would show the time remaining while in "Shush! mode" (among other features).

The widget therefore has two modes:

1) when Shush! mode is not active, the widget is visually static


2) when Shush! mode is active, the widget needs to be updated frequently (once per minute) to count down the remaining time


To implement the countdown behavior, when activating Shush! mode, I register a PendingIntent with the AlarmManager for one minute in the future.  The intent calls back to a BroadcastReceiver that updates the widget's display and registers another PendingIntent for a minute later.  When Shush! mode is deactivated, I unregister the outstanding alarm and switch the widget display to its static state.
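A minimal sketch of that scheduling step (class names and the request code are placeholders of mine, not taken from the app):

```java
// Sketch: schedule/cancel a non-waking widget-update alarm one minute out.
public class WidgetUpdateScheduler {
    static final int REQUEST_CODE = 0;

    public static void scheduleNextUpdate(Context context) {
        AlarmManager alarmManager =
                (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        // UpdateReceiver (hypothetical) is the BroadcastReceiver that redraws
        // the widget and then calls scheduleNextUpdate() again.
        Intent intent = new Intent(context, UpdateReceiver.class);
        PendingIntent pending = PendingIntent.getBroadcast(
                context, REQUEST_CODE, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        // RTC (not RTC_WAKEUP), so the alarm doesn't wake a sleeping device.
        alarmManager.set(AlarmManager.RTC,
                System.currentTimeMillis() + 60 * 1000, pending);
    }

    public static void cancelUpdates(Context context) {
        AlarmManager alarmManager =
                (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent intent = new Intent(context, UpdateReceiver.class);
        // An equivalent PendingIntent matches the registered one, so this cancels it.
        PendingIntent pending = PendingIntent.getBroadcast(
                context, REQUEST_CODE, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        alarmManager.cancel(pending);
    }
}
```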

OK, that wasn't too hard.  Ship it!

Hmm, wait a sec.  During testing, I noticed that during Shush! mode, even though I'm using a "non-waking" alarm, the alarms continued to be triggered while the screen was off.  It turns out that "screen off" != "sleeping".  It doesn't make sense to waste CPU cycles updating the widget if the screen is off, so to avoid that, when activating Shush! mode, I decided to register a BroadcastReceiver that filters for ACTION_SCREEN_OFF and ACTION_SCREEN_ON.  But (sigh) it turns out that one cannot set up a BroadcastReceiver that filters on these actions via the manifest (apparently it's a special case), so I needed to do it programmatically.  Of course, if you're going to do it programmatically, you have to keep that BroadcastReceiver instance around in memory for as long as you need it to be listening, so I created a Service to keep it alive.  I start the Service when Shush! mode is activated, and the Service creates and registers the screen on/off BroadcastReceiver.  When Shush! mode is deactivated, I stop the Service, which unregisters the BroadcastReceiver.

The BroadcastReceiver handles ACTION_SCREEN_OFF by canceling any outstanding alarm, and handles ACTION_SCREEN_ON by updating the widget, and setting a new alarm.
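Putting those last two pieces together, the Service and its screen on/off receiver look roughly like this (class names and the scheduling helper are my own placeholders):

```java
// Sketch: a Service whose only job is to keep the screen on/off receiver alive.
public class ScreenStateService extends Service {

    private final BroadcastReceiver screenReceiver = new BroadcastReceiver() {
        @Override public void onReceive(Context context, Intent intent) {
            if (Intent.ACTION_SCREEN_OFF.equals(intent.getAction())) {
                // no point updating an invisible widget
                WidgetUpdateScheduler.cancelUpdates(context);
            } else if (Intent.ACTION_SCREEN_ON.equals(intent.getAction())) {
                updateWidget(context);                             // redraw immediately...
                WidgetUpdateScheduler.scheduleNextUpdate(context); // ...and resume the countdown
            }
        }
    };

    @Override public void onCreate() {
        super.onCreate();
        // These actions can't be filtered from the manifest, so register in code.
        IntentFilter filter = new IntentFilter(Intent.ACTION_SCREEN_OFF);
        filter.addAction(Intent.ACTION_SCREEN_ON);
        registerReceiver(screenReceiver, filter);
    }

    @Override public void onDestroy() {
        unregisterReceiver(screenReceiver);
        super.onDestroy();
    }

    @Override public IBinder onBind(Intent intent) { return null; }

    private void updateWidget(Context context) { /* refresh the RemoteViews */ }
}
```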

Great, that should do it.  Oh, wait, what about reboots?  Well, unfortunately Android forgets all alarms during a reboot, so they have to be reestablished again afterward.  I do this with a BroadcastReceiver that filters for BOOT_COMPLETED.  Upon receiving the broadcast, this receiver updates the widget and, if Shush! mode is activated, starts the Service.
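The boot receiver gets declared in the manifest in the usual way (the receiver class name here is a placeholder):

```xml
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />

<receiver android:name=".BootCompletedReceiver">
    <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED" />
    </intent-filter>
</receiver>
```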

So there you have it - a widget, a BroadcastReceiver to handle alarms, a BroadcastReceiver to handle screen on/off events, a BroadcastReceiver to handle reboots, and a Service to keep the screen on/off receiver alive.  Never a dull day as an Android developer!

fb-android-dagger 1.0.6 released

This release includes two new injecting base classes:

  • InjectingAppWidgetProvider
  • InjectingActionBarActivity (suggested by Tyler Chesley)

In addition, InjectingApplication was enhanced with a new addSeedModules() method to facilitate unit testing with test-method-specific dagger modules. See Initialize your Android component’s dagger object graph on a per-method basis in your Robolectric tests for a detailed explanation of how to use this feature.

Finally, InjectingServiceModule's provideApplicationContext() provider method was replaced with provideServiceContext, which seems to make more sense in that, uh... context.

Initialize your Android component's dagger object graph on a per-method basis in your Robolectric tests

{note: this blog entry is about dagger 1}

When writing unit tests for Android applications, my go-to tools are Robolectric, dagger, and Mockito.  The combination covers a lot of ground, providing "stand-ins" for much of the Android system functionality and factoring dependencies out of the objects under test by injecting mocks into them.  In my production code, my components derive from helper classes in my fb-android-dagger library, which take care of creating object graphs and self-injecting those components during their initialization logic.  fb-android-dagger's component base classes allow subclasses (including test classes) to participate in establishing the dagger modules to use when creating the object graph.  Thus, a production component might use a module which provides dependency X, while a test component could subclass from the production component and tack on a test module which overrides the production module and provides a mock X instead.

This all works pretty well, but it has a limitation -- the opportunity to override production modules with test modules exists only at the time that the component's initialization method runs.  In practice, this means that if the different test methods in a test class want different mocks to be injected into the component under test, they need to employ some technique to ensure that (1) the set of modules used to create the object graph is established by code which is specific to the test method (which rules out the @Before method, since it is method-independent), and (2) the component under test doesn't get initialized until after (1).

My usual approach to this problem is to defer initialization (which includes object graph generation and injection) of the component under test until inside the test method (as opposed to the @Before method).  In the test method, I first set up my mocks (which are fields of the test class) to provide the desired behavior, then initialize the component under test.  For the test subclass of my component, I override getModules() to supply a module (with overrides=true) that has provider methods which provide the mocks configured in the test method.  Note that this overriding module class must be non-static to have access to the mocks in the test class.

In the following example, assume that MyBroadcastReceiver is the class of the object under test, and it extends InjectingBroadcastReceiver from fb-android-dagger.

public class MyTest {
    private X mockX = Mockito.mock(X.class);
    BroadcastReceiver testReceiver = new TestMyBroadcastReceiver();

    @Test
    public void onReceive_doesSomethingWhenXIsEnabled() {
        // for this test, I want to simulate an "enabled" X
        // onReceive will build the object graph and inject itself using a provider
        // which returns mockX.
        // ... verification logic...
    }

    @Test
    public void someMethod_doesSomethingDifferentWhenXIsDisabled() {
        // for this test, I want to simulate a "disabled" X
        // etc.
    }

    private class TestMyBroadcastReceiver extends MyBroadcastReceiver {
        @Override
        protected List<Object> getModules() {
            List<Object> result = super.getModules();
            result.add(new MockXModule());
            return result;
        }
    }

    @Module(injects = MyBroadcastReceiver.class, overrides = true)
    private class MockXModule {
        @Provides
        public X provideX() {
            return mockX;
        }
    }
}

However, this approach doesn't work if you want to tweak the Application-scope object graph on a per-method basis, because Robolectric creates and initializes (and hence injects) the Application object for you implicitly before your test method runs.  Even if you have a test application class that overrides getModules() and adds modules with provider methods that leverage the state of the test object (i.e. using the technique above), it won't work, because getModules() will get called before the test method runs, and therefore the test method will have no opportunity to configure the test object's state prior to injection of the application object.

Robolectric does provide hooks in the form of the TestLifecycleApplication interface, whose beforeTest, prepareTest, and afterTest methods you can implement in your test application class, but they aren't helpful in this case, since they all run after your test application's onCreate() method has already been called (and thus its object graph has already been created and used to inject it).

One workaround to this problem would be to use different test application classes for different methods, using Robolectric 2.4's @Config(application=...) annotation on a per-method basis.  With this approach, you'd need your different test application classes to specify different modules in their getModules() implementations, each hardcoded to provide the desired mock behavior for the corresponding test method.  Note: here we're assuming that MyBroadcastReceiver's dependency on X is satisfied from the Application-scope object graph.

@Test
public void onReceive_doesSomethingWhenXIsEnabled() { ... }

@Test
public void onReceive_doesSomethingDifferentWhenXIsDisabled() { ... }

private class TestMyBroadcastReceiverWithXEnabled extends MyBroadcastReceiver {
    @Override protected List<Object> getModules() {
        List<Object> result = super.getModules();
        result.add(new MockXEnabledModule());
        return result;
    }
}

private class TestMyBroadcastReceiverWithXDisabled extends MyBroadcastReceiver {
    @Override protected List<Object> getModules() {
        List<Object> result = super.getModules();
        result.add(new MockXDisabledModule());
        return result;
    }
}

@Module(injects = MyBroadcastReceiver.class, overrides = true)
private class MockXEnabledModule {
    @Provides public X provideX() {
        X mockX = mock(X.class);
        // configure the mock here to behave as "enabled"
        return mockX;
    }
}

@Module(injects = MyBroadcastReceiver.class, overrides = true)
private class MockXDisabledModule {
    @Provides public X provideX() {
        X mockX = mock(X.class);
        // configure the mock here to behave as "disabled"
        return mockX;
    }
}


That can get a little unwieldy when you start having combinations of method-specific modules, since each combination would need a different test application subclass.  To avoid that kind of proliferation, I've implemented a subclass of RobolectricTestRunner called InjectingRobolectricTestRunner, which looks for annotations like

@Test
@With(appModules = {MockXEnabledModule.class})
public void onReceive_doesSomethingWhenXIsEnabled() { ... }

and takes care of seeding the annotation-specified modules into the Application object prior to its onCreate() method being called.  This required two small enhancements to InjectingApplication in fb-android-dagger: (a) a new setter method by which the InjectingRobolectricTestRunner can assign the seed modules, and (b) an update to its getModules() method, to add the seed modules into the list it returns.

In some cases, this approach may eliminate the need for test application subclasses altogether, but it works with them just as well -- the annotations establish the method-specific modules, and the test application class' getModules() method tacks on any others that are not method-specific.  As with the first approach described above, note that the test-specific modules must be hardcoded to exhibit the desired method-specific behavior. Here's the code for InjectingRobolectricTestRunner:

public class InjectingRobolectricTestRunner extends RobolectricTestRunner {
    public InjectingRobolectricTestRunner(Class testClass) throws InitializationError {
        super(testClass);
    }

    @Override protected Class getTestLifecycleClass() {
        return InjectingTestLifecycle.class;
    }

    public static class InjectingTestLifecycle extends DefaultTestLifecycle {

        @Override public Application createApplication(Method method, AndroidManifest appManifest, Config config) {
            InjectingApplication injectingApplication = (InjectingApplication) super.createApplication(method, appManifest, config);

            List<Object> seedModules = new ArrayList<>();

            if (method.isAnnotationPresent(With.class)) {
                With with = method.getAnnotation(With.class);
                for (Class<?> clazz : with.appModules()) {
                    try {
                        seedModules.add(clazz.newInstance());
                    } catch (InstantiationException e) {
                        throw new RuntimeException(e);
                    } catch (IllegalAccessException e) {
                        throw new RuntimeException(e);
                    }
                }
                injectingApplication.addSeedModules(seedModules);
            }

            return injectingApplication;
        }
    }

    @Retention(RetentionPolicy.RUNTIME) // needed so isAnnotationPresent() can see it at test time
    public @interface With {
        Class<?>[] appModules();
    }
}

tip: to bind dagger singletons to a particular object graph scope, use injects=


{note: this blog entry is about dagger 1}

If you want singleton semantics for injected objects, dagger offers two approaches - you can implement a @Singleton-annotated provider method, or you can implement an @Inject-annotated constructor in a @Singleton-annotated class. The latter approach is obviously a little more concise, but it raises one issue, which is that in an application with multiple object graphs, some of which extend others (e.g. an Activity-scope graph that extends an Application-scope graph), dagger will maintain singletons of those @Singleton-annotated classes in each object graph that is requested to inject one.  So if you've got a @Singleton-annotated Foo class, you could end up with one Foo object shared by objects that are injected via the Activity-scope graph, and another Foo object shared by objects injected via the Application-scope graph.

There are two ways to avoid this.  Let's assume you want only one Foo instance and you want it to be bound to the Application-scope graph.  In your Application module declaration, you could (1) add a @Singleton-annotated provider method for Foo, or (2) include Foo.class in the "injects=" part of your @Module annotation.  When I first read about this, I found it a little confusing, as I thought the "injects=" list was just for enumerating the classes that the module injects into, but some experimentation confirmed that specifying classes to inject has the effect of binding their @Inject-annotated constructor to that module (essentially creating an implicit provider), and thus to the object graph created from it.
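As a sketch (the class and module names here are mine, not from a real project), the two options look like this:

```java
// The singleton class, with an @Inject-annotated constructor:
@Singleton
public class Foo {
    @Inject public Foo() {}
}

// Option 1: bind Foo via a @Singleton provider method in the Application module
// (library = true keeps dagger's validation happy if nothing in this module's
// injects= list consumes Foo directly):
@Module(library = true)
public class AppModuleWithProvider {
    @Provides @Singleton public Foo provideFoo() { return new Foo(); }
}

// Option 2: no provider method; listing Foo in injects= binds its @Inject
// constructor (and hence its singleton instance) to the graph built from
// this module:
@Module(injects = {Foo.class})
public class AppModuleWithInjects {
}
```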

fb-android-dagger 1.0.5 released

Just a quick note for you fb-android-dagger users out there... I just released a new version of fb-android-dagger.  Pretty minor update, but a few things to note:

  1. I've updated the dagger dependency in the POM to version 1.2.1 (the latest).  There's a 2.0 version of dagger coming out at some point, but for now, this is what we've got.
  2. The POM file dependencies on the Android SDK and the Support v4 library have been updated to the latest versions.  I also switched over to using the groupId/artifactId/version values created by the Maven Android SDK Deployer, rather than the ones from Maven Central (which are unofficial and outdated).
  3. Misc other minor POM file updates.
  4. Thanks to a contribution from Tobias Preuss, the dependency on Guava has been removed.

Get it from Github or from Maven Central using:


Introducing fb-android-dagger - a helper lib for using dagger with Android

A while back, Square released a new dependency injection framework called dagger, emphasizing simplicity and performance (via compile-time code generation rather than runtime reflection). Dagger actually has no Android dependencies and thus can be used in other contexts, but it has gained traction among Android developers. As I started working with it, it quickly became clear that it would be handy to have a set of reusable base classes that integrate dagger with the standard Android components like Application, Service, BroadcastReceiver, Activity, and Fragment. And so, fb-android-dagger was born. I open sourced it about a year ago, and a few people seem to be using it, so it's probably (past) time I introduced it.

Creating object graphs

I'll skip over the dagger basics, as they're explained well on Square's dagger page, but I'll note that a useful implementation pattern that quickly emerged is to have a component base class do the ObjectGraph initialization and injection of "this" in its first lifecycle method (e.g. Activity.onCreate()). To initialize the graph, that code needs the set of applicable dagger modules, so the base class defines a getModules() method which it expects subclasses to override by supplying the modules they want to contribute to the graph.

Here's a typical subclass implementation of getModules():

public class MyActivity extends InjectingActivity {

    // ...

    @Override protected List<Object> getModules() {
        List<Object> modules = super.getModules();
        modules.add(new MyModule());
        return modules;
    }
}

As you can see, the idea is for the module list to be built cooperatively via the getModules() overrides in the chain of classes deriving from the injecting base class. Each class' getModules() method first gets its superclass' module list, adds its own modules to the list, and returns the combined list.

fb-android-dagger's set of component base classes each implement this mechanism, in a lifecycle method appropriate to the component:

  • InjectingApplication (onCreate)
  • InjectingBroadcastReceiver (onReceive)
  • InjectingService (onCreate)
  • InjectingActivity (onCreate)
  • InjectingFragmentActivity (onCreate)
  • InjectingPreferenceActivity (onCreate)
  • InjectingFragment (onAttach)
  • InjectingListFragment (onAttach)
  • InjectingDialogFragment (onAttach)

In addition, the graphs that fb-android-dagger creates are scoped to their associated components, so the graph created by MyActivityOne is distinct from the graph created for MyActivityTwo. However, it does reuse and extend graphs created by other components, working "outward" from the Application graph:

  • InjectingApplication creates the Application-scoped graph
  • InjectingBroadcastReceiver creates a BroadcastReceiver-scoped graph that extends the Application-scoped graph
  • InjectingService creates a Service-scoped graph that extends the Application-scoped graph
  • InjectingActivity, InjectingFragmentActivity, and InjectingPreferenceActivity create Activity-scoped graphs that extend the Application-scoped graph
  • InjectingFragment, InjectingDialogFragment, and InjectingListFragment create Fragment-scoped graphs that extend their associated Activity's graph

Helper modules

fb-android-dagger also provides a set of dagger modules and qualifier annotations to facilitate injection of objects relevant to their component type.

For example, a class injected by an Activity-scope graph could get the Activity's context and the Application context injected like so:

class Foo1 {
    @Inject @Application Context appContext;
    @Inject @Activity Context activityContext;
    // ...
}

A class can also access relevant components themselves, directly:

class Bar {
    @Inject Application theApp;
    @Inject Activity myActivity;
    @Inject Fragment myFragment;
    // ...
}

Feel free to give it a try!

Source is on Github

Maven users can download from Maven Central:


Speed up YouTube videos by 33-50%

I watch a lot of lengthy YouTube videos, mostly instructional-type videos from Google Developer Advocates and the like, explaining how to use APIs, tools, etc. They're good, but every time I look at one and see that it's 45 minutes, or 70 minutes, or whatever, I kind of cringe and sigh. Sometimes I just put it on a "to watch later" list, and maybe I don't get around to it for a long time, if ever. Recently I discovered a useful way to shave a good chunk of time off that watching experience. It's been around for a while, but I wasn't aware of it. YouTube has a trial version of an HTML5 video player that allows you to set the playback speed to 1.5x or 2x normal speed. I find 1.5x to be useful; 2x seems to be too fast if you want to understand the audio, but it could be useful for fast-forwarding to the next interesting part.

For details on enabling this feature, check out the write-up here.

watch out when using Activity.getPreferences()!

The Android SDK has a handy little method for getting SharedPreferences -- the Activity class has a getPreferences() method that implicitly opens/creates a preferences file named after the simple class name of the Activity (e.g. "MyActivity"). Cool, it works.  Ship it.

OK, now in version 2, you add a new Activity called MyOtherActivity, and want to access those same preferences.  Hmm, you can't use getPreferences() in MyOtherActivity; that would use a different filename.  So, you have to use (from the Context class) getSharedPreferences(MyActivity.class.getSimpleName(), ...) instead.

A little ugly.  But wait, it gets worse.

Maybe later, you decide to rename MyActivity to ABetterNameForMyActivity.  Now things are really a nuisance, because you can't use getPreferences() inside MyActivity anymore, and the hack in MyOtherActivity won't work, either (in an upgrade scenario, the new version of the app won't use the preference file left over from the previous version).

You could change both usages to Context.getSharedPreferences("MyActivity", ...), i.e. explicitly specify the name of the no-longer-existing activity, but that's going to be pretty confusing to the next guy who looks at the code.

So, to avoid all this, my recommendation is to create a helper class right from the beginning that encapsulates the name of the preference file (which would be unrelated to any particular Activity).  This would also be a good place to define static Strings for the preference value names used when getting and setting preference values.  Something like this (a little crude, but you get the idea):

public class SharedPrefs {
    public static final String PREF_VALUE1 = "value1";
    public static final String PREF_VALUE2 = "value2";

    private static final String prefsFileName = "prefs";
    private SharedPreferences prefs;

    public SharedPrefs(Context context) {
        prefs = context.getSharedPreferences(prefsFileName, Context.MODE_PRIVATE);
    }

    public SharedPreferences getPreferences() {
        return prefs;
    }
}

excluding src\main\resources from source folders in IntelliJ build

Real quick... I recently switched from Eclipse to IntelliJ, and one thing that's been annoying me is that IntelliJ keeps putting the src\main\resources folder of my Android project into its list of source folders.  Removing it only helps temporarily; it gets restored again later (I think when I build from the "maven projects" window). Solution: exclude this folder at the compiler level, in the project settings.  File->Settings->Compiler->Excludes.

Credit for this solution goes to Sergey Evdokimov, who posted it here.

App Crashers!

When I was a little kid, I had a great-uncle whom the family called "George With The Hat", to distinguish him from another (hat-less) great-uncle also called George.  George With The Hat was a farmer, and raised sheep.  I remember that when visiting him, he would go out to the pasture to feed the sheep, and as he approached, he would call out "baaa!", and the sheep would all reply in unison and come running over to him.  I thought that was cool, but when I tried it, they ignored me. What does this have to do with software?  Bear with me.

I've integrated my Android app with a crash reporting mechanism, so that I can find out about problems my users encounter.  I use ACRA, which I've tied in with BugSense, but there are others like Crittercism, Flurry, HockeyApp, etc.

I wanted to have an easy way to test that the crash-reporting mechanism was working in release builds of my app, but I didn't want to have to put some obscure UI affordance into the app to trigger a crash.

So, I built a separate little "App Crasher" app.  All it does is show a button that, when pressed, sends out an intent with a custom action.  Then, in my app that contains the crash reporting logic, I implemented a simple activity that intentionally causes a crash in its onCreate() method.  In the manifest, that activity registers an intent filter to receive the intent sent by the App Crasher. As an extra precaution (here's where George With The Hat comes in), I implemented a custom permission that the "crash-ee" requires of the "crash-er" as part of the intent filtering.

It's all pretty simple, but handy, and it demonstrates how to use intents with custom verbs and permissions to communicate between apps.

Without further ado, the code:

In the App Crasher app, the main activity does a setContentView() on a layout containing a button with the ID "boom", looks up that button via findViewById(), and sets an onClickListener on it that calls startActivity(), specifying the custom action.

public class MainActivity extends Activity {

    @Override public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main); // the layout containing the "boom" button

        final Button button = (Button) findViewById(R.id.boom);
        button.setOnClickListener(new View.OnClickListener() {
            @Override public void onClick(View v) {
                try {
                    startActivity(new Intent("com.fizzbuzz.appcrasher.CRASH"));
                } catch (ActivityNotFoundException e) {
                    Toast.makeText(MainActivity.this, "No apps found that want to crash :-(", Toast.LENGTH_SHORT).show();
                }
            }
        });
    }
}

The App Crasher manifest declares a custom permission that the receiving app's intent filter will require of intent senders:
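Something along these lines (the permission name and protection level are my guesses; the original snippet isn't shown):

```xml
<permission
    android:name="com.fizzbuzz.appcrasher.permission.CRASH"
    android:protectionLevel="normal" />

<!-- the sender must also hold the permission it requires of receivers -->
<uses-permission android:name="com.fizzbuzz.appcrasher.permission.CRASH" />
```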


In the app where I want to receive the intent and produce a crash, I added a simple activity:

public class CrashActivity extends Activity {
    @Override protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        int i = 1 / 0; // boom!
    }
}

... and in that app's manifest, I declared the activity and an intent filter for it, specifying the custom permission and the custom action:
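A sketch of that declaration (the permission name matches my guess above; the original snippet isn't shown):

```xml
<activity
    android:name=".CrashActivity"
    android:permission="com.fizzbuzz.appcrasher.permission.CRASH">
    <intent-filter>
        <action android:name="com.fizzbuzz.appcrasher.CRASH" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
```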

Maven enforcer plugin vs. dependencyManagement

I recently found myself in a situation where I needed to detect whether my maven dependencies, both direct and indirect (transitive), were resolving to inconsistent versions. For example, let's say that artifact A depends on B and C, and B also depends on C.  When building A, I want to know if it is picking up two different versions of C, one directly, and one transitively through B.

As with all Maven problems, "there's a plugin for that" -- in this case, the maven-enforcer-plugin.  It has a variety of interesting rules, but the one that addresses the need I was having is called "dependencyConvergence".

So, I plugged it in to a top-level parent POM, so I could use it in all my projects:

                <DependencyConvergence />
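For context, that rule element sits inside the enforcer plugin's configuration, roughly like this (the plugin version is an assumption from that era):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <version>1.3.1</version>
    <executions>
        <execution>
            <id>enforce-dependency-convergence</id>
            <goals>
                <goal>enforce</goal>
            </goals>
            <configuration>
                <rules>
                    <DependencyConvergence />
                </rules>
            </configuration>
        </execution>
    </executions>
</plugin>
```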

I use Eclipse with m2e, and I wanted this enforcement to happen in my Eclipse builds, too, so I also added:

            <execute />
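That <execute /> element lives inside m2e's lifecycle-mapping metadata, which tells m2e to actually run the enforcer goal during Eclipse builds rather than ignore it.  A sketch of the surrounding configuration:

```xml
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.eclipse.m2e</groupId>
      <artifactId>lifecycle-mapping</artifactId>
      <version>1.0.0</version>
      <configuration>
        <lifecycleMappingMetadata>
          <pluginExecutions>
            <pluginExecution>
              <pluginExecutionFilter>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-enforcer-plugin</artifactId>
                <versionRange>[1.0,)</versionRange>
                <goals>
                  <goal>enforce</goal>
                </goals>
              </pluginExecutionFilter>
              <action>
                <execute />
              </action>
            </pluginExecution>
          </pluginExecutions>
        </lifecycleMappingMetadata>
      </configuration>
    </plugin>
  </plugins>
</pluginManagement>
```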

I ran a build, and sure enough, I did have some mismatches.  This is a really handy plugin -- I have a decent number of my own artifacts involved, plus a variety of 3rd party ones that show up frequently in my projects, and with all of them getting updated pretty regularly, it's pretty tough to keep track of everything.

In one case where I had a mismatch, I decided to resolve the problem by moving the specification of C's version up to a parent POM shared by A and B.  Two options occurred to me:

1) I could specify a property like


and have A's and B's POMs use that inherited property in their definitions of the C dependency.  Or,

2) I could put an entry for C into the parent POM's dependencyManagement section:


and omit C's version from A's and B's POM.
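Concretely, the two options look roughly like this (all groupId/artifactId/version values are placeholders of mine):

```xml
<!-- Option 1: a version property in the parent POM... -->
<properties>
    <c.version>1.0.0</c.version>
</properties>

<!-- ...referenced explicitly in A's and B's dependency declarations: -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>C</artifactId>
    <version>${c.version}</version>
</dependency>

<!-- Option 2: a dependencyManagement entry in the parent POM;
     A and B then omit <version> from their C dependency entirely. -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.example</groupId>
            <artifactId>C</artifactId>
            <version>1.0.0</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```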

I tried #2 first, since it seemed a little simpler.  As a first step, I added the dependencyManagement entry to the parent POM and removed the version number for C from A's POM, then ran a build.  Guess what:  maven-enforcer-plugin stopped complaining!  But wait, I didn't change B's POM yet; I should still have a discrepancy, shouldn't I?  I thought that all dependencyManagement did was specify default versions for descendant POMs that omitted the version for that dependency.  I checked my Maven: The Definitive Guide book to see if there was more to it.  Nope, no mention of any other purpose or side effects.  I checked the Sonatype web site's description, in case it was more detailed.  Nope.  Hmm.  Then I went to the horse's mouth, the Apache docs, which say:

Dependency management - this allows project authors to directly specify the versions of artifacts to be used when they are encountered in transitive dependencies or in dependencies where no version has been specified. In the example in the preceding section a dependency was directly added to A even though it is not directly used by A. Instead, A can include D as a dependency in its dependencyManagement section and directly control which version of D is used when, or if, it is ever referenced.

[Note: that page is worth a detailed read; it has some good examples which helped firm up my understanding.]

Interesting.  So the version specified in dependencyManagement serves as a default value if none is specified in a descendant POM (but doesn't override the value in a descendant POM if one is specified).  However, it does override a specified value in a transitive dependency.  Because of this behavior, the version of C in B's POM was being overridden, and therefore maven-enforcer-plugin didn't detect the discrepancy.

This dependencyManagement behavior has pros and cons.  On one hand, you can use it to silence the maven-enforcer-plugin in situations where you can't get all the artifacts involved to use the same version (as might be the case if there are 3rd party artifacts involved).  Of course, if you do that, you're taking a risk that things could go awry at runtime, if the version you specify in dependencyManagement is incompatible in some way with an artifact that had wanted to use a different version.  But sometimes you don't have a choice, and in this situation you should most likely choose the highest of the requested versions, since it's possible that it contains bug fixes or features needed by the artifact that requested it.

The downside of using dependencyManagement willy-nilly as a DRY technique is precisely that maven-enforcer-plugin will no longer give you a heads up about those discrepancies.

So what I'm doing now is this:

  • I don't put dependencies in the dependencyManagement section of my top-level POM.  I want to be alerted by maven-enforcer-plugin when I've got mismatches.  Instead, I use version properties, as mentioned in my approach #1 above.
  • When maven-enforcer-plugin notifies me of discrepancies, I try to see if I can get the artifacts involved to use the same version of the divergent dependency.  If all the dependencies involved are in my own artifacts, I try to get them aligned on the same version of the dependency.  If some artifacts are mine and some are from 3rd parties, I try to align my dependencies with the 3rd parties, and/or look for other versions of the 3rd party artifacts that have dependency versions that align with each other, and my code.
  • If after doing the above, I still have unresolvable discrepancies, I choose what I think is the "best fit" version of the problematic artifact and specify that in the dependencyManagement section of the project POM where maven-enforcer-plugin reported the problem (not in my top-level POM).  I add a comment to the dependency declaration in that POM noting the issue and the workaround, so that in the future, should I upgrade to a newer version of the dependency, I'll see the note and can revisit whether the discrepancy can possibly then be resolved.

Connecting to an app engine development server from remote clients

OK, I'm writing this down just in case I run into it again in the future, because I just spent HOURS tearing my hair out before finding the solution.  Maybe it will help someone else someday, too. I recently messed around with a whole bunch of configuration-related stuff for my Google App Engine app, requiring me to reestablish my Eclipse run configuration parameters for executing my app in the development server.  In the past, I had set up port forwarding rules in my router, so that I could hit my app running in the dev server from remote clients like browsers running on other machines, from my client Android app, etc., and it had been working fine.

As is so often the case, after making configuration changes, there is a hill to be re-climbed in order to get everything working again.  In my case, although I could access my app in the dev server from clients running on the same machine, my remote clients could not connect.  I checked my port forwarding rules in my router -- they were still there.  I tested them using the PFPortCheck tool, and it verified the ports were forwarding correctly.

I turned off my firewall; nope, that's not the problem.

I rebooted.  Nope.

I power cycled my router and my cable modem (obviously getting desperate here, since I'd already verified that port forwarding wasn't the issue, but I've also found that sometimes challenging your assumptions helps).  But nope.

I tried different ports.  I checked to make sure there weren't other apps listening on the same port and stealing the requests.  Nope, nope.

I set up Fiddler in reverse-proxy mode, and get this -- it started working.  That is, with my dev server listening on port 8080 (the forwarded port) and Fiddler listening on 8888 (also forwarded), my outside requests to 8888 were being picked up by Fiddler, rerouted, and successfully received by my app.  Damn, a Heisenbug. OK, interesting, but still frustrating -- why did that work, and what does it tell me about the problem?

After much Googling and cursing, I eventually stumbled across the culprit.  I needed to provide an "--address=..." argument in the run configuration's command line, to tell the server to listen for connections on addresses other than the default, localhost.  After setting this argument to the value of my externally-visible IP address (see WhatsMyIP), I was back in business!


[follow-up: if you specify the --address argument as, you can connect from remote clients using the externally-visible IP address, and still connect locally using]
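To see why the flag matters, here's a self-contained Java sketch of the bind-address distinction the --address flag controls (the class and method names are mine, not from the App Engine SDK):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BindDemo {
    // True if a TCP client can connect to host:port.
    static boolean canConnect(String host, int port) {
        try (Socket s = new Socket(host, port)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Bound to the loopback address: only clients on this same
        // machine can connect -- the dev server's default behavior.
        try (ServerSocket loopbackOnly =
                 new ServerSocket(0, 50, InetAddress.getByName(""))) {
            System.out.println("loopback bind, local connect: "
                + canConnect("", loopbackOnly.getLocalPort()));
        }
        // Bound to the wildcard address ( reachable on every
        // network interface, which is what --address= gives you.
        try (ServerSocket allInterfaces =
                 new ServerSocket(0, 50, InetAddress.getByName(""))) {
            System.out.println("wildcard bind, local connect: "
                + canConnect("", allInterfaces.getLocalPort()));
        }
    }
}
```

Run locally, both connects succeed; the difference only shows up for connections arriving from another machine, which the loopback-bound socket never sees.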

Capturing incoming requests to your Google App Engine development server, using Fiddler

I'm working on an app that leverages Google App Engine, and I was recently stumped trying to figure out how to use Fiddler to capture incoming HTTP requests to my server running in development mode, when those requests were coming from another machine.  I'm jotting this down to save others from the hair-pulling I went through.

  1. Change the run/debug configurations in Google Plugin for Eclipse so that the development server uses some port other than 8888.  In my case, I changed it to 8080.  The problem with 8888 is that Fiddler wants to use that port, too.  I tried changing the port that Fiddler listens on, in its options dialog, but it didn't work for me.
  2. Set up port forwarding on your router, so that the port your development server is listening to is being forwarded to your development machine.
  3. Follow Fiddler's instructions for setting it up as a reverse proxy, but be sure to use option #2 (setting up a custom rule).  The registry approach doesn't seem to work for requests coming in from another machine.
  4. Ensure the external machine making requests to your development server is using the externally-visible IP address of your machine (see WhatsMyIP) and port 8888.  If it uses port 8080, the request will still go to your server, but Fiddler won't capture it.

Note: I have found that there can be some flakiness in getting this setup to take effect.  I haven't figured out exactly what the trick is, whether it's restarting Fiddler multiple times, or simply waiting for some cached value somewhere to time out, ...  If somebody solves that part of the puzzle, please let me know.
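For step 3, the custom rule is a small addition to the OnBeforeRequest handler in Fiddler's CustomRules script (Rules > Customize Rules...).  A sketch of the idea only -- the machine name and port numbers below are assumptions based on the setup described above:

```javascript
// Inside OnBeforeRequest(oSession): retarget requests that arrived on
// Fiddler's port (8888) to the dev server's port (8080).
// "mydevbox" is a placeholder for your machine's host name.
if (oSession.host.toLowerCase() == "mydevbox:8888") {
    oSession.host = "mydevbox:8080";
}
```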

Eliminating the lag after Android gesture completion

Thought I'd write this up quickly, since I ran into it not too long ago, and then I saw someone asking about it online recently... If you're implementing gesture recognition in your Android app, one of the first things you'll notice once it's basically working is an annoying and mysterious delay between the time you finish drawing the gesture and the time processing of that gesture begins (i.e., when your OnGesturePerformedListener's onGesturePerformed method gets called).

This is caused by the so-called "fade offset".  If you're showing the gesture on the screen, then during this time period, the drawn gesture fades away.  However, if you're not showing it, it's just a waste of time.

To eliminate this lag (the default duration of which is 400 milliseconds), call the following methods on your GestureOverlayView:

setFadeEnabled(false);
setFadeOffset(0);

or use the equivalent android:fadeEnabled and android:fadeOffset attributes in your view's layout XML.
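Putting it together, a minimal sketch in an Activity's onCreate (the layout and view ids here are hypothetical, not from the article):

```java
// Sketch: disable the post-gesture fade so gesture processing starts
// immediately.  Assumes a GestureOverlayView with id "gestures" in the
// activity's layout; ids and layout names are illustrative only.
import android.app.Activity;
import android.gesture.GestureOverlayView;
import android.os.Bundle;

public class GestureActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        GestureOverlayView overlay =
            (GestureOverlayView) findViewById(R.id.gestures);
        overlay.setFadeEnabled(false); // don't animate the fade at all...
        overlay.setFadeOffset(0);      // ...and don't wait before processing
        overlay.addOnGesturePerformedListener((view, gesture) -> {
            // called with no 400 ms delay after the gesture completes
        });
    }
}
```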