
Friday, December 14, 2012

mHealth and Data Liquidity

Healthcare Needs APIs More than Apps

This post was written with my colleagues Joris Van Dam, Translational Sciences Strategic Project Leader at Novartis Institutes for Biomedical Research (NIBR); and John R. Walker, director of NIBR IT for Novartis.

As frequent attendees of mobile health conferences, we’ve been excited by the momentum and creativity we’ve seen in mobile health solutions (Health Apps). But we've also been concerned.

We’re excited because of the number of Health Apps being developed for patients and consumers in different disease areas, for wellness, prevention and care. As big believers in the potential of Health Apps to improve outcomes and lower healthcare costs, we’re glad to see this market really take off.

At the same time, it seems there’s an App for everything – and anything. Yet there’s very little talk of interoperability or data exchange – heck, even about preserving my personal health data when I want to exchange my App for a new one, when I need to re-image my phone, or when I want to buy a new phone.

It seems that in the rush to capture the value of mHealth, we're creating a staggering number of new health data silos – making health data less liquid and shareable when we need just the opposite. This is a big concern, because in the longer term, data liquidity is the key to sparking innovation, improving outcomes, and reducing healthcare costs.

At the recent 2012 mHealth Summit, it was encouraging to see that this approach is starting to shift. It seems that more and more healthcare companies are recognizing that even though their patients or their customers need these Apps, perhaps jumping head-first into Apps development isn’t their best bet. Instead, maybe healthcare companies should let “the market” develop these Apps (they seem to know what they’re doing!) while they support the market – with funding, with infrastructure, and with content (liquid data).

Getting It Right

One of the great examples at the mHealth Summit was presented by Aetna, a 160-year-old health insurance company. Rather than joining the Apps race, Aetna has developed and published the CarePass platform. CarePass provides a set of APIs that lets you build your own Apps, plus interoperability among your App and all the other Apps built on CarePass, including the 20+ Apps that Aetna has developed or acquired itself, such as iTriage. Update of January 24, 2013: Check out this great video interview with Aetna Vice President Martha Wofford.

AT&T and Qualcomm Life also presented their platforms, whose APIs connect to a variety of health and wellness devices. These APIs let you build Apps that talk to the devices while remaining independent of them: you could switch from one fitness sensor to another, buy a different wireless weight scale, or use a new blood-pressure monitor, and the App preserves your health data.
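That device-independence idea can be sketched in a few lines. The vendor names and payload fields below are hypothetical, not any real platform's API; the point is that a common record format behind the API keeps the App's data intact when the user switches devices.

```python
# Sketch of a device-facing API layer: each adapter translates a
# vendor-specific payload into one common Measurement record, so the
# App's stored history survives a device switch. Vendors and field
# names here are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Measurement:
    kind: str        # e.g. "weight_kg", "systolic_mmhg"
    value: float
    taken_at: datetime
    source: str      # which device produced it

class AcmeScaleAdapter:
    """Hypothetical adapter: this vendor sends grams + a Unix timestamp."""
    def to_measurement(self, payload: dict) -> Measurement:
        return Measurement("weight_kg", payload["wt"] / 1000.0,
                           datetime.fromtimestamp(payload["ts"], tz=timezone.utc),
                           "acme-scale")

class FitCoScaleAdapter:
    """Hypothetical adapter: this vendor sends kilograms + ISO-8601 time."""
    def to_measurement(self, payload: dict) -> Measurement:
        return Measurement("weight_kg", payload["weightKg"],
                           datetime.fromisoformat(payload["time"]),
                           "fitco-scale")

# The App only ever sees Measurement records, so swapping scales
# preserves the user's history.
history = [
    AcmeScaleAdapter().to_measurement({"wt": 72500, "ts": 1355443200}),
    FitCoScaleAdapter().to_measurement({"weightKg": 72.1,
                                        "time": "2012-12-15T08:00:00+00:00"}),
]
print([round(m.value, 1) for m in history])  # both in kg: [72.5, 72.1]
```

The adapter layer is exactly what a platform API buys you: the App is written once against `Measurement`, and each new device only needs a new adapter.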

In the same vein, earlier this year Athena Health, the Watertown, Mass.-based provider of cloud-based Electronic Health Record (EHR) systems, launched its "More Disruption Please" program. Participants receive access to the API of Athena Health's EHR system, giving them a platform on which to build Apps. They can even get start-up funding and launch support for their Apps within the network of Athena Health customers.

Allscripts launched a similar developer program for their EHR. And in 2009 Cerner launched its uDevelop platform, providing their customers with a platform and APIs to develop new Apps, and an App store to share these Apps with each other.

Of course, the largest, and perhaps most influential, organization to get it right is the ONC, which has been setting the tone for improving data liquidity in healthcare, and thereby stimulating healthcare innovation in Health Apps development, through an array of policies and programs. These include health information exchanges, meaningful use criteria for EHR adoption, Project Blue Button, and the Health platform – to name just a few.

Many of these programs also provide funding for you to develop Apps with their APIs. And, if not, there is always Health 2.0, which announced in May 2012 that it had already awarded more than $1 million to mobile health initiatives through its Developer Challenge series.

Where to Go from Here

There will always be a new App, a better App, an App that fits my health needs better as they evolve over time – and that’s OK. Yet all this tremendous activity and investment in Health Apps will only really and truly pay off, for individuals and their individual health needs as well as for a sustained impact on the healthcare system overall, if those Apps are built on, and contribute to, an underlying layer of liquid health data.

As patients and consumers, we need to be able to switch from one device to the other, carry our health data from one provider to the other, switch insurance companies as we take new jobs, exchange one App for the other and so on – as our personal healthcare needs evolve over time. And those personal healthcare needs will really and truly only be served if our health data moves with us, as opposed to being caught in the latest fad and locked in the latest App.

For mHealth to pay off, we need APIs more than we need Apps.  It’s great to see that so many companies are getting it right. A year ago, we felt both excited and concerned at the prospect of mHealth. Today, we’re just pumped.

Tuesday, November 27, 2012

Open Patient Consent and Clinical Research Data

Overcoming Lingering Concerns

In my last several posts, I wrote about how crucial it is to increase the liquidity of clinical research data, particularly clinical trial data, and how we can achieve improved data liquidity with patient-centric systems and software.

But there's a remaining, and not insignificant, challenge: patient consent.

Before sharing their health information, people want to know it’s going to be secure and beneficial to do so.  Until you’ve got people who are willing to share their data, it’s tough to justify the investment in building secure systems. A classic chicken-and-egg problem.

This is a very similar dynamic to e-commerce back in the 1990s.  People were afraid to enter their credit card numbers into a Web site. Many people were even saying that no one would ever trust personal financial data to the Web. Today, you can take a picture of a check with your iPhone, deposit the check electronically and throw the physical check away.  The convenience of electronic financial transactions via the Web far outweighed the security risks (both real and perceived).  

I believe that we’re going through a similar transition with electronic health data that we went through with personal financial data back in the 1980s and 1990s.  It may take a decade or two for people to be comfortable sharing their anonymized and aggregated medical information to benefit research...but maybe not.  In my humble opinion, the benefits of portability of our medical information now far outweigh the security risks/concerns.  

Some of the benefits are very pragmatic:
  • Portability: When you switch doctors, you can bring your medical history with you electronically.
  • Accessibility: When you have an emergency, the ER team can pull up your history and search for your allergies immediately.
  • Reference: When your doctor asks you when you had your last immunizations, the answer is at hand.
You might recall that some high-profile early efforts at personal patient record systems failed ― specifically, Google Health and Microsoft HealthVault.  I think of these not as failures, but as invaluable experiments that helped us all learn what works and doesn’t work in managing and sharing health data securely and efficiently.  And perhaps most importantly, these experiments began to socialize the ideas of medical information being represented electronically and of patients owning their medical records. 

Meet Project Green Button

Project Green Button is an important experiment in creating data liquidity and sharing medical information.

You may have heard of Project Blue Button. This was a fantastic project launched by the US Department of Veterans Affairs, enabling VA patients to download a copy of their own health data from the VA's systems by clicking on a Blue Button. This is a really great example of empowering patients to own their own medical data and improving the portability of their medical information.

Now let’s think about it the other way around.

Research has suggested that if people were presented with a Green Button that provided a single-click way to share their data with researchers, more than 80% of them would press it. My close friend and trusted colleague John Wilbanks gave a great TED Talk about this a few months ago and has been doing ground-breaking work on open consent, the philosophy behind the Green Button.

I'm very hopeful that John will succeed in his mission to empower patients to share their own data to benefit research (after all, it is their data) and that most people, when asked, will be willing to share their anonymized information to benefit research.

The team at the LIVESTRONG Foundation, led by Director of Evaluation and Research Ruth Rechlis-Oekler, Ph.D., did a great study a few years ago about cancer patients' and survivors' willingness to share their information in the interest of improving research. The results were compelling: the majority of patients and survivors WERE willing to share their de-identified and aggregated health information with researchers in the interest of improving health care for others.

So, the willingness is there. All we're missing are the systems to enable this, and the time to create those systems (similar to what I described in my last blog post) is NOW. One of the most interesting companies working in this area is Avado; founder Dave Chase is one of the thought leaders in this space and has a fantastic vision for where the industry needs to go over the next 10 years.

Wanted: A Trusted “Zone”

In order to prime the pump of online personal health data, we need patient-controlled solutions and a trusted zone where we can connect patients securely with their data. This trusted zone would be a place where:
  • Each patient has a “dashboard” through which he can get access to all data about his own health, regardless of where the referential data is located.
  • Each patient can determine whether and with whom to share pieces of his health data – with his family doctor, relatives or loved ones.
  • Each patient can track his own health, and record and track his own experiences.
The result would be something that feels like a simple dashboard: a single environment, available to the patient, where he can access, share, and manage all his relevant health information.
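The core of such a trusted zone is simple enough to sketch: patient-held record references plus a patient-controlled permission list. Everything below (class and field names) is illustrative, not a description of any real product.

```python
# Toy sketch of the patient-controlled "trusted zone": the dashboard
# aggregates records held elsewhere (by reference), and a per-grantee
# permission list decides which categories of data each party may see.
class PatientDashboard:
    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.records = []      # (category, payload, source) tuples
        self.grants = {}       # grantee -> set of categories they may view

    def link_record(self, category, payload, source):
        """Register a piece of health data, wherever it actually lives."""
        self.records.append((category, payload, source))

    def share(self, grantee, categories):
        """The patient decides who sees which pieces of the data."""
        self.grants.setdefault(grantee, set()).update(categories)

    def view_as(self, grantee):
        """What a given party is allowed to see -- nothing by default."""
        allowed = self.grants.get(grantee, set())
        return [(c, p, s) for (c, p, s) in self.records if c in allowed]

dash = PatientDashboard("melissa")
dash.link_record("allergies", "penicillin", source="hospital-ehr")
dash.link_record("mood_diary", "feeling better today", source="tracker-app")
dash.share("family-doctor", {"allergies"})

print(dash.view_as("family-doctor"))  # only the allergy record
print(dash.view_as("insurer"))        # [] -- nothing shared
```

Note the default: a party with no grant sees nothing, which is the "patient determines whether and with whom to share" principle expressed as code.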

Once we can "free the data" in a trusted environment like this, it's possible to bolt on a whole battery of cool apps that you can't even anticipate today, and that you don't even have to develop yourself.

With today’s rapid app development technologies, we can build apps for physicians and for many other use cases, and simultaneously make the apps accessible via browser-based applications. 

A great example is the LIVESTRONG Cancer Guide and Tracker for iPad, which collects and combines patient-reported data for patients living with cancer. We currently have a pilot project under way to connect the Tracker app with traditional clinical systems and a secure Web-based application, enabling the patient and his doctor to collaborate more effectively during office visits. We're using the fantastic SMART Platform developed at Children's Hospital in Boston under the leadership of Zak Kohane and Ken Mandl.

A Call to Action

So, here are a few action items:

First, we in the biopharmaceutical industry should ask ourselves:  “What data can we ‘free up’ about our products and our studies to better support patients and physicians?"  Let's start with the basic inclusion/exclusion criteria for our existing studies that we already share with our clinical partners.  As I mentioned in my previous post, if we would just increase the liquidity of this data in the biopharmaceutical industry, we could have a HUGE impact on the efficiency of research through the ability for the right patients to find the right studies at the right time.  

Second, we should support the idea of open consent by supporting the Consent to Research project. Let's all give patients the flexibility to support research with their own medical information and, in the process, radically improve the efficiency of the healthcare system by simplifying the complex spider web of consents that our dysfunctional healthcare system has created. The current system of consents does not protect patients; it confuses them.

This blog post came out of a presentation that I recently delivered at Rev Forum, a conference sponsored by Lance Armstrong’s LIVESTRONG Foundation and Genentech. I have worked with LIVESTRONG and various biopharmaceutical companies on new health care information products and apps that take advantage of data liquidity to help patients combat cancer and other difficult diseases.

Monday, November 19, 2012

Better Systems for Clinical Data Collaboration

Innovation in Systems and Software

In my last two posts, I wrote about the need for more liquidity in clinical research data.  As a foundation for sharing this new more-liquid clinical research data, we need more patient-centric systems, where patients can create, consume and maintain relevant medical information.

However, in the average hospital, most patient data is generated and organized in the clinic, and typically stored in a variety of legacy hospital systems. That data is pretty illiquid by definition. Therefore, we need fresh approaches to sharing it across hospital systems, and then across multiple hospitals.

Fortunately, innovation in systems and software is beginning to happen on this front.

Several organizations, including the Dana-Farber Cancer Institute, the LIVESTRONG Foundation, and Boston Children's Hospital, are working to build a reference implementation: a technology model describing how a hospital could publish the information contained in its systems easily, securely and efficiently to other institutions.

Another very interesting technology is the SMART Platform, developed under the leadership of Dr. Zak Kohane and Dr. Ken Mandl at Children's Hospital in Boston. The SMART Platform and the i2b2 analytical tools are truly a step in the right direction.

These new technical approaches provide the ability to build cool new apps very quickly – apps that combine data reported by the patient in an interface like the LIVESTRONG Cancer Guide and Tracker iPad app alongside data collected by traditional clinical systems. Using the same approach, we can build apps for physicians and many other use cases and simultaneously make the apps accessible via browser-based applications.

Let's think about what else we can do as an industry, and encourage many people and companies to start writing new apps quickly.

In my next post, I'll talk about the big remaining challenge to the liquidity of clinical research data – patient consent – and how we win patient trust and cooperation.

This blog post came out of a presentation that I recently delivered at Rev Forum, a conference sponsored by Lance Armstrong’s LIVESTRONG Foundation and Genentech. I have worked with LIVESTRONG and various biopharmaceutical companies on new health care information products and apps that take advantage of data liquidity to help patients combat cancer and other difficult diseases.

Tuesday, November 13, 2012

Improving Data Liquidity in Clinical Research

Empowering Patients and Doctors

In my last post, I wrote about the need for data liquidity in clinical research – and the need for biopharmaceutical companies and healthcare institutions to take the lead by freeing up data about their studies, clinical trials and drugs.

To accelerate clinical research efforts for diseases like cancer, we need two things.  First, stakeholder institutions (like biopharmaceutical companies and healthcare institutions) need to free up their data.  Second, we need new systems and software that can share and manage that data – securely and at scale – across our complex healthcare ecosystem. 

Start-ups are being formed every day that are pushing the envelope on applications and technologies that take advantage of health care data liquidity. In Boston/Cambridge alone, we have three start-up incubators dedicated to health information technologies, each with 10+ start-ups. That's more than 30 top-tier, vetted start-ups focused on health information technologies in development at any given point in time.

Pharma companies have long been proponents of "free your data", as long as it means freeing data from patients, from claims clearinghouses, from pharmacies and so on. They have been less enthusiastic about freeing their own data about their studies, clinical trials and drugs. But the logjam is starting to break.

As I mentioned in my previous post, GlaxoSmithKline (GSK) recently announced that it would make detailed data from clinical trials available to researchers: the patient-level data that forms the basis of trials of approved drugs, as well as of discontinued investigational drugs. Researchers' requests for access will be reviewed by an independent panel of experts, and the patient data will be anonymized.

Last year, one of the large biopharmaceutical companies, in collaboration with the electronic health record company Cerner, began an initiative to build an open interface that enables sponsors of clinical research studies such as Novartis, Genentech, GSK or Pfizer to publish inclusion/exclusion criteria for their clinical studies to specific clinical partners – in much greater detail than what's available on the National Institutes of Health's ClinicalTrials.gov today.

The goal of this project is to create an electronic mechanism to ensure that all eligible patients are identified for appropriate studies via their doctors. This mechanism would be able, without changing any data-privacy arrangements, to flag patients' records when they are diagnosed or when new studies or updates to studies become available, and to dynamically notify doctors and match patients with new or evolving studies based on patients' clinical profiles and the inclusion/exclusion criteria of the trial.

Under this new electronic standard for study information, study eligibility criteria are expressed in a standardized, machine-readable format. Any EHR system can ingest the study data automatically as new studies are created and as existing studies change.  With this more-liquid data, providers can then match the inclusion/exclusion criteria against the health records in their systems.

A provider configures its systems to flag records of potentially qualifying patients using a form of research-study-recommendation engine. The provider runs and tunes this engine so that the next time a clinician pulls up a patient’s health record (or the patient’s health record changes), the EHR system will suggest to the doctor that his patient may qualify for a clinical trial – both local and not-so-local trials (think truly global patient recruitment with little or no extra effort). 
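The matching step described in the last few paragraphs can be sketched in a few lines. The real standard's format isn't shown in this post, so the field names and study ID below are assumptions; the sketch just shows the shape of the idea: machine-readable inclusion/exclusion criteria matched against patient records, with no patient data leaving the provider.

```python
# Illustrative sketch: a study publishes machine-readable
# inclusion/exclusion criteria, and the provider's recommendation
# engine flags patient records that satisfy them. Field names and
# IDs are invented for illustration, not the actual standard.
studies = [
    {"id": "NCT-EXAMPLE-001",
     "include": {"diagnosis": {"pancreatic cancer"},
                 "min_age": 18, "max_age": 75},
     "exclude": {"conditions": {"renal failure"}}},
]

def eligible(patient, study):
    """Check one patient against one study's criteria."""
    inc, exc = study["include"], study["exclude"]
    return (patient["diagnosis"] in inc["diagnosis"]
            and inc["min_age"] <= patient["age"] <= inc["max_age"]
            and not (set(patient["conditions"]) & exc["conditions"]))

def flag_matches(patients, studies):
    """Run whenever a record changes or a new study is published;
    the flags stay inside the provider's system."""
    return [(p["id"], s["id"]) for p in patients for s in studies
            if eligible(p, s)]

patients = [
    {"id": "pt-1", "diagnosis": "pancreatic cancer", "age": 54,
     "conditions": []},
    {"id": "pt-2", "diagnosis": "pancreatic cancer", "age": 54,
     "conditions": ["renal failure"]},   # hits an exclusion criterion
]
print(flag_matches(patients, studies))  # [('pt-1', 'NCT-EXAMPLE-001')]
```

This is also why the privacy story holds up: the study criteria travel to the provider, the matching runs locally, and only a flag in the clinician's own EHR results.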

In addition, when a new trial is published, doctors can be notified that there is a new trial of possible interest for which specific patients of theirs may be eligible.

Through this type of simple standard for study information exchange – one that empowers doctors and is run by providers – doctors and patients could be automatically made aware of trials regardless of where the study is being run or when a new study starts.

Using this simple standard, Cerner has worked with various large biopharmaceutical companies to build and test end-to-end proofs of concept, successfully demonstrating that this approach can work very well with relatively little extra effort on the part of the study sponsors or the providers. There is no additional information-privacy risk: the criteria have already been published to the providers, and the patient data does not have to be shared at all. The standard just gets better data about studies to providers in a more targeted way.

In short, improving the liquidity of clinical research data just requires leadership from pharma companies and cooperation from health care providers to prime the pump and adopt standards – and perhaps the encouragement of trusted brokers such as LIVESTRONG to bring the parties together.

Increasing the liquidity of data in ways like this could improve enrollment in studies, especially for rare diseases with small patient populations. Doctors and patients would have a proactive monitoring system that reminds them about all relevant research studies – especially new studies in rare indications – regardless of geography or the distractions in their daily lives.

This blog post came out of a presentation that I recently delivered at Rev Forum, a conference sponsored by Lance Armstrong’s LIVESTRONG Foundation and Genentech. I have worked with LIVESTRONG and various biopharmaceutical companies on new health care information products and apps that take advantage of data liquidity to help patients combat cancer and other difficult diseases.

Wednesday, November 7, 2012

The Need for Data Liquidity in Clinical Research

Much of the content in the next few posts was developed jointly with my close friend and trusted colleague Joris Van Dam.  Joris is truly a superstar and is doing fantastic work around the world related to eHealth and improving the liquidity of data in healthcare.  

In the course of creating new drugs and therapies, organizations in the biopharmaceutical and healthcare industries amass huge amounts of clinical trial data. Unfortunately, much of that data remains locked up in individual IT systems, making key data unavailable to many of the participants in clinical trials: physicians and their patients. As our society intensely seeks cures for cancer and other diseases, this is nuts – and completely unnecessary. It's time for a change. The data required to empower researchers can be shared securely and appropriately.

In the Information Age where we can do so much via the web, our smartphones, our iPads and the "Cloud," we shouldn’t accept word of mouth as the best tool for patients to find the right studies.  It’s clearly not.  Nor should we accept data illiquidity as an obstacle to timely, broad availability of information about clinical trials. 

The Story of Melissa

Meet Melissa. Melissa isn't her real name, but this is a true story.

Earlier this year Melissa was diagnosed with one of the deadliest forms of cancer.  Despite her predicament, Melissa is one of those patients (like many of those involved in the LIVESTRONG community) who decided to take an active interest in her own care. 

She wanted to proactively explore any and all kinds of treatment opportunities, including experimental treatment in a clinical research study.

Melissa was smart enough to understand that clinical trials offer no guarantee of improving her condition, let alone a cure.  But the opportunity to participate in a clinical trial would give her hope. It would give her the ability to fight, and the satisfaction, that through her disease, she might be able to contribute to better treatment and ultimately perhaps even a cure, if not for herself then potentially for others like her. It would give her the feeling that her pain and suffering mattered and that she could make a difference in the world.

Melissa believed that it was important for her to have the opportunity to join a clinical research study. So she spent a lot of time on the Internet, educating herself about her disease and treatment opportunities. One day, using the U.S. National Institutes of Health's ClinicalTrials.gov, she found a study for which she appeared to qualify, one being run not too far from her home.

Unfortunately, ClinicalTrials.gov didn't list the name or contact number of the investigative site. It just said that the study was being run by a large biopharmaceutical company and that she could call the main switchboard number. She called, and they really couldn't help her, so Melissa was stuck. Next, she turned to a clinical-trials matching web site and asked if there was anything they could do to help her.

Fortunately, by chance, the research group at the large biopharmaceutical company happened to know the person who runs that matching web site, and that person connected Melissa with someone who offered to help coordinate.

The folks at the biopharmaceutical company went into their clinical-trials database to identify the study manager, then contacted the study manager, who went to the matching web site to ask whether it was okay to share the site's contact details with Melissa. A few weeks and many phone calls later, Melissa finally had a screening appointment at the clinic. Within a short time, Melissa, through her own perseverance and a lot of luck, was screened and enrolled in the study.

Now the shocker in this story is that... 

....the study investigator was Melissa’s own doctor!  

This was the very doctor who had diagnosed Melissa just a few months earlier.

It’s tempting to think that this doctor dropped the ball.  But in fact he hadn’t. He’s an extremely competent and compassionate physician, not to mention a great study investigator.  He just had a lot going on, and the timing of the start of the trial had been off a bit with the timing of Melissa’s diagnosis. 

It might also be tempting to say that the biopharmaceutical company was at fault for not listing the investigator's contact details on ClinicalTrials.gov. But the clinic in question is based in Europe, where regulations require pharmaceutical companies to obtain explicit consent from each individual investigative site before its contact details can be listed on ClinicalTrials.gov. The company just hadn't dealt with all that red tape yet, and there are no systems set up to make the information flow more easily.

So, this situation was no one’s fault in particular, but rather a matter of circumstances, bad timing and the lack of data liquidity.

Time for Big Pharma and Healthcare Institutions to Step Up to the Plate

ClinicalTrials.gov was an important milestone and a catalyst when it was launched. On its back emerged a slew of applications that help patients and doctors navigate the data and find studies particular to a condition.

These include a range of web applications and, more recently, new smartphone apps such as TrialX and CoActive.

This is what data liquidity is all about:  Making data appropriately available to encourage an ecosystem of applications that help patients and physicians  and ultimately help drive down the cost of health care and improve outcomes. 

But ClinicalTrials.gov was launched more than 12 years ago, seven years before the first iPhone. We now need to go much further and faster in liberating clinical research data, and I believe that the large biopharmaceutical companies and healthcare institutions have the opportunity to take the lead.

GlaxoSmithKline (GSK) recently announced that it will open up access to its clinical trial data as appropriate to support open collaboration among researchers. (You can read more about this decision in this Wall Street Journal article.)

To accelerate clinical research efforts, we also need new systems and software that can improve liquidity of clinical trial data across the complex healthcare ecosystem.

By increasing the liquidity of clinical trial data this way, we could both improve the lives of patients and reduce overall health care costs. 

The information technologies exist.  Attitudes toward information-sharing are changing.   And cost reduction and better outcomes are compelling motivators. 

In my next posts, I'll talk about some specific initiatives that could have a big impact on the liquidity of data in the healthcare industry as well as a number of issues related to consent, where my great friend John Wilbanks is leading the charge. Check out his TED talk.

This blog post came out of a presentation that I recently delivered at Rev Forum, a conference sponsored by Lance Armstrong’s LIVESTRONG Foundation and Genentech. I have worked with LIVESTRONG and various biopharmaceutical companies on new health care information products and apps that take advantage of data liquidity to help patients combat cancer and other difficult diseases.

Wednesday, August 8, 2012

Upstart - Empowering The Next Generation of Entrepreneurs

Today the team at Upstart announced what we’ve been up to over the past three months. 

The Upstart mission is inspiring (see Founder/CEO Dave Girouard's blog post here), the team is world-class, and the culture of the company is…well…a ton of fun. I'm especially thrilled to be joined as a "Backer" by two close friends and trusted colleagues in Boston, Frank Moss and Jim Dougherty. Their leadership in supporting young people who want to turn their passions into careers as entrepreneurs is exemplary. I'm also thrilled to be working yet again with the fantastic partners at Kleiner Perkins, NEA and Google Ventures.

Briefly, Upstart is a new approach to funding and mentorship. Using a crowdfunding model, it allows college grads and would-be entrepreneurs in virtually any field to raise capital in exchange for a small share of their income over a 10-year period. Upstart aims to provide a modest amount of risk capital, paired with guidance and support from experienced backers, to help grads pursue less-traditional, more-inspiring careers.
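A quick back-of-the-envelope calculation shows how the income-share mechanics work. The share percentage, funding amount, and income path below are made up for illustration; they are not Upstart's actual terms.

```python
# Illustrative income-share arithmetic (numbers invented, not
# Upstart's actual terms): a backer receives a fixed share of the
# grad's income each year for a fixed number of years.
def backer_receipts(income_share, incomes):
    """Yearly payments and the total the backer receives."""
    payments = [round(income * income_share, 2) for income in incomes]
    return payments, round(sum(payments), 2)

# Suppose a grad raises $30,000 for 6% of income over 10 years,
# with income rising $5,000 a year from a $40,000 start.
funding = 30_000
incomes = [40_000 + 5_000 * year for year in range(10)]

payments, total = backer_receipts(0.06, incomes)
print(total)            # total repaid over the decade: 37500.0
print(total / funding)  # backer's gross multiple: 1.25
```

The point of the model is visible in the numbers: the backer's return rides on the grad's income path, so the incentive is to mentor the grad toward success rather than just collect.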

I believe that Upstart has the potential to supercharge the US Innovation Economy - as Dave says "When politicians say we need more entrepreneurs, what they mean is that we need more people creating jobs, rather than taking them." I couldn't agree more - it's time to do the heavy lifting required to create more entrepreneurs so that they can do the heavy lifting required to drive job growth over the next two decades. Upstart believes that one of the key factors in creating more entrepreneurs is early intervention in their career development. Some of the key principles that are driving us include:

       Innovative and ambitious young people should be empowered to pursue their passions when they are young.  If we don’t empower them when they are young, they risk being numbed by the bureaucracy of the larger organizations that they often join for lack of a viable alternative path as entrepreneurs. It’s not that big companies are bad. It’s just that young people who have a high risk/return profile can quickly lose their edge and passion as they succumb to the broader interests of a large organization vs. pursuing something that they care about deeply.  65% of our job growth over the past 2 decades has come from companies with less than 500 people - over the next 2 decades it's the Millennials/Generation Y that will create those companies and create the bulk of jobs that our country needs so desperately - we need to empower them as much as possible.  It's time to bet on Generation Y.

        It’s fundamentally valuable for our economy to balance the recruiting machines of large organizations with a social networking-based system that facilitates young people who want to follow more independent, highly individual paths.  This generation just wants to connect with people who could be their mentors in pursuing their interests and passions. Mentors just want to connect with inspiring young people. Upstart makes those connections easy and automatic. 

        Not every young person has a high risk/return profile, but many more young people will pursue entrepreneurial interests if they have a little bit of  financial flexibility at the right time.  Modest amounts of financial support as young people graduate from school, along with some strong support and encouragement from great mentors, can go a long way.  I know because I've had fantastic support from many great mentors during my career.  

        Mentorship is just as rewarding for the mentor as for the mentee - if only we can make the right connections. What’s been missing is a system to connect potential mentors and mentees around shared interests and affinities.  I’m a software guy who is interested in the life sciences, so I’m naturally prone to want to mentor smart, young, enthusiastic people who share those interests.  But I’m also passionate about rugby, so anyone who is involved in rugby always gets more of my time than those who don’t ;)  When young people who share my interests ask for my help, I'm compelled to help them - because I get way more back than I give.

        Most mentors who have the financial resources – when provided with the opportunity to earn a return similar to bonds – would be thrilled to invest their own money in promising young people with similar passions and interests. Upstart matches mentors with the young talent who will power the growth of our economy over the next 20+ years - and does this at scale.  I can't think of a better investment - definitely better than T-Bills.

        To scale entrepreneurship in the US, we need to scale our ability to empower and coach young people who are capable of taking risks and executing on their passions. I’ve spent a lot of time starting companies from scratch. In my experience, the older and more successful people get, the more prone they are to take for granted the value of a small amount of coaching and financial resources early in the career of a budding entrepreneur.  Small amounts of time and money, directed strategically and without friction at scale, can have a huge positive effect on our economy.

        Most entrepreneurs need help, coaching and advice in order to achieve their missions. Most people are not the kind of superstar entrepreneurs that the media popularizes every day:  Steve Jobs, Bill Gates, and so on.  At the same time, these young people contribute the bulk of the job creation through the number of companies they start and the never-ending flow of their ideas, energy, and passion. A big part of what Upstart provides to young entrepreneurs is a network that can fill the gaps in their experience, knowledge and contacts so they can reach their full potential.

I’m honored to be partnering with my great friend and trusted colleague Dave Girouard as the Founding BOD Member at Upstart and thrilled to be the #1 Backer. 

Let’s take the “post-industrial reins” off our brightest, most ambitious young people and empower them to leverage the Information Economy to change the world for the better.  Our country was founded on a core principle of rugged individualism. Let’s coach our young people to take control of their own careers, professional lives and interests and pursue their passions.  We will benefit as an economy and a society in ways that we can't begin to imagine.

Friday, July 13, 2012

Vertica - Remembering the Early Days

I had an awesome visit @ Vertica earlier this week for lunch. Cool new space in Cambridge and so many fantastic new people. Thanks to Colin Mahony for the invite and to all the talented engineers and business people at Vertica for building such an amazing product and a great organization!

Couple of memories that came to life for me during my visit:

Reading the draft of Mike Stonebraker's  "One Size Does Not Fit All" paper and thinking:  "This is the mission of an important new company: to prove that One Size Does Not Fit All in Database Systems."

During my first meetings with the "Vertica professors" - Mike, Dave DeWitt, Mitch Cherniack, Stan Zdonik, Sam Madden and Dan Abadi - thinking "We have an unfair intellectual advantage."  The technical hurdle was set early by this fantastic team.

Looking up at the new duct work from our original server room (at our original office), which Carty Castaldi vented into the conference room because the conf room was so cold and the servers were running so hot ;)

Inspired Duct Work by Carty Castaldi

The thrill I felt the first time that I watched SELECT work on the Vertica DB :)

Our first Purchase Order. Thanks to Emmanuelle Skala and Tom O'Connell for that one and the many more that followed and made it possible to build such a great product :)

At our first company summer picnic at Mike's place on Lake Winnipesaukee: taking Shilpa's husband Nitin Sonawane for a ride on the JetSki and him being thrown 10 feet in the air going over a wave. I thought he'd never talk to me again. So glad that he didn't get hurt and that he talks to me regularly ;) 

Our first big customer deal with Verizon and then the first repeat purchases by V. Thanks to Derek Schoettle and Rob O'Brien for building such a great telco vertical and for doing deals with integrity from Day One.

Sitting in the basement at Zip's house in Chicago with Stan, Zip and Mike as they jammed Bluegrass music and we all ate Chicago-style pizza until the wee hours.   Thanks to Zip and to everyone at Getco for being such a great early customer and partner.

Relief I felt when Sybase admitted that Vertica did not infringe on their IP :)  Thanks to Mike Hughes and everyone else involved for the truly awesome result.

Getting early offers from a bunch of big companies to buy Vertica and thinking "These guys don't realize how important Big Data is going to be and how great our product is and how incredibly talented our engineering team is."  Thank you to our BOD for resisting the early temptation in spite of tough economic conditions at the time and thanks to Chris Lynch for negotiating such a great deal with HP.  

During lunch on Wednesday, realizing that Vertica's product is truly world-class and has proven that one size does not fit all. Special thanks to every engineer at Vertica, especially Chuck B. : you all ROCK!

I have a much more detailed post in the works, about the early days of Vertica and what I as a founder learned from the experience.  Stay tuned for this post in the next few months.

Thursday, June 28, 2012

Enterprise Software Sales and Procurement Need a Makeover

Time to Tie Payment to the True Measure of ROI:  User Adoption

Today, I believe that the only way for IT organizations to truly measure their effectiveness is by analyzing the usage patterns, satisfaction and productivity of their customers: end-users.  Period.  Any IT organization that doesn’t use this kind of measurement won’t be successful over the next 10 years.  As David Sacks, the founder/CEO of Yammer, has said: “Voluntary adoption is the new ROI.”
One of the things that bugs me the most about the software industry is the sub-optimal behavior that exists between vendors and buyers.   Software buyers and vendors are engaged in a dysfunctional dance that wastes money and stifles innovation – on both sides. This dysfunction is driven by outdated software business models, and further complicated by non-technical sales and procurement people whose identity is tied to controlling the buying process instead of optimizing value created for end users.
Simply put, times have changed dramatically; enterprise software sales and procurement haven’t.  Here are the new realities, as I see them.  (Most of these realities also apply to how software is sold to and purchased by federal and state governments.)

12 Realities That Vendors and Buyers Can Ignore – But at Their Peril
1.       The pressure of the “consumerization of IT” is game-changing. This pressure is coming down on enterprise IT organizations, although most still have their heads in the sand.  This pressure is going to get acute over the next few years as the gap widens between what is available to the average consumer on the Internet and what an employee’s IT ecosystem provides at work. This gap will reveal just how wasteful and disconnected most IT organizations are from end-users’ real needs. 
2.      Buyers purchase a ton of software that they don’t ever use, much less realize value from.  The enterprise software emperor has no clothes. 
Buyers need to step away from the standard perpetual license agreement.  Just put it down... and pick up a subscription/term agreement: it will be okay, really. Businesses haven’t imploded for lack of perpetual agreements.  Short-term, subscription-based agreements are working great for all those Google Enterprise and customers who have month-to-month agreements.  Subscription/term agreements help ensure that buyers don’t waste bazillions of dollars buying software they don’t use. 
Now, the finance dudes will come in with their spreadsheets and models that show how you can finance perpetual agreements at a lower cost of capital.  Don’t believe them.  What these models don’t take into account is one important fact: that if you pay a vendor a bunch of money for a product that is supposed to do something and the vendor isn’t on the hook financially for this, then the vendor probably won’t do it.  There is little incentive for the vendor to ensure that the software is deployed, adopted and improved over time. The vendor (particularly if it’s one that’s oriented toward short-term goals) will likely take the money and run. Not necessarily because they are “bad people” but because the business (like everyone else) is under pressure to deliver more, faster, better. 
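To make the finance argument concrete, here is a rough discounted-cash-flow sketch. The prices, the five-year horizon, and the 8% discount rate are all made-up illustrations, not real quotes:

```python
# Rough DCF sketch (hypothetical numbers): the spreadsheet case for a
# perpetual license vs. a subscription, ignoring deployment risk.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

rate = 0.08
perpetual = [500_000] + [75_000] * 4   # $500k up front + 15% maintenance, 5 years
subscription = [180_000] * 5           # $180k/year subscription, 5 years

print(round(npv(perpetual, rate)))     # lower NPV: perpetual "wins" on paper
print(round(npv(subscription, rate)))
# What the spreadsheet omits: if the software never gets deployed, the
# perpetual buyer has already sunk most of the cost up front, while the
# subscriber can simply stop paying.
```

The inequality flips once you weight the perpetual scenario by the real-world probability that the software is never adopted, which is exactly the number the finance models leave out.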
3.      Vendors sell a ton of software that is never deployed (see #2). And many of them don’t care.  As their businesses have matured, many of these vendors have sacrificed their souls to short-term thinking and financials. They are no longer driven by missions to deliver value or great experiences for end-users.  By comparison, the consumer Internet companies that have moved into the enterprise software market have used alternative models and behaviors and begun to disrupt the ecosystem.  Those customers that have embraced new usage-oriented models have benefited significantly. Those customers that haven’t are just wasting money and causing their end-users to continue to suffer with outdated and expensive technology that preserves the short-term job security of narrow-minded IT staff members.  
4.      Software vendor sales people and customers’ procurement staffs are disconnected from how software is used and developed in the enterprise.  Generally, neither procurement folks nor salespeople understand the technology or how it’s used. They are managed to objectives that have nothing to do with successful deployment, much less adoption of the software or technology.  Procurement people generally care about discounts and sales people care about commissions.  It’s time to get these folks out of the way or give them incentives that align with effective adoption and value created for end-users.  One of the powerful benefits of SaaS is that end users can buy their own capabilities and just expense the cost.  This is how many customers start with, and I’ve seen this same adoption pattern occurring in infrastructure at companies such as Cloudant.
5.      Software that sucks.  There’s a disconnect between the amount of money that big companies spend on software and the value they get from it. Many big companies only resolve this disconnect over long intervals, after tons of money has been wasted on useless technology projects that aren’t aligned with users’ requirements.  Moving toward shorter-term subscription models helps reinforce the need for software companies to create value within reasonable periods of time.  In other words, deliver software that doesn’t suck.
Part of the reason corporate IT projects take so long is that businesses don’t push their vendors to deploy quickly or drive adoption. Here’s an example, from the Front Office/Customer Relationship Management sector of the software industry.  Siebel Systems launched its system in the early to mid-1990s using the traditional third-party installed and heavily configured model. (I suspect that the rationale was: “It worked for ERP, so let’s do the front office the same way.”)  Unfortunately for Siebel, things didn’t play out this way. When launched its service, customers realized that they could get immediate adoption, usage and value by just signing up. These customers perhaps didn’t get all the customization that usually came with traditional enterprise software, but most of that customization was being sold to big companies by consultants who wanted to make money as part of the enterprise IT ecosystem.  A lot of smaller customers didn’t need the customization, and ended up paying for overhead that they didn’t need.
A general rule of thumb for IT organizations was that you had to spend an additional 2X-5X in services to get a third-party enterprise software application deployed and working.  This never made sense to me, but I participated in the dysfunction along with everyone else for many years, on both the buyer side and the sales side.  As this thinking became more broadly accepted, it became a self-fulfilling prophecy: vendors could make money customizing the solutions for customers (regardless of their actual need for the customization), so it was in their best interests to create software that required a bunch of consulting to get it working for customers. (Software that sucks, in the classic sense of “suck”: time, money, corporate IT resources.) 
With the evolution of SaaS, all vendors now face more accountability, like it or not. knocked it out of the park as an independent business, while Siebel sold out to Oracle and has bounced along the proverbial third-party software bottom, collecting maintenance on software that it sold 10 years ago to big companies that are not capable of switching to
6.      Traditional business models encourage vendors to extract as much money from their customers as quickly as possible – regardless of whether the software works or the customer actually needs the software. During the 1980s and 1990s, this “sell first, ask questions later” model became standard practice for technology companies, based on the success of proponents like Oracle.  But now, we’ve evolved.  Customers shouldn’t stand for it. There are better alternatives.  And vendors in just about every enterprise-software category should realize that it’s only a matter of time before someone comes along and provides better solutions that work for users quickly.  Let the hangover of enterprise software purchases begin. 
7.      The perpetual-license model creates perverse incentives for both buyers and sellers.  The subscription/term license model creates a much more rational incentive for the seller of technology to deliver both short-term value (through adoption) and long-term value (through improvement to the software) for customers’ end-users.  With a perpetual license model, the seller gets too much value up front, misaligning its interests with those of the buyer.
8.      Traditional “maintenance” is just as dysfunctional as the perpetual license that it stems from.  Fifteen percent (15%) maintenance is not enough money to innovate and improve a new system. Therefore, vendors’ business models put them in a position where they have to “upsell” their customers’ perpetual licenses for some additional usage or a new product. 
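The arithmetic behind the maintenance squeeze is simple enough to sketch. All figures below are hypothetical:

```python
# Hypothetical figures illustrating why 15% maintenance is a thin
# revenue stream for funding ongoing innovation.

license_fee = 100_000                  # one-time perpetual license
maintenance = 0.15 * license_fee       # 15% per year = $15k

# Years of maintenance needed just to match the original license fee:
years_to_match = license_fee / maintenance
print(years_to_match)  # ~6.7 years

# Five-year vendor revenue per customer under each model:
perpetual_5yr = license_fee + 5 * maintenance   # revenue mostly up front
subscription_5yr = 5 * 40_000                   # at a hypothetical $40k/yr
print(perpetual_5yr, subscription_5yr)
```

Under the perpetual model most of the revenue lands up front, so the ongoing engineering budget per installed customer is thin, and the upsell treadmill follows; a subscription spreads the revenue out and keeps the vendor on the hook for improvement.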
9.      Many customers should be happy to pay larger subscription fees over time in exchange for significant probability of greater success, user satisfaction and innovation.  They just don’t realize this, because business owners, end-users and engineers aren’t involved in procurement processes.  This perpetuates a lack of accountability for vendors and feelings of helplessness among users and consumers of these software systems. 
10.   Multi-tenant Web services present a compelling alternative. The broad availability of commercial multi-tenant hosted web services (epitomized by Amazon Web Services and Google Apps) is creating a widening gap.  On one side of the gap are buyers and sellers of software who are merely perpetuating outmoded models for consuming and selling software.  On the other side of the gap are software buyers that demand that their vendors deliver value through reliability and innovation every single day – and have the means to measure this. 
11.    FUD continues to rule – for now.  Many in the procurement and sales establishment use FUD (Fear, Uncertainty, and Doubt) arguments to slow the adoption of new software-as-a-service models.  I can understand why: new business models, including SaaS, challenge their very existence.  As a result, however, their customers are saddled with a sub-optimal state of productivity for their IT systems and infrastructure.  This is not sustainable, as IT organizations are under dramatic pressure to reduce costs significantly.
12.   IT organizations that embrace new software models are more productive and efficient.  They can focus more on high-leverage skills like networking and integration – and worry less about lower-value activities such as racking and stacking servers or building and releasing software. These benefits have been documented among the likes of Google Apps enterprise customers (Genentech for example) as well as large companies that have embraced Amazon Web Services (Netflix for example).  
I believe that all software contracts should tie payments to end-user adoption.  Monthly software subscription deals can be used to accomplish this relatively quickly: if users adopt, you pay; if they don’t, you don’t pay.  
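The adoption-tied billing this implies could be as simple as metering active seats. A minimal sketch, with hypothetical prices and a hypothetical `monthly_invoice` helper:

```python
# Minimal sketch of adoption-tied billing (hypothetical prices):
# the customer pays only for seats that were actually active that month,
# not for every seat provisioned.

def monthly_invoice(active_users, price_per_seat):
    """Bill only for users who actually used the product this month."""
    return active_users * price_per_seat

# 1,000 seats provisioned, but only 350 users logged in this month:
print(monthly_invoice(350, 25))  # 8750 -- vs 25000 if every provisioned seat were billed
```

The measurement side is the easy part: any SaaS vendor already has the usage logs; the hard part is agreeing to put revenue at risk against them.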
For software industry old-timers this is heresy.  But it’s time to leave this one in the rear-view mirror – or eventually suffer the consequences.  The packaged third-party software industry is due for a reckoning - it's time for vendors to modernize business models that depend on bilking customers for perpetual licenses and maintenance streams on software that is never used.  And customers should start buying software as a service instead of overpaying for big perpetual licenses that they may or may not ever use.