
CIPC Blog

The PIP 2.0: A Tool to Measure, Compare and Monitor Primary Care Integration

Tuesday, May 25, 2021


A Behavioral Health staff huddle at the Barre Family Health Center

For the past 13 years, my clinical practice has been at the Barre Family Health Center in central Massachusetts. Barre, which is pronounced "Barry," is often described as a "rural" community. And in many ways, the Barre Family Health Center feels like a rural practice. The PCPs I work with have a broader scope of practice than many of their colleagues in the big city. Barre has a single central street lined with one- and two-story buildings, and among the few stores there are a lumber yard and a shop that sells woodstoves. But when I travel in the western US, or in Texas, or in Alaska, I often chuckle and admonish myself for thinking of my practice as "rural." I suppose that rural can be a bit subjective, or at least relative. In everyday language, we don't really have a great way to describe HOW rural a place is.

My experience in Barre has led me to conclude that rural and urban communities often benefit from different approaches to organizing the integration of behavioral health and primary care services. If your town is too far from a psychiatrist or a methadone clinic, you are going to need to get those needs met in primary care, or they won't be met at all. The demands and resources of each community need to be considered when organizing primary care services. One-size-fits-all approaches to integrating behavioral health aren't going to work.


Aside: In scientific parlance, we think of the variable urban/suburban/rural as being nominal.  Nominal categories are like different buckets.  They are different but one isn’t necessarily MORE than the other.  (Unless you are specifically referring to population density.)
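To make the bucket idea concrete, here is a minimal sketch in Python (my illustration, not part of the PIP work; the labels are just examples) of how a nominal variable behaves: you can count the buckets, but you can't rank them.

    import pandas as pd

    # Nominal variable: different buckets, but none is "more" than another.
    setting = pd.Categorical(
        ["urban", "rural", "suburban", "rural"],
        categories=["urban", "suburban", "rural"],
        ordered=False,
    )

    print(pd.Series(setting).value_counts())  # counting per bucket is fine
    # A comparison such as `setting < "urban"` raises a TypeError, because an
    # unordered Categorical has no notion of "more" or "less."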


During my 13 years in Barre, our behavioral health (BH) team has grown. We are currently three psychologists, one clinical social worker, a nurse-led buprenorphine team, and two or three psychology or social work trainees, depending on the year. It certainly feels to me as if our team is "more" integrated now than we were when I arrived in Barre. Yet when I review fidelity checklists or use tools such as the IPAT, I often conclude that our practice is not "fully integrated." This notion of being "collocated" vs. "fully integrated" suggests I have more work to do.


Another aside: The continuum from "no integration" to "collocation" to "fully integrated" is different from a nominal category. You may recall from your statistics classes that we refer to these as ordinal categories. In this example, the idea is that "fully integrated" practices have MORE integration than "collocated" practices.
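And a matching sketch of the ordinal case (again, my own illustration with made-up labels): marking the categories as ordered lets us say one level is MORE than another, though it says nothing about how much more.

    import pandas as pd

    # Ordinal variable: the levels have a rank order.
    integration = pd.Categorical(
        ["collocated", "no integration", "fully integrated"],
        categories=["no integration", "collocated", "fully integrated"],
        ordered=True,
    )

    s = pd.Series(integration)
    print(s.min(), "->", s.max())  # no integration -> fully integrated
    print(s > "collocated")        # only "fully integrated" ranks higher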


But does MORE integration mean better healthcare?  Maybe.  Probably.  I have been working for 13 years to increase the degree of integration in Barre.  I sure hope more integration results in better healthcare!

Ok then.  More integration is good.  But there are lots of ways to do MORE integration. 

And if I hired 10 different integration consultants, I’d get 10 different ideas about what MORE integration looks like in practice.  I am not even sure if my practice is really rural or not.  How am I going to compare Barre’s integration to the integration occurring in other practices?

These sorts of questions have motivated a group of my colleagues and me to develop the Practice Integration Profile (PIP 2.0). We wanted to create a measure of integrated primary care that could be used to compare one practice to another and to monitor how much MORE or LESS integration was occurring in a practice over time. We set out to create a measure of integration, like a ruler, that could tell us how much MORE integration one practice had than another. We have worked hard to improve the psychometric performance of the PIP 2.0. For example, we needed to be sure that each time we used the ruler it gave us the same reading (reliability), and, equally important, we needed to be sure the thing we were measuring was actually integration (validity).
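As a rough illustration of what the "same reading every time" check can look like (a generic sketch with made-up numbers, not the PIP 2.0's actual reliability analysis), test-retest reliability is often summarized as the correlation between two administrations of the same measure at the same practices:

    import numpy as np

    # Hypothetical integration scores from two administrations of the same
    # measure at the same seven practices (invented numbers, for illustration).
    first_pass  = np.array([42, 55, 68, 71, 80, 35, 60])
    second_pass = np.array([45, 52, 70, 69, 78, 38, 63])

    # Values near 1.0 mean the "ruler" gives nearly the same reading each time.
    r = np.corrcoef(first_pass, second_pass)[0, 1]
    print(f"test-retest correlation: {r:.2f}")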

To establish validity, we used the Agency for Healthcare Research and Quality's "Lexicon" as our definition of integrated care. We then tested the PIP 2.0 in the real world to establish its reliability. Our experiences with the PIP 2.0 suggest its reliability is equivalent to that of other similar measures of health service delivery. The PIP 2.0 gives us a rating of how integrated a practice is on a scale from 0 to 100. Practices with a score of 80 are more integrated than practices with a score of 40. In this way, the PIP 2.0 is an ordinal scale.
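The practical upshot of an ordinal 0-to-100 scale is that scores support ranking but not arithmetic claims. A toy sketch (the practice names and scores are invented):

    # Hypothetical PIP-style scores for three practices (invented numbers).
    scores = {"Practice A": 80, "Practice B": 40, "Practice C": 65}

    # Ordinal use: ranking practices from least to most integrated is fair game.
    for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
        print(f"{name}: {score}")

    # What an ordinal scale does NOT license: claiming Practice A is "twice as
    # integrated" as Practice B just because 80 is twice 40.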


A final aside: While the PIP 2.0 is an ordinal scale, we aren't yet sure if it is an interval scale. To be an interval scale we need to calibrate it further to be sure that the distances between each number on the scale are equal. We need to be sure that the difference between a score of 60 and a score of 50 reflects the same amount of integration as the difference between a score of 40 and a score of 30. (Claiming that a practice scoring 60 is actually twice as integrated as a practice scoring 30 would require an even stronger, ratio-level scale.) Our team is just beginning this work now.


On Tuesday, June 15th at noon, I will lead a free webinar to introduce people to the PIP 2.0.  You can learn more about the webinar on the CIPC website under the "Short Courses, Webinars, Videos" tab. 

And you can access the PIP 2.0 here: https://go.umassmed.edu/CIPC/PIP