Advisory Committee for Environmental Research and Education, September 12, 2012




[laughter]

DR. ROBIN: So, I'm going to defer to my colleagues here who've done some evaluation, and maybe they can speak about some of their experiences with some of their programs, because we're trying to explore that, and, like I said, we struggle with this quite a bit. In terms of the questions, this was put together by a large group of people, so this is sort of our laundry list of ideas. There's an order around the goals, but we really just wanted to get them all out there to you.

DR. TSAPOGAS: So, there are various ways that I can perceive approaching this. One is we have all these 2,500 proposals, and these PIs come from somewhere. I mean, some of them come from geological science, some from mathematics; they have a home, and yet they're involved as PIs -- we also have the co-PIs, as well. So, part of this is trying to identify the major players in each one of these proposals, what degree field they come from, and to what extent they're actually doing research on sustainability -- not only in the proposal here, but also in other proposals that might be construed as part of the whole sustainability effort. That's just one way of trying to get at it.

FEMALE SPEAKER: So, when we're doing evaluation we usually start with a logic model, and I understand that you have a draft of a logic model. That will tell us what outcomes are sought -- the immediate, intermediate, and long-term outcomes. And, of course, whether we can capture long-term outcomes is another issue, because some of these things really happen many years after the award is closed. In some programs in that situation, we actually have monitoring systems that ask questions after the award is closed. So, the PIs actually go online and log in and put in information about students, or, for the [unintelligible] program in engineering, we ask about sales and the number of employees in the company and things like that. But that's the exception; it's not the rule.

So, these questions, I guess, were put together by the program directors, and they have to map back, I guess, to the logic model. So I guess what they want to know is whether the questions make sense, and then whether you have a logic model to show what we call the theory of action. Why is it that we have the program in the first place? What is it that we want to know, or want to change? What type of research do you want to fund? So, there is a line between the logic model and the questions. It is a laundry list, but at some point it has to go back to the logic model, so that we know what outcomes are sought and what the indicators for those outcomes are.

DR. ROBIN: So, I think we do have a draft logic model, and we'll have Beth circulate it to you all. It's sort of in its infancy. When we first started this, like I said, those of us who got involved don't have an evaluation background, so we started talking to some external groups about how we go about doing that. As we went along, we kept doing more and more things until we realized, "Oh, we're sort of in the process of putting this all together." It wasn't linear how we did it; we kind of did the questions and then we did the logic model. I mean, this kind of speaks to SEES itself. But we can definitely get that to you. And I have to say Beth was really instrumental; she just sat down one day and started to say, "Well, let's see how we go about doing this." We've gotten some feedback from different program directors, and we will certainly share that with you all.

DR. TRAVIS: So we'll go to Molly, then Stephanie, Tony, and Joe. Then Ivor and David.

DR. BROWN: I'm interested in what your thoughts are about how the information will feed back into the programs, since you're doing rolling solicitations -- so, if you were to get answers to your short-term goals, how would that influence your future program direction? Because I think there's such richness and such complexity, and I really want to speak to Lilian's point -- too bad she isn't here -- about how important it is to leverage, to not have, you know, biological complexity asking such a similar question, so that you find people proposing to a phone [spelled phonetically] call, several calls in SEES simultaneously, with a little paper this thin. Not that that's particularly a bad thing, but there's so much work to do, and I really want to make sure that there's a feedback mechanism.

DR. ROBIN: Right. So, just on a programmatic level: We now have 16 programs, okay, and we roll these out. How does this program differ from that program? Are we getting the same thing in programs? We're talking now about some programs ending. You know, in our first set of programs, the first five, the Climate Change Education Partnership is moving out; Dimensions of Biodiversity is a 10-year initiative, so that will be staying; but Water Sustainability and Climate, Ocean Acidification, and Earth System Modeling -- okay, what do we do with those? Are we seeing those in the core programs? So, programmatically it's very important for us, because we've got to make these big decisions in terms of, do we continue or not? And, I think, Sarah's data that she showed from the CNH program -- they're not seeing a decrease. [laughs] In fact, they're seeing an increase in the number of proposals coming to that program, and we're still bringing in all these new programs. I mean, you know, the first question is, "Well, why do you need a new program? You have CNH." CNH cannot do, you know, everything. So these evaluations certainly help us make programmatic decisions, but nobody wants to run a program if we have it somewhere else in the Foundation. It's a lot of work. So, just in a practical manner, that's sort of the first order of business for us.

DR. TSAPOGAS: So, can I just address that question, because I think it's a very important one. I mean, the nature of the program, as Jessica mentioned, is just very complex. It involves various pieces, some pieces that began earlier than others. It involves programs that are changing. It involves programs that are dropping out. Part of this whole effort of evaluation is to get a handle on it -- it's for us to gather the information to help manage the programs themselves. So, I mean, there clearly has to be a historical component to the evaluation, and there has to be a portfolio review component to the evaluation. From where I sit, those are components that I think are important. But I think it's more important to hear from you what you think is important, and I think these questions that you're raising are helping us, you know -- or affirming some of the thinking that we have.

DR. TRAVIS: Stephanie?

STEPHANIE PFIRMAN: This is just a minor point, but you're talking about data sources and tools, and Fred asked, "How do you measure integration?" And Alan Porter -- I don't know if you're aware of his metrics, but it's really cool. What he's done is he's looked at the fields of the references you cite in a paper or in a proposal, and he doesn't just measure, like, how many fields there are, but he can identify how far apart they are. And so, since you have the bibliographies of the set of references, you could use an already existing tool. Then he also tracks people's research over time -- and this was something that Erin Lee [spelled phonetically] developed at the University of Arizona -- which is looking at the key words associated with your publications over time: do they change, do they diversify over fields, and how far apart are they. So, those are just two elements.
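
For context on the metric being described: Porter's integration score is typically computed as a Rao-Stirling diversity over the subject categories of a document's cited references. The sketch below is a minimal illustration of that idea in Python; the `integration_score` helper, the field labels, and the similarity values are hypothetical placeholders rather than real Web of Science categories or published cosine-similarity data.

```python
# A minimal sketch of a Porter-style integration score (Rao-Stirling
# diversity): 1 minus the sum, over all pairs of cited fields, of
# similarity(i, j) * p_i * p_j. Field labels and similarity values
# below are illustrative placeholders, not real subject-category data.
from collections import Counter
from itertools import product

def integration_score(cited_fields, similarity):
    """cited_fields: one field label per cited reference.
    similarity: dict mapping (field_a, field_b) -> similarity in [0, 1];
    missing pairs are treated as fully dissimilar (0.0)."""
    counts = Counter(cited_fields)
    total = sum(counts.values())
    p = {field: n / total for field, n in counts.items()}
    score = 1.0
    for a, b in product(p, repeat=2):
        s = 1.0 if a == b else similarity.get((a, b), similarity.get((b, a), 0.0))
        score -= s * p[a] * p[b]
    # 0.0 = all references in one field; higher = more, and more distant, fields.
    return score

# Hypothetical proposal bibliography spanning three fields.
refs = ["ecology"] * 10 + ["economics"] * 6 + ["hydrology"] * 4
sim = {("ecology", "hydrology"): 0.4,
       ("ecology", "economics"): 0.1,
       ("economics", "hydrology"): 0.1}
print(round(integration_score(refs, sim), 3))  # 0.498
```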

And then, I guess, I would like to come back to the question that I raised earlier -- because you had said that you would address it under evaluation -- which is these 10 percent levels. You know, they're not sustainable in terms of human resources, in terms of success rates, and maybe that's a bigger question we come back to tomorrow or at some other point, or you could address it now.

DR. ROBIN: Well, like I said, up until this point we really rely on the program directors to explain to us the individual programs and what's going on. We're looking at some of these tools so we can get a larger handle on that. If you look across the Foundation, I think on average the success rate is about 25 percent, but in biological sciences it's 10 percent. So for some communities this is a pretty standard success rate; for others, not so much. But, at the end of the day, if you look at some of these programs -- how many proposals come in and how many are being awarded -- is that the best use of resources? Those are very tough questions, but what we need to do is show the data, what's going on. And so, we are hoping to look at the success rates. There's a big concern across the Foundation in terms of workload. Like I said, we all run these programs in addition to our traditional programs. So we are planning to look at that, but we're trying to get a handle first on what's being done and in what cohesive fashion.

I want to introduce another colleague, Connie Della-Piana from Education and Human Resources, another one of our evaluation gurus across the Foundation --

[laughter]

-- so you're going to answer more questions.

DR. TRAVIS: Tony, give Connie a chance to answer a question.

DR. JANETOS: Okay, I'll do my best. I love this approach to the, sort of, short-term questions and long-term questions. In some cases -- and I'm sure they're going to evolve over time -- we seem to be looking for a particular answer, but I can't tell how you're going to evaluate it. Some of it is just data gathering and categorization, but when you ask a question like, "Are career patterns of recipients different or similar to more traditional career patterns?" you're looking for something, and you'll just have to think that through. If it's different, is that a good thing? Is that what you expected? Is that what you're hoping for? Or not? So --


DR. ROBIN: So, we're really looking --

DR. JANETOS: I'm not making a value judgment --

DR. ROBIN: No, no, no.

DR. JANETOS: -- but I think you got to decide.

DR. ROBIN: But I think we're really looking for feedback from you: are these the right questions? I mean, we want to hear from you -- like, you know, "maybe you should be asking this sort of question, or that sort of question." So, we're not committed to our questions. We've used this as sort of a springboard to get your feedback, and John told me before the meeting started, "Get them to answer the questions."

[laughter]

DR. JANETOS: So I do have a suggestion for a class of questions that's almost absent from this. I mean, these are mostly questions about what was proposed and what was funded. I think it might be interesting to turn your attention to the body of solicitations, you know. If SEES is going to have a long-term systematic effect on the Foundation, then what I would hope to see -- what one might expect to see -- is that there are some similarities in future solicitations from the disciplinary programs, that they've started to incorporate elements of sustainability and interdisciplinary research. And you won't know if you don't ask.

DR. TRAVIS: Joe?

DR. FERNANDO: Yes, it's more of a comment. I was kind of struck by this one, you know, questions such as whether, in the absence of such a program, and -- or/and [unintelligible] traditional NSF program type questions. I would have imagined that, before SEES, you had some information about all the proposals that had proposed this type of SEES-type activity. For example, had anybody proposed something like risk assessment, social sciences, and those kinds of things together? Probably very little, right? I mean, a traditional NSF program probably cannot accommodate that type of multidisciplinary activity, and nobody would dare to send a proposal expecting any success. So SEES definitely can answer that one, and I suppose it has, because of the type of activities you have put together. This is the area you've gone into, so that people come together and submit these. So those have, I think, close to a 100 percent success rate. That can be looked at by looking at your previous proposals, from just before SEES started.

The next one is the long term. What are you going to expect from SEES? I would have imagined that the basic scientific knowledge would have come from the traditional programs, but the multidisciplinary knowledge -- something you do a little bit of and many, many people can use and come in on -- that kind of knowledge would have come from the SEES programs. How to differentiate the two, you have to think about, but certainly that's one of the ways: how much fundamental knowledge and how much cross-disciplinary activity has come out?

On the long range, you know, how many people who graduated from SEES-funded activities have gone on to academic positions doing more sustainability-type research, and how many started a company? That will come out over the long range. How many students working on these kinds of activities have gone on to careers in industry, in sustainability? So you can probably do it with metrics like that, and look at the long-term effects. And also, some of the knowledge coming out of SEES goes to local governments, especially the cities that are very interested in adopting it; the city of Chicago is one of the places craving new knowledge to put these things into practice. Have they adopted these types of activities coming from SEES? The governments are stakeholders. So those are the kinds of metrics you can [unintelligible], but it requires some thought to put together. So, some thoughts, right?

DR. TSAPOGAS: I'm glad you mentioned the issue about existing information, because we have information here that we can begin using, or at least begin thinking about. There are existing projects that we supported that have a sustainability focus, even in the PIRE program. When I look back at those 59 projects, some of them, from before we joined the sustainability effort, were really sustainability focused. So the question is, how many were sustainability focused, and how has that changed as a result of us being part of the interagency sustainability effort? That could be answered now.

DR. FERNANDO: Now, yes.

DR. TRAVIS: Ivor?

DR. KNIGHT: Yes, this kind of goes along the lines of what Tony said about questions where you're kind of looking for a particular answer. I was looking at the long-term goals there on the private sector, the partnerships: "How has the private sector been engaged and affected by the SEES program partnerships? Is the private sector able to more rapidly identify and employ technologies and methods to address sustainability issues?" That seems like a goal that you have that you want to evaluate, and I think that's going to be really difficult.

And here's a reason: At Canon, I talked last time about our shift to looking at the product life cycle, including what the customer does with the product after they're done with it. We're moving in a big way in this direction. Factories are being organized in a way where, you know, consumable products are being totally recycled and made into new consumable products, things like that. Energy is a huge area for us; I was in Japan a couple weeks ago, and we're dealing with the energy shortage there by simply turning off the air conditioning, which makes it very uncomfortable, but it works, right? And so, there have to be smarter ways to do these things, et cetera. And, as a company, we're continually scouring any kind of information that allows us to make those changes. So, you know, it's difficult to evaluate, but I think one of the things that has to happen is, if you want these things to be more rapidly integrated by the private sector, then you've got to put them out there, you know, so I can find them on here [laughs]. Because we're going to seek, and we're going to seek very strongly, but how you measure that, I don't know how you're going to measure that. But it's going to happen. I can tell you, the answer is yes.

[laughter]

But how are you going to measure that, I don't know. And we should probably talk about that. But I think it's an important thing to capture.

DR. ROBIN: I think, Marge, this morning, you talked about some of the overarching NSF goals in terms of the Innovation Corps -- moving these things toward technology more rapidly. So maybe there's some overlap there, in that there's now a strong emphasis at NSF on these technologies, on making them more rapidly available. A lot of them will be sustainability related, so, again, that's another discussion we have to have internally within the Foundation, with some of the groups that are doing more of that than we currently are.

DR. KNIGHT: Just a quick follow-up. I agree with that, but in those programs you've got a linear path -- you've got a certain private sector partner, you've got a certain student, and everything -- whereas I think the real benefit is going to come from "you build it, we'll come," you know. That's -- [laughs] -- that's really what happens. But evaluating that is tough. It's easier to evaluate the set partnerships that you create; it's not so easy to evaluate the kinds of things I'm talking about, which may, in fact, be larger in their impact.

DR. CAVANAUGH: Right. Yeah, I was thinking exactly the same thing, and I don't know, but we might need to spend more time talking with the folks in engineering about how they've looked at any of it -- you know, whether the knowledge that is created is being picked up by companies -- because this is something that engineering is very interested in for their programs at large. So, you know, can we learn something from what that directorate is doing to apply to this? Because what I was talking about was a set program where you have, you know, a certain number of people who you've brought into a program, and you can easily track what happens to them. That's very different from, you know, giving a bunch of awards without necessarily thinking about whether they were going to have a spin-off or not. But did they? Different question, different situation.

DR. TRAVIS: Okay, got a long list of people. David? Then we’ll get to everyone.

DR. BLOCKSTEIN: Thank you. I've got several comments and questions in terms of my own participation. As I look at the portfolio over here -- I ran a workshop for one of the programs in the beginning, I've been the PI on another one, I've been an unsuccessful applicant on another one, and I've been a reviewer on another. And it seems to me that the questions, as you have them right now, are really looking in a very specific way at the programs that exist and within the programs, and I think that you need to also step up the scale to look at the completeness of the portfolio as it exists relative to the goals. I think it still is a little bit of a collection of cats and dogs, in the sense that there are some programs that, as you know, were grandfathered in. It's almost like evaluating a directorate, in a certain sense, which is what we're trying to do here. You know, now the Climate Change Education Partnership is going to be moved out for some reason that hasn't been articulated to us. Another, Dimensions of Biodiversity, isn't really a sustainability program, in my opinion. And so, I think that there needs to be a higher level of evaluation in terms of, is there some [unintelligible] programs relative to the goals? Are there other areas that are critical to sustainability that either are being addressed in other parts of the Foundation and potentially should be part of the program, or that don't exist at all and need to be added?

I would also add two other aspects of the evaluation that I don't see represented here yet. One is a diversity question, in terms of not only developing the diversity of individuals -- that's clearly a big goal of NSF -- but also the diversity of the types of institutions that are getting funded. We saw this at the panel that I was involved in: basically, the R1s rise to the top and the MSIs fall to the bottom. And then the third is, in terms of looking at success rates -- and I don't know how much this happens in the Foundation as a whole -- I think it's useful to look at two measures within the success rates. One is the overall success rate that you measure; the other is a metric of how many proposals that are quote, unquote "fundable" -- regarded by the panel as highly competitive or competitive -- are not getting funded. For example, in the one that I was involved in, we felt that maybe the number of awards could have been doubled, but it couldn't have been tripled, just because there weren't enough qualifying proposals. So, I think those would all be useful things to evaluate.
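
To make the two suggested measures concrete, here is a minimal sketch in Python. The `success_rates` helper, the rating labels, and the panel data are hypothetical illustrations, not actual NSF review categories or results from any competition.

```python
# A minimal sketch of the two success-rate measures suggested above:
# the overall success rate, and the share of "fundable" proposals
# (rated highly competitive or competitive) that went unfunded.
def success_rates(proposals):
    """proposals: list of dicts with 'rating' and 'funded' keys."""
    funded = sum(1 for p in proposals if p["funded"])
    fundable = [p for p in proposals
                if p["rating"] in ("highly competitive", "competitive")]
    unfunded_fundable = sum(1 for p in fundable if not p["funded"])
    return {
        "overall_success_rate": funded / len(proposals),
        "fundable_but_unfunded": unfunded_fundable / len(fundable),
    }

# Hypothetical panel: 20 proposals, 8 rated fundable, 4 funded.
panel = ([{"rating": "highly competitive", "funded": True}] * 4
         + [{"rating": "competitive", "funded": False}] * 4
         + [{"rating": "not competitive", "funded": False}] * 12)
print(success_rates(panel))
# {'overall_success_rate': 0.2, 'fundable_but_unfunded': 0.5}
```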

DR. TRAVIS: Okay. Stephanie?

DR. PFIRMAN: I'm following right on his point: look at the proposals, and then look at what gets funded. You mentioned that you did this text analysis, but often what happens in the evaluation itself is that you tend to look at the actual funded proposals, and I think it's interesting to look back at what was proposed. So, the type of institution, as David said, male versus female, career stage or years post-Ph.D., and content area as well -- I think this would be really interesting to help you shape future programs, you know. If all of the younger people, you know, from minority-serving institutions or small colleges or something, don't get through the final stage, then what can you do? Because clearly there's a lot of interest there. Is there some other solicitation that could, you know, try to meet that need? So I think, you know, comparing the proposals in much more detail than NSF has done in the past could be really interesting with this solicitation.

DR. TRAVIS: All right. We have about 15 minutes. I'm going to ask Fred if he wouldn't mind deferring to give three folks who have not yet spoken in this little section a chance to speak, and that would be Bruce, Eric, and Mary Catherine. Then we'll come back to Fred. So, Bruce?

DR. LOGAN: In looking at these questions, I'm kind of wondering what the control is in the experiment -- getting back to Joe's comments. You know, is the goal here to show something that's above average, or in some way just significantly different from the average? So, is our control group X number of publications, or maybe, as Stephanie said, a diversity index -- a Shannon index of diversity of publications, if you will -- or a principal component analysis, which shows a cluster over here and a cluster over there? To have that sort of information -- and you may have this in mind -- we don't need to see the details, but we would need a sense that there are these metrics that you're collecting on your control group, or within the existing infrastructure or reporting scheme, and how this is going to be evaluated against that.
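
For concreteness, the Shannon index mentioned here is H = -Σ p_i ln p_i over the proportions p_i of publications falling in each field; a more even spread across more fields gives a higher H. Below is a minimal sketch comparing a hypothetical SEES-like portfolio against a hypothetical control portfolio; the field labels and counts are made up for illustration.

```python
# A minimal sketch of the Shannon diversity index H = -sum(p_i * ln p_i)
# over the proportion of publications in each field. Field labels and
# counts are made up for illustration.
import math
from collections import Counter

def shannon_index(field_labels):
    """Shannon diversity over a list of per-publication field labels."""
    counts = Counter(field_labels)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# Hypothetical portfolios: a SEES-like group versus a control group.
sees_pubs = ["ecology", "economics", "hydrology", "ecology", "sociology"]
control_pubs = ["ecology"] * 4 + ["hydrology"]
print(round(shannon_index(sees_pubs), 3))     # 1.332: fields spread evenly
print(round(shannon_index(control_pubs), 3))  # 0.5: concentrated in one field
```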

So, you know, one of these questions struck me. You ask, "Have project findings arisen as a result of this support that may not have occurred without such a program?" Yes.

[laughter]

Because that's, by definition, what we heard -- that this could not have been funded otherwise -- so it's yes, so we're done. I'm not being -- I'm just trying to say that it's probably -- and you asked for this critical feedback, so --


