
Prairie State Wire

Tuesday, November 5, 2024

Illinois Criminal Justice Information Authority, Traffic Stop Data Task Force met Sept. 26


Patrick Delfino - ICJIA Board Member | Illinois State's Attorneys Appellate Prosecutor

Illinois Criminal Justice Information Authority, Traffic Stop Data Task Force met Sept. 26.

Here are the minutes provided by the task force:

Task Force Member Attendance (via WebEx)

Dr. Christopher Donner, Department of Criminal Justice & Criminology, Loyola University Chicago

Jack McDevitt, Professor of Criminology and Criminal Justice, Northeastern University; Director, Institute on Race and Justice

Tyrone Forman, Professor of Sociology and African American Studies, UIC

Major Jody Huffman, #5964, Illinois State Police North Central Patrol Command

Stephen Chung, Commander, Chicago Police Department

Joe Leonas, representative from the Illinois Association of Chiefs of Police (appointed August 20, 2024)

Jim Kaitschuk, Executive Director, Illinois Sheriffs' Association

Donald "Ike" Hackett, Illinois Fraternal Order of Police

Khadine Bennett, Director of Advocacy and Intergovernmental Affairs, ACLU

Rev. Ciera Bates-Chamberlain, Executive Director, Live Free Illinois

Gregory Chambers, Illinois Coalition to End Permanent Punishments (Excused)

Amy Thompson, Impact for Equity Staff Counsel, Criminal Legal System

Quorum = 7/12

Also present were: 

Anne Fitzgerald, Cook County Sheriff's Office

Sean Berberet, Illinois Department of Transportation

Amy Qin, WBEZ

Chief Marc Maton

ICJIA Staff Present: 

Kimberly Atkins, ICJIA Strategic Project Administrator

Vanessa Morris, ICJIA Strategic Project Administrator, OMA

Linda Taylor, ICJIA Strategic Project Administrator

Emilee Green, ICJIA Research (Facilitating)

Gowri Kuda-Singappulige, ICJIA Research

Minutes by: Kimberly Atkins, ICJIA Strategic Project Administrator, Editor (OMA)

I. CALL TO ORDER/ ROLL CALL 

• Tim Lavery, ICJIA Research Director, facilitated the second meeting of the Traffic Stop Data Statistical Task Force and called the meeting to order at 1:04 p.m. Mr. Lavery stated the meeting was recorded by Kimberly Atkins.

• Vanessa Morris conducted roll call: quorum was not achieved.

• Kimberly Atkins conducted a second roll call: quorum was achieved.

II. MEETING MINUTES 

• Minutes for September 26, 2024, were not approved; approval was tabled until October 24, 2024.

III. WELCOME AND INTRODUCTIONS 

• Facilitator Tim Lavery, ICJIA, provided framing and a summary of the last meeting. One main idea that emerged is that the group spent much of the last convening focused on the data forms: what is collected, whether the information is useful, and whether it requires additions or modifications. The sentiment that came through strongly in the surveys and in the group's discussion was a strategy for one focal area for this convening: we may want to consider how the data, as it currently exists, is being used, how it should be used, and what actions are necessary to improve utilization. We will split this into two discussions, starting with the law enforcement perspective. We will introduce Chief Leonas, representative from the Illinois Association of Chiefs of Police, who has generously agreed to share his thoughts as framing for our discussion. Mr. Lavery mentioned other items the group discussed, including the Illinois Department of Transportation (IDOT) racial profiling prevention and data oversight board and receiving updates from it as an agenda item going forward. The group also discussed the scope of pedestrian stop reporting, particularly whether reporting should extend to stops based on reasonable articulable suspicion that do not involve a search or frisk; that could be offered as an agenda item at some point in the future. A final area of discussion was funding, which can be taken up again in the future as well. Today's meeting will discuss the data and its utilization.

• Facilitator Tim Lavery: Before turning the meeting over to Chief Leonas, there is one other agenda item that Emilee Green will present first. ICJIA Research reached out to a sample of law enforcement officials who, within their organizations, are knowledgeable about the study and might have thoughts on it. Some responses were received; although not a representative sample, they help frame and guide the discussion.

• Khadine Bennett, ACLU-IL: Ms. Bennett expressed appreciation for the information shared and asked whether there is a process for suggesting people to present to the group to get a fuller sense. Ms. Bennett noted that WBEZ did a series using the traffic and pedestrian stop data; WBEZ is one entity using the data, and there could be other groups. It may be helpful, when outside people are using the data in the way we hope community groups and other advocates will, to review what has been helpful and what has not. In part, what we have seen before is that there is not one hundred percent compliance with the law by every law enforcement entity, and we may want space to discuss that. Furthermore, the meetings are an hour, but some of these conversations would require a deeper dive to fully flesh things out so that we are in a place to make recommendations that are fully thought through. The meeting was handed back to Facilitator Lavery.

• Facilitator Tim Lavery thanked Khadine Bennett. We have planned on talking about it from the law enforcement side and hearing what they have to say. The last item on the agenda today is the community perspective: other users outside of law enforcement. We will need members' feedback on who should be brought in or invited to share that perspective, and on what information should be out there for community stakeholders or other individuals who want transparency in police activities. Mr. Lavery turned the meeting over to Emilee Green.

IV. NEW BUSINESS 

• Emilee Green, ICJIA Research - Presentation – Themes from ICJIA law enforcement interviews. Ms. Green had met with law enforcement representatives and compiled last year's task force recommendations, which were grouped into several categories reviewed at the first task force meeting this year. One of the categories was data utilization by law enforcement agencies. The last task force recognized that the reports generated by IDOT and their statistical consultants must ultimately be used by law enforcement agencies to have an impact on police behavior. Law enforcement representatives on the last task force noted that it can be challenging for police administrators to utilize the reports in their departments for several reasons: frustrations surrounding the methodology, the statistically complex format of the findings, and the changing of methods from year to year.

• Therefore, Recommendation 12 of the last task force report reads, "conduct a study examining how law enforcement agencies throughout Illinois are implementing Illinois traffic and pedestrian stop statistical study requirements." The study should consider data collection challenges, including technology barriers, and strategies for using Illinois traffic and pedestrian stop statistical study data for gaining insights. It was suggested that ICJIA do this because we are already established in a research capacity. The research group decided to conduct interviews with law enforcement representatives who could speak on the annual studies. Like Tim mentioned, I want to preface this by saying we are not claiming the results would be generalizable to all law enforcement in Illinois. We were just hoping to share our findings in a forum like this task force, where they could jump-start conversation and discussion. We knew there would only be a limited number of task force meetings this time around, so we wanted to make sure to have something to share at one of our early meetings. Several recruitment methods were used to identify law enforcement who might be willing to speak with ICJIA; I will briefly describe the three main methods we used to try to speak with a diverse group of law enforcement. As a starting point, the last Illinois Association of Chiefs of Police representative on our task force, Carl Waldorf, sent out a recruitment sign-up through the chiefs of police email listserv. Unfortunately, there wasn't a lot of interest through this method; we only had two participants. The group also used what's called snowball sampling, where we asked those interviewed if they knew of anyone else who might be interested in speaking with us on this topic. This helped us reach two additional potential participants, and I was able to speak with them. Finally, we used a more randomized sampling method for a more proactive approach to recruitment. After gathering the listserv, a member of the ICJIA data analytics team used an algorithm to group similar law enforcement agencies together: agencies with similar numbers of traffic stops and similar population sizes were put into different strata, and I then randomly selected law enforcement agencies from each of those strata to reach out to. This method netted us an additional nine participants. In all, the researchers were able to speak with 13 law enforcement representatives. Of the 13, most primarily represented departments in Cook County, which is where Chicago and its surrounding suburbs are, as well as Central Illinois. There was a mix of police leaders, higher-ranking officers, and civilians who were designated by their police chiefs as having knowledge of their department's stop processes. We spoke mostly with people representing midsized to larger departments; it was a bit challenging to recruit people from the smaller departments of Illinois.
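
For readers less familiar with the stratified recruitment approach described above, the following is a minimal sketch of how agencies might be grouped into strata and sampled. The field names, thresholds, strata definitions, and per-stratum sample size are illustrative assumptions, not ICJIA's actual algorithm.

```python
import random
from collections import defaultdict

def assign_stratum(agency):
    """Bucket an agency by annual traffic stops and jurisdiction population (toy rule)."""
    stops = "high_stops" if agency["annual_stops"] >= 5000 else "low_stops"
    pop = "large_pop" if agency["population"] >= 50000 else "small_pop"
    return (stops, pop)

def stratified_sample(agencies, per_stratum=3, seed=42):
    """Randomly select up to `per_stratum` agencies from each stratum."""
    random.seed(seed)
    strata = defaultdict(list)
    for agency in agencies:
        strata[assign_stratum(agency)].append(agency)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

# Toy agency list with made-up numbers.
agencies = [
    {"name": "Agency A", "annual_stops": 12000, "population": 80000},
    {"name": "Agency B", "annual_stops": 300, "population": 4000},
    {"name": "Agency C", "annual_stops": 7000, "population": 60000},
]
print([a["name"] for a in stratified_sample(agencies, per_stratum=1)])
```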

• Eleven of the 13 were from primarily urban areas, whereas two were from primarily rural areas. The first topic I'll speak on is the technology aspect, as it was specifically touched on in the task force report recommendation. Overall, interviewees felt that the technological pieces of the stop process are working smoothly. Interviewees felt that entering data into their electronic systems is fine, and they also indicated that the systems themselves update smoothly. As shared by one person, it's very easy for them, the software manufacturer, to keep up with the changes from year to year; a stop form can be built into the mobile software so that any time you make a traffic stop, it automatically pops up as fields. Next, I asked interviewees about traffic stop data collection, specifically the benefits and the drawbacks.

• The majority of interviewees, eight out of 13, thought that collecting stop information was useful to their departments. As one interviewee said, to law enforcement, having that data is valuable because it says what we've done; it doesn't rely on anecdotal stories or bar stories of what we've done and how we've done it. The transparency of police behavior was the major benefit cited for collecting this information. However, not all law enforcement felt this way. Those opposed to collecting this data felt that it was used to create division between communities and law enforcement and that the data collection misrepresents their work.

• A second interviewee shared, I don't think there are any benefits to this traffic stop data collection; I think that it creates issues, and I think that the information that is collected is not good and is not used properly. Next, I dove into the data analysis. Although most interviewees felt there was value in collecting stop data, there was more uncertainty and hesitation about how the data is being analyzed. When I asked participants how they felt about the data analysis, one person described that their community and the community next door have different benchmarks calculated for them. As shared by that person: my benchmark is crash based, and the town right next to me, their benchmark is distance based; so they're using two different benchmarks to calculate these ratios, and we're literally next door to each other.

• As a reminder, crash benchmarking is a method for estimating the driving population of an area by looking at not-at-fault drivers involved in traffic crashes. The idea is that when someone crashes into another person on the road, they are typically not crashing into them based on that driver's race or ethnicity; the person who was hit was whoever happened to be there. Therefore, the hypothesis is that not-at-fault drivers can be used to estimate who was on the roads. However, interviewees were clear that crash benchmarking created estimates that felt misleading, especially when some municipalities have very few crashes from which to make these estimates. Another person noted that identifying the true driving population of an area is challenging, and that something as divisive as this topic can be concerning. The same person continued by later adding, there's no good data point unless we started pulling demographics from businesses in the area to say what the true population of commuters through our town is; today we're pulling from crash reports because allegedly crash reports are the best way to gain the baseline for what's in the town, and I just don't think that's right. So again, those interviewed felt uncertain, if not frustrated, at the use of both differing benchmarks and the benchmarks themselves. I also asked interviewees about utilization of the annual study in their departments. Most police staff interviewed did not think the studies were useful to their departments, regardless of department size. One participant representing a smaller department said, we don't personally use the information for anything that I know of; I think it's all just, you know, disseminated to IDOT, and that's how they go forth with it. A second small-department representative noted, it's something that law enforcement's been doing for so long; it's part of the process and it's fine, but there's not like a strong benefit to an agency.
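
As a rough illustration of the crash-benchmarking idea described above, the sketch below estimates each group's share of the driving population from not-at-fault drivers in crash records and compares it with that group's share of stops. The record layout and the simple share-to-share ratio are assumptions for illustration only; the annual study's actual benchmarks and ratio calculations may differ.

```python
from collections import Counter

def benchmark_shares(crash_records):
    """Estimated driving-population shares from not-at-fault drivers (crash benchmark)."""
    not_at_fault = [r["driver_race"] for r in crash_records if not r["at_fault"]]
    counts = Counter(not_at_fault)
    total = sum(counts.values())
    return {race: n / total for race, n in counts.items()}

def stop_shares(stop_records):
    """Each group's share of recorded traffic stops."""
    counts = Counter(r["driver_race"] for r in stop_records)
    total = sum(counts.values())
    return {race: n / total for race, n in counts.items()}

def disparity_ratio(stops, crashes, group):
    """Stop share divided by benchmark share; 1.0 means stops are proportional to the estimate."""
    return stop_shares(stops)[group] / benchmark_shares(crashes)[group]

# Hypothetical toy data: 100 not-at-fault crash drivers and 100 stops.
crashes = [{"driver_race": "white", "at_fault": False}] * 70 + \
          [{"driver_race": "black", "at_fault": False}] * 30
stops = [{"driver_race": "white"}] * 60 + [{"driver_race": "black"}] * 40
print(round(disparity_ratio(stops, crashes, "black"), 2))  # 0.40 / 0.30 -> 1.33
```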

• Representatives of larger departments who were interviewed explained that they are often tracking their own numbers themselves; because the study is only published annually, it is more useful for them to internally review their own numbers so that they can be aware of issues faster. A representative of a larger department added that, because of the frustrations around the data analysis piece, not only does their department not really use the study, but neither do the elected officials of some communities they're aware of. They explained, I don't want to name names, but there's a couple of city councils where their report looks horrible, and the city council doesn't care because they're convinced it's junk science.

• Finally, two other interviewees pointed to the complexity of the annual study. One said, this annual report that I'm looking at, the executive summary, it's 86 pages; it has really important information, percentages, ratios, and not everybody knows what that all means.

• If you don't study the executive summary, it can be a headache for someone trying to interpret things. Another interviewee supported this claim and noted, I think if we don't get back to a simpler format, what you're going to have is the vast majority of people ignoring the study because it's too difficult for them to understand; I think that's the key problem we've had over the last few years. When participants were asked how to address their concerns, solutions were somewhat difficult to identify. One person explained, there are just so many external factors that you can't possibly account for everything without someone physically standing on the corner of a town for a week; it is really complex. Therefore, interviewees felt that coming to some sort of middle ground or consensus on what the goals for the annual study should be would be helpful, whether that's recommended benchmarking methods, more communication with the statistical consultants, or even what the public might expect to see in the annual studies.

• This concludes the overview of some of my findings. Most interviewees felt that collecting stop data provides useful information: law enforcement leaders can understand what their officers are doing, the data provides transparency to the public, and it can defend the work of law enforcement if no disparity is occurring. However, there was a general sentiment that what's published in the report is not representative of their work. It can be challenging to capture all the variables that affect the numbers, such as driving patterns and officer assignment, and the complexity of the report adds an additional barrier for law enforcement leaders.

• There were a few limitations with this investigation. Again, this wasn't intended to be a formal research study, and because of the recruitment methods we used, we can't say these findings represent the opinions of all law enforcement across Illinois. I am also the only one who conducted interviews and analyzed the data, so there could be a lack of different perspectives on the findings and the potential for my own biases. Finally, it was a bit challenging to speak with smaller and more rural departments. I understand that if you're only conducting a handful of traffic stops each year, you probably don't have a lot to say about the whole process itself, but as a result there wasn't as much of their perspective shared here compared to larger or more suburban departments. Ms. Green provided contact information, stopped sharing her screen, and handed the meeting back to Mr. Lavery.

• Facilitator Tim Lavery handed the meeting to Chief Leonas.

V. PRESENTATIONS 

• Chief Joe Leonas, Illinois Association of Chiefs of Police - Presenter: Chief Leonas thanked Ms. Green for the presentation and thanked the task force for having him as a presenter to touch on the topic of racial profiling. The topic has been on people's minds probably more in recent years, with all the events surrounding law enforcement over the last four or five years; in Illinois, the study was created back in 2004, and the report has been generated for about the last 20 years. Mr. Leonas inquired whether law enforcement had been asked to respond, because law enforcement is referenced. Emilee Green addressed the question: law enforcement was included via letters.

• Mr. Lavery asked Mr. Leonas to speak on general thoughts, global or pertaining to his own agency, about utilization: how the report could be useful for law enforcement, too, and what changes are needed to make that feasible.

• Chief Joe Leonas: The report is public. We could potentially use the data ourselves for certain purposes, or maybe there are other things from the vendor that could be useful specifically for law enforcement, so changes could be made, because agencies vary in analytic capacity. Thoughts about the utilization of the data and its gaps are interesting, because what is different today is the change in the vendor that did the calculations; the vendor changed the methodology in the last four to five years, around 2019. That prompted us to look at our data, because it had significantly changed: the benchmarks had changed, and the stop ratios or rate ratios had changed. As a result, some towns looked to go down in numbers while other towns went up. What we really tried to do was figure out how the data was calculated, which is where the methodology is being questioned, because it is difficult to understand. Jack McDevitt, part of this group, may help us, as lay people, understand the methodology. The benchmarking itself was inaccurate for most, if not all, of the jurisdictions.

• In the state of Illinois, from the 2023 study, one particular group was more likely to be stopped than all the other groups. The groups are white, African American, Hispanic or Latino, Asian, American Indian, or Native Hawaiian. Using the stop rate ratio, Native Hawaiians are five and a half times more likely to be stopped than a white person, and they are the most-profiled minority of the groups. Has the task force talked about that, or any of the other things that might come to light that would make you scratch your head and go, well, why is that? We looked at how we enter data. The computer in the car uses a dropdown menu; if you hit "H" for Hispanic in the dropdown menu, the first H is Hawaiian. How many of these are erroneously entered? There are thousands of people being miscounted just because of a software problem, or because the software isn't interacting with the officers correctly. But there are no real remarks about that type of mistake. Is this a mistake or is it not?
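
The dropdown concern Chief Leonas raises can be illustrated with a small back-of-the-envelope calculation; the stop counts and error rate below are invented solely to show the mechanics and are not actual Illinois figures.

```python
# Hypothetical illustration of how dropdown mis-entry could inflate a small group's counts.
hispanic_stops = 400_000        # assumed statewide stops of Hispanic drivers (made up)
true_hawaiian_stops = 2_000     # assumed true stops of Native Hawaiian drivers (made up)
misentry_rate = 0.01            # assume 1% of Hispanic stops mis-keyed as "Hawaiian"

recorded_hawaiian = true_hawaiian_stops + misentry_rate * hispanic_stops
inflation = recorded_hawaiian / true_hawaiian_stops
print(f"Recorded Hawaiian stops: {recorded_hawaiian:,.0f} ({inflation:.1f}x the true count)")
# Recorded Hawaiian stops: 6,000 (3.0x the true count)
```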

• Letters were provided to Emilee from the Illinois chiefs, and I participated on a call. The vendor also provided a response to us, because we interacted with them; we just had to figure out how this information is calculated. I don't think anybody's doubting the number of stops or the decision making, or whether there was an arrest, a ticket issued, or a search. What should the number be for each of these groups in a perfect world? If we could say law enforcement was behaving completely correctly on every single stop, what should the number be? I think there are other ways to calculate more information. As for the WBEZ article Khadine Bennett mentioned, I didn't see any reference from WBEZ to the methodology that was used by Mountain Whisper Light, the vendor.

• Ms. Qin, did you perform your own analysis? We use the data we collect and look at it in real time. If we have an officer doing something wrong, there is an opportunity to correct it immediately. I'm not sure how this data is being used statewide or by other jurisdictions, because they're getting these reports in the summertime, and it doesn't really define anything more micro-level or more independent for each department. Does anybody have any questions or comments?

• Khadine Bennett, ACLU-IL: Thank you so much. You mentioned that you're able to use the data in real time to address things in real time. Can you give an example of that? Because I know, when we were looking at making this permanent and talking to law enforcement, part of wanting this group to actually meet was to think through how to get the data in a place where it's usable in a helpful way. We heard from law enforcement that they have used it as a training tool, but an example from you would be super helpful.

• Joe Leonas: When you're looking at the data in the aggregate, when you're looking at this annually, it doesn't provide the granular detail. But since we are the ones submitting the data, as it's submitted, each of the supervisors, number one, conducts randomized video reviews of stops; we have them on body camera as well as in-car cameras, so we pick those up by policy. We also conduct quarterly studies to make sure that if there's anything that's really awry, we can look at it more quickly. And then obviously, you know, we're a smaller agency; we're not dealing with thousands of officers the way the Chicago Police Department is, so we're able to supervise maybe a little more closely or observe directly the decision making by the officers, because there may be only a few stops in a given day, and the supervisors are aware of those stops. So, does that answer the question?

• Khadine Bennett, ACLU-IL: What would be something that would trigger a further conversation with one of your officers? What are the kinds of things you're looking for when you're looking at the videos or the quarterly analysis? Chief Leonas: If their decision making was clearly based on race or some other factor that wasn't lawful or constitutional, or if the numbers were skewed in some abnormal way, you could pick that up on a quarterly review. So, both observationally, by the supervisor who directly supervises the officers, and by using the data that's sent in.

• Amy Thompson: Thanks, this has been helpful. Just to piggyback a little bit off one of the questions: you mentioned that in the quarterly studies you're looking at bigger data questions and getting a sense of whether the data is skewed abnormally. How does your department think through what abnormal is? What kind of metrics would you use that you would find helpful? I think that gets to the underlying issue that you presented of, you know, what's the benchmark? What's the denominator of the racial disparity calculations? So, as we're asking this bigger question of what should be the proper way to do those studies, I'm just interested in how you all look at what is abnormal versus what would be normal in your quarterly studies.

• Chief Joe Leonas: Number one, each officer will conduct traffic stops at different times of the day for different reasons, so we have to allow for some variance between officers; it's not drilled down to one stop or two stops. We try to collect as much information as we can, but if there were one group that was overrepresented, we could pull that data, or even pull those stops, and look at them a little more closely. So, the smaller the sample size, the more difficult it is in some ways, but because we're a small agency, we're able to see the officers directly.

• Khadine Bennett, ACLU-IL: I'm curious, because you're asking the question of what's considered the right number. Since you all do this extra review beyond just submitting the data every year, do you have any engagement with community members who have concerns or raise concerns? And if you're able to explain, yes, this is what the numbers say, here are the things that we're doing, does that end up helping? Because from our perspective, there's a benefit to being able to say, here's our process, here's what we're doing; we're not scared of the data. We can talk through what we think is happening and then maybe respond to how people are feeling about their interactions. So, have you had those kinds of opportunities?

• Chief Joe Leonas: I think specifically after George Floyd's murder, we received a lot of community feedback. Number one, people wanted to know, how do I know this isn't going to happen in our community? We had students at the high school sending emails, calls, questions; it was probably the most feedback and requests that I got. When you're looking at the data, what we were trying to figure out is, does this make sense for our community? That is, who should we be stopping versus who should we not be stopping, what's the decision making by each officer, and then what should the numbers look like? Because if you looked at the overall state population, that would be a different number than Lake County.

• If a car is coming at you speeding, the likelihood of you seeing who's driving is very small; when you're an officer with all that equipment, driving along and observing all of the traffic patterns, there's not a high degree of certainty in seeing the driver specifically, because you're looking at all these other factors. And we had people doing ride-alongs and just seeing how we operate. The fear that I have, and have had, is that you look at numbers for every stop that we make, whether the person is guilty or not. But how do they feel about being stopped, and why did they think they were stopped to the exclusion of other people? So, we even had to explain the difference between moving violations and nonmoving violations. We're in a good spot now, but I think that whole topic of traffic stops and law enforcement is complex. I don't know if I could look at the report with the methodology and properly explain it.

• Tyrone Forman: You pointed out that in the traffic stop reports the stop ratios were higher for, I think, Native Hawaiians and Pacific Islanders. I think you were inferring that, to some extent, that might speak to a problem with either how officers are reporting race and ethnicity or the methodology. Can you expand on that? Because in and of itself, the fact that their rate is higher than other groups', say, other than African Americans or Latinx, doesn't necessarily mean there's an error in the data. It's just that, given their estimated numbers in the population, it appears that they're being stopped at a higher rate than whites, so that doesn't necessarily mean there's an error. I just wanted to get clarification from you about how you were making that inference.

• Chief Joe Leonas: I appreciate you pointing that out. My understanding, and I'm looking at the screen now, is that for 2023, for Native Hawaiian or other Pacific Islanders, the stop rate ratio versus white is 5.5. And my understanding is that means a Native Hawaiian is five and a half times more likely to be stopped in the state of Illinois than a white driver.

• Tyrone Forman: Is that correct?

• Joe Leonas: That seems high, and it may be completely factual. I guess my question is, when you look at this as a lay person, does it make common sense? To me it seems high; if you're a statistician, you may say, oh, that's completely normal. Would a police chief, or any resident, or anybody else look at that the same way a statistician might? They might say, well, that can't be; or, it is that way and we need to really address what police are doing. And does that, in and of itself, mean that this is what racial profiling is, because that 5.5 is extremely high compared to a white driver? I don't know how to answer that appropriately; maybe the report's executive summary tries to address it, but as was pointed out by Emilee, it seems very high level, and there's a lot to consider.
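
For context, one common way a stop rate ratio of this kind can be constructed (an illustration only; the annual study's exact formula may differ) is each group's stops per estimated driver divided by the same quantity for white drivers:

$$
\mathrm{SRR}_{g} \;=\; \frac{\text{stops}_{g}\,/\,\widehat{D}_{g}}{\text{stops}_{\text{white}}\,/\,\widehat{D}_{\text{white}}},
$$

where $\widehat{D}$ is the estimated driving population of each group. With purely hypothetical numbers, a group with an estimated 10,000 drivers and 1,100 stops (a rate of 0.11), compared with a white stop rate of 0.02, would give 0.11 / 0.02 = 5.5, the "five and a half times more likely" figure quoted above; such a statistic is only as informative as the driving-population estimate in its denominator.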

• Chief Marc Maton: I just wanted to address a couple of the comments and maybe elaborate on a couple of things from a historical perspective on the use of the traffic stop data. This goes back to my state police days, when this first started. For the first five years, the biggest benefit of it was comparing officer to officer. You could argue about the comparison of what should be the driving population versus who is actually being stopped, but it should be consistent among officers on the same shift, so that is where we would spend a lot of our analysis time: do we have an officer with an anomaly?

We had a fairly high confidence level in the reliability of the data. If you had issues, you would call Northwestern and talk through the data, and sometimes they realized they had either a data error or a comparison error. What has changed over the last five years is that there is now a low confidence level, so people are not using it because they don't believe the data: they don't think the data is accurate, and they don't think the methodology is properly scientific. There are ways to check; we've done it in the past for other applications, where you go out and do sample surveys. We did it with speeding and with seatbelts, heavily with seatbelts when that law came into effect, where we went to sample cities and then compared: is what we think our methodology is predicting the same as what we are actually seeing? If they're not the same, then we need to adjust our formulas. Take, for example, the person who is the victim of a crash: how have they validated that? That might be a theory, but they have never validated that it could be representative of a driving population, and you can't go and create a product on unvalidated theories. We met with the Department of Transportation and laid out some of these methodological issues with the current vendor, and they agreed; an RFP is coming to change that and to bring the work back into a university. But I think one of the values of a group like this would be to put together a group to help guide them on methodology issues each year as they put these studies together, because if you don't have confidence in the reliability of the data, then how is it going to be used at all?

• Is the change in confidence specifically attributed to the use of the crash data?

• Chief Marc Maton: No, I think it followed the change of vendors. It happened almost immediately as people started looking at it; you know, they must choose a methodology and validate that methodology. The town next door has a different one, and then Joliet next to them has yet another one. So, you have three different methodologies in three different towns, and you would expect their driving populations to be similar.

• Timothy Lavery: We have a question in the chat. Chris writes: regarding the one officer who said the data being collected is not good and is not being used properly, was there any follow-up with that officer? Did the officer define what he or she meant by properly? Did the officer say how their agency is using it, or how, in their opinion, it could be used more properly?

• Emilee Green: I remember that individual was just particularly frustrated by the results, and they couldn't exactly say what the perfect method would be either. But the same sentiment has been shared throughout the conversation we've been having; I would say that is the response to your question.

• Facilitator Timothy Lavery: We will consider what other stakeholders need to effectively utilize this data in their various roles, be it media, community groups, or the public at large. We'll look at that and start talking about who we can bring to the table for that perspective. We'll also have to take our votes next time on both the chairs and, of less importance, the meeting minutes. With that, we will open for public comments.

VI. PUBLIC COMMENT 

Tim Lavery opened the floor for public comment.

• Amy Qin, WBEZ: Setting aside the questions about the methodology of calculating the ideal driving population, what we found by looking at the IDOT data is that the share of African American drivers being stopped has increased from 17% in 2004, when the study was first created, to 30% in 2022.

• Amy Qin, WBEZ: Meanwhile, we know that the state's adult population is about 13% African American, so there is a clear disparity now in the share of drivers being stopped who are Black, and I was wondering if you had any comment on this widening racial disparity. We have been using the state's data, directly from the state's reports.

• Joe Leonas: The data that we got, at least until about five years ago, wasn't separated out by race; it was just all minorities compared to white drivers. So I don't know; I haven't seen that data. That's what I mean: with the new vendor, they were able to break it out by these other races. The second part of your question was that the state's adult population is 13%. What the benchmarking is trying to do is take the expected driving population for a given area and then compare that to who is stopped, or what is observed by the officer. You can't just use general population data, because you have no idea who's driving. In fact, my understanding is you don't have any idea who the driver is even from the Secretary of State's office, because race isn't indicated on the driver's license. So, we don't know what the driving population is, and I wouldn't be able to comment unless I actually had more information.

• Chief Marc Maton: The phenomenon we were looking at, when we started examining what happened with the change of vendors, was that the driving population, however it was calculated, was pretty consistent within a given area; your town's driving population doesn't change that much from year to year. In one year, it changed dramatically for just about everybody. So you can go back to any individual town, look at that break, and see what it was before and what it was after. Why was there a large change? That should have been the first thing the researchers asked: why was there a large change in the driving population for this town or that town? There would have been explanations for certain towns, growth or a new factory opening, but it wouldn't have been across the board. So they should have answered that question as they went.

• Amy Qin, WBEZ: The state has been collecting data on Black, Hispanic, white, and Asian drivers going back to 2004. The data is not aggregated into just all minorities versus white drivers; that detailed data has been collected for the past 20 years. I'll just say one more thing: the reason I mentioned the state's adult population being 13% Black is because I find it hard to believe that, if in Illinois we're stopping Black drivers at twice that rate, there are actually twice as many Black drivers on the roads as there are Black residents in Illinois. I just use that statistic to point out the disproportionality of what has been presented.
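
As a rough check of the arithmetic behind this exchange, using only the figures quoted above and setting aside the benchmarking caveats discussed earlier, a 30% share of stops against a roughly 13% share of the adult population implies

$$
\frac{0.30}{0.13} \approx 2.3,
$$

i.e., Black drivers' share of recorded stops is a bit more than twice their share of the state's adult population, which is the disproportionality Ms. Qin describes.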

VII. ADJOURNMENT 

• Next Meeting: Thursday, October 24, 2024, 1:00pm-2:30pm

• Motion to adjourn by: Amy Thompson at 2:01pm

Seconded By: Joe Leonas

Tim Lavery adjourned the meeting.

https://agency.icjia-api.cloud/uploads/Traffic_Stop_Data_Stop_Minutes_KA_VM_9_26_24_e9bfd36670.pdf
