When we planned our testbed days at the start of the Zenzic CAM Scale-up we deliberately avoided February. It’s always cold, wet and snowy in February. So instead we selected days in December, January, and then March.
Our tactics paid off: we avoided the December snow and January rain and were bathed in relatively acceptable sunshine. However, February then turned out to be the driest on record, and the night of Thursday 9th March was predicted to be one of the coldest on record, with blizzard conditions. Mother Nature didn't disappoint, and the drive up the M1 was covered in snow.
Was the snow going to ruin our chances of a day’s testing?
When we arrived at HORIBA MIRA, the snow had settled on the proving ground, and the unfortunate words came over from track control: "Closed until further notice."
With snow in sight and still falling at 9am, we stuck to the initial plan and briefed the new drivers on the day ahead. This was the first time we had tested 20 fully connected cars with human drivers, the first time we had used our multi-vehicle coordination (MVC) system at HORIBA MIRA, and the first time we hoped to raise the speed limit to 30mph.
The key conclusion from the briefing and track risk assessment was that unless the snow turned to rain, and quickly, we wouldn't be testing.
The last rain dance
At 10:30am the day didn't look any more hopeful. A few observer guests were due to travel up for the afternoon, and we advised them not to make the journey. That was the moment our fortunes turned for the better. To many schoolchildren's disappointment, the snow turned to sleet and then rain. Snow began falling off the cars and melting away from the track.
We expected to be on the track after lunch, so we prepared a readjusted timeline for the day.
We created a circuit around HORIBA MIRA's City Course, just over a mile long, with four narrowed sections where the road was only wide enough for one car to pass at a time.
With the curvature of the road and deliberately placed screening, the narrow sections were partially blind for the drivers.
The track layout at HORIBA MIRA showing the pairs of virtual traffic signals
The Eloy app advising a driver to wait until the car travelling in the opposite direction has passed
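The wait/proceed advice in the app amounts to mutual exclusion over each narrow section: one direction of travel holds the section at a time, and opposing traffic is told to wait. A minimal sketch of that logic in Python – the class, method names, and direction labels here are hypothetical illustrations, not Eloy's implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NarrowSection:
    """A single-lane section guarded by a pair of virtual signals.

    Only one direction of travel may hold the section at a time;
    opposing traffic is advised to wait until it clears.
    """
    holder: Optional[str] = None  # direction currently holding the section
    cars_inside: int = 0

    def request_entry(self, direction: str) -> str:
        # Grant entry if the section is free or already flowing this way.
        if self.holder is None or self.holder == direction:
            self.holder = direction
            self.cars_inside += 1
            return "PROCEED"
        return "WAIT"  # opposing traffic still holds the section

    def notify_exit(self) -> None:
        # Called when a car leaves the narrow section.
        self.cars_inside -= 1
        if self.cars_inside == 0:
            self.holder = None  # free for either direction

section = NarrowSection()
print(section.request_entry("northbound"))  # PROCEED
print(section.request_entry("southbound"))  # WAIT
section.notify_exit()
print(section.request_entry("southbound"))  # PROCEED
```

Because the sections are partially blind, this kind of coordination is doing work a driver's own line of sight cannot.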
Between the last UTAC Millbrook test day (Day 2) and Day 3 at HORIBA MIRA, we made a number of upgrades to the artificial intelligence system. For Days 1 and 2 we certainly over-analysed the course, spending an unrealistic amount of time on digital twin modelling and reinforcement learning.
For Day 3 and a new venue, we created a process where training can be completed in minutes rather than days or weeks, with almost no human input required. This makes it feasible to scale the technology across the 250,000 miles of UK roads, and to adjust it to road conditions – a key area where in-vehicle intervention stands out against physical road infrastructure. More on this in the discussion at the end.
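To give a flavour of why per-venue tuning can be fast: at its simplest, tuning a narrow section means searching over signal hold times against a traffic model. The toy stand-in below does a grid search scored with Webster's classic two-term delay formula for a fixed-time signal – a standard traffic-engineering model, not Eloy's digital twin or reinforcement learning pipeline, and every parameter value here is invented for illustration:

```python
def webster_delay(green: float, q: float = 0.1, s: float = 0.5,
                  clearance: float = 10.0) -> float:
    """Average delay per car (s) at one narrow section, modelled as an
    alternating two-direction signal using Webster's two-term formula.
    q: arrival rate per direction (veh/s); s: saturation flow (veh/s);
    clearance: lost time per phase while the section empties (s)."""
    cycle = 2 * (green + clearance)
    lam = green / cycle               # effective green ratio
    x = q * cycle / (green * s)       # degree of saturation
    if x >= 1.0:
        return float("inf")           # oversaturated: queues grow without bound
    d_uniform = cycle * (1 - lam) ** 2 / (2 * (1 - lam * x))
    d_random = x ** 2 / (2 * q * (1 - x))
    return d_uniform + d_random

# "Training" here is just a grid search over hold times -- fast enough
# to rerun in seconds whenever the venue or conditions change.
best_green = min(range(8, 61), key=webster_delay)
print(best_green, round(webster_delay(best_green), 1))  # → 20 21.2
```

Even this crude search lands on a sensible hold time instantly; the real system replaces the analytic model with a digital twin and learns richer policies, but the speed of the outer loop is what makes rapid redeployment possible.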
One technical error did occur: only 18 of the 20 phones were working. This is best described as founder error, as I first lost two of the SIM cards and then forgot to top up the pay-as-you-go replacements. On the other hand, we got to observe performance at 90% participation, and achieving 100% will always be difficult!
With a good technical test completed, we had time to attempt one of our efficacy tests at a sufficient scale. We turned off all the MVC messages on the phones and let the drivers navigate the course without help.
Within a lap, our first traffic jam had formed. When the app was running, even with only 18 of the 20 cars using it, we didn't see any jams forming. With the higher car density and higher speeds, it was clear that lap times were significantly slower without MVC support.
We continue to measure efficacy with lap times. We like this approach because data collection is relatively easy and the results are directly comparable between the app-on and app-off cases. While this focuses on journey times, which we think carry the greater private-sector benefit, lap times are also highly correlated with safety, particularly the risk of head-on collisions.
We're really pleased to share that the average lap time with the app on (90% participation) was 219 seconds, while the average lap time without the app (0% participation) was 264 seconds. The Eloy MVC improved journey times by 17%; with 100% participation we would start to approach our simulated result of 20%.
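The headline figure follows directly from the two averages – a quick sanity check of the arithmetic:

```python
lap_on = 219.0   # mean lap time (s), app on, 90% participation
lap_off = 264.0  # mean lap time (s), app off, 0% participation

# Journey-time improvement relative to the uncoordinated baseline
improvement = (lap_off - lap_on) / lap_off
print(f"{improvement:.0%}")  # → 17%
```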
At the start of the day we thought we would be facing a big 0 out of 10 on my success scale. Given the weather, and the positive efficacy reading, we're happy to call the day an 8 out of 10. We had hoped to observe efficacy at 50% participation, but the reduced track hours made this too difficult.
We also hoped to start testing MVC with priority, where an emergency vehicle or a vulnerable road user gets a proceed signal in most cases and others wait for them to pass. Again, given the weather, there was insufficient time to test this. It shifts towards a far more complex efficacy measurement that is beyond the scope of the Zenzic CAM Scale-up, but we are looking to incorporate a quick test into our final testbed day on 24th March.
Finally, we're really positive about our artificial intelligence training process. Continuing the theme of emergencies and poor weather, we believe our MVC could be deployed within 5 minutes of a road risk appearing – a fallen tree, a flood, or the need for temporary traffic measures. Digital infrastructure has a key advantage in response times. This takes us back to on-the-road testing, where we can start to test the software in the public domain again.