1 Introduction

1.1 Description

Four dockless vehicle operators (Bird, Lime, Bolt, and Spin) have been operating in the city of Louisville since August 2018. Their app-based rental scooters are designed to carry individuals across the last mile or so of their commutes. More than a year of operation has exposed several problems:

  • Scooters often run short exactly where demand is high, inconveniencing users and costing companies business;

  • Where there are no potential users, scooters are oversupplied and pile up; they depreciate without generating any revenue, which sharply cuts the rate of return;

  • Because the system is dockless, riders commonly park scooters in the wrong places, creating unsightly streetscapes and traffic chaos.

1.2 Motivation

One of the biggest operational challenges of a dockless scooter system is getting vehicles to areas that are expected to have high demand but currently have no scooters. In this report, we predict scooter supply and demand from existing data. By detecting the real demand-supply situation, we hope this model can deliver:

  • Better decision making for future investment, since Louisville is now allowing more companies to enter the market and scooter companies are considering doubling their fleets;

  • A successful rebalancing system with larger market share and greater potential benefits, including a specific estimate of the money that more detailed preparation could save;

  • A win-win outcome for both providers and users, through higher public transportation accessibility and less reliance on personal cars;

  • Suggestions for virtual dock siting, if needed in the future.

For more about our app Sco-Porter, here is a video!

2 Data Preparation

Before the analysis, some libraries are loaded and some graphic themes are set.

2.1 Data Collection

We compiled our dataset from:

  • All dockless trips across the city from all providers, obtained from Louisville Open Data, with details on month, date, and day of the week;
  • Holiday and event data, collected from web resources;

  • Census tract data, obtained with the tidycensus package. The boundary shapefile is downloaded from the TIGER/Line® Shapefiles dataset and clipped to the study area;

  • Weather data (temperature, precipitation, and wind speed), obtained with the riem package via riem_stations; station ‘LOU’ is selected here.

Note that this is the final dataset which has been narrowed down to a set of best predictors through exploratory analysis and the modeling process.

2.2 Data Processing

This step applies various transformations to optimize the predictive ability of each variable. For the time dimension, we label the start and end time of each trip and slice them into one-hour intervals.
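
As a sketch, the hourly slicing can be done with lubridate's floor_date(); the column names StartDate and StartTime, and the month-day-year format, are assumptions about the raw trip file:

```r
# Sketch of the hourly binning step; StartDate/StartTime names and the
# date format are assumptions about the raw Louisville trip file.
library(lubridate)
library(dplyr)

dockless_trips <- dockless_trips %>%
  mutate(start_dt   = mdy_hm(paste(StartDate, StartTime)),  # parse to POSIXct
         interval60 = floor_date(start_dt, unit = "hour"))  # one-hour bins
```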

We will focus on trips generated from March to August.

Census tracts serve as the initial spatial unit, and we project scooter trips onto the tracts; in this way, the trip data are aggregated from points into polygons.
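
The point-to-tract projection can be sketched with sf's st_join(); the object and column names here are illustrative:

```r
# Assign each trip origin to a census tract; `dockless_trips` and `tracts`
# are illustrative object names.
library(sf)
library(dplyr)

trips_sf <- st_as_sf(dockless_trips,
                     coords = c("StartLongitude", "StartLatitude"),
                     crs = 4326) %>%
  st_transform(st_crs(tracts))

# Each trip point inherits the GEOID of the tract it falls in
trips_in_tracts <- st_join(trips_sf, dplyr::select(tracts, GEOID))
```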

2.3 Create Space-time Panels

Here, two space-time panels are created based on the labeled start and end times. The study.panel is essentially an empty table in which each row records scooter activity in one census tract during one hour.

Two full panels are then created by summarizing trip counts by census tract for each time interval. Weather data are joined to the panels, and census tract data are added as spatial context.
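
The tract-level panel construction mirrors the fishnet version shown in section 5; as a sketch (object names are illustrative):

```r
library(sf)
library(dplyr)

# Empty panel: one row per tract per hour
study.panel <- expand.grid(
  interval60 = unique(trips_in_tracts$interval60),
  GEOID      = unique(trips_in_tracts$GEOID))

# Fill the panel by counting trips; hours with no trips become 0
ride.panel.start <- trips_in_tracts %>%
  st_set_geometry(NULL) %>%
  mutate(Trip_Counter = 1) %>%
  right_join(study.panel) %>%          # panel rows with no trips get NA
  group_by(interval60, GEOID) %>%
  summarize(Trip_Count = sum(Trip_Counter, na.rm = TRUE))
```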

2.4 Create Time Lags

Time lags are added to help us capture the different patterns of scooter trips generated in various time slots. In addition, a dummy variable holidayLag controls for the holiday influence on demand during three-day weekends. The study period contains two holidays, Memorial Day (May 27th) and Independence Day (July 4th), so holiday lags should be considered. We also divide the six months into high-, medium-, and low-demand categories based on monthly trip volume, labeled in the monthCat column.

#start----
ride.panel.start <- 
  ride.panel.start %>% 
  arrange(GEOID, interval60) %>% 
  mutate(lagHour = dplyr::lag(Trip_Count,1),
         lag2Hours = dplyr::lag(Trip_Count,2),
         lag3Hours = dplyr::lag(Trip_Count,3),
         lag4Hours = dplyr::lag(Trip_Count,4),
         lag12Hours = dplyr::lag(Trip_Count,12),
         lag1day = dplyr::lag(Trip_Count,24),
         holiday = ifelse(yday(interval60) == 147 | yday(interval60) == 185, 1, 0), # Memorial Day (147), Independence Day (185)
         weekday = ifelse(dotw != 'Sun' & dotw!='Sat',1,0),
         events = ifelse(yday(interval60) %in% c(76,97,103,117,124,165,185,227,236), 1, 0),
         day = yday(interval60)) %>%
  mutate(holidayLag = case_when(dplyr::lag(holiday, 1) == 1 ~ "PlusOneDay",
                                dplyr::lag(holiday, 2) == 1 ~ "PlusTwoDays",
                                dplyr::lag(holiday, 3) == 1 ~ "PlusThreeDays",
                                dplyr::lead(holiday, 1) == 1 ~ "MinusOneDay",
                                dplyr::lead(holiday, 2) == 1 ~ "MinusTwoDays",
                                dplyr::lead(holiday, 3) == 1 ~ "MinusThreeDays"),
         time_of_day = case_when(hour(interval60) < 7 | hour(interval60) > 18 ~ "Overnight",
                                 hour(interval60) >= 7 & hour(interval60) < 10 ~ "AM Rush",
                                 hour(interval60) >= 10 & hour(interval60) < 15 ~ "Mid-Day",
                                 hour(interval60) >= 15 & hour(interval60) <= 18 ~ "PM Rush"),
         monthCat = case_when(month(interval60)<5 | month(interval60)>9 ~ 'Low-demand month',
                              month(interval60)==6|month(interval60)==5 ~ 'Medium-demand month',
                              month(interval60)==7|month(interval60)==8 ~ 'High-demand month'),
         holidayLag = replace_na(holidayLag, '0'))

#end----
ride.panel.end <- 
  ride.panel.end %>% 
  arrange(GEOID, interval60) %>% 
  mutate(lagHour = dplyr::lag(Trip_Count,1),
         lag2Hours = dplyr::lag(Trip_Count,2),
         lag3Hours = dplyr::lag(Trip_Count,3),
         lag4Hours = dplyr::lag(Trip_Count,4),
         lag12Hours = dplyr::lag(Trip_Count,12),
         lag1day = dplyr::lag(Trip_Count,24),
         holiday = ifelse(yday(interval60) == 147|yday(interval60) ==185,1,0),
         weekday = ifelse(dotw != 'Sun' & dotw!='Sat',1,0),
         events = ifelse(yday(interval60) %in% c(76,97,103,117,124,165,185,227,236), 1, 0),
         day = yday(interval60)) %>%
  mutate(holidayLag = case_when(dplyr::lag(holiday, 1) == 1 ~ "PlusOneDay",
                                dplyr::lag(holiday, 2) == 1 ~ "PlusTwoDays",
                                dplyr::lag(holiday, 3) == 1 ~ "PlusThreeDays",
                                dplyr::lead(holiday, 1) == 1 ~ "MinusOneDay",
                                dplyr::lead(holiday, 2) == 1 ~ "MinusTwoDays",
                                dplyr::lead(holiday, 3) == 1 ~ "MinusThreeDays"),
         time_of_day = case_when(hour(interval60) < 7 | hour(interval60) > 18 ~ "Overnight",
                                 hour(interval60) >= 7 & hour(interval60) < 10 ~ "AM Rush",
                                 hour(interval60) >= 10 & hour(interval60) < 15 ~ "Mid-Day",
                                 hour(interval60) >= 15 & hour(interval60) <= 18 ~ "PM Rush"),
         monthCat = case_when(month(interval60)<5 | month(interval60)>9 ~ 'Low-demand month',
                              month(interval60)==6|month(interval60)==5 ~ 'Medium-demand month',
                              month(interval60)==7|month(interval60)==8 ~ 'High-demand month'),
         holidayLag = replace_na(holidayLag, '0'))

We evaluate the correlation between each time lag and the trip count. Pearson’s R is around or above 0.5 for lagHour, lag2Hours, lag3Hours, lag4Hours, and lag1day, indicating strong correlations, so we take these lags into account.
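
The reported correlations can be computed along these lines (a sketch; drop the geometry column first if the panel is an sf object):

```r
library(dplyr)
library(tidyr)

ride.panel.start %>%
  dplyr::select(Trip_Count, lagHour, lag2Hours, lag3Hours,
                lag4Hours, lag12Hours, lag1day) %>%
  gather(Variable, Value, -Trip_Count) %>%
  group_by(Variable) %>%
  summarize(correlation = round(cor(Value, Trip_Count,
                                    use = "complete.obs"), 2))
```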

Pearson’s R (Pickup)

  Variable     correlation
  lagHour       0.92
  lag2Hours     0.82
  lag3Hours     0.69
  lag4Hours     0.56
  lag12Hours   -0.31
  lag1day       0.73

Pearson’s R (Return)

  Variable     correlation
  lagHour       0.92
  lag2Hours     0.82
  lag3Hours     0.70
  lag4Hours     0.56
  lag12Hours   -0.31
  lag1day       0.73

3 Exploratory Analysis

In this section, we explore the data to gain insight into the temporal and spatial patterns of scooter activity, as well as the relationships between trip counts and the independent variables.

3.1 Overall Analysis

First, we run some general analyses to get a rough understanding of scooter trips in Louisville. The figure below shows that scooters are generally used for short trips, usually less than 1 mile.

The histograms below show that weather affects activity: there are more trips when the wind speed is low, the temperature is comfortable, and it is not raining.

3.2 Time Trend

First, let’s look at the ridership pattern over July 2019, the month with the highest trip count. There is clear periodicity: trips peak during the daytime, and more trips were made on weekends, especially Saturdays, than on weekdays. We can also see an increase in trips on July 4th, a Thursday holiday, compared with the average Thursday.

For more detail, we plot scooter trips by day of the week. Usage is highest on Friday and Saturday. In terms of hours, people usually ride scooters during the daytime, especially after noon.

3.3 Space Trend

It is natural to assume that most scooter usage happens in the CBD. To validate this, we plot the distribution of scooters below. In short, scooter activity varies across space and time. High demand occurs downtown, in the old city, and in districts around the University of Louisville, suggesting that city residents and students are the likely users.

Then we plot the distribution of trips by four periods of the day: AM Rush (7 am to 10 am), Mid-Day (10 am to 3 pm), PM Rush (3 pm to 6 pm), and Overnight (all other hours). The map shows no significant difference in pattern among the periods, suggesting that people in Louisville do not widely use scooters for commuting.

The choropleth map shows the same result as the point map.

The animation below presents hourly scooter activity by census tract on a Saturday in July. We can see a cluster of trips in the center-city census tract.

4 Space-Time Model Built on Census Tracts

4.1 Model Building

In this part, we build two space-time models, one predicting scooter demand and one predicting supply, based on pickups (trip starts) and returns (trip ends) respectively. Subtracting the two predictions reflects whether supply and demand are well balanced in each census tract.

\(\text{Final Outcome} = \text{Predicted Supply} - \text{Predicted Demand}\)
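
Once both models produce hourly tract-level predictions, this outcome is a simple join and subtraction. A sketch, where `pred.pickup` and `pred.return` are illustrative names for data frames holding a `pred` column:

```r
library(dplyr)

rebalance <- pred.return %>%                   # predicted supply (returns)
  left_join(pred.pickup,                       # predicted demand (pickups)
            by = c("GEOID", "interval60"),
            suffix = c(".supply", ".demand")) %>%
  mutate(net = pred.supply - pred.demand)      # > 0 oversupply, < 0 shortage
```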

To this end, we first split the trip data into a training set (trips from March, May, and July) and a test set (trips from April, June, and August).
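
A minimal sketch of this month-based split:

```r
library(dplyr)
library(lubridate)

# Odd months (Mar, May, Jul) train; even months (Apr, Jun, Aug) test
ride.train.s <- filter(ride.panel.start, month(interval60) %in% c(3, 5, 7))
ride.test.s  <- filter(ride.panel.start, month(interval60) %in% c(4, 6, 8))
```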

As the exploratory analysis above showed, scooter activity has clear spatial and temporal patterns, concentrated downtown and on weekends. We therefore include the following variables in the models:

  • Time effects: month, day of week, time of day, events, the time lag family (1-hr, 2-hr, 3-hr, 4-hr, 12-hr, 1-day), holiday, and holiday lag. When building the first model, we noticed usage peaks on some weekdays; after some research we found they were caused by events or festivals held in the city, so we include those dates as an events dummy.

  • Space effect: census tract GEOID.

  • Others: temperature, wind speed, and precipitation.
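
The variable list above translates into a regression along these lines (a sketch, not the exact specification; the weather column names are assumptions):

```r
# GEOID enters as a categorical fixed effect capturing the spatial pattern
reg.s <- lm(Trip_Count ~ as.factor(GEOID) + monthCat + dotw + time_of_day +
              events + holiday + holidayLag +
              lagHour + lag2Hours + lag3Hours + lag4Hours +
              lag12Hours + lag1day +
              Temperature + Wind_Speed + Precipitation,
            data = ride.train.s)
```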

In the final part of the model, we focus on detecting the real demand (pickup) versus supply (return) situation, calculated as the difference between predicted supply and predicted demand. We find that, on average, a census tract needs 5 rebalancings, and the error of our model is 0.96, meaning we would overpredict or underpredict by about 1 rebalancing per census tract on average. The final map shows that:

  • For most census tracts in Louisville, scooter demand and supply are well balanced;

  • Downtown, scooters are considerably undersupplied; this will be a focal area for scooter rebalancing;

  • Excess demand also occurs in the Wyandotte district;

  • The northern part of Louisville may have a problem of wasteful oversupply.

4.2 Evaluation of The Models

4.2.1 Accuracy

Mean Absolute Error (MAE) is used to evaluate the accuracy of the models. For pickup and return activity, MAE is 0.378 and 0.468 respectively; the pickup model performs better than the return model.
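
A sketch of the MAE computation on the test set (`reg.s` stands for the fitted pickup model; object names are illustrative):

```r
library(dplyr)

ride.test.s <- ride.test.s %>%
  mutate(pred  = predict(reg.s, newdata = ride.test.s),
         error = pred - Trip_Count)

mean(abs(ride.test.s$error), na.rm = TRUE)   # MAE
```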

4.2.2 Generalizability

MAEs for the month categories are listed below. Our models clearly perform better when predicting low- to medium-demand months such as April and June, and generate larger errors when predicting scooter activity in August. The root problem is that our model predicts high demand poorly.

MAE (pickup activity)

  monthCat              MAE
  High-demand month     0.5455064
  Low-demand month      0.3641645
  Medium-demand month   0.4790297

MAE (return activity)

  monthCat              MAE
  High-demand month     0.5510436
  Low-demand month      0.3614943
  Medium-demand month   0.4888506

As mentioned above, the months are divided into three categories: low demand (March and April), medium demand (May and June), and high demand (July and August). Below are the predicted results for the three scenarios.

grid.arrange(
#low----
ggplot(data=ride.test.s %>% 
  subset(monthCat=='Low-demand month') %>%
  dplyr::select(interval60, GEOID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -GEOID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)), aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Pickup in Louisville; Low-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

ggplot(ride.test.e %>% 
  subset(monthCat=='Low-demand month') %>%
  dplyr::select(interval60, GEOID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -GEOID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)), aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Return in Louisville; Low-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

#mild----
ggplot(ride.test.s %>% 
  subset(monthCat=='Medium-demand month') %>%
  dplyr::select(interval60, GEOID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -GEOID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)), aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Pickup in Louisville; Medium-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

ggplot(ride.test.e %>% 
  subset(monthCat=='Medium-demand month') %>%
  dplyr::select(interval60, GEOID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -GEOID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)), aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Return in Louisville; Medium-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,
#high----
ggplot(ride.test.s %>% 
  subset(monthCat=='High-demand month') %>%
  dplyr::select(interval60, GEOID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -GEOID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)), aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Pickup in Louisville; High-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

ggplot(ride.test.e %>% 
  subset(monthCat=='High-demand month') %>%
  dplyr::select(interval60, GEOID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -GEOID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)), aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Return in Louisville; High-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,
top='Predicted/Observed scooter time series', ncol=1
)

Then we plot the distribution of MAE by census tract. The maps indicate that for both the demand and supply models, higher errors come from the downtown and university areas, where scooter demand is more intensive.

A further generalizability test uses race data to characterize differences in Louisville’s urban context. Tracts where more than 50% of the population is white are labeled ‘Majority White’.
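
A sketch of deriving this flag with tidycensus (B02001_001 is the total-population estimate and B02001_002 the white-alone count; a Census API key is required):

```r
library(tidycensus)
library(dplyr)

race <- get_acs(geography = "tract", state = "KY", county = "Jefferson",
                year = 2018, output = "wide",
                variables = c(total = "B02001_001", white = "B02001_002")) %>%
  mutate(raceContext = ifelse(whiteE / totalE > 0.5,
                              "Majority_White", "Majority_Non_White"))
```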

Although there are some missing values in the census data, the results give insight into whether the model generalizes across urban contexts.

Although averaging positive and negative errors is not the best indicator of accuracy (MAE is better), we use the mean error here to see whether we overpredict or underpredict.

Given the slight difference in error between “Majority White” and “Majority Non-White” census tracts, we conclude that our models generalize well with respect to racial context; the model slightly underpredicts in majority-White tracts.

Error (by race context)

  raceContext          Error
  Majority_Non_White   -0.0079663
  Majority_White       -0.0175292

5 Space-Time Model Built for a Census Tract on a Fishnet

Recall that our first pair of models predicts scooter demand and supply by census tract. Such a large unit area may weaken the models’ effectiveness and make them less useful in practice. To obtain more detailed supply-demand information within a census tract, we focus on the downtown census tract as an example, where MAE is highest and scooters are considerably undersupplied overall.

5.1 Data Wrangling

Based on the census tract boundary, we create a fishnet with cells of 0.003 × 0.003 degrees. A uniqueID is generated for each cell for the spatial join, and trip points are projected onto the fishnet. Meanwhile, cvIDs are created for later cross-validation.
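
The fishnet itself can be sketched with sf's st_make_grid() (`downtown_tract` is an illustrative name for the clipped tract geometry):

```r
library(sf)
library(dplyr)

fishnet <- st_make_grid(downtown_tract, cellsize = 0.003) %>%  # 0.003-degree cells
  st_sf()

fishnet <- fishnet %>%
  mutate(uniqueID = as.character(seq_len(nrow(fishnet))),       # key for spatial joins
         cvID     = sample(1:10, nrow(fishnet), replace = TRUE))  # folds for CV
```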

As before, we project the point data onto the fishnet.

Then, two space-time panels are created from the start and end times for each grid cell in each hour. Since the feature engineering and time lag creation closely mirror the previous model, we omit the description here.

study.panel.start.FN <- 
  expand.grid(interval60=unique(Data_in_fishnet_s$interval60), 
              uniqueID = unique(Data_in_fishnet_s$uniqueID))

study.panel.end.FN <- 
  expand.grid(interval60=unique(Data_in_fishnet_e$interval60),
              uniqueID = unique(Data_in_fishnet_e$uniqueID))

ride.panel.start.FN <- 
  Data_in_fishnet_s %>%
  mutate(Trip_Counter = 1) %>%
  right_join(study.panel.start.FN) %>% 
  st_set_geometry(NULL) %>%
  group_by(interval60, uniqueID) %>%
  summarize(Trip_Count = sum(Trip_Counter, na.rm=T)) %>%
  left_join(fishnet,by='uniqueID') %>%
  left_join(weather.Panel, by='interval60') %>%
  ungroup() %>%
  mutate(week = week(interval60),
         dotw = wday(interval60, label = TRUE),
         Preci = case_when(Precipitation<0.001~"No rain",
                           Precipitation >= 0.001~"Wet"),
         Wind = case_when(Wind_Speed<=3~"Light", 
                          Wind_Speed> 3 & Wind_Speed <= 6~"Moderate", 
                          Wind_Speed > 6 & Wind_Speed < 9~"Gale", 
                          Wind_Speed >= 9~"Stormy"),
         time_uniqueID=paste(as.character(interval60),uniqueID))

ride.panel.end.FN <- 
  Data_in_fishnet_e %>%
  mutate(Trip_Counter = 1) %>%
  right_join(study.panel.end.FN) %>% 
  st_set_geometry(NULL) %>%
  group_by(interval60, uniqueID) %>%
  summarize(Trip_Count = sum(Trip_Counter, na.rm=T)) %>%
  left_join(fishnet,by='uniqueID') %>%
  left_join(weather.Panel, by='interval60') %>%
  ungroup() %>%
  mutate(week = week(interval60),
         dotw = wday(interval60, label = TRUE),
         Preci = case_when(Precipitation < 0.001~"No rain", 
                           Precipitation >= 0.001~"Wet"),
         Wind = case_when(Wind_Speed <= 3~"Light", 
                          Wind_Speed > 3 & Wind_Speed <= 6~"Moderate", 
                          Wind_Speed > 6 & Wind_Speed < 9~"Gale",
                          Wind_Speed >= 9~"Stormy"),
         time_uniqueID=paste(as.character(interval60),uniqueID))

ride.panel.start.FN <- 
  ride.panel.start.FN %>% 
  arrange(uniqueID, interval60) %>% 
  mutate(lagHour = dplyr::lag(Trip_Count,1),
         lag2Hours = dplyr::lag(Trip_Count,2),
         lag3Hours = dplyr::lag(Trip_Count,3),
         lag4Hours = dplyr::lag(Trip_Count,4),
         lag12Hours = dplyr::lag(Trip_Count,12),
         lag1day = dplyr::lag(Trip_Count,24),
         holiday = ifelse(yday(interval60) == 147 | yday(interval60) == 185, 1, 0), # Memorial Day (147), Independence Day (185)
         weekday = ifelse(dotw != 'Sun'&dotw!='Sat',1,0)) %>%
  mutate(day = yday(interval60)) %>%
  mutate(holidayLag = case_when(dplyr::lag(holiday, 1) == 1 ~ "PlusOneDay",
                                dplyr::lag(holiday, 2) == 1 ~ "PlusTwoDays",
                                dplyr::lag(holiday, 3) == 1 ~ "PlusThreeDays",
                                dplyr::lead(holiday, 1) == 1 ~ "MinusOneDay",
                                dplyr::lead(holiday, 2) == 1 ~ "MinusTwoDays",
                                dplyr::lead(holiday, 3) == 1 ~ "MinusThreeDays"),
         time_of_day = case_when(hour(interval60) < 7 | hour(interval60) > 18 ~ "Overnight",
                                 hour(interval60) >= 7 & hour(interval60) < 10 ~ "AM Rush",
                                 hour(interval60) >= 10 & hour(interval60) < 15 ~ "Mid-Day",
                                 hour(interval60) >= 15 & hour(interval60) <= 18 ~ "PM Rush"),
         monthCat = case_when(month(interval60)<5 | month(interval60)>9 ~ 'Low-demand month',
                              month(interval60)==6|month(interval60)==5 ~ 'Medium-demand month',
                              month(interval60)==7|month(interval60)==8 ~ 'High-demand month'),
         events = ifelse(yday(interval60) %in% c(76,97,103,117,124,165,185,227,236), 1, 0),
         holidayLag = replace_na(holidayLag, '0'))

ride.panel.end.FN <- 
  ride.panel.end.FN %>% 
  arrange(uniqueID, interval60) %>% 
  mutate(lagHour = dplyr::lag(Trip_Count,1),
         lag2Hours = dplyr::lag(Trip_Count,2),
         lag3Hours = dplyr::lag(Trip_Count,3),
         lag4Hours = dplyr::lag(Trip_Count,4),
         lag12Hours = dplyr::lag(Trip_Count,12),
         lag1day = dplyr::lag(Trip_Count,24),
         holiday = ifelse(yday(interval60) == 147 | yday(interval60) == 185, 1, 0), # Memorial Day (147), Independence Day (185)
         weekday = ifelse(dotw != 'Sun'&dotw!='Sat',1,0)) %>%
  mutate(day = yday(interval60)) %>%
  mutate(holidayLag = case_when(dplyr::lag(holiday, 1) == 1 ~ "PlusOneDay",
                                dplyr::lag(holiday, 2) == 1 ~ "PlusTwoDays",
                                dplyr::lag(holiday, 3) == 1 ~ "PlusThreeDays",
                                dplyr::lead(holiday, 1) == 1 ~ "MinusOneDay",
                                dplyr::lead(holiday, 2) == 1 ~ "MinusTwoDays",
                                dplyr::lead(holiday, 3) == 1 ~ "MinusThreeDays"),
         time_of_day = case_when(hour(interval60) < 7 | hour(interval60) > 18 ~ "Overnight",
                                 hour(interval60) >= 7 & hour(interval60) < 10 ~ "AM Rush",
                                 hour(interval60) >= 10 & hour(interval60) < 15 ~ "Mid-Day",
                                 hour(interval60) >= 15 & hour(interval60) <= 18 ~ "PM Rush"),
         monthCat = case_when(month(interval60)<5 | month(interval60)>9 ~ 'Low-demand month',
                              month(interval60)==6|month(interval60)==5 ~ 'Medium-demand month',
                              month(interval60)==7|month(interval60)==8 ~ 'High-demand month'),
         events = ifelse(yday(interval60) %in% c(76,97,103,117,124,165,185,227,236), 1, 0),
         holidayLag = replace_na(holidayLag, '0'))

5.2 Model Building

The training and test data, as well as the predictors, are the same as in the former model and are omitted here. The only difference is that the fishnet uniqueID now accounts for the spatial effects.

As before, we focus on detecting the real demand (pickup) versus supply (return) situation. We find that, on average, the tract needs about 5 rebalancings per day, and the error of our model is 2, meaning we would overpredict or underpredict by about 2 rebalancings per cell on average. The map below shows that scooter demand and supply in the central city are poorly balanced.

5.3 Evaluation of The Models

5.3.1 Accuracy

For pickup and return activity, MAE is 0.700 and 0.649 respectively; here the return model performs slightly better than the pickup model.

5.3.2 Generalizability

The models perform better when predicting pickup and return activity in low- to medium-demand months. Comparing the MAEs across the categories, our models are most reliable when predicting scooter activity in low-usage months.

MAE (pickup activity)

  monthCat              MAE
  High-demand month     0.7446504
  Low-demand month      0.4195220
  Medium-demand month   0.6484296

MAE (return activity)

  monthCat              MAE
  High-demand month     0.6930751
  Low-demand month      0.3832812
  Medium-demand month   0.5984686

For more detail, we plot the predicted/observed scooter time series and find that the models fail to predict the high demand during the daytime.

grid.arrange(
ride.test.s.FN %>% 
  subset(monthCat=='Low-demand month') %>%
  dplyr::select(interval60, uniqueID, Trip_Count, pred, monthCat) %>%
  gather(Variable, Value, -interval60, -uniqueID, -monthCat) %>%
  group_by(Variable, interval60, monthCat) %>%
  summarize(Value = sum(Value)) %>%
  ggplot(aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) +
  labs(title = "Pickup in Louisville; Low-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

ride.test.e.FN %>% 
  subset(monthCat=='Low-demand month') %>%
  dplyr::select(interval60, uniqueID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -uniqueID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)) %>%
  ggplot(aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Return in Louisville; Low-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

ride.test.s.FN %>% 
  subset(monthCat=='Medium-demand month') %>%
  dplyr::select(interval60, uniqueID, Trip_Count, pred, monthCat) %>%
  gather(Variable, Value, -interval60, -uniqueID, -monthCat) %>%
  group_by(Variable, interval60, monthCat) %>%
  summarize(Value = sum(Value)) %>%
  ggplot(aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) +
  labs(title = "Pickup in Louisville; Medium-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

ride.test.e.FN %>% 
  subset(monthCat=='Medium-demand month') %>%
  dplyr::select(interval60, uniqueID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -uniqueID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)) %>%
  ggplot(aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Return in Louisville; Medium-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

ride.test.s.FN %>% 
  subset(monthCat=='High-demand month') %>%
  dplyr::select(interval60, uniqueID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -uniqueID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)) %>%
  ggplot(aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Pickup in Louisville; High-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme,

ride.test.e.FN %>% 
  subset(monthCat=='High-demand month') %>%
  dplyr::select(interval60, uniqueID, Trip_Count, pred) %>%
  gather(Variable, Value, -interval60, -uniqueID) %>%
  group_by(Variable, interval60) %>%
  summarize(Value = sum(Value)) %>%
  ggplot(aes(interval60, Value, colour=Variable)) + 
  geom_line(size=1.1) + 
  labs(title = "Return in Louisville; High-demand month",  x = "Hour", y= "Station Trips") +
  plotTheme, top='Predicted/Observed scooter time series', ncol=1)

Both models perform better when predicting low-demand areas and fail to account for the high demand in the center of the census tract.

6 Limitation and Discussion

As discussed above, the model underestimates periods of high demand but does capture the spatio-temporal variation in scooter activity. With the prediction results, a simple calculation yields the excess demand and supply across the city. However, this model is far from mature, and here are several steps to take in the future:

  • A more advanced prediction model;

  • Additional predictive factors;

  • A more appropriate spatial unit.

Based on our prediction results, here are some suggestions we believe would help the four providers:

  • Virtual docks. Another problem for scooter sharing is that some users park scooters in the wrong places. Virtual docks would not only solve this problem but also make fleet management more efficient and convenient for the four companies. With relatively fixed “stations”, predictions would also be more accurate.

  • Profit analysis. Based on the prediction result, we calculate the net revenue if this rebalancing system operate normally and every scooter demand can be met. It can be concluded that 91383 more trips would be accomplished in the last April, June and August. Acoording to the economic statistics, each scooter generates 0.95 dollar per ride, deducting costs of charging, repairs, credit card fees, customer support, and insurance. Therefore, these four companies will recollect a total profit of 86813 dollar.