Student Performance Analysis and Prediction

Introduction

This article uses machine learning to analyze and predict student performance. Given data about students from large institutions and organizations, it can be hard to build a student performance analysis and prediction system that is accurate across all models. This article discusses how student performance prediction can address this problem, working through the analysis and prediction on a real dataset.

Learn more about predictive analytics for beginners here.

This article was published as a part of the Data Science Blogathon.


Understanding the Problem Statement

This project examines how a student's performance (test scores) is affected by variables such as gender, ethnicity, parental level of education, lunch, and test preparation course.

The primary goal of higher education institutions is to impart quality education to their students. To achieve the highest level of quality in the education system, knowledge must be discovered that can predict student enrollment in specific courses, identify issues with traditional classroom teaching models, detect unfair means used in online examinations, detect abnormal values in student result sheets, and predict student performance. This knowledge is hidden within educational datasets and can be extracted through data mining techniques.

This project focuses on evaluating students' abilities in various subjects using a classification task. Data classification has many approaches; the decision tree method and the probabilistic classification method are applied here. Performing this task extracts knowledge that describes students' performance in the end-semester examination. This helps in identifying dropouts and students who need special attention, enabling teachers to provide appropriate advising and counseling.

Data Collection

Dataset source: StudentsPerformance.csv. The data contains 8 columns and 1000 rows.

Import Data and Required Packages

Import the Pandas, NumPy, Matplotlib, Seaborn, and Warnings libraries.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

import warnings
warnings.filterwarnings("ignore")

Import the CSV Data as Pandas DataFrame

df = pd.read_csv("data/StudentsPerformance.csv")

Show the Top 5 Records

 df.head()


Show the top five records of the dataset and examine the features.

To see the shape of the dataset

 df.shape

This gives the shape of the dataset: 1000 rows and 8 columns.

Dataset Info

  • gender: sex of students -> (Male/Female)
  • race/ethnicity: ethnicity of students -> (Group A, B, C, D, E)
  • parental level of education: parents' final education -> (bachelor's degree, some college, master's degree, associate's degree)
  • lunch: having lunch before the test (standard or free/reduced)
  • test preparation course: complete or not complete before the test
  • math score
  • reading score
  • writing score

After that, we check the data as the next step. The dataset contains a number of categorical features, so we check for missing values, duplicate values, data types, and the number of unique values.

Data Checks to Perform

  • Check missing values
  • Check duplicates
  • Check data types
  • Check the number of unique values in each column
  • Check the statistics of the dataset
  • Check the various categories present in the different categorical columns

Check Missing Values

Check every column of the dataset for missing or null values.

df.isnull().sum()


There are no missing values in the dataset.

Check Duplicates

Check whether the dataset has any duplicated values.

df.duplicated().sum()

There are no duplicate values in the dataset.

Check the Data Types

Check information about the dataset, such as data types and any null values present.

# check the nulls and dtypes
df.info()


Check the Number of Unique Values in Each Column

 df.nunique()


Check Statistics of the Dataset

Examine the dataset's descriptive statistics to understand the data.
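The code for this step is not shown in the text; a minimal sketch of how these statistics are typically obtained with pandas (using the same df as above) is:

# Descriptive statistics (count, mean, std, min, quartiles, max) of the numeric columns
df.describe()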

Insights

  • The numerical data shown above indicates that all means are fairly close to one another, falling between 66 and 68.05.
  • All standard deviations are also within a narrow range, between 14.6 and 15.19.
  • While there is a minimum score of 0 for math, the minimums for writing and reading are noticeably higher at 10 and 17, respectively.
  • We do not have any duplicate or missing values, and the following code provides further data checks.

Exploring Data

 print(" Classifications in 'gender' variable: ", end=" ")

. print( df
["gender"] special() )
. 
. print( "Classifications in' race/ethnicity' variable
:
", end="") 
. print( df["race/ethnicity"] special()) 
. 
. print(" Classifications in' adult level of education' variable:", end="") 
. print( df["parental level of education"]
.
special())

.

. print(" Classifications in 'lunch 'variable:", end= "")

. print( df["lunch"] special ()) 
. 
. print (" Classifications in 'test preparation course' variable:", end=" ") 
. print( df["test preparation course"] special ())

The code above prints the unique values of each categorical column in the dataset.

The output lists the categories present in each of these columns.

We define the numerical and categorical columns:

# define numerical and categorical columns
numeric_features = [feature for feature in df.columns if df[feature].dtype != "object"]
categorical_features = [feature for feature in df.columns if df[feature].dtype == "object"]

print("We have {} numerical features: {}".format(len(numeric_features), numeric_features))
print("We have {} categorical features: {}".format(len(categorical_features), categorical_features))

The above code separates the numerical and categorical features and counts them.

Exploring Data (Visualization)

Visualize Average Score Distribution to Draw Some Conclusions

  • Histogram
  • Kernel Distribution Function (KDE)
  • Histogram & KDE
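The code for this average score visualization is not shown in the text; a minimal sketch, assuming a derived average score computed from the three score columns (this helper series is an assumption, not part of the original dataset), could look like this:

# Derived average score per student (assumed helper; not a column of the original dataset)
average_score = (df["math score"] + df["reading score"] + df["writing score"]) / 3

fig, axs = plt.subplots(1, 2, figsize=(12, 5))
sns.histplot(average_score, bins=30, kde=True, color="g", ax=axs[0])  # histogram with KDE overlay
sns.kdeplot(average_score, fill=True, color="g", ax=axs[1])           # KDE alone
plt.show()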
Gender Column

How is the gender distribution?

Does gender have any impact on students' performance?

# Create a figure with 2 subplots
f, ax = plt.subplots(1, 2, figsize=(8, 6))

# Create a countplot of the 'gender' column and add labels to the bars
sns.countplot(x=df['gender'], data=df, palette="bright", ax=ax[0], saturation=0.95)
for container in ax[0].containers:
    ax[0].bar_label(container, color="black", size=15)

# Set font size of x-axis and y-axis labels and tick labels
ax[0].set_xlabel('Gender', fontsize=14)
ax[0].set_ylabel('Count', fontsize=14)
ax[0].tick_params(labelsize=14)

# Create a pie chart of the 'gender' column and add labels to the slices
# (value_counts() lists the larger class, female, first)
plt.pie(x=df['gender'].value_counts(), labels=['Female', 'Male'], explode=[0, 0.1],
        autopct="%1.1f%%", shadow=True, colors=['#ff4d4d', '#ff8000'],
        textprops={'fontsize': 14})

# Display the plot
plt.show()

Gender is fairly balanced: there are 518 female students (51.8%) and 482 male students (48.2%).

Race/Ethnicity Column

# Define a color palette for the countplot
colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd']
# blue, orange, green, red, purple are respectively the color names for the color codes used above

# Create a figure with 2 subplots
f, ax = plt.subplots(1, 2, figsize=(12, 6))

# Create a countplot of the 'race/ethnicity' column and add labels to the bars
sns.countplot(x=df['race/ethnicity'], data=df, palette=colors, ax=ax[0], saturation=0.95)
for container in ax[0].containers:
    ax[0].bar_label(container, color="black", size=14)

# Set font size of x-axis and y-axis labels and tick labels
ax[0].set_xlabel('Race/Ethnicity', fontsize=14)
ax[0].set_ylabel('Count', fontsize=14)
ax[0].tick_params(labelsize=14)

# Create a dictionary that maps category names to colors in the color palette
color_dict = dict(zip(df['race/ethnicity'].unique(), colors))

# Map the colors to the pie chart slices
pie_colors = [color_dict[race] for race in df['race/ethnicity'].value_counts().index]

# Create a pie chart of the 'race/ethnicity' column and add labels to the slices
plt.pie(x=df['race/ethnicity'].value_counts(), labels=df['race/ethnicity'].value_counts().index,
        explode=[0.1, 0, 0, 0, 0], autopct="%1.1f%%", shadow=True,
        colors=pie_colors, textprops={'fontsize': 14})

# Set the aspect ratio of the pie chart to 'equal' to make it a circle
plt.axis('equal')

# Display the plot
plt.show()

Insights

  • Most of the students belong to group C or group D.
  • The lowest number of students belongs to group A.

Parental Level of Education Column


plt.rcParams['figure.figsize'] = (15, 9)
plt.style.use('fivethirtyeight')
sns.histplot(df["parental level of education"], palette="Blues")
plt.title('Comparison of Parental Education', fontweight=30, fontsize=20)
plt.xlabel('Degree')
plt.ylabel('count')
plt.show()

Insights


The largest number of parents have "some college" education.

Bivariate Analysis

df.groupby('parental level of education').mean(numeric_only=True).plot(kind='barh', figsize=(10, 10))
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()


Insights

  • The scores of students whose parents hold master's and bachelor's degrees are higher than those of others.

Maximum Score of Students in All Three Subjects

 
plt.figure(figsize=(18, 8))

plt.subplot(1, 4, 1)
plt.title('MATH SCORES')
sns.violinplot(y='math score', data=df, color="red", linewidth=3)

plt.subplot(1, 4, 2)
plt.title('READING SCORES')
sns.violinplot(y='reading score', data=df, color="green", linewidth=3)

plt.subplot(1, 4, 3)
plt.title('WRITING SCORES')
sns.violinplot(y='writing score', data=df, color="blue", linewidth=3)

plt.show()


Insights

From the above three plots, it is clearly visible that most students score between 60 and 80 in math, whereas in reading and writing most score between 50 and 80.

Multivariate Analysis Using Pie Plot

# Set figure size
plt.rcParams['figure.figsize'] = (12, 9)

# First row of pie charts
plt.subplot(2, 3, 1)
size = df['gender'].value_counts()
labels = 'Female', 'Male'
color = ['red', 'green']
plt.pie(size, colors=color, labels=labels, autopct="%.2f%%")
plt.title('Gender', fontsize=20)
plt.axis('off')

plt.subplot(2, 3, 2)
size = df['race/ethnicity'].value_counts()
labels = 'Group C', 'Group D', 'Group B', 'Group E', 'Group A'
color = ['red', 'green', 'blue', 'cyan', 'orange']
plt.pie(size, colors=color, labels=labels, autopct="%.2f%%")
plt.title('Race/Ethnicity', fontsize=20)
plt.axis('off')

plt.subplot(2, 3, 3)
size = df['lunch'].value_counts()
labels = 'Standard', 'Free'
color = ['red', 'green']
plt.pie(size, colors=color, labels=labels, autopct="%.2f%%")
plt.title('Lunch', fontsize=20)
plt.axis('off')

# Second row of pie charts
plt.subplot(2, 3, 4)
size = df['test preparation course'].value_counts()
labels = 'None', 'Completed'
color = ['red', 'green']
plt.pie(size, colors=color, labels=labels, autopct="%.2f%%")
plt.title('Test Course', fontsize=20)
plt.axis('off')

plt.subplot(2, 3, 5)
size = df['parental level of education'].value_counts()
labels = "Some College", "Associate's Degree", 'High School', 'Some High School', "Bachelor's Degree", "Master's Degree"
color = ['red', 'green', 'blue', 'cyan', 'orange', 'grey']
plt.pie(size, colors=color, labels=labels, autopct="%.2f%%")
plt.title('Parental Education', fontsize=20)
plt.axis('off')

# Remove the extra sixth subplot, since there are only five pie charts to arrange in the 2x3 grid
plt.subplot(2, 3, 6).remove()

# Add a super title, adjust the layout, and show the plot
plt.suptitle('Comparison of Student Attributes', fontsize=20, fontweight="bold")
plt.tight_layout()
plt.subplots_adjust(top=0.85)
plt.show()

Insights

  • The number of male and female students is almost equal.
  • The number of students is highest in Group C.
  • The number of students who have standard lunch is greater.
  • The number of students who have not enrolled in any test preparation course is greater.
  • The number of students whose parental education is "Some College" is greatest, followed closely by "Associate's Degree".
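The observation below about scores rising together appears to come from a pairwise score plot that is not reproduced in the text; a minimal sketch of such a plot, assuming the same df, is:

# Pairwise scatter plots of math, reading, and writing scores, colored by gender
sns.pairplot(df, hue='gender')
plt.show()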
From the above plot, it is clear that all the scores increase linearly with each other.


Students' performance is related to lunch, race/ethnicity, and parental level of education.

  • Females lead in pass percentage and are also the top scorers.
  • Student performance is not strongly related to the test preparation course.
  • Finishing the preparation course is nevertheless beneficial, as the sketch below suggests.
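The comparison behind the last point is not shown in the text; a minimal sketch that checks it, assuming the same df, is:

# Average scores grouped by test preparation course status
score_cols = ["math score", "reading score", "writing score"]
print(df.groupby("test preparation course")[score_cols].mean())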
Model Training

Import Data and Required Packages

Import the scikit-learn regression algorithms.

# Modelling
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.model_selection import RandomizedSearchCV
from catboost import CatBoostRegressor
from xgboost import XGBRegressor
import warnings

Splitting the X and Y Variables

Separating the dependent variable (y) from the independent variables (X) is one of the most important steps in our project; we use the math score as the dependent variable. Many students struggle with mathematics (roughly 60% to 70% of students in classes 7-10 are afraid of the subject), which is why I chose the math score as the dependent variable.

The aim is to improve the percentage of math scores, increase the number of graduating students, and also remove the fear of mathematics.

X = df.drop(columns="math score", axis=1)
y = df["math score"]

Create a Column Transformer with Two Types of Transformers

num_features = X.select_dtypes(exclude="object").columns
cat_features = X.select_dtypes(include="object").columns

from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import ColumnTransformer

numeric_transformer = StandardScaler()
oh_transformer = OneHotEncoder()

preprocessor = ColumnTransformer(
    [
        ("OneHotEncoder", oh_transformer, cat_features),
        ("StandardScaler", numeric_transformer, num_features),
    ]
)

X = preprocessor.fit_transform(X)

Separate Dataset into Train and Test

Separate the dataset into train and test sets to determine the training and testing sizes.

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape

Create an Evaluate Function for Model Training

This function is used to evaluate the models and compare them.

def evaluate_model(true, predicted):
    mae = mean_absolute_error(true, predicted)
    mse = mean_squared_error(true, predicted)
    rmse = np.sqrt(mean_squared_error(true, predicted))
    r2_square = r2_score(true, predicted)
    return mae, mse, rmse, r2_square

Create a models variable in dictionary format.

models = {
    "Linear Regression": LinearRegression(),
    "Lasso": Lasso(),
    "K-Neighbors Regressor": KNeighborsRegressor(),
    "Decision Tree": DecisionTreeRegressor(),
    "Random Forest Regressor": RandomForestRegressor(),
    "Gradient Boosting": GradientBoostingRegressor(),
    "XGBRegressor": XGBRegressor(),
    "CatBoosting Regressor": CatBoostRegressor(verbose=False),
    "AdaBoost Regressor": AdaBoostRegressor()
}
model_list = []
r2_list = []

for i in range(len(list(models))):
    model = list(models.values())[i]
    model.fit(X_train, y_train)  # Train model

    # Make predictions
    y_train_pred = model.predict(X_train)
    y_test_pred = model.predict(X_test)

    # Evaluate Train and Test dataset
    model_train_mae, model_train_mse, model_train_rmse, model_train_r2 = evaluate_model(y_train, y_train_pred)
    model_test_mae, model_test_mse, model_test_rmse, model_test_r2 = evaluate_model(y_test, y_test_pred)

    print(list(models.keys())[i])
    model_list.append(list(models.keys())[i])

    print('Model performance for Training set')
    print("- Root Mean Squared Error: {:.4f}".format(model_train_rmse))
    print("- Mean Squared Error: {:.4f}".format(model_train_mse))
    print("- Mean Absolute Error: {:.4f}".format(model_train_mae))
    print("- R2 Score: {:.4f}".format(model_train_r2))

    print('----------------------------------')

    print('Model performance for Test set')
    print("- Root Mean Squared Error: {:.4f}".format(model_test_rmse))
    print("- Mean Squared Error: {:.4f}".format(model_test_mse))
    print("- Mean Absolute Error: {:.4f}".format(model_test_mae))
    print("- R2 Score: {:.4f}".format(model_test_r2))
    r2_list.append(model_test_r2)

    print('=' * 35)
    print('\n')

The output before tuning the algorithms' hyperparameters gives the RMSE, MSE, MAE, and R2 score values for the training and test data.

Hyperparameter Tuning

Hyperparameter tuning helps the model make more accurate predictions and improves prediction accuracy.

This will give the optimized values of the hyperparameters, which maximize the model's predictive accuracy.

from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.metrics import make_scorer

# Define hyperparameter ranges for each model
param_grid = {
    "Linear Regression": {},
    "Lasso": {"alpha": [1]},
    "K-Neighbors Regressor": {"n_neighbors": [3, 5, 7]},
    "Decision Tree": {
        "max_depth": [3, 5, 7],
        "criterion": ['squared_error', 'friedman_mse', 'absolute_error', 'poisson']
    },
    "Random Forest Regressor": {"n_estimators": [8, 16, 32, 64, 128, 256]},
    "Gradient Boosting": {
        "learning_rate": [.1, .01, .05, .001],
        "subsample": [0.6, 0.7, 0.75, 0.8, 0.85, 0.9],
        "n_estimators": [8, 16, 32, 64, 128, 256]
    },
    "XGBRegressor": {"learning_rate": [.1, .01, .05, .001], "n_estimators": [100, 500]},
    "CatBoosting Regressor": {
        "depth": [6, 8, 10],
        "learning_rate": [0.01, 0.05, 0.1],
        "iterations": [30, 50, 100]
    },
    "AdaBoost Regressor": {"learning_rate": [.1, .01, 0.5, .001], "n_estimators": [8, 16, 32, 64, 128, 256]}
}

model_list = []
r2_list = []

for model_name, model in models.items():
    # Create a scorer object to use in grid search
    scorer = make_scorer(r2_score)

    # Perform grid search to find the best hyperparameters
    grid_search = GridSearchCV(
        model,
        param_grid[model_name],
        scoring=scorer,
        cv=5,
        n_jobs=-1
    )

    # Train the model with the best hyperparameters
    grid_search.fit(X_train, y_train)

    # Make predictions
    y_train_pred = grid_search.predict(X_train)
    y_test_pred = grid_search.predict(X_test)

    # Evaluate Train and Test dataset
    model_train_mae, model_train_mse, model_train_rmse, model_train_r2 = evaluate_model(y_train, y_train_pred)
    model_test_mae, model_test_mse, model_test_rmse, model_test_r2 = evaluate_model(y_test, y_test_pred)

    print(model_name)
    model_list.append(model_name)

    print('Best hyperparameters: ', grid_search.best_params_)

    print('Model performance for Training set')
    print("- Root Mean Squared Error: {:.4f}".format(model_train_rmse))
    print("- Mean Squared Error: {:.4f}".format(model_train_mse))
    print("- Mean Absolute Error: {:.4f}".format(model_train_mae))
    print("- R2 Score: {:.4f}".format(model_train_r2))

    print('----------------------------------')

    print('Model performance for Test set')
    print("- Root Mean Squared Error: {:.4f}".format(model_test_rmse))
    print("- Mean Squared Error: {:.4f}".format(model_test_mse))
    print("- Mean Absolute Error: {:.4f}".format(model_test_mae))
    print("- R2 Score: {:.4f}".format(model_test_r2))
    r2_list.append(model_test_r2)

    print('=' * 35)
    print('\n')

Outputs

The output after tuning the algorithms' hyperparameters again gives the RMSE, MSE, MAE, and R2 score values for the training and test data.

We pick linear regression as the final model because it achieves a training-set R2 score of 87.42 and a test-set R2 score of 88.03.

Model Selection

This step is used to select the best model among all of the regression algorithms.

Linear regression reached 88.03, the best result among all of the regression models, which is why we choose it.

pd.DataFrame(list(zip(model_list, r2_list)), columns=['Model Name', 'R2_Score']).sort_values(by=["R2_Score"], ascending=False)
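The predictions y_pred used in the plots below are not defined in the text as given; presumably they come from refitting the selected linear regression model on the training split. A minimal sketch of that assumed step is:

# Refit the chosen model (linear regression) and predict on the test split (assumed step, not shown in the source)
lin_model = LinearRegression()
lin_model.fit(X_train, y_train)
y_pred = lin_model.predict(X_test)
print("Accuracy of the model is %.2f" % (r2_score(y_test, y_pred) * 100))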

The accuracy of the model is 88.03%.

plt.scatter(y_test, y_pred)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.show()
sns.regplot(x=y_test, y=y_pred, ci=None, color="red")

Difference Between Actual and Predicted Values
pred_df = pd.DataFrame({'Actual Value': y_test, 'Predicted Value': y_pred, 'Difference': y_test - y_pred})
pred_df
Convert the Model to a Pickle File

# load the library
import pickle

# create a file object with write permission - model.pkl
with open('model_pkl', 'wb') as files:
    pickle.dump(model, files)

# load the saved model
with open('model_pkl', 'rb') as f:
    lr = pickle.load(f)
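As a quick usage sketch (assuming the preprocessed X_test from the earlier split), the reloaded model can be used directly for prediction:

# Predict math scores for a few held-out rows with the reloaded model
sample_predictions = lr.predict(X_test[:5])
print(sample_predictions)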

Conclusion


This brings us to the end of the student performance prediction project. Let us review our work. First, we defined our problem statement and explored the algorithms we were going to use and the regression implementation pipeline. Then we moved on to practically implementing the validation and regression algorithms: Linear Regression, Lasso, K-Neighbors Regressor, Decision Tree, Random Forest Regressor, XGBRegressor, CatBoosting Regressor, and AdaBoost Regressor. After that, we compared the performances of these models. Finally, we built a linear regression model that proved to work best for the student performance prediction problem.

The key takeaways from this student performance prediction project are:

Student performance prediction is essential for many institutions.

Linear regression gives better accuracy compared to the other regression models.

Linear regression is the best fit for this problem; it gives an accuracy of 88%, providing the most accurate results.

I hope you like my article on "Student performance analysis and prediction." The entire code can be found in my GitHub repository. You can connect with me on LinkedIn.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.
