I have tried setting as many seeds as possible, but my neural model still gives a completely different result on every run. I also run classical ML models such as Linear Regression and Random Forest; those are seeded and give the same results each time.
---
scikit-learn version: 0.24.2
Tensorflow version: 2.7.0
torch.cuda.is_available() is True
---
Here are my imports, layers, and fit() call:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1))
model.compile(loss='huber', optimizer='adam', metrics=['mae', 'mse'])

# early_stopping is a tf.keras.callbacks.EarlyStopping instance defined elsewhere
history = model.fit(x=X_train, y=y_train, epochs=100, batch_size=100,
                    validation_data=(X_dev, y_dev), callbacks=[early_stopping])
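(Context I should add: even with every seed fixed, runs on a GPU can differ, because floating-point addition is not associative, so a nondeterministic reduction order changes the result. A minimal, framework-free demonstration of the non-associativity itself:)

```python
import numpy as np

# In float32, 1e8 + 1.0 rounds back to 1e8 (the spacing between adjacent
# float32 values near 1e8 is larger than 1), so grouping the same three
# terms differently gives different answers.
a = np.float32(1e8)
b = np.float32(1.0)

left_first = (a + b) - a   # -> 0.0: the +1 is lost in the rounding
right_first = b + (a - a)  # -> 1.0: the cancellation happens first

print(left_first, right_first)
```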
What I have tried:
I set the seeds once, at the top of the script:
np.random.seed(51)
tf.random.set_seed(51)
random.seed(51)
torch.manual_seed(51)
torch.cuda.manual_seed(51)
torch.cuda.manual_seed_all(51)
torch.backends.cudnn.deterministic = True
I pass random_state to train_test_split:
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=51, test_size=0.4)
X_dev, X_test, y_dev, y_test = train_test_split(X_test, y_test, random_state=51, test_size=0.5)
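(As a sanity check that the split itself is deterministic, the same 60/20/20 split can be reproduced with a plain seeded permutation; a NumPy-only sketch, not the actual train_test_split implementation:)

```python
import numpy as np


def split_indices(n: int, seed: int = 51):
    """Deterministic 60/20/20 train/dev/test index split via a seeded shuffle."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n)
    n_train = int(n * 0.6)
    n_dev = int(n * 0.2)
    return (order[:n_train],
            order[n_train:n_train + n_dev],
            order[n_train + n_dev:])


train_idx, dev_idx, test_idx = split_indices(100)
```

Running this twice with the same seed yields identical index arrays, which confirms the non-reproducibility must come from the model, not the data split.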
These are the only places where I use any kind of seeding or random state.