+1 vote
2.7k views
asked in Deep Learning by (190 points)  

Hi All,

I am writing a simple program using TensorFlow and DNNClassifier. The training data is 9 pixels with four spectral bands each, i.e. 4*9 = 36 features. Each data-point is mapped to a class (from 1 to 7).

The last value in each row is the class label.

A line of data-point is like this:

67,75,77,62,67,79,81,62,75,87,89,71,66,79,88,63,66,79,84,63,66,79,80,59,67,84,86,68,71,84,86,64,67,81,82,64,7

But I got the error below:

InvalidArgumentError (see above for traceback): assertion failed: [Labels must >= 0] [Condition x >= 0 did not hold element-wise:] [x (dnn/head/labels:0) = ] [[3][3][3]...]

I am sure there is no data-point with a label less than 0. Could you please advise?
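For reference, a quick sanity check along these lines (it assumes the same file and label column as in the code below) should confirm the label range and dtype:

import pandas as pd

# Check that the label column (index 36) really contains non-negative integers
landsatData = pd.read_csv("./resources/landsat/lantsat.1.csv")
labels = landsatData.iloc[:, 36]
print(labels.min(), labels.max())   # expected: 1 7
print(labels.dtype)                 # expected: an integer dtype such as int64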

import numpy as np
import pandas as pd
import tensorflow as tf

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedShuffleSplit

print('** DNN Classification *******************************************************')

landsatData = pd.read_csv("./resources/landsat/lantsat.1.csv")

landsatData.describe()

X_landSatAllFeatures = landsatData.iloc[:, np.arange(36)].copy()

y_midPixelAsTarget = landsatData.iloc[:, 36].copy()

# Train/test split (stratified + shuffled) based on the row index
allFeaturesIndexes = X_landSatAllFeatures.index
targetData = y_midPixelAsTarget
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=42)

for train_index, test_index in sss.split(allFeaturesIndexes, targetData):
    train_ind, test_ind = allFeaturesIndexes[train_index], allFeaturesIndexes[test_index]

Test_Matrix = X_landSatAllFeatures.loc[test_ind]
Test_Target_Matrix = y_midPixelAsTarget.loc[test_ind]
Train_Matrix = X_landSatAllFeatures.loc[train_ind]
Train_Target_Matrix = y_midPixelAsTarget.loc[train_ind]

scaler = StandardScaler().fit(Train_Matrix)
Train_Matrix, Test_Matrix = scaler.transform(Train_Matrix), scaler.transform(Test_Matrix)

def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

X_train = Train_Matrix
y_train = Train_Target_Matrix
X_test = Test_Matrix
y_test = Test_Target_Matrix

xx, yy = Train_Matrix.shape
#training phase
feature_cols = [tf.feature_column.numeric_column("X", shape=[36])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300,100], n_classes=8, feature_columns=feature_cols)
# dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300,100], n_classes=10)


input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"X": X_train}, y=y_train, num_epochs=40, batch_size=64, shuffle=True)
dnn_clf.train(input_fn=input_fn)

#testing phase
test_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"X": X_test}, y=y_test, shuffle=False)
eval_results = dnn_clf.evaluate(input_fn=test_input_fn)
print("The prediction result is : {0:.2f}%".format(100*eval_results['accuracy']))
y_pred_iter = dnn_clf.predict(input_fn=test_input_fn)
y_pred = list(y_pred_iter)
print(y_pred[0])


print('**********************************************************************************')


1 Answer

+1 vote
answered by (190 points)  
I think I found the issue. The issue had two layers:

1- My data consists of positive integers, so it does not need to be scaled. The scaler made some of my values negative. Removing the scaler resolved the error.

2- Then I got another error complaining about the data type. It was resolved by converting the DataFrame to a NumPy array (see the sketch below).
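For illustration, a minimal sketch of the two changes combined, reusing the setup and variable names from the question up to the train/test split, with the scaler lines removed (the astype dtypes are my assumption about what the estimator expects):

# Skip the StandardScaler and feed the raw integer pixel values;
# convert the pandas objects to plain NumPy arrays first.
X_train = Train_Matrix.values.astype(np.float32)
y_train = Train_Target_Matrix.values.astype(np.int32)

input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"X": X_train}, y=y_train, num_epochs=40, batch_size=64, shuffle=True)
dnn_clf.train(input_fn=input_fn)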

Thank you ght!
...