Hello,

I trained a CNN on synthetic data to perform a segmentation task on human faces. To evaluate the network's predictions at test time, I used 200 examples from the database to compute precision and recall.

Is this number sufficient, given that I control the data generator myself and build the database by randomly drawing elements from centered Gaussian distributions?
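For context, one rough way to judge whether 200 test examples are enough is to look at the width of a binomial confidence interval around the measured precision or recall. Below is a minimal sketch using the Wilson score interval; the counts (180 correct out of 200) are purely hypothetical, not from my experiment:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% CI by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical example: 180 of 200 test predictions counted as correct.
lo, hi = wilson_interval(180, 200)
print(f"measured proportion = 0.90, 95% CI ~ [{lo:.3f}, {hi:.3f}]")
```

With n = 200 the interval spans roughly ±4 percentage points around 0.90, so whether that is "sufficient" depends on how fine a difference in precision/recall one needs to resolve.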

Thank you,

