I need Python/Keras code that analyses ocean streams in a 2000x2000 km square of the Atlantic Ocean. The aim of the project is to predict when the stream exceeds a certain speed. In that particular area, an average daily speed of 25 km/h is exceeded about 10 times per year. I have daily data for the last 10 years. My idea is to use a convolutional neural network even though this is not image processing: I want the data to be processed like an image. For that, I want to divide the large 2000x2000 km area into 128x128 "pixels", so each "pixel" covers an area of around 16x16 km. There are 5 attributes I would like to add as channels for those pixels. Of course they are not colour channels, but they can be processed in the same way. The attributes are average temperature on that day, wind direction, wind speed, stream speed and stream temperature for each pixel area. To also bring a temporal order into the 128x128 pixel grids, which are available for each day, I want to append a recurrent neural network after the CNN.
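To make the "processed like an image" idea concrete, here is a minimal sketch of the tensor layout this implies: one 128x128 grid with 5 channels per day, stacked along a time axis. The channel order and the 30-day demo length are my own choices for illustration.

```python
import numpy as np

# Hypothetical layout: each day is a 128x128 grid with 5 channels
# (avg temperature, wind direction, wind speed, stream speed,
# stream temperature). 30 days here just to keep the demo small;
# the full dataset would have 3650 days.
DAYS, H, W, CHANNELS = 30, 128, 128, 5

# Placeholder values; in practice each cell holds the daily average
# over its ~16x16 km area.
data = np.zeros((DAYS, H, W, CHANNELS), dtype=np.float32)

print(data.shape)  # (30, 128, 128, 5)
```

This is exactly the shape a Keras `Conv2D` layer expects per time step, with the 5 attributes standing in for colour channels.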
From my dataset of the last 10 years (3650 days) I can calculate all 5 attributes for each of the 128x128 pixels per day. I must admit that I am a deep learning beginner, so I depend on your help and experience regarding how to label the data (if needed) to train the network. After training, the network shall be able to predict the probability that the stream speed exceeds the 25 km/h level in any of the pixels AND in which pixel, within the next x days, based on a sequential dataset of the last y days.
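One common way to label such data is a binary target per pixel: 1 if the stream speed exceeds the threshold on any of the next x days, else 0. The sketch below assumes stream speed is stored as channel index 3 of the data cube and uses a toy 8x8 grid with random values; the horizon of 3 days stands in for "x".

```python
import numpy as np

THRESHOLD = 25.0   # km/h exceedance level from the project description
HORIZON = 3        # "next x days" -- adjust to your needs

# Toy data: 20 days of 8x8 grids with 5 channels, random values in [0, 30).
rng = np.random.default_rng(0)
data = rng.uniform(0, 30, size=(20, 8, 8, 5)).astype(np.float32)
stream_speed = data[..., 3]           # assumed channel index; shape (days, H, W)

# label[t, i, j] = 1 if the speed in pixel (i, j) exceeds the threshold
# on at least one of the HORIZON days after day t.
n_days = stream_speed.shape[0]
labels = np.zeros((n_days - HORIZON, 8, 8), dtype=np.float32)
for t in range(n_days - HORIZON):
    future = stream_speed[t + 1 : t + 1 + HORIZON]
    labels[t] = (future > THRESHOLD).any(axis=0)
```

With labels built this way, the task becomes per-pixel binary classification, which matches a sigmoid output layer and a binary cross-entropy loss. Since exceedance happens only ~10 times a year, the classes will be very imbalanced, so class weighting or oversampling is worth considering.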
In detail, what I need:
First of all, the text/CSV-based dataset I have must be pre-processed so that Python can handle it. As far as I know, the HDF5 file format is best for this purpose, so you must tell me how to set up the CSV/text file structure so that it can be converted into an HDF5 file. If you say there is no need for an HDF5 file, or that there is another format which is better or easier to handle, then that is of course also fine.
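A simple CSV structure that converts cleanly to HDF5 is one row per (day, pixel), carrying the pixel's grid coordinates and the 5 attributes as separate columns. The column names below are my own suggestion, and the demo uses a tiny 2x2 grid in place of the real 128x128 one; the conversion uses the `h5py` library.

```python
import csv
import io
import numpy as np
import h5py

# Suggested CSV layout (column names are hypothetical): one row per
# (day, pixel), with grid coordinates and the 5 attributes.
csv_text = """day,row,col,temp,wind_dir,wind_speed,stream_speed,stream_temp
0,0,0,14.2,180,12.0,8.5,13.9
0,0,1,14.1,175,11.5,9.0,13.8
1,0,0,14.4,190,13.0,10.2,14.0
1,0,1,14.3,185,12.5,26.1,13.9
"""

H = W = 2          # tiny demo grid; 128 in the real project
ATTRS = ["temp", "wind_dir", "wind_speed", "stream_speed", "stream_temp"]

rows = list(csv.DictReader(io.StringIO(csv_text)))
n_days = max(int(r["day"]) for r in rows) + 1

# Pack the flat table into a single (days, H, W, channels) array.
cube = np.zeros((n_days, H, W, len(ATTRS)), dtype=np.float32)
for r in rows:
    cube[int(r["day"]), int(r["row"]), int(r["col"])] = [float(r[a]) for a in ATTRS]

# Store the whole cube as one compressed HDF5 dataset.
with h5py.File("ocean_demo.h5", "w") as f:
    f.create_dataset("data", data=cube, compression="gzip")
```

HDF5 is a reasonable choice here (one array, fast slicing, compression), but for a dataset of this size a plain NumPy `.npy`/`.npz` file would work just as well and is even simpler to handle.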
In a second step you must provide me with Keras/Python code that imports the source data and feeds it through one or more convolutional layers, with max pooling, dropout and fully connected layers where sensible, followed by an RNN to bring the temporal aspect into the network processing. Finally, the network must output, for each pixel, the probability that the stream speed exceeds 25 km/h within that pixel.
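One way to arrange such a CNN-then-RNN architecture in Keras is to wrap the convolutional part in `TimeDistributed` so it runs on each day's grid independently, then feed the resulting per-day feature vectors into an LSTM, and finally emit one sigmoid probability per pixel. The layer sizes, sequence length of 7 days and dropout rate below are placeholder choices for you to tune, as stated in the project description.

```python
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 7, 128, 128, 5   # last y days of 128x128 grids, 5 channels

inputs = layers.Input(shape=(SEQ_LEN, H, W, C))

# CNN applied to each day's grid independently via TimeDistributed.
x = layers.TimeDistributed(layers.Conv2D(16, 3, padding="same", activation="relu"))(inputs)
x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(x)
x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
x = layers.TimeDistributed(layers.Flatten())(x)
x = layers.TimeDistributed(layers.Dense(256, activation="relu"))(x)
x = layers.Dropout(0.3)(x)

# RNN over the temporal axis; the last hidden state summarises the sequence.
x = layers.LSTM(256)(x)

# One sigmoid probability per pixel, reshaped back into the grid.
x = layers.Dense(H * W, activation="sigmoid")(x)
outputs = layers.Reshape((H, W))(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Training samples would then be sliding windows of y consecutive days paired with the per-pixel exceedance labels for the following x days. An alternative worth exploring is Keras's `ConvLSTM2D` layer, which fuses the convolutional and recurrent steps and preserves the spatial layout through the recurrence.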
It is not part of your job to make the network work successfully; it is up to me to experiment with the order, number and size of the layers.