{"id":62,"date":"2017-12-13T01:25:10","date_gmt":"2017-12-13T01:25:10","guid":{"rendered":"http:\/\/www.goodbits.ca\/?p=62"},"modified":"2017-12-28T21:42:12","modified_gmt":"2017-12-28T21:42:12","slug":"a-look-at-tensor-flow-in-c-on-fedora-google-wtf","status":"publish","type":"post","link":"https:\/\/www.goodbits.ca\/index.php\/2017\/12\/13\/a-look-at-tensor-flow-in-c-on-fedora-google-wtf\/","title":{"rendered":"Google Tensor Flow In C On Fedora Linux"},"content":{"rendered":"<h1>Decided to have a look at google <a href=\"https:\/\/www.tensorflow.org\/install\/install_c\">tensor flow<\/a> to see what all the hype is about.<\/h1>\n<p>Reading the first page of how to install under Linux just blew my mind. Nice work google.<\/p>\n<p>So apparently you are supposed to run this:<\/p>\n<pre class=\"lang:default decode:true\"> TF_TYPE=\"cpu\" # Change to \"gpu\" for GPU support\r\n OS=\"linux\" # Change to \"darwin\" for Mac OS\r\n TARGET_DIRECTORY=\"\/usr\/local\"\r\n curl -L \\\r\n   \"https:\/\/storage.googleapis.com\/tensorflow\/libtensorflow\/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.4.0.tar.gz\" |\r\n   sudo tar -C $TARGET_DIRECTORY -xz<\/pre>\n<p>Okay that works.<\/p>\n<p>Then:<\/p>\n<pre class=\"lang:default decode:true\">sudo ldconfig\r\n<\/pre>\n<p>Oh wait linux generally doesn&#8217;t include \/usr\/local\/lib by default from any ld.so conf configuration. Well, at least Fedora doesn&#8217;t.<\/p>\n<p>Easy fix (a plain sudo echo with a redirect won&#8217;t work since the shell does the redirect as your user, so use tee, then re-run ldconfig):<\/p>\n<pre class=\"lang:default decode:true\">echo \/usr\/local\/lib | sudo tee \/etc\/ld.so.conf.d\/usr-local.conf\r\nsudo ldconfig\r\n<\/pre>\n<p>Then they say to run:<\/p>\n<pre class=\"lang:default decode:true\">gcc hello_tf.c\r\n\/tmp\/ccqYuwfC.o: In function `main':\r\nhello_tf.c:(.text+0xa): undefined reference to `TF_Version'\r\ncollect2: error: ld returned 1 exit status\r\n<\/pre>\n<p>Hmm okay, that will never work. 
What magic voodoo will make that function appear for the linker:<\/p>\n<p>Right, need to add the lib:<\/p>\n<pre class=\"lang:default decode:true\">gcc hello_tf.c -ltensorflow<\/pre>\n<p>And then run it like google says:<\/p>\n<pre class=\"lang:default decode:true \">a.out\r\n<\/pre>\n<p>Hey google guess what, linux doesn&#8217;t generally have the current directory in the path. What kind of weird linux are you running?<\/p>\n<p>Maybe try:<\/p>\n<pre class=\"lang:default decode:true \">.\/a.out \r\nHello from TensorFlow C library version 1.4.0\r\n<\/pre>\n<p>Wowzer it works.<\/p>\n<p>Google is usually good about their docs. \u00a0What happened? \u00a0Too many pythons?<\/p>\n<p>Okay so how to use this thing in C?<\/p>\n<p>Oh wait there aren&#8217;t any docs for the C library.<\/p>\n<p>Check the header.. that doesn&#8217;t look too promising.<\/p>\n<p>Check the github project page. Hmm not much there.<\/p>\n<p>So back to the <a href=\"https:\/\/www.tensorflow.org\/get_started\/get_started\">Getting Started python tutorial<\/a>. 
See if it can work in C?<\/p>\n<p>Attempt Digging Through c_api.h<\/p>\n<p>Good news: the header has a lot of documentation.<\/p>\n<h2 id=\"the_computational_graph\">The Computational Graph<\/h2>\n<p>Found this:<\/p>\n<pre class=\"lang:default decode:true\">typedef struct TF_Graph TF_Graph;\r\n<\/pre>\n<p>And this:<\/p>\n<pre class=\"lang:default decode:true\">TF_CAPI_EXPORT extern TF_Graph* TF_NewGraph();\r\n<\/pre>\n<p>Must be on to something.<\/p>\n<h2>A Tensor<\/h2>\n<p>Where?<\/p>\n<pre class=\"lang:default decode:true \">typedef struct TF_Tensor TF_Tensor;\r\n<\/pre>\n<p>Must be that and:<\/p>\n<pre class=\"lang:default decode:true\">TF_CAPI_EXPORT extern TF_Tensor* TF_NewTensor(\r\n    TF_DataType, const int64_t* dims, int num_dims, void* data, size_t len,\r\n    void (*deallocator)(void* data, size_t len, void* arg),\r\n    void* deallocator_arg);<\/pre>\n<p>So how can I get those numbers from the example in there?<\/p>\n<p>Docs:<\/p>\n<pre class=\"lang:default decode:true\">\/\/ --------------------------------------------------------------------------\r\n\/\/ TF_Tensor holds a multi-dimensional array of elements of a single data type.\r\n\/\/ For all types other than TF_STRING, the data buffer stores elements\r\n\/\/ in row major order.  E.g. 
if data is treated as a vector of TF_DataType:\r\n<\/pre>\n<p>So I&#8217;m looking for a tensor that holds floats.<\/p>\n<p>Here&#8217;s what I have so far:<\/p>\n<pre class=\"lang:c decode:true\">#include &lt;stdio.h&gt;\r\n#include &lt;tensorflow\/c\/c_api.h&gt;\r\n\r\nvoid tensor_free(void* data, size_t len, void* arg) {\r\n        printf(\"Free Called\\n\");\r\n}\r\n\r\nint main() {\r\n\r\n        int64_t dims[1] = {1};\r\n        int num_dims = 1;\r\n\r\n\r\n        float tens_1_data[] = {3.0f};\r\n        size_t tens_1_data_len = sizeof(tens_1_data); \/\/ byte length, not element count\r\n\r\n        float tens_2_data[] = {4.0f};\r\n        size_t tens_2_data_len = sizeof(tens_2_data);\r\n\r\n        printf(\"Hello from TensorFlow C library version %s\\n\", TF_Version());\r\n\r\n        TF_Tensor * tensor1 =  TF_NewTensor(TF_FLOAT, dims, num_dims, tens_1_data, tens_1_data_len, tensor_free, NULL);\r\n        TF_Tensor * tensor2 =  TF_NewTensor(TF_FLOAT, dims, num_dims, tens_2_data, tens_2_data_len, tensor_free, NULL);\r\n\r\n\r\n        TF_DeleteTensor(tensor1);\r\n        TF_DeleteTensor(tensor2);\r\n        return 0;\r\n}\r\n<\/pre>\n<p>Quick valgrind check.<\/p>\n<p>Bunch of leaks.. need to shut something down.<\/p>\n<p>Can&#8217;t seem to find a shutdown\/free\/.. anything.. 
leave that for later.<\/p>\n<pre class=\"lang:default decode:true\">valgrind --leak-check=full .\/a.out<\/pre>\n<h2>Getting Some Output<\/h2>\n<pre class=\"lang:default decode:true\">Hello from TensorFlow C library version 1.4.0\r\n2017-12-12 17:56:58.424881: I tensorflow\/core\/platform\/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1\r\nFree Called\r\nFree Called\r\n<\/pre>\n<h2>Need A Session<\/h2>\n<p>Found this, which must be it, but I need a graph and options to make one??<\/p>\n<pre class=\"lang:default decode:true\">typedef struct TF_Session TF_Session;\r\nTF_CAPI_EXPORT extern TF_Session* TF_NewSession(TF_Graph* graph,\r\n const TF_SessionOptions* opts,\r\n TF_Status* status);\r\n<\/pre>\n<p>Okay, the docs say something about NULL being ok.<\/p>\n<p>Try:<\/p>\n<pre class=\"lang:default decode:true \">        TF_Session * session = TF_NewSession(NULL, NULL, NULL);\r\n<\/pre>\n<p>Boom segfault.<\/p>\n<p>So probably the first parameter is the most important.. Try to make a graph.<\/p>\n<pre class=\"lang:default decode:true \">        TF_Graph * graph = TF_NewGraph();\r\n<\/pre>\n<p>Nope still segfaults.<\/p>\n<p>Try making options.<\/p>\n<pre class=\"lang:default decode:true \">        TF_SessionOptions * options = TF_NewSessionOptions();\r\n<\/pre>\n<p>Nope still segfaults.<\/p>\n<p>Add a status and it&#8217;s happy now.. 
no more crashing.<\/p>\n<pre class=\"lang:default decode:true \">        TF_Status * status = TF_NewStatus();\r\n<\/pre>\n<p>And clean all those up.<\/p>\n<pre class=\"lang:default decode:true \">        TF_DeleteGraph(graph);\r\n        TF_DeleteSessionOptions(options);\r\n        TF_DeleteStatus(status);\r\n        TF_DeleteTensor(tensor1);\r\n        TF_DeleteTensor(tensor2);<\/pre>\n<p>Runs and exits ok now.<\/p>\n<h2>Run The Session?<\/h2>\n<pre class=\"lang:default decode:true \">TF_CAPI_EXPORT extern void TF_SessionRun(\r\n    TF_Session* session,\r\n    \/\/ RunOptions\r\n    const TF_Buffer* run_options,\r\n    \/\/ Input tensors\r\n    const TF_Output* inputs, TF_Tensor* const* input_values, int ninputs,\r\n    \/\/ Output tensors\r\n    const TF_Output* outputs, TF_Tensor** output_values, int noutputs,\r\n    \/\/ Target operations\r\n    const TF_Operation* const* target_opers, int ntargets,\r\n    \/\/ RunMetadata\r\n    TF_Buffer* run_metadata,\r\n    \/\/ Output status\r\n    TF_Status*);\r\n<\/pre>\n<p>That must be it.<\/p>\n<p>Looks like some TF_Operations and arrays of TF_Tensors for I\/O.<\/p>\n<p>After not finding many examples, many attempts at different guesses, and reading the test code in\u00a0c_api_test.cc and\u00a0c_test_util.cc, I&#8217;ve managed to add two numbers together.<\/p>\n<p>The output tensor is allocated by TF_SessionRun, so the manual allocation has been removed.<\/p>\n<h1>The final extremely basic C example<\/h1>\n<pre class=\"lang:c decode:true \">#include &lt;stdio.h&gt;\r\n#include &lt;stdlib.h&gt;\r\n#include &lt;string.h&gt;\r\n#include &lt;tensorflow\/c\/c_api.h&gt;\r\n\r\n\/*\r\n * Super basic example of using google tensorflow directly from C\r\n *\r\n *\/\r\n\r\n\/\/ Using stack input data, nothing to free\r\nvoid tensor_free_none(void * data, size_t len, void* arg) {\r\n}\r\n\r\nTF_Operation * PlaceHolder(TF_Graph * graph, TF_Status * status, TF_DataType dtype, const char * name) {\r\n\tTF_OperationDescription * desc = 
TF_NewOperation(graph, \"Placeholder\", name);\r\n\tTF_SetAttrType(desc, \"dtype\", dtype);\r\n\treturn TF_FinishOperation(desc, status);\r\n}\r\n\r\nTF_Operation * Const(TF_Graph * graph, TF_Status * status, TF_Tensor * tensor, const char * name) {\r\n\tTF_OperationDescription * desc = TF_NewOperation(graph, \"Const\", name);\r\n\tTF_SetAttrTensor(desc, \"value\", tensor, status);\r\n\tTF_SetAttrType(desc, \"dtype\", TF_TensorType(tensor));\r\n\treturn TF_FinishOperation(desc, status);\r\n}\r\n\r\nTF_Operation * Add(TF_Graph * graph, TF_Status * status, TF_Operation * one, TF_Operation * two, const char * name) {\r\n\tTF_OperationDescription * desc = TF_NewOperation(graph, \"AddN\", name);\r\n\tTF_Output add_inputs[2] = {{one, 0}, {two, 0}};\r\n\tTF_AddInputList(desc, add_inputs, 2);\r\n\treturn TF_FinishOperation(desc, status);\r\n}\r\n\r\nint main() {\r\n\tprintf(\"TensorFlow C library version: %s\\n\", TF_Version());\r\n\r\n\tTF_Graph * graph = TF_NewGraph();\r\n\tTF_SessionOptions * options = TF_NewSessionOptions();\r\n\tTF_Status * status = TF_NewStatus();\r\n\tTF_Session * session = TF_NewSession(graph, options, status);\r\n\r\n\tfloat in_val_one = 4.0f;\r\n\tfloat const_two = 2.0f;\r\n\r\n\tTF_Tensor * tensor_in = TF_NewTensor(TF_FLOAT, NULL, 0, &amp;in_val_one, sizeof(float), tensor_free_none, NULL);\r\n\tTF_Tensor * tensor_out = NULL; \/\/ easy access after this is allocated by TF_SessionRun\r\n\tTF_Tensor * tensor_const_two = TF_NewTensor(TF_FLOAT, NULL, 0, &amp;const_two, sizeof(float), tensor_free_none, NULL);\r\n\r\n\t\/\/ Operations\r\n\tTF_Operation * feed = PlaceHolder(graph, status, TF_FLOAT, \"feed\");\r\n\tTF_Operation * two = Const(graph, status, tensor_const_two, \"const\");\r\n\tTF_Operation * add = Add(graph, status, feed, two, \"add\");\r\n\r\n\t\/\/ Session Inputs\r\n\tTF_Output input_operations[] = { feed, 0 };\r\n\tTF_Tensor * input_tensors[1] = { tensor_in };\r\n\r\n\t\/\/ Session Outputs\r\n\tTF_Output output_operations[] = { add, 
0 };\r\n\tTF_Tensor ** output_tensors = &amp;tensor_out;\r\n\r\n\tTF_SessionRun(session, NULL,\r\n\t\t\t\/\/ Inputs\r\n\t\t\tinput_operations, input_tensors, 1,\r\n\t\t\t\/\/ Outputs\r\n\t\t\toutput_operations, output_tensors, 1,\r\n\t\t\t\/\/ Target operations\r\n\t\t\tNULL, 0, NULL,\r\n\t\t\tstatus);\r\n\r\n\tprintf(\"Session Run Status: %d - %s\\n\", TF_GetCode(status), TF_Message(status) );\r\n\tif (TF_GetCode(status) != TF_OK) \/\/ don't touch tensor_out if the run failed\r\n\t\treturn 1;\r\n\tprintf(\"Output Tensor Type: %d\\n\", TF_TensorType(tensor_out));\r\n\tfloat * outval = TF_TensorData(tensor_out);\r\n\tprintf(\"Output Tensor Value: %.2f\\n\", *outval);\r\n\r\n\tTF_CloseSession(session, status);\r\n\tTF_DeleteSession(session, status);\r\n\r\n\tTF_DeleteSessionOptions(options);\r\n\r\n\tTF_DeleteGraph(graph);\r\n\r\n\tTF_DeleteTensor(tensor_in);\r\n\tTF_DeleteTensor(tensor_out);\r\n\tTF_DeleteTensor(tensor_const_two);\r\n\r\n\tTF_DeleteStatus(status);\r\n\treturn 0;\r\n}\r\n<\/pre>\n<p>To build and run:<\/p>\n<pre class=\"lang:default decode:true\">gcc -g3 hello_tf.c -ltensorflow -o hello\r\n\r\n.\/hello<\/pre>\n<p>Notes:<\/p>\n<p>The underlying library is written in C++, so there is really no point in doing this unless you have some C code that needs to integrate with TensorFlow from there.<\/p>\n<p>Valgrind still finds some leftover memory from pthread_create. Couldn&#8217;t figure out a way to clean up the lib completely. 
\u00a0Doesn&#8217;t seem to be any function to join the internal threads.<\/p>\n<pre class=\"lang:default decode:true \">==5693== HEAP SUMMARY:\r\n==5693==     in use at exit: 5,358,141 bytes in 105,451 blocks\r\n==5693==   total heap usage: 310,615 allocs, 205,164 frees, 17,029,114 bytes allocated\r\n==5693== \r\n==5693== 640 bytes in 2 blocks are possibly lost in loss record 63,119 of 63,227\r\n==5693==    at 0x4C2FA50: calloc (vg_replace_malloc.c:711)\r\n==5693==    by 0x4013F8A: _dl_allocate_tls (in \/usr\/lib64\/ld-2.24.so)\r\n==5693==    by 0x8D3B2DB: pthread_create@@GLIBC_2.2.5 (in \/usr\/lib64\/libpthread-2.24.so)\r\n==5693==    by 0x9214C92: std::thread::_M_start_thread(std::shared_ptr&lt;std::thread::_Impl_base&gt;, void (*)()) (in \/usr\/lib64\/libstdc++.so.6.0.22)\r\n==5693==    by 0x9214D9C: std::thread::_M_start_thread(std::shared_ptr&lt;std::thread::_Impl_base&gt;) (in \/usr\/lib64\/libstdc++.so.6.0.22)\r\n==5693==    by 0x814BFDF: tensorflow::(anonymous namespace)::PosixEnv::StartThread(tensorflow::ThreadOptions const&amp;, std::string const&amp;, std::function&lt;void ()&gt;) (in \/usr\/local\/lib\/libtensorflow_framework.so)\r\n==5693==    by 0x8124A46: tensorflow::thread::ThreadPool::ThreadPool(ten<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Decided to have a look at google tensor flow to see what all the hype is about. Reading the first page of how to install under Linux just blew my mind. Nice work google. 
So apparently you are supposed to run this: TF_TYPE=&#8221;cpu&#8221; # Change to &#8220;gpu&#8221; for GPU support OS=&#8221;linux&#8221; # Change to &#8220;darwin&#8221; &hellip; <a href=\"https:\/\/www.goodbits.ca\/index.php\/2017\/12\/13\/a-look-at-tensor-flow-in-c-on-fedora-google-wtf\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Google Tensor Flow In C On Fedora Linux<\/span> <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[22,3],"tags":[],"class_list":["post-62","post","type-post","status-publish","format-standard","hentry","category-c","category-development"],"_links":{"self":[{"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/posts\/62","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/comments?post=62"}],"version-history":[{"count":4,"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/posts\/62\/revisions"}],"predecessor-version":[{"id":67,"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/posts\/62\/revisions\/67"}],"wp:attachment":[{"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/media?parent=62"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/categories?post=62"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.goodbits.ca\/index.php\/wp-json\/wp\/v2\/tags?post=62"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}