Arrays in Python
Python does not have built-in support for arrays; lists are generally used instead. The length of an array is always one more than the highest array index, because indexing starts at zero. To do element-wise mathematics on plain Python lists you have to write loops, while NumPy provides this support in an efficient, vectorized manner.
Where
T is a list. A list can contain different data types within a single list.
Z is a list (it can be a scalar, vector, matrix, etc.).
If T is a list, then a function F is not defined from T to Z.
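As a small illustration of the point above (a minimal sketch with made-up values): the + operator on lists concatenates rather than adds, so element-wise mathematics on lists needs an explicit loop.

t = [1, 2, 3]
u = [4, 5, 6]

concatenated = t + u                      # [1, 2, 3, 4, 5, 6]  (concatenation, not addition)
summed = [a + b for a, b in zip(t, u)]    # [5, 7, 9]           (a loop does the element-wise math)

print(concatenated)
print(summed)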
Where
T is a NumPy ndarray. An ndarray is a multidimensional (n-dimensional) array of fixed size with homogeneous elements, i.e., the data type of all the elements in the array is the same. NumPy is the library for working with such arrays. NumPy supports the CPU only and has no automatic differentiation. Values can be numerical. These arrays are used in SciPy, scikit-learn, XGBoost, and LightGBM, which are machine learning platforms.
Z is a tensor or NumPy ndarray (it can be a scalar, vector, matrix, tensor, etc.).
If T is a NumPy ndarray, then a function F is not defined from T to Z.
To do mathematics with NumPy (a short runnable sketch follows this list):
create a new array : array_ = np.array([[1, 2, 3]])
sum per row : np.sum(array_, axis=1)
matrix multiplication : np.matmul(array_a, array_b)
divide items elementwise : np.divide(array_a, array_b)
add items elementwise : np.add(array_a, array_b)
subtract items elementwise : np.subtract(array_a, array_b)
reshape item : np.reshape(array_, (3, 1))
shape of item : array_.shape
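Putting these operations together, here is a short runnable sketch (the values are chosen only for illustration, with 2 x 2 arrays so that per-row operations make sense):

import numpy as np

array_a = np.array([[1, 2], [3, 4]])      # create a new 2 x 2 array
array_b = np.array([[5, 6], [7, 8]])

print(np.sum(array_a, axis=1))            # sum per row            -> [3 7]
print(np.matmul(array_a, array_b))        # matrix multiplication
print(np.divide(array_a, array_b))        # element-wise division
print(np.add(array_a, array_b))           # element-wise addition
print(np.subtract(array_a, array_b))      # element-wise subtraction
print(np.reshape(array_a, (1, 4)))        # reshape to 1 x 4       -> [[1 2 3 4]]
print(array_a.shape)                      # shape of the array     -> (2, 2)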
Where
T is a tensor. A tensor is a multidimensional array with a uniform data type (dtype). Updating a tensor in place is not defined; instead a new tensor is created. TensorFlow is a library for working with tensors. If the TensorFlow library is used, it decides when to use the GPU and when to use the CPU, because the library takes care of device placement automatically. TensorFlow supports CPU/GPU/TPU and also automatic differentiation. Values can be numerical or strings. Do not try to append, insert, delete, or change a value in a tensor. Tensors are used in TensorFlow and also in PyTorch (these two are deep learning platforms).
Z is a tensor (it can be a scalar, vector, matrix, tensor, etc.).
F is a function from the tensor T to another tensor Z.
Since the function F is defined on the tensor T, it is possible to compute the derivative of F with respect to T. (A GPU is very useful for computing the derivative of F.)
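Because F is defined on a tensor, TensorFlow can compute the derivative automatically. A minimal sketch with tf.GradientTape (the function F and the values here are illustrative only):

import tensorflow as tf

T = tf.Variable([1.0, 2.0, 3.0])   # T as a trainable tensor, so it is tracked for differentiation

with tf.GradientTape() as tape:
    Z = tf.reduce_sum(T * T)       # F: a simple function from tensor T to tensor Z

dZ_dT = tape.gradient(Z, T)        # derivative of F with respect to T
print(dZ_dT)                       # -> [2. 4. 6.]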
To do mathematics with the TensorFlow version of a tensor (a short runnable sketch follows this list):
create a new tensor : tensor_ = tf.constant([[1, 2, 3]])
sum per row : tf.reduce_sum(tensor_, axis=1)
matrix multiplication : tf.matmul(tensor_a, tensor_b)
divide items elementwise : tf.divide(tensor_a, tensor_b)
add items elementwise : tf.add(tensor_a, tensor_b)
subtract items elementwise : tf.subtract(tensor_a, tensor_b)
reshape item : tf.reshape(tensor_, (3, 1))
shape of item : tensor_.get_shape()
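Here is the corresponding runnable sketch with TensorFlow (again with illustrative 2 x 2 values):

import tensorflow as tf

tensor_a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # create a new 2 x 2 tensor
tensor_b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

print(tf.reduce_sum(tensor_a, axis=1))   # sum per row            -> [3. 7.]
print(tf.matmul(tensor_a, tensor_b))     # matrix multiplication
print(tf.divide(tensor_a, tensor_b))     # element-wise division
print(tf.add(tensor_a, tensor_b))        # element-wise addition
print(tf.subtract(tensor_a, tensor_b))   # element-wise subtraction
print(tf.reshape(tensor_a, (1, 4)))      # reshape to 1 x 4
print(tensor_a.get_shape())              # shape of the tensor    -> (2, 2)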
Using a tensor to represent data is a good option. The points above show that a plain numpy.ndarray is a weaker choice for this purpose, since it offers no automatic differentiation and cannot live on an accelerator. A tensor is the more suitable choice when a GPU is available for computing, because a tensor can reside in GPU accelerator memory. Tensors are immutable.
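A small sketch of the immutability point: item assignment on a TensorFlow tensor raises a TypeError, so the defined way to "change" a value is to build a new tensor (the exact error message may vary between versions).

import tensorflow as tf

tensor_ = tf.constant([1, 2, 3])

try:
    tensor_[0] = 10                    # tensors are immutable; item assignment is not supported
except TypeError as err:
    print("cannot update in place:", err)

# The defined approach: create a new tensor from the old one
new_tensor = tf.concat([tf.constant([10]), tensor_[1:]], axis=0)
print(new_tensor)                      # -> [10  2  3]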
An image file is transformed into a tensor. The tensor used in a TensorFlow model is not the same as the tensor used in PyTorch.
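For example, TensorFlow's I/O utilities can read a JPEG file and decode it into a tensor (the file name photo.jpg below is hypothetical and used only for illustration):

import tensorflow as tf

raw = tf.io.read_file("photo.jpg")           # "photo.jpg" is a hypothetical path
image = tf.io.decode_jpeg(raw, channels=3)   # uint8 tensor of shape (height, width, 3)

print(image.dtype, image.shape)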
Diagram: the transformation of a tensor A to a tensor B, where A is a tensor used in TensorFlow and B is a tensor used in PyTorch.
Diagram: the transformation of the tensor B back to the tensor A, where A is a tensor used in TensorFlow and B is a tensor used in PyTorch.
Diagram: the two-way transformation between the TensorFlow tensor A and the PyTorch tensor B, via a NumPy array. The TensorFlow tensor is converted into a NumPy array, and the NumPy array is then converted into a PyTorch tensor. Taking B back to A works the same way, again using a NumPy array as the in-between value. In both directions, the transformation of the tensor is done through a NumPy array.
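A minimal sketch of this round trip through a NumPy array (assuming both TensorFlow and PyTorch are installed, with illustrative values):

import tensorflow as tf
import torch

tensor_a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # tensor A in TensorFlow

numpy_array = tensor_a.numpy()                     # A -> NumPy array
tensor_b = torch.from_numpy(numpy_array)           # NumPy array -> tensor B in PyTorch

numpy_back = tensor_b.numpy()                      # B -> NumPy array
tensor_a_again = tf.constant(numpy_back)           # NumPy array -> back to a TensorFlow tensor

print(tensor_b)
print(tensor_a_again)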