Apache SINGA
A distributed deep learning platform.
Header file of the tensor data structure and functions. Convention: this library requires explicit memory allocation and de-allocation; all the data structures Tensor<cpu,1>, Tensor<gpu,1>, etc. behave like handles (pointers), and no memory allocation happens during calculation. More...
#include "tensor_base.h"
#include "tensor_expr.h"
#include "tensor_expr_engine-inl.hpp"
#include "tensor_cpu-inl.hpp"
#include "tensor_gpu-inl.hpp"
#include "tensor_expr_ext.h"
#include "tensor_io.h"
#include "tensor_container.h"
#include "tensor_random.h"
Classes | |
struct | mshadow::Shape< dimension > |
shape of a tensor. IMPORTANT NOTE: this shape is different from numpy.shape: shape[0] gives the lowest dimension, shape[dimension-1] gives the highest dimension, and shape[k] corresponds to the k-th dimension of the tensor. More... | |
struct | mshadow::cpu |
device name CPU More... | |
struct | mshadow::gpu |
device name GPU More... | |
struct | mshadow::Tensor< Device, dimension > |
general tensor More... | |
struct | mshadow::Tensor< Device, 1 > |
Namespaces | |
mshadow | |
namespace for mshadow | |
Functions | |
MSHADOW_XINLINE Shape< 1 > | mshadow::Shape1 (index_t s0) |
construct a one-dimensional shape; the stride will equal s0 More... | |
MSHADOW_XINLINE Shape< 2 > | mshadow::Shape2 (index_t s1, index_t s0) |
construct a two-dimensional shape; the stride will equal s0 More... | |
MSHADOW_XINLINE Shape< 3 > | mshadow::Shape3 (index_t s2, index_t s1, index_t s0) |
construct a three-dimensional shape; the stride will equal s0 More... | |
MSHADOW_XINLINE Shape< 4 > | mshadow::Shape4 (index_t s3, index_t s2, index_t s1, index_t s0) |
construct a four-dimensional shape; the stride will equal s0 More... | |
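A minimal sketch of the shape convention above (the function and variable names are illustrative only): the constructor arguments run from the highest dimension down to the lowest, so the last argument always lands in shape[0], the lowest and contiguous dimension. Unlike numpy.shape, index 0 refers to the innermost axis.

    #include "tensor.h"
    using namespace mshadow;

    void ShapeExample(void) {
      // arguments run from the highest dimension down to the lowest
      Shape<2> s2 = Shape2(4, 5);     // s2[1] == 4, s2[0] == 5 (lowest dimension)
      Shape<3> s3 = Shape3(2, 4, 5);  // s3[2] == 2, s3[1] == 4, s3[0] == 5
    }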
void | mshadow::InitTensorEngine (int device_id=0) |
initialize the tensor engine; used to call the initialization functions of dependent libraries. This function should be called before all GPU tensor operations; for tensors used only on the CPU, this call is not needed. More... | |
void | mshadow::ShutdownTensorEngine (void) |
Shut down the tensor engine; this function should be called after all GPU tensor operations. For tensors used only on the CPU, this call is not needed. | |
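A sketch of the expected call order when GPU tensors are used; the device id 0 is an arbitrary choice here. As noted above, CPU-only code can skip both calls.

    #include "tensor.h"
    using namespace mshadow;

    int main(void) {
      InitTensorEngine(0);       // must come before any GPU tensor operation
      // ... allocate and use Tensor<gpu, dim> objects here ...
      ShutdownTensorEngine();    // must come after all GPU tensor operations
      return 0;
    }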
template<int dim> | |
void | mshadow::AllocSpace (Tensor< cpu, dim > &obj, bool pad=MSHADOW_ALLOC_PAD) |
CPU/GPU: allocate space for the tensor according to the shape in obj; this function is responsible for setting the stride_ in obj.shape. More... | |
template<int dim> | |
void | mshadow::AllocSpace (Tensor< gpu, dim > &obj, bool pad=MSHADOW_ALLOC_PAD) |
refer to the comment of the CPU version More... | |
template<int dim> | |
void | mshadow::FreeSpace (Tensor< cpu, dim > &obj) |
CPU/GPU: free the space of the tensor; this will set obj.dptr to NULL. More... | |
template<int dim> | |
void | mshadow::FreeSpace (Tensor< gpu, dim > &obj) |
refer to the comment of the CPU version More... | |
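A minimal allocation sketch for the CPU overloads above (the GPU overloads follow the same pattern), assuming the Tensor constructor that takes a shape: the tensor is only a handle until AllocSpace is called, and FreeSpace must be called explicitly because nothing is freed automatically.

    #include "tensor.h"
    using namespace mshadow;

    void AllocExample(void) {
      Tensor<cpu, 2> mat(Shape2(4, 5));  // a handle only: no memory allocated yet
      AllocSpace(mat);                   // allocates mat.dptr and sets the stride
      mat[0][0] = 1.0f;                  // the buffer can now be accessed
      FreeSpace(mat);                    // releases the buffer; mat.dptr becomes NULL
    }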
template<typename Device , int dim> | |
Tensor< Device, dim > | mshadow::NewTensor (const Shape< dim > &shape, real_t initv, bool pad=MSHADOW_ALLOC_PAD) |
CPU/GPU: shortcut to allocate and initialize a Tensor. More... | |
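NewTensor bundles the allocate-and-initialize pattern above into a single call; a short sketch (names and sizes are illustrative), noting that the caller is still responsible for FreeSpace.

    #include "tensor.h"
    using namespace mshadow;

    void NewTensorExample(void) {
      // allocate a 4x5 CPU tensor and set every element to 1.0f
      Tensor<cpu, 2> mat = NewTensor<cpu>(Shape2(4, 5), 1.0f);
      // ... use mat ...
      FreeSpace(mat);  // explicit de-allocation, as required by the convention
    }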
template<int dim> | |
void | mshadow::Copy (Tensor< cpu, dim > dst, const Tensor< cpu, dim > &src) |
copy data from one tensor to another with the same shape More... | |
template<int dim> | |
void | mshadow::Copy (Tensor< cpu, dim > dst, const Tensor< gpu, dim > &src) |
refer to the comment of the CPU version More... | |
template<int dim> | |
void | mshadow::Copy (Tensor< gpu, dim > dst, const Tensor< cpu, dim > &src) |
refer to the comment of the CPU version More... | |
template<int dim> | |
void | mshadow::Copy (Tensor< gpu, dim > dst, const Tensor< gpu, dim > &src) |
refer to the comment of the CPU version More... | |
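A sketch of the Copy overloads in use, assuming the library was built with GPU support; the source and destination shapes must match.

    #include "tensor.h"
    using namespace mshadow;

    void CopyExample(void) {
      InitTensorEngine(0);
      Tensor<cpu, 2> host = NewTensor<cpu>(Shape2(4, 5), 1.0f);
      Tensor<gpu, 2> dev  = NewTensor<gpu>(Shape2(4, 5), 0.0f);
      Copy(dev, host);   // host -> device
      Copy(host, dev);   // device -> host
      FreeSpace(host);
      FreeSpace(dev);
      ShutdownTensorEngine();
    }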
void | mshadow::Softmax (Tensor< cpu, 2 > dst, const Tensor< cpu, 2 > &energy) |
CPU/GPU: softmax normalization: dst[i][j] = exp(energy[i][j]) / (sum_j exp(energy[i][j])) More... | |
void | mshadow::Softmax (Tensor< gpu, 2 > dst, const Tensor< gpu, 2 > &energy) |
refer to the comment of the CPU version More... | |
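A small CPU sketch of the Softmax call above; each row of energy is normalized independently. The sizes (4 examples, 10 classes) are arbitrary.

    #include "tensor.h"
    using namespace mshadow;

    void SoftmaxExample(void) {
      Tensor<cpu, 2> energy = NewTensor<cpu>(Shape2(4, 10), 0.0f);  // one row of scores per example
      Tensor<cpu, 2> prob   = NewTensor<cpu>(Shape2(4, 10), 0.0f);
      Softmax(prob, energy);  // prob[i][j] = exp(energy[i][j]) / sum_j exp(energy[i][j])
      FreeSpace(energy);
      FreeSpace(prob);
    }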
template<typename Saver , int dim, typename E , int etype> | |
void | mshadow::MapExp (Tensor< cpu, dim > dst, const expr::Exp< E, etype > &exp) |
CPU/GPU: map an expression to a tensor; this function calls MapPlan. More... | |
template<typename Saver , int dim, typename E , int etype> | |
void | mshadow::MapExp (Tensor< gpu, dim > dst, const expr::Exp< E, etype > &exp) |
refer to the comment of the CPU version More... | |
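A sketch of calling MapExp directly with a saver, assuming the savers declared in tensor_base.h (e.g. sv::saveto) and the operator overloads from tensor_expr.h; in ordinary code the same effect is usually obtained by writing dst = expression, which invokes MapExp internally.

    #include "tensor.h"
    using namespace mshadow;
    using namespace mshadow::expr;

    void MapExpExample(void) {
      Tensor<cpu, 2> a = NewTensor<cpu>(Shape2(4, 5), 1.0f);
      Tensor<cpu, 2> b = NewTensor<cpu>(Shape2(4, 5), 2.0f);
      Tensor<cpu, 2> c = NewTensor<cpu>(Shape2(4, 5), 0.0f);
      // evaluate a + b * 0.5f element-wise and store (sv::saveto) the result in c
      MapExp<sv::saveto>(c, a + b * 0.5f);
      FreeSpace(a); FreeSpace(b); FreeSpace(c);
    }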
template<typename Saver , typename Reducer , typename E , int etype> | |
void | mshadow::MapReduceKeepLowest (Tensor< cpu, 1 > dst, const expr::Exp< E, etype > &exp, real_t scale=1.0f) |
CPU/GPU: map an expression and reduce it to a 1D Tensor along the lowest dimension (dimension 0) More... | |
template<typename Saver , typename Reducer , typename E , int etype> | |
void | mshadow::MapReduceKeepLowest (Tensor< gpu, 1 > dst, const expr::Exp< E, etype > &exp, real_t scale=1.0f) |
refer to the comment of the CPU version More... | |
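A sketch of reducing a 2D tensor down to its lowest dimension, assuming the reducers declared in tensor_base.h (e.g. red::sum): dst keeps dimension 0, so dst[j] accumulates mat[i][j] over i, scaled by the trailing factor.

    #include "tensor.h"
    using namespace mshadow;
    using namespace mshadow::expr;

    void ReduceLowestExample(void) {
      Tensor<cpu, 2> mat = NewTensor<cpu>(Shape2(4, 5), 1.0f);  // mat.shape[0] == 5
      Tensor<cpu, 1> dst = NewTensor<cpu>(Shape1(5), 0.0f);     // keeps dimension 0
      // dst[j] = 1.0f * sum_i mat[i][j]
      MapReduceKeepLowest<sv::saveto, red::sum>(dst, mat, 1.0f);
      FreeSpace(mat); FreeSpace(dst);
    }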
template<typename Saver , typename Reducer , int dimkeep, typename E , int etype> | |
void | mshadow::MapReduceKeepHighDim (Tensor< cpu, 1 > dst, const expr::Exp< E, etype > &exp, real_t scale=1.0f) |
CPU/GPU: map an expression and reduce it to a 1D Tensor along the third dimension (dimension 2) More... | |
template<typename Saver , typename Reducer , int dimkeep, typename E , int etype> | |
void | mshadow::MapReduceKeepHighDim (Tensor< gpu, 1 > dst, const expr::Exp< E, etype > &exp, real_t scale=1.0f) |
refer to the comment of the CPU version More... | |
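A sketch of the higher-dimensional variant, again assuming sv::saveto and red::sum; with dimkeep = 2, dimension 2 of a 3D tensor is kept and the other dimensions are summed out, so dst must have length shape[2].

    #include "tensor.h"
    using namespace mshadow;
    using namespace mshadow::expr;

    void ReduceHighDimExample(void) {
      Tensor<cpu, 3> ten = NewTensor<cpu>(Shape3(6, 4, 5), 1.0f);  // ten.shape[2] == 6
      Tensor<cpu, 1> dst = NewTensor<cpu>(Shape1(6), 0.0f);
      // dst[k] = 1.0f * sum over the remaining dimensions of ten[k][i][j]
      MapReduceKeepHighDim<sv::saveto, red::sum, 2>(dst, ten, 1.0f);
      FreeSpace(ten); FreeSpace(dst);
    }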
Detailed Description
Header file of the tensor data structure and functions. Convention: this library requires explicit memory allocation and de-allocation; all the data structures Tensor<cpu,1>, Tensor<gpu,1>, etc. behave like handles (pointers), and no memory allocation happens during calculation.
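A final sketch of the handle semantics described here (names are illustrative): copying a Tensor object copies only the handle (shape, stride and the dptr pointer), so both objects refer to the same buffer and FreeSpace must be called exactly once.

    #include "tensor.h"
    using namespace mshadow;

    void HandleExample(void) {
      Tensor<cpu, 2> mat  = NewTensor<cpu>(Shape2(4, 5), 0.0f);
      Tensor<cpu, 2> view = mat;   // copies the handle only; no new allocation
      view[0][0] = 1.0f;           // the write is visible through mat as well
      FreeSpace(mat);              // free exactly once; view.dptr now dangles
    }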