microsoft / cntk

Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

Home Page: https://docs.microsoft.com/cognitive-toolkit/

License: Other

Makefile 0.42% C++ 54.73% Batchfile 0.22% C 0.32% Cuda 3.83% Shell 0.87% HTML 0.01% Python 12.03% C# 1.01% PowerShell 0.66% Jupyter Notebook 23.91% Perl 0.06% MATLAB 0.01% Awk 0.01% Java 0.06% CMake 0.18% Dockerfile 0.11% SWIG 0.84% Bikeshed 0.67% BrighterScript 0.05%
cognitive-toolkit cntk deep-learning machine-learning deep-neural-networks neural-network distributed python c-plus-plus c-sharp

cntk's Introduction

CNTK

Join the chat at https://gitter.im/Microsoft/CNTK

The Microsoft Cognitive Toolkit (https://cntk.ai) is a unified deep learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs. CNTK allows users to easily realize and combine popular model types such as feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs). It implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers. CNTK has been available under an open-source license since April 2015. It is our hope that the community will take advantage of CNTK to share ideas more quickly through the exchange of open source working code.
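The graph-plus-SGD idea above can be pictured with a toy, stdlib-only Python sketch (illustrative scaffolding, not CNTK's actual implementation): leaves hold inputs and parameters, inner nodes hold operations, and reverse-mode automatic differentiation drives an SGD update.

```python
class Node:
    """A graph node: leaves hold inputs/parameters, inner nodes hold ops."""
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value        # forward value
        self.parents = parents    # nodes this node was computed from
        self.grad_fns = grad_fns  # local gradient w.r.t. each parent
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, (self, other),
                    (lambda g: g, lambda g: g))

    def __mul__(self, other):
        return Node(self.value * other.value, (self, other),
                    (lambda g, o=other: g * o.value,
                     lambda g, s=self: g * s.value))

def backward(out):
    """Reverse-mode autodiff: visit each node once in reverse topological order."""
    order, seen = [], set()
    def topo(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p in n.parents:
                topo(p)
            order.append(n)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, fn in zip(node.parents, node.grad_fns):
            parent.grad += fn(node.grad)

# learn w in y = w * x from the single example (x=3, target=6) with SGD
w = Node(0.0)
for _ in range(50):
    w.grad = 0.0
    d = w * Node(3.0) + Node(-6.0)   # prediction minus target
    loss = d * d                      # squared error
    backward(loss)
    w.value -= 0.05 * w.grad          # SGD step
print(round(w.value, 3))              # converges to 2.0
```

Real toolkits such as CNTK generalize exactly this pattern from scalars to tensors and parallelize the resulting matrix operations across GPUs and servers.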

Installation

Installing nightly packages

If you prefer to use the latest CNTK bits from master, use one of the CNTK nightly packages:

Learning CNTK

You can learn more about using and contributing to CNTK with the following resources:

More information

Disclaimer

Dear community,

With our ongoing contributions to ONNX and the ONNX Runtime, we have made it easier to interoperate within the AI framework ecosystem and to access high performance, cross-platform inferencing capabilities for both traditional ML models and deep neural networks. Over the last few years we have been privileged to develop such key open-source machine learning projects, including the Microsoft Cognitive Toolkit, which has enabled its users to leverage industry-wide advancements in deep learning at scale.

Today’s 2.7 release will be the last main release of CNTK. We may have some subsequent minor releases for bug fixes, but these will be evaluated on a case-by-case basis. There are no plans for new feature development post this release.

The CNTK 2.7 release has full support for ONNX 1.4.1, and we encourage those seeking to operationalize their CNTK models to take advantage of ONNX and the ONNX Runtime. Moving forward, users can continue to leverage evolving ONNX innovations via the number of frameworks that support it. For example, users can natively export ONNX models from PyTorch or convert TensorFlow models to ONNX with the TensorFlow-ONNX converter.

We are incredibly grateful for all the support we have received from contributors and users over the years since the initial open-source release of CNTK. CNTK has enabled both Microsoft teams and external users to execute complex and large-scale workloads in all manner of deep learning applications, such as historical breakthroughs in speech recognition achieved by Microsoft Speech researchers, the originators of the framework.

As ONNX is increasingly employed in serving models used across Microsoft products such as Bing and Office, we are dedicated to synthesizing innovations from research with the rigorous demands of production to progress the ecosystem forward.

Above all, our goal is to make innovations in deep learning across the software and hardware stacks as open and accessible as possible. We will be working hard to bring both the existing strengths of CNTK and new state-of-the-art research into other open-source projects to truly broaden the reach of such technologies.

With gratitude,

-- The CNTK Team

Microsoft Open Source Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

News

You can find more news on the official project feed

2019-03-29. CNTK 2.7.0

Highlights of this release

  • Moved to CUDA 10 for both Windows and Linux.
  • Support for advanced RNN loops in ONNX export.
  • Export of models larger than 2 GB in ONNX format.
  • Support for FP16 in the BrainScript train action.

CNTK support for CUDA 10

CNTK now supports CUDA 10. This requires an update of the build environment to Visual Studio 2017 v15.9 on Windows.

To set up the build and runtime environment on Windows:

To set up the build and runtime environment on Linux using Docker, please build an Ubuntu 16.04 Docker image using the Dockerfiles here. For other Linux systems, please refer to the Dockerfiles to set up the dependent libraries for CNTK.

Support for advanced RNN loops in ONNX export

CNTK models with recursive loops can be exported to ONNX models with scan ops.

Export of models larger than 2 GB in ONNX format

To export models larger than 2 GB in ONNX format, use the cntk.Function API save(self, filename, format=ModelFormat.CNTKv2, use_external_files_to_store_parameters=False) with 'format' set to ModelFormat.ONNX and use_external_files_to_store_parameters set to True. In this case, model parameters are saved in external files. Exported models must be used together with their external parameter files when doing model evaluation with onnxruntime.
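The external-parameter mechanism can be pictured with a small, stdlib-only sketch (hypothetical file layout and helper names, not the actual ONNX external-data format): large parameters are written to separate binary files, and the small model file keeps only references to them.

```python
# Illustrative sketch only (hypothetical layout, not the real ONNX format):
# large parameters live in separate files; the model file stores references.
import json, os, struct, tempfile

def save_with_external_params(model, params, out_dir):
    """Write each parameter to its own binary file and keep only
    file references in the (small) model description."""
    refs = {}
    for name, values in params.items():
        path = os.path.join(out_dir, name + ".bin")
        with open(path, "wb") as f:
            f.write(struct.pack("%df" % len(values), *values))
        refs[name] = {"file": os.path.basename(path), "count": len(values)}
    model = dict(model, external_parameters=refs)
    with open(os.path.join(out_dir, "model.json"), "w") as f:
        json.dump(model, f)

def load_param(out_dir, ref):
    """Read a parameter back from its external file via its reference."""
    with open(os.path.join(out_dir, ref["file"]), "rb") as f:
        return list(struct.unpack("%df" % ref["count"], f.read()))

out_dir = tempfile.mkdtemp()
save_with_external_params({"op": "Times"}, {"W": [1.0, 2.0, 3.0]}, out_dir)
with open(os.path.join(out_dir, "model.json")) as f:
    model = json.load(f)
print(load_param(out_dir, model["external_parameters"]["W"]))  # [1.0, 2.0, 3.0]
```

This is why the exported model file stays under the 2 GB protobuf limit: the weights are no longer embedded in it, so the evaluation runtime must be able to find the external files alongside the model.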

2018-11-26.
Netron now supports visualizing CNTK v1 and CNTK v2 .model files.


Project changelog

2018-09-17. CNTK 2.6.0

Efficient group convolution

The implementation of group convolution in CNTK has been updated. The updated implementation moves away from creating a sub-graph for group convolution (using slicing and splicing), and instead uses cuDNN7 and MKL2017 APIs directly. This improves the experience both in terms of performance and model size.

As an example, for a single group convolution op with the following attributes:

  • Input tensor (C, H, W) = (32, 128, 128)
  • Number of output channels = 32 (channel multiplier is 1)
  • Groups = 32 (depthwise convolution)
  • Kernel size = (5, 5)

The comparison numbers for this single node are as follows:

                    GPU exec. time (ms, 1000-run avg.)   CPU exec. time (ms, 1000-run avg.)   Model size (KB, CNTK format)
Old implementation  9.349                                41.921                               38
New implementation  6.581                                9.963                                5
Speedup/savings     approx. 30%                          approx. 65-75%                       approx. 87%
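The savings come from the channel grouping itself, which can be sketched in plain Python (an illustrative reference implementation, not the cuDNN7/MKL2017 code paths): with G groups, each output channel convolves over only C_in/G input channels, which is what shrinks both compute and parameter count.

```python
# Stdlib-only sketch of grouped convolution over a 1-D spatial axis.
def group_conv1d(x, w, groups):
    """x: [C_in][W] input; w: [C_out][C_in // groups][K] kernels."""
    width = len(x[0])
    c_out, cg, k = len(w), len(w[0]), len(w[0][0])
    out = [[0.0] * (width - k + 1) for _ in range(c_out)]
    for oc in range(c_out):
        g = oc // (c_out // groups)      # group this output channel belongs to
        for ic in range(cg):             # only C_in // groups input channels
            for p in range(width - k + 1):
                for t in range(k):
                    out[oc][p] += w[oc][ic][t] * x[g * cg + ic][p + t]
    return out

# depthwise case: groups == C_in, channel multiplier 1 (as in the example above)
x = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]   # C_in=2, W=3
w = [[[1.0, 1.0]], [[0.5, 0.5]]]         # C_out=2, C_in/groups=1, K=2
print(group_conv1d(x, w, groups=2))      # [[3.0, 5.0], [4.5, 5.5]]
```

The old sub-graph approach expressed this same grouping with explicit slice and splice nodes; calling into cuDNN/MKL group-convolution primitives directly removes that graph overhead.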

Sequential Convolution

The implementation of sequential convolution in CNTK has been updated. The updated implementation creates a separate sequential convolution layer. Unlike the regular convolution layer, this operation also convolves over the dynamic (sequence) axis, and filter_shape[0] is applied to that axis. The updated implementation supports broader cases, such as stride > 1 on the sequence axis.

For example, consider a sequential convolution over a batch of one-channel black-and-white images. The images all have the same fixed height of 640, but each has a variable width, so the width is represented by the sequence axis. Padding is enabled, and the strides for both width and height are 2.

 >>> f = SequentialConvolution((3,3), reduction_rank=0, pad=True, strides=(2,2), activation=C.relu)
 >>> x = C.input_variable(**Sequence[Tensor[640]])
 >>> x.shape
     (640,)
 >>> h = f(x)
 >>> h.shape
     (320,)
 >>> f.W.shape
     (1, 1, 3, 3)

Operators

depth_to_space and space_to_depth

There is a breaking change in the depth_to_space and space_to_depth operators. These have been updated to match ONNX specification, specifically the permutation for how the depth dimension is placed as blocks in the spatial dimensions, and vice-versa, has been changed. Please refer to the updated doc examples for these two ops to see the change.
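The new permutation can be sketched in plain Python (a reference sketch of the ONNX-style DCR ordering, not CNTK's implementation): the channel index decomposes as (i*b + j)*C' + c', where (i, j) becomes the offset inside each b x b spatial block.

```python
# Stdlib-only sketch of the ONNX ordering that depth_to_space now follows.
def depth_to_space(x, b):
    """x: [C][H][W] with C = b*b*C'; returns [C'][H*b][W*b]."""
    c, h, w = len(x), len(x[0]), len(x[0][0])
    c_out = c // (b * b)
    y = [[[0.0] * (w * b) for _ in range(h * b)] for _ in range(c_out)]
    for cp in range(c_out):
        for i in range(b):
            for j in range(b):
                src = (i * b + j) * c_out + cp   # ONNX channel ordering
                for hh in range(h):
                    for ww in range(w):
                        y[cp][hh * b + i][ww * b + j] = x[src][hh][ww]
    return y

# 4 channels of a single 1x1 pixel become one 2x2 spatial block
x = [[[0.0]], [[1.0]], [[2.0]], [[3.0]]]
print(depth_to_space(x, 2))   # [[[0.0, 1.0], [2.0, 3.0]]]
```

space_to_depth is the inverse mapping; models relying on the old permutation will see different (shuffled-channel) outputs after this change, which is why it is breaking.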

Tan and Atan

Added support for trigonometric ops Tan and Atan.

ELU

Added support for alpha attribute in ELU op.

Convolution

Updated the auto-padding algorithms of Convolution to produce symmetric padding on a best-effort basis on CPU, without affecting the final convolution output values. This update increases the range of cases covered by the MKL API and improves performance, e.g. for ResNet50.
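The padding arithmetic behind this can be sketched as follows (a simplified model of SAME-style auto padding, not the actual CNTK kernel): the total pad is fixed by input length, kernel size, and stride, and only an even total can be split symmetrically.

```python
# Stdlib-only sketch: compute SAME-style padding and check whether the
# required total pad can be split symmetrically between the two sides.
import math

def same_pad(n, kernel, stride):
    out = math.ceil(n / stride)                       # SAME output length
    total = max((out - 1) * stride + kernel - n, 0)   # total pad required
    lo = total // 2
    hi = total - lo
    return lo, hi, lo == hi                           # symmetric iff total even

print(same_pad(224, 7, 2))   # (2, 3, False) -> asymmetric, total pad is odd
print(same_pad(224, 3, 1))   # (1, 1, True)  -> symmetric
```

When the split is symmetric, the convolution can be handed to the MKL fast path directly; asymmetric cases still need the generic path.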

Default arguments order

There is a breaking change in the arguments property in the CNTK Python API. The default behavior has been updated to return arguments in Python order instead of C++ order, so they are returned in the same order in which they are fed into ops. If you still wish to get arguments in C++ order, you can simply override the global option. This change should only affect the following ops: Times, TransposeTimes, and Gemm (internal).

Bug fixes

  • Updated doc for Convolution layer to include group and dilation arguments.
  • Added improved input validation for group convolution.
  • Updated LogSoftMax to use more numerically stable implementation.
  • Fixed Gather op's incorrect gradient value.
  • Added validation for 'None' node in python clone substitution.
  • Added validation for padding channel axis in convolution.
  • Added CNTK native default lotusIR logger to fix the "Attempt to use DefaultLogger" error when loading some ONNX models.
  • Added proper initialization for ONNX TypeStrToProtoMap.
  • Updated the Python doctests to handle the different print format of newer numpy versions (>= 1.14).
  • Fixed Pooling(CPU) to produce correct output values when kernel center is on padded input cells.

ONNX

Updates

  • Updated CNTK's ONNX import/export to use ONNX 1.2 spec.
  • Major update to how batch and sequence axes are handled in export and import. As a result, many complex scenarios and edge cases are handled accurately.
  • Updated CNTK's ONNX BatchNormalization op export/import to latest spec.
  • Added model domain to ONNX model export.
  • Improved error reporting during import and export of ONNX models.
  • Updated DepthToSpace and SpaceToDepth ops to match ONNX spec on the permutation for how the depth dimension is placed as block dimension.
  • Added support for exporting alpha attribute in ELU ONNX op.
  • Major overhaul to Convolution and Pooling export. Unlike before, these ops do not export an explicit Pad op in any situation.
  • Major overhaul to ConvolutionTranspose export and import. Attributes such as output_shape, output_padding, and pads are fully supported.
  • Added support for CNTK's StopGradient as a no-op.
  • Added ONNX support for TopK op.
  • Added ONNX support for sequence ops: sequence.slice, sequence.first, sequence.last, sequence.reduce_sum, sequence.reduce_max, sequence.softmax. For these ops, there is no need to expand ONNX spec. CNTK ONNX exporter just builds computation equivalent graphs for these sequence ops.
  • Added full support for Softmax op.
  • Made CNTK broadcast ops compatible with ONNX specification.
  • Handle to_batch, to_sequence, unpack_batch, sequence.unpack ops in CNTK ONNX exporter.
  • ONNX tests to export ONNX test cases for other toolkits to run and to validate.
  • Fixed Hardmax/Softmax/LogSoftmax import/export.
  • Added support for Select op export.
  • Added import/export support for several trigonometric ops.
  • Updated CNTK support for ONNX MatMul op.
  • Updated CNTK support for ONNX Gemm op.
  • Updated CNTK's ONNX MeanVarianceNormalization op export/import to latest spec.
  • Updated CNTK's ONNX LayerNormalization op export/import to latest spec.
  • Updated CNTK's ONNX PRelu op export/import to latest spec.
  • Updated CNTK's ONNX Gather op export/import to latest spec.
  • Updated CNTK's ONNX ImageScaler op export/import to latest spec.
  • Updated CNTK's ONNX Reduce ops export/import to latest spec.
  • Updated CNTK's ONNX Flatten op export/import to latest spec.
  • Added CNTK support for ONNX Unsqueeze op.

Bug or minor fixes:

  • Updated LRN op to match ONNX 1.2 spec where the size attribute has the semantics of diameter, not radius. Added validation if LRN kernel size is larger than channel size.
  • Updated Min/Max import implementation to handle variadic inputs.
  • Fixed possible file corruption when resaving on top of existing ONNX model file.

.Net Support

The Cntk.Core.Managed library has officially been converted to .Net Standard and supports .Net Core and .Net Framework applications on both Windows and Linux. Starting from this release, .Net developers can restore CNTK NuGet packages using the new .Net SDK-style project file with the package management format set to PackageReference.

The following C# code now works on both Windows and Linux:

 >>> var device = DeviceDescriptor.CPUDevice;  // 'device' was undefined in the original snippet
 >>> var weightParameterName = "weight";
 >>> var biasParameterName = "bias";
 >>> var inputName = "input";
 >>> var outputDim = 2;
 >>> var inputDim = 3;
 >>> Variable inputVariable = Variable.InputVariable(new int[] { inputDim }, DataType.Float, inputName);
 >>> var weightParameter = new Parameter(new int[] { outputDim, inputDim }, DataType.Float, 1, device, weightParameterName);
 >>> var biasParameter = new Parameter(new int[] { outputDim }, DataType.Float, 0, device, biasParameterName);
 >>> 
 >>> Function modelFunc = CNTKLib.Times(weightParameter, inputVariable) + biasParameter;

For example, simply adding an ItemGroup clause with the CNTK package reference to the .csproj file of a .Net Core application (e.g. one targeting netcoreapp2.1 on the x64 platform) is sufficient.

Bug or minor fixes:

  • Fixed C# string and char to native wstring and wchar UTF conversion issues on Linux.
  • Fixed multibyte and wide character conversions across the codebase.
  • Fixed Nuget package mechanism to pack for .Net Standard.
  • Fixed a memory leak issue in Value class in C# API where Dispose was not called upon object destruction.

Misc

2018-04-16. CNTK 2.5.1

Repack of CNTK 2.5 with third-party libraries included in the bundles (Python wheel packages)


2018-03-15. CNTK 2.5

Changed the profiler details output format to chrome://tracing

Enabled per-node timing. A working example is here.

  • Per-node timing creates items in the profiler details when the profiler is enabled.
  • Usage in Python:
import cntk as C
C.debugging.debug.set_node_timing(True)
C.debugging.start_profiler() # optional
C.debugging.enable_profiler() # optional
#<trainer|evaluator|function> executions
<trainer|evaluator|function>.print_node_timing()
C.debugging.stop_profiler()

Example profiler details view in chrome://tracing:

CPU inference performance improvements using MKL

  • Accelerates some common tensor ops for float32 inference on Intel CPUs, especially for fully connected networks
  • Can be turned on/off by cntk.cntk_py.enable_cpueval_optimization()/cntk.cntk_py.disable_cpueval_optimization()

1BitSGD incorporated into CNTK

  • 1BitSGD source code is now available with CNTK license (MIT license) under Source/1BitSGD/
  • 1bitsgd build target was merged into existing gpu target

New loss function: hierarchical softmax

  • Thanks @yaochengji for the contribution!

Distributed Training with Multiple Learners

  • The Trainer now accepts multiple parameter learners for distributed training. With this change, different parameters of a network can be learned by different learners in a single training session. This also facilitates distributed training for GANs. For more information, please refer to Basic_GAN_Distributed.py and cntk.learners.distributed_multi_learner_test.py

Operators

  • Added MeanVarianceNormalization operator.

Bug fixes

  • Fixed convergence issue in Tutorial 201B
  • Fixed pooling/unpooling to support free dimension for sequences
  • Fixed crash in CNTKBinaryFormat deserializer when crossing sweep boundary
  • Fixed shape inference bug in RNN step function for scalar broadcasting
  • Fixed a build bug when mpi=no
  • Improved distributed training aggregation speed by increasing packing threshold, and expose the knob in V2
  • Fixed a memory leak in MKL layout
  • Fixed a bug in cntk.convert API in misc.converter.py, which prevents converting complex networks.

ONNX

  • Updates
    • CNTK exported ONNX models are now ONNX.checker compliant.
    • Added ONNX support for CNTK’s OptimizedRNNStack operator (LSTM only).
    • Added support for LSTM and GRU operators
    • Added support for experimental ONNX op MeanVarianceNormalization.
    • Added support for experimental ONNX op Identity.
    • Added support for exporting CNTK’s LayerNormalization layer using ONNX MeanVarianceNormalization op.
  • Bug or minor fixes:
    • Axis attribute is optional in CNTK’s ONNX Concat operator.
    • Bug fix in ONNX broadcasting for scalars.
    • Bug fix in ONNX ConvTranspose operator.
    • Backward compatibility bug fix in LeakyReLu (argument ‘alpha’ reverted to type double).

Misc

  • Added a new API find_by_uid() under cntk.logging.graph.

2018-02-28. CNTK supports nightly build

If you prefer to use the latest CNTK bits from master, use one of the CNTK nightly packages.

Alternatively, you can click the corresponding build badge to go to the nightly build page.


2018-01-31. CNTK 2.4

Highlights:

  • Moved to CUDA9, cuDNN 7 and Visual Studio 2017.
  • Removed Python 3.4 support.
  • Added Volta GPU and FP16 support.
  • Better ONNX support.
  • CPU perf improvement.
  • More OPs.

OPs

  • top_k operation: in the forward pass it computes the top (largest) k values and corresponding indices along the specified axis. In the backward pass the gradient is scattered to the top k elements (an element not in the top k gets a zero gradient).
  • gather operation now supports an axis argument
  • squeeze and expand_dims operations for easily removing and adding singleton axes
  • zeros_like and ones_like operations. In many situations you can just rely on CNTK correctly broadcasting a simple 0 or 1 but sometimes you need the actual tensor.
  • depth_to_space: Rearranges elements in the input tensor from the depth dimension into spatial blocks. Typical use of this operation is for implementing sub-pixel convolution for some image super-resolution models.
  • space_to_depth: Rearranges elements in the input tensor from the spatial dimensions to the depth dimension. It is largely the inverse of DepthToSpace.
  • sum operation: Create a new Function instance that computes element-wise sum of input tensors.
  • softsign operation: Create a new Function instance that computes the element-wise softsign of an input tensor.
  • asinh operation: Create a new Function instance that computes the element-wise asinh of an input tensor.
  • log_softmax operation: Create a new Function instance that computes the logsoftmax normalized values of an input tensor.
  • hard_sigmoid operation: Create a new Function instance that computes the hard_sigmoid normalized values of an input tensor.
  • element_and, element_not, element_or, element_xor element-wise logic operations
  • reduce_l1 operation: Computes the L1 norm of the input tensor's elements along the provided axes.
  • reduce_l2 operation: Computes the L2 norm of the input tensor's elements along the provided axes.
  • reduce_sum_square operation: Computes the sum of squares of the input tensor's elements along the provided axes.
  • image_scaler operation: Alters an image by scaling its individual values.
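As an illustration of the top_k semantics described at the top of this list, here is a stdlib-only sketch (not CNTK's implementation) of its forward pass and of the gradient scatter in the backward pass.

```python
# Stdlib-only sketch of top_k: forward returns the k largest values and their
# indices; backward scatters the incoming gradient to those positions and
# leaves every other element with a zero gradient.
def top_k(x, k):
    idx = sorted(range(len(x)), key=lambda i: x[i], reverse=True)[:k]
    return [x[i] for i in idx], idx

def top_k_grad(grad_out, idx, n):
    grad_in = [0.0] * n
    for g, i in zip(grad_out, idx):
        grad_in[i] = g          # only top-k positions receive gradient
    return grad_in

values, idx = top_k([3.0, 1.0, 4.0, 1.0, 5.0], 2)
print(values, idx)                      # [5.0, 4.0] [4, 2]
print(top_k_grad([1.0, 1.0], idx, 5))   # [0.0, 0.0, 1.0, 0.0, 1.0]
```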

ONNX

  • There have been several improvements to ONNX support in CNTK.
  • Updates
    • Updated the ONNX Reshape op to handle InferredDimension.
    • Added producer_name and producer_version fields to ONNX models.
    • Handled the case when neither the auto_pad nor the pads attribute is specified in the ONNX Conv op.
  • Bug fixes
    • Fixed bug in ONNX Pooling op serialization
    • Bug fix to create ONNX InputVariable with only one batch axis.
    • Bug fixes and updates to implementation of ONNX Transpose op to match updated spec.
    • Bug fixes and updates to implementation of ONNX Conv, ConvTranspose, and Pooling ops to match updated spec.

Operators

  • Group convolution
    • Fixed bug in group convolution. Output of CNTK Convolution op will change for groups > 1. More optimized implementation of group convolution is expected in the next release.
    • Better error reporting for group convolution in Convolution layer.

Halide Binary Convolution

  • The CNTK build can now use optional Halide libraries to build the Cntk.BinaryConvolution.so/dll library, which can be used with the netopt module. The library contains optimized binary convolution operators that perform better than the Python-based binarized convolution operators. To enable Halide in the build, please download a Halide release and set the HALIDE_PATH environment variable before starting a build. On Linux, you can use ./configure --with-halide[=directory] to enable it. For more information on how to use this feature, please refer to How_to_use_network_optimization.

See more in the Release Notes. Get the Release from the CNTK Releases page.

cntk's People

Contributors

alexeyo26, amitaga, bmitra-msft, bowenbao, cha-zhang, chentams, chivee, chrisbasoglu, depaulaner, ebarsoumms, eldakms, fmegen, gaizkan, ivrodr-msft, jaliyae, kaisheng, liqunfu, mahilleb-msft, n17s, ottolu, pkranen, sayanpa, thhoens, thiagocrepaldi, thilow, vmazalov, wilrich-msft, wolfma61, yzhang87, zhouwangzw


cntk's Issues

create CNTK binary download

The binary download from CodePlex is out of date.
We need a new binary download for the current version of the CNTK build.

Testing data without label

Hello All,

If my testing dataset does not include labels, what kind of configuration should I apply? Is there any functionality like this enabled right now? When I remove the label configuration in my test action code, the compiler gives the error: "EXCEPTION occurred: features and label files must be the same file, use separate readers to define single use files"

Commit messages don't follow git guidelines

A quick look at the history of the repository shows that most past commits don't comply with git's standard commit guidelines (see here), which you might want to follow to make reading logs easier and to clean up the history if you are looking for external contributors.

Some examples:

  • 53de59c uses past tense and mostly repeats its content.
  • 198d64d uses past tense and line is too long (and gets split by Github or text editors depending on configuration)

Error when running CIFAR-10 example

I use the following command to download the CIFAR-10 dataset:
python CIFAR_convert.py

Then I issue the following command to run it:

cntk configFile=01_Conv.config configName=01_Conv

However, there is an error in the log below complaining that labelsmap.txt can't be found. How do you create labelsmap.txt?

<<<<<<<<<<<<<<<<<<<< PROCESSED CONFIG WITH ALL VARIABLES RESOLVED <<<<<<<<<<<<<<<<<<<<
command: Train Test
precision = float
CNTKModelPath: ./Output/Models/01_Convolution
CNTKCommandTrainInfo: Train : 30
CNTKCommandTrainInfo: CNTKNoMoreCommands_Total : 30
CNTKCommandTrainBegin: Train
NDLBuilder Using CPU
Reading UCI file ./Train.txt

[CALL STACK]
/home/fc/src/CNTK/bin/../lib/libcntkmath.so ( Microsoft::MSR::CNTK::DebugUtil::PrintCallStack() + 0xbf ) [0x7f178c7c90ff]
cntk ( void Microsoft::MSR::CNTK::ThrowFormattedstd::runtime_error(char const*, ...) + 0xdd ) [0x43332d]
/home/fc/src/CNTK/bin/../lib/UCIFastReader.so ( void Microsoft::MSR::CNTK::UCIFastReader::InitFromConfigMicrosoft::MSR::CNTK::ConfigParameters(Microsoft::MSR::CNTK::ConfigParameters const&) + 0xfed ) [0x7f1787b9813d]
/home/fc/src/CNTK/bin/../lib/libcntkmath.so ( Microsoft::MSR::CNTK::DataReader::DataReaderMicrosoft::MSR::CNTK::ConfigParameters(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x48e ) [0x7f178c7c101e]
cntk ( ) [0x6436b1]
cntk ( ) [0x648470]
cntk ( ) [0x48a9ec]
cntk ( ) [0x42cd99]
cntk ( ) [0x42d3e8]
cntk ( ) [0x421288]
/lib/x86_64-linux-gnu/libc.so.6 ( __libc_start_main + 0xf5 ) [0x7f178b528ec5]
cntk ( ) [0x424e07]
EXCEPTION occurred: label mapping file ./labelsmap.txt not found, can be created with a 'createLabelMap' command/action

Does cntk support cuda 7.5?

I would like to build cntk with cuda 7.5 and vs2013 on windows 10. However, the MathCUDA project cannot be loaded. The error message says, "The imported project "C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\BuildCustomizations\CUDA 7.0.props" was not found." I wonder if I have to install cuda 7.0 in order to build the solution.
Thanks

Error with MNIST Dataset

Hi,

I am trying to run CNTK on a system with GPUs.

The compilation and creation of data proceeded fine without any issues.
After running I see this error :

cat ../Output/01_OneHidden_out_train_test.log

<<<<<<<<<<<<<<<<<<<< PROCESSED CONFIG WITH ALL VARIABLES RESOLVED <<<<<<<<<<<<<<<<<<<<
command: train test
precision = float
CNTKModelPath: ../Output/Models/01_OneHidden
CNTKCommandTrainInfo: train : 30
CNTKCommandTrainInfo: CNTKNoMoreCommands_Total : 30
CNTKCommandTrainBegin: train

[CALL STACK]
/scratch-shared/mch/scratch/dipsank/CUDNN/CNTK/bin/../lib/libcntkmath.so ( Microsoft::MSR::CNTK::DebugUtil::PrintCallStack() + 0xb4 ) [0x7fe97d769d44]
cntk ( void Microsoft::MSR::CNTK::ThrowFormattedstd::runtime_error(char const_, ...) + 0xc0 ) [0x530140]
cntk ( Microsoft::MSR::CNTK::BestGpu::GetDevices(int, Microsoft::MSR::CNTK::BestGpuFlags) + 0x98d ) [0x7ae02d]
cntk ( Microsoft::MSR::CNTK::BestGpu::GetDevice(Microsoft::MSR::CNTK::BestGpuFlags) + 0x1a ) [0x7ae29a]
cntk ( Microsoft::MSR::CNTK::DeviceFromConfig(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x5b3 ) [0x7b1873]
cntk ( void DoTrain<Microsoft::MSR::CNTK::ConfigParameters, float>(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x4c ) [0x76117c]
cntk ( void DoCommands(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x7a4 ) [0x5926e4]
cntk ( wmainOldCNTKConfig(int, wchar_t__) + 0xaa1 ) [0x52a941]
cntk ( wmain1(int, wchar_t_*) + 0x62 ) [0x52b0f2]
cntk ( main + 0xcc ) [0x51e06c]
/lib64/libc.so.6 ( __libc_start_main + 0xfd ) [0x344e61ed5d]
cntk ( ) [0x521b09]

EXCEPTION occurred: DeviceFromConfig: unexpected failure

Please let me know if you need additional details.
Any pointers on what I might be doing wrong?

reading CSV

Hello All,

  • For reading CSV files, should I use UCIFastReader?
  • If my test dataset doesn't have labels, what kind of configuration should I apply? Should I remove the label section from the test config? I did, and the compiler stopped working.

Incorrect settings for Visual Studio 2015

Include directories should point to $(WindowsSDK_IncludePath) for the proper architecture (10 in this case; 8/8.1?); setting them via the VC++ additional include directories option is the best way to do so.

The project does not compile with Visual Studio 2015 and Windows 10 unless the directories are fixed (ctype.h include errors, etcetera).

Also, a nice README about CUDA, ACML and MPI would come in handy.

EXCEPTION occurred: cannot open word class file

Hi all, while running class-based LM training with LSTM, I have found that if the writeWordAndClass section is defined as below

writeWordAndClassInfo = [
   action = "writeWordAndClass"

   # input train data
   inputFile = "$DataDir$/$trainFile$"

   # four column vocabulary file
   #
   # FORMAT:
   # - the first column is the word id
   # - the second column is the count of the word
   # - the third column is the word 
   # - the fourth column is the class id
   outputVocabFile = "$ModelDir$/vocab.txt"

if outputVocabFile's parent directory (e.g. $ModelDir$) doesn't exist, creating the vocabulary file fails with an error message like this:
EXCEPTION occurred: cannot open word class file

SOLUTION:
Just one more line of code can fix this problem. Modify Source/ActionsLib/OtherActions.cpp, function DoWriteWordAndClassInfo(const ConfigParameters& config), around line 431, changing from:

    std::ofstream ofvocab;
    ofvocab.open(outputVocabFile.c_str());

to

    std::ofstream ofvocab;
    msra::files::make_intermediate_dirs(s2ws(outputVocabFile));
    ofvocab.open(outputVocabFile.c_str());

Recompile and install CNTK, and all will be well.

Ubuntu build error: format ‘%d’ expects argument of type ‘int’

mpic++ -c Source/CNTK/ModelEditLanguage.cpp -o .build/Source/CNTK/ModelEditLanguage.o -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -DCPUONLY -DUSE_ACML -DNDEBUG -msse3 -std=c++0x -std=c++11 -fopenmp -fpermissive -fPIC -Werror -fcheck-new -Wno-error=literal-suffix -O4 -ISource/Common/Include -ISource/Math -ISource/CNTK -ISource/ActionsLib -ISource/ComputationNetworkLib -ISource/SGDLib -ISource/SequenceTrainingLib -ISource/CNTK/BrainScript -I/opt/acml5.3.1/open64_64//include -I/opt/opencv-3.0.0/include -MD -MP -MF .build/Source/CNTK/ModelEditLanguage.d

Source/Math/ConvolutionEngine.cpp: In instantiation of ‘static std::unique_ptr<Microsoft::MSR::CNTK::ConvolutionEngineFactory<ElemType> > Microsoft::MSR::CNTK::ConvolutionEngineFactory<ElemType>::Create(int, Microsoft::MSR::CNTK::ConvolutionEngineFactory<ElemType>::EngineType, Microsoft::MSR::CNTK::ImageLayoutKind) [with ElemType = float]’:
Source/Math/ConvolutionEngine.cpp:493:16:   required from here
Source/Math/ConvolutionEngine.cpp:490:17: error: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘Microsoft::MSR::CNTK::ConvolutionEngineFactory<float>::EngineType’ [-Werror=format=]
     RuntimeError("Not supported convolution engine type: %d.", engType);
  cc1plus: all warnings being treated as errors

> dpkg --list | grep compiler
ii  gcc-4.8                                                     4.8.5-1ubuntu1                             amd64        GNU C compiler
ii  gcc-4.9                                                     4.9.3-5ubuntu1                             amd64        GNU C compiler
ii  gcc-5                                                       5.2.1-22ubuntu2                            amd64        GNU C compiler

undefined reference to `bool Microsoft::MSR::CNTK::CheckFunction<float>

somehow the compiler gets confused by the template

template <typename ElemType> bool CheckFunction(std::string& p_nodeType, bool* allowUndeterminedVariable = nullptr);

=-----------------------------------------------------------=
mpic++  -shared -L./lib -L/opt/acml5.3.1/ifort64_mp/lib -L/opt/opencv-3.0.0/release/lib -Wl,-rpath,'$ORIGIN' -Wl,-rpath,/opt/acml5.3.1/ifort64_mp/lib -Wl,-rpath,/opt/opencv-3.0.0/release/lib -o lib/ImageReader.so .build/Source/Readers/ImageReader/Exports.o .build/Source/Readers/ImageReader/ImageReader.o -lcntkmath -lopencv_core -lopencv_imgproc -lopencv_imgcodecs
=-----------------------------------------------------------=
building output for with build type release
mpic++  -L./lib -L/opt/acml5.3.1/ifort64_mp/lib -L/opt/opencv-3.0.0/release/lib -Wl,-rpath,'$ORIGIN/../lib' -Wl,-rpath,/opt/acml5.3.1/ifort64_mp/lib -Wl,-rpath,/opt/opencv-3.0.0/release/lib -o bin/cntk Source/CNTK/buildinfo.h .build/Source/CNTK/CNTK.o .build/Source/CNTK/ModelEditLanguage.o .build/Source/CNTK/NetworkDescriptionLanguage.o .build/Source/CNTK/SimpleNetworkBuilder.o .build/Source/CNTK/SynchronousExecutionEngine.o .build/Source/CNTK/tests.o .build/Source/ComputationNetworkLib/ComputationNode.o .build/Source/ComputationNetworkLib/ComputationNetwork.o .build/Source/ComputationNetworkLib/ComputationNetworkEvaluation.o .build/Source/ComputationNetworkLib/ComputationNetworkAnalysis.o .build/Source/ComputationNetworkLib/ComputationNetworkEditing.o .build/Source/ComputationNetworkLib/ComputationNetworkBuilder.o .build/Source/ComputationNetworkLib/ComputationNetworkScripting.o .build/Source/SGDLib/Profiler.o .build/Source/SGDLib/SGD.o .build/Source/ActionsLib/TrainActions.o .build/Source/ActionsLib/EvalActions.o .build/Source/ActionsLib/OtherActions.o .build/Source/ActionsLib/SpecialPurposeActions.o .build/Source/SequenceTrainingLib/latticeforwardbackward.o .build/Source/SequenceTrainingLib/parallelforwardbackward.o .build/Source/CNTK/BrainScript/BrainScriptEvaluator.o .build/Source/CNTK/BrainScript/BrainScriptParser.o .build/Source/CNTK/BrainScript/BrainScriptTest.o .build/Source/CNTK/BrainScript/ExperimentalNetworkBuilder.o .build/Source/Common/BestGpu.o .build/Source/Common/MPIWrapper.o .build/Source/SequenceTrainingLib/latticeNoGPU.o  -lacml_mp -liomp5 -lm -lpthread -lcntkmath -fopenmp
.build/Source/CNTK/CNTK.o: In function `Microsoft::MSR::CNTK::NDLScript<float>::CheckName(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)':
/home/me/CNTK-r2016-01-26/Source/CNTK/NetworkDescriptionLanguage.h:800: undefined reference to `bool Microsoft::MSR::CNTK::CheckFunction<float>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, bool*)'
.build/Source/CNTK/CNTK.o: In function `Microsoft::MSR::CNTK::NDLScript<float>::NDLScript(Microsoft::MSR::CNTK::ConfigValue const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool)':
/home/me/CNTK-r2016-01-26/Source/CNTK/NetworkDescriptionLanguage.h:525: undefined reference to `bool Microsoft::MSR::CNTK::CheckFunction<float>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, bool*)'
.build/Source/CNTK/CNTK.o: In function `Microsoft::MSR::CNTK::NDLScript<float>::ParseValue(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, unsigned long)':
/home/me/CNTK-r2016-01-26/Source/CNTK/NetworkDescriptionLanguage.h:1022: undefined reference to `bool Microsoft::MSR::CNTK::CheckFunction<float>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, bool*)'
collect2: error: ld returned 1 exit status
Makefile:494: recipe for target 'bin/cntk' failed
make: *** [bin/cntk] Error 1

Make sure samples use realistic settings for real-life use

The samples use very small data sets, but they will serve as best-practice examples for real-life use. We should review and revise their configurations so that they are applicable to large, realistic data sets. For example, while a tiny minibatch size is great for debugging, it is unnecessarily inefficient for parallelization.

How to implement first DOM load then angularJs function execute

Hi, I am facing a problem where my AngularJS function executes before the view DOM has loaded. How can I make the DOM load first? As a workaround I have used the code below:

app.run(["$rootScope", "$location", "$window", function ($rootScope, $location, $window) {
    // Note: the first argument of a $routeChangeSuccess handler is the event
    // object, not user info as the original //console.log(userInfo) assumed.
    $rootScope.$on("$routeChangeSuccess", function (event, current, previous) {
        angular.element(document).ready(function () {
            var resizeLayout = function () {
                var windowHeight = $(window).height() - 100;
                $('#page-layout').css({ 'overflow-x': 'auto', 'max-height': windowHeight });
            };
            setTimeout(resizeLayout, 1000);
            $(window).resize(resizeLayout);
        });
    });
    $rootScope.$on("$routeChangeError", function (event, current, previous, eventObj) {
        if (eventObj.authenticated === false) {
            $location.path("/Login");
        }
    });
}]);

Contribution guidelines

As per the GitHub guide, you can add a CONTRIBUTING.md to help the community write pull requests and review commits. It should include the preferred code style per language, commit message guidelines, and other related preferences.

Issues in compiling with Visual Studio 2013

Some projects don't compile under Visual Studio 2013 without changes.

  1. File.cpp isn't included in project HTKMLFReader.
    It is easy to fix. Just add the missing file to the project.
  2. Initialization of array
    The errors happen at several places, e.g., in file TensorView.cpp.
    void TensorView<ElemType>::DoUnaryOpOf(ElemType beta, const TensorView& a, ElemType alpha, ElementWiseOperator op)
    {
        ...
        PrepareTensorOperands<ElemType, 2>(array<TensorShape, 2>{a.GetShape(), GetShape()}, offsets, regularOpDims, regularStrides, reducingOpDims, reducingStrides);
        ...
    }

I got the following error.

1>TensorView.cpp(291): error C2440: '' : cannot convert from 'initializer-list' to 'std::array<Microsoft::MSR::CNTK::TensorShape,0x04>'
1> Constructor for class 'std::array<Microsoft::MSR::CNTK::TensorShape,0x04>' is declared 'explicit'
1> TensorView.cpp(280) : while compiling class template member function 'void Microsoft::MSR::CNTK::TensorView<ElemType>::DoTernaryOpOf(ElemType,const Microsoft::MSR::CNTK::TensorView<ElemType> &,const Microsoft::MSR::CNTK::TensorView<ElemType> &,const Microsoft::MSR::CNTK::TensorView<ElemType> &,ElemType,Microsoft::MSR::CNTK::ElementWiseOperator)'
1> with
1> [
1> ElemType=float
1> ]
1> c:\home\library\cntk\source\math\TensorView.h(117) : see reference to function template instantiation 'void Microsoft::MSR::CNTK::TensorView<ElemType>::DoTernaryOpOf(ElemType,const Microsoft::MSR::CNTK::TensorView<ElemType> &,const Microsoft::MSR::CNTK::TensorView<ElemType> &,const Microsoft::MSR::CNTK::TensorView<ElemType> &,ElemType,Microsoft::MSR::CNTK::ElementWiseOperator)' being compiled
1> with
1> [
1> ElemType=float
1> ]
1> TensorView.cpp(373) : see reference to class template instantiation 'Microsoft::MSR::CNTK::TensorView<float>' being compiled

My work-around is to add an extra pair of braces around the array initializer:
PrepareTensorOperands<ElemType, 2>(array<TensorShape, 2>{{ a.GetShape(), GetShape() }}, offsets, regularOpDims, regularStrides, reducingOpDims, reducingStrides);

Does anyone know the root cause of this compile error?

Error Running Simple2d Example

I am attempting to run the Simple2d example from the Wiki, but I get the following exception:

EXCEPTION occurred: ConfigValue (uint64_t): invalid input string

CNTK is installed on a machine running CentOS 6.6 and is one of the last versions from when the source code was on CodePlex. I have attached the full output here: run_simple2d.txt.

Kaldi decoding error with new release of CNTK

Hi All,
The Kaldi decoding fails with the new version of CNTK; with an older version the decoding works fine. I find it difficult to infer the issue from the error message below. Please advise. Thank you.

Post-processing network complete.
HTKMLFWriter::Init: reading output script file data-lda/test_eval92/split8/1/cntk_test.counts ... 560 entries

Allocating matrices for forward and/or backward propagation.
evaluate: reading 571 frames of 440c02010
evaluate: reading 571 frames of 440c02010

[CALL STACK]
/home/lahiru/Devinstall/cntk_github/CNTK/build/release/lib/libcntkmath.so ( Microsoft::MSR::CNTK::DebugUtil::PrintCallStack() + 0xbf ) [0x7ff296ba6cdf]
cntk ( void Microsoft::MSR::CNTK::ThrowFormatted<std::logic_error>(char const*, ...) + 0xdd ) [0x53d5dd]
cntk ( Microsoft::MSR::CNTK::ComputationNode<float>::NotifyFunctionValuesMBSizeModified() + 0x41c ) [0x53e57c]
cntk ( ) [0x758d37]
cntk ( Microsoft::MSR::CNTK::SimpleOutputWriter<float>::WriteOutput(Microsoft::MSR::CNTK::IDataReader&, unsigned long, Microsoft::MSR::CNTK::IDataWriter&, std::vector<std::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> >, std::allocator<std::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > > > const&, unsigned long, bool) + 0x363 ) [0x75bb63]
cntk ( void DoWriteOutput<float>(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x669 ) [0x760849]
cntk ( void DoCommands<float>(Microsoft::MSR::CNTK::ConfigParameters const&) + 0xc07 ) [0x593c47]
cntk ( wmainOldCNTKConfig(int, wchar_t**) + 0x909 ) [0x535519]
cntk ( wmain1(int, wchar_t**) + 0x68 ) [0x535be8]
cntk ( main + 0xd8 ) [0x529518]
/lib/x86_64-linux-gnu/libc.so.6 ( __libc_start_main + 0xf5 ) [0x7ff29582fec5]
cntk ( ) [0x52d4b7]
Closed Kaldi writer

can not run cntk binary

Hi, I am trying the CNTK binary, and when I run it on the GPU I encounter the following error message: "CNTK: Win32 exception caught (such as an access violation or a stack overflow)".
I am using an NVIDIA K5000 and CUDA 7.0 on Windows 10.
Does anyone know what the problem is?
Thanks a lot.

How to train a model using multiple machines?

The main selling point of CNTK (compared to other deep-learning packages) is that it supports training a large model on a compute cluster. However, I couldn't find any information online or in the book on how to set up training across computers. Can anybody help?

Wrong demo "03_ConvBatchNorm.config " in MNIST data

Error "EXCEPTION occurred: Undefined function or macro 'ConvReLUBNLayer' in ConvReLUBNLayer(featScaled, cMap1, 25, kW1, kH1, hStride1, vStride1, 10)"

In the network definition file "03_ConvBatchNorm.ndl", it says:

# ConvReLUBNLayer is defined in Macros.ndl
conv1 = ConvReLUBNLayer(featScaled, cMap1, 25, kW1, kH1, hStride1, vStride1, 10)

However, the function "ConvReLUBNLayer" is not actually defined in "Macros.ndl"!

Cannot compile the source code under Win8.1 + VS2013 Update 5

Hi, I am new to CNTK, so I followed the step-by-step setup at https://github.com/Microsoft/CNTK/wiki/Setup-CNTK-on-Windows
I've downloaded all the third-party files and set the environment variables, but when I open the CNTK solution and build the CNTK project, I get the following errors:

1>------ Rebuild All started: Project: MathCUDA, Configuration: Debug x64 ------
2>------ Rebuild All started: Project: SequenceTrainingLib, Configuration: Debug x64 ------
3>------ Rebuild All started: Project: EvalWrapper, Configuration: Debug x64 ------
3> wrapper.cpp
1>
1> D:\2015erdos\Install CNTK\CNTK-master\CNTK-master\Source\Math>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\bin\nvcc.exe" -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64" -I..\Common\include\ -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -lineinfo --keep-dir x64\Debug -maxrregcount=0 --machine 64 --compile -Xcudafe "--diag_suppress=field_without_dll_interface" -g -use_fast_math -D_DEBUG -DNO_SYNC -DWIN32 -D_WINDOWS -D_USRDLL -DMATH_EXPORTS -DUSE_CUDNN -D_UNICODE -DUNICODE -Xcompiler "/EHsc /W4 /nologo /Od /Zi /RTC1 /MDd " -o x64\Debug\MathCUDA\GPUTensor.cu.obj "D:\2015erdos\Install CNTK\CNTK-master\CNTK-master\Source\Math\GPUTensor.cu" -clean
2> parallelforwardbackward.cpp
2>C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\crtdefs.h(496): error C2371: 'size_t' : redefinition; different basic types
2> parallelforwardbackward.cpp : see declaration of 'size_t'
3>C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\crtdefs.h(496): error C2371: 'size_t' : redefinition; different basic types
3> wrapper.cpp : see declaration of 'size_t'
3>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(889): error C4235: nonstandard extension used : '__asm' keyword not supported on this architecture
3>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(890): error C2065: 'mov' : undeclared identifier
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(889): error C4235: nonstandard extension used : '__asm' keyword not supported on this architecture
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(890): error C2065: 'mov' : undeclared identifier
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(890): error C2146: syntax error : missing ';' before identifier 'ecx'
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(890): error C2065: 'ecx' : undeclared identifier
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(891): error C2146: syntax error : missing ';' before identifier 'mov'
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(891): error C2065: 'mov' : undeclared identifier
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(891): error C2146: syntax error : missing ';' before identifier 'eax'
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(891): error C2065: 'eax' : undeclared identifier
......

GitHub won't let me paste the full compiler output, so I attached it as a text file.
Compiling errors.txt

Any idea how to fix it? Thanks!!!

error when compiling "Source/Readers/Kaldi2Reader/HTKMLFReader.cpp"

Hi,
I recently downloaded the CNTK source code and I'm trying to compile it on Ubuntu 14.04.3 LTS with the Kaldi plug-in. I followed the installation instructions in README.md and Source/Readers/KaldiReaderReadme, but unfortunately I encountered an error when compiling Source/Readers/Kaldi2Reader/HTKMLFReader.cpp (see below).

Command:

mpic++ -c Source/Readers/Kaldi2Reader/HTKMLFReader.cpp -o /home/mirco/cntk_source/build/release/.build/Source/Readers/Kaldi2Reader/HTKMLFReader.o -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -DUSE_ACML -DKALDI_DOUBLEPRECISION=0 -DHAVE_POSIX_MEMALIGN -DHAVE_EXECINFO_H=1 -DHAVE_CXXABI_H -DHAVE_ATLAS -DHAVE_OPENFST_GE_10400 -DNDEBUG -msse3 -std=c++0x -std=c++11 -fopenmp -fpermissive -fPIC -Werror -fcheck-new -Wno-error=literal-suffix -O4 -ISource/Common/Include -ISource/Math -ISource/CNTK -ISource/ActionsLib -ISource/ComputationNetworkLib -ISource/SGDLib -ISource/SequenceTrainingLib -ISource/CNTK/BrainScript -I/usr/./include/nvidia/gdk -I/home/mirco/cub-1.5.1 -I/usr/local/cuda-7.0//include -I/opt/acml5.3.1/ifort64_mp/include -I/home/mirco/kaldi-trunk//src -I/home/mirco/kaldi-trunk//tools/ATLAS/include -I/home/mirco/kaldi-trunk//tools/openfst/include -MD -MP -MF /home/mirco/cntk_source/build/release/.build/Source/Readers/Kaldi2Reader/HTKMLFReader.d

Error:

Source/Readers/Kaldi2Reader/HTKMLFReader.cpp: In member function ‘void Microsoft::MSR::CNTK::HTKMLFReader<ElemType>::PrepareForTrainingOrTesting(const ConfigRecordType&)’:
Source/Readers/Kaldi2Reader/HTKMLFReader.cpp:477:79: error: no matching function for call to ‘msra::dbn::latticesource::latticesource(std::pair<std::vector<std::basic_string<wchar_t> >, std::vector<std::basic_string<wchar_t> > >&, std::unordered_map<std::basic_string<char>, long unsigned int>&)’
             m_lattices = new msra::dbn::latticesource(latticetocs, modelsymmap);
                                                                               ^
Source/Readers/Kaldi2Reader/HTKMLFReader.cpp:477:79: note: candidate is:
In file included from Source/Readers/Kaldi2Reader/minibatchiterator.h:16:0,
                 from Source/Readers/Kaldi2Reader/rollingwindowsource.h:14,
                 from Source/Readers/Kaldi2Reader/HTKMLFReader.cpp:15:
Source/Common/Include/latticesource.h:29:5: note: msra::dbn::latticesource::latticesource(std::pair<std::vector<std::basic_string<wchar_t> >, std::vector<std::basic_string<wchar_t> > >, const std::unordered_map<std::basic_string<char>, long unsigned int>&, std::wstring)
     latticesource (std::pair<std::vector<std::wstring>,std::vector<std::wstring>> latticetocs, const std::unordered_map<std::string,size_t> & modelsymmap, std::wstring RootPathInToc)

Source/Common/Include/latticesource.h:29:5: note:   candidate expects 3 arguments, 2 provided
make[1]: *** [/home/mirco/cntk_source/build/release/.build/Source/Readers/Kaldi2Reader/HTKMLFReader.o] Error 1
make[1]: Leaving directory `/home/mirco/cntk_source'
make: *** [all] Error 2

How can I fix it?

Note that this is the only error I have when running make.

Thank you!

Mirco

bring back Transpose operation

Transpose is currently disabled because it has a known bug in backprop. It should also allow transposing arbitrary tensor dimensions. Switching to true tensors will fix both.

Add 'stable'

The binary download from CodePlex is out of date.
We need a new binary download for the current CNTK build.

How to export the classification result to a txt file

Hi,
After compiling CNTK on Win8.1, I was able to run Simple2d to get a taste of this toolkit. The performance is amazing; it's super fast!

However, I have a simple question: besides printing the EvalErrorPrediction value to the screen, is there a command or action that can output the classification results to a .txt file like this:
0 0
1 1
0 1
1 0
1 1
...
where the first column is the label of the test file and the second column is the label identified by CNTK. The reason I'm asking is that we intend to run various fiber-identification tasks on CNTK, so we need to know the identified fiber blend ratio; say, a 70/30 cotton/wool blend might be identified as a 65/35 blend.

Thanks!!
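For what it's worth, CNTK's command set includes a "write" action (implemented by DoWriteOutput) that evaluates the network over a data set and writes node outputs to files, which is the usual route to a per-sample results file. A hedged sketch of the config shape only; the field names and reader block below are written from memory of the Simple2d-style configs and should be verified against the Wiki:

```
# Hypothetical command block; verify field names against the CNTK Wiki.
write = [
    action = "write"
    outputPath = "$OutputDir$/Simple2d"       # one output file per node
    reader = [
        readerType = "UCIFastReader"
        file = "$DataDir$/SimpleDataTest.txt"
    ]
]
```

Comparing that output file against the test labels column-by-column would then give the per-class confusion (and hence the blend ratio) you are after.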

Sparse Times/Parameter

Hello there :)

I'm trying to create a sparse layer; for that I need sparse matrix multiplication and a sparse weight matrix. This is the only framework that has sparse matrix multiplication on the GPU, but I could not find a way to use it from the NDL.

I guess the "wrappers" are missing. It would also be nice if you provided an example of using the library directly from C++; maybe from there I can access what I need?

Best regards,

Caio Mendes.

cpu only binary + compiler errors

Hello,

I cannot find the CPU only binary in the binary downloads link
https://github.com/Microsoft/CNTK/wiki/CNTK-Binary-Download-and-Configuration

While compiling the source from scratch, I observe type-qualifier errors such as:

In file included from Source/Common/Include/latticearchive.h(25),
from Source/SequenceTrainingLib/latticeforwardbackward.cpp(11):
Source/Common/Include/simplesenonehmm.h(294): error #858: type qualifier on return type is meaningless
const size_t getnumsenone() const
.....

License

The license looks like it was written for CNTK, but I think it is legally equivalent to either BSD or MIT. It would be convenient for developers if you adopted one of those, or almost any other well-known license, instead of writing your own.

Just a thought.

make lock-file location in CrossProcessMutex.h configurable

CrossProcessMutex.h uses a file inside /var/lock for a global lock. That location cannot universally be assumed to be writable in all environments for all users. It should be changed to a location that is, or made changeable through a config variable.

CrossProcessMutex(const std::string& name)
    : m_fd(-1),
      m_fileName("/var/lock/" + name)
{
}

Windows binary CUDA problem

I wanted to try the library in CPU mode. I downloaded the CNTK-20160126-Windows-64bit-ACML5.3.1-CUDA7.0 binary and set the appropriate environment variable ACML_FMA. When I run CNTK with the 'Simple' example, which should have CPU support, it crashes with the errors "The program can't start because cublas_64_70.dll" and "...curand64_70.dll is missing from your computer". Those libraries seem to be CUDA-related. Should I try to compile the library myself, or is there some other problem here?

Ubuntu 14.04 (LTS) build error: error: ‘MPI_Iallreduce’ was not declared in this scope

Hi,
This is my first build of CNTK, and I am stuck at the error message below:
Source/SGDLib/SimpleDistGradAggregator.h:282:186: error: ‘MPI_Iallreduce’ was not declared in this scope
System: Linux Mint Rosa (based on Ubuntu 14.04 LTS).
CUDA toolkit v7.5 (GTX 970)
I downloaded and installed OpenMPI, ACML, CUB, and Cunn per the instructions given on this site.
Any suggestions appreciated!
Ken
