Chainer – Preferred Networks, Inc.
https://www.preferred.jp

Preferred Networks Deepens Collaboration with PyTorch Community
https://www.preferred.jp/en/news/pr20200512/
TOKYO – May 12, 2020 – Preferred Networks, Inc. (PFN) today released pytorch-pfn-extras, an open-source library that supports research and development in deep learning using PyTorch. The new library is part of PFN’s ongoing effort to strengthen its ties with the PyTorch developer community, following Optuna™, PFN’s open-source hyperparameter optimization framework for machine learning, which recently joined the PyTorch Ecosystem.

 

The pytorch-pfn-extras library ports several popular Chainer™ features, selected based on user feedback gathered during PFN’s transition from the Chainer deep learning framework to PyTorch.

pytorch-pfn-extras includes the following features:

  • Extensions and reporter

Functions frequently used when implementing deep learning training programs, such as collecting metrics during training and visualizing training progress

  • Automatic inference of parameter sizes

Easier network definitions by automatically inferring the sizes of linear or convolution layer parameters via input sizes

  • Distributed snapshots

Reduce the costs of implementing distributed deep learning with automated backup, loading, and generation management of snapshots
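The extensions-and-reporter pattern listed above can be illustrated with a minimal, dependency-free sketch. The class names below are purely illustrative, not the actual pytorch-pfn-extras API: the idea is that a training loop reports metrics to a shared object, and pluggable "extensions" fire on a trigger to log or visualize them.

```python
# Minimal sketch of the extensions/reporter pattern (illustrative names,
# NOT the pytorch-pfn-extras API).

class Reporter:
    """Collects named metrics reported during training."""
    def __init__(self):
        self.observation = {}

    def report(self, name, value):
        self.observation[name] = value


class LogReport:
    """Extension: snapshots the current observation every `trigger` iterations."""
    def __init__(self, trigger=2):
        self.trigger = trigger
        self.log = []

    def __call__(self, iteration, observation):
        if iteration % self.trigger == 0:
            self.log.append({"iteration": iteration, **observation})


def train(num_iterations, extensions):
    reporter = Reporter()
    for it in range(1, num_iterations + 1):
        # Stand-in for a real training step; report a fake decreasing loss.
        reporter.report("loss", 1.0 / it)
        for ext in extensions:
            ext(it, dict(reporter.observation))


log_report = LogReport(trigger=2)
train(4, [log_report])
print(log_report.log)  # entries at iterations 2 and 4
```

The actual library wires the same idea into PyTorch training loops, so metrics collection and progress visualization need no hand-rolled bookkeeping.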

pytorch-pfn-extras is available at: https://github.com/pfnet/pytorch-pfn-extras 

The migration guide from Chainer to PyTorch can also be found at: https://medium.com/pytorch/migration-from-chainer-to-pytorch-8ed92c12c8 

On April 6, Optuna was added to the PyTorch Ecosystem of tools that are officially endorsed by the PyTorch community for use in PyTorch-based machine learning and deep learning research and development.

 

PFN is discussing with the PyTorch development team at Facebook, Inc. the possibility of merging pytorch-pfn-extras features into the PyTorch core. In response to strong demand from both internal and external users, PFN also aims to release a PyTorch version of its deep reinforcement learning library, ChainerRL, as open-source software by the end of June 2020.

PFN aims to continue leveraging the software technology it has accumulated through the development of Chainer to contribute to the development of PyTorch and the open-source community.

The PyTorch team at Facebook commented:

“We appreciate PFN for contributing important Chainer functions, such as gathering metrics and managing distributed snapshots, through pytorch-pfn-extras. With this newly available library, PyTorch developers have the ability to understand their model performances and optimize training costs. We look forward to continued collaboration with PFN to bring more contributions to the community, like ChainerRL capabilities later this summer.”

Preferred Networks Migrates its Deep Learning Research Platform to PyTorch
https://www.preferred.jp/en/news/pr20191205/
December 5, 2019, Tokyo Japan – Preferred Networks, Inc. (PFN, Head Office: Tokyo, President & CEO: Toru Nishikawa) today announced plans to incrementally transition its deep learning framework (a fundamental technology in research and development) from PFN’s Chainer™ to PyTorch. Concurrently, PFN will collaborate with Facebook and the other contributors of the PyTorch community to actively participate in the development of PyTorch. With the latest major upgrade v7 released today, Chainer will move into a maintenance phase. PFN will provide documentation and a library to facilitate the migration to PyTorch for Chainer users.

PFN President and CEO Toru Nishikawa made the following comments on this business decision. 

“Since the start of deep learning frameworks, Chainer has been PFN’s fundamental technology to support our joint research with Toyota, FANUC, and many other partners. Chainer provided PFN with opportunities to collaborate with major global companies, such as NVIDIA and Microsoft. Migrating to PyTorch from Chainer, which was developed with tremendous support from our partners, the community, and users, is an important decision for PFN. However, we firmly believe that by participating in the development of one of the most actively developed frameworks, PFN can further accelerate the implementation of deep learning technologies, while leveraging the technologies developed in Chainer and searching for new areas that can become a source of competitive advantage.”

● Background

Developed and provided by PFN, Chainer has supported PFN’s R&D as a fundamental technology and significantly contributed to its business growth since it was open-sourced in June 2015. Its unique Define-by-Run method has gained support from the community of researchers and developers, and has since been widely adopted as a standard approach by today’s mainstream deep learning frameworks, because it allows users to build complex neural networks intuitively and flexibly, speeding up the advancement of deep learning technology.
Meanwhile, the maturation of deep learning frameworks over the last several years has marked the end of the era in which the framework itself was a competitive edge in development. PFN believes that instead of making small adjustments to differentiate itself from competitors, it should contribute to the sustainable growth of the community of developers and users and create a healthy ecosystem with the common goal of further advancing deep learning technology.
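The Define-by-Run idea described above, in which the computation graph is recorded on the fly as the forward pass executes so that ordinary Python control flow can shape the network, can be sketched in a few lines of plain Python. This is a toy scalar autograd for illustration only, not Chainer’s implementation:

```python
# Toy scalar autograd illustrating Define-by-Run: the graph is recorded
# as the forward computation executes, so Python `if`/`for` can change it.

class Var:
    def __init__(self, value, parents=(), grad_fn=None):
        self.value = value
        self.parents = parents      # upstream Vars, recorded at runtime
        self.grad_fn = grad_fn      # returns local gradients w.r.t. parents
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value, (self, other),
                   lambda g: (g * other.value, g * self.value))

    def __add__(self, other):
        return Var(self.value + other.value, (self, other),
                   lambda g: (g, g))

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn:
            for parent, pg in zip(self.parents, self.grad_fn(g)):
                parent.backward(pg)


x = Var(3.0)
y = x
for _ in range(2):      # ordinary Python loop defines the graph: y = x**3
    y = y * x
y.backward()
print(y.value, x.grad)  # 27.0, dy/dx = 3*x**2 = 27.0
```

Because the loop itself builds the graph, the network can differ from iteration to iteration, which is what made Define-by-Run attractive for complex or dynamic architectures.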

● Migrating PFN’s deep learning R&D platform to PyTorch

PFN will migrate its deep learning research platform to PyTorch, which draws inspiration from Chainer, to enable flexible prototyping and a smooth transition from research to production in machine learning development. With a broad set of contributing developers, including Facebook, PyTorch boasts an engaged developer community and is one of the most frequently used frameworks in academic papers. Migrating to PyTorch will allow PFN to efficiently incorporate the latest research results into its R&D activities and to leverage its existing Chainer assets by converting them to PyTorch. PFN will cooperate with the PyTorch team at Facebook and the open-source community to contribute to the development of PyTorch, as well as support PyTorch on MN-Core, a deep learning processor currently being developed by PFN.

PFN has received the following comments from Facebook and the Toyota Research Institute:

Bill Jia, Vice President of AI Infrastructure, Facebook

“As a leading contributor to PyTorch, we’re thrilled that a pioneer in machine learning (ML), such as PFN, has decided to adopt PyTorch for future development,” said Bill Jia, Facebook Vice President of AI Infrastructure. “PyTorch’s enablement of leading-edge research, combined with its ability for distributed training and inference, will allow PFN to rapidly prototype and deploy ML models to production for its customers. In parallel, the entire PyTorch community will benefit from PFN code contributions given the organization’s expertise in ML tools.”

Gill Pratt, CEO, Toyota Research Institute

 “TRI and TRI-AD welcome the transition by PFN to PyTorch,” said Gill Pratt, CEO of Toyota Research Institute (TRI), Chairman of Toyota Research Institute – Advanced Development (TRI-AD), and a Fellow of Toyota Motor Corporation. “PFN has in the past strongly contributed to our joint research, development, and advanced development in automated driving by creating and maintaining Chainer. TRI and TRI-AD have used PyTorch for some time and feel that PFN’s present adoption of PyTorch will facilitate and accelerate our application of PFN’s expertise in deep learning.”

 
● Major features of the latest deep learning framework Chainer™ v7 and general-purpose matrix calculation library CuPy™ v7.

Chainer v7 features improved interoperability with the C++-based ChainerX

  • Chainer v7 bundles the distributed deep learning package ChainerMN, and many Chainer functions now support ChainerX
  • The TabularDataset class has been added for flexible processing of multi-column datasets
  • With ONNX support consolidated into Chainer, Chainer v7 can work with inference engines through ONNX

For details about Chainer’s new features, future development, and documentation on how to migrate to PyTorch, please read the latest blog post from the Chainer development team.
https://chainer.org/announcement/2019/12/05/released-v7.html

CuPy v7 features:

  • With the cuTENSOR and CUB libraries supported, CuPy has improved performance on NVIDIA GPUs
  • CuPy has added experimental support for ROCm, enabling it to be used on AMD GPUs

 

Chainer Release Note: https://github.com/chainer/chainer/releases/tag/v7.0.0

Chainer Documentation: https://docs.chainer.org/en/v7.0.0/

 

PFN will continue to develop its other open-source software (namely CuPy and Optuna) as actively as ever.

Preferred Networks releases version 6 of both the open source deep learning framework Chainer and the general-purpose matrix calculation library CuPy
https://www.preferred.jp/en/news/pr20190516/
May 16, 2019, Tokyo Japan – Preferred Networks, Inc. (PFN, Head Office: Tokyo, President & CEO: Toru Nishikawa) has released Chainer(TM) v6 and CuPy(TM) v6, major updates of PFN’s open source deep learning framework and general-purpose matrix calculation library, respectively. Most code written for previous versions will run as-is on the latest version.
Chainer was released as open source software in 2015 and is known as a pioneer of flexible and intuitive deep learning frameworks based on the Define-by-Run method. Chainer has since been supported by many users and is being actively developed.

ChainerX, a C++ implementation of automatic differentiation that has experimentally been integrated into the main Chainer distribution since the release of the v6 beta version, now supports more examples. The use of ChainerX can significantly reduce overhead on the framework side in both forward and backward propagations without losing much of Chainer’s flexibility and backward compatibility, resulting in increased performance. In addition, Chainer and ChainerX source code does not need to be changed to use new hardware on ChainerX if a third-party developer implements the support for the hardware as a plug-in.
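The plug-in idea mentioned above, where a third party adds support for new hardware without changing Chainer or ChainerX source code, can be sketched as a simple backend registry. This is conceptual only; ChainerX’s real plug-in mechanism is a C++ device interface, not this Python registry:

```python
# Sketch of a pluggable device-backend registry (conceptual only; ChainerX's
# real plug-in mechanism is a C++ interface, not this Python registry).

BACKENDS = {}

def register_backend(name):
    """Decorator a third-party package would use to register a new device."""
    def wrap(cls):
        BACKENDS[name] = cls
        return cls
    return wrap

class Backend:
    def matmul(self, a, b):
        raise NotImplementedError

@register_backend("native")
class NativeBackend(Backend):
    """Reference CPU implementation over 2-D lists of numbers."""
    def matmul(self, a, b):
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

def get_device(name):
    # Framework code looks devices up by name and never hard-codes
    # concrete backends, so new hardware needs no framework changes.
    return BACKENDS[name]()

dev = get_device("native")
print(dev.matmul([[1, 2]], [[3], [4]]))  # [[11]]
```

A hardware vendor would ship its own `register_backend("my_device")` class in a separate package, and existing model code keeps working unchanged.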

 

Main features of Chainer v6 and CuPy v6 are:

  • Integration of ChainerX
    • Fast and more portable multi-dimensional arrays and the automatic differentiation backend have been added.
    • A compatibility layer has been implemented to allow for the use of ChainerX arrays in the same manner as NumPy and CuPy arrays, allowing automatic differentiation with low overhead in C++.
    • An integrated device API has been introduced. The unified interface can handle the specification of devices or inter-device transfer for a wide variety of backends such as NumPy, CuPy, iDeep, and ChainerX.
  • Enhanced support for training in mixed precision
    • mixed16, a new data type, has been added. It enables a mixed precision mode that transparently trains using single- and half-precision operations.
    • Dynamic loss scaling, which detects overflow and automatically adjusts the scale, has been implemented to avoid underflow in mixed precision training.
  • Addition of a function and link test tool
    • A test tool that generates unit tests for forward and backward propagations as well as second order differentials with minimal code has been added.
  • CuPy arrays to support NumPy functions
    • NumPy’s experimental __array_function__ protocol is now supported, so CuPy arrays can be passed directly to many NumPy functions that implement it.
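Dynamic loss scaling can be illustrated with a small, dependency-free sketch. This is conceptual only, not Chainer’s implementation: the loss (and hence the gradients) is multiplied by a scale so small values survive FP16, overflow causes the step to be skipped and the scale halved, and a run of clean steps lets the scale grow again.

```python
import math

# Conceptual sketch of dynamic loss scaling (NOT Chainer's implementation).

class DynamicLossScaler:
    def __init__(self, scale=2.0 ** 15, growth_interval=2000):
        self.scale = scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, scaled_grads):
        """Return unscaled grads, or None if this step must be skipped."""
        if any(math.isinf(g) or math.isnan(g) for g in scaled_grads):
            self.scale /= 2.0          # overflow detected: back off
            self._good_steps = 0
            return None
        out = [g / self.scale for g in scaled_grads]
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= 2.0          # stable for a while: probe larger scale
            self._good_steps = 0
        return out


scaler = DynamicLossScaler(scale=4.0, growth_interval=3)
print(scaler.update([float("inf")]))   # None, and the scale drops to 2.0
print(scaler.update([8.0]))            # [4.0]
```

The balancing act is the whole point: a large scale protects tiny gradients from underflowing in half precision, while the overflow check keeps the scale from growing past what FP16 can represent.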

 

 

PFN will continue improving Chainer performance and expanding the backend. It will contribute to improved performance in a wide range of use cases by making ChainerX easier to use as well as supporting more arithmetic operations.

Chainer has incorporated a number of development results from external contributors. PFN will continue to quickly adopt the results of the latest deep learning research and promote the development and popularization of Chainer in collaboration with supporting companies and the OSS community.

 

About the Chainer(TM) Open Source Deep Learning Framework

Chainer is a Python-based deep learning framework developed and provided by PFN, which has unique features and powerful performance that allow for designing complex neural networks easily and intuitively, thanks to its “Define-by-Run” approach. Since it was open-sourced in June 2015, as one of the most popular frameworks, Chainer has attracted not only the academic community but also many industrial users who need a flexible framework to harness the power of deep learning in their research and real-world applications.

Chainer quickly incorporates the results of the latest deep learning research. With additional packages such as ChainerRL (reinforcement learning), ChainerCV (computer vision), and Chainer Chemistry (a deep learning library for chemistry and biology), and through the support of Chainer development partner companies, PFN aims to promote the most advanced research and development activities of researchers and practitioners in each field. (http://chainer.org/)

Nihon Keizai Shimbun Best Awards at the Nikkei Superior Products and Services Awards 2018 (Jan. 31, 2019)
https://www.preferred.jp/en/news/nihon-keizai-shimbun-best-awards-at-the-nikkei-superior-products-and-services-awards-2018/
Preferred Networks releases ChainerX, a C++ implementation of automatic differentiation of N-dimensional arrays, integrated into Chainer v6 (beta version) for higher computing performance
https://www.preferred.jp/en/news/pr20181203-1/
Dec. 3, 2018, Tokyo Japan – Preferred Networks, Inc. (“PFN”, Head Office: Tokyo, President & CEO: Toru Nishikawa) has released ChainerX, a C++ implementation of automatic differentiation of N-dimensional arrays, for the Chainer™ v6 open source deep learning framework. Most code written for previous versions will run on Chainer v6 without changes.

Since the release of its source code in 2015, the development of Chainer, known as a pioneer of flexible and intuitive deep learning frameworks, has been very active and attracted many users. Many other deep learning frameworks have followed suit in adopting Chainer’s Define-by-Run method, demonstrating the foresight of Chainer. Chainer’s pure-Python implementation policy has, on the one hand, contributed to the legibility and simplicity of its code; on the other hand, as performance improved, the overhead of the Python execution system grew relative to overall runtime and was becoming a bottleneck.

Therefore, the release of ChainerX, which is written in C++ and integrated into the main Chainer, is a first step in achieving higher performance without losing much of Chainer’s flexibility and backward compatibility for many users.

 

 

Main features of ChainerX are:

  • C++ implementation closely connected with Python – the functionality of NumPy, CuPy™, and automatic differentiation (autograd), which was mostly written in Python, has been reimplemented in C++

The logic of matrix calculation, convolution operations, and error backpropagation has been implemented in C++, reducing Python-side CPU overhead by up to 87% (a comparison of overhead measurements only)

 

  • Easy to work with CPU, GPU, and other hardware backends

Replaceable backends have increased portability between devices

 

Figure: In addition to a multidimensional array implementation corresponding to NumPy/CuPy, ChainerX covers the Define-by-Run style automatic differentiation function.

 

 

As well as improving ChainerX performance and expanding the backend, PFN plans to enable models written in ChainerX to be called from non-Python environments.

For more details on ChainerX, developer Seiya Tokui is scheduled to give a presentation this month at NeurIPS (formerly called NIPS), a top conference in machine learning, in Montreal, Canada.

Dec. 7, 12:50-02:55 Open Source Software Showcase:

http://learningsys.org/nips18/schedule.html

 

Chainer has adopted a number of development proposals from external contributors. PFN will continue to quickly adopt the results of the latest deep learning research and promote the development and popularization of Chainer in collaboration with supporting companies and the OSS community.

 

Preferred Networks releases version 5 of both the open source deep learning framework, Chainer, and the general-purpose array calculation library, CuPy
https://www.preferred.jp/en/news/pr20181025/
Preferred Networks, Inc. (PFN, President and CEO: Toru Nishikawa) has released Chainer(TM) v5 and CuPy(TM) v5, major updates of PFN’s open source deep learning framework and general-purpose array calculation library, respectively.

In this first major upgrade in six months, Chainer has become easier to use by integrating ChainerMN, which had previously been provided as a separate distributed deep learning package. Most code written for previous versions will run as-is on v5.

 

Main features of Chainer v5 and CuPy v5 are:

  • Integration with the ChainerMN distributed deep learning package
    • With ChainerMN incorporated into Chainer, fast distributed deep learning on multiple GPUs can be conducted more easily.
  • Support for the NVIDIA(R) data augmentation library
    • Chainer v5 performs faster data preprocessing by decoding and resizing JPEG images on GPUs.
  • Support for FP16
    • Switching to the half-precision floating-point (FP16) format is possible with minimal code changes.
    • Reduced memory consumption allows larger batch sizes.
    • Further speedups are available with NVIDIA(R) Volta GPU Tensor Cores.
  • Compatibility with the latest Intel(R) Architecture
    • Chainer v5 supports version 2 of Chainer Backend for Intel(R) Architecture (previously iDeep, added in Chainer v4) for faster training and inference on Intel(R) processors.
  • High-speed computation and memory savings for static graphs
    • Chainer v5 optimizes computation and memory usage by caching static graphs that do not change during training, speeding up training by 20-60%.
  • Enhanced interoperability with Numba and PyTorch, enabling mutual exchange of GPU array data
    • A CuPy array can now be passed directly to a function JIT-compiled by Numba.
    • DLPack: array data can be exchanged with PyTorch and other frameworks.
  • 50% faster CuPy basic operations
    • Performance of basic operations such as memory allocation and array initialization has improved.
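The static-graph optimization above can be illustrated with a dependency-free sketch. This is conceptual only, not Chainer’s actual API: the expensive graph-construction and memory-planning work runs once per input shape, and subsequent iterations reuse the cached result.

```python
import functools

# Conceptual sketch of static-graph caching (NOT Chainer's actual API):
# expensive graph construction runs once; later iterations reuse it.

build_calls = []

@functools.lru_cache(maxsize=None)
def build_graph(input_shape):
    """Stand-in for expensive graph construction and memory planning."""
    build_calls.append(input_shape)
    return ("execution-plan", input_shape)

def train_step(input_shape):
    plan = build_graph(input_shape)   # cache hit after the first iteration
    return plan

for _ in range(100):
    train_step((32, 784))
print(len(build_calls))  # 1: the graph was built once and reused 99 times
```

Since the graph does not change across iterations, all the per-iteration bookkeeping a Define-by-Run framework normally repeats can be skipped, which is where the reported 20-60% speedup comes from.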

 

Chainer and CuPy have incorporated a number of development results from external contributors. PFN will continue to quickly adopt the results of the latest deep learning research and promote the development and popularization of Chainer and CuPy in collaboration with supporting companies and the OSS community.

 

Preferred Networks wins second place in the Google AI Open Images – Object Detection Track, competing against 454 teams
https://www.preferred.jp/en/news/pr20180907/
Sept. 7, 2018, Tokyo Japan – Preferred Networks, Inc. (PFN, Headquarters: Chiyoda-ku, Tokyo, President and CEO: Toru Nishikawa) participated in the Google AI Open Images – Object Detection Track, an object detection challenge hosted by Kaggle*1, and won second place in the competition among 454 teams from around the world.

 

Object detection, one of the major research subjects in computer vision, is a basic technology critical for autonomous driving and robotics. Competitions using large-scale datasets, such as ImageNet and MS COCO, to achieve better object detection accuracy have been a unifying force in the research community, contributing to the rapid improvement of detection techniques and algorithms.

 

The Google AI Open Images – Object Detection Track, held between July 3, 2018 and August 30, 2018, was a competition of unprecedented scale that used Open Images V4*2, a large and complex dataset released by Google this year. The event attracted the attention of many researchers, and a total of 454 teams from around the world participated.

PFN entered the competition as team “PFDet”, comprising volunteer members (mainly developers of ChainerMN and ChainerCV, PFN’s distributed deep learning library and deep-learning-based computer vision library, respectively) as well as specialists in autonomous driving and robotics. During the competition, PFN’s large-scale cluster MN-1b, which has 512 NVIDIA(R) Tesla(R) V100 32GB GPUs, was in full operation for the first time since its launch in July this year. The team also used parallel deep learning techniques to speed up training on the large-scale dataset and made full use of the research results PFN has accumulated over the years in autonomous driving and robotics. These efforts resulted in the team finishing second, by a narrow margin of 0.023%, behind the first-place team.

 

We have published a paper on our solution, entitled “PFDet: 2nd Place Solution to Open Images Challenge 2018 Object Detection Track,” at https://arxiv.org/abs/1809.00778

We also plan to present the paper at a workshop at the European Conference on Computer Vision (ECCV) 2018.

 

Some of the techniques developed for this competition will be released as additional functionality in ChainerMN and ChainerCV.

 

PFN will continue to work on research and development of image analysis and object detection technologies, and promote their practical applications in our three primary business domains, namely, transportation, manufacturing, and bio/healthcare.

 

*1: A platform for machine learning competitions

*2: A very large training dataset comprising 1.7 million images containing 12 million objects across 500 classes

Chainer awarded the Open Source Data Science Project Award at ODSC East 2018 (May 17, 2018)
https://www.preferred.jp/en/news/pr20180517/
The Open Source Data Science Project award is given in recognition of significant contributions to the field of data science. Winners in previous years were the Pandas project and scikit-learn.

Chainer, an open source deep learning framework, won the award this year in recognition of its dynamic and flexible neural network definition via “define-by-run”.

 

 

Chainer was evaluated for the award as follows:
Chainer strives to “bridge the gap between algorithms and deep learning implementations” in its flexible and intuitive Python-based framework for neural networks. Chainer was the first framework to provide the “define-by-run” neural network definition which allows for dynamic changes in the network. Since flexibility is a significant part of the foundations of Chainer, the framework allows for customization that similar platforms do not so easily provide and supports computations on either CPUs or GPUs.

https://opendatascience.com/odsc-east-2018-open-source-data-science-project-award-winner-the-chainer-framework/

 

About the Open Data Science Conference (ODSC)

ODSC is a conference for people to connect with the data science community and contribute to the open source applications they use every day. Its goal is to bring together the global data science community to help foster the exchange of innovative ideas and encourage the growth of open source software.

 

 

About the Chainer Open Source Deep Learning Framework

Chainer is a Python-based deep learning framework developed mainly by PFN, which has unique features and powerful performance that allow for designing complex neural networks easily and intuitively, thanks to its “Define-by-Run” approach. Since it was open-sourced in June 2015, as one of the most popular frameworks, Chainer has attracted not only the academic community but also many industrial users who need a flexible framework to harness the power of deep learning in their research and real-world applications.
Chainer incorporates the results of the latest deep learning research. With additional packages such as ChainerMN (distributed learning), ChainerRL (reinforcement learning), ChainerCV (computer vision) and through the support of Chainer development partner companies, PFN aims to promote the most advanced research and development activities of researchers and practitioners in each field. (http://chainer.org/)

 

Preferred Networks releases open source deep learning framework Chainer v4 and general-purpose array calculation library CuPy v4
https://www.preferred.jp/en/news/pr20180417_2/
Tokyo, Japan, April 17, 2018 — Preferred Networks, Inc. (PFN, Headquarters: Chiyoda-ku, Tokyo, President and CEO: Toru Nishikawa) has released v4 of Chainer™ and CuPy™, major updates of the open source deep learning framework and the general-purpose array calculation library, respectively.

This major upgrade to Chainer and CuPy incorporates the results of the latest deep learning research over the last six months. The newly released v4 is largely compatible with previous versions of Chainer.

 

Main features of Chainer and CuPy v4 include:

Additional functions for fast, memory-efficient training on NVIDIA(R) GPUs *1

Chainer now supports NVIDIA Tensor Cores to speed up convolution operations. Loss scaling has also been implemented to keep small gradient values from underflowing to zero when using half-precision floats.
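
The arithmetic behind loss scaling can be shown without any framework (this is a stdlib illustration of the idea, not Chainer's API; the scale factor 1024 is an arbitrary example):

```python
import struct

def to_fp16(x):
    # Round-trip through IEEE half precision to mimic fp16 gradient storage.
    # struct format 'e' is half precision (Python 3.6+).
    return struct.unpack('e', struct.pack('e', x))[0]

grad = 1e-8               # a small gradient magnitude, common late in training
print(to_fp16(grad))      # 0.0 -- below fp16's smallest subnormal (~6e-8)

scale = 1024.0                   # hypothetical loss-scaling factor
scaled = to_fp16(grad * scale)   # scaling the loss scales every gradient
update = scaled / scale          # unscale in fp32 before the parameter update
print(update)                    # ~1e-8, recovered instead of lost
```

Multiplying the loss by a constant before backprop shifts all gradients into fp16's representable range; dividing by the same constant in fp32 before the optimizer step leaves the update mathematically unchanged.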

Quick installation of CuPy

We have begun providing a binary package of CuPy to reduce the installation time from 10 minutes down to about 10 seconds.
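
Installation with the binary package looks like the following (the wheel name encodes the CUDA version; `cupy-cuda90` is shown as an example for a CUDA 9.0 environment):

```shell
# Prebuilt wheel: no local CUDA compilation, so installation takes seconds.
# Pick the wheel matching your installed CUDA toolkit version.
pip install cupy-cuda90
pip install chainer
```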

Optimized for Intel(R) Architecture

An Intel Deep Learning Package (iDeep) *2 backend has been added to speed up training and inference on Intel CPUs. According to our benchmark results *3, this delivers an 8.9-fold improvement in inference speed on CPUs for GoogLeNet (a neural network used for image recognition).

More functions supporting second-order differentiation

Enhanced support for second-order differentiation, first introduced in v3, allows easier implementation of the latest networks and algorithms.
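
To make concrete what second-order differentiation computes: frameworks obtain it by differentiating the backward graph itself ("double backprop"), but the quantity can be checked numerically. The snippet below is not Chainer's API, just a central-difference illustration:

```python
# Central finite difference approximation of the second derivative:
# f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2
def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

f = lambda x: x ** 3               # analytically, f''(x) = 6x
print(second_derivative(f, 2.0))   # approximately 12
```

In a framework with double backprop, the same value comes from calling the gradient operation twice, which is what enables algorithms that penalize or optimize gradient norms.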

A new function to export results of training with Chainer in the Caffe format

A function to export Chainer's computational graph and learned weights in the Caffe format has been added as an experimental feature. This makes it easier to use models trained with Chainer even in environments where Python cannot run. (Export to the ONNX format is also available via the onnx-chainer package.)

 

◆Chainer Release Notes: https://github.com/chainer/chainer/releases/tag/v4.0.0

◆Update Guide: https://docs.chainer.org/en/latest/upgrade.html

 

Chainer and CuPy incorporate many contributions from external developers. PFN will continue working with supporting companies and the OSS community to promote the development and adoption of Chainer and CuPy.

 

* 1:http://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html

* 2: A NumPy-compatible library for fast general arithmetic operations used in deep learning on Intel CPUs https://github.com/intel/ideep

* 3: Comparison of per-image processing time with iDeep enabled versus disabled. Intel Math Kernel Library was enabled in both cases, on an Intel Xeon(R) CPU E5-2623 v3.

 

Preferred Networks achieved the world’s fastest training time in deep learning, completing training on ImageNet in 15 minutes using the distributed learning package ChainerMN and a large-scale parallel computer. (November 10, 2017) https://www.preferred.jp/en/news/pr20171110/

November 10, 2017, Tokyo – Preferred Networks, Inc. (PFN, Headquarters: Chiyoda-ku, Tokyo, President and CEO: Toru Nishikawa) has achieved the world’s fastest training time in deep learning using its large-scale parallel computer MN-1 *1.

With training data and model parameters growing in pursuit of higher accuracy in deep learning models, computation time is also increasing, and it is not unusual for training a model to take several weeks. Linking many GPUs together to speed up training is therefore essential for reducing the time spent on trial and error and for verifying new ideas quickly.

On the other hand, it is generally known in parallel/distributed learning that model accuracy tends to degrade gradually as the number of GPUs grows, because of the larger effective batch size and GPU communication overhead.
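
One widely used mitigation for the large-batch accuracy drop is the linear scaling rule with gradual warmup (Goyal et al., 2017). The numbers below are illustrative assumptions, not the paper's exact recipe: per-GPU batch size times GPU count gives the effective minibatch, and the learning rate is scaled in proportion.

```python
# Hypothetical large-minibatch arithmetic (assumed values for illustration).
per_gpu_batch = 32
num_gpus = 1024
effective_batch = per_gpu_batch * num_gpus   # 32,768 images per update
print(effective_batch)

base_lr, base_batch = 0.1, 256               # reference small-scale setup
scaled_lr = base_lr * effective_batch / base_batch   # linear scaling rule
print(scaled_lr)                             # 12.8

def warmup_lr(step, warmup_steps=500):
    # Ramp linearly from base_lr to scaled_lr over the first warmup_steps,
    # avoiding divergence from a large learning rate at initialization.
    if step >= warmup_steps:
        return scaled_lr
    return base_lr + (scaled_lr - base_lr) * step / warmup_steps
```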

This time, we improved the learning algorithms and parallel performance to address these issues, using one of Japan’s most powerful parallel computers, with 1,024 NVIDIA(R) Tesla(R) P100 GPUs across multiple nodes, together with Chainer’s distributed learning package ChainerMN *2 for training.
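
The data-parallel pattern that ChainerMN implements with MPI allreduce can be sketched in plain Python (a stdlib illustration, not ChainerMN's API): each worker computes gradients on its own data shard, and averaging them reproduces the full-batch gradient when shards are equal-sized and the loss is a mean.

```python
# Toy dataset and model: fit y = w*x with mean squared error.
data = [(x, 2.0 * x) for x in range(8)]

def shard_gradient(w, shard):
    # d/dw of mean over the shard of 0.5 * (w*x - y)^2
    return sum((w * x - y) * x for x, y in shard) / len(shard)

w = 0.0
num_workers = 4
shards = [data[i::num_workers] for i in range(num_workers)]  # equal-sized shards

local_grads = [shard_gradient(w, s) for s in shards]  # computed in parallel
allreduced = sum(local_grads) / num_workers           # "allreduce" average
full = shard_gradient(w, data)                        # single-worker reference
print(allreduced, full)                               # identical values
```

Because only one gradient-sized message per worker needs to be averaged per step, the communication cost is independent of the dataset size, which is what makes scaling to 1,024 GPUs feasible.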

As a result, we completed training ResNet-50 *3 for image classification on the ImageNet *4 dataset in 15 minutes, a significant improvement over the previously best known result *5.

The research paper on this achievement, “Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes”, is available at the following URL.
(imagenet_in_15min.pdf)

Based on this research result, PFN will further accelerate its research and development activities in the fields of transportation systems, manufacturing, and bio/healthcare, which require large-scale deep learning.

 

*1: One of the most powerful private supercomputers in Japan, containing 1,024 NVIDIA(R) Tesla(R) P100 GPUs across multiple nodes. https://www.preferred.jp/en/news/pr20170920

*2: A package that adds distributed training across multiple GPUs to the open source deep learning framework Chainer

*3: A network frequently used in the field of image recognition

*4: A dataset widely used for image classification

*5: Training completed in 31 minutes using 1,600 Intel(R) Xeon(R) Platinum 8160 processors (Y. You et al., ImageNet Training in Minutes. CoRR, abs/1709.05011, 2017)

 


■ About Preferred Networks, Inc.

Founded in March 2014 with the aim of promoting business applications of deep learning technology with a focus on IoT, PFN advocates Edge Heavy Computing, which handles the enormous amounts of data generated by devices in a distributed, collaborative manner at the edge of the network. PFN drives innovation in three priority business areas: transportation, manufacturing, and bio/healthcare, collaborating with world-leading organizations such as Toyota Motor Corporation, Fanuc Corporation, and the National Cancer Center. (https://www.preferred.jp/en/)

*Chainer(R) is a trademark or registered trademark of Preferred Networks, Inc. in Japan and other countries.

 

 
