Taichi is a high-performance programming language for computer graphics applications.
# Python 3.6/3.7 needed
# CPU only. No GPU/CUDA needed. (Linux, OS X and Windows)
python3 -m pip install taichi-nightly
# With GPU (CUDA 10.0) support (Linux only)
python3 -m pip install taichi-nightly-cuda-10-0
# With GPU (CUDA 10.1) support (Linux only)
python3 -m pip install taichi-nightly-cuda-10-1
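If the installation succeeded, a short smoke test should import the package, compile a kernel, and print a value. This is only a minimal sketch assuming the 0.3.x-era API used by the examples further down this page (ti.var plus a @ti.layout placement); newer Taichi releases declare tensors differently, so adjust accordingly.

import taichi as ti

x = ti.var(ti.f32)                            # a 32-bit float tensor, shape given below

@ti.layout
def place():
    ti.root.dense(ti.ij, (8, 8)).place(x)     # lay x out as a dense 8x8 tensor

@ti.kernel
def fill():
    for i, j in x:                            # parallel struct-for over all entries of x
        x[i, j] = i + j

fill()
print(x[3, 4])                                # expected output: 7.0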
Related papers
(SIGGRAPH Asia 2019) High-Performance Computation on Sparse Data Structures [Video] [BibTex], by Yuanming Hu, Tzu-Mao Li, Luke Anderson, Jonathan Ragan-Kelley, and Frédo Durand
(ICLR 2020) Differentiable Programming for Physical Simulation [Video] [BibTex] [Code], by Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Frédo Durand
Short-term goals
(Done) Fully implement the LLVM backend to replace the legacy source-to-source C++/CUDA backends (by Dec 2019). The only features missing compared to the old source-to-source backends:
  - Vectorization on CPUs. Since most users who want performance are on GPUs (CUDA), this has low priority.
  - Automatic shared memory utilization. Postponed until Feb/March 2020.
(WIP) Tune the performance of the LLVM backend to match that of the legacy source-to-source backends (by the end of Jan 2020).
(WIP) Redesign the memory allocator.
Updates
(Jan 3, 2020) v0.3.20 released.
  - Support for loops with ti.static(ti.grouped(ti.ndrange(...))) (a short sketch appears after the ti.ndrange example below)
(Jan 2, 2020) v0.3.19 released.
  - Added ti.atan2(y, x)
  - Improved the error message when floating-point numbers are used as tensor indices
(Jan 1, 2020) v0.3.18 released.
  - Added the ti.GUI class
  - Improved the performance of ti.Matrix.fill
(Dec 31, 2019) v0.3.17 released.
  - Fixed a CUDA context conflict with PyTorch (thanks to @Xingzhe He for reporting)
  - Support ti.Matrix.T() for transposing a matrix
  - Iterable ti.static(ti.ndrange)
  - Fixed ti.Matrix.identity()
  - Added ti.Matrix.one() (creates a matrix with all entries set to 1)
  - Improved ir_printer on SNodes
  - Better support for dynamic SNodes: struct-for's on dynamic nodes are supported; ti.length and ti.append query and manipulate dynamic nodes
(Dec 29, 2019) v0.3.16 released.
  - Fixed ndrange-fors with local variables (thanks to Xingzhe He for reporting this issue)
(Dec 28, 2019) v0.3.15 released.
  - Multi-dimensional parallel range-for using ti.ndrange:
@ti.kernel
def fill_3d():
    # Parallelized for all 3 <= i < 8, 1 <= j < 6, 0 <= k < 9
    for i, j, k in ti.ndrange((3, 8), (1, 6), 9):
        x[i, j, k] = i + j + k
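To make the v0.3.20 item above concrete, here is a hedged sketch of a loop over ti.static(ti.grouped(ti.ndrange(...))): the 2x2 offset window is unrolled at compile time and each offset arrives as a small vector. The tensors x and y are assumptions of this sketch; both are 2D, with x one element larger than y along each axis so the reads stay in bounds.

@ti.kernel
def box_sum():
    for i, j in y:                                             # parallel loop over the output tensor
        s = 0.0
        for offset in ti.static(ti.grouped(ti.ndrange(2, 2))): # unrolled 2x2 window of offsets
            s += x[i + offset[0], j + offset[1]]
        y[i, j] = s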
(Dec 28, 2019) v0.3.14 released.
  - GPU random number generator support for more than 1024x1024 threads
  - Parallelized element list generation on GPUs; struct-fors significantly sped up
  - ti and tid (debug mode) CLI commands
(Dec 26, 2019) v0.3.13 released.
  - ti.append now returns the list length before appending
  - Fixed for loops with 0 iterations
  - Set ti.get_runtime().set_verbose_kernel_launch(True) to log kernel launches
  - Distinguish / and // following the Python convention
  - Allow using local variables as kernel argument type annotations (see the sketch after the code example below)
(Dec 25, 2019) v0.3.11 released.
  - Support multiple kernels with the same name, especially in OOP cases where multiple member kernels share the same name
  - Basic dynamic node support (ti.append, ti.length) in the new LLVM backend
  - Fixed struct-for loops on 0-D tensors
(Dec 24, 2019) v0.3.10 released.
  - assert
for I in ti.grouped(x):
    # I is a vector of size x.dim() and data type i32
    x[I] = 0

# If tensor x is 2D
for I in ti.grouped(x):
    # I is a vector of size x.dim() and data type i32
    y[I + ti.Vector([0, 1])] = I[0] + I[1]

# which is equivalent to
for i, j in x:
    y[i, j + 1] = i + j
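The v0.3.13 item above about local variables as kernel argument type annotations can be illustrated with a small hedged sketch; the tensor x is assumed to be declared as in the earlier examples, and real is just an ordinary Python variable.

real = ti.f32                     # a plain Python variable holding a Taichi type

@ti.kernel
def scale(k: real):               # the local variable serves as the argument type annotation
    for I in ti.grouped(x):       # works for x of any dimensionality
        x[I] = x[I] * k

Calling scale(0.5) then halves every entry of x, whatever its shape.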
(Nov 27, 2019) v0.1.5 released.
  - Better modular programming support
  - Disallow the use of ti.static outside Taichi kernels
  - Documentation improvements (WIP)
  - Codegen bug fixes
  - Special thanks to Andrew Spielberg and KLozes for bug reports and feedback.
(Nov 22, 2019) v0.1.3 released.
  - Object-oriented programming. [Example]
  - Native Python function translation in Taichi kernels:
    - Use print instead of ti.print
    - Use int() instead of ti.cast(x, ti.i32) (or ti.cast(x, ti.i64) if your default integer precision is 64-bit)
    - Use float() instead of ti.cast(x, ti.f32) (or ti.cast(x, ti.f64) if your default floating-point precision is 64-bit)
    - Use abs instead of ti.abs
    - Use ti.static_print for compile-time printing
(Nov 16, 2019) v0.1.0 released.
  - Fixed the PyTorch interface.
(Nov 12, 2019) v0.0.87 released.
  - Added experimental Windows support, with a [known issue] regarding virtual memory allocation that may limit the scalability of Taichi programs (if you are a Windows expert, please let me know how to solve this. Thanks!). Most examples work on Windows now.
  - CUDA march autodetection
  - Complex kernels to override autodiff
(Nov 4, 2019) v0.0.85 released.
  - ti.stop_grad for stopping gradients during backpropagation. [Example]
  - Compatibility improvements on Linux and OS X
  - Minor bug fixes
(Nov 1, 2019) v0.0.77 released.
  - Python wheels now support OS X 10.14+
  - LLVM is now the default backend. No need to install gcc-7 or clang-7 anymore. To use the legacy backends, export TI_LLVM=0
  - LLVM compilation speed improved by 2x
  - More friendly syntax error messages
(Oct 30, 2019) v0.0.72 released.
  - The LLVM GPU backend is now as fast as the legacy (yet optimized) CUDA backend. To enable, export TI_LLVM=1
  - Bug fixes: LLVM struct-for list generation
(Oct 29, 2019) v0.0.71 released.
  - LLVM GPU backend performance greatly improved
  - The frontend compiler now emits readable syntax error messages
(Oct 28, 2019) v0.0.70 released.
  - Experimental LLVM backends for x86_64 and CUDA (via NVVM/PTX). GPU kernel compilation speed improved by 10x. To enable, update the taichi package and export TI_LLVM=1.
(Oct 24, 2019) Python wheels (v0.0.61) released for Python 3.6/3.7 and CUDA 10.0/10.1 on Ubuntu 16.04+. Contributors of this release include Yuanming Hu, robbertvc, Zhoutong Zhang, Tao Du, Srinivas Kaza, and Kenneth Lozes.
(Oct 22, 2019) Added support for kernel templates. Kernel templates allow users to pass Taichi tensors and compile-time constants as kernel parameters (a sketch follows this list).
(Oct 9, 2019) Compatibility improvements. Added a basic PyTorch interface. [Example]
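As a hedged sketch of the kernel templates mentioned in the Oct 22, 2019 entry (assuming the ti.template() annotation and two tensors of identical shape), a single templated kernel can copy between any pair of tensors; each distinct pair triggers its own instantiation at compile time.

@ti.kernel
def copy(src: ti.template(), dst: ti.template()):   # a kernel template over two tensors
    for I in ti.grouped(src):                       # dimensionality-independent copy
        dst[I] = src[I]

copy(x, y) and copy(y, x) then compile two separate instantiations of the same source.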
Notes:
You still need to clone this repo for the demo scripts under examples. You do not need to execute install.py or dev_setup.py. After installation via pip you can simply go to examples and run, e.g., python3 mpm_fluid.py.
Make sure you clear your legacy Taichi installation (if applicable) by cleaning the environment variables (delete TAICHI_REPO_DIR, and remove the legacy taichi from PYTHONPATH) in your .bashrc or .zshrc. Or you can simply do this in your shell to temporarily clear them:
export PYTHONPATH=
export TAICHI_REPO_DIR=
The Taichi Library [Legacy branch]
Taichi is an open-source computer graphics library that aims to provide easy-to-use infrastructure for computer graphics R&D. It is written in C++14 and comes with friendly Python bindings.
News
May 17, 2019: Giga-Voxel SPGrid Topology Optimization Solver released!
March 4, 2019: MLS-MPM/CPIC solver is now MIT-licensed!
August 14, 2018: MLS-MPM/CPIC solver reloaded! It delivers a 4-14x performance boost over the previous state of the art on CPUs.
Getting Started (Legacy)