Linux Tutorial from Ryan's Tutorials
The following pages are intended to give you a solid foundation in how to use the terminal to get the computer to do useful work for you. You won't be a Unix guru at the end, but you will be well on your way, armed with the right knowledge and skills to get you there if that's what you want (which you should, because it will make you even more awesome). Here you will learn the Linux command line (Bash) with our 13-part beginners tutorial. It contains clear descriptions, command outlines, examples, shortcuts, and best practices.

At first, the Linux command line may seem daunting, complex and scary. It is actually quite simple and intuitive once you understand what is going on, and once you work through the following sections you will understand what is going on. Unix likes to take the approach of giving you a set of building blocks and then letting you put them together. This allows us to build things to suit our needs. With a bit of creativity and logical thinking, mixed in with an appreciation of how the blocks work, we can assemble tools to do virtually anything we want. The aim is to be lazy. Why should we do anything we can get the computer to do for us? The only reason I can think of is that you don't know how (but after working through these pages you will, so then there won't be a good reason).

A question that may have crossed your mind is "Why should I bother learning the command line? The graphical user interface is much easier, and I can already do most of what I need there." To a certain extent you would be right, and by no means am I suggesting you should ditch the GUI. Some tasks are best suited to a GUI; word processing and video editing are great examples. Other tasks are more suited to the command line; data manipulation (reporting) and file management are good examples. Some tasks will be just as easy in either environment. Think of the command line as another tool you can add to your belt. As always, pick the best tool for the job.
PyTorch Introduction
This is a very barebones introduction to the PyTorch framework used to implement machine learning. The tutorial implements a feed-forward neural network and is taught completely asynchronously through Stanford University. A good starting point after learning the theory behind feed-forward neural networks.
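The tutorial's own materials are the reference; as a flavor of what a feed-forward network looks like in PyTorch, here is a minimal, hypothetical sketch (the layer sizes and the random stand-in data are placeholders, not the tutorial's code):

```python
import torch
import torch.nn as nn

# A minimal feed-forward (fully connected) network; the layer sizes
# here are illustrative placeholders, not the tutorial's architecture.
class FeedForward(nn.Module):
    def __init__(self, in_dim=784, hidden=128, out_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = FeedForward()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on random stand-in data.
x = torch.randn(32, 784)          # batch of 32 fake inputs
y = torch.randint(0, 10, (32,))   # fake class labels
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```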
Using Dask on HPC Systems
A tutorial on the effective use of Dask on HPC resources. The four-hour tutorial will be split into two sections, with early topics focused on novice Dask users and later topics focused on intermediate usage on HPC and associated best practices. The knowledge areas covered include (but are not limited to) the topics below, followed by a brief illustrative sketch:
Beginner section
High-level collections including dask.array and dask.dataframe
Distributed Dask clusters using HPC job schedulers
Earth Science data analysis using Dask with Xarray
Using the Dask dashboard to understand your computation
Intermediate section
Optimizing the number of workers and memory allocation
Choosing appropriate chunk shapes and sizes for Dask collections
Querying resource usage and debugging errors
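As a taste of the beginner-section topics, here is a minimal, hypothetical sketch of a chunked dask.array computation; the array and chunk shapes are illustrative placeholders, not the tutorial's recommendations:

```python
import dask.array as da

# A 10,000 x 10,000 array split into 1,000 x 1,000 chunks; each chunk
# becomes a task in Dask's graph and can be processed in parallel.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

# Operations are lazy: this line only builds a task graph.
result = (x + x.T).mean(axis=0)

# .compute() runs the graph. On HPC, a scheduler-backed cluster (e.g.
# via dask_jobqueue's SLURMCluster or PBSCluster) would be attached
# first, as covered in the distributed-clusters topic above.
print(result.compute()[:5])
```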
DELTA Introductory Video
An introductory video about DELTA, presented by Tim Boerner, Senior Assistant Director, NCSA.
Practical Machine Learning with Python
This video series provides a holistic understanding of machine learning, covering the theory, application, and inner workings of supervised, unsupervised, and deep learning algorithms. It covers topics such as linear regression, K Nearest Neighbors, Support Vector Machines (SVM), flat clustering, hierarchical clustering, and neural networks. The series goes over the high-level intuition of each algorithm and how it is logically meant to work, then applies the algorithms in code to real-world data sets using modules such as Scikit-Learn.
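As a flavor of the kind of application the series walks through, here is a minimal Scikit-Learn sketch; the dataset and model choice are illustrative, not the series' exact code:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small real-world dataset and split it for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a K Nearest Neighbors classifier and report held-out accuracy.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```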
Representation Learning in Deep Learning
Representation learning is a fundamental concept in machine learning and artificial intelligence, particularly in the field of deep learning. At its core, representation learning is the process of transforming raw data into a form that is more suitable for a specific task or learning objective. This transformation aims to extract meaningful and informative features or representations from the data, which can then be used for various tasks like classification, clustering, regression, and more.
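A classic concrete instance is an autoencoder, which learns a compact representation by reconstructing its input. The following minimal PyTorch sketch is purely illustrative; the dimensions, architecture, and random stand-in data are assumptions, not taken from the linked material:

```python
import torch
import torch.nn as nn

# An autoencoder learns a low-dimensional representation ("code") of
# raw inputs by trying to reconstruct them; dimensions are placeholders.
class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = self.encoder(x)          # the learned representation
        return self.decoder(code), code

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 784)               # stand-in raw data
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)
opt.zero_grad()
loss.backward()
opt.step()
# After training, `code` can feed downstream tasks such as clustering
# or classification.
```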
Jetstream2 Status
Jetstream2 makes cutting-edge high-performance computing and software easy to use for your research regardless of your project’s scale—even if you have limited experience with supercomputing systems. Cloud-based and on-demand, the 24/7 system includes discipline-specific apps. You can even create virtual machines that look and feel like your lab workstation or home machine, with thousands of times the computing power.
Awesome Jupyter Widgets (for building interactive scientific workflows or science gateway tools)
A curated list of awesome Jupyter widget packages and projects for building interactive visualizations for Python code
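As a flavor of what such widgets enable, here is a minimal example built on the core ipywidgets package (the function and slider ranges are illustrative placeholders):

```python
# Run inside a Jupyter notebook.
from ipywidgets import interact

# interact() builds slider widgets from the keyword arguments and
# re-runs the function whenever the user moves them.
def show_power(base=2.0, exponent=3):
    print(f"{base} ** {exponent} = {base ** exponent}")

interact(show_power, base=(0.0, 10.0, 0.5), exponent=(0, 5))
```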
Trusted CI
The mission of Trusted CI is to lead in the development of an NSF Cybersecurity Ecosystem with the workforce, knowledge, processes, and cyberinfrastructure that enable trustworthy science and NSF’s vision of a nation that is a global leader in research and innovation.
AI Institutes Cyberinfrastructure Documents: SAIL Meeting
Materials from the SAIL (Summit for AI Leadership) meeting (https://aiinstitutes.org/2023/06/21/sail-2023-summit-for-ai-leadership/), a space where AI researchers can learn about using ACCESS resources for AI applications and research.
MATLAB with other Programming Languages
MATLAB is a widely used tool for data analysis, among other computational work. This tutorial takes you through using MATLAB with other programming languages, including C, C++, Fortran, Java, and Python.
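As one hypothetical direction of that interoperability, MathWorks ships a MATLAB Engine API for Python that lets Python scripts call into a MATLAB session; whether the tutorial uses this exact mechanism is not stated here, so treat the following as a sketch:

```python
# Requires a local MATLAB installation plus the MATLAB Engine API for
# Python (shipped with MATLAB). Shows Python -> MATLAB calls.
import matlab.engine

eng = matlab.engine.start_matlab()   # launches a MATLAB session

# Call built-in MATLAB functions directly from Python.
print(eng.sqrt(4.0))                                # 2.0
print(eng.max(matlab.double([1.0, 7.0, 3.0])))      # 7.0

eng.quit()                           # shut the session down
```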
Molecular Dynamics Tutorials for Beginners
Links to MD tutorials for beginners across various simulation platforms.
Python
A Python course offered by Texas A&M HPRC.
Ultimate guide to Unix
Unix is incredibly common and useful. This website provides all the common commands, with explanations, that one needs to get started with a Unix system.
Harnessing the Power of Cloud and Machine Learning for Climate and Ocean Advances
Documentation and a presentation on using machine learning and deep learning frameworks (TensorFlow, Keras, and scikit-learn) for climate and ocean research.
ACCESS KB Guide - Anvil
Purdue University is the home of Anvil, a powerful supercomputer that provides advanced computing capabilities to support a wide range of computational and data-intensive research spanning from traditional high-performance computing to modern artificial intelligence applications.
Samtools Documentation
Samtools is a suite of programs for interacting with high-throughput sequencing data, especially in the SAM/BAM format. It offers various utilities for processing, analyzing, and managing sequence data generated from next-generation sequencing (NGS) experiments. Samtools is widely used in bioinformatics and genomics research for tasks such as read alignment, variant calling, and data manipulation.
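Samtools itself is a command-line suite; as an illustrative Python-side counterpart, the separate pysam package wraps the same htslib machinery. A minimal sketch (the BAM file and region are placeholders, and the file must be indexed):

```python
# pysam is a separate Python binding to the htslib machinery that
# samtools builds on; the BAM file name here is a placeholder.
import pysam

with pysam.AlignmentFile("example.bam", "rb") as bam:
    # fetch() needs a .bai index alongside the BAM file.
    for read in bam.fetch("chr1", 10_000, 10_100):
        if read.mapping_quality >= 30:
            print(read.query_name, read.reference_start)
```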
Machine Learning in R online book
The free online book for the mlr3 machine learning framework for R. It gives a comprehensive overview of the package and its ecosystem, suitable for everyone from beginners to experts. You'll learn how to build and evaluate machine learning models, build complex machine learning pipelines, tune their performance automatically, and explain how machine learning models arrive at their predictions.
ACCESS Resource Advisor
A web-based tool to help researchers identify appropriate ACCESS resources for their project.
Charliecloud User Group
Announcements for users and developers of Charliecloud, which provides lightweight user-defined software stacks for high-performance computing.
DAGMan for orchestrating complex workflows on HTC resources (High Throughput Computing)
DAGMan (Directed Acyclic Graph Manager) is a meta-scheduler for HTCondor. It manages dependencies between jobs at a higher level than the HTCondor Scheduler.
It is a workflow management system developed by the High-Throughput Computing (HTC) community, specifically for managing large-scale scientific computations and data analysis tasks. It enables users to define complex workflows as directed acyclic graphs (DAGs), in which nodes represent individual computational tasks and directed edges represent dependencies between them. DAGMan manages the execution of these tasks and ensures that they run in the correct order based on their dependencies.
The primary purpose of DAGMan is to simplify the management of large-scale computations that consist of numerous interdependent tasks. By defining the dependencies between tasks in a DAG, users can easily express the order of execution and allow DAGMan to handle the scheduling and coordination of the tasks. This simplifies the development and execution of complex scientific workflows, making it easier to manage and track the progress of computations.
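DAGMan's actual input is an HTCondor-specific DAG description file, not Python, but the core guarantee (every task runs only after its dependencies) can be sketched generically. The toy Python topological-sort example below is a conceptual illustration only, not DAGMan syntax:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Toy DAG: task -> set of tasks it depends on. In DAGMan terms,
# "analyze" is a CHILD of both the "fetch" and "clean" PARENT nodes.
dag = {
    "fetch":   set(),
    "clean":   {"fetch"},
    "analyze": {"fetch", "clean"},
    "report":  {"analyze"},
}

# static_order() yields tasks so that every dependency runs first,
# which is the same ordering guarantee DAGMan provides for real jobs.
for task in TopologicalSorter(dag).static_order():
    print("run", task)
```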
TensorFlow for Deep Neural Networks
TensorFlow is a powerful framework for deep learning, developed by Google. This specifically is its Python package, which is easy to use and can be used to train incredibly powerful models.
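As a flavor of the package, here is a minimal Keras sketch that defines, trains, and evaluates a tiny network; the architecture and random stand-in data are illustrative placeholders:

```python
import numpy as np
import tensorflow as tf

# A small dense network; layer sizes are illustrative placeholders.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Train briefly on random stand-in data.
X = np.random.rand(256, 20).astype("float32")
y = (np.random.rand(256) > 0.5).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```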
Neurostars
A question and answer forum for neuroscience researchers, infrastructure providers and software developers.
PetIGA, an open-source code for isogeometric analysis
This documentation provides an overview of PetIGA, an open-source framework for solving multiphysics problems with isogeometric analysis. The documentation covers some simple tutorials and examples to help users get started with the framework and apply it to solve real-world problems in continuum mechanics, including solid and fluid mechanics.
Introduction to Parallel Programming for GPUs with CUDA
This tutorial provides a comprehensive introduction to CUDA programming, focusing on essential concepts such as CUDA thread hierarchy, data parallel programming, host-device heterogeneous programming model, CUDA kernel syntax, GPU memory hierarchy, and memory optimization techniques like global memory coalescing and shared memory bank conflicts. Aimed at researchers, students, and practitioners, the tutorial equips participants with the skills needed to leverage GPU acceleration for scalable computation, particularly in the context of AI.
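The tutorial targets CUDA C/C++; as a swapped-in illustration of the same thread-hierarchy idea from Python, here is a minimal vector-add kernel using Numba's CUDA support (array sizes and launch configuration are placeholders, not the tutorial's code):

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index: block * blockDim + thread
    if i < out.size:          # guard against the trailing partial block
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block  # ceiling division
vector_add[blocks, threads_per_block](a, b, out)           # kernel launch

assert np.allclose(out, a + b)
```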