Programming VIII Courses Online

Live instructor-led online training: Programming VIII courses are delivered using an interactive remote desktop.

During the course each participant will be able to perform Programming VIII exercises on their remote desktop provided by Qwikcourse.


How do I start learning Programming VIII?


Select among the courses listed in the category that really interests you.

If you are interested in learning a course under this category, click the "Book" button and purchase the course. Select your preferred schedule at least 5 days ahead. You will receive an email confirmation and we will communicate with the trainer of your selected course.

Programming VIII Training


Avato Python Client Training

About

Confidential ML Training

Python client library for Training instance of the decentriq platform.

What is Confidential ML Training?

By using Confidential Computing, Confidential ML Training enables you to train machine learning models on data that nobody can ever access: not you, not us, not the infrastructure provider, nobody. This removes the risk of data breaches or data misuse.


7 hours

$1,990

Discover Baobab

About

Training data generator for hierarchically modeling strong lenses with Bayesian neural networks

The baobab package can generate images of strongly-lensed systems, given some configurable prior distributions over the parameters of the lens and light profiles as well as configurable assumptions about the instrument and observation conditions. It supports prior distributions ranging from artificially simple to empirical.

A major use case for baobab is the generation of training and test sets for hierarchical inference using Bayesian neural networks (BNNs). The idea is that Baobab will generate the training and test sets using different priors. A BNN trained on the training dataset learns not only the parameters of individual lens systems but also, implicitly, the hyperparameters describing the training set population (the training prior). Such hierarchical inference is crucial in scenarios where the training and test priors are different so that techniques such as importance weighting can be employed to bridge the gap in the BNN response.

Content

  • Introduction on features
  • Installation
  • Usage
  • Attribution

7 hours

$1,990

Creating Dashboards With Shiny Live Training

About

Creating Dashboards with Shiny

Slides

  1. Introduction to the webinar and instructor (led by DataCamp TA).
  2. Introduction to the topics.
  3. Discuss the need for data scientists to build web apps.
  4. Introduction to the MoMa dataset and app we will be building.

    Live Training

    Shiny 101

  5. Understand building blocks of a Shiny App (textInput, plotOutput, fluidPage)
  6. Explore reactivity (reactive, reactiveValues)
  7. Build a simple shiny app

    Inputs, Outputs, Layouts

  8. Explore inputs: textInput, numericInput, sliderInput, and selectInput
  9. Explore outputs: renderPlot/plotOutput, renderPlotly/plotlyOutput, renderDT, and DTOutput
  10. Explore layouts: sidebarLayout, sidebarPanel, tabPanel, tabsetPanel
  11. Combine inputs, outputs, and layouts to build a small shiny app.

7 hours

$1,990

Train Emotions

About

Train emotions

This is a course dedicated to training machine learning models on voice files labeled with emotions (angry, disgust, fear, happy, neutral, or surprised), extracted from video files downloaded from YouTube.

Team members

Active members of the team working on this repo include:

  • Luke Lyon (University of Colorado - Boulder, CO) - data scientist
  • Jim Schwoebel (Boston, MA) - advisor

    Meeting times

    We plan to do Slack updates every week at 8 PM EST on Fridays. If we need to do a work session, we will arrange for that.

    Goals / summary of prior models (how they were formed)

    Here are some goals to try to beat with demo projects. Below are some example files that classify various emotions, with their accuracies, standard deviations, model types, and feature embeddings. This will give you a good idea of what to brush up on as you think about new audio and text feature embeddings for models.


7 hours

$1,990

Manta

About

Manta

Manta is a PyTorch-based neural network training library. Manta is powerful and flexible, letting you write only the code you need.

Sample usage:

    model = Model()
    train_loader, valid_loader = get_loaders()
    trainer = ModelTrainer(model, train_loader, valid_loader)
    trainer.fit(epochs=10)

Manta also helps you monitor your training from anywhere. You can use the web interface to keep track of your experiments and visualize their progress.

manta.training.ModelTrainer

A module that makes training models much easier.

    class ModelTrainer():
        def __init__(self, model, train_loader, valid_loader=None, metrics=None,
                     lr=10e-3, optimizer=None, loss_fn=None,
                     save_path="model.bin", reporting=False):

manta.layers

Common-sense modules that make building models easier.

    class GlobalAvgPooling(nn.Module):
        def forward(self, x):
            pass

    class GlobalMaxPooling(nn.Module):
        def forward(self, x):
            pass

    class Upscale(nn.Module):
        def __init__(self, factor=2):
            pass

        def forward(self, x):
            pass


7 hours

$1,990

MNIST Demo

About

MNIST Demo for beginners

The aim was to train and evaluate on the MNIST dataset of 60,000 labeled handwritten digits, an example of a supervised machine learning system. The Anaconda platform is used to set up Keras with a TensorFlow backend, with Jupyter as a frontend to run kernels and notebooks, and TensorBoard to view the scalars and graphs produced by the job.

A Sequential model (layers stacked sequentially) is used, implementing a convolutional neural network (CNN) with the LeNet architecture, written in Python with the Keras deep learning package. It is compiled with a categorical cross-entropy loss function, an accuracy metric, and the Adam optimizer (a method for stochastic optimization). Basic model training was done on the sample set, without any learning-rate annealing or sample synthesis (data generation). The model was fitted in 15 epochs to an accuracy of 99.45% and a loss of 1.61% in under 95 seconds. Better models reaching 99.76% accuracy have also been achieved. A personal attempt reached an accuracy of 98.12%, which was not submitted due to impracticality in application.
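As a minimal illustration of the categorical cross-entropy loss used above (plain Python, not the Keras implementation, with hypothetical data): the loss averages -sum(t_k * log(p_k)) over the samples, so a confident correct prediction contributes little.

```python
import math

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean over samples of -sum_k t_k * log(p_k); eps guards against log(0)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        total -= sum(tk * math.log(max(pk, eps)) for tk, pk in zip(t, p))
    return total / len(y_true)

# Hypothetical one-hot targets over 3 classes and softmax-style predictions.
y_true = [[0, 1, 0], [1, 0, 0]]
y_pred = [[0.05, 0.90, 0.05], [0.80, 0.10, 0.10]]
loss = categorical_cross_entropy(y_true, y_pred)   # (-ln 0.9 - ln 0.8) / 2
```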

Introduction

Machine Learning (ML) is a computer science field focused on implementing algorithms capable of drawing predictive insight from static or dynamic data sources using analytic or probabilistic models and using refinement via training and feedback. It makes use of pattern recognition, artificial intelligence learning methods, and statistical data modeling.


7 hours

$1,990

CountryTrainingKit

About

CountryTrainingKit

WIP: flag training using Core ML

Overview

Objective: investigate machine learning to present flag-game challenges that are appropriate for the specific player. Present a flag challenge to any given player in a gradual progression of difficulty: easy country flags first, and more difficult flags as the user progresses.

Getting Started

CoreML Model

training data: MLData/flatData_.csv

ML Info:

ML Target (outputs):
ML Features (inputs):
Notes about ML Algorithms and Metrics:

Training, Root Mean Square Error:


7 hours

$1,990

CCTV Training Institute In Hyderabad CCTV Course In Hyderabad CCTV Installation Training Institu

About

CCTV-Training-Institute-in-Hyderabad-CCTV-Course-in-Hyderabad-CCTV-Installation-Training-Institu

CCTV training covers the process of watching over a facility under suspicion or an area to be secured. The main part of an electronic surveillance security system consists of CCTV cameras, which act as the eyes of the closed-circuit television system. A system may include cameras for surveillance, a hard disk for recording, and other components. CCTV cameras can be used for remote viewing from distant locations, and some CCTV surveillance systems comprise cameras, networking equipment, monitors, and IP cameras. In this CCTV installation training in Hyderabad, we show how CCTV can detect and record crime through these cameras; the system raises an alarm after receiving a signal from the CCTV cameras connected to the surveillance system. The training is aimed at professionals and students willing to take up CCTV and access-control courses and certifications at a low-voltage security system training institute in Hyderabad, as well as at people who choose to invest in a CCTV surveillance camera installation at their business or home to feel safer and more secure. Although people install CCTV cameras for many reasons, surveillance camera systems help people feel more secure and gain peace of mind. CCTV cameras let people monitor the activities of their business and home remotely when they are away. Whether it is to protect their family, animals, assets, or employees, a CCTV surveillance system can help people feel assured that the things most important to them are safe.
CCTV surveillance system training covers topics such as: how a CCTV camera works; an introduction to CCTV systems; the types of indoor and outdoor CCTV cameras; analog and IP CCTV cameras; CCTV system design; power supplies used in CCTV systems; system requirements when installing a CCTV system; considerations when designing a CCTV system; the components of a CCTV system; types of transmission; types of cable; IP network transmission; different types of video storage; and what DVRs and NVRs are. A digital video closed-circuit television system provides users with superior quality compared to older, outdated analog CCTV systems. In addition, a digital video recorder (DVR) or network video recorder (NVR) provides various new features. Modern security systems offer high resolution (HD), larger hard-disk storage capacity, event search, motion detection, remote accessibility, missing-object detection, tripwire, and many other features. Business owners can install security cameras around multiple business locations and combine them into one fully integrated CCTV surveillance solution using remote technology. With 24/7 surveillance monitoring, in the CCTV camera course we provide a variety of security solutions to professionals and students to help clients of all sizes achieve their security goals. The importance of proper system design can be seen from the fact that when police officers use CCTV footage as evidence in court, it is found to be inadmissible in most cases. While anyone can look at a location and suggest a number of cameras required, only those who have experience in this field, through an understanding of CCTV surveillance systems, can reliably recommend the best CCTV camera system that will give the customer a good picture with good coverage. For example, during a visit to a college there was a CCTV camera at each end of its 30 m corridors.
These cameras had standard wide-angle lenses, which provided face recognition only for the 3.5 m of corridor nearest the cameras. This is the case in 90% of the systems we audit. If pupils came out of a classroom, committed some misdemeanor, and then returned to their classroom, there was no possibility of knowing who did it or who they were. By setting the cameras to view the opposite end of the corridor and carefully choosing lenses, the cameras will provide face recognition for half of the corridor, enabling the whole corridor to be viewed with pupils being recognized.


7 hours

$1,990

Sketch Of Robot Distrdistribute Brochures

About

Sketch_of_robot_distrdistribute_brochures

This is part of the Smart Methods training.

The brochures are distributed by holding them in the left hand and handing them to people with the right hand, as shown in the sketch attached above. The motor to be used is the MG995 high-speed metal-gear dual-ball-bearing servo, for the joints of the upper parts of the Puppy Robot. The reasons for choosing this servo are:

  1. The connection cable is thicker.
  2. Cheap and strong.
  3. Equips high-quality motor.
  4. High resolution.
  5. Accurate positioning.
  6. Fast control response.
  7. Constant and high torque throughout the servo travel range. It equips sophisticated internal circuitry that provides good torque, holding power, and faster updates in response to external forces.
  8. Excellent holding power.
  9. They don't take up much space, since they are packed in a tight, sturdy plastic case that makes them water- and dust-resistant, which is very useful for the robot's joints. Moreover, the servo requirements for operating this robot show that the MG995 is sufficient to fulfill them.

7 hours

$1,990

Rapidminer Training

About

Rapidminer_Training

Collection of RapidMiner processes for training

Classification

ITDS_TextMining
Logistic regression on blog articles; prediction of the gender of the author

Clustering

6 Clustering
Clustering of the IRIS datasets with the following algorithms; performance measured with centroid distance and Davies-Bouldin.
ITDS_Web_TextClustering
k-medoids text clustering of Wikipedia documents


7 hours

$1,990

Face Recognition Attendance System

About

Face-Recognition-Attendance-System

Face recognition was done using OpenCV, with the help of a Haar cascade. After training on the dataset, the program can detect people and add each person's name and a timestamp to a CSV file. The main file is att2.py, which contains the face recognition attendance program. The other files are used for training on the dataset.
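The logging step described above can be sketched as follows (a minimal stand-in, not the actual att2.py code), using Python's csv module to append a recognized name and a timestamp:

```python
import csv
import io
from datetime import datetime

def log_attendance(csvfile, name, when=None):
    """Append a (name, timestamp) row to an open CSV file."""
    when = when or datetime.now()
    csv.writer(csvfile).writerow([name, when.strftime("%Y-%m-%d %H:%M:%S")])

# Demonstrate with an in-memory buffer instead of the real attendance file.
buf = io.StringIO()
log_attendance(buf, "Alice", datetime(2021, 5, 1, 9, 0, 0))
print(buf.getvalue().strip())  # Alice,2021-05-01 09:00:00
```

In the real program the same call would run once per recognized face, with the file opened in append mode.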


7 hours

$1,990

STS Net

About

STS-Net is a training strategy which uses MSE and KLD to distill the optical flow stream. The network can avoid the use of optical flow during testing while achieving high accuracy. We release the testing and training code. We have not yet uploaded all of the code; some of it needs to be modified for better readability. We will add the test model as soon as possible.

Testing script

For RGB stream: python test_single_stream.py --batch_size 1 --n_classes 51 --model resnext --model_depth 101 \

Training script

For STS:

From pretrained Kinetics400:

python STS_train.py --dataset HMDB51 --modality RGB_Flow \

From pretrained checkpoint:

python STS_train.py --dataset HMDB51 --modality RGB_Flow \


7 hours

$1,990

Computer Vision

About

Computer Vision

In this practical project we will solve several computer vision problems using deep models. The goals of this project are to:

  • Develop proficiency in using Keras for training and testing neural nets (NNs).
  • Optimize the parameters and architectures of a dense feed-forward neural network (ffNN) and a convolutional neural net (CNN) for image classification.
  • Build a traffic sign detection algorithm.


7 hours

$1,990

Credit Card Fraud Detection

About

Credit Card Transaction Fraud Detection - Project Overview

Cyber fraud is increasing day by day around the world. More and more people are defrauded online due to the increase in transactions happening via cards and online wallets. It is therefore important to increase security and stop online scammers from looting the masses. Keeping this issue in mind, I have developed and trained a model to detect whether a transaction carried out with a credit card is fraudulent or not. The model achieves about 98% efficiency with Logistic Regression.

About the dataset

  • The dataset comprises both fraudulent and non-fraudulent transactions, with 99% of the transactions being non-fraudulent.
  • The transactions were evenly spread across all amounts, from as low as $2 to as high as $400.
  • The dataset had details of about 200,000 (2 lakh) transactions.

    Processes Involved in the whole Project

  • Importing the dataset as a csv and converting it into a dataframe so as to work with pandas on it.
  • Exploratory Data Analysis(EDA) to understand the data. Some important insights were gathered from this data exploration, which involved:
    • Number of transactions with NA values.
    • Number of fraudulent transactions and number of non fraudulent transactions.
    • Correlations between the various columns of the dataset.
  • Data Visualization to further understand the variations in dataset.
  • Data cleaning. This was done majorly to remove the outliers for a more accurate prediction.
  • Data sampling. This was done to increase the number of fraudulent transactions so that the model learns to identify not only the non-fraudulent ones and is not biased. Various sampling algorithms were used, including:
    • Random UP Sampler
    • Random DOWN Sampler
    • SMOTE
    • Near - Miss
  • Model Training. Data obtained from all the sampling techniques was fit into four different models to check which performs better.

    Training a model

  • Four different models were used for this project. The models were:
    • Logistic Regression
    • Random Forests
    • XG - Boost
    • K-Nearest Neighbours
  • The data obtained from various sampling methods was fit into these models simultaneously and the area under the curve(AUC) was calculated for each one of them.

    Model Performance

    The best performing model was Logistic Regression with the highest area under the curve. The performances for the various models are as follows:

  • Logistic Regression: 0.98
  • Random Forests: 0.97
  • K-Nearest Neighbours: 0.93
  • XG-Boost: 0.97

    Hence, from the above results we can say that logistic regression performs best and can therefore be used for productionization.
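The area under the curve (AUC) used to compare the models above can be computed directly from scores as a rank statistic: it is the probability that a randomly chosen positive (fraudulent) case outscores a randomly chosen negative one. A minimal sketch on hypothetical data (the project itself presumably uses a library routine):

```python
def roc_auc(labels, scores):
    """AUC as a rank statistic: P(random positive outscores random negative)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)         # ties count half
    return wins / (len(pos) * len(neg))

# Hypothetical labels (1 = fraudulent) and model scores.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(auc)  # 0.75
```

An AUC of 0.5 corresponds to random guessing, which is why it is a better yardstick than raw accuracy on a 99%-imbalanced dataset.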

7 hours

$1,990

TrainingVisualizeTool

About

PythonLossAccuracy: a Python tool for visualizing loss and accuracy while training.

1. What does it look like?

2. How to use?

Step1.

Create a canvas object. You can manually set the web server port, canvas size, border size, and so on:

    from utils.web_render import WebRenderer

    canvas = WebRenderer(port=12345, batch_size=batch_size,
                         sample_nums=len(trainloader.dataset),
                         update_per_batches=update_per_batches,
                         total_epoches=total_epoches, mode='auto',
                         blank_size=70, epoch_pixel=30, max_vis_loss=10,
                         canvas_h=500, x_ruler=5, y_ruler=2)
    canvas_t = threading.Thread(target=canvas.start)
    canvas_t.daemon = True
    canvas_t.start()
    atexit.register(program_exit)

Parameters:
total_epoches: maximum number of epochs
blank_size: border size (pixels)
epoch_pixel: how many pixels one epoch occupies on the x-axis
max_vis_loss: maximum loss value to visualize
canvas_h: canvas height (pixels)
x_ruler: number of vertical grid lines per epoch
y_ruler: number of horizontal grid lines on the y-axis

Step2.

Pass the accuracy and loss data (as Python lists) to the canvas object:

    if batch % update_per_batches == 0:
        # for recording
        _, predicted = torch.max(outs.data, 1)
        correct = (predicted == labels).sum().item()
        accs["train"].append(100 * correct / batch_size)
        losses["train"].append(loss.data.item())
        # for visualization
        canvas.updating(accs=accs["train"],
                        losses=losses["train"],
                        show_this=True, mode='train')

Step3.


7 hours

$1,990

Hand Written Digits Classifier

About

Hand-written-digits-classifier

A multilayer perceptron (MLP) neural network used to classify handwritten digits: a feedforward network with backpropagation, trained with the stochastic gradient descent algorithm, without using any machine learning libraries. Since it is a multi-class classification problem, a softmax activation function is implemented for the output layer and a sigmoid activation function for the hidden layers.
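A library-free forward pass of the kind described, with sigmoid hidden units and a softmax output layer, can be sketched as follows (hypothetical toy weights; the course's own network is larger):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    m = max(zs)                          # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def forward(x, W1, b1, W2, b2):
    """Forward pass: sigmoid hidden layer, softmax output layer."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
    logits = [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]
    return softmax(logits)

# Hypothetical toy network: 2 inputs -> 2 hidden units -> 3 classes.
W1 = [[0.5, -0.2], [0.1, 0.3]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0], [0.2, 0.2], [-0.5, 0.9]]
b2 = [0.0, 0.0, 0.0]
probs = forward([1.0, 2.0], W1, b1, W2, b2)
```

The softmax output is a valid probability distribution over the classes, which is what makes it the natural choice for multi-class output layers.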


7 hours

$1,990

Android Students Training Batch 1

About

Android Application Development - Syllabus

Introduction :

Mobile apps are becoming more popular day by day. Today, everyone owns a smartphone, and they do a lot of things with their smartphones such as making payments, ordering groceries, playing games, and chatting with friends and colleagues. There is a huge demand in the market for Android app development. It is Google CEO Sundar Pichai's initiative to train 2 million people to become Android developers, as the platform has a huge need for them. In view of this scenario, and keeping industry needs in mind, APSSDC is offering this Android Application Development FDP so that faculty across engineering colleges in the state of Andhra Pradesh gain app development knowledge and share it with their students.

Hardware Requirements:

An i3 or better processor is required; 8 GB RAM is recommended; good internet connectivity; microphone and speakers for the offline training program.

Duration :

36 Hours (2 hours each day X 18 days)

Workshop Syllabus :

   1. Introduction to Mobile App Development
   2. History of Mobile evolution
   3. Version History of Android 
   4. Android Architecture
   5. Installing the Development Environment
        a. Installation of Android Studio
        b. Installation of Android emulator
        c. Connecting the physical device with the IDE
   6. Creating the first application 
   7. Hello World
   8. Creating a User Interactable App
   9. Hello Toast
  10. Text and Scroll View
  11. Intents
        a. Explicit Intents
        b. Implicit Intents
  12. Activity LifeCycle
  13. User Interface Components
  14. Buttons and Clickable Images
  15. Input Controls
  16. Menus & Pickers
  17. Using Material Design for UI
  18. User Navigation
        a. Navigation Drawer 
        b. Navigation Components
              i. Navigation Graph
             ii. Navigation Host
            iii. Navigation Controller
        c. Ancestral and Back Navigation
        d. Lateral Navigation 
              i. Tabs for navigation
  19. RecyclerView and DiffUtil
  20. Working in the background
  21. Fetching JSON Data from the internet using retrofit GET.
        a. Discussion of various JSON Converters.
        b. Writing data to the api using retrofit POST.
  22. Broadcast Receivers
  23. Schedulers
        a. Notifications
        b. WorkManager
  24. Saving user Data
        a. ViewModel
        b. LiveData
        c. SharedPreferences
        d. Room Persistence Library.

Course Objectives :

Entry Requirements :

Eligibility :

Mode Of Training :


7 hours

$1,990

Fairscale

About

fairscale

fairscale is a PyTorch extension library for high performance and large scale training. fairscale supports:

  • pipeline parallelism (fairscale.nn.Pipe)
  • tensor parallelism (fairscale.nn.model_parallel)
  • optimizer state sharding (fairscale.optim.oss)

    Examples

    Run a 4-layer model on 2 GPUs. The first two layers run on cuda:0 and the next two layers run on cuda:1.

        import torch
        import fairscale

        model = torch.nn.Sequential(a, b, c, d)
        model = fairscale.nn.Pipe(model, balance=[2, 2], devices=[0, 1], chunks=8)

    Requirements

  • PyTorch >= 1.4

7 hours

$1,990

WideResNet MNIST Adversarial Training

About

WideResNet_MNIST_Adversarial_Training

WideResNet implementation on MNIST dataset. FGSM and PGD adversarial attacks on standard training, PGD adversarial training, and Feature Scattering adversarial training.

Executing Program

For standard training and PGD adversarial training, use the corresponding script. It automatically executes main.py with additional arguments such as the number of iterations, the epsilon value, the maximum iterations for the attack, and the step size of each attack; after training the model, it runs FGSM and PGD attacks on it. For feature-scattering-based adversarial training, use the corresponding script, which does the same as the previous one but implements feature-scattering-based adversarial training.
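For reference, the FGSM attack run by these scripts perturbs each input in the direction of the sign of the loss gradient. A minimal sketch in plain Python on hypothetical pixel values (the repository's implementation operates on PyTorch tensors):

```python
def fgsm_step(x, grad, eps):
    """One FGSM perturbation: x_adv = x + eps * sign(grad), clipped to [0, 1]."""
    sign = lambda g: (g > 0) - (g < 0)
    return [min(1.0, max(0.0, xi + eps * sign(gi))) for xi, gi in zip(x, grad)]

# Hypothetical pixel values and loss gradient.
x = [0.2, 0.5, 0.99]
grad = [0.3, -0.7, 0.4]
x_adv = fgsm_step(x, grad, eps=0.1)
```

PGD repeats this step several times with a smaller step size, re-projecting onto the epsilon-ball around the original input after each step.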


7 hours

$1,990

Training Developer Src

About

Confluent Apache Kafka for Developers Course

This is the source code accompanying the Confluent Apache Kafka for Developers course. It is organized into two subfolders, solution and labs. The former contains the complete sample solution for each exercise, whilst the latter contains the scaffolding for each exercise and is meant to be used and elaborated on by the students during the hands-on sessions.


7 hours

$1,990

House Price Prediction System

About

House Price Prediction System Documentation

This project is based on machine learning: we predict the price of a house from a dataset, training models by following methodologies such as EDA (Exploratory Data Analysis) and the SEMMA (Sample, Explore, Modify, Model, Assess) process. The model takes several factors such as lot size, number of rooms, floors, and bathrooms, and predicts the price in USD with 95% accuracy. You can use it here: 13.232.31.236


7 hours

$1,990

Basic of Mixer

About

Mixer is a small and adaptive program for training and testing sentence classification models, inspired by Cloud AutoML. It is completely offline, and the speed of loading the dataset and the training time depend directly on the performance of the computer.

Language Independence

Mixer is a multilingual, i.e. language-independent, program. All scripts that UTF-8 supports are also supported by Mixer.

Content

  • How mixer works
  • Process flow diagram
  • Requirements

 


7 hours

$1,990

TrainingBud IOS

About

TrainingBud Duo App

This is an iOS application that brings together people, motivated or not, who want to be active. In other words, it is for people who want to do a specific sport or activity but do not have a buddy to train with, or are not committed enough to join an organization. Using our application, the user should be able to find a training session based on their preferences, join a group or duo session, and add or find a training buddy.


7 hours

$1,990

Statistical Model Implementer

About

statistical-model-implementer

A library (if I do push it to PyPI) that takes in training and test datasets, applies statistical models, calculates metrics, and reports the best-performing model.

Example: using the Wisconsin cancer data that I had analysed earlier.

The train and test set are used as inputs for running the Implementer.

Models Used:

  1. Logistic Regression
  2. Decision Tree Classifier
  3. Support Vector Classifier
  4. K Neighbors Classifier
  5. Random Forest Classifier
  6. Adaboost Classifier

    Metrics used:

  1. Classification Report
  2. Accuracy Score
  3. Confusion Matrix

All other metrics that take y_test and predicted Y value as input.
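Two of the metrics listed, accuracy score and confusion matrix, can be sketched without libraries (hypothetical data; plain-Python stand-ins for the library calls the implementer would make):

```python
def accuracy_score(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

# Hypothetical binary predictions.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
acc = accuracy_score(y_true, y_pred)      # 0.8
cm = confusion_matrix(y_true, y_pred, 2)  # [[2, 0], [1, 2]]
```

Any metric with the signature (y_test, y_predicted) plugs into the implementer the same way.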

To Implement:

  1. KNN algorithm: need a way to find the optimal K value and then use it as the k_neighbors value.
  2. Add a way for the user to set the random state and other input parameters for the statistical models.
  3. Keep on adding to this list.

7 hours

$1,990

Epam Xt Net Web

About

Repository for Epam External Training Tasks

Task 1.1. The Magnificent Ten

  1. Calculate the area of a rectangle with sides A and B.
  2. Display an image of a right triangle with a height of N lines.
  3. Display an image of an isosceles triangle with a height of N lines.
  4. Display an image of a Christmas tree consisting of N isosceles triangles.
  5. Calculate the sum of all natural numbers from 1 to 1000 that are multiples of 3 or 5.
  6. Write a program to store text formatting options (bold, italic, underline and their combinations).
  7. Write a program that generates a random array, sorts it, and displays the maximum and minimum elements.
  8. Write a program that replaces all positive elements in a three-dimensional array with zeros.
  9. Write a program that determines the sum of the non-negative elements in a one-dimensional array.
  10. Determine the sum of all elements of a two-dimensional array that are in even positions.
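For instance, the task of summing all natural numbers from 1 to 1000 that are multiples of 3 or 5 has a one-line solution (sketched here in Python, though the training's own tasks may target another language):

```python
# Sum of the natural numbers from 1 to 1000 divisible by 3 or by 5.
total = sum(n for n in range(1, 1001) if n % 3 == 0 or n % 5 == 0)
print(total)  # 234168
```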

Task 1.2. String, Not Sting

  1. Write a program that determines the average word length in the entered text string.
  2. Write a program that doubles, in the first input string, all the characters that belong to the second input string.
  3. Write a program that counts the number of words starting with a lowercase letter.
  4. Write a program that replaces the first letter of the first word in a sentence with a capital letter.
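The average-word-length task above can be sketched as follows (in Python; the course's own tasks may target another language):

```python
def average_word_length(text):
    """Average word length in the entered text string (0.0 for empty input)."""
    words = text.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0

print(average_word_length("the quick brown fox"))  # 4.0
```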

Task 2.1. OOP

Write your own class that describes the string as an array of characters. Describe the classes of geometric shapes. Implement your own editor that interacts with rings, circles, rectangles, squares, triangles and lines.

Task 2.2. Game Development

Create a class hierarchy and define methods for a computer game. Try making a playable version of your project.

Task 3.1. Weakest Text

There are N people in the circle, numbered from 1 to N. Every second person is crossed out in each round until there is one left. Create a program that simulates this process. For each word in a given text, indicate how many times it occurs.
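The first exercise above is the classic Josephus problem with every second person eliminated; a direct simulation can be sketched in Python (the course's own tasks may target another language):

```python
def last_standing(n):
    """Simulate a circle of n people where every second person is crossed out."""
    people = list(range(1, n + 1))
    i = 0                                # start counting from person 1
    while len(people) > 1:
        i = (i + 1) % len(people)        # step over one person to the next
        people.pop(i)                    # cross that person out
    return people[0]

print(last_standing(5))  # 3
```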

Task 3.2. Dynamic Array

Based on the array, implement your own DynamicArray generic class, which is an array with a margin that stores objects of random types.
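A minimal Python sketch of such a dynamic array (the task itself asks for a generic class in the course's own language, so the details will differ):

```python
class DynamicArray:
    """Array with spare capacity that doubles its backing store when full."""

    def __init__(self, capacity=4):
        self._items = [None] * capacity
        self._size = 0

    def append(self, item):
        if self._size == len(self._items):               # full: double capacity
            self._items.extend([None] * len(self._items))
        self._items[self._size] = item
        self._size += 1

    def __len__(self):
        return self._size

    def __getitem__(self, index):
        if not 0 <= index < self._size:
            raise IndexError(index)
        return self._items[index]

arr = DynamicArray(capacity=2)
for value in ("a", 1, 3.5):   # stores objects of mixed types
    arr.append(value)
```

Doubling the capacity gives amortized constant-time appends, which is the point of keeping a margin.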

Task 3.3. Pizza Time

Extend the array of numbers with a method that performs actions with each specific element. The action must be passed to the method using a Delegate. Extend the String with a method that checks what language the word is written in the given string. Simulate the work of a pizzeria in your application. The user and the pizzeria interact through the pizza order. The user places an order and waits for a notification that the pizza is ready. The peculiarity of your pizzeria is that you do not store customer data.

Task 4.1. Files

There is a folder with files. For all text files located in this folder or subfolders, save the history of changes with the ability to roll back the state to any moment.


7 hours

$1,990

Discover PyHessian

About

PyHessian is a pytorch library for Hessian based analysis of neural network models. The library enables computing the following metrics:

  • Top Hessian eigenvalues
  • The trace of the Hessian matrix
  • The full Hessian Eigenvalues Spectral Density (ESD)
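For intuition, the Hessian trace can be estimated matrix-free with Hutchinson's method, the randomized estimator that PyHessian builds on: average v^T H v over random +/-1 vectors v. A small self-contained sketch with an explicit toy matrix standing in for a real Hessian-vector product:

```python
import random

def hutchinson_trace(matvec, dim, n_samples=2000, seed=0):
    """Estimate tr(H) as the average of v^T H v over random +/-1 vectors v."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(dim)]
        hv = matvec(v)                    # only H*v products are needed, never H
        total += sum(vi * hvi for vi, hvi in zip(v, hv))
    return total / n_samples

# Toy symmetric 3x3 "Hessian" with trace 1 + 2 + 3 = 6.
H = [[1.0, 0.2, 0.0],
     [0.2, 2.0, 0.1],
     [0.0, 0.1, 3.0]]
matvec = lambda v: [sum(h * vi for h, vi in zip(row, v)) for row in H]
estimate = hutchinson_trace(matvec, dim=3)
```

For a neural network, the matvec would be a Hessian-vector product computed by automatic differentiation, so the full Hessian is never formed.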

Content

  • Usage
  • Installation
  • Examples

 


7 hours

$1,990

Allwize Training

About

AllWize Training Environment

This course contains a training environment for workshops using two different approaches:

  • A Vagrant definition file for a VirtualBox machine
  • A docker-compose file to run a Docker stack

    The Vagrant machine will be reachable at 192.168.42.10; the Docker container will expose its ports directly to the host machine. None of the services have any security defined. This environment is not meant for production or for machines exposed to the Internet!

    Service: port
    Mosquitto: 1883
    NodeRED: 1880
    InfluxDB: 8086
    Grafana: 3000

7 hours

$1,990

Dojo Web Master

About

Dojo Web

Coding Dojo - code and programming training local

Overview

Coding Dojo is a safe environment for testing new ideas, promoting networking and sharing ideas among team members. It is very common for companies to promote open Dojos. In this way the company can meet professionals who can adapt to its environment and professionals also have the opportunity to know the environment of these companies.

Formats

Kata: In this format there is the figure of the presenter, who must demonstrate a ready-made solution, developed in advance. The objective is for all participants to be able to reproduce the solution and achieve the same result; interruptions to resolve doubts are allowed at any time.

Randori: In this format, everyone participates. A problem is proposed and the programming is carried out on a single machine, in pairs. For this format, the use of TDD and baby steps is essential. The person coding is the pilot, and their partner is the co-pilot. Every five minutes the pilot returns to the audience, the co-pilot assumes the role of pilot, and a person from the audience takes the position of co-pilot. Interruptions are only allowed when all tests are green. It is the pair who decide what will be done to solve the problem, and everyone must understand the solution, which must be explained by the pilot and the co-pilot at the end of their implementation cycle.

Kake: A format similar to Randori, but with several pairs working simultaneously. Each round the pairs are exchanged, promoting integration among all participants of the event. In this format, more advanced knowledge from the participants is necessary.


7 hours

$1,990

PythonFundamentals

About

Python Fun(damentals)

Course TOC

The course is built to be taught in a structured order:

  1. Python 3 Setup
  2. Python 3 Core
  3. Python 3 Types
  4. Python 3 Variables
  5. Python 3 Flow Control
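A minimal sketch touching the core topics above (types, variables, flow control); this example is mine, not taken from the course materials:

```python
# Variables and core types
name = "Ada"          # str
count = 3             # int
ratio = count / 2     # float (division yields a float in Python 3)

# Flow control: branching inside a loop
greetings = []
for i in range(count):
    if i % 2 == 0:
        greetings.append(f"Hello, {name}! ({i})")
    else:
        greetings.append(f"Hi again ({i})")

print(greetings)
```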

Course Introduction

This course was designed in our free time for team members who are not already studying basic programming principles. It introduces a few concepts that are key to understanding the Python programming language and its syntax. Many lessons come to a seasoned programmer only with time; this course is meant to arm a programmer with the thought process behind how logic works in the programming world, and the basic concepts needed to apply that thought process to get Python to do the job.


7 hours

$1,990

PythonBoto3Training

About

PythonBoto3Training

Two Python 3.x modules, that use Boto3, suitable for use in introductory / intermediate training

Overview

I developed two Python 3.x modules suitable for use in introductory to intermediate level training. Both use Boto3 to interact with the AWS S3 service.
The module s3_list.py is suitable for short, entry-level training. This module returns a list of the S3 buckets that belong to an AWS account. I have found this module ideal for entry-level training sessions lasting a day or less.

The module S3_man.py is suitable for longer, intermediate-level training. It imports (i.e., includes) the s3_list.py module and supports a few basic S3 operations (e.g., list the objects in a bucket, create a bucket, delete a bucket).

Both modules work across the Boto3 Resource and the Boto3 Client API sets. Additionally, both can be run using the default profile contained in the /.aws/credentials file (i.e., created during installation of the AWS CLI) or using an IAM user of your choice. To execute a module's functionality using an IAM user of your choice, you supply the AWS IAM access key id and the AWS IAM secret access key as parameters to a function call or a command line operation. If you do so, you also have the option of specifying which AWS regional endpoint will be used when communicating with the S3 service.

Outside of Boto3, both modules only make use of modules from the Python 3.x Standard Library. Finally, both modules can be run as a stand-alone script or imported by another module.
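As an illustration of the kind of code s3_list.py covers, here is a minimal sketch of listing bucket names through the Boto3 client API (the function name is mine; in real use the client would come from boto3.client("s3")):

```python
def list_bucket_names(s3_client):
    """Return the names of all S3 buckets visible to the given client.

    `s3_client` is expected to follow the Boto3 S3 client API, i.e. the
    object returned by boto3.client("s3"); it is passed in as a parameter
    so the function can also be exercised with a stub in tests.
    """
    response = s3_client.list_buckets()
    return [bucket["Name"] for bucket in response.get("Buckets", [])]
```

In the real modules, an IAM access key id and secret access key could be supplied when creating the client instead of relying on the default profile.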


7 hours

$1,990

SOMeSolution

About

SOMeSolution

An iteratively developed approach to the problem of fast SOM training. Will work towards the implementation of the HPSOM algorithm described by Liu et al.

C++ install

To compile the code to a library:

    cd ~/SOMeSolution/src/C++
    make

The static library will be in bin/somesolution.a. To compile the code to a command-line executable:

    cd ~/SOMeSolution/src/C++
    make build

Commandline Usage

Through the command line you can pass different flags and optional arguments.

    Positional Arguments    Description
    WIDTH HEIGHT            Sets the width and height of the SOM

    Flag                    Description

Example: the following makes a 10 x 10 SOM that generates its own training data with 100 features and 100 dimensions, and writes the result to trained_data.txt:

    somesolution 10 10 -g 100 100 -o trained_data.txt

Python Visualization

To visualize a SOM weights file produced by the commandline executable, simply run: python som.py -i weights.txt -d


7 hours

$1,990

NEXT SparseEventID

About

NEXT Sparse Event Identification

This course contains some networks to do signal/background separation. The initial push will contain networks based on a Residual architecture in both dense and sparse implementations.

Dependencies

Eventually, I want to add PointNet, PointNet++, and DGCNN (Edgeconvs for graph networks)


7 hours

$1,990

PitchPerfect

About

PitchPerfect

Open source resources for SLP and vocal training. The aim of this software is to help develop cross-platform tools for analyzing speech, with a focus on providing real-time feedback for those involved in vocal therapy, either as patients or practitioners. This is a work in progress, presently very limited in function and user-friendliness, as these are tools that I am personally using in my own vocal training.

The software is written in Python 3 and has dependencies on PyQt4, pyqtgraph, numpy, scipy, and pyaudio. It should be cross-platform, but it is being developed on Linux.

The record and playback functions have now been unified in the pitch_perfect.py application. Upon launch you will be able to choose a file for playback or a file for recording. Once playback or recording has begun, you may click stop to end the operation.
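As a rough illustration of the kind of real-time analysis involved (this is my toy sketch, not taken from the PitchPerfect code), a crude pitch estimate can be computed by counting zero crossings:

```python
import math

def estimate_frequency(samples, sample_rate):
    """Rough pitch estimate by counting upward zero crossings.

    Each period of a pure tone crosses zero upward exactly once, so
    crossings / duration approximates the frequency.  Real vocal
    analysis would use autocorrelation or a similar method.
    """
    crossings = sum(
        1 for prev, cur in zip(samples, samples[1:]) if prev < 0 <= cur
    )
    duration = len(samples) / sample_rate
    return crossings / duration

# One second of a 440 Hz sine sampled at 44.1 kHz
rate = 44100
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
```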


7 hours

$1,990

Discover Face Extraction

About

This code uses the Haar Cascade Classifier to detect faces in a video feed (a webcam is used here) and extracts 100 training samples. The training samples and raw images are stored in a folder named Training on the C: drive. The code has been tested on the following configuration:

  1. Windows 7 Professional (64-bit)
  2. OpenCV 2.4.2
  3. Visual Studio 2012 Ultimate

The build configuration for the project in Visual Studio was x64 (Release). For users using OpenCV for the first time in a Visual Studio project, a custom property sheet, OpenCV.props, has been provided. WARNING: This property sheet can be used only if:

  1. You have the OpenCV installation path as: C:\OpenCV-2.4.2\opencv
  2. You are using a 64-bit Windows 7 installation

7 hours

$1,990

Generalized Regression Neural Networks Library From Scratch

About

Generalized Regression Neural Networks (GRNN)

Generalized regression neural network (GRNN) is a variation of radial basis neural networks, suggested by D. F. Specht in 1991. GRNN can be used for regression, prediction, and classification, and can also be a good solution for online dynamical systems. GRNN represents an improved technique for neural networks based on nonparametric regression. The idea is that every training sample will represent a mean to a radial basis neuron. [1] GRNN is a feed-forward ANN model consisting of four layers: input layer, pattern layer, summation layer and output layer. Unlike backpropagation ANNs, iterative training is not required. Each layer in the structure consists of different numbers of neurons and the layers are connected to the next layer in turn. [2]

  • In the first layer, the input layer, the number of neurons is equal to the number of properties of the data.[3]

  • In the pattern layer, the number of neurons is equal to the number of samples in the training set. The neurons in this layer calculate the distances between the training data and the test data, pass the results through the radial basis function (activation function) with the σ value, and obtain the weight values.[3]

  • The summation layer has two subparts one is Numerator part and another one is Denominator part. Numerator part contains summation of the multiplication of training output data and activation function output (weight values). Denominator is the summation of all weight values. This layer feeds both the Numerator & Denominator to the next output layer.[3]

  • The output layer contains one neuron which calculate the output by dividing the numerator part of the Summation layer by the denominator part.[3]

Figure: The general structure of GRNN [3]
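The four layers described above reduce to a short computation; here is a minimal sketch (the function name is mine), using a Gaussian radial basis function with spread σ:

```python
import math

def grnn_predict(x, train_x, train_y, sigma):
    """GRNN estimate at point x from training pairs (train_x, train_y).

    Pattern layer: one Gaussian weight per training sample.
    Summation layer: numerator = sum(y_i * w_i), denominator = sum(w_i).
    Output layer: numerator / denominator.
    """
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
        for xi in train_x
    ]
    numerator = sum(w * y for w, y in zip(weights, train_y))
    denominator = sum(weights)
    return numerator / denominator
```

Note that there is no iterative training: the training samples are the network, and only σ needs tuning.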

Training Procedure

The training procedure is to find the optimum value of σ. Best practice is to find the position where the MSE (Mean Squared Error) is minimum. First divide the whole sample into two parts: a training sample and a test sample. Apply GRNN to the test data based on the training data and find the MSE for different values of σ. Then find the minimum MSE and the corresponding value of σ. [3]

Advantages of GRNN

  • The main advantage of GRNN is that it speeds up the training process, which helps the network to be trained faster.
  • The network is able to learn from the training data in a single pass, in a fraction of the time it takes to train standard feed-forward networks.
  • The spread, Sigma (σ), is the only free parameter in the network, and can often be identified by V-fold or split-sample cross-validation.
  • Unlike standard feed-forward networks, GRNN estimation always converges to a global solution and won't be trapped in a local minimum. [3]

    Disadvantages of GRNN

  • Its size can be huge, which would make it computationally expensive. [4]

Example

Retrieved from [3]

Resources


7 hours

$1,990

Movie Title Generator

About

movie-title-generator

I wanted to learn how to use and train a [Transformer][0] (in a [pytorch][1] environment). This is my (not so serious) attempt at it. I collected a dataset of about 150k movie titles (English, plus other languages as well), alongside their IMDB ratings. The objective was to generate a random movie title conditioned on the input rating (i.e. a lower rating should produce a movie title that, had it existed, would have gotten a bad rating on IMDB). The resulting language model models the following probabilities:

P(token1 | [rating])
P(token2 | [rating] token1)
P(token3 | [rating] token1 token2)
...
P(tokenN | [rating] token1 token2 ...)

I'm not uploading the dataset here, but I've uploaded the model weights so you can try to generate titles on your machine.

Model architecture and training

The encoder/decoder architecture was dispensed with entirely by just using a stack of 6 transformer encoder layers. Ratings and tokens use different embeddings to keep the concepts separate within the neural network. The text is tokenized using byte-pair encoding ([sentencepiece][2]); the BPE model was trained on the dataset.
The training happens in an unsupervised fashion, using cross-entropy loss, teacher forcing, and the Noam optimizer.
Practically, the model learns to predict the next token given the previous context (rating + tokens). The uploaded pretrained model was trained with batch size = 512, d_model = 128, n_head = 4, dim_feedforward = 512 and 6 stacked transformer layers.
There isn't a proper reason behind the choice of these values; I just wanted to train it as fast as I could and still get "good" results.
I stopped the training at epoch 1120 with an average loss per batch of roughly 3.13.
Since the loss is still far from good, don't expect too much from this pretrained model.
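Multinomial sampling with temperature, as used to generate the example titles, can be sketched like this (my illustration, not the repository's eval.py):

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8, rng=random):
    """Sample an index from unnormalized logits.

    Lower temperatures sharpen the distribution (more deterministic
    titles); higher temperatures flatten it (more random titles).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```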

Examples

Multinomial sampling with temperature = 0.8 (no top_k or top_p sampling used/implemented):

$ python3 eval.py --samples 10 4.5
Tall Wave
Yamamama
Wild Dr. Bay
The Witch We Getting Well Lords
The Secret of War
Un napriso tis amigos
The Lonesomes of Destrada
Don't Kill
The Black Curse of Saghban
Ghosts of the Skateboard
$ python3 eval.py --samples 10 7.8
The Crazy
East Angel
Time Spaceship
Una noche tu vida de Sabra
Waver, Paradel
The Scarecrow
Unearthed Encounter
Terror of the End
To Best of Those West
You Are Ends
Have fun!


7 hours

$1,990

PyPersonalTrainer

About

PyPersonalTrainer

A personal training analyzer based on Python and Excel. As the PolarPersonalTrainer webpage will be shut down by 31.12.2019, older Polar fitness and GPS watches will be deprecated. This software enables the continued use of these older devices and creates relevant running training information in the form of Excel worksheets. The software imports training data in the form of HRM and GPX files, creates a training session worksheet for each training, and adds the training to an overview sheet.

Introduction

This software aims to provide similar (of course restricted) functionality to the PolarPersonalTrainer website by creating Excel sheets with a similar look. For each training session, an individual Excel workbook is created which contains the following information:

  1. A detailed overview table with
    1. start date
    2. start time
    3. duration
    4. distance
    5. pace
    6. average heart rate
    7. maximum heart rate
    8. Polar running index
    9. Type of Training
  2. A figure with the heart rate and speed over time
  3. A table with time and percentage in zones of the puls rates
  4. A table with the autolaps of the watch, taken each 1 km
  5. A table with userlaps, if recorded

    The filename of the individual Excel sheet is derived from the filenames of the hrm and gpx files. An example can be found under "19010601.xlsx". The heart rate, pace, speed, and altitude versus time data is stored in the same workbook in worksheet 'Data' for further analysis.
    The software then adds the information from the overview table to a separate Excel workbook which gives an overview of the yearly training. It consists of a first sheet giving an overview of total duration, total distance, and number of training sessions for each month, plus one detailed overview table per month.
    The software also tries to detect whether a training session was of type interval by analyzing variation in heart rate and speed, and then adds this information to the worksheet 'Intervals'. An example of the yearly overview Excel workbook can be found under "Overview_2019.xlsx".
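As an example of one derived field in the overview table, pace can be computed from duration and distance (a hypothetical helper of mine, not the project's code):

```python
def pace_per_km(duration_seconds, distance_km):
    """Return running pace as 'M:SS' minutes per kilometre."""
    seconds_per_km = duration_seconds / distance_km
    minutes, secs = divmod(round(seconds_per_km), 60)
    return f"{minutes}:{secs:02d}"
```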

    Requirements

    The software was tested with training data from a Polar RC3 GPS with HRM version 1.06.
    Python 3.5.1 or higher
    argparse
    openpyxl
    gpxpy
    geopy
    datetime


7 hours

$1,990

Wp Cli

About

WP-CLI

Description

In this lesson, you'll learn how to use WP-CLI: what it is, when you should use it, and how it helps you in your WordPress development.

Objectives

After completing this lesson, you will be able to:

  • Install WordPress using commands.
  • Specify the WordPress version during installation.
  • Install themes and plugins using commands.
  • Activate themes and plugins.
  • Create backups and set up WordPress in a few minutes.

    Target Audience

    Who is this lesson intended for? What interests/skills would they bring? Choose all that apply.

  • [ ] Users
  • [ ] Designers
  • [X] Developers
  • [ ] Speakers
  • [ ] All

    Experience Level


7 hours

$1,990

Starnet

About

starnet

Star net is a multidimensional ICT and security systems company, founded out of the need to ensure that office and home automation for both small and large clients is not frustrated by non-competent firms. We specialize mainly in ICT services, ICT training, ICT consultancy and security installations.

OUR CORE SERVICES

  • Bulk SMS: we offer the best bulk SMS platform, with API integration and a fast delivery system for different telecommunication networks.
  • S.E.O.: Search Engine Optimization is an important part of any successful local marketing strategy.
  • Social Media Marketing: we are available to help you publicize your product, company or event on all the social networks.
  • Local Search Strategy: maximize your presence on search engine results pages on a local scale.
  • Website Design: our team specializes in affordable web design using different tools on different platforms.
  • Custom Email Design: custom email templates that speak to your customers and resonate with your brand.
  • Graphics Design: our team specializes in affordable graphics design such as logos, banners, flyers etc., using different tools on different platforms.


7 hours

$1,990

Meta Apo

About

Meta-Apo

Contents

Introduction

Meta-Apo (Metagenomic Apochromat) calibrates the predicted gene profiles from 16S-amplicon sequences using an optimized machine-learning-based algorithm and a small number of paired WGS-amplicon samples for model training, thus producing diversity patterns that are much more consistent between amplicon- and WGS-based strategies (Fig. 1). Meta-Apo takes the functional gene profiles of a small number (e.g. 15) of WGS-amplicon sample pairs as training, and outputs the calibrated functional profiles of large-scale (e.g. > 1,000) amplicon samples. Currently Meta-Apo requires functional gene profiles to be annotated using KEGG Ontology.

Fig. 1. Calibration of predicted functional profiles of microbiome amplicon samples by a small number of amplicon-WGS sample pairs for training.

System Requirement and dependency

Hardware Requirements

Meta-Apo only requires a standard computer with >1GB RAM to support the operations defined by a user.

Software Requirements

Meta-Apo only requires a C++ compiler (e.g. g++) to build the source code.


7 hours

$1,990

Training And Placement Website

About

Training-and-Placement-Website

Training and Placement Cell (TPC) is a complete management and information system which provides up-to-date information on all the students in a particular college. TPC helps colleges overcome the difficulty of keeping records of hundreds and thousands of students and of searching the whole database for students eligible under a recruitment criterion. It helps in the effective and timely utilization of hardware and software resources.

The home page contains various links, such as links to log in and to services like events, achievements and recruiter details. The administrator creates the users, and the users use the accounts created by the administrator. When a user enters his respective page he can update his details, and the details are to be approved by the administrator. All users have some common services, such as changing their password, updating details, searching for details, checking details, mailing the administrator, and, if the user is a student, reading the material uploaded by the admin. The administrator can add events and achievements and reply to the mails sent by users. He can upload materials, search for student details, and approve students.

This package was developed on the Windows platform. The programming language used is JSP with a three-tier architecture. Oracle 8i is used as the backend database.


7 hours

$1,990

Tuxcap

About

TuxCap

TuxCap is a program for buffering a series of photos and capturing the time before, during and after some trigger event, using a Raspberry Pi and a USB webcam. The captures can be stored as a folder of JPEGs or as an MP4 video, depending on your size limitations and quality requirements. The command line interface is basic, and accepts the following commands:

  • h, help, ?: Show Help
  • q: quit
  • show: Show number of images in buffer
  • cap: save the current buffer to disk when the time comes

This was developed at the request of the University of Cape Town for a penguin conservancy project, in order to provide a large amount of photo data on penguins. With some adaptation it could be useful for any situation where you have a trigger line and need to capture buffered still images.
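The pre-trigger buffering can be sketched with a fixed-size ring buffer (an illustration of the idea only; the real program stores OpenCV frames):

```python
from collections import deque

class FrameBuffer:
    """Keep only the most recent `size` frames, so that when a trigger
    fires, the moments leading up to it are still available."""

    def __init__(self, size):
        self.frames = deque(maxlen=size)  # old frames fall off automatically

    def add(self, frame):
        self.frames.append(frame)

    def capture(self):
        """Snapshot of the buffered frames, oldest first."""
        return list(self.frames)
```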

    Dependencies and Installation

    TuxCap is written for Python 3. Requirements are updated and can be found in the requirements.txt file. It depends on OpenCV for camera handling and on NumPy because OpenCV depends on it. If you intend to create video captures, ffmpeg should be installed. You can install all dependencies on x86-64 Debian with:

        pip3 install -r requirements.txt
        sudo apt install ffmpeg

    On Raspbian, dependencies are not all available through pip. Instead, run the following to install the relevant packages:

        pip3 install opencv-python
        sudo apt install ffmpeg libatlas3-base libcblas3 libjasper1 libqt4-test libgstreamer1.0-0 libqt4-dev-bin libilmbase12 libopenexr-dev rpi.gpio

    You may need to add the user running the program to the video group, using usermod -aG video


7 hours

$1,990

Aldohonen

About

aldohonen

This is a simple tool to visualize a Kohonen network with color training. Colors represent neuron weights, so when training with a constant input color set, the network learns from this pattern and self-organizes to represent it. The more iterations executed, the more the network will look like the input.
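A single, much simplified training step of such a network can be sketched as follows (my illustration with the neighbourhood update omitted, not the aldohonen code):

```python
def train_step(weights, color, learning_rate=0.1):
    """One simplified Kohonen training step on a list of RGB weights.

    Finds the best matching unit (the neuron whose weight vector is
    closest to the input color) and moves its weight toward the input.
    The neighbourhood update of a full SOM is omitted for brevity.
    """
    bmu = min(
        range(len(weights)),
        key=lambda i: sum((w - c) ** 2 for w, c in zip(weights[i], color)),
    )
    weights[bmu] = tuple(
        w + learning_rate * (c - w) for w, c in zip(weights[bmu], color)
    )
    return bmu
```

Repeating this step with a constant color set is what drives the weights, and therefore the displayed colors, toward the input pattern.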

Authors

Luiz Eduardo Pizzinatto & Bruno Martins Crocomo

Execution

python Main.py

Examples

Below are four screenshots from a simple training process. Starting with a random image, every training step (activate and backward) makes the network more organized, culminating in organized colors (i.e. organized neurons).

Infos

  • pybrain works only with numpy 1.11.0.
  • This is a single thread approach.

7 hours

$1,990

TrainingEnvironment

About

This repo contains the environment used for the Datadog Training platform. Each course has its own environment and you can find directories for each supported platform. More details about configuring the environment can be found in the specific directories.

    Course                         Platform        Directory
    Datadog 101                    Vagrant         Datadog101
    Introduction to APM            Docker Compose  APM
    Introduction to Logs           Docker Compose  LogsIntro
    Autodiscovery with Kubernetes  MiniKube        k8sautodiscovery


7 hours

$1,990

Torchtrainers

About

Torchtrainers

The torchtrainers library is a small library for helping train DL models in PyTorch. It helps with setting up training and optimizers, keeping track of losses and metrics, and running learning rate schedules. See the accompanying notebook for an example of how to use the library.


7 hours

$1,990

Ccpbiosimbase

About

The CCPBioSim base container

This container forms the basis of our cloud based training platform. It is designed to be very minimal such that it only provides basic system utilities and tools along with the JupyterHub server for serving multiuser Jupyter notebooks. This particular container does not contain any Jupyter based training material, but simply sets up and configures the JupyterHub server and a number of basic system utilities that will enable this container to function as a reliable base container for specific workshop courses.


7 hours

$1,990

Swirl Courses

About

swirl courses

This is a collection of interactive courses for use with the swirl R package. You'll find instructions for installing courses further down on this page. Some courses are still in development and we'd love to hear any suggestions you have as you work through them.


7 hours

$1,990

Managing Spam On A Site

About

Managing Spam On A Site

Description

Learn why spam is a problem for all WordPress sites, why and how you should control it, and pick up tips to manage it.

Objectives

  • Students will understand the problems that spam comments can cause on a site, as well as acquire the skills to change site settings to control spam.

    Prerequisite Skills

  • Understanding the WordPress Admin panel and how to navigate the admin menus.
  • Understanding of installing and activating plug-ins on a self-hosted WordPress website.

    Readiness Questions

  • Do students have the skills to navigate through the admin panels and change basic settings?
  • Do students have the skills to install and activate a plugin?

    Target Audience

    Who is this lesson intended for? What interests/skills would they bring? Choose all that apply.

  • [x] Users
  • [ ] Designers
  • [ ] Developers
  • [ ] Speakers
  • [ ] All

    Experience Level


7 hours

$1,990

TheBoyOfSilence

About

TheBoyOfSilence

Sentiment analysis tool for Tweets based on a keyword, which can convert unstructured Tweet data into a structured .csv file that can be used as input for training supervised machine learning models.

Important notes:
  1. WIP - features to translate tweets, store words as tokens and plot statistical information will be added.
  2. Required packages: …

MIT License. Copyright (c) 2019 Hamza Ali.


7 hours

$1,990

ZMOD

About

Zementis Modeler (ZMOD)

Zementis Modeler is an open source machine learning and artificial intelligence platform for Data Scientists to solve business problems faster, build prototypes and convert them to actual projects. The modeler helps from data preparation to model building and deployment, and the tool supports a large variety of algorithms that can be run without a single line of code. The web-based tool has various components which help Data Scientists perform several model-building tasks, and it provides deployment-ready PMML files which can be hosted as REST services. Zementis Modeler allows its users to cover a wide variety of algorithms and Deep Neural Network architectures, with a minimal- or no-code environment. It is also one of the few deep-learning platforms to support the Predictive Model Markup Language (PMML) format; PMML allows different statistical and data mining tools to speak the same language. The feature offerings of Zementis Modeler are:

  • Zementis AutoML: Automatically trains machine learning models on data; supports a huge space of algorithms and hyper-parameter tuning.
  • Zementis Model Editor: Create Deep Neural Network models using drag-and-drop functionality supporting a wide variety of model layers; once the model architecture is ready, train your model. The editor also comes with pre-trained architecture templates that help in quick model building and training.
  • Jupyter Notebook: Zementis Modeler comes with an integrated Jupyter Notebook (for R and Python).
  • Tensorboard: Zementis Modeler provides a Tensorboard dashboard to show the progress of models.
  • Code Execution: Zementis Modeler supports executing Python script files for more advanced requirements.
  • REST API Support: Zementis Modeler can be driven through REST calls and used as a deployment tool.

Zementis Modeler comes to you with the complete source code in Python, .Net, Angular and docker files, extended HTML documentation with how-to-use guidelines, and a growing number of videos, blogs and tutorials that help you familiarize yourself with the way Zementis Modeler supports a Data Scientist in becoming more productive.

7 hours

$1,990

Anatomy Of A Theme

About

Anatomy Of A Theme

Description

In this lesson, you'll learn about the different files that make up a theme and how they work together to display your WordPress website.

Objectives

After completing this lesson, participants will be able to:

  • Recognize that many files are needed to make a theme.
  • Identify the basic building blocks used in a WordPress theme.
  • Identify the files required to make a WordPress theme.

    Target Audience

    Who is this lesson intended for? What interests/skills would they bring? Choose all that apply.

  • [ ] Users
  • [x] Designers
  • [ ] Developers
  • [ ] Speakers
  • [ ] All

    Experience Level


7 hours

$1,990

201Training

About

BusMall

Author: Lena Eivy

Description:

Bus Mall is an application that displays three products at a time and allows users to click on the product that they would be most likely to purchase. The application tabulates the voting data and displays it in a chart. Data is stored in local storage so that it can persist on any given machine.

Dependencies

ChartJS is used to generate the graph of the voting data once the user has voted 25 times.


7 hours

$1,990

PrepCrysEos Py

About

PrepCrysEos_Py

Python script to generate equation-of-state data from .cif files (performing volumetric expansion and compression of native crystal structures). These .cif structures are then converted to ReaxFF .bgf files with a modified openbabel v2.4.1 code, and can be used as training set data for ReaxFF force field development.


7 hours

$1,990

AssignPointsToExistingClusters

About

AssignPointsToExistingClusters is a set of algorithms for assigning points in one dataset to clusters in another dataset. Ideally, if we have two datasets that represent the same objects in the real world, there would be an unambiguous correspondence between the two datasets; however, this is not usually the case when working with real-world data. Hence, this repository exists. Finding correct correspondences between datasets is particularly important when training and testing supervised machine learning models. These algorithms have been specifically developed for finding matches between in situ data and clusters in remotely sensed point clouds (such as from lidar and Structure from Motion), though the ideas generalize to other machine learning contexts in which the goal is to match points in one dataset to clusters in another.
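A naive baseline for this matching problem assigns each point to the cluster with the nearest centroid (a sketch of the general idea only, not one of the repository's algorithms):

```python
def assign_points(points, clusters):
    """Assign each point to the cluster with the nearest centroid.

    `clusters` is a list of point lists; each cluster is summarized by
    its centroid, and every query point gets the index of the closest
    centroid.  Squared Euclidean distance suffices for comparison.
    """
    def centroid(cluster):
        dims = len(cluster[0])
        return tuple(sum(p[d] for p in cluster) / len(cluster) for d in range(dims))

    centroids = [centroid(c) for c in clusters]
    return [
        min(
            range(len(centroids)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
        )
        for p in points
    ]
```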


7 hours

$1,990

IICARus

About

IICARus

This is a web application used to provide training and reviewing services to the public for learning publication appraisal. This web application was written to support the project "IICARus", a randomised controlled trial of an intervention to improve compliance with the ARRIVE guidelines. The project aims to assess whether mandating the completion of an ARRIVE checklist improves full compliance with the ARRIVE guidelines. Manuscripts, limited to in vivo studies, will be scored by two independent reviewers against the operationalised ARRIVE checklist, blinded both to intervention status and to the scores of the other reviewer. Discrepancies will be resolved by a third reviewer who will be blinded to the identity, and unblinded to the scores, of the previous reviewers.


7 hours

$1,990

Bag Of Visual Words

About

Bag of Visual Words Converter

This course contains a simple implementation of a descriptors-to-bag-of-words converter. The file contains functions for converting image descriptors to bags of words. It also includes training a k-NN model to find similar images based on bags of words.

Functions

init_SURF This function initializes the SURF object that will serve as the feature extractor of the image. The two parameters that this function accepts is first the hessianThreshold and the extended parameter.

  • hessianThreshold - this is a threshold to decide from which value of you are willing to accept keypoints. The default value is 100
  • extended - this parameter tells whether you want to use normal SURF (64-dimensional descriptors) or extended SURF (128-dimensional descriptors). Values can only be 0 or 1: 0 for normal SURF and 1 for extended SURF. The default value is 0. RETURNS the SURFObject

retrieve_all_images

This function helps the user retrieve the features of all images in the directory. The strategy used by this function is to iteratively load images by taking advantage of filenames that are numbered incrementally (ex. cat.1.jpg, cat.2.jpg, etc.). There are five (5) parameters accepted by the function:

  • filepath - this is the filepath of the directory
  • file_prefix - this is the text before the incrementing number of the files in the directory. If your filename is boy_face_1.jpg, then the file_prefix would be 'boy_face_'
  • file_extension - the file extension of the files in the directory. Take note that the dot before the abbreviation must be included. (ex. '.jpg', '.png', etc.)
  • num_images - this is the number of images you want to get the descriptors of. Make sure that num_images is the same as the number of images in the directory
  • SURFObject - pass the SURFObject here
RETURNS image_names, image_descriptors

create_patch_clusters

This function creates the clusters of the patches generated from getting the descriptors of all the images you have in your directory. This function takes only two (2) parameters, namely descriptors_of_images and num_clusters:

  • descriptors_of_images - these are the descriptors of all the images in your directory
  • num_clusters - the number of clusters you want to have to represent your bag of words model
RETURNS clusters

create_bag_of_visual_words

This function converts a single image's descriptors into a bag of words. This function takes in the following parameters:

  • descriptors - the image descriptors
  • clusters - the KMeans clusters that are already trained
RETURNS image_bow

convert_image_to_bow

This function converts a set of images' descriptors into bags of words. This function takes the following parameters:

  • descriptors - the set of descriptors per image
  • clusters - the KMeans clusters that were already trained
RETURNS image_bows

scale_bows

This simply normalizes the bows to prepare them for k-NN and determining the nearest neighbor.

  • image_bows - the images converted to bags of words serve as the only parameter for this function
RETURNS sc, image_bows_normalized (the scaler and the normalized bows)

train_knn

Trains the k-NN for determining similar images. The parameters are as follows:

  • image_bows - all the bags of words of the training images
  • n_neighbors - the number of images you want to see. Default is equal to 5
  • radius - default is equal to 1
RETURNS neighbors (the k-NN model)

predict_similar_images

This function determines the similar images based on the bag of words of the query image. The function has the following parameters:

  • image_bow - the bag of words of the image
  • scaler - the normalization function for the bow
RETURNS kneighbors along with the distances from the query image
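The bag-of-visual-words step that these functions implement can be sketched in plain NumPy (a minimal illustration of the technique, assuming descriptors and cluster centers are already computed; the function names here are not the library's):

```python
import numpy as np

def descriptors_to_bow(descriptors, centers):
    """Assign each descriptor to its nearest cluster center and
    return a histogram of cluster counts (the bag of visual words)."""
    # Pairwise distances: shape (n_descriptors, n_clusters).
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    words = d.argmin(axis=1)
    return np.bincount(words, minlength=len(centers)).astype(float)

def nearest_images(query_bow, image_bows, n_neighbors=5):
    """Indices of the n_neighbors most similar images by Euclidean
    distance between normalized bag-of-words vectors."""
    def norm(v):
        s = np.linalg.norm(v)
        return v / s if s else v
    q = norm(query_bow)
    dists = [np.linalg.norm(q - norm(b)) for b in image_bows]
    return np.argsort(dists)[:n_neighbors]
```

In the library described above, the clustering would come from create_patch_clusters and the final search from train_knn / predict_similar_images; the sketch only shows the underlying idea.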

7 hours

$1,990

Backpropagation Cuda

About

Parallel Neural Training

This is an application that trains, runs and validates a neural network on a GPU, given a dataset. The network is trained using the backpropagation algorithm, and the parallelization is done using a mix of CUDA, Pthreads and OpenMP. The program runs on a machine with CUDA 7+ installed. To build and execute it, run:
$ make
$ ./parallel_neural_training


7 hours

$1,990

Ufrgs Intel Modern Code

About

ufrgs-intel-modern-code

Intel Modern Code is an initiative to spread knowledge on how to design and optimize software through the use of parallelism, aiming to exploit the full potential of computers and supercomputers. This community is made up of experts who provide libraries, support and training in modern code techniques. The GPPD (Parallel and Distributed Processing Group), the Institute of Informatics at UFRGS, joined the modern source community as an Intel Partner Modern Code (MCP) in August 2016 to offer courses and training.


7 hours

$1,990

CloudCal Training

About

CloudCal

This app will allow you to build & apply calibrations for the Tracer & Artax series of XRF devices. It (currently) works with .csv files produced from S1PXRF, PDZ versions 24 and 25, .spx files, Elio spectra (.spt), .mca files, and .csv files of net counts produced from Artax (7.4 or later).


7 hours

$1,990

Europython2018

About

RIDICULOUSLY ADVANCED PYTHON

Advanced Python Training At EuroPython 2018

Francesco Pierfederici

If you have been using Python for some time already and want to reach new heights in your language mastery, this training session is for you!

Python has a number of features which are extremely powerful but, for some reason are not particularly well known in the community. This makes progressing in our Python knowledge quite hard after we reach an intermediate level. Fear not: this session has you covered! We will look at some advanced features of the Python language including properties, class decorators, the descriptor protocol, annotations, data classes and meta-classes. If time allows we will even delve into the abstract syntax tree (AST) itself. We will use Python 3.7 and strongly recommend that attendees install a reasonably recent version of Python 3 to make the most out of the training.


7 hours

$1,990

JavaBasicsTraining

About

JavaBasicsTraining

This project provides the fundamentals of the Java language along with examples, so that students can learn it easily. If you want to contribute to this, feel free to do so! This training covers only the Java language basics and the concepts related to them, along with pertinent examples. It does not cover the technology stack built on top of Java (e.g. EJB); just the basic fundamentals. It can be covered in 14 days, spending just 2 to 3 hours a day, including coding.


7 hours

$1,990

Trainingdaytwo

About

Terraform Day Two (102)

Overview - Creating Terraform modules.

A Terraform module is a grouping of variables, resources, and outputs that can be reused. It reduces code repetition and means that the module can be maintained externally to the template using it. And a module is just a Terraform template itself!

Training Goals for day two

  • Understand how to create a module
  • How to use a module from GitHub
  • How to use a versioned module
  • Restrictions of modules
NOTE: We will be using the same user roles as trainingdayone, so all terraform commands should be run like this, to use the correct account and user:
aws-vault exec terraformrole -- terraform init
aws-vault exec terraformrole -- terraform apply
    1. Create a module that creates an ec2 launch template, autoscaling group, and load balancer
    2. Create a template that uses the module to create a Drupal website.
    3. Using a Makefile to simplify commands
    4. Create an S3 bucket in a particular region

7 hours

$1,990

Keras Image Similarity Training

About

Keras Image Similarity Training

Train a convolutional neural network to determine content-based similarity between images. This is done with a siamese neural network as shown. The model learns from labeled images of similar and dissimilar pairs. The model's objective is to embed similar pairs nearby and dissimilar pairs far apart. This property of the latent space means kNN searches can find similar images. This idea is based on the paper found

Requirements

Labeled Data

For both training and indexing, labeled data will be needed. The data needed is multiple images of each unique item. Create a JSON file such as the one seen below. The key of top-level items should be the item_id. Each value should have an images array, which contains data on each image for that item. Optionally, you can also provide labels for each item_id, where two items sharing some label will not be considered dissimilar.
{
  "item_id_1": {
    "images": [
      { "filename": "relative/path/to/item_1_1.jpg" },
      { "filename": "relative/path/to/item_1_2.jpg" }
    ],
    "labels": ["red", "pink"]
  },
  "item_id_2": {
    "images": [
      { "filename": "relative/path/to/item_2_1.jpg" },
      { "filename": "relative/path/to/item_2_2.jpg" }
    ],
    "labels": ["blue"]
  }
}
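From a label file of this shape, similar/dissimilar training pairs for the siamese network could be derived along these lines (a sketch of the pairing rule described above, not the project's actual code):

```python
import itertools

def make_pairs(labeled):
    """Build (file_a, file_b, is_similar) tuples: images of the same
    item are similar; items sharing no label yield dissimilar pairs."""
    pairs = []
    items = list(labeled.items())
    for item_id, info in items:
        files = [img["filename"] for img in info["images"]]
        for a, b in itertools.combinations(files, 2):
            pairs.append((a, b, 1))  # same item -> similar
    for (id_a, a), (id_b, b) in itertools.combinations(items, 2):
        if set(a.get("labels", [])) & set(b.get("labels", [])):
            continue  # shared label -> not considered dissimilar
        pairs.append((a["images"][0]["filename"],
                      b["images"][0]["filename"], 0))
    return pairs
```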

Training

For training a model, you will definitely need a GPU. If you do not have one, then we suggest only using a pretrained model provided by Keras's API.

Notebook

We provide a Jupyter notebook that will walk you through how to train a siamese network. Note that you will need a machine with an Nvidia GPU here.
DATA=/path/to/images/and/label/files make notebook

Exporting Model

If you trained a model, run the following:
make bash-cpu
python utilities.py --export savedmodel --keras-model checkpoints/file_saved_by_notebook.hdf5
Otherwise, you can use Google's pretrained classification model:
make bash-cpu
python utilities.py --export savedmodel

Indexing

Images need to be embedded and indexed for fast kNN search.
GPU and a trained model:
DATA=/path/to/images/and/label/files make bash-gpu
python utilities.py --export balltree \
GPU and Google's pretrained model:
DATA=/path/to/images/and/label/files make bash-gpu
python utilities.py --export balltree \
CPU and Google's pretrained model:
DATA=/path/to/images/and/label/files make bash-cpu
python utilities.py --export balltree \


7 hours

$1,990

What Is Open Source

About

What is Open Source?

Description

In this lesson, you will learn the meaning of the term Open Source when referring to software, what the GPL software license provides, why WordPress is an open-source project and how this is important for both the users of WordPress and the contributors to WordPress.

Objectives

After completing this lesson, students will be able to:

  • Describe and compare the concepts of open-source software, free software, and proprietary software.
  • Define the purpose of the GPL license.
  • Explain the benefits of open-source software for WordPress users.
  • Identify the ways that individuals and organizations can contribute to the WordPress project.

    Target Audience

    Who is this lesson intended for? What interests/skills would they bring? Choose all that apply.

  • [ ] Users
  • [ ] Designers
  • [X] Developers
  • [ ] Speakers
  • [ ] All

    Experience Level


7 hours

$1,990

Tf2 Models

About

Welcome to the Model Garden for TensorFlow

The TensorFlow Model Garden is a course with a number of different implementations of state-of-the-art (SOTA) models and modeling solutions for TensorFlow users. We aim to demonstrate the best practices for modeling so that TensorFlow users can take full advantage of TensorFlow for their research and product development.

7 hours

$1,990

EPlatform

About

EPlatform

Educational program dedicated to people with severe motor disabilities, with a GUI designed to be operated with binary-choice switch devices, and with applications for training basic linguistic skills: fill in the gaps in pictures' labels, match a description to a picture, put a word's letters in the correct order, and correct spelling errors in words.


7 hours

$1,990

Txt2Vec

About

Txt2Vec

Txt2Vec is a toolkit to represent text as vectors. It's based on Google's word2vec project, but with some new features, such as incremental training, model vector quantization and so on. For a specified term, phrase or sentence, Txt2Vec is able to generate a corresponding vector according to its semantics in the text, and each dimension of the vector represents a feature. Txt2Vec is based on a neural network for model encoding and cosine distance for term similarity. Furthermore, Txt2Vec has fixed some issues of word2vec when encoding a model in a multi-threading environment. The following is an introduction to how to use the console tool to train and use a model. The API part will be documented later.

Console tool

The Txt2VecConsole tool supports several modes. Run the tool without any options, and it will show usage information for the modes. Txt2VecConsole.exe
Txt2VecConsole for Text Distributed Representation
Specify the running mode:
-mode
: train model to build vectors for words
: calculating the similarity between two words
: multi-words semantic analogy
: shrink down the size of model
: dump model to text format
: build vector quantization model in text format

Train model

With train mode, you can train a word-vector model from a given corpus. Note that, before you train the model, the training corpus should be word-segmented. The following are the parameters for training mode:
Txt2VecConsole.exe -mode train
Parameters for training:
-trainfile : Use text data from to train the model
-modelfile : Use to save the resulting word vectors / word clusters
-vector-size : Set size of word vectors; default is 200
-window : Set max skip length between words; default is 5
-sample : Set threshold for occurrence of words. Those that appear with higher frequency in the training data will be randomly down-sampled; default is 0 (off), useful value is 1e-5
-threads : the number of threads (default 1)
-min-count : This will discard words that appear less than times; default is 5
-alpha : Set the starting learning rate; default is 0.025
-debug : Set the debug mode (default = 2 = more info during training)
-cbow : Use the continuous bag of words model; default is 0 (skip-gram model)
-vocabfile : Save vocabulary into file
-save-step : Save model after every words processed. it supports K, M and G for larger number
-iter : Run more training iterations (default 5)
-negative : Number of negative examples; default is 5, common value are 3 - 15
-pre-trained-modelfile : Use which is pre-trained-model file
-only-update-corpus-word : Use 1 to only update corpus words, 0 to update all words
Example:
Txt2VecConsole.exe -mode train -trainfile corpus.txt -modelfile vector.bin -vocabfile vocab.txt -debug 1 -vector-size 200 -window 5 -min-count 5 -sample 1e-4 -cbow 1 -threads 1 -save-step 100M -negative 15 -iter 5
After the training is finished, the tool will generate three files: vector.bin contains the words and vectors in binary format, vocab.txt contains all words with their frequency in the given training corpus, and vector.bin.syn is used for incremental model training in the future.

Incremental Model Training

After we have collected some new corpus and new words, to get vectors for these new words or update existing words' vectors with the new corpus, we need to re-train the existing model in incremental mode. Here is an example:
Txt2VecConsole.exe -mode train -trainfile corpus_new.txt -modelfile vector_new.bin -vocabfile vocab_new.txt -debug 1 -window 10 -min-count 1 -sample 1e-4 -threads 4 -save-step 100M -alpha 0.1 -cbow 1 -iter 10 -pre-trained-modelfile vector_trained.bin -only-update-corpus-word 1
We have already trained a model "vector_trained.bin"; now we have collected some new corpus named "corpus_new.txt" and new words saved into "vocab_new.txt". The above command line will re-train the existing model incrementally and generate a new model file named "vector_new.bin". To get a better result, the "alpha" value should usually be bigger than in full-corpus, full-vocabulary training.
Incremental model training is very useful for incremental corpora and new words. In this mode, we are able to efficiently generate vectors for new words aligned with existing words.

Calculating word similarity

With distance mode, you are able to calculate the similarity between two words. Here are the parameters for this mode:
Txt2VecConsole.exe -mode distance
Parameters for calculating word similarity
-modelfile : encoded model needs to be loaded
-maxword : the maximum word number in result. Default is 40
After the model is loaded, you can input a word at the console and the tool will return the Top-N most similar words.
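The cosine-distance ranking behind such a mode can be sketched as follows (a generic illustration with made-up vectors, not Txt2Vec's internals):

```python
import numpy as np

def most_similar(word, vectors, topn=5):
    """Rank all other words by cosine similarity to the query word."""
    q = vectors[word]
    q = q / np.linalg.norm(q)
    scores = []
    for w, v in vectors.items():
        if w == word:
            continue
        scores.append((w, float(np.dot(q, v / np.linalg.norm(v)))))
    scores.sort(key=lambda t: -t[1])  # highest cosine similarity first
    return scores[:topn]
```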


7 hours

$1,990

CrowdLayer

About

CrowdLayer

A neural network layer that enables training of deep neural networks directly from crowdsourced labels (e.g. from Amazon Mechanical Turk) or, more generally, labels from multiple annotators with different biases and levels of expertise, as proposed in the paper:

Rodrigues, F. and Pereira, F. Deep Learning from Crowds. In Proc. of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). This implementation is based on Keras and Tensorflow.


7 hours

$1,990

Work around with Java and PostgreSQL

About

PostgreSQL and Java training

This course contains the source code of the examples and code demos included in some of the "PostgreSQL and Java" trainings that [8Kdata][1] delivers.
Java source code
Included in the java directory are some maven-ized Java projects:

  • helloJDBC: an iterative approach to JDBC, where a simple example is improved across several executable programs, adding better JDBC constructs and Java best-practices
  • hellojOOQ: a simple project to show how to take back control of your SQL with [jOOQ][2]
  • helloMyBatis: a simple project to show how to use the [MyBatis][3] mapper
  • helloProcessBuilder: connect to PostgreSQL via the stdin
  • helloPool: a simple project to show how to use a connection pool with [HikariCP][5] and [FlexyPool][6]
  • helloPLJava: call Java from inside of PostgreSQL via PL/Java
Example database
Included in the db directory is an example database used by the projects in the java folder. This database is derived from the [PgFoundry Sample Databases][4] world database.

7 hours

$1,990

Lfcs

About

LFCS Training Resources

Training resources for LFCS certification (Linux Foundation Certified System Administrator)

Overview of Domains and Competencies

Command-line Filesystem & storage Local system administration Local security Shell scripting Software management


7 hours

$1,990

SenseHatShowIP

About

SenseHatShowIP

Small program for the Raspberry Pi with Sense Hat: Displays IP address on the Sense Hat, useful for headless operation esp. for events like training workshops

Purpose

The intended usage is as a program included during startup of a Raspberry Pi with a Sense Hat attached, enabling the Pi to announce its IP address so that a remote system can use the IP address to connect to it (using ssh etc.). The program logic relies on the target address in external_IP_and_port being routable, so if you're running this on a network not connected to the Internet, or lacking a default route, you'll need to alter external_IP_and_port to use an address that the Pi can reach on your network. It should be helpful for workshops using Raspberry Pis with Sense Hats, allowing the Pis to be used 'headless' via ssh - which is what I wrote this utility for.
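The usual way to discover which local address the Pi would use toward a routable target (the role external_IP_and_port plays above) is to connect a UDP socket to it and read the socket's local address; a sketch, with the target address chosen as an assumption here (192.0.2.1 is a documentation address):

```python
import socket

def local_ip(target=("192.0.2.1", 80)):
    """Return the local IP address used to reach a routable target.
    Connecting a UDP socket sends no packets; it only selects a route.
    Falls back to loopback when no route exists."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(target)
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()
```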


7 hours

$1,990

Explore NeuroBricks

About

A Theano wrapper for implementing, training and analysing neural nets with convenience.

Core Features

By using pre-defined layers and overriding the "+" operator, we can build deep networks in one line. Building deep networks is now as easy as stacking bricks together.

It is easy to rearrange different layers, pretrained or not, into new models. You can build a network with different kinds of layers, or reuse trained layers from other models. It also makes forming ensembles convenient. Training methods are completely separated from the models; with this separability, you can apply any training method to any network you build.

We also ensure that different kinds of training mechanisms are independent of each other. This allows you to combine different tricks into very complicated training procedures, such as combining Dropout, Feedback Alignment, and unsupervised pretraining, without having to define a new training class. A set of analysis and visualization methods is built into the definition of the models, so you have many analysis methods at hand right after your model is created. Most analysis methods allow interactive data updating, so you can see how the weights/activations change during training epochs.
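The "+"-operator idea can be sketched in plain Python (a toy illustration of the pattern, not NeuroBricks' actual classes):

```python
class Layer:
    """A toy layer: wraps a function, and '+' chains layers into a network."""

    def __init__(self, fn):
        self.fns = [fn]

    def __add__(self, other):
        stacked = Layer(lambda x: x)      # temporary; fns replaced below
        stacked.fns = self.fns + other.fns
        return stacked

    def __call__(self, x):
        # Apply each stacked function in order.
        for fn in self.fns:
            x = fn(x)
        return x

# Stacking bricks: a two-"layer" model built in one line.
model = Layer(lambda x: 2 * x) + Layer(lambda x: x + 1)
```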


7 hours

$1,990

MachineLearningObjectTracking

About

Machine Learning Object Tracking (MLOT)

This course contains all the functions needed for the MLOT algorithm. This algorithm is useful for identification of low-SNR particles in 3D. This README file contains a general overview of the algorithm, as well as installation instructions.

General Overview

Fundamentals

MLOT uses linear logistic regression to create a general linear model of the data based on a small sample dataset. The user is presented with this small training dataset and asked to identify all in-focus particles of interest. From the pixel locations of known particles, this linear model is generated and stored. The linear model is then applied to unknown data to calculate the probability of particle presence in the dataset, and the unknown data is analyzed using a threshold probability value. Once all data has been analyzed and particle locations through time are known, these (x,y,z,t) points are stitched together using a Hungarian simple tracking algorithm with gap detection. You can read more about the fundamental mathematics behind this algorithm, as well as preliminary results and error quantification, in our AIMES Biophysics paper (link coming soon).
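The probability-and-threshold step described above can be sketched as follows (a generic logistic-regression scorer with illustrative weights, not MLOT's trained model):

```python
import numpy as np

def particle_probability(features, weights, bias):
    """Logistic model: sigmoid of a linear combination of pixel features."""
    z = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

def detect(features, weights, bias, threshold=0.5):
    """Flag a candidate as a particle when its probability passes the threshold."""
    return particle_probability(features, weights, bias) >= threshold
```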

Practical Implementation

MLOT has three major steps:

  1. Image pre-processing
  2. Training
  3. Tracking

    Image pre-processing

    Preprocessing is a very important step, as it de-noises the images and increases the SNR so that the algorithm can better identify particles. This routine (currently) consists of two denoising steps:

  4. Mean Subtraction
  5. Band-Pass Filtering
Mean subtraction calculates the mean image of a time sequence of images and subtracts that mean image from all images in the sequence. This eliminates any stationary artifacts from the image and increases contrast for objects that are moving (i.e. the things that are going to be tracked). Band-pass filtering eliminates low and high spatial frequency noise from images. For the DHM instrument used in the AIMES Biophysics paper (link coming soon), the diffraction-limited resolution of the instrument is about 800 nm. This represents a physical upper limit on the spatial frequencies that can be observed, so anything beyond this frequency is pure noise. A lower cut-off frequency is used to attenuate any large-scale artifacts in the image (e.g. lens curvature). To edit the low and high frequency cutoff values for this filter, edit the following variables in MAIN.m (located in the 'Ask user for inputs' section):
innerRadius = 30;
outerRadius = 230;
centerX = n(1)/2;
centerY = n(2)/2;
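A band-pass of this kind can be sketched with an annular mask in frequency space (NumPy standing in for the course's MATLAB code; the radii play the role of innerRadius/outerRadius above):

```python
import numpy as np

def bandpass(image, inner_radius, outer_radius):
    """Zero out spatial frequencies below inner_radius and above
    outer_radius in the centered 2-D Fourier transform of the image."""
    rows, cols = image.shape
    f = np.fft.fftshift(np.fft.fft2(image))
    y, x = np.ogrid[:rows, :cols]
    r = np.hypot(y - rows / 2, x - cols / 2)   # distance from DC component
    mask = (r >= inner_radius) & (r <= outer_radius)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```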

    Training

    This step presents the user with a total of ten z-slices and asks the user to select ONLY in-focus particles. This step is crucial because it determines the selection sensitivity of the tracking algorithm. Selecting only in-focus particles helps the program intrinsically reduce false positives. For more information on the GUI aspect of the training routine, see the 'Running the Code' section of this README document. Once all in-focus particles are selected across the 10 z-slices, a linear logistic regression is used to generate a linear model from the data and the answer key provided by the user (where the particles are located). This linear model is used to track other datasets.

    Tracking

    Tracking is done in two stages:

  6. Particle Detection
  7. Particle Linking

    Particle Detection


7 hours

$1,990

Video2frames

About

video2frames

A Python script for converting videos to series of frames for NN training. Run it by typing:
python video2image.py path -o output_dir --skip n --mirror
The path leads to a video or a directory containing only videos. The optional output folder specifies the directory where to save the images; it can be given with -o or --output. The optional skip parameter saves only every nth frame. When run with the optional --mirror parameter, every second image saved is flipped.
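The --skip and --mirror semantics described above can be sketched as a small frame-selection plan (an illustration of the described behavior, not the script's actual code):

```python
def frame_plan(total_frames, skip=1, mirror=False):
    """Return (frame_index, should_flip) for each frame to save:
    keep every `skip`-th frame; with mirror on, flip every second saved frame."""
    plan = []
    for saved, idx in enumerate(range(0, total_frames, skip)):
        flip = mirror and saved % 2 == 1
        plan.append((idx, flip))
    return plan
```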


7 hours

$1,990

App Abc List

About

ABC-List

Intention: ABC-List is a learning software which allows the user to train his ability for structured thinking about a chosen topic, and to deepen his knowledge of the topic through key terms. ABC-List is a [JavaFX] & [Maven] application, written in [NetBeans IDE]. Image: 0.2.0-PRERELEASE
Content

  • Download
  • Requirements
  • License
  • Author
  • Contact

7 hours

$1,990

Pyssword

About

A simple program allowing you to practice password memory. It is not a password manager; passwords are stored securely as strings hashed with the bcrypt algorithm. If you like to rely on your memory and want to remember very long, random passwords full of strange characters, this program is for you. This script helps you build a good habit of practicing your memory once every few days.
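The store-a-hash-then-verify pattern the tool relies on can be sketched with the standard library's PBKDF2 as a stand-in (the tool itself uses bcrypt, which requires a third-party package):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Derive a salted hash; only the salt and hash are stored,
    never the password itself."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def check_password(attempt, salt, digest, rounds=100_000):
    """Re-derive from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)
```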


7 hours

$1,990

Guma

About

GUMA, a Free Software Educational Program for elementary school students. What is GUMA? GUMA is a Free Software educational program for elementary school students that helps them to practice the basic arithmetic operations of multiplication, addition, subtraction and division. So how does it help them? It helps them by challenging them to solve random arithmetic exercises with random numbers. You can also select the number of random arithmetic exercises that you want to solve, the maximum value of a number that you want to participate in the exercises, and the type of arithmetic exercise that you want to practice. You can also choose to simulate the arithmetic operation.


7 hours

$1,990

PraxManager

About

PraxManager

What is PraxManager? PraxManager is an online instrument intended to monitor and evaluate the practical training of students in several specializations in the healthcare education field, offering at the same time an overview of the evolution of the students during their training and the performance of the schools over time. PraxManager, the software developed in the project, is tested and documented during the project lifetime, in local context, in the daily activities of other partner schools, and in transnational mobilities for students.

Cope Project

The project entitled CoPE (Communities of Practice in Education) aims at creating an online instrument intended to monitor and evaluate the practical training of students in several specializations in the healthcare education field, offering at the same time an overview of the evolution of the students during their training and the performance of the schools over time. PRAX-Manager, the software developed in the project, will be developed, tested and documented during the project lifetime, in local context, in the daily activities of other partner schools, and in transnational mobilities for students.

Erasmus+


7 hours

$1,990

Krautli

About

krautli_yo

Krautli is supposed to become an (offline/online) app that allows users to log the positions of their favorite plants and find them again at the times certain of their plant parts become harvestable. It started with pen & paper and advanced to a collection of notes and photos taken with a smartphone. The app will be capable of logging positions and access data offline and synchronizing with the server if needed.

The Techstack behind:

Development setup

Pages / Routes accessible


7 hours

$1,990

Datalab

About

datalab

Purpose

Provide the means of ownership of one's data and trained models used in machine learning applications.

Functions

Store data as datapoints and add labels to each datapoint. Datapoints can be organized into different datasets and downloaded as labelled data.

Entities

Each datapoint derives from an entity. An entity is a special type which stores the data in a specific way.

Image entity

When storing image data, this entity extracts the pixel array and stores it as 3-tuples of RGB components.
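That storage scheme can be sketched as grouping flat pixel bytes into RGB triples (a toy illustration of the idea, not datalab's actual code):

```python
def to_rgb_tuples(pixel_bytes):
    """Group a flat sequence of channel values into (R, G, B) 3-tuples."""
    if len(pixel_bytes) % 3:
        raise ValueError("pixel data length must be a multiple of 3")
    it = iter(pixel_bytes)
    # zip pulls three consecutive values per tuple from the same iterator
    return list(zip(it, it, it))
```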


7 hours

$1,990

Diffchecker

About

diffchecker

An open-source tool for ACM ACPC trainings to compare judge output with generated output

Description

To check whether your program generates the right output, you can use diffchecker to compare it with the judge output. diffchecker will display "Accepted" in green if gen.out and judge.out are identical, "Wrong Answer" in red, or other problems in blue. In the case of "Wrong Answer", it will display the cases that you should reconsider. If there are multiple ones, it will write them to a file called diff.txt.
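The core comparison can be sketched line by line (a Python illustration of the verdicts described above; the coloring and file output are omitted):

```python
def judge(generated_lines, judge_lines):
    """Compare outputs line by line; return the verdict and the
    1-based case numbers that differ."""
    if generated_lines == judge_lines:
        return "Accepted", []
    diffs = [i + 1
             for i, (g, j) in enumerate(zip(generated_lines, judge_lines))
             if g != j]
    # Lines present in only one file also count as differing cases.
    shorter = min(len(generated_lines), len(judge_lines))
    longer = max(len(generated_lines), len(judge_lines))
    diffs += list(range(shorter + 1, longer + 1))
    return "Wrong Answer", diffs
```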

Dependencies

  • Colorize gem : sudo gem install colorize

7 hours

$1,990

Horseshoehell

About

horseshoehell

Horseshoe Hell is a mobile app designed for the 24 Hours of Horseshoe Hell climbing competition at Horseshoe Canyon Ranch, Arkansas. This app is used for score keeping during the competition and can be used to view others' scores and past scores, and for training. The server-side code and the website to display results are included in this repository as well. This project (including the iOS and Android apps as well as the website and server-side code) is owned by Luke Stufflebeam. Please consult the included license for details on usage restrictions. This software is open source and can be modified by anyone who wants to contribute to the project. Please contact Luke before attempting to modify the code. Luke Stufflebeam lucas@lucasstufflebeam.com


7 hours

$1,990

Dataset Creator GroundTruth

About

This tool is used for enhancing the process of collecting training samples from image datasets by cropping multiple regions. The tool allows you to use mouse-drag controls to crop ROIs and automatically store their image coordinates in a simple XML file, for validation against the ground-truth information once a statistical learning method or classifier has been trained.


7 hours

$1,990

Etcher

About

Etcher

Flash OS images to SD cards & USB drives, safely and easily. Etcher is a powerful OS image flasher built with web technologies to ensure that flashing an SD card or USB drive is a pleasant and safe experience. It protects you from accidentally writing to your hard drives, ensures every byte of data was written correctly, and much more. It can also directly flash Raspberry Pi devices that support the usbboot protocol.


Supported Operating Systems

Note that Etcher will run on any platform officially supported by


7 hours

$1,990

Gson

About

Gson

Gson is a Java library that can be used to convert Java Objects into their JSON representation. It can also be used to convert a JSON string to an equivalent Java object. Gson can work with arbitrary Java objects including pre-existing objects that you do not have source-code of. There are a few open-source projects that can convert Java objects to JSON. However, most of them require that you place Java annotations in your classes; something that you can not do if you do not have access to the source-code. Most also do not fully support the use of Java Generics. Gson considers both of these as very important design goals.

Goals

  • Provide simple toJson() and fromJson() methods to convert Java objects to JSON and vice-versa
  • Allow pre-existing unmodifiable objects to be converted to and from JSON
  • Extensive support of Java Generics
  • Allow custom representations for objects
  • Support arbitrarily complex objects (with deep inheritance hierarchies and extensive use of generic types)

7 hours

$1,990

Xi Editor

About

(pronounced "Zigh") A modern editor with a backend written in Rust. Note: This repo contains only the editor core, which is not usable on its own. For editors based on it, check out the list in Frontends. The xi-editor project is an attempt to build a high quality text editor,


7 hours

$1,990

V

About

The V Programming Language

Key Features of V

Stability guarantee and future changes

Despite being at an early development stage, the V language is relatively stable and has a backwards-compatibility guarantee, meaning that the code you write today is guaranteed to work a month, a year, or five years from now. There may still be minor syntax changes before the 1.0 release, but they will be handled automatically via vfmt, as has been done in the past. The V core APIs (primarily the os module) will still have minor changes until they are stabilized in 2020. Of course the APIs will grow after that, but without breaking existing code. Unlike many other languages, V is not going to be constantly changing, with new features being introduced and old features modified. It is always going to be a small and simple language, very similar to the way it is right now.


7 hours

$1,990

Material Dialogs

About

Material Dialogs

Modules

The core module is the fundamental module that you need in order to use this library. The others are extensions to core.

Core

The core module contains everything you need to get started with the library. It contains all core and normal-use functionality. dependencies { implementation 'com.afollestad.material-dialogs:core:3.3.0' }

Input

The input module contains extensions to the core module, such as a text input dialog. dependencies { implementation 'com.afollestad.material-dialogs:input:3.3.0' }

Files

The files module contains extensions to the core module, such as a file and folder chooser. dependencies { implementation 'com.afollestad.material-dialogs:files:3.3.0' }

Color

The color module contains extensions to the core module, such as a color chooser. dependencies { implementation 'com.afollestad.material-dialogs:color:3.3.0' }

DateTime

The datetime module contains extensions to make date, time, and date-time picker dialogs. dependencies { implementation 'com.afollestad.material-dialogs:datetime:3.3.0' }

Bottom Sheets

The bottomsheets module contains extensions to turn modal dialogs into bottom sheets, among other functionality like showing a grid of items. Be sure to checkout the sample project for this, too! dependencies { implementation 'com.afollestad.material-dialogs:bottomsheets:3.3.0' }

Lifecycle

The lifecycle module contains extensions to make dialogs work with AndroidX lifecycles. dependencies { implementation 'com.afollestad.material-dialogs:lifecycle:3.3.0' }


7 hours

$1,990

Explore Picasso

About

Picasso is a powerful image downloading and caching library for Android. Picasso allows for hassle-free image loading in your application, often in one line of code!

Many common pitfalls of image loading on Android are handled automatically by Picasso:

  • Handling ImageView recycling and download cancelation in an adapter.
  • Complex image transformations with minimal memory use.
  • Automatic memory and disk caching.

Content

  • Download
  • Features
  • Image Transformation
  • Place Holders
  • Resource Loading
  • Debug Indicators

7 hours

$1,990

SnapKit

About

SnapKit is a DSL to make Auto Layout easy on both iOS and OS X.

To use with Swift 4.x, please ensure you are using SnapKit >= 4.0.0.

To use with Swift 5.x, please ensure you are using SnapKit >= 5.0.0.

Contents

Requirements

Communication


7 hours

$1,990

Discover Jquery Pjax

About

pjax is a jQuery plugin that uses ajax and pushState to deliver a fast browsing experience with real permalinks, page titles, and a working back button. pjax works by fetching HTML from your server via ajax and replacing the content of a container element on your page with the loaded HTML. It then updates the current URL in the browser using pushState. This results in faster page navigation for two reasons:

  • No page resources (JS, CSS) get re-executed or re-applied;
  • If the server is configured for pjax, it can render only partial page contents and thus avoid the potentially costly full layout render.

Content

  • Installation
  • Usage
  • Events

7 hours

$1,990

ML From Scratch

Machine Learning From Scratch

About

Python implementations of some of the fundamental Machine Learning models and algorithms from scratch. The purpose of this project is not to produce as optimized and computationally efficient algorithms as possible but rather to present the inner workings of them in a transparent and accessible way.
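In that spirit, here is a minimal from-scratch sketch of polynomial regression, one of the examples covered in the course. The function names and the toy data are illustrative, not the project's actual API: the model expands the input into polynomial features and fits the weights with ordinary least squares via the normal equations.

```python
import numpy as np

def polynomial_features(x, degree):
    # Expand a 1-D input into columns [1, x, x^2, ..., x^degree].
    return np.vander(x, degree + 1, increasing=True)

def fit_polynomial(x, y, degree):
    # Ordinary least squares via the normal equations: w = (X^T X)^+ X^T y.
    X = polynomial_features(x, degree)
    return np.linalg.pinv(X.T @ X) @ X.T @ y

def predict(w, x):
    # Apply the learned weights to freshly expanded features.
    return polynomial_features(x, len(w) - 1) @ w

# Recover y = 1 + 2x + 3x^2 from noiseless samples.
x = np.linspace(-1.0, 1.0, 20)
y = 1 + 2 * x + 3 * x**2
w = fit_polynomial(x, y, degree=2)
print(np.round(w, 3))
```

The project's own implementations follow the same pattern but add features such as regularization and gradient-based optimization, which is exactly what makes reading them instructive.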

Table of Contents

  • About
  • Table of Contents
  • Installation
  • Examples
    • Polynomial Regression
    • Classification With CNN
    • Density-Based Clustering
    • Generating Handwritten Digits
    • Deep Reinforcement Learning
    • Image Reconstruction With RBM
    • Evolutionary Evolved Neural Network
    • Genetic Algorithm
    • Association Analysis
  • Implementations
    • Supervised Learning
    • Unsupervised Learning
    • Reinforcement Learning
    • Deep Learning
  • Contact

7 hours

$1,990

ExoPlayer

About

ExoPlayer is an application-level media player for Android. It provides an alternative to Android's MediaPlayer API for playing audio and video both locally and over the Internet. ExoPlayer supports features not currently supported by Android's MediaPlayer API, including DASH and SmoothStreaming adaptive playbacks. Unlike the MediaPlayer API, ExoPlayer is easy to customize and extend, and can be updated through Play Store application updates.


7 hours

$1,990

Libra

About

Libra Core implements a decentralized, programmable database which provides a financial infrastructure that can empower billions of people.

Note to Developers

  • Libra Core is a prototype.
  • The APIs are constantly evolving and designed to demonstrate types of functionality. Expect substantial changes before the release.
  • We've launched a testnet that is a live demonstration of an early prototype of the Libra Blockchain software.

7 hours

$1,990

MWPhotoBrowser

About

A simple iOS photo and video browser with optional grid view, captions and selections.

MWPhotoBrowser can display one or more images or videos by providing either UIImage objects, PHAsset objects, or URLs to library assets, web images/videos or local files. The photo browser handles the downloading and caching of photos from the web seamlessly. Photos can be zoomed and panned, and optional (customisable) captions can be displayed. The browser can also be used to allow the user to select one or more photos using either the grid or main image view. Works on iOS 7+. All strings are localisable so they can be used in apps that support multiple languages.


7 hours

$1,990


Is learning Programming VIII hard?


In the field of Programming VIII, learning from live, instructor-led, hands-on training courses makes a big difference compared with watching video learning materials. Participants must stay focused and interact with the trainer to raise questions and concerns. In Qwikcourse, trainers and participants use DaDesktop, a cloud desktop environment designed for instructors and students who wish to carry out interactive, hands-on training from distant physical locations.


Is Programming VIII a good field?


For now, there are tremendous work opportunities across various IT fields. The courses in Programming VIII are a great source of IT learning, with hands-on training and experience that could be a great contribution to your portfolio.


