Programming Courses Online

Instructor-led live Computer Programming training courses demonstrate the fundamentals and advanced topics of Programming through interactive hands-on practice. Experience remote live training delivered over an interactive remote desktop and led by a real instructor!

Live instructor-led online Programming courses are delivered using an interactive remote desktop.

During the course, each participant will be able to perform Programming exercises on a remote desktop provided by Qwikcourse.


How do I start learning Programming?


Select from the courses listed under the category that interests you.

If you are interested in learning a course under this category, click the "Book" button and purchase the course. Select your preferred schedule at least 5 days ahead. You will receive an email confirmation, and we will coordinate with the trainer of your selected course.

Programming Training


Learn Windows Forms with C#.Net

About

This course will enable delegates to develop Windows applications using Visual Studio 2005, including creating customized forms and controls. It will also cover the C# programming language, using the principles of object-oriented programming. Delegates will be able to access data from a database and update it from a Windows form.


35 hours

$9,950

Applying NAF with MagicDraw

About

This is a 3-day training covering principles of modelling, NAF, UPDM and use of MagicDraw following a case study, which demonstrates a typical defence architecture approach.

The course:

  • Includes lectures and hands-on practice in using MagicDraw for NAF modelling using UPDM;
  • Explains NAF views, sub-views and concepts;
  • Explains UPDM concepts and diagrams;
  • Provides hands-on experience building models;
  • Shows how to trace model elements in different views;
  • Explains how to use MagicDraw features efficiently;

  • Is based on a consistent modelling case study.

Audience:

  • Enterprise architects,
  • system architects,
  • system engineers,
  • software architects and other stakeholders who will develop and use models

Methods:

  • Presentations, discussions, and case study-based practical assignments.

Course Materials:

  • Slides, case study model, and practical assignment descriptions.

Certificates:

  • Each participant receives a No Magic and NobleProg certificate indicating that they attended the training.

21 hours

$5,970

Fundamentals of Wolfram Language

About

The Wolfram Language is a general multi-paradigm computational language developed by Wolfram Research. It emphasizes symbolic computation, functional programming, and rule-based programming and can employ arbitrary structures and data. It is the programming language of the mathematical symbolic computation program Mathematica.

Content

  • History

  • Syntax

    • Basics
    • Syntax sugar
    • Functional programming
    • Pattern matching
  • Implementations

 


7 hours

$1,990

Get to know Chipmunk Basic

About

Chipmunk Basic is a freeware interpreter for the BASIC programming language (release 3, version 6, update 6, patch 0) for Mac OS X (Snow Leopard) or newer, by Ron H. Nicholson. Some statements work only in the GUI version, others via the command-line interface, or both. Most commands and statements should work more or less the same under other supported platforms such as Linux or Microsoft Windows. There is no obligation to start statements with a line number if you write them using an advanced syntax-checking editor such as TextWrangler for OS X or Notepad++ on Windows.

Content


  • Commands
  • Constants
  • Files
    • directory
    • file
    • input
    • print
    • serial-ports
  • Functions
  • Graphics
  • Objects
  • Operators
    • Arithmetic
    • Boolean algebra
    • Comparison
  • Sound
  • Statements
  • Subroutines
  • Downloads
    • Linux
    • Mac
    • Raspberry Pi
    • Windows

 


21 hours

$5,970

Discover Monkey Programming Language

About

Monkey is a BASIC-dialect programming language that translates Monkey code into source code for multiple target platforms. Currently, the supported target platforms include Windows, Mac OS X, Android, iOS, HTML5, Flash and XNA.

Monkey is the latest programming language by Blitz Research Limited, following BlitzMax (2004) and BlitzBasic (2000), two prior BASIC programming dialects from the same author.

Monkey code is translated into the target language via the Trans tool and then handed to a native compiler, depending on the target platform. Monkey requires the use of other compilers and development kits to reach the end target. This process is largely automated with the accompanying IDEs, Monk (2011) and Ted (2012).

Content

  • History
  • Compilation
  • Modules
  • Code examples

14 hours

$3,980

Discover MUMPS Programming

About

MUMPS is a programming language. Its name is an acronym for Massachusetts General Hospital Utility Multi-Programming System.

If you have programmed before and would like to see a little of how MUMPS works and how it differs from other programming languages, this course provides an overview.

Content

  • Beginning MUMPS
  • MUMPS syntax and functions: lines, spaces, commands & arguments
  • Basic terminology: routines & globals (programs: routines; database: globals)
  • MUMPS features
  • MUMPS and data
  • MUMPS code and interfaces
  • Setting up to program MUMPS: set up the IDE and install MUMPS
  • Advanced MUMPS
  • History of MUMPS

7 hours

$1,990

Learn PWCT (Programming Without Coding Technology)

About

PWCT is a general-purpose visual programming language and software development platform that enables the development of systems and applications, by generating interactive steps instead of writing code.

PWCT is Free and open-source software under the GNU General Public License version 2.

The visual source inside PWCT is designed using the Goal Designer where the programmer can generate the steps tree through the interaction with the visual language components.

Content

  • Concept
  • Features
  • Visual Languages

 

 

 


21 hours

$5,970

Get to know Vala Programming

About

Vala is a new programming language that aims to bring modern programming language features to GNOME developers without imposing any additional runtime requirements or different Application Binary Interfaces (ABIs) compared to applications and libraries written in C.

Content

  • History
  • Programming Style
  • Getting Started
  • Concepts
  • Syntax
  • Libraries
  • Techniques

21 hours

$5,970

Discover Zope

About

Zope is an open source web server built on top of Python. It is most commonly used for the content management systems Plone and CPS and the enterprise resource planning system ERP5. This course will help you install and operate a Zope server.

Content

  • Installation
    • Windows
    • Unix and Linux
      • Debian/Ubuntu using Debian packages
      • Building from source
  • Getting Started
  • Using Zope with Apache

14 hours

$3,980

A Little C Primer

About

This course is a quick introduction to the C programming language. It is written by a novice and is intended for use by a novice. However, it does assume familiarity with a programming language.

The C programming language is a "middle-level" language. It provides low-level programming capability at the expense of some user-friendliness. Cynics tend to claim that C combines the flexibility and power of assembly language with the user-friendliness of a high-level language, but experienced programmers find that the limited set of keywords and the use of pointers allow for fast and elegant programming solutions. C first rose to popularity with the growth of UNIX, and it has been used in creating the Windows operating system from its earliest versions. It is also used in microcontrollers and supercomputers.

The original implementations of C were defined as described in the classic reference, THE C PROGRAMMING LANGUAGE, authored by Brian Kernighan and Dennis Ritchie. This definition left a few things to be desired, and the American National Standards Institute (ANSI) formed a group in the 1980s to create a complete specification. The result was "ANSI C", which is the focus of this document.

Content

  • Tools For Programming
  • Table of Contents
    • Introduction to C; Functions; Control Constructs
    • C Variables, Operators & Preprocessor Directives
    • C Input & Output
    • C Library Functions & Other Comments
    • C Quick Reference
    • Comments and Revision History

21 hours

$5,970

Fundamentals of F# Programming

About

This course is suitable for complete beginners to F# and functional programming in general. F# is a functional programming language. Not surprisingly, functions are a big part of the language, and mastering them is the first step to becoming an effective F# developer. "Data structure" is a fancy term for anything that helps programmers group and represent related values in useful, logical units. F# has a number of built-in data structures, including tuples, records, lists, unions, and several others. F# is an "impure" programming language, meaning it allows programmers to write functions with side effects and mutable state, very similar to the programming style used by imperative programming languages such as C# and Java. F# is a CLI/.NET programming language, and the CLI is an object-oriented platform. One of the most important features of F# is its ability to mix and match styles: since the .NET platform is object oriented, you often work with objects in F#. F# is easy enough for beginners to learn as their first language, yet it provides a powerful set of tools that experienced developers can appreciate. The course also covers advanced syntactic constructs and techniques often used in F# programs.

Content

  • Introduction

  • Set-Up 

  • Basic Concepts 

  • Declaring Values and Functions

  • Pattern Matching Basics 

  • Recursion and Recursive Functions

  • Higher Order Functions 

  • Data Structures

    • Option Types

    • Tuples and Records 

    • Lists 

    • Sequences 

    • Sets and Maps 

    • Discriminated Unions 

  • Imperative Programming

    • Mutable Data 

    • Control Flow 

    • Arrays 

    • Mutable Collections

    • Basic I/O

    • Exception Handling

  • Object Oriented Programming

    • Operator Overloading

    • Classes 

    • Inheritance

    • Interfaces 

    • Events

    • Modules and Namespaces


21 hours

$5,970

Making a Programming Language From Scratch

About

This course covers the art of language creation. Making a language is a sophisticated task; however, simple languages can be made by transpiling to other higher-level languages or by using lexing and parsing packages such as Flex and Bison. This course does not cover that approach. It demonstrates the creation of languages from nothing at all, as most commercial languages are built. It covers the basic algorithms for conversion, assembly language equivalents for some common statements, the advantages and disadvantages of each type of compilation method, and basic lexing and parsing. Note that this course assumes that you have at least a moderate understanding of x86 assembly and can write programs in some language. Keep in mind that language creation is an exhaustive process and will require many days of hard labor.
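The course itself works down at the assembly level, but the idea behind "basic lexing" can be sketched in a few lines of Python; the token set below is invented purely for illustration and is not part of the course materials.

```python
# A toy lexer: turning source text into a stream of tokens, the first step
# before parsing. Token names and patterns here are made up for the example.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]

def lex(source):
    pattern = "|".join(f"(?P<{name}>{regex})" for name, regex in TOKEN_SPEC)
    for match in re.finditer(pattern, source):
        if match.lastgroup != "SKIP":       # drop whitespace tokens
            yield (match.lastgroup, match.group())

print(list(lex("total = price + 42")))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'price'), ('OP', '+'), ('NUMBER', '42')]
```

A hand-written compiler would feed such a token stream into a parser and then emit assembly for each recognized construct.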

Content

Preliminaries

  • Decisions

  • Line by Line Input System

Data Declarations

  • Simple Data Types

  • Arrays

  • Pointers

  • Structures

Expressions

  • Simple Expressions

  • Complex Expressions

Conditions

  • Comparing Two Values

  • Complex Conditions

  • The Braces Problem

  • If statements

  • Else if and else

  • While statements

Functions

  • Localizing

  • Function Definitions

  • Function Call


7 hours

$1,990

Windows Programming

About

This course aims to be a comprehensive source for any developer who is interested in programming for the Windows platform. It starts at the lowest level, with the Win32 API (C and VB Classic), and then moves on to MFC (C++). Beyond these basic sections, it covers COM and the creation of ActiveX modules from a variety of languages. Next, it delves into the Windows DDK and talks about programming device drivers for the Windows platform. Finally, it moves on to the highest-level programming tasks, including shell extensions, shell scripting, and finally ASP and WSH. Other topics discussed here include writing screen savers, creating HTML Help modules, and compiling DLL files. This course focuses on topics that are specific to Windows and avoids general programming topics. For related material the reader is encouraged to look into other Wikibooks works, which cover general programming, ASM, C, C++, Visual Basic, Visual Basic .NET, and other languages and concepts in greater detail; appropriate links to these books are provided. The reader is assumed to have previous knowledge of the programming languages involved. Specifically, prior knowledge of C, C++, and Visual Basic is required for certain sections of this course.

Content

Section 1: Windows Basics

  • Windows System Architecture

  • User Mode vs Kernel Mode

  • C and Win32 API

  • Handles and Data Types

  • Unicode

  • Dynamic Link Libraries (DLL)

  • Programming Windows With OSS Tools

  • Resource Scripts

Section 2: Win32 API and UI Controls

  • Message Loop Architecture

  • Interfacing (Mouse, keyboard, and timer messages)

  • Window Creation

  • User Interface Controls

  • GDI and Drawing

  • Dialog Boxes

  • Input-Output

  • File Management

  • Memory Subsystem (heaps, virtual memory)

  • Multitasking

  • Interprocess Communication

  • MDI Programs

  • Registry API

  • Security API

  • Winsock

Section 3: Microsoft Foundation Classes (MFC)

  • Microsoft Foundation Classes (MFC)

    • Classes Hierarchy

Section 4: Dynamic Data Exchange (DDE), ActiveX and COM

  • Dynamic Data Exchange (DDE)

  • COM and ActiveX

  • COM Programming

  • DCOM and COM+

  • Multi-language programming examples

  • OLE Programming

Section 5: Device Driver Programming

  • Device Driver Introduction

  • The DDK

  • Driver Structure

  • Driver API

  • Terminate and Stay Resident (TSR)

  • Virtual Device Drivers (VXD)

  • Windows Driver Model (WDM)

  • Vista Driver Migration

Section 6: Shell Programming

  • Programming Shell Extensions

  • Extending IE

  • Programming Screen-savers

  • Programming Services

  • Programming CMD aka Windows Batch Programming

    • Sample FTP script

  • Control Panel Applets

  • Windows Script Host

  • ASP

    • JScript

    • VBScript

    • PerlScript

  • Compiled HTML Help and Help API


35 hours

$9,950

Learn Raku Programming

About

Raku is a successor of the Perl programming language, representing a major backwards-incompatible rewrite of the language. It's a versatile and powerful multi-paradigm programming language. This course is going to introduce the reader to the Raku language and its many features.

Content

  • Introduction

  • Raku Basics

    • Variables and Data
    • Types and Context
    • Basic Operations
    • Control Structures
    • Subroutines
    • Blocks and Closures
    • Classes And Attributes
    • Comments and POD
  • Rules and Grammars

    • Regular Expressions
    • Grammars
    • Operator Overloading
    • Language Extensions
  • Data Types and Operators

    • Junctions
    • Lazy Lists and Feeds
    • Meta Operators
    • Roles and Inheritance
  • Blocks and Subroutines

    • Advanced Subroutines
    • Exceptions and Handlers
    • Property Blocks
  • Multitasking and Concurrency

    • Coroutines
    • Threading
    • Save States
  • Input and Output

    • Files

 


21 hours

$5,970

Discover Alcor6L Programming

About

Alcor6L is a simple-to-use multi-language interactive programming environment that runs on a variety of embedded hardware devices.

It runs programs in:

  • Lua using eLua
  • C using PicoC
  • Lisp using PicoLisp

and in development:

  • BASIC using MY-BASIC
  • Scheme using TinyScheme
  • Forth (which?)

on cheap 32-bit single-board computers:

  • The Mizar32 models A and B by simplemachines.it, available from 4star.it
  • LM3S and STM32 ARM boards

 


7 hours

$1,990

Pascal Programming

About

Pascal is an influential computer programming language named after the mathematician Blaise Pascal. It was invented by Niklaus Wirth in 1968 as a research project into the nascent field of compiler theory.

Content

Standard Pascal

  • Getting started 
  • Beginning Pascal 
  • Variables and Constants 
  • Input and Output 
  • Expressions and Branches 
  • Routines 
  • Enumerations 
  • Sets
  • Arrays 
  • Strings
  • Records 
  • Pointers 
  • Files
  • Scopes

Extensions

  • Units 
  • Object-oriented Programming 
  • Exporting to libraries
  • Foreign Function Interfaces
    • Objective Pascal
  • Generics
  • Miscellaneous extensions

7 hours

$1,990

Prolog Fundamentals

About

Welcome to the Prolog course. It can serve as a tutorial for anyone who wants to learn the Prolog programming language. No prior programming experience is required, though some basic knowledge of logic can come in handy. For those new to the subject, a short introduction to logic is given, but this is not required reading. The first chapters of the course (under Basics) describe the central syntax and features of the language.

Content

  • Introduction
  • Rules
  • Recursive Rules
  • Variables
  • Lists
  • Math, Functions and Equality
  • Cuts and Negation
  • Reading and Writing code
  • Difference Lists
  • Definite Clause Grammars
  • Inference Engines
  • Testing Terms
  • Bagof, Setof and Findall
  • Modifying the Database
  • Input and Output

7 hours

$1,990

Competitive Programming

About

This course is about programming competitions and what you need to know in order to be competitive. The primary reason people compete in programming contests is that they enjoy it. For many, programming is actually fun, at least until they get a job doing it and their desire is burnt out of them. It is also a good way to meet people with similar interests to your own. But for those of you who need additional incentive, it is also a good way to increase others' awareness of you. Major programming competitions are always monitored by people looking for new talent for their organizations; sometimes these are the people who actually fund the contest. High school programming contests (such as the ones sponsored by the BPA) often help prepare students for college or careers in computer programming, and they often attract scouts from colleges looking to award scholarships. For example, IBM currently funds the ICPC, a contest that costs it millions annually. Why would it pay so much for a programming contest? IBM views it as an investment: it lets the company filter through the talent and reach those who could potentially make IBM much more money in the long run. Before IBM, the contest was funded by Microsoft for the same reasons. Organizations that feel they cannot quite afford the huge price tag associated with the ICPC have begun to fund cheaper contests, such as TopCoder, and in the case of Google, to run their own contests through TopCoder's technology. The first thing needed to get started is proficiency in a programming language and familiarity with a text editor and development environment. The two languages common to all of the above programming competitions are C++ and Java, and these languages will be used throughout this course. There are many books and tutorials available for learning these languages, in addition to an unending amount of freely available code on the internet.

Content

  • What Is This Course About?
  • Where Can I Compete?
  • Why Should I Compete?
  • How Do I Get Started?
  • Which Language Should I Use?
  • What Are The Contests Like?

7 hours

$1,990

Introduction to Programming

About

This course will introduce various concepts in computer programming. There are some simplifications in the explanations below. The purpose of programming is to tell the computer what to do. Computers are better at doing some things than you are, much as a chef is better at cooking than you are. It's easier to tell a chef to cook you a meal than to cook it yourself, and the more precise you are in your request, the more your meal will turn out how you like it. In most scenarios like this in real life, small amounts of ambiguity and misinterpretation are acceptable; perhaps the chef will boil your potato before mashing it instead of baking it. With computers, however, ambiguity is rarely acceptable. Programs are so complex that if the computer just guessed at the meaning of ambiguous or vague requests, it might cause problems so subtle that you'd never find them. Programming languages, therefore, have been designed to accept only completely clear and unambiguous statements. The program might still have problems, but the blame is then squarely on you, not on the computer's guess. Much of the difficulty in programming comes from having to be perfectly precise and detailed about everything instead of just giving high-level instructions. Some languages require total and complete detail about everything; C and C++ are such languages and are called low-level languages. Other languages make all sorts of assumptions, which lets the programmer specify less detail; Python and Basic are such languages and are called high-level languages. In general, high-level languages are easier to program in but give you less control. Control is sometimes important, for example if you want your program to run as quickly as possible. Most of the time total control and speed aren't necessary, and as computers get faster, high-level languages become more popular. Here we will deal exclusively with high-level languages. Low-level languages are generally similar, except there are often more lines of code to write to accomplish the same thing.
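To make the high-level/low-level contrast concrete, here is a small illustrative snippet in Python (one of the high-level languages mentioned above); the values are invented for the example.

```python
# A high-level language lets you state *what* you want with very little detail:
prices = [3.50, 2.25, 4.00]
total = sum(prices)          # one built-in call; no loops, counters or memory management
print(f"Total: {total:.2f}")

# It still demands an unambiguous request, though: "add up the prices" has to be
# spelled out as sum(prices) - the computer never guesses what you meant.
```

In a low-level language the same task would require you to declare types, write the loop, and manage the accumulator yourself.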

Content

  • Components of a computer
  • Programming Languages
  • Statements
  • Syntax
  • Semantics
  • Variables
  • Math
  • Conditionals
  • Input
  • Loops

7 hours

$1,990

Object Oriented Paradigm

About

Object orientation is a way of programming that bears a remarkable resemblance to the way electric devices have evolved. This page explains the problems of the past and how they were solved by components (in electric devices) and objects (in programs). The first electric devices were one web of components: if you looked into the interior of an ancient radio, everything was connected, and only a few things could be disconnected, such as the power plug and maybe external speakers. Sometimes these devices came with an electrical scheme that showed everything in detail. You could look for hours at such a scheme and discover all sorts of functions in it, like radio reception, amplification, tone control, etc. The first computer programs had the same structure. A program written in assembler or in Basic could easily span hundreds or thousands of lines of code. It had to, because all the details had to be provided in one code file. Of course, a developer was really proud in those days if he managed to make a working program with a decent set of features, but maintenance was hard. And if you wanted another program, you had to write it from scratch. Reusing code from previous programs was not really impossible (you could copy some subroutines), but it was not exactly easy.

Content

  • Introduction
  • Back to object orientation
    • Interfaces and plugs
    • Responsibility
      • Chains of Responsibility
    • Objects and classes
      • Unit Tests
      • Exceptions
    • Polymorphism
  • Back to Organization
    • Inheritance and Specialization
    • Responsibility, Skill, and Escalation
  • Object-Oriented Principles
    • The Open-Closed Principle
    • Refactoring

7 hours

$1,990

Julia for MATLAB Users

About

This course is a place to capture information that could be helpful for people interested in migrating code from MATLAB™ to Julia, and also for those who are familiar with MATLAB and would like to learn Julia. It is meant to supplement existing resources, for instance the "noteworthy differences from other languages" page from the Julia manual; however, this course intends to be more comprehensive and to be structured in such a way as to make it easy to find answers to specific questions. All of the content here is geared towards someone with a MATLAB background. In general, this course assumes that the reader is well acquainted with the fundamentals of MATLAB and any aspects of that product that they are interested in seeing the Julia equivalent(s) for; this course is not intended to be a resource for learning MATLAB! In contrast, we do not assume any knowledge of Julia, but leave general-purpose introductions to the language to other resources (see Related Resources below for some of these). The course contains distinct sections offering readers different ways to approach learning how to use Julia from the point of view of a MATLAB user. The first part provides a guided tour of Julia intended to orient a typical MATLAB user to some of the most significant aspects of Julia, emphasizing what might be some of the more unexpected differences and also highlighting some of the areas where Julia has particular strengths relative to MATLAB.

Content

  • Introduction
  • Contents
    • Introduction to Julia for MATLAB users
    • Tutorials
    • MATLAB-to-Julia Functions Mapping
  • Applicable Versions
  • Related Resources
    • Julia
    • Julia and MATLAB

14 hours

$3,980

Lua Functional Programming

About

This course is about the Lua programming language, inspired by and based on Paul Graham's work On Lisp. You should be familiar with the Lua language. Familiarity with the Lisp language is recommended but not required (the author is not too familiar with it either, but has read introductory tutorials, and On Lisp does a pretty good job of explaining in English what the code snippets do). On Lisp is an advanced Lisp tutorial showing the reader Lisp programming best practices. Lisp is a language suited for functional programming. The purpose of this course is to investigate whether Lua can be used for similar functional programming tasks as Lisp, and whether Lua might actually be a "better" Lisp. To do that, the author has attempted to duplicate (in Lua) all the code snippets featured in On Lisp, among other things. The chapters here have a one-to-one correspondence with On Lisp wherever possible. As a side note, the Lua programming language is used to create add-ons for the ever-popular MMORPG World of Warcraft.

Content

  • History
  • Features
    • Syntax
    • Control flow
    • Functions
    • Tables
    • Metatables
    • Object-oriented programming
  • Implementation
  • C API
  • Applications
  • Languages that compile to Lua

7 hours

$1,990

Learn Lush Basic

About

Lush is a lisp-like object-oriented programming language designed for researchers, experimenters, and engineers interested in numerical applications, including computer vision and machine learning. Lush is designed to be used in situations where one would want to combine the flexibility of a high-level, weakly-typed interpreted language, with the efficiency of a strongly-typed, natively-compiled language, and with the easy integration of code written in C, C++, or other languages. 

Content

  • Features
  • Installation
  • Beginning Lush
  • Classes and Objects
  • Compiling
  • Debugging
  • The Vector, Matrix, Tensor library
  • Input and Output

7 hours

$1,990

Parallel Computing and Computer Clusters

About

Parallel computing and computer clusters are two separate subjects. However, there is a very large crossover between the two, which would make for large quantities of duplicated material if they were described separately. The aim of this course is to provide a solid foundation for understanding all aspects of both parallel computing and computer clusters. It begins by providing an overview of both of the terms used in the course's title and then breaks down existing hardware and software practices to see how they fit into the overall picture. The text continues on to describe common features of parallel computing and computer clusters in their basic forms; sometimes these are features not readily associated with the field, such as the task scheduling in everyday operating systems.

Content

  • Overview
  • Micro Processor Units
  • Memory
  • Software
  • Theory
  • Feature Set
  • The End Result
  • Example Technologies

7 hours

$1,990

Discover Plezuro Programming

About

Plezuro is a programming language created by Piotr Sroczkowski. It enables fast development and is open source (licensed under the GNU GPL).

Content

  • Introduction
  • Getting started
  • Built-in types
  • Let's code a little bit
  • Lists

7 hours

$1,990

Object Oriented Programming

About

Object Oriented Programming (OOP) means any kind of programming that uses a programming language with some object-oriented constructs, or programming in an environment where some object-oriented principles are followed. At its heart, though, object-oriented programming is a mindset that treats programming as a problem-solving exercise on a grand scale, requiring careful application of abstractions and the subdivision of problems into manageable pieces. Compared with procedural programming, a superficial examination of code written in both styles would reveal that object-oriented code tends to be broken down into vast numbers of small pieces, with the hope that each piece will be trivially verifiable. OOP was one step towards the holy grail of software reusability, although no new term has gained widespread acceptance, which is why "OOP" is used to mean almost any modern programming distinct from systems programming, assembly programming, functional programming, or database programming. Modern programming would be better categorized as "multi-paradigm" programming, and that term is sometimes used. This course is primarily aimed at modern, multi-paradigm programming, which has classic object-oriented programming as its immediate predecessor and strongest influence. Historically, "OOP" has been one of the most influential developments in computer programming, gaining widespread use in the mid-1980s. Originally heralded for its facility for managing complexity in ever-growing software systems, OOP quickly developed its own set of difficulties. Fortunately, the ever-evolving programming landscape gave us "interface" programming, design patterns, generic programming, and other improvements paving the way for more contemporary multi-paradigm programming. While some people will debate endlessly about whether or not a certain language implements "pure" OOP (and bless or denounce a language accordingly), this course is not intended as an academic treatise on object-oriented programming or its theory. Instead, we aim for something more pragmatic: we start with basic OO theory and then delve into a handful of real-world languages to examine how they support OO programming. Since we obviously cannot teach each language, the point is to illustrate the trade-offs inherent in different approaches to OOP.

Content

  • Introduction
  • Classes
    • Properties
    • Methods
    • Constructors
      • Lifetime Management
    • Getters and Setters
    • Static vs Dynamic
    • Private vs Public
  • Encapsulation
  • Inheritance
    • Subclasses
    • Superclasses
  • Polymorphism
  • Abstraction
  • Advanced Concepts
    • State
    • Interfaces
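As a rough illustration of several of the topics listed above (classes, constructors, getters, inheritance and polymorphism), here is a minimal Python sketch; the Shape, Circle and Square classes are invented for the example and are not part of the course materials.

```python
# A minimal sketch of classes, constructors, a getter, inheritance and polymorphism.
class Shape:
    def __init__(self, name):          # constructor
        self._name = name              # "private" by convention (encapsulation)

    @property
    def name(self):                    # getter
        return self._name

    def area(self):                    # method intended to be overridden
        raise NotImplementedError

class Circle(Shape):                   # inheritance: Circle is a subclass of Shape
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self.side = side

    def area(self):
        return self.side ** 2

# Polymorphism: the same call works on any Shape subclass.
for shape in (Circle(1.0), Square(2.0)):
    print(shape.name, shape.area())
```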

14 hours

$3,980

Introductory PLC Programming

About

A Programmable Logic Controller, or PLC, is more or less a small computer with a built-in operating system (OS). This OS is highly specialized and optimized to handle incoming events in real time, i.e., at the time of their occurrence. The PLC has input lines, to which sensors are connected to notify it of events (such as temperature above/below a certain level, liquid level reached, etc.), and output lines, to which actuators are connected to effect or signal reactions to the incoming events (such as starting an engine, opening/closing a valve, and so on). The system is user-programmable. It uses a language called "Relay Ladder" or RLL (Relay Ladder Logic), whose name implies that the control logic of the earlier days, which was built from relays, is being simulated; several other languages are also used. More generally, a programmable logic controller is a digital computer used for automation of typically industrial electromechanical processes, such as control of machinery on factory assembly lines, amusement rides, or light fixtures. PLCs are used in many machines, in many industries. They are designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory. A PLC is an example of a "hard" real-time system, since output results must be produced in response to input conditions within a limited time, otherwise unintended operation will result.
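Real PLCs are programmed in Relay Ladder Logic rather than a general-purpose language, but the scan cycle covered below can be sketched in ordinary Python to convey the idea; the sensor and actuator names here are invented for the illustration.

```python
# Illustration only (not RLL): a PLC-style scan cycle - read inputs, solve logic,
# write outputs - repeated within a fixed time budget.
import time

def read_inputs():
    # On a real PLC this samples the physical input lines (sensors).
    return {"level_high": True, "temp_over_limit": False}

def solve_logic(inputs):
    # Stands in for evaluating the ladder-logic program.
    return {"pump_on": inputs["level_high"] and not inputs["temp_over_limit"]}

def write_outputs(outputs):
    # On a real PLC this drives the physical output lines (actuators).
    print(outputs)

SCAN_TIME = 0.01  # a "hard" real-time controller must finish every scan within its budget

for _ in range(3):  # a real controller loops forever
    start = time.time()
    write_outputs(solve_logic(read_inputs()))
    time.sleep(max(0.0, SCAN_TIME - (time.time() - start)))
```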

Content

  • Introduction
    • What is a Programmable Logic Controller (PLC)?
    • PLC usage scenarios
    • History of PLCs
    • Recent developments
  • Basic Concepts
    • How the PLC operates
      • Scan cycle
  • Basic instructions

 


7 hours

$1,990

Learn Programmable Logic

About

This course will cover the design and implementation of programmable logic devices (PLDs) using the Verilog, VHDL, and SystemC hardware description languages. It is not meant to be a comprehensive reference to these languages, but rather a quick guide that covers the parts essential to developing effective digital designs. This course will also cover programming a variety of specific programmable devices, such as FPGA and CPLD devices. The topics intended to be covered are: programming HDLs, concurrent programming, and synthesis. Previous knowledge of programming in a high-level language is assumed. Knowledge of semiconductors is helpful but not necessary. Previous experience with or knowledge of digital circuits is a must.


7 hours

$1,990

Software Quality Assurance

About

A high-quality software application makes its users happy. They enjoy using it and it doesn't get in their way. It yields the right results quickly, without requiring workarounds for bugs, does not crash or hog the system, and allows the users to go on with the rest of their lives. Either the program is invisible to them and they don't think about using it, or it works so well that they enjoy using it and possibly recommend it to their friends. On the other hand, a software application of poor quality annoys, irritates and/or frustrates its users, or even causes them to lose a lot of time, money or worse. Either it is too slow and they lose patience waiting for it to perform its function, or it crashes a lot or hogs the system, or it looks ugly or has a poorly designed user interface, or it has other bugs, like those causing data loss. Whatever its faults are, it fails or partially fails to be a useful tool in the hands of the user. As software developers, it is our mission to make sure the software we produce is of high quality, so it will perform its function properly without inflicting anguish or loss upon the user.

Content

Proposed Methods to Achieve Quality

  • Reuse Existing Efforts
    • Against Code Reuse
    • How to find reusable code
  • Writing Functional Specifications
    • Opinions Against Functional Specifications
  • Code Design
    • Opposing views on how to design
  • Refactoring code and rewriting it
  • Writing Automated Tests
  • Having Testers
  • Hiring the Best Developers
  • Using a Version Control System
  • Using a Bug-tracking System
  • Pair-programming

7 hours

$1,990

Discover Fedena

About

Fedena is a free and open-source school management web application. This course will serve as a user guide and manual for new as well as experienced users of Fedena.

Content

  • Introduction
  • Installation 
  • Getting Started
  • Courses and Batches
  • Settings 
  • Student Admission 
  • Student Search
  • Human Resources
  • Time Table
  • News Management
  • Event Management
  • Student Attendance
  • Examination
  • User Management
  • Finance
  • Employee Login
  • Student and Parent Login
  • Other Features
  • FAQ

7 hours

$1,990

Discover Ratpoison Window Manager

About

Ratpoison is a window manager written for the X Window System, principally developed by Linux users (though it should run on any operating system and platform that X Window runs on). It is very different from most other window managers in that it is a tiling window manager and it tries to minimize or completely eliminate the use of the mouse, in order to relieve stress on the arms and shoulders.

In keeping with this old-school approach, Ratpoison is also extremely lightweight, and there are no "fat library dependencies", no fancy graphics beyond what other programs provide, and no window decoration. While this might seem odd at first, starting to use it is in fact quite simple. If you've ever used the GNU Screen terminal multiplexer or GNU Emacs, you'll be quite familiar with the interface, since most of the hotkeys and design ideas are borrowed from those programs.

Content

  • Introduction and Features
  • Basic Key Strokes
  • Pros and Cons
  • Advanced Usage
  • Tips and Tricks

7 hours

$1,990

Lino Developer's Guide

About

This course describes how to install Lino: system requirements, setting up a Python environment, and running your first Lino site. It assumes you are familiar with the Linux shell, at least for basic file operations like ls, cp, mkdir and rmdir, file permissions, environment variables, bash scripts, etc. Otherwise we suggest learning about working in a UNIX shell first. Lino theoretically works under Python 3, but we currently still recommend Python 2. If you just want it to work, then choose Python 2; otherwise consider giving it a try under Python 3 and report your experiences. We assume you have pip installed; pip is not automatically bundled with Python 2, but it has become the de facto standard and can be installed from Debian's packages. You will also need to install git on your computer to get the source files.


7 hours

$1,990

Object And Facial Detection In Python

About

Object detection (OB) is one of the computer technologies connected to the image processing and computer vision branches of artificial intelligence. OB deals with detecting instances of objects such as human faces, buildings, trees, cars, etc. The primary aim of face detection algorithms is to determine whether there are any human faces in an image or not. A short OpenCV-based sketch follows the list below. This course primarily covers:

  • Training course for learning and using object and face detection.
  • dlib, opencv and tensorflow implementation.
  • Development System - A system I created that makes it possible to test several cool functions with face detections.
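As a rough sketch of the OpenCV route mentioned above (dlib and TensorFlow are the other implementations covered), classical Haar-cascade face detection looks roughly like this; the image file name is a placeholder, not part of the course materials.

```python
# Minimal face detection with OpenCV's bundled Haar cascade.
import cv2

image = cv2.imread("photo.jpg")                      # placeholder input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Returns one (x, y, w, h) rectangle per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_annotated.jpg", image)
```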


7 hours

$1,990

Bcc2020CLE

About

Command Line Essentials for Bioinformaticians

Training course for BCC2020

Session overview

We will start the session with a quick refresher on the basics of bash. I will then introduce a few well known unix tools and features of the shell with a focus on how to use these to make key bioinformatics tasks easier and more efficient.

Getting the most out of your shell (bash centric)

As bioinformaticians we regularly deal with directories filled with hundreds of files and have to manage running an equally large number of parallel jobs. There are many features of the shell that can make this easier. Here I will focus on some of the key ones that I use often. Tools: bash (loops, functions, strings), xargs, parallel

Manipulating tabular data

Lots of bioinformatics data is tabular: GFF, VCF, SAM. Using these formats as examples, I will introduce some useful tools for manipulating tabular data. Tools: cut, paste, awk, shuf, comm

Manipulating sequence data

Manipulating sequence data like fasta and fastq requires specialised bioinformatic tools. Two very useful ones are samtools and bioawk. This section will show you how to easily accomplish common tasks like splitting, sampling or reformatting a large sequence file. Tools: samtools, bioawk

Prerequisites

A laptop with a modern web browser


7 hours

$1,990

Code Demos

About

Coding Demonstrations: Programming for Social Science Research

Computational methods for collecting, cleaning and analysing data are an increasingly important component of a social scientist's toolkit. Central to engaging in these methods is the ability to write readable and effective code using a programming language; a brief example follows the topic list below.

Topics

The following topics are covered under this training series:

  1. Introduction to Python for social scientists - learn how to utilise the Python programming language for core social science research tasks.
  2. Collecting data I: web-scraping - learn how to collect data from websites using Python.
  3. Collecting data II: APIs - learn how to download data from online databases using Python and Application Programming Interfaces (APIs).
  4. Setting up your computational environment - learn how to create, manage and share a computational environment for social science research projects.
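For instance, the web-scraping task in topic 2 above might look roughly like the following sketch using the requests and BeautifulSoup libraries; the URL and the h2 selector are placeholders, not course materials.

```python
# A minimal web-scraping sketch: fetch a page and extract its <h2> headings.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.org/articles", timeout=10)
response.raise_for_status()                      # fail loudly on HTTP errors

soup = BeautifulSoup(response.text, "html.parser")
titles = [h2.get_text(strip=True) for h2 in soup.select("h2")]

for title in titles:
    print(title)
```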

    Materials

    The Training Course - including recordings, slides, and sample Python code - can be found in the following folders:

    • code - run and/or download the code examples using our Jupyter notebook resources.
    • installation - view instructions for how to download and install Python and other packages necessary for working with new forms of data.
    • webinars - watch recordings of the coding demonstrations on our Youtube channel.

      Acknowledgements

      We are grateful to UKRI through the Economic and Social Research Council for their generous funding of this training series.

      Further Information

    • To access learning materials from the wider New Forms of Data for Social Science Research training series: [Training Course]
    • To keep up to date with upcoming and past training events: [Events]
    • To get in contact with feedback, ideas or to seek assistance: [Help]

      Thank you and good luck on your journey exploring new forms of data!

      Dr Julia Kasmire and Dr Diarmuid McDonnell, UK Data Service, University of Manchester

7 hours

$1,990

Comp Soc Sci

About

Being a Computational Social Scientist

Scientific research and teaching are increasingly influenced by computational tools, methods and paradigms. The social sciences are no different, with many new forms of social data only available through computational means (Kitchin, 2014). While to some degree social science research has always been marked by technological approaches, the field of computational social science involves the use of tools, data and methods that require a different skill set and mindset.

Topics

The following topics are covered under this training series:

  1. Thinking computationally
  2. Writing code
  3. Computational environments
  4. Manipulating structured and unstructured data
  5. Reproducibility of the scientific workflow

    Materials

    The Training Course - including webinar recordings, slides, and sample Python code - can be found in the following folders:

    • code - run and/or download example Python code using our Jupyter notebook resources.
    • faq - read through some of the frequently asked questions that are posed during our webinars.
    • installation - view instructions for how to download and install Python and other packages necessary for working with new forms of data.
    • reading-list - explore further resources including articles, books, online resources and more.
    • webinars - watch recordings of our webinars and download the underpinning slides.

      Acknowledgements

      We are grateful to UKRI through the Economic and Social Research Council for their generous funding of this training series.

      Further Information

    • To access learning materials from the wider New Forms of Data for Social Science Research training series: [Training Course]
    • To keep up to date with upcoming and past training events: [Events]
    • To get in contact with feedback, ideas or to seek assistance: [Help]

      Thank you and good luck on your journey exploring new forms of data!

      Dr Julia Kasmire and Dr Diarmuid McDonnell, UK Data Service, University of Manchester

7 hours

$1,990

Abseil Cpp

About

Abseil - C++ Common Libraries

The repository contains the Abseil C++ library code. Abseil is an open-source collection of C++ code (compliant with C++11) designed to augment the C++ standard library.

About Abseil

Abseil is an open-source collection of C++ library code designed to augment the C++ standard library. The Abseil library code is collected from Google's own C++ code base, has been extensively tested and used in production, and is the same code we depend on in our daily coding lives. In some cases, Abseil provides pieces missing from the C++ standard; in others, Abseil provides alternatives to the standard for special needs we've found through usage in the Google code base. We denote those cases clearly within the library code we provide you. Abseil is not meant to be a competitor to the standard library; we've just found that many of these utilities serve a purpose within our code base, and we now want to provide those resources to the C++ community as a whole.

Quickstart

If you want to just get started, make sure you at least run through the quickstart guide, which contains information about setting up your development environment, downloading the Abseil code, running tests, and getting a simple binary working.


7 hours

$1,990

WinObjC

About

The Windows Bridge for iOS (also referred to as WinObjC) is a Microsoft open-source project that provides an Objective-C development environment for Visual Studio and support for iOS APIs. The bridge allows you to create Universal Windows Platform (UWP) apps that will run on many Windows devices by re-using your Objective-C code and iOS APIs alongside Windows 10 features like Cortana and Windows Notifications.


7 hours

$1,990

ZeroShotEval

About

A Python toolkit for evaluating the quality of classification models for Zero-Shot Learning (ZSL) tasks, including model construction with different internals (e.g., miscellaneous neural network architectures) and procedures for training and saving models for further evaluation.
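As a generic illustration of what zero-shot evaluation involves (this is not the ZeroShotEval API), a model's sample embeddings can be scored against class-description vectors and accuracy computed over unseen classes:

```python
# Generic zero-shot classification scoring: assign each sample to the unseen
# class whose description vector is most similar to the sample's embedding.
import numpy as np

def zero_shot_accuracy(sample_embeddings, class_embeddings, true_labels):
    # Cosine similarity between every sample and every class description.
    s = sample_embeddings / np.linalg.norm(sample_embeddings, axis=1, keepdims=True)
    c = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    predictions = np.argmax(s @ c.T, axis=1)
    return float(np.mean(predictions == np.asarray(true_labels)))

# Toy example: 3 samples and 2 unseen classes described in a 4-dimensional space.
samples = np.array([[1.0, 0.1, 0.0, 0.0],
                    [0.0, 0.9, 1.0, 0.1],
                    [0.9, 0.0, 0.1, 0.0]])
classes = np.array([[1.0, 0.0, 0.0, 0.0],   # class 0 description
                    [0.0, 1.0, 1.0, 0.0]])  # class 1 description
print(zero_shot_accuracy(samples, classes, [0, 1, 0]))  # -> 1.0
```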


7 hours

$1,990

Msgraph Training Rubyrailsapp

About

Microsoft Graph Training Module - Build Ruby on Rails apps with Microsoft Graph

This module will introduce you to working with Microsoft Graph to access data in Office 365 by building Ruby on Rails web applications.

Lab - Build Ruby on Rails apps with Microsoft Graph

In this lab you will create a Ruby on Rails web application using the Microsoft identity platform authentication endpoint to access data in Office 365 using Microsoft Graph.

Completed sample

If you just want the completed sample generated by following this lab, you can find it here.


7 hours

$1,990

MultiLearningPlatform

About

This is a web-based learning website built with Spring 5 and the latest web technologies, specially designed for students to learn different things on a single website. The main focus is on live training conducted by teachers for students, and live video streaming is based on the Java WebSocket protocol. It was built for Tribhuvan University of Nepal.


7 hours

$1,990

Msgraph Training Ios Swift

About

Microsoft Graph Training Module - Build iOS apps with Swift and the Microsoft Graph SDK

This module will introduce you to working with the Microsoft Graph SDK for Objective-C in creating a Swift-based iOS application to access data in Office 365.

Lab - Build iOS apps with Swift and the Microsoft Graph SDK

In this lab you will create an iOS application using Swift, configured with Azure Active Directory (Azure AD) for authentication & authorization, that accesses data in Office 365 using the Microsoft Graph SDK.

Completed sample

If you just want the completed sample generated by following this lab, you can find it here.


7 hours

$1,990

OOP Training Course

About

A training course on Object Oriented Programming, in French. It includes some code examples and exercises. The training course slides are located in the directory "training". Basic code examples have been placed in the directory "examples". The directory "exercises" contains slides with descriptions of the exercises.


7 hours

$1,990

TGA

About

Trainings Graphics API

Development Environment

Should simply loading the cmake project not work: Currently implemented are:

API Documentation

Shader

A Shader represents code to be executed on the GPU. A handle to a Shader is created with a call to Interface::createShader(const ShaderInfo &shaderInfo); The ShaderInfo struct requires the following parameters:

    struct ShaderInfo{
        ShaderType type;     // Type of Shader. Valid types are ShaderType::vertex and ShaderType::fragment
        uint8_t const *src;  // Pointer to the shader code. Dependent on the underlying API; for Vulkan this would be SPIR-V
        size_t srcSize;      // Size of the shader code in bytes
    };

The handle to a Shader is valid until a call to Interface::free(Shader shader); or until the destruction of the interface.

Buffer

A Buffer represents a chunk of memory on the GPU. A handle to a Buffer is created with a call to Interface::createBuffer(const BufferInfo &bufferInfo); The BufferInfo struct requires the following parameters:

    struct BufferInfo{
        BufferUsage usage;    // Usage flags of the Buffer. Valid usage flags are BufferUsage::uniform, BufferUsage::vertex and BufferUsage::index. Others are work in progress
        uint8_t const *data;  // Data of the Buffer to be uploaded. Alignment requirements are the user's responsibility
        size_t dataSize;      // Size of the buffer data in bytes
    };

To update the contents of a Buffer, call Interface::updateBuffer(Buffer buffer, uint8_t const *data, size_t dataSize, uint32_t offset) with the Buffer you want to update, the data you want to write, the size of the data in bytes and an offset from the beginning of the Buffer. The handle to a Buffer is valid until a call to Interface::free(Buffer buffer); or until the destruction of the interface.

Texture

A Texture represents an image that is stored on and used by the GPU. A handle to a Texture is created with a call to Interface::createTexture(const TextureInfo &textureInfo); The TextureInfo struct requires the following parameters:

    struct TextureInfo{
        uint32_t width;           // Width of the Texture in pixels
        uint32_t height;          // Height of the Texture in pixels
        Format format;            // Format of the pixels. Example: for 8 bits per pixel with red, green and blue channels use Format::r8g8b8_unorm. For a list of all formats refer to tga::Format
        uint8_t const *data;      // Data of the Texture. Pass a nullptr to create a texture with undefined content
        size_t dataSize;          // Size of the texture data in bytes
        SamplerMode samplerMode;  // How the Texture is sampled. Valid SamplerModes are SamplerMode::nearest (default) and SamplerMode::linear
        RepeatMode repeatMode;    // How texture reads with uv-coordinates outside of [0:1] are handled. For a list of all repeat modes refer to tga::RepeatMode
    };

The handle to a Texture is valid until a call to Interface::free(Texture texture); or until the destruction of the interface.

Window

A Window is used to present the result of a fragment shader to the screen. A handle to a Window is created with a call to Interface::createWindow(const WindowInfo &windowInfo); The WindowInfo struct requires the following parameters:

    struct WindowInfo{
        uint32_t width;             // Width of the Window in pixels
        uint32_t height;            // Height of the Window in pixels
        PresentMode presentMode;    // How synchronization to the monitor is handled. Valid PresentModes are PresentMode::immediate (show frame as fast as possible, default) and PresentMode::vsync (sync to the monitor refresh rate)
        uint32_t framebufferCount;  // How many backbuffers the window has to manage. Due to minimum and maximum constraints this value may not be the actual resulting number of backbuffers and needs to be polled later
    };

The handle to a Window can be used to query and update its state. The handle to a Window is valid until a call to Interface::free(Window window); or until the destruction of the interface.

InputSet

An InputSet is a collection of Bindings, and a Binding is a resource used in a Shader. A handle to an InputSet is created with a call to Interface::createInputSet(const InputSetInfo &inputSetInfo); The InputSetInfo struct requires the following parameters:

    struct InputSetInfo{
        RenderPass targetRenderPass;    // The RenderPass this InputSet should be used with
        uint32_t setIndex;              // The index of this InputSet as defined in RenderPass.inputLayout
        std::vector<Binding> bindings;  // The collection of Bindings in this InputSet
    };

Binding

A Binding assigns a resource to a shader as declared in RenderPass::InputLayout::SetLayout. The Binding struct consists of:

The handle to an InputSet is valid until a call to Interface::free(InputSet inputSet); or until the destruction of the interface.

RenderPass

A RenderPass describes a configuration of the graphics pipeline. A handle to a RenderPass is created with a call to Interface::createRenderPass(const RenderPassInfo &renderPassInfo); The RenderPassInfo struct requires the following parameters:

    struct RenderPassInfo{
        std::vector<Shader> shaderStages;            // The Shaders to be executed in this RenderPass. Must be ordered in accordance with the shader stages of the graphics pipeline (i.e. vertex before fragment, no duplicate stages, etc.)
        std::variant<Texture, Window> renderTarget;  // Where the result of the fragment shader stage will be saved. Keep in mind that a Window can have several framebuffers and only one is written at a time
        ClearOperation clearOperations;              // Determines if the renderTarget and/or depth-buffer should be cleared
        VertexLayout vertexLayout;                   // Describes the format of the vertices in the vertex-buffer
        RasterizerConfig rasterizerConfig;           // Describes the configuration of the Rasterizer, i.e. blending, depth-buffer, culling and polygon draw mode
        InputLayout inputLayout;                     // Describes how the Bindings are organized
    };

VertexLayout

The VertexLayout describes how a vertex in a vertex-buffer is laid out in memory. The VertexLayout struct consists of:

A VertexAttribute consists of:

RasterizerConfig

The RasterizerConfig determines depth test, blending, culling and polygon draw mode. The RasterizerConfig struct consists of:

InputLayout

The InputLayout describes how Bindings are organized. The InputLayout is a collection of SetLayouts, and a SetLayout is a collection of BindingLayouts. The BindingLayout struct consists of:

The handle to a RenderPass is valid until a call to Interface::free(RenderPass renderPass); or until the destruction of the interface.

CommandBuffer

A CommandBuffer is a list of instructions to be executed by the GPU. A CommandBuffer is started with a call to Interface::beginCommandBuffer(const CommandBufferInfo &commandBufferInfo); note that the CommandBufferInfo struct is currently empty. The handle to a CommandBuffer can then be created with a call to Interface::endCommandBuffer(). In between beginCommandBuffer and endCommandBuffer you can call the following commands to be recorded in the CommandBuffer:

To execute a CommandBuffer, call Interface::execute(CommandBuffer commandBuffer). The handle to a CommandBuffer is valid until a call to Interface::free(CommandBuffer commandBuffer); or until the destruction of the interface.


7 hours

$1,990

Learn Dotnet Core

About

This is a beginner code sample for learning C# and .NET Core.

Requirements

To get the .NET SDK, visit dot.net and get the latest version. After installation, verify your installation by running dotnet --version. On my Mac, I have the .NET Core 3 SDK installed as of this writing, so I get:

    v-daolad@ng-v-daolad MINGW64 /c/Projects/learn-dotnet-core (master)
    $ dotnet --version
    3.1.300

The version number displayed may be different from what you have on your computer, which is fine. Run dotnet new and it will show you a list of available app options to build with your installed version of .NET Core.

Structure

Week | Day | Slide | Info/Resources
---- | --- | ----- | --------------
1 | 1 | Day 1 | Info & Resources
1 | 2 | Day 2 | Info & Resources
1 | 3 | Day 3 | Info & Resources


7 hours

$1,990

Msgraph Training Aspnet Core

About

Microsoft Graph Training Module - Build ASP.NET Core apps with Microsoft Graph

This module will introduce you to working with the Microsoft Graph .NET SDK in creating an ASP.NET Core MVC web application to access data in Office 365.

Lab - Build ASP.NET Core apps with Microsoft Graph

In this lab you will create an ASP.NET Core MVC application, configured with Azure Active Directory (Azure AD) for authentication & authorization, that accesses data in Office 365 using the Microsoft Graph .NET SDK.

Completed sample

If you just want the completed sample generated by following this lab, you can find it here.


7 hours

$1,990

Msgraph Training Dotnet Core

About

Microsoft Graph Training Module - Build .NET Core apps with the Microsoft Graph SDK

This module will introduce you to working with the Microsoft Graph SDK to access data in Office 365 by building .NET Core applications.

Lab - Build .NET Core apps with the Microsoft Graph SDK

In this lab you will create a console application using the Microsoft Authentication Library (MSAL) to access data in Office 365 using the Microsoft Graph.

Completed sample

If you just want the completed sample generated by following this lab, you can find it here.


7 hours

$1,990

Myvision

About

MyVision is a free online image annotation tool used for generating computer vision based ML training data. It is designed with the user in mind, offering features to speed up the labelling process and help maintain workflows with large datasets.

Features

Draw bounding boxes and polygons to label your objects:

Polygon manipulation is enriched with additional features to edit, remove and add new points:

Supported dataset formats:

Annotating objects can be a difficult task... You can skip all the hard work and use a pre-trained machine learning model to automatically annotate the objects for you. MyVision leverages the popular 'COCO-SSD' model to generate bounding boxes for your images and, by operating locally in your browser, retains all data within the privacy of your computer:

You can import existing annotation projects and continue working on them in MyVision. This process can also be used to convert datasets from one format to another:

Local setup

No setup is required to run this project, open the index.html file and you are all set! 

Requirements: Node version 8+ and NPM version 6+


7 hours

$1,990

Spring Microservices Training

About

Learn how to create awesome Microservices and RESTful web services with Spring and Spring Boot.

Overview

  • Installing Tools
  • Running Examples
  • Course Overview
  • About in28Minutes

    Introduction

    Developing RESTful web services is fun. The combination of Spring Boot, Spring Web MVC, Spring Web Services and JPA makes it even more fun. And it's even more fun to create microservices. There are two parts to this course: RESTful web services and microservices. Architectures are moving towards microservices. RESTful web services are the first step to developing great microservices. Spring Boot, in combination with Spring Web MVC (also called Spring REST), makes it easy to develop RESTful web services.

    In the first part of the course, you will learn the basics of RESTful web services by developing resources for a social media application. You will learn to implement these resources with multiple features: versioning, exception handling, documentation (Swagger), basic authentication (Spring Security), filtering and HATEOAS. You will learn the best practices in designing RESTful web services. In this part of the course, you will be using Spring (dependency management), Spring MVC (or Spring REST), Spring Boot, Spring Security (authentication and authorization), Spring Boot Actuator (monitoring), Swagger (documentation), Maven (dependency management), Eclipse (IDE), Postman (REST services client) and the Tomcat embedded web server. We will help you set up each one of these.

    In the second part of the course, you will learn the basics of microservices. You will understand how to implement microservices using Spring Cloud. In this part of the course, you will learn to establish communication between microservices, enable load balancing, and scale microservices up and down. You will also learn to centralize the configuration of microservices with Spring Cloud Config Server. You will implement the Eureka Naming Server and distributed tracing with Spring Cloud Sleuth and Zipkin. You will create fault tolerant microservices with Zipkin.

You will learn

  • You will be able to develop and design RESTful web services
  • You will setup Centralized Microservice Configuration with Spring Cloud Config Server
  • You will understand how to implement Exception Handling, Validation, HATEOAS and filtering for RESTful Web Services.
  • You will implement client side load balancing (Ribbon), Dynamic scaling(Eureka Naming Server) and an API Gateway (Zuul)
  • You will learn to implement Distributed tracing for microservices with Spring Cloud Sleuth and Zipkin
  • You will implement Fault Tolerance for microservices with Zipkin
  • You will understand how to version your RESTful Web Services
  • You will understand how to monitor RESTful Services with Spring Boot Actuator
  • You will understand how to document RESTful Web Services with Swagger
  • You will understand the best practices in designing RESTful web services
  • Using Spring Cloud Bus to exchange messages about Configuration updates
  • Simplify communication with other Microservices using Feign REST Client

7 hours

$1,990

CJSON

About

Ultralightweight JSON parser in ANSI C.

Table of contents

  • License
  • Usage
    • Welcome to cJSON
    • Building
    • Copying the source
    • CMake
    • Makefile
    • Vcpkg
    • Including cJSON
    • Data Structure
    • Working with the data structure
    • Basic types
    • Arrays
    • Objects
    • Parsing JSON
    • Printing JSON
    • Example
    • Printing
    • Parsing
    • Caveats
    • Zero Character
    • Character Encoding
    • C Standard
    • Floating Point Numbers
    • Deep Nesting Of Arrays And Objects
    • Thread Safety
    • Case Sensitivity
    • Duplicate Object Members
    • Enjoy cJSON!

7 hours

$1,990

CoreSEO MVC

About

This is the tutorial on ASP.NET Core MVC application development from clouds.training

Renamed from CoreSEO to CoreSEO-MVC

The reason for renaming this project from CoreSEO to CoreSEO-MVC is to create a new training series using razor pages, which is the recommended approach.


7 hours

$1,990

Interpretable Ml

About

Interpretable Machine Learning

A collection of code, notebooks, and resources for training interpretable machine learning (ML) models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.

Want to contribute your own examples/code/resources? Just make a pull request.


7 hours

$1,990

Symphony Training Fx Bot

About

Bot template project with code samples on how to use the features offered by Symphony Bot SDK.

Summary

  • Getting Started

    • Prerequisites
    • Defining RSA keys pair
    • Setting the service account
    • POD configuration
    • Running locally
    • Verify your setup
  • Command handling

    • Help command
    • Hello command
    • Create notification command
    • Login command
    • Quote command
    • Attachment command
    • Broadcast command
    • Default response
  • Event handling

  • Receiving notifications

  • Symphony elements samples

    • Register quote command
    • Template command
  • Plugging in an extension app

    • Streams details endpoint
    • Users details endpoint
    • Static content
    • Real-time events
  • Extending monitoring endpoints


7 hours

$1,990

DS 2.3 DS In Production

About

Data Science in Production

This course covers the tools and techniques commonly utilized for production machine learning in industry. Students learn how to provide web interfaces for training machine learning or deep learning models with Flask and Docker. Students will deploy models in the cloud through Amazon Web Services, gather and process data from the web, and display information for consumption in advanced web applications using Plotly and D3.js. The students use PySpark to make querying even the largest data stores manageable.
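To give a concrete flavour of the serving side of the course, here is a minimal sketch of a Flask prediction endpoint of the kind students build; the model file name, route and input format are illustrative assumptions, not the course's exact code.

    # Minimal Flask prediction service (sketch; model path and input format are assumed).
    import pickle

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Hypothetical pre-trained scikit-learn model serialized to disk.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects a JSON body such as {"features": [[1.0, 2.0, 3.0]]}.
        payload = request.get_json(force=True)
        prediction = model.predict(payload["features"])
        return jsonify({"prediction": prediction.tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

A service like this is what typically gets wrapped in a Docker image and deployed to the cloud in the later parts of the course.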

Why you should know this (optional)

Explain why students should care to learn the material presented in this class.

Course Specifics

Weeks to Completion: 7
Total Seat Hours: 37.5 hours
Total Out-of-Class Hours: 75 hours
Total Hours: 112.5 hours
Units: 3 units
Delivery Method: Residential
Class Sessions: 14 classes, 7 labs


7 hours

$1,990

Explore Vagrant Hadoop Hive Spark

About

Vagrant is an open-source software product for building and maintaining portable virtual software development environments; e.g., for VirtualBox, KVM, Hyper-V, Docker containers, VMware, and AWS. It tries to simplify the software configuration management of virtualizations in order to increase development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in a few languages.

Content

  • Introduction
  • Version Information
  • Services
  • Getting Started
  • Work out the IP address of the docker container
  • Work out the IP address of the Virtualbox VM
  • Web user interfaces
  • Validating your Virtual Machine set-up
  • Management of Vagrant VM
  • Problems
  • More advanced set-up

 


7 hours

$1,990

Training Fraud Detection Hadoop

About

On-Line Fraud Detection involves identifying fraud as soon as it has been perpetrated in the system. This technique is only successful when there is a training algorithm that can produce a model suitable for use by a real-time detector. In this project, we will focus on fraud detection for credit card transactions, using Markov chains to train the model off-line and a parallel implementation of a concurrent queue for on-line detection.
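As a rough illustration of the off-line training step, the sketch below estimates a Markov transition matrix from sequences of discretized transaction symbols and scores new windows by their likelihood; the symbol alphabet, smoothing and scoring are assumptions for illustration, not the project's exact code.

    # Sketch: estimate a "normal behaviour" Markov chain and flag unlikely sequences.
    import numpy as np

    def train_markov_chain(sequences, n_states):
        counts = np.ones((n_states, n_states))  # Laplace smoothing so no transition has zero probability
        for seq in sequences:
            for prev, curr in zip(seq[:-1], seq[1:]):
                counts[prev, curr] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def sequence_log_likelihood(seq, transitions):
        # A low likelihood under the "normal" model flags a potentially fraudulent window.
        return sum(np.log(transitions[p, c]) for p, c in zip(seq[:-1], seq[1:]))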


7 hours

$1,990

Deploying A Sentiment Analysis Model Using AWS SageMaker

About

This project was completed as part of the requirements for the Udacity Deep Learning Nanodegree. Constructed a recurrent neural network (LSTM) on top of an embedding layer and trained it on the BOW-encoded IMDB dataset of movie reviews to detect the sentiment of a movie review. Created the training job and model, and deployed the model on the AWS SageMaker service. Gave a simple web app access to the deployed model by creating a Lambda function and an API Gateway; the app takes a review as text input from a user and returns the sentiment of the review as output.
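The network described above can be sketched in a few lines of PyTorch; the layer sizes here are assumptions for illustration and the actual SageMaker training script may differ.

    # Sketch of an LSTM sentiment classifier on top of an embedding layer.
    import torch
    import torch.nn as nn

    class LSTMClassifier(nn.Module):
        def __init__(self, vocab_size=5000, embedding_dim=32, hidden_dim=100):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embedding_dim)
            self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
            self.dense = nn.Linear(hidden_dim, 1)

        def forward(self, reviews):
            embedded = self.embedding(reviews)        # (batch, seq_len, embedding_dim)
            _, (hidden, _) = self.lstm(embedded)      # final hidden state of the LSTM
            return torch.sigmoid(self.dense(hidden[-1])).squeeze(-1)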


7 hours

$1,990

Maths Mentals Alexa Skill

About

Interactive Voice Maths training skill for Alexa. Maths Mentals Skill Maths Mentals is a maths quiz game. You can choose between 5 levels, with 1 being easiest and 5 the hardest. The quiz includes addition, subtraction, multiplication and division with varying levels of difficulty. The questions are dynamic and every quiz is different. Maths Mentals is suitable for kids from kindergarten all the way to primary school.
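As a rough sketch of how such dynamic questions can be generated per level (the operand ranges below are illustrative assumptions, not the skill's actual settings):

    # Level-based arithmetic question generator (sketch).
    import random

    LEVEL_RANGES = {1: 10, 2: 20, 3: 50, 4: 100, 5: 1000}

    def make_question(level):
        limit = LEVEL_RANGES[level]
        a, b = random.randint(1, limit), random.randint(1, limit)
        op = random.choice(["+", "-", "*", "/"])
        if op == "/":
            a = a * b  # keep division exact so the answer stays a whole number
        answer = {"+": a + b, "-": a - b, "*": a * b, "/": a // b}[op]
        return f"What is {a} {op} {b}?", answer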


7 hours

$1,990

Face Recognition In Action LFW

About

In the last few decades, research on face recognition has become a trendy topic, and significant developments have been made in this field. Face recognition can be defined as the task of using biometric features to recognize a human face. Face recognition tasks can be divided into three types: face verification (are they the same person?), face identification (who is this person?), and face clustering (find common people among these faces). To check what we have learned this semester, we decided to train a face verification model on a real-world dataset with deep learning approaches. The dataset we used is the Labeled Faces in the Wild database (LFW). The LFW database is well known in the face recognition field. It contains more than 13,000 images of faces collected from the web, and each face has been labeled with the name of the person in the picture; 1,680 individuals have two or more distinct photos in the dataset. Our method followed the general machine learning workflow: starting with data preprocessing, building a model from a baseline, and then improving the model step by step to gain better performance on the test data. This course aims to train a deep learning model for the face verification task. Similar to other machine learning tasks, our method is a purely data-driven method, as the faces are represented by their pixels. The deep learning network we used is a convolutional neural network. Although face recognition models have become very sophisticated, our project is proposed to check the deep learning knowledge we have learned this semester. Thus, this course can be regarded as deep learning model training for face recognition in action. With limited computational power, we managed to adjust the model to its best performance.
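For a quick start on the same data, the verification pairs can be loaded with scikit-learn before being fed to a CNN; using scikit-learn here is an assumption for illustration, and the project may load the raw LFW images directly.

    # Sketch: load LFW verification pairs (same person / different people).
    from sklearn.datasets import fetch_lfw_pairs

    train_pairs = fetch_lfw_pairs(subset="train")
    X = train_pairs.pairs     # shape (n_pairs, 2, height, width): two grayscale face crops per pair
    y = train_pairs.target    # 1 = same person, 0 = different people
    print(X.shape, y.shape)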


7 hours

$1,990

DeepQNN

About

Implementations for training deep quantum neural networks in Mathematica and MATLAB. This code can be used to classically simulate deep quantum neural networks as proposed in

  • K. Beer, D. Bondarenko, T. Farrelly, T. J. Osborne, R. Salzmann, and R. Wolf. Efficient Learning for Deep Quantum Neural Networks. arXiv:1902.10445

7 hours

$1,990

Nihon

About

A Japanese Kana training application to practice the recognition of hiragana and katakana characters by translating them to rōmaji using real words. This is Windows only, as it uses a webview for the UI. Main features (see screenshots below):

  • Can generate a random training set for hiragana, katakana or both.
  • Words are taken from a real word list. The choice of words is random but it is weighted so common words have a higher chance to appear.
  • Can generate a training set including all chosen characters (as long as the length is enough or if the option "All" is used).
  • Error report highlighting the error and showing the correct translation.
  • Full report with statistics at the end of the training set.

This is a spare time project. The app was developed in Rust with the purpose of learning the language while also creating a tool that I missed while studying Japanese.

7 hours

$1,990

Fairing

About

Kubeflow Fairing is a Python package that streamlines the process of building, training, and deploying machine learning (ML) models in a hybrid cloud environment. By using Kubeflow Fairing and adding a few lines of code, you can run your ML training job locally or in the cloud, directly from Python code or a Jupyter notebook. After your training job is complete, you can use Kubeflow Fairing to deploy your trained model as a prediction endpoint.


7 hours

$1,990

Padertorch

About

Padertorch is designed to simplify the training of deep learning models written with PyTorch. While focusing on speech and audio processing, it is not limited to these application areas. This repository is currently under construction. The examples in contrib/examples only work in the Paderborn NT environment.

Highlights

padertorch.Trainer

  • Logging:

    • The review of the model returns a dictionary that will be logged and visualized via 'tensorboard'. The keys define the logging type (e.g. scalars).

    • As logging backend we use TensorboardX to generate a tfevents file that can be visualized from a tensorboard.

  • Dataset type:

    • lazy_dataset.Dataset, torch.utils.data.DataLoader and other iterables...

  • Validation:

    • The ValidationHook runs periodically and logs the validation results.

  • Learning rate decay with backoff:

    • The ValidationHook also has parameters to apply a learning rate decay with backoff.

  • Test run:

    • The trainer has a test run function to train the model for a few iterations and test if

      • the model is executable (burn test)

      • the validation is deterministic/reproducible

      • the model changes its parameters during training

  • Hooks:

    • The hooks are used to extend the basic features of the trainer. Usually the user does not really need to care about the hooks. By default a SummaryHook, a CheckpointHook and a StopTrainingHook are registered, so the user only needs to register a ValidationHook.

  • Checkpointing:

    • The parameters of the model and the state of the trainer are periodically saved. The interval can be specified with the checkpoint_trigger (the units are epoch and iteration).

  • Virtual minibatch:

    • The Trainer usually does not know whether the model is trained with a single example or multiple examples (a minibatch), because the examples yielded from the dataset are forwarded directly to the model.

    • When the virtual_minibatch_size option is larger than one, the trainer calls the forward and backward step virtual_minibatch_size times before applying the gradients. This increases the minibatch size, while the memory consumption stays similar (a plain-PyTorch sketch of this idea follows below).
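The virtual minibatch idea can be illustrated with ordinary gradient accumulation in PyTorch; this is a rough sketch of the concept, not padertorch's actual implementation.

    # Accumulate gradients over several examples before applying them.
    import torch

    def train_steps(model, optimizer, examples, loss_fn, virtual_minibatch_size=4):
        optimizer.zero_grad()
        for i, (x, target) in enumerate(examples):
            loss = loss_fn(model(x), target) / virtual_minibatch_size
            loss.backward()                          # gradients accumulate in the .grad buffers
            if (i + 1) % virtual_minibatch_size == 0:
                optimizer.step()                     # apply the accumulated gradients
                optimizer.zero_grad()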


7 hours

$1,990

Awstrainer

About

Command line tool for machine learning on AWS. awstrainer helps you run machine learning tasks (or any other long-running computations) on AWS. With one simple command, it spins up an AWS instance (from your own account), transfers your code & dataset, starts the training run, syncs all output files back to your computer, and terminates the instance after training has finished. It really shines when you need to quickly launch multiple long-running jobs in parallel (e.g. for hyperparameter optimization).


7 hours

$1,990

WeTalk

About

WeTalk is a simple chat web application that runs locally and was developed using Node.js. This web app was developed only for testing purposes and is not a complete version of the application. The application performs simple message sending and displaying across different localhost connections. It also includes a MongoDB chat logger connected to a MongoDB Atlas cluster. The cluster used here has an open IP for common use, so I recommend having your own cluster in MongoDB Atlas.


7 hours

$1,990

AdasOptimizer

About

ADAS is short for Adaptive Step Size. It is an optimizer that, unlike other optimizers that just normalize the derivatives, fine-tunes the step size itself, truly making step size scheduling obsolete.

Training Performance

This is a graph of ADAS (blue) and ADAM (orange)'s inaccuracy percentages in log scale (y-axis) over epochs (x-axis) on MNIST's training dataset, using a shallow network of 64 hidden nodes. It can be seen that at the start ADAS is ~2x faster than ADAM, and while ADAM slows down, ADAS converges to 0% inaccuracy (i.e. 100% accuracy) in exactly 24 iterations and never diverges afterwards. To see how ADAM was tested, see/run the Python script ./adam.py; it uses TensorFlow. ADAS was compared against other optimizers too (AdaGrad, AdaDelta, RMSprop, Adamax, Nadam) in TensorFlow, and none of them showed better results than ADAM, so their performance was left out of this graph. Increasing ADAM's step size improved the performance in the short term but made it worse in the long term, and vice versa for decreasing its step size.

Theory

This section explains how ADAS optimizes the step size. The problem of finding the optimal step size formulates itself as optimizing f(x + f'(x) * step-size) with respect to step-size. This translates into a formula that updates the step size on the fly: step-size(n+1) = step-size(n) + f'(x) * f'(x + f'(x) * step_size(n)). In English, this means: optimize the step size so that the loss decreases the most with each weight update. The final formula makes sense because whenever x is updated in the same direction, the step size should increase because we didn't make a large enough step, and vice versa for the opposite. You may notice that there is a critical problem in computing the above formula: it requires evaluating the gradient on the entire dataset twice for each update of the step size, which is quite expensive. To overcome this problem, compute a running average of x's derivative in the SGD context; this represents the f'(x) in the formula. For each SGD update to x, its derivative represents the f'(x + f'(x) * step_size(n)), and the step size is then updated according to the formula.
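The update rule above can be written down in a few lines of NumPy; this is only a sketch of the stated formula (sign conventions simplified, no per-layer handling), not the reference implementation.

    # Sketch of the described step-size update with a running average of gradients.
    import numpy as np

    def adas_step(x, grad, state, beta=0.999):
        avg = state.setdefault("avg_grad", np.zeros_like(x))
        step = state.setdefault("step_size", 1e-3)
        # step-size(n+1) = step-size(n) + f'(x) * f'(x + f'(x) * step-size(n)),
        # with the running average standing in for f'(x) and the fresh
        # minibatch gradient standing in for the second factor.
        step = max(step + float(np.sum(avg * grad)), 0.0)
        state["step_size"] = step
        state["avg_grad"] = beta * avg + (1 - beta) * grad
        return x - step * grad, state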


7 hours

$1,990

Trained Linearization

About

Interpreting Neural Networks by Reducing Nonlinearities during Training

This repo contains a short paper and sample code demonstrating a simple solution that makes it possible to extract rules from a neural network that employs Parametric Rectified Linear Units (PReLUs). We introduce a force, applied in parallel to backpropagation, that aims to reduce PReLUs into the identity function, which then causes the neural network to collapse into a smaller system of linear functions and inequalities suitable for review or use by human decision makers. As this force reduces the capacity of neural networks, it is expected to help avoid overfitting as well.
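In PyTorch terms, the described "force" can be sketched as a penalty that pulls every PReLU slope toward 1, so the activation collapses into the identity; the penalty weight and its exact form here are assumptions, not necessarily the paper's formulation.

    # Sketch: add a linearization penalty on PReLU slopes next to the task loss.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.PReLU(), nn.Linear(32, 1))

    def linearization_penalty(model, strength=1e-3):
        penalty = torch.tensor(0.0)
        for module in model.modules():
            if isinstance(module, nn.PReLU):
                # nn.PReLU stores its learnable slope(s) in .weight; slope 1 == identity.
                penalty = penalty + ((module.weight - 1.0) ** 2).sum()
        return strength * penalty

    # total_loss = task_loss + linearization_penalty(model)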


7 hours

$1,990

Clojure Workshop

About

A Clojure workshop intended for Clojure beginners. Participants are not required to have any prior experience with Clojure. The workshop materials are intended to guide participants through the whole language and ecosystem, from theory to deploying an actual web application. The goal of this workshop is to have a person with no prior knowledge of Clojure fully capable of writing production-ready Clojure after one day. The workshop has successfully been organized in:

Prerequisite

Java

Version 1.8.0 or higher. The command java -version should output: [...] version "1.8.0_XXX"

Lein

The command: lein -v should output: Leiningen 2.9.1 on Java 1.8.0_XXX [...]

Nightcode

Workshop set up

The workshop is split into 6 sections

  1. Introduction in Clojure
  2. Basic Development - REPL
  3. Backend Programming
  4. Frontend Programming with Clojurescript and Reagent
  5. Database (Extra credit)
  6. Deploying (Extra credit)

7 hours

$1,990

Discord Training Generator

About

This script is a simple way to generate training data from exported Discord messages. It copies the preceding message as context and exports the "messages.txt" file with "subject: " and "other: " preceding the lines so the model can be prompted with a conversation history followed by "subject: "
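A rough sketch of that formatting step is shown below; the message structure and author field are assumptions about the export format, not the script's exact code.

    # Pair each message with the one before it and prefix with "other: " / "subject: ".
    exported_messages = [
        {"author": "friend", "content": "how was the game?"},
        {"author": "subject_user", "content": "we won, it was great"},
    ]

    def build_training_lines(messages, subject_author):
        lines = []
        for prev, curr in zip(messages[:-1], messages[1:]):
            if curr["author"] == subject_author:
                lines.append(f"other: {prev['content']}")
                lines.append(f"subject: {curr['content']}")
        return lines

    with open("messages.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(build_training_lines(exported_messages, "subject_user")))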


7 hours

$1,990

Multichannel Speech Enhancement With Deep Neural Networks

About

Multichannel Speech Enhancement with Deep Neural Networks - Beamforming with Autoencoders

This project applies an autoencoder deep neural network to the multichannel speech enhancement problem. It takes the problem from dataset generation to the model training.

Single Channel and Multichannel Dataset Generation

In order to train the model, you need to create a dataset containing the mixture signals and the clean target signals. The dataset is then converted to the magnitude spectrum. You can use the code snippets in the Dataset Generation folder to create your own dataset. Note that you will need to find your own speech dataset and noise dataset. This set handles the mixture generation and STFT conversion into a structured form.
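For a single channel, the mixture-and-magnitude-spectrum step looks roughly like the sketch below; the SNR, STFT parameters and array handling are assumptions for illustration, and the provided snippets additionally cover the multichannel case.

    # Mix clean speech with noise at a target SNR and return magnitude spectra.
    import numpy as np
    from scipy.signal import stft

    def make_training_example(clean, noise, snr_db=5, fs=16000):
        clean = np.asarray(clean, dtype=float)
        noise = np.asarray(noise, dtype=float)[: len(clean)]
        gain = np.sqrt(np.sum(clean**2) / (np.sum(noise**2) * 10 ** (snr_db / 10)))
        mixture = clean + gain * noise
        _, _, clean_spec = stft(clean, fs=fs, nperseg=512)
        _, _, mix_spec = stft(mixture, fs=fs, nperseg=512)
        return np.abs(mix_spec), np.abs(clean_spec)   # network input / training target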


7 hours

$1,990

Mpify

About

mpify is a simple API to launch a "target function" in parallel on a group of ranked processes via a single blocking call. It overcomes a few quirks that arise when multiple Python processes meet interactive Jupyter/IPython and multiple CUDA GPUs, and has the following features:

  • Caller process can participate as a ranked worker (by default as local rank 0)
  • Collect return values from any or all worker procs.
  • Worker procs will exit upon function completion, freeing up resources (e.g. GPUs).
  • Multi-GPUs friendly, since subprocesses are spawned not forked, thus immune from any existing CUDA state in caller.
  • Jupyter-friendly: modules to import, locally defined functions/objects can be passed to spawned subprocesses, thanks to the multiprocess module, a fork of the standard Python multiprocessing.
  • Customizable execution environment around function call via user defined context manager, and
  • Minimal changes (sometimes none) to existing function,
  • A helper routine to "from X import *" within a Python function. mpify hopes to make multiprocessing tasks in Jupyter notebooks easier. It works outside of Jupyter as well.

    Example: Porting the first training loop in Fastai2's course-v4 chapter 01_intro notebook to train on 3 GPUs in Torch's DDP mode:

    Original: mpify-ed:

    More Examples

    The examples/ directory contains:

    • A PyTorch tutorial on DDP, ported to use mpify both in Jupyter, or as a standalone script.
    • Several fastai2 course-v4 notebooks ported to use mpify, and to train in distributed data-parallel. Interesting use cases you wish to share, and bug reports are welcome.

7 hours

$1,990

Yt Audio Scraper

About

It is an audio dataset generator for training neural networks. Just put the YouTube video links in the JSON array under the "links" key, and a .wav file with each pronounced word will be extracted. In order to extract each word, the video should have automatically generated subtitles. For better extractions, the videos should have audio with no/low noise and silence between words.
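The expected links file looks roughly like the example written out below; the file name "links.json" is an assumption for illustration.

    # Write an example links file for the scraper.
    import json

    config = {"links": ["https://www.youtube.com/watch?v=VIDEO_ID_1",
                        "https://www.youtube.com/watch?v=VIDEO_ID_2"]}

    with open("links.json", "w") as f:
        json.dump(config, f, indent=2)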


7 hours

$1,990

Basic Sentiment Analysis

About

Created, trained, and evaluated a neural network model that, after training, was able to predict movie reviews as either positive or negative, classifying the sentiment of the review text.

Dataset: The imported dataset is easily accessible in Keras. After loading it, I unpacked it to populate the training set and the test set. Both the training and test set have 25,000 examples each. When loading the dataset I set the number of words to 10,000. This means that only the most common 10,000 words from the bag of words were used and the rest were ignored. The developers at Keras already did some pre-processing on the data and had assigned unique numeric values to each word.

Decoding the Reviews: Decode the numeric representation of the examples back into text. Decoding is just for my reference so that I can read a couple of reviews and see if their labels seem to make sense. For the decoding, I created a dictionary with key-value pairs like the imported word index, except that this new dictionary had the word index values as keys and the keys as values.

Padding the Examples: A maximum length of 256 words was set for a review, and 'the' was added to reviews with fewer words to expand their length to 256. 'the' was used as it is just an article and holds no inherent meaning. The input features are a bag of words and the model will make predictions based on these features: whether a particular set of features is a negative review or a positive review. So, as it trains, it will start to assign some meaning to certain words which occur often in certain types of reviews. Maybe a word like "wonderful" will influence the model into thinking that the review is more positive; maybe a word like "terrible" will influence the network into thinking that the review is more negative. So, as it trains, it will assign how much influence and what influence various words in our vocabulary have on the output.

Word Embedding: An embedding layer will try to find some relation between various words. We are looking to find an embedding for 10,000 words, and we are trying to learn 16 features from those words. Then, all the words are represented as these feature vectors. The embedding layer will learn a 10,000-by-16-dimensional word embedding where each word has a feature representation of 16 values.

Creating and Training the Model: I used the Sequential class from Keras. I also imported a few layers that were needed: an Embedding layer (using 16 dimensions for the feature representations), a pooling layer which converted the 10,000-by-16 feature representations to a 16-dimensional vector for each batch, fed into a Dense layer with a rectified linear unit activation, and finally another Dense layer with a 'sigmoid' activation function to give a binary classification output for the two classes. The number of epochs was set to 20.

Prediction and Evaluation: We split the training set into two sets, a training set and a validation set (20%), and display the accuracy of our model during training for both the training and the validation set. After predicting the classes in the testing set, the model came out with an accuracy of 84.175%.
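The model described above maps directly onto a few lines of Keras; this sketch follows the hyperparameters stated in the text (10,000 words, length 256, 16-dimensional embedding, 20 epochs, 20% validation split), while the padding token and other details are simplified assumptions.

    # Sketch of the IMDB sentiment pipeline described above.
    from tensorflow.keras.datasets import imdb
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

    (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)
    x_train = pad_sequences(x_train, maxlen=256, padding="post")
    x_test = pad_sequences(x_test, maxlen=256, padding="post")

    model = Sequential([
        Embedding(10000, 16),            # learn a 10,000 x 16 word embedding
        GlobalAveragePooling1D(),        # pool to a 16-dimensional vector per review
        Dense(16, activation="relu"),
        Dense(1, activation="sigmoid"),  # binary positive/negative output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=20, validation_split=0.2)
    print(model.evaluate(x_test, y_test))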


7 hours

$1,990

Security Plus Training

About

A set of exercises to help you learn about using computer security tools, and maybe help pass the Security+ exam. No promises though.

Introduction

The CompTIA Security+ certification body of knowledge covers a number of concepts and tools. In the section titled "Technologies and Tools", you are introduced to a number of command line tools for Windows, Linux, and macOS that can help you explore and troubleshoot local networks as well as remote systems and hosts.

The Questions

It is my hope that I can provide a set of questions relating to each of the recommended tools that will give you real world examples and hands-on experience working with these tools. It is recommended you get familiar with these tools before you take the Security+ exam, according to Mike Chappel.

  • Using Ping

7 hours

$1,990

WebApp Pentest Training

About

In this training, we will learn how to perform penetration testing. Although the training title is WebApp Pentesting (and we will strictly follow the OWASP Top 10 model), we will also perform all the required enumeration techniques and privilege escalations (on both Linux and Windows machines), and we will do a simple buffer overflow attack as well, if time permits. Of course, knowing a little assembly language beforehand helps tremendously in understanding buffer overflows.

Target Audience

Agenda

  1. Lab Setup

7 hours

$1,990

Pentest Training

About

In this training, we will learn how to perform penetration testing. Although the training title is WebApp Pentesting (and we will strictly follow the OWASP Top 10 model), we will also perform all the required enumeration techniques and privilege escalations (on both Linux and Windows machines), and we will do a simple buffer overflow attack as well, if time permits. Of course, knowing a little assembly language beforehand helps tremendously in understanding buffer overflows.

Target Audience

Agenda

  1. Lab Setup
  2. Reconnaissance
  3. Finding Exploit
  4. Privilege Escalation
  5. Cleaning stage

7 hours

$1,990

Training Httpclient

About

These applications contain some examples of the use of HttpClient and HttpClientFactory. The best approach depends upon the app's requirements.

Basic Usage

IHttpClientFactory can be registered by calling AddHttpClient. An IHttpClientFactory can be requested using dependency injection, and the code uses IHttpClientFactory to create an HttpClient instance. This is a good way to refactor an existing app. It has no impact on how HttpClient is used.

Named client

Named clients are a good choice when:

  • The app requires many distinct uses of HttpClient.
  • Many HttpClients have different configurations.
  • Provide the same capabilities as named clients without the need to use strings as keys.
  • Provides IntelliSense and compiler help when consuming clients.
  • Provide a single location to configure and interact with a particular HttpClient. For example, a single typed client might be used:
    • For a single backend endpoint.
    • To encapsulate all logic dealing with the endpoint.
  • Work with DI and can be injected where required in the app.

7 hours

$1,990

Inntt

About

inntt: Interactive NeuralNet Trainer for pyTorch

Finding the right hyperparameters when training deep learning models can be painful. The practitioner often ends up applying a trial-and-error approach to set them, based on the observation of some indicators (tr_loss, val_loss, etc.). Each little modification typically entails retraining from scratch. Interactive NeuralNet Trainer for pyTorch (INNTT) allows you to modify many parameters on the fly, interacting with the keyboard. Some routines/features currently supported: The inntt currently works only in the Linux terminal; I'll fix that. I would also like to show some examples with growing nets, this time controlled by the user.


7 hours

$1,990

PhraseTrain

About

A command line interface (CLI) language phrase training program written in Python.

Main menu

PhraseTrain | Test
P - Practice the chosen list
M - Modify the current list
S - Save the current list
R - Remove the current list
L - Load a previous phrase list
C - Create a new phrase list
Q - Quit the program
What would you like to do? Choice >> m

Modify your phrase lists

PhraseTrain | Modify 'Test' (English -> Russian)

  1. monday ->
  2. tuesday ->
  3. wednesday ->
  4. thursday ->
  5. friday ->
     A - Add a new phrase
     B - Back
     Select a phrase by number, or action by letter. Choice >> b

    Customize your practice session

    PhraseTrain | Setup practice for 'Test' (English -> Russian)


7 hours

$1,990

Learn UVM from Scratch

About

This training course is for a beginner who wants to get familiar with UVM. We will use Mentor Modelsim/Questa as a simulator.

Content

Stage 1: Preparing the LAB environment

Stage 2: Compile UVM LIB

Stage 3: Hello world from UVM

Stage 4: Generate a clock in Verilog domain

 


7 hours

$1,990

Knockknock

About

A small library to get a notification when your training is complete or when it crashes during the process with two additional lines of code. When training deep learning models, it is common to use early stopping. Apart from a rough estimate, it is difficult to predict when the training will finish. Thus, it can be interesting to set up automatic notifications for your training. It is also interesting to be notified when your training crashes in the middle of the process for unexpected reasons.
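In practice, the "two additional lines" are an import and a decorator on your training function; the sender choice, token and chat id below are placeholders, so check the library's README for the exact sender names and arguments.

    # Sketch: notify a Telegram chat when training finishes or crashes.
    from knockknock import telegram_sender

    @telegram_sender(token="<BOT_TOKEN>", chat_id=123456789)
    def train_model(epochs=10):
        ...  # your training loop; a notification fires when this returns or raises
        return {"val_accuracy": 0.93}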


7 hours

$1,990

Tess4training

About

LSTM Training Tutorial for Tesseract 4 In order to successfully run the Tesseract 4 LSTM Training Tutorial, you need to have a working installation of Tesseract 4 and Tesseract 4 Training Tools and also have the training scripts and required traineddata files in certain directories. For running Tesseract 4, it is useful, but not essential to have a multi-core (4 is good) machine, with OpenMP and Intel Intrinsics support for SSE/AVX extensions. Basically it will still run on anything with enough memory, but the higher-end your processor is, the faster it will go.


7 hours

$1,990

Mask Detection

About

A real-time facial mask detector implemented by training a deep learning model with PyTorch Lightning. Detecting face masks in images can be achieved by training deep learning models to classify face images with and without masks. This task is actually a pipeline of two tasks: first, we have to detect whether a face is present in an image/frame or not; second, if a face is detected, find out whether the person is wearing a mask or not. For the first task, I have used MTCNN to detect human faces. There are other approaches as well; we could use a simple cascade classifier to achieve this task. For the second task, I used a pretrained mobilenet_v2 model, modifying and training its classifier layers to classify face images with/without masks. For the dataset, I collected some face images with masks from the internet and some from the RWMFD dataset. For the images of faces, I used real face images from the 'real and fake face' dataset. A mobilenet_v2 model was trained on 1700 images in total with PyTorch Lightning. The aim of this project was of course to implement a face mask detector, but I also wanted to give PyTorch Lightning a try. It definitely makes your PyTorch implementation more organised and neat, but the implementation of certain tasks may feel a bit complicated at first, at least for me.
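The fine-tuning setup can be sketched as a small LightningModule around torchvision's mobilenet_v2; the learning rate and training details below are assumptions for illustration, and the MTCNN face-cropping step is omitted.

    # Sketch: PyTorch Lightning module for the mask / no-mask classifier.
    import pytorch_lightning as pl
    import torch
    import torch.nn.functional as F
    from torchvision import models

    class MaskClassifier(pl.LightningModule):
        def __init__(self, lr=1e-4):
            super().__init__()
            self.lr = lr
            self.model = models.mobilenet_v2(pretrained=True)
            # Replace the classifier head: 2 classes = with mask / without mask.
            self.model.classifier[1] = torch.nn.Linear(self.model.last_channel, 2)

        def forward(self, x):
            return self.model(x)

        def training_step(self, batch, batch_idx):
            images, labels = batch
            loss = F.cross_entropy(self(images), labels)
            self.log("train_loss", loss)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=self.lr)

    # trainer = pl.Trainer(max_epochs=5); trainer.fit(MaskClassifier(), train_dataloader)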


7 hours

$1,990

Docker Project IIEC Rise

About

DevOps is a set of practices that combines software development (Dev) and information-technology operations (Ops), which aims to shorten the systems development life cycle and provide continuous delivery with high software quality. Demand for the development of dependable, functional apps has soared in recent years. In a volatile and highly competitive business environment, the systems created to support and drive operations are crucial. Naturally, organizations will turn to their in-house development teams to deliver the programs, apps, and utilities on which the business counts to remain relevant.

Docker is a set of platform as a service (PaaS) products that uses OS-level virtualization to deliver software in packages called containers. It is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app. Docker is one of the tools that uses the idea of isolated resources to provide a set of tools that allows applications to be packaged with all their dependencies installed and run wherever wanted. Docker has two concepts that roughly mirror the VM idea: an image and a container. An image is the definition of what is going to be executed, just like an operating system image, and a container is the running instance of a given image.

Differences between Docker and VMs: Docker containers share the same system resources; they don't have separate, dedicated hardware-level resources that let them behave like completely independent machines, and they don't need a full-blown OS inside. They allow running multiple workloads on the same OS, which allows efficient use of resources. Since they mostly include application-level dependencies, they are pretty lightweight and efficient. On a machine where you can run 2 VMs, you can run tens of Docker containers without any trouble, which means fewer resources = less cost = less maintenance = happy people.

Content

  • Docker Commands

  • Docker search httpd

  • docker run -i -t --name centos_server centos:latest (-i: interactive, -t: terminal)

  • Docker Install Apache Webserver - Dockerfile


7 hours

$1,990

SMART

About

SMART is an open source application designed to help data scientists and research teams efficiently build labeled training datasets for supervised machine learning tasks. If you use SMART for a research publication, please consider citing:

Development

The simplest way to start developing is to go to the envs/dev directory and run the rebuild script with ./rebuild.sh. This will: clean up any old containers/volumes, rebuild the images, run all migrations, and seed the database with some testing data. The testing data includes three users (root, user1, test_user), and all of their passwords are password555. There are also a handful of projects with data randomly labeled by the various users.

Docker containers

This project uses docker containers organized by docker-compose to ease dependency management in development. All dependencies are controlled through docker.

Initial Startup

First, install docker and docker-compose. Then navigate to envs/dev and, to build all the images, run:

    docker-compose build

Next, create the docker volumes where persistent data will be stored:

    docker volume create --name=vol_smart_pgdata
    docker volume create --name=vol_smart_data

Then, migrate the database to ensure the schema is prepared for the application:

    docker-compose run --rm smart_backend ./migrate.sh

Workflow During Development

Run docker-compose up to start all docker containers. This will start up the containers in the foreground so you can see the logs. If you prefer to run the containers in the background use docker-compose up -d. When switching between branches there is no need to run any additional commands (except build if there is dependency change).

Dependency Changes

If there is ever a dependency change then you will need to re-build the containers using the following commands:

    docker-compose build
    docker-compose rm
    docker-compose up

If your database is blank, you will need to run migrations to initialize all the required schema objects; you can start a blank backend container and run the migration django management command with the following command:

    docker-compose run --rm smart_backend ./migrate.sh

Custom Environment Variables

The various services will be available on your machine at their standard ports, but you can override the port numbers if they conflict with other running services. For example, you don't want to run SMART's instance of Postgres on port 5432 if you already have your own local instance of Postgres running on port 5432. To override a port, create a file named .env in the envs/dev directory that looks something like this:

    # Default is 5432
    EXTERNAL_POSTGRES_PORT=5433

    # Default is 3000
    EXTERNAL_FRONTEND_PORT=3001

The .env file is ignored by .gitignore.


7 hours

$1,990

Tml

About

Training Markup Language

TML stands for Training Markup Language. TML is a markup language aiming at providing a simple and clean way to describe a training/workout with the exercises and the performances.

Example

I really enjoyed my workout because of my TML logs!

Squat

150kg x 5 @ 7
160kg x 5 @ 8
160kg x 5 @ 8
160kg x 5 @ 9

my squat session felt really great! I hit my RPEs seamlessly!

Deadlift

150kg x 5 @ 7
160kg x 5 @ 8
160kg x 5 @ 8
160kg x 5 @ 9

my deadlift session felt really great! I hit my RPEs seamlessly!

Push ups

25 x 4

it felt really hard, trust me!


7 hours

$1,990

Traffic Sign Classifier

About

Project 3: Traffic Sign Classifier

Dataset Summary:

I read the dataset using the pickle library in Python. The original dataset features:

  1. Number of training samples is 34799
  2. Number of testing samples is 12630
  3. Number of validating examples is 4410
  4. Image data shape = (32, 32, 3)
  5. Number of classes = 43

    Visualisation of the dataset:

    The number of examples of each class is represented in the bar chart below. Many classes have much less data compared to other classes. Hence, I have used the function Generatedata to make sure that all classes have at least 800 examples. Generatedata calls two functions, warp and scale, which randomly apply transformations to images. A few examples are shown below. The original dataset comprises images as shown below; the label of each image is indicated above it. (Figure 1: Bar Chart. Figure 2: Examples.)

    Pre-processing of dataset

    Pre-processing the original data is a very important technique in machine learning. In my model I decided to convert the images into grayscale. On observing the results, I realised that some images were simply too dark; I found it difficult to classify these traffic signs myself. To further process the images, I used the OpenCV function CLAHE (Contrast Limited Adaptive Histogram Equalisation), which greatly improved the image contrast. I would like to draw your attention to the traffic signs in the green box. (Figure 5: Original. Figure 4: Grayscale. Figure 3: CLAHE.) The example in the green box is very faint in the original dataset; after converting it into grayscale, it is barely recognizable, but once we apply CLAHE, the sign is quite clearly visible. The data is then standardised so that all the values lie between 0 and 1. The image below shows some examples of images generated by the Generatedata function:
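    The grayscale + CLAHE step can be sketched with OpenCV as below; the clip limit and tile size are assumptions for illustration, and the scaling to [0, 1] follows the standardisation described above.

        # Convert to grayscale, apply CLAHE and scale to [0, 1].
        import cv2
        import numpy as np

        def preprocess(image_rgb):
            gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            equalised = clahe.apply(gray)
            # Add a channel axis so a 32x32 image becomes (32, 32, 1) for the network.
            return (equalised / 255.0).astype(np.float32)[..., np.newaxis]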

    Model Architecture

    The architecture of the model is:
    o Convolution with input 32x32x1 and output 28x28x
    o Activation using RELU
    o Max pool with output 14x14x
    o Convolution with input 14x14x6 and output 10x10x16
    o Activation using RELU
    o Max pool with output 5x5x16
    o Convolution with input 5x5x16 and output 400
    o Activation using RELU
    o Probability of .75 produced best results
    o Outputs of layer 2b and layer 2 were fed into this step
    o Fully connected with input 800 and output 400
    o Activation using RELU
    o Fully connected with input 400 and output 200
    o Activation using RELU
    o Fully connected with input 200 and output 43
    o Activation using RELU
    (Figure 6: Generated data)

    Model Training

    The following parameters were used while training the model:

  6. Epochs = 30
  7. Batch size = 128
  8. Learning rate = 0.
  9. Optimizer: Adam Optimizer
  10. Dropout keep probability = 0.

    Solution Approach

    The result is:

  11. Validation accuracy = 95.2%
  12. Test accuracy = 93.3%

    I had earlier built a model where I used colour images as input for the model. However, using grayscale images provided better accuracy. The model I was using was the same as the one we used in the Convolutional Neural Network lessons.

7 hours

$1,990

Multi Language Classifier

About

Requires Java 9+ The goal of this is to combine a lot of the various techniques in machine learning, some of which I have learned in detail in my classes, and some of these have just been touched upon. The techniques I'm going to explore:

Feature selection:

Multiple classification: What to learn:

The languages I'm going to try to classify:

These languages are strategically chosen: Given a phrase in one of the languages below, the program can detect and correctly classify the language, using machine learning techniques on training data.

Tasks left to implement

Results - Decision Tree

The above graph shows the Decision Tree accuracy vs depth. Parameters used: examplesFile=training.txt testingFile=testing.txt numberGenerations=75 poolSize=20, with varying tree depth (1-10). The above shows that the testing accuracy peaks at 95.9%. Overfitting starts to play a part once the depth of the trees exceeds 6. Each iteration of the training took around 12 seconds on my 4 core computer.

Result - Adaptive Boosting

The above graph shows the Adaptive Boosting accuracy vs ensemble size. Parameters used: examplesFile=training.txt testingFile=testing.txt numberGenerations=75 poolSize=20, with varying ensemble size (1-15). The above shows that the testing accuracy peaks at 97.2%, with 10 decision stumps in the ensemble. Each iteration of training took around 14 seconds, with the time only slightly increasing with larger ensemble sizes. The attribute learning took much of the time.


7 hours

$1,990

Cyphercat

About

Here are tools and software you can use to replicate our work.

Research

We are focusing on two different areas of research: The below is created by our visualization software. The actual PDF has links to the arxiv papers.


7 hours

$1,990

ToSpcy

About

ToSpcy is a Python package which helps preprocess datasets for model training in spaCy. It can convert a labeled dataset into the spaCy format.

Example

from toSpcy.toSpacy import Convertor

dataset = ['When Sebastian Thrun started working on self-driving cars at Google in 2007, few people outside of the company took him seriously.',
           'Tom is traveling in China']
myConvertor = Convertor()
spacydata = myConvertor.toSpacyFormat(dataset)
spacydata

[('When Sebastian Thrun started working on self-driving cars at Google in 2007, few people outside of the company took him seriously.', {'entities': [(5, 20, 'p'), (61, 67, 'o'), (71, 75, 'd')]}), ('Tom is traveling in China', {'entities': [(0, 3, 'PER'), (20, 25, 'GEO')]})]

TagLabels

You can also convert the tags into desired labels when instantiating your object by using "taglabels", a dictionary of tags and their corresponding labels:

dic_taglabels = {'p': 'PERSON', 'o': 'ORG'}
myConvertor = Convertor(dic_taglabels)
spacydata = myConvertor.toSpacyFormat(dataset)
spacydata[0]

('When Sebastian Thrun started working on self-driving cars at Google in 2007, few people outside of the company took him seriously.',
 {'entities': [(5, 20, 'PERSON'), (61, 67, 'ORG'), (71, 75, 'd')]})


7 hours

$1,990

Ws Chat

About

Simple Chat

A simple training project that implements a usable websocket chat with room support. In this project I was experimenting with Grid CSS, Bootstrap, and CSS layout in general, and I tried to make a responsive layout. I also did some experiments with Socket.io, typing declarations shared between server and client, and tried to implement some custom React Hooks.


7 hours

$1,990

Face Detection And Recognition

About

Face Recognition: Understanding the LBPH Algorithm

Human beings perform face recognition automatically every day, practically without effort. Although it sounds like a very simple task for us, it has proven to be a complex task for a computer, as there are many variables that can impair the accuracy of the methods, for example illumination variation, low resolution, and occlusion, amongst others. In computer science, face recognition is basically the task of recognizing a person based on their facial image. It has become very popular in the last two decades, mainly because of the new methods developed and the high quality of current videos/cameras.

Face recognition is different from face detection:

Face Detection: it has the objective of finding the faces (location and size) in an image and probably extracting them to be used by the face recognition algorithm.

Face Recognition: with the facial images already extracted, cropped, resized and usually converted to grayscale, the face recognition algorithm is responsible for finding the characteristics which best describe the image.

Face recognition systems can operate in basically two modes:

Verification or authentication of a facial image: it basically compares the input facial image with the facial image of the user who is requesting authentication. It is basically a 1x1 comparison.

Identification or facial recognition: it basically compares the input facial image with all facial images in a dataset with the aim of finding the user that matches that face. It is basically a 1xN comparison.

There are different types of face recognition algorithms, for example:

  • Eigenfaces (1991)
  • Local Binary Patterns Histograms (LBPH) (1996)
  • Fisherfaces (1997)
  • Scale Invariant Feature Transform (SIFT) (1999)
  • Speeded Up Robust Features (SURF) (2006)

Each method has a different approach to extracting the image information and performing the matching with the input image. However, Eigenfaces and Fisherfaces have a similar approach, as do the SIFT and SURF methods. Today we are going to talk about one of the oldest (though not the oldest) and more popular face recognition algorithms: Local Binary Patterns Histograms (LBPH).

Objective

The objective of this post is to explain LBPH as simply as possible, showing the method step by step. As it is one of the easier face recognition algorithms, I think everyone can understand it without major difficulties.

Introduction

Local Binary Pattern (LBP) is a simple yet very efficient texture operator which labels the pixels of an image by thresholding the neighborhood of each pixel and considers the result as a binary number. It was first described in 1994 (LBP) and has since been found to be a powerful feature for texture classification. It has further been determined that when LBP is combined with the histograms of oriented gradients (HOG) descriptor, it improves the detection performance considerably on some datasets.
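A minimal LBPH run with OpenCV's contrib module looks like the sketch below; the file names and labels are made-up placeholders, and the cv2.face module requires the opencv-contrib-python package.

    # Train and query an LBPH face recognizer on grayscale face crops.
    import cv2
    import numpy as np

    faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
             for p in ["person0_a.png", "person0_b.png", "person1_a.png"]]
    labels = np.array([0, 0, 1])            # integer id per person

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(faces, labels)

    probe = cv2.imread("unknown_face.png", cv2.IMREAD_GRAYSCALE)
    label, confidence = recognizer.predict(probe)   # lower confidence = closer match
    print(label, confidence)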


7 hours

$1,990

Openmeetingsws

About

OpenMeetings WS for OpenOLAT

A project to compile in Java the WSDLs of OpenMeetings used by the OpenMeetings virtual classroom in OpenOLAT. The project doesn't contain Java classes; they are generated with Maven. To recompile the WSDLs, first download them and place them in src/main/resources:

    localhost:5080/openmeetings/services/UserService?wsdl
    localhost:5080/openmeetings/services/RoomService?wsdl
    localhost:5080/openmeetings/services/CalendarService?wsdl

Delete the Java code in src/main/java and recompile the WSDLs with:

    mvn clean generate-sources

Then you have regenerated the Java code and you can package it:

    mvn package


7 hours

$1,990

Rapid Object Detection Using Cascaded CNNs

About

The purpose of this course is to demonstrate the advantages of combining multiple CNNs into a common cascade structure. In contrast to training a single CNN only, the resulting classifier can be faster and more accurate at once. So far, the provided code has been applied successfully to the problem of face detection. It should be straightforward to adapt it to similar use cases, though. This course is about binary(!) classification / detection only. Furthermore, the cascade gets especially fast for highly unbalanced class distributions.


7 hours

$1,990

FF ANN Genetic Training

About

This is the code to create a feed-forward neural network and train it with a genetic algorithm. I wrote this code as part of a project in which, because of the high dimensionality of the features, backpropagation and other conventional ANN training algorithms were not very effective. It turned out the genetic algorithm did much better than the others.
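The idea can be sketched in plain NumPy as below; the network shape, fitness function and GA operators are assumptions for illustration rather than the project's exact code.

    # Evolve flat weight vectors of a small feed-forward net instead of backpropagating.
    import numpy as np

    def forward(weights, X, n_in=4, n_hidden=8, n_out=1):
        w1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
        w2 = weights[n_in * n_hidden:].reshape(n_hidden, n_out)
        return np.tanh(np.tanh(X @ w1) @ w2)

    def evolve(X, y, pop_size=50, generations=200, sigma=0.1):
        dim = 4 * 8 + 8 * 1
        population = np.random.randn(pop_size, dim)
        best = population[0]
        for _ in range(generations):
            fitness = -np.array([np.mean((forward(w, X) - y) ** 2) for w in population])
            order = np.argsort(fitness)
            best = population[order[-1]]                    # elitism: keep the best individual
            parents = population[order[-pop_size // 2:]]    # selection of the fitter half
            children = parents[np.random.randint(len(parents), size=pop_size - 1)]
            population = np.vstack([best, children + sigma * np.random.randn(pop_size - 1, dim)])
        return best

    # Tiny synthetic regression problem just to make the sketch runnable.
    X = np.random.randn(64, 4)
    y = np.sin(X.sum(axis=1, keepdims=True))
    best_weights = evolve(X, y)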


7 hours

$1,990

Dns Training

About

The purpose of this project is to test deployment of a bind DNS server with Kubernetes/Docker that is authoritative for a real domain: dns.training. It contains configuration and scripts that can be used to build and deploy bind docker containers in Azure. For example:

    $ dig dns.training SOA +noall +answer +authority
    ; <<>> DiG 9.8.3-P1 <<>> dns.training SOA +noall +answer +authority
    ;; global options: +cmd
    dns.training.  604743  IN  SOA  ns1.dns.training. admin.dns.training. 2017071401 1800 900 604800 300
    dns.training.  604743  IN  NS   ns2.dns.training.
    dns.training.  604743  IN  NS   ns1.dns.training.


7 hours

$1,990

Reinforcement Trading

About

Training of NNs to analyze Stock prices and of an RL agent that simulates trading based on the models provided

Objectives

This whole endeavour is not about making money trading stocks or currencies but about having a practical application for learning about the following concepts:

  • Increase proficiency with git, writing docs and structuring as well as managing projects
  • Use web APIs to acquire data
  • Reinforcement Learning (probably Q-Learning, since this has been applied with reasonable results; a tabular Q-learning sketch follows after this list)
  • RNN, LSTM, GRU and maybe attention based models
  • Baselines: Prophet, ARIMA,
  • Using AWS to scale up my computational power
  • Implement adequate ways of visualizing data
  • Document code to ensure reproducibility
    • Sphinx
  • Deployment of the code
  • Speed up Python
    • Parallel computing
    • Profiling
    • Pypy
    • Cython
    • numba, dask
    • C-extensions
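The core of tabular Q-learning, mentioned in the list above, fits in a few lines; the state/action discretization for a trading environment (and the numbers below) are assumptions left to the project.

    # Epsilon-greedy action selection and the tabular Q-learning update.
    import numpy as np

    n_states, n_actions = 100, 3              # e.g. actions: hold / buy / sell
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.99, 0.1

    def choose_action(state):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)    # explore
        return int(np.argmax(Q[state]))            # exploit

    def q_update(state, action, reward, next_state):
        td_target = reward + gamma * np.max(Q[next_state])
        Q[state, action] += alpha * (td_target - Q[state, action])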

The goal is not to make money trading stocks and I don't claim that it would be wise to use anything from this project to do so

Optional goals:

  • Set up a SQL server to store data even if that is not necessary for the amounts of data used
  • Write a Kalman filter / boosting / ensemble to see if that could be a good idea

7 hours

$1,990

Workplace Training

About

A simple set of cheatsheets for training people working in companies that build software. These cheatsheets are derived from training provided by Pablo Bawdekar which he developed at Quidnunc New York. They are intended for the following audiences: Each cheatsheet is intended to be used to prepare for and complete a task until the person is comfortable with the steps and ideas. It is useful to go over the cheatsheets periodically, particularly when training someone new.


7 hours

$1,990

Learn ASP like CGI Library

About

libeasycgi is designed to be an easy-to-use, ASP-like library for CGI programming. It will provide an interface for CGI programming in an ASP style and tries to go beyond this. It is also designed to make automated testing of CGI programs easy.


7 hours

$1,990


Is learning Programming hard?


In the field of Programming, learning from live, instructor-led, hands-on training courses makes a big difference compared with watching video learning materials. Participants must maintain focus and interact with the trainer with questions and concerns. In Qwikcourse, trainers and participants use DaDesktop, a cloud desktop environment designed for instructors and students who wish to carry out interactive, hands-on training from distant physical locations.


Is Programming a good field?


For now, there are tremendous work opportunities in various IT fields. Most of the courses in Programming are a great source of IT learning, with hands-on training and experience that could be a great contribution to your portfolio.


