KERNELS

The machine learning system

Introduction
All about the KERNELS concept
Ready-to-use MULTI-AGENT CLUSTER software. Distributed programming need not be any harder than standard coding on a single machine. Learning how to operate a cluster and writing a communication layer for each new algorithm are no longer obstacles.
KERNELS allows for FASTER and CHEAPER deployment of new solutions. Moreover, it makes feasible solutions that were previously unrealistic due to implementation time.
In addition to providing access to and administration of the computing platform, the KERNELS team can design, develop and deliver a complete program dedicated to your company's needs.
Should you wish to AUTOMATE and ACCELERATE processes such as:
  • Financial Analysis
  • Risk Management
  • Anomaly Detection
  • Time Series Analysis
  • Bioinformatics Algorithms
  • Statistical Prediction
  • Correlation and Dependence
  • Statistical Classification
our solution is what you really need.
Thanks to our team's long-term experience with different kinds of data analysis, we designed the system with emphasis on:
  • The ability to implement ANY parallel algorithm, not just a specific subclass of them, as in other cluster solutions. Especially important here are computations that use machine learning and artificial intelligence methods for deep data mining.
  • Removing the responsibility for managing server processes from the user. Handling the cluster during development has so far been the biggest problem. KERNELS therefore significantly reduces deployment time and enables projects to be carried out by smaller teams.
  • Increased resource utilization. The number of jobs executed in parallel is dynamically adjusted to the free resources on a given machine. As a result, KERNELS uses the cluster's potential more efficiently, thereby cutting the cost of running jobs.
  • The simplicity of coding new algorithms. Procedures for task parallelization and data sharing are designed to be as intuitive as possible, allowing implementation by more than just senior programming staff.
  • Ease of visualization. KERNELS offers a large number of chart types for presenting both final and intermediate data from running algorithms in the interface.
Exploring Epidemic Concept
Long before humans appeared on Earth, the world was ruled by viruses: relatively complicated organic structures without a cell, made of proteins and nucleic acids. They carried genetic material in the form of RNA or DNA, which was subject to mutation, and through these mutations viruses changed their form, adapting to the environment. Most of these processes lasted a very long time, and they continue today. If we treat RNA or DNA as a set of instructions, we can think of the epidemic process as a mechanism for spreading the virus's code (the code of a program).
Our idea is to think of a computing system as a population that we infect with a program. Instead of trying to combat the program, the system uses artificial intelligence methods to manipulate the population's structure by infecting individual elements, maximizing the program's spread (the distribution of user program code). Management is based on the concept of decentralized statistical CLUSTER knowledge, supported by unique MACHINE LEARNING methods. The key point is that, unlike many parallel systems, KERNELS needs no synchronization processes.
Epidemic Concept Advantages
  • Easy algorithm deployment in a Distributed Computing Environment. KERNELS allows multiple processing platforms to be used simultaneously for problem solving. From a developer's viewpoint, the CLUSTER is one device.
  • No central management units. This reduces the risk of queues forming during task assignment, a typical problem in a client-server model, and allows much larger and more efficient algorithms to be processed than in that model. It also balances out potential problems associated with the large size of a computing cluster.
  • Very fast, uniform distribution of tasks, despite the lack of central management units.
  • Computation stability. Damage to a single machine component causes only the failure of the operations being processed on it, not instability of the entire system.
  • Unlimited scalability and simple production deployment. Our system allows computing power to be assigned dynamically to ongoing tasks without interrupting their processing. It is fast and simple: just like a supercomputer, but with easy onboarding.
Interactive animation
The system enables easy data visualization and lets you change visualizations while the program is running. KERNELS allows the specified visualization to be modified dynamically on a built and started calculation process, without stopping the ongoing calculation. It is important to understand that measurements do not always reflect the quality of a solution. Consider, for example, a machine learning algorithm that uses the supremum norm as its measurement. It may turn out that after a certain time, even though the solution has not achieved the desired quality, visualizations of its partial results already satisfy the user's needs. This allows the analysis to be finished earlier.
BENEFITS
KERNELS will provide you with

Limitless parallelization

KERNELS will execute virtually any parallel algorithm regardless of its level of complexity

Cluster Automation

You only code the logic of the algorithm, and KERNELS translates it into the work of the cluster for you

Resource scalability

Unlimited online scalability of the KERNELS allows you to add or remove computing power at any time without interrupting ongoing tasks

Algorithm stability

The processing of your multi-hour analyses is not affected by the failure of individual computing units

Cluster optimization

When executing algorithms, KERNELS maximizes the use of available hardware resources

IT Security

The combination of elliptic-curve cryptography, the SSH protocol and process isolation ensures the security of the data being processed

Easy coding

An easy-to-code concept for parallelizing algorithms allows you to implement solutions that were not possible before

Data Visualizations

You receive the ability to edit and add interactive visualizations while creating the analysis

Consultation

We offer our expertise in preparing and implementing personalized machine learning solutions

Applications
A glance at KERNELS in practice
how it works
The animation shows how our system works in terms of its basic unit, the VSF (Virtual Search Fork). The concept of these small components (programs, agents), distributed over various disjoint machines, avoids the problem of centrally managing your computations. Depending on their life cycle, they can:
  • communicate with each other,
  • travel around their initial and other machines, looking for a job to do,
  • request an instance executing user code (a Virtual Computation Fork),
  • publish more tasks, ready to be undertaken by another VSF,
  • act as a relay between themselves and the user interface.
The magic of such a computation system is that central management disappears. It is replaced by the problem of managing the population of VSFs infected with your algorithms. It turns out that the epidemic spreading theory, statistical methods and machine learning needed to ensure efficient calculations are much cheaper and more stable than maintaining a central management unit.
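To make the idea concrete, here is a minimal, purely illustrative sketch of such a decentralized population of agents. All class and function names here are our own invention, not the actual KERNELS API: agents visit machines, pick up whatever tasks they find, and any child tasks a task spawns become available to every other agent.

```python
import random
from collections import deque

class Machine:
    def __init__(self, name):
        self.name = name
        self.queue = deque()            # tasks currently waiting on this machine

class Agent:
    """A VSF-like agent: no central scheduler, it just travels and works."""
    def __init__(self, machines):
        self.machines = machines
        self.done = []

    def step(self):
        # "Travel" to a random machine and look for a job to do.
        m = random.choice(self.machines)
        if m.queue:
            task = m.queue.popleft()
            result, children = task()   # user code may spawn child tasks
            self.done.append(result)
            # New tasks become available to any agent on any machine.
            for child in children:
                random.choice(self.machines).queue.append(child)

def leaf(x):
    return (x, [])                      # a task returning a value, no children

def parent():
    # A task that spawns two child tasks for other agents to pick up.
    return ("parent", [lambda: leaf(1), lambda: leaf(2)])

machines = [Machine("a"), Machine("b")]
machines[0].queue.append(parent)
agents = [Agent(machines) for _ in range(3)]
while any(m.queue for m in machines):
    for a in agents:
        a.step()

results = sorted(r for a in agents for r in a.done if r != "parent")
print(results)  # [1, 2]
```

Note that no component ever assigns work top-down: tasks simply sit on machines until some agent wanders by, which is the property the epidemic analogy above describes.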
System work example
Different colored circles represent VSFs on disjoint machines. The connections between the circles symbolize a child node executing the command given by its parent node. The red circle is the user interface. Note that VSFs that finish one task do not die, but start another task if one is available.
Algorithms
The key question is: what kind of AI computation can be implemented in KERNELS? The answer is very favourable: all machine learning and AI algorithms based on a dynamically changing computation tree (meaning the ability of existing nodes to recursively call sub-nodes) can be implemented. This enables more efficient data mining, by making the number of simultaneously executing tasks dependent on the results obtained in a given area of the researched resources.
The algorithms listed below are examples that can be deployed effectively in KERNELS so that they gain the aforementioned capabilities. The attached article links facilitate a deeper dive into each topic:
Evolutionary algorithms use mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and a fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place through repeated application of these operators.
EAs find applications in many fields, including industrial engineering, bioinformatics, medicine and logistics management. Links:
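As an illustration only (a generic textbook sketch, not KERNELS code), a minimal evolutionary algorithm with mutation and truncation selection might look like this; the fitness function and all constants are arbitrary choices for the example:

```python
import random

random.seed(0)

def fitness(x):
    return (x - 3.0) ** 2              # loss: lower is better, optimum at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness)       # selection: rank by fitness
    parents = population[:10]          # the better half survives
    # Reproduction + mutation: each parent produces one perturbed child.
    children = [p + random.gauss(0, 0.5) for p in parents]
    population = parents + children

best = min(population, key=fitness)
print(round(best, 2))  # close to 3.0
```

Real EAs would add recombination between parents; the skeleton of select, vary, and replace stays the same.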
An artificial neural network is based on a collection of connected units called artificial neurons. The model is intended to resemble the structure of a biological brain. Each connection (called an edge), like a synapse, can transmit a signal to other neurons. A neuron receives signals, processes them, and can then signal the neurons connected to it. The signal is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. A weight increases or decreases the strength of the signal at a connection.
Networks learn by processing examples, each containing a known "input" and "result"; passing each one through the network adjusts the weights. Research strongly suggests that these models can be parallelized. Special attention should be paid to DEEP LEARNING, which refers to the use of multiple layers in the network. Links:
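A tiny forward pass makes the description above concrete. This illustrative sketch (hand-picked weights, no training) shows a neuron as a non-linear function of the weighted sum of its inputs, with two hidden neurons feeding one output neuron:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))  # a common non-linear activation

def neuron(inputs, weights, bias):
    # Output = non-linear function of the weighted sum of the inputs.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

def forward(x):
    # Two hidden neurons feed one output neuron (weights chosen by hand).
    h1 = neuron(x, [1.0, -1.0], 0.0)
    h2 = neuron(x, [-1.0, 1.0], 0.0)
    return neuron([h1, h2], [2.0, 2.0], -2.0)

print(forward([1.0, 0.0]))  # 0.5
```

Learning would then consist of adjusting the weights and biases from examples, typically by gradient descent as described next.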
Gradient descent is an optimization algorithm commonly used to train machine learning models and neural networks. Training data helps these models learn over time, and the cost function within gradient descent acts as a barometer, gauging accuracy with each iteration of parameter updates. It is worth noting that a variation of this method is used as part of the neural network learning process. Links:
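For illustration, here is the smallest possible gradient-descent loop: fitting the slope of y = 2x by repeatedly stepping against the gradient of a squared-error cost. The data, learning rate and step count are arbitrary example values:

```python
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

w = 0.0                   # the parameter to learn
lr = 0.05                 # learning rate
for step in range(200):
    # Gradient of the cost C(w) = mean((w*x - y)^2) with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad        # step against the gradient

print(round(w, 3))  # close to 2.0
```

Training a neural network follows the same loop, except the gradient is computed over all weights via backpropagation.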
Artificial immune systems (AIS) are a class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled on the immune system's characteristics of learning and memory for use in problem solving. Links:
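One classic AIS scheme is clonal selection: the best "antibodies" are cloned and mutated, with better matches receiving more clones and smaller mutations, mimicking affinity maturation. The toy target and all constants below are illustrative only:

```python
import random

random.seed(1)
target = 5.0

def affinity(x):
    return -abs(x - target)            # higher affinity = closer to the target

antibodies = [random.uniform(-10, 10) for _ in range(10)]
for generation in range(200):
    antibodies.sort(key=affinity, reverse=True)
    clones = []
    for rank, ab in enumerate(antibodies[:5]):
        # Better-ranked antibodies get more clones and smaller mutations.
        for _ in range(5 - rank):
            clones.append(ab + random.gauss(0, 0.1 * (rank + 1)))
    # Keep the best of the old antibodies and their mutated clones.
    antibodies = sorted(antibodies[:5] + clones, key=affinity, reverse=True)[:10]

print(round(antibodies[0], 2))  # close to 5.0
```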

Multi-swarm optimization builds on particle swarm optimization, a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search space according to simple mathematical formulas over each particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
Multi-swarm optimization is a variant of particle swarm optimization (PSO) based on the use of multiple sub-swarms instead of one (standard) swarm. The general approach is that each sub-swarm focuses on a specific region, while a diversification method decides where and when to launch the sub-swarms. The multi-swarm framework is especially suited to multi-modal problems, where multiple (local) optima exist. Links:
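The following illustrative sketch (generic PSO, not KERNELS code) runs two sub-swarms on a bimodal function with optima at -5 and +5; each sub-swarm is confined to one region, which serves here as a deliberately simple diversification rule:

```python
import random

def f(x):
    return min((x + 5) ** 2, (x - 5) ** 2)   # bimodal: optima at -5 and +5

random.seed(2)

def pso(region, n=10, steps=100):
    xs = [random.uniform(*region) for _ in range(n)]
    vs = [0.0] * n
    best = list(xs)                    # each particle's best known position
    gbest = min(xs, key=f)             # the sub-swarm's best known position
    for _ in range(steps):
        for i in range(n):
            # Velocity: inertia + pull toward personal and swarm bests.
            vs[i] = (0.7 * vs[i]
                     + 1.5 * random.random() * (best[i] - xs[i])
                     + 1.5 * random.random() * (gbest - xs[i]))
            # Confine the particle to this sub-swarm's region.
            xs[i] = min(max(xs[i] + vs[i], region[0]), region[1])
            if f(xs[i]) < f(best[i]):
                best[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Each sub-swarm finds the optimum of its own region.
optima = sorted(pso(r) for r in [(-10, 0), (0, 10)])
print([round(o, 1) for o in optima])   # close to [-5.0, 5.0]
```

Real multi-swarm schemes use more sophisticated rules for launching and relocating sub-swarms, but the benefit is the same: a single swarm would collapse onto one optimum, while the sub-swarms recover both.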
Hill climbing is a mathematical optimization technique belonging to the family of local search methods. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to it. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found. Note that this approach allows the algorithm to be run many times in parallel to avoid the locality problem of a single solution. Links:
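An illustrative sketch of hill climbing with several independent restarts, as the parallel-restart remark above suggests (the objective and constants are arbitrary example values):

```python
import random

random.seed(3)

def f(x):
    return -(x - 4.0) ** 2             # objective to maximize, peak at x = 4

def hill_climb(x, step=0.1, tries=1000):
    for _ in range(tries):
        candidate = x + random.choice([-step, step])  # incremental change
        if f(candidate) > f(x):        # keep the change only if it improves
            x = candidate
    return x

# Several independent climbs from random starts; the best result is kept.
starts = [random.uniform(-10, 10) for _ in range(5)]
best = max((hill_climb(s) for s in starts), key=f)
print(round(best, 1))  # close to 4.0
```

On a multi-modal objective, each restart may get stuck on a different local peak, which is exactly why the runs parallelize so naturally.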
Branch and bound is a method for solving optimization problems by breaking them down into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the optimal solution. The solutions to the sub-problems are then combined to give a solution to the original problem. Links:
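For illustration, a tiny branch-and-bound solver for the 0/1 knapsack problem: each node branches on taking or skipping an item, and a deliberately simple optimistic bound (pretend every remaining item fits) prunes hopeless subtrees. The item values are arbitrary example data:

```python
values   = [60, 100, 120]
weights  = [10, 20, 30]
capacity = 50

best = 0

def bound(i, value):
    # Optimistic bound: assume all remaining items fit.
    return value + sum(values[i:])

def search(i, value, room):
    global best
    if value > best:
        best = value                   # record the best complete choice so far
    if i == len(values) or bound(i, value) <= best:
        return                         # prune: this subtree cannot beat best
    if weights[i] <= room:             # branch 1: take item i
        search(i + 1, value + values[i], room - weights[i])
    search(i + 1, value, room)         # branch 2: skip item i

search(0, 0, capacity)
print(best)  # 220
```

A tighter bound (e.g. the fractional-knapsack relaxation) would prune more; the structure of branching plus bounding stays the same.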
Divide and conquer: in this approach, the problem is divided into several smaller sub-problems, which are then solved recursively and combined to obtain the solution of the original problem. Links:
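The canonical example is merge sort: split the input in half, sort each half recursively, then merge the two sorted halves (an illustrative textbook sketch):

```python
def merge_sort(items):
    if len(items) <= 1:
        return items                   # base case: trivially sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # solve the sub-problems recursively
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0            # combine the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

Because the two recursive calls are independent, they can run on different machines, which is what makes this family of algorithms a natural fit for a dynamically growing computation tree.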
Dynamic programming refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. Unlike divide and conquer, the sub-problems in dynamic programming are not separable; instead, the problem must have the optimal substructure property. Links:
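An illustrative sketch: the minimum number of coins summing to a target amount. The sub-problems overlap (many combinations reach the same intermediate amount), so each amount is solved once and reused; the coin denominations are arbitrary example data:

```python
def min_coins(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount        # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            # Optimal substructure: a best solution for a extends one for a - c.
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

print(min_coins([1, 5, 7], 24))  # 4  (7 + 7 + 5 + 5)
```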
The greedy method is an algorithmic approach that follows the problem-solving heuristic of making the locally optimal choice at each stage. Links:
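An illustrative sketch: interval scheduling, where the greedy rule "always pick the meeting that ends earliest" happens to be not only locally but also globally optimal (the example intervals are arbitrary):

```python
def max_meetings(intervals):
    count, free_at = 0, float("-inf")
    # Locally optimal choice at each stage: the earliest-ending meeting.
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= free_at:           # fits after the last chosen meeting
            count += 1
            free_at = end
    return count

meetings = [(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9)]
print(max_meetings(meetings))  # 3
```

For many other problems a greedy rule yields only an approximation, which is why greedy solutions are so cheap and so often used as baselines.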
Example of AI regression learning
The results of an example training program are posted here. The aim of the program is to find areas on the map that have a strong linear correlation with the highlighted station. The input data of the program are:
  • Database
  • Table with temperatures in the selected station in 2016
The program generates further child tasks, whose aim is to sample from the database and share their knowledge with the parent task. If a child task's score or returned result is good, more child tasks are generated from the node on which the operation was performed. This process continues until the result stabilizes. The results of the algorithm show that the existence of a linear relationship is not accidental, but covers specific geographical areas.
Chart:
  • Linear progress, shows the progress of the linear relationship during the learning process.
  • Learning process, shows on a chart with geographic coordinates how the learning process finds successive areas with increasingly strong dependencies.
  • Final result, transfers the final result to the map. In the end, it's all about the result.
Linear progress
Learning process
Final result
The map above shows that there is a strong linear correlation between the temperature at the point marked with the large red circle and a linear combination of the areas marked with the smaller red circles. In other words, there is a linear function of the temperatures at points from the selected areas that determines the temperature at the highlighted point.
Cooperation
Find out how you can partner with KERNELS
Our main activities
  • Providing an interface that allows you to build algorithms and visualizations on our platform.
  • Implementation and acceleration of algorithms provided to us on our platform.
  • Design and implementation of AI-algorithms that solve the client's problem.
  • Preparing advanced personalized interactive reports for data analysis.
  • Help in deploying algorithms on our platform.
  • Training and consulting in the field of writing AI-algorithms.
Agreement
If you would like to know how machine learning can improve your business and life, don't hesitate to contact us. We will gladly present the technological possibilities and advise you on how to use them to achieve your business needs. We approach each project individually to perfectly meet client requirements. After determining the scope of work, we will present you with a personalized offer.
CONTACT
We would be happy to answer your questions

Contact us and we'll get back to you within 24 hours.

Warsaw, PL

contact@kernels-analysis.eu

explore more
Discover the KERNELS world possibilities
KERNELS extensive presentation
Interested in learning more about KERNELS? We strongly recommend taking a look at the presentation. It contains detailed information on the construction of individual components and the operation of our computing platform. Moreover, you will find out more about the benefits for your company.