
sic_xe-assembler's Introduction

A Two-Pass SIC/XE Assembler

Course Name: Algorithmic Problem Solving
Course Code: 23ECSE309
Name: Shrihari Hampiholi
University: KLE Technological University, Hubballi

This page hosts:

  1. Prerequisites
  2. Introduction
  3. Objectives
  4. Design
  5. Business Cases
  6. Sample Execution
  7. Usage
  8. Future Scope
  9. About Me
  10. References

Prerequisites

  • Basic understanding of the SIC and SIC/XE architecture. The staple recommendation for learning this is the book by Leland Beck 1.

  • Modern C++ knowledge. The code uses features from C++11 and above, and is written to be easy to understand and follow: it is well-commented and well-structured.

Below is an introduction that I feel would be helpful to understand the project on an abstract level. Feel free to skip to the next section if you are already familiar with the concepts.

Introduction

  • Understanding of the two-pass assembler:

    • An assembler is a program that translates an assembly-language program into the corresponding machine code. A two-pass assembler does this in two distinct steps:
      the first pass generates the symbol table, and the second pass generates the object code using the symbol table built in the first pass.
  • Understanding of the SIC/XE architecture:

    • The SIC/XE architecture is an extension of the SIC architecture. Thanks to its hardware enhancements, it supports more registers, more addressing modes, and more instructions than SIC, making it more powerful and versatile.
  • Understanding of the SIC/XE instructions:

    • The SIC/XE instructions are divided into three categories:
    1. Format 1 instructions
    2. Format 2 instructions
    3. Format 3/4 instructions
      Format 1 instructions are the simplest: single-byte instructions used for operations such as I/O and register housekeeping. Format 2 instructions take two operands and mainly operate on registers. Format 3/4 instructions take one operand and mainly operate on memory.
  • Understanding of the SIC/XE addressing modes:

    • The SIC/XE addressing modes are divided into five categories:
      1. Immediate addressing mode
        • The value of the operand is specified directly in the instruction.
      2. Direct addressing mode
        • The address of the operand is specified directly in the instruction.
      3. Indirect addressing mode
        • The address of the operand is specified indirectly: the operand field holds the address of a location that in turn contains the operand's address.
      4. Indexed addressing mode
        • The address of the operand is computed by adding the contents of the index register (X) to the specified address.
      5. Base-relative addressing mode
        • The address of the operand is computed as a displacement from the contents of the base register (B).
  • Understanding of the SIC/XE directives and instructions:

    • The statements of a SIC/XE program are divided into two categories:
      1. Assembler directives
        • Assembler directives specify attributes of the program that are consumed by the assembler itself. Examples of such directives are: START, END, BYTE, WORD, RESB, and RESW.
      2. Machine instructions
        • Machine instructions are the mnemonics that are translated directly into machine code executed by the hardware. Examples of such instructions are: LDA, STA, ADD, SUB, MUL, DIV, and COMP.
  • Understanding of Control Sections:

    • Control Sections allow a program to be separated into multiple sections, improving its readability and maintainability.
    • The Linker, at link-time, is responsible for linking the multiple sections of the program into a single executable program.
  • Understanding the Type of Symbols:

    • The symbols in the SIC/XE architecture are divided into two categories:
      1. External Symbols
        • External Symbols are defined in one Control Section but referred to in another. The Linker is responsible for resolving them at link-time.
      2. Absolute Symbols
        • Absolute Symbols are defined and referred to within the same Control Section. The Assembler is responsible for resolving them.

Objectives

  • The primary objective of this project is to implement an efficient Two-Pass assembler for the SIC/XE architecture. To that end, the following sub-objectives are set:
    • To optimize memory usage and execution time, making the assembler suitable for use in environments with limited resources.
    • To ensure compatibility with the SIC/XE instruction set, enabling the assembler to accurately translate mnemonic instructions into machine code.
    • To support all addressing modes provided by the SIC/XE architecture, enhancing the flexibility and capability of the assembler.
    • To implement error detection and reporting mechanisms, assisting developers in identifying and correcting syntax or semantic errors in their assembly code.
    • To provide clear and comprehensive documentation on how to use the assembler, including examples of assembly programs and explanations of the assembly process.
    • To use modern C++ features to write clean, efficient, and maintainable code, following best practices and design principles, and to have detailed discussions on the choice of Data Structures.
    • To create a user-friendly interface for the assembler on the command-line interface, to facilitate ease of use for both beginners and experienced users.
    • To ensure extensibility of the assembler, allowing for future enhancements such as support for additional features such as Program Blocks, Parallel Processing, and Optimization Techniques.
    • To structure the project across multiple header and C++ source files, promoting modularity and maintainability, facilitating parallel development, simplifying debugging, and enhancing the readability and organization of the codebase.
    • To employ GNU Make as the build system to manage the compilation process efficiently, leveraging its capabilities to automate the build process and ensure a smooth development workflow across multiple platforms.

Design

  • The design of the Two-Pass assembler is divided into two distinct steps:
    1. The 1st pass
      • This pass generates the Symbol Table for each Control Section. The symbol table stores the symbols and their respective addresses for use in the second pass.
    2. The 2nd pass
      • This pass generates the Object Code that is later used by the Linker and Loader to process the program. The object code is generated using the symbol table built in the first pass.

Possible Data Structures that can be used for the Two-Pass Assembler:

For the First Pass:

  1. For the Symbol Table
    The Symbol Table stores the symbols (variable names) from the input ASM file along with their respective addresses, and must support the following operations:
    • Insert a symbol and its address
    • Search for a symbol and its address
    • Delete a symbol and its address
      We can use the following data structures:

a) Hash Table

A Hash Table stores the symbols and their respective addresses efficiently. A hash function maps each symbol to a slot in the table, aiming to give every symbol a unique location. Given a good hash function 2, insertion, deletion, and search run in O(1) time on average; collisions can degrade this to O(n) in the worst case. The space complexity is O(n), where n is the number of symbols in the program.

Time Complexity: O(1) for insertion, deletion, and search operations on average.
Space Complexity: O(n) where n is the number of symbols in the program.

b) Self-Balancing Binary Search Trees

If a strict upper bound on the time complexity of the operations is crucial to us 3, the Symbol Table can instead be built on a Self-Balancing Binary Search Tree such as the AVL Tree 4 or the Red-Black Tree 5. These trees keep their height balanced, which guarantees O(log n) worst-case time for insertion, deletion, and search operations.

Time Complexity: O(log n) for insertion, deletion, and search operations.
Space Complexity: O(n) where n is the number of symbols in the program.

c) Trie

A Trie is a tree-like data structure for storing a dynamic set of strings. Here it can store the symbols and their respective addresses efficiently: a symbol can be searched, inserted, and deleted in time proportional to its length, independent of the number of symbols stored.

Time Complexity: O(L) for insertion, deletion, and search operations where `L` is the length of the symbol.
Space Complexity: O(L*n) where n is the number of symbols in the program.

d) Skip List

A Skip List is a probabilistic data structure that enables fast search, insertion, and deletion operations. By maintaining multiple layers of forward pointers, Skip Lists allow operations to skip over large sections of the list, achieving average O(log n) time complexity for these operations. This makes Skip Lists an efficient and practical choice for symbol tables where logarithmic operation time is desirable.

Time Complexity: Average O(log n) for insertion, deletion, and search operations.
Space Complexity: O(n), with higher constants due to additional pointers.

Implementations of the above data structures can be found in the data_structures directory:

  1. Hash Table
  2. AVL Tree
  3. Red-Black Tree
  4. Trie
  5. Skip List

Due to the presence of Control Sections, we maintain a separate Symbol Table for each Control Section. This ensures that symbols are unique within a Control Section and are not duplicated across Control Sections, eliminating the ambiguity that could otherwise arise during the linking process.

  2. For the Opcode Table and Register Table
    The Opcode Table and the Register Table are static data structures dictated by the hardware of the SIC/XE architecture: they map the opcodes and the registers to their respective machine codes.
    They are used:
    • In Pass 1, to validate each instruction and its opcode, and to reserve the proper space for the instruction in the Intermediate File.
    • In Pass 1, to check whether an opcode is supported by the SIC/XE hardware at all.
    • In Pass 2, to generate the Object Code using the Opcode Table.

We can use the exact same data structures as mentioned above for the Symbol Table. Additionally, because these are static data structures, we can opt for a Hash Table, as we only perform search operations on them during the two passes. We can meticulously design the hash function so that the opcodes and register names each map to a unique slot in the hash table, yielding a worst-case time complexity of O(1) for search operations.

Time Complexity: O(1) for search operations (worst case, assuming a collision-free hash function).
Space Complexity: O(n) where n is the number of opcodes in the SIC/XE architecture.
  3. For the Intermediate File
    The Intermediate File stores the intermediate results of the first pass: the Control Section layout, the Symbol Table, and the program itself. It is read back in the second pass to generate the Object Code. We can use the following data structures:

a. Secondary Memory

  • The Intermediate File can be stored in the secondary memory like the hard disk. It can further be read from the secondary memory in the second pass to generate the Object Code.
Time Complexity: Dependent on the size of the data, Hardware of the Secondary Storage, underlying Operating System and system performance, not strictly O(1).
Space Complexity: O(n) where n is the size of the Intermediate File.

b. In-Memory Data Structures

  • The Intermediate File can instead be kept in memory, in a data structure such as a vector or a list, and read back from memory in the second pass to generate the Object Code.
Time Complexity: 
    O(1) for insertion and search operations.
    O(n) for deletion operations.
Space Complexity: 
    O(n) where n is the size of the Object Code.

For the Second Pass:

Let me set a bit of context that helps explain the Header Record, Text Records, Modification Records and End Record that are generated in the second pass. The assembler's output is an object program: a simple, text-based record format that tells the Loader how the program is to be placed in memory. This is analogous to the ELF format in the Linux Operating System, albeit a far simpler version of it.
The object program consists of the following records:

  • Header Record (H)
    • The Header Record is used to store the starting address of the program, the name of the program, and the length of the program.
  • Text Records (T)
    • The Text Records store the assembled instructions of the program as hexadecimal object code. Each Text Record holds at most 30 bytes (0x1E) of object code.
  • Modification Records (M)
    • The Modification Records record which address fields of the instructions must be patched at link-time. For example, if one Control Section refers to a symbol defined in another, a Modification Record tells the Linker which instruction fields to adjust so that the reference resolves correctly.
  • End Record (E)
    • The End Record marks the end of the program, and specifies the address of the first executable instruction.
  1. For the Object Code
    The Object Code stores the program's instructions in the record format described above, and is consumed by the Linker and Loader to process the program. It is generated using the Symbol Table built in the first pass. We can use the following data structures:

a. Secondary Memory

The Object Code can be written to a file in secondary memory, such as the hard disk, from which the Linker and Loader read it to process the program.

Time Complexity: Dependent on the size of the data, Hardware of the Secondary Storage, underlying Operating System and system performance, not strictly O(1).
Space Complexity: O(n) where n is the size of the Object Code.

b. In-Memory Data Structures

  • The Object Code can instead be kept in memory, in a data structure such as a vector or a list, from which the Linker and Loader read it to process the program.
Time Complexity: 
    O(1) for insertion and search operations.
    O(n) for deletion operations.

Space Complexity: 
    O(n) where n is the size of the Object Code.

Business Cases for Enhanced SIC/XE Assembler Design

During the development of my Two-Pass SIC/XE assembler, I identified key areas where strategic data structure selection and optimization deliver significant business value. These improvements translate directly to tangible benefits for organizations employing assembly language programming:

Business Case 1: High-Performance Computation Through Optimized Symbol Management

Problem:

Traditional symbol table implementations (with Linear Search) become bottlenecks in large projects due to linear search times.

My Solution:

I utilize a Hash Table for the Symbol Tables across multiple Control Sections. Hash tables excel at providing near-constant-time search, insertion, and deletion, regardless of the symbol table's size, provided a good hash function is used; my implementation helps ensure this by using a prime number as the size of the Hash Table 6.

Business Impact:

Faster compilation cycles translate to increased developer productivity, shorter development timelines, and ultimately, faster time-to-market for software products along with efficient memory utilization.

Business Case 2: Performance Gains Through In-Memory Processing

Problem:

Storing intermediate files and the Symbol Table on secondary storage introduces latency from frequent disk I/O operations.

My Solution:

Transitioning to in-memory data structures for these components allows us to leverage the speed of RAM and minimize data access times. The intermediate file can be kept in memory in a data structure such as a std::vector or a std::list (preferably std::vector, since the hardware caches contiguous memory pages).
The intermediate file is then read back from memory in the second pass to generate the Object Code.

Business Impact:

This results in significantly faster assembly times 7, crucial for time-sensitive compilation processes, large projects, and environments where rapid prototyping is essential.

Business Case 3: Resource Optimization via Memory Usage Analysis

Problem:

Inefficient data representation or storage can lead to excessive memory consumption, impacting performance, especially on resource-constrained systems, as mentioned in Business Case 2 7.

My Solution:

I conduct thorough analysis of memory access patterns within the Symbol Table and intermediate file handling. This helps identify opportunities for optimized data structures, compression, and improved cache utilization (as mentioned in Business Case 2 by the utilization of std::vector or any contiguous memory data structure).

Business Impact:

Reduced memory footprint translates to the ability to handle larger projects efficiently, potentially lower hardware requirements, and a smaller overall operational cost.

Business Case 4: Scalability for Future-Proofing Development Efforts

Problem:

As software projects grow in complexity, assembler performance needs to scale accordingly to avoid becoming a bottleneck.

My Solution:

I prioritize data structures and file management techniques known for their scalability, ensuring the assembler remains performant and reliable even with increasingly large and complex input programs. We can utilise parallel processing techniques to further enhance the scalability of the assembler, which is one of the future scopes of this project, as most hardware today is multi-core.

Business Impact:

This future-proofs development efforts, allowing organizations to confidently tackle ambitious projects without the assembler becoming a limiting factor as codebases expand.

Business Case 5: Streamlined Debugging Through Enhanced Error Handling

Problem:

Identifying and resolving errors efficiently is critical for developer productivity, but assembly language programming can make this challenging.

My Solution:

I implement robust error detection and reporting mechanisms, including features like symbol cross-referencing, flagging of unknown instructions, and early error reporting, to name a few. This ensures that developers can quickly identify and address issues, reducing debugging time and enhancing code quality.

Business Impact:

This minimizes the time spent on debugging, reduces the likelihood of subtle errors going unnoticed, and contributes to higher overall code quality.

Business Case 6: Enhanced Code Quality Through Error Handling during both the passes, visible in the Intermediate and Listing Files

Problem:

Traditional assemblers often provide limited visibility into the assembly process, making it difficult to understand how source code translates to machine code. This lack of transparency makes debugging a frustrating and time-consuming process.

My Solution:

My assembler addresses this by generating two key files:

Intermediate Files: These provide a step-by-step breakdown of the assembly process, making it easy to trace how the assembler interprets and translates your code.

Listing Files: These combine your source code with generated machine code, symbol tables, and clear error messages mapped to specific lines. This integrated view simplifies debugging and enhances code understanding.

Business Impact:

This transparency promotes faster debugging, reduces errors, and improves code quality through enhanced readability and understanding of the assembly process. Ultimately, this translates to quicker development cycles and more reliable software products.

Business Case 7: Accelerating Development with Makefile-Driven Builds

Problem:

Building projects in C++ often involves multiple steps: compiling multiple source files, linking object files, and potentially running pre- or post-processing scripts. Manually managing these steps is error-prone, time-consuming, and difficult to reproduce consistently, especially across different development environments. And given that I have divided the project into multiple files, it is essential to have a build system that can compile the project efficiently, abstracting the complexity of the build process.

My Solution:

I've leveraged GNU Makefiles 8, a powerful build automation tool, to manage the entire project build process. Here's how Makefiles bring value:

  • Automation: Makefiles define all build steps and their dependencies. A single command ("make") executes the entire build process correctly and efficiently.
  • Dependency Tracking: Makefiles automatically determine which files need to be recompiled or relinked based on their dependencies. This prevents unnecessary rebuilds, saving significant time during development.
  • Reproducibility: Makefiles ensure that the project can be built consistently across different machines and environments using the same defined steps, reducing the "it works on my machine" problem.
  • Flexibility: Makefiles allow for customization of build targets, allowing developers to easily perform specific actions like running tests or generating documentation.
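A minimal Makefile of the kind described above might look like this (a sketch; the file names and flags are illustrative, not necessarily the project's actual ones — note that recipe lines must be indented with a TAB character):

```make
CXX      := g++
CXXFLAGS := -std=c++17 -Wall -Wextra -O2

SRCS := $(wildcard *.cpp)
OBJS := $(SRCS:.cpp=.o)

assembler: $(OBJS)          # link step: re-runs only when some .o changed
	$(CXX) $(CXXFLAGS) -o $@ $^

%.o: %.cpp                  # compile step: tracked per source file
	$(CXX) $(CXXFLAGS) -c $< -o $@

.PHONY: clean
clean:
	rm -f $(OBJS) assembler
```

The pattern rule plus the object-file prerequisites is what gives the dependency tracking described above: touching one `.cpp` file recompiles only that object and relinks.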

Business Impact:

  • Increased Developer Productivity: Automation eliminates the tedious manual build steps (shooting yourself in the foot, especially with C++), freeing developers to focus on writing and testing code.
  • Faster Development Cycles: Automated dependency tracking and parallel builds (possible with make -j) significantly reduce build times, especially for large projects.
  • Reduced Errors: Makefiles enforce a consistent and reliable build process, minimizing the potential for human error and ensuring consistent output.
  • Improved Code Maintainability: Makefiles act as clear and concise documentation of the build process, making it easier for others to understand and maintain the project over time.

By meticulously addressing these business cases and design choices, my SIC/XE assembler is not only functionally sound but also showcases a robust design that aligns with industry best practices and modern development standards. This approach ensures that the assembler is not just a standalone tool but a strategic asset that accelerates development, enhances code quality, and future-proofs development efforts.


A sample execution of the Two-Pass Assembler with figures

  1. Input File
    (figure: Input File)
    The above input consists of 3 Control Sections: Main, RDREC, and WRREC. Each Control Section has a set of instructions and directives.

    When this is fed as input to the Two-Pass Assembler, the following happens:

    1. The First Pass generates the Symbol Table, and an Intermediate file for Layout of the program in the memory along with errors (if any), for each Control Section.
    2. The Second Pass generates the Object Code for each Control Section and a Listing File depicting detailed information about the program, and errors if any.
  2. Intermediate File
    (figure: Intermediate File)
    The above intermediate file consists of the Control Section, the Symbol Table, and the Program. The intermediate file is used to generate the Object Code in the second pass.

  3. Object Code
    (figure: Object Code)
    The above object code consists of the Header Record, Text Records, Modification Records, and End Record. The object code is used by the Linker and Loader to process the program.

  4. Listing File
    (figure: Listing File)
    The above listing file consists of the detailed information about the program. The listing file is used to debug the program and to understand the program.


Usage

  • The source code for the Two-Pass Assembler is available in the src directory.
  • The Design and Implementation of the Two-Pass Assembler for the SIC/XE architecture is done in C++, adhering to its Modern Standards. The code is structured across multiple header and C++ source files, promoting modularity and maintainability. The code is well-commented and structured, making it easy to understand and follow.
  • It is built using the GNU Make build system, which automates the compilation process and ensures a smooth development workflow across multiple platforms. The Makefile defines all build steps and their dependencies, allowing developers to compile the project efficiently with a single command.
    Below is a diagram depicting the structure of the project:

Project Structure

The following steps will guide you on how to build and run the Two-Pass Assembler on your local machine:

  1. Clone the repository

    git clone https://github.com/ShriHari33/sic_xe-assembler.git
    cd sic_xe-assembler/src
  2. Build the program

    make

    This will compile, link, and build the program. The executable assembler will be generated in the src directory.


  3. Run the program

    ./assembler {input_file_name}
    • Replace {input_file_name} with the path to the input file you want to assemble. A sample input file, input.txt, is provided in the src directory; to test with it, type ./assembler input.txt.

    This will generate the Intermediate File, Object Code, and Listing File in the src directory. The command line will display the errors if any. If there are any errors in the First Pass, the Second Pass will not be executed and the user will be prompted to correct the errors in the Intermediate File.


  4. Clean the program

    make clean

    In case you want to clean the program, you can run the above command. This will remove the object files and the executable file.


Future Scope

  1. Parallel Processing:
    • The Two-Pass Assembler can be enhanced to support parallel processing to speed up the assembly process. This can be achieved by splitting the assembly process into multiple threads, each handling a separate Control Section. I reckon this will significantly reduce the overall assembly time and improve the performance of the assembler.
  2. Optimization Techniques:
    • The Two-Pass Assembler can be further optimized by implementing optimization techniques such as dead code elimination, constant folding, and loop optimization. Taking the idea of LLVM's MLIR 9 optimizations, we can transform the intermediate representation of the program to optimize the code and reduce the execution time.
  3. Program Blocks:
    • In addition to the already present Control Sections, this Two-Pass Assembler can be extended to support Program Blocks, which allow the separation of the program into multiple blocks. This will further enhance the readability and maintainability within a single program and facilitate the reuse of code.
  4. Macro Processing:
    • The Two-Pass Assembler can be enhanced to support macro processing, which allows the definition and expansion of macros in the assembly code. This will reduce code duplication and improve code readability. This part is the most exciting part for the future scope of this project for me!

    Please submit PRs if you have any ideas or suggestions for the future scope of this project. I would be more than happy to share, discuss, and implement them with the community!


About Me

As a pre-final year Computer Science and Engineering student at KLE Technological University, I, Shrihari, have developed a strong foundation in low-level systems, with a particular focus on Linux device drivers, interposing libraries, and the ELF file format. My proficiency in C and Modern C++ has been instrumental in my work with these systems.

My professional experience includes contributing to projects at NVIDIA, which has solidified my aspiration to pursue a career in Systems Engineering. This exposure has not only enhanced my technical skills but also made me realise my passion for working with low-level software.

Beyond my technical pursuits, I maintain a keen interest in Mathematics, finding intellectual stimulation in its principles and applications. Additionally, I derive great enjoyment from operating manual transmission vehicles, appreciating the mechanical intricacies and control they offer. I also love to tinker with hardware components!

These diverse interests and experiences have shaped my academic and professional trajectory, positioning me well for future endeavors in the field of Systems Engineering. I am currently preparing for GATE CSE 2025 for my Masters to explore various other Systems Engineering domains such as Computer Graphics, GPU Programming, HPC and Advanced Computer Architecture.


References

Footnotes

  1. L. L. Beck, "System Software: An Introduction to Systems Programming," 3rd ed., Pearson Education, 1996, Available: https://www.amazon.in/System-Software-Introduction-Systems-Programming/dp/0201423006.

  2. Wikipedia, "Hash function," Available: https://en.wikipedia.org/wiki/Hash_function.

  3. D. Salomon, "Assemblers and Loaders," Available: https://www.davidsalomon.name/assem.advertis/asl.pdf.

  4. Wikipedia, "AVL tree," Available: https://en.wikipedia.org/wiki/AVL_tree.

  5. Wikipedia, "Red–black tree," Available: https://en.wikipedia.org/wiki/Red%E2%80%93black_tree.

  6. Stack Overflow, "Why should hash functions use a prime number modulus?" Available: https://stackoverflow.com/questions/1145217/why-should-hash-functions-use-a-prime-number-modulus.

  7. Harvard University, CS61, "Storage," 2019, Available: https://cs61.seas.harvard.edu/site/2019/Storage/.

  8. GNU, "GNU Make," Available: https://www.gnu.org/software/make/.

  9. LLVM, "MLIR: Multi-Level Intermediate Representation," Available: https://mlir.llvm.org/.
