• Welcome!

    It's bright & sunny here outside of the cave* of deep learning / LLMs / gen AI.

     

    Most are trying to build AI from human-level capabilities (up to "AGI"), and yet hallucination suggests something big is missing.

     

    That's our opportunity here: to build what's missing from the bottom up, starting from simple examples of animal intelligence, for AI that is grounded by and transparent to its training.

     

    In hard-nosed machine learning language, we're building AI that learns without an external loss or objective scoring, at a lower level of abstraction, through Agents whose objectives are generated dynamically as a function of their architecture (like instincts, which trigger the Agent to self-label) and past experience.

    [Image: Plato's Cave of Hyper-Param "AGI"]

    Key Points & Performance Overview:

     

    • At present, many-to-one, or N-to-1,** inputs-to-output is the most reliable Agent configuration, or Arch.

     

    • Our Hello, World is the simplest possible Agent: it can learn to associate any 3 input channels with 1 output channel (3-to-1) and is conceptualized as an Agent with clam-level cognition; try it here.

     

    • The Netbox Agent is a fork of the simplest Agent with each channel scaled up to 10 binary neurons (3-to-1, with 30 input and 10 output neurons).

     

    • You can create and invoke Agents of arbitrary sizes (thousands of neurons or more), depending on your application data model, by forking and modifying existing Agent Archs to create your own; a rough sizing sketch follows below.
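
    The sizing arithmetic behind these configurations fits in a few lines. The helper below is an illustrative sketch only (it is not part of our API or the Arch code); it assumes the N-to-1 layout described above, with one extra neuron for self-labelling, matching the Netbox Agent described further down.

```python
# Illustrative sizing sketch only -- not part of the API or the Arch code.
# An N-to-1 Arch has N input channels and 1 output channel, each made of a
# fixed number of binary neurons, plus (we assume) 1 self-labelling neuron.

def total_neurons(input_channels: int, neurons_per_channel: int) -> int:
    """Rough neuron count for an N-to-1 Agent Arch."""
    inputs = input_channels * neurons_per_channel   # e.g. 3 x 10 = 30
    outputs = 1 * neurons_per_channel               # e.g. 10
    labelling = 1                                   # self-labelling neuron
    return inputs + outputs + labelling

print(total_neurons(3, 10))  # 41 -- matches the Netbox Agent described below
```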

     

     

     

     

     

    Step 1: Invoke our Basic Agent to learn the main API call.

     

     

    Step 2: Configure your own custom Agents, unique to your application's data model.

     

     

     

    Data privacy & infrastructure note: Unlike with tokenization, your data is kept private. You only ever pass your data, in binary, to our API, and we store that binary per Agent on AWS DynamoDB; the binary data encoder-decoder is never exposed to the Agents or to our API and remains in your control. Lambda functions retrieve and run Agents as invoked through the API, given a unique Agent ID which you can associate with your own unique user/customer IDs.
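
    A minimal sketch of what this looks like from your side, assuming a hypothetical codec class of our own naming (nothing below is part of our API): the encoder-decoder and the ID-to-meaning mapping stay in your code, and only fixed-width binary strings plus an Agent ID are ever handed to the API.

```python
# Hypothetical client-side codec -- it never leaves your infrastructure.
# Only the binary strings it produces, plus your Agent ID, are sent to the
# API; decoding the Agent's binary output back to real values stays local.

class LocalCodec:
    """Fixed-width binary encoder-decoder kept under your control."""

    def __init__(self, width: int):
        self.width = width

    def encode(self, value_id: int) -> str:
        return format(value_id, f"0{self.width}b")   # e.g. 5 -> '0000000101'

    def decode(self, bits: str) -> int:
        return int(bits, 2)

codec = LocalCodec(width=10)
request_bits = codec.encode(5)     # this string is all the API ever sees
agent_id = "example-agent-id"      # hypothetical; one Agent ID per user/customer
```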

  • API Endpoint & Reference 

    Invoke with this temp API key: buildBottomUpRealAGI

     You'll receive a unique, private key by email soon.

     

    A simple 9-neuron Agent, what we call our Basic Clam demo, is pre-loaded in the API for you to test.
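
    To make Step 1 concrete, here is a minimal sketch of invoking that pre-loaded demo with Python's requests library. The endpoint URL, payload field names, Agent ID, and response shape are placeholders we made up for illustration; the temporary API key is the one above, and only binary values cross the wire. See the API reference for the actual call.

```python
# Minimal sketch only -- the endpoint URL, field names, Agent ID, and
# response shape are placeholders; see the API reference for the real call.
import requests

API_KEY = "buildBottomUpRealAGI"             # temporary key from this page
API_URL = "https://api.example.com/invoke"   # placeholder, not the real endpoint

payload = {
    "api_key": API_KEY,
    "agent_id": "basic-clam-demo",   # placeholder ID for the 9-neuron demo
    "inputs": ["1", "0", "1"],       # 3 binary input channels, encoded locally
}

response = requests.post(API_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json())               # the Agent's binary output, decoded locally
```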

  • To Configure Your Own Agents

    1) Fork an Agent Arch as a Reference Design / Template

    Go straight to the Arch code, or interact with a few Agent Demos first to understand their performance.

    2) Change the Number of Neurons Depending on Your Data Model

    Agents are made up of binary neurons, so you will need to encode your application's data model -- all possible inputs, outputs, and reward signals -- into binary through a fixed encoding. The number of binary digits needed represents the number of neurons your Agent should have.

     

    For instance, our Netbox Agent is built to learn how a network device's Manufacturer, Site, and Type are inter-related with the device's Role, trained against specific individual local networks. In the general Netbox application data model, the Manufacturer, Site, Type, and Role are each represented as unique IDs. We've used 10 binary digits per category, or data channel, allowing us to encode up to 2^10 = 1024 unique IDs per channel. As such, our Netbox Agent is made up of 41 neurons: 10 binary neurons each for Manufacturer, Site, Type, and Role, plus 1 neuron for labelling.
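
    As a concrete sketch of that encoding (the helper and field names below are ours for illustration, not part of the Netbox application or our API), assuming each category's unique ID fits in 10 bits:

```python
# Illustrative encoding of the Netbox data model into fixed-width binary.
# Each channel (Manufacturer, Site, Type -> Role) gets 10 binary digits,
# so unique IDs must stay below 2**10 = 1024 per channel.

WIDTH = 10

def encode_channel(unique_id: int, width: int = WIDTH) -> str:
    """Encode one Netbox-style unique ID as a fixed-width binary string."""
    if not 0 <= unique_id < 2 ** width:
        raise ValueError(f"ID {unique_id} does not fit in {width} bits")
    return format(unique_id, f"0{width}b")

# Hypothetical device record using Netbox-style unique IDs.
device = {"manufacturer": 12, "site": 3, "type": 7, "role": 1}

inputs = [encode_channel(device[k]) for k in ("manufacturer", "site", "type")]
target = encode_channel(device["role"])

print(inputs)  # three 10-bit strings -> the Agent's 30 input neurons
print(target)  # one 10-bit string    -> the Agent's 10 output neurons
```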

    Thank you for reading.

    Say hi on Discord.

  • Join Us in Building AGI Bottom-Up 

    Open Plan

    For experimentation

    Free while in beta

    Limited # of Kennels & Agents

    Possible API throttling

    Show Some Love


    GitHub Sponsorship

    For developers & supporters

    Starting at $5 per month

    Your own API token

    Unlimited # of Kennels & Agents

    First access to developments

    Support from founders

     

    We're an early-stage startup and your sponsorship -- as little as $5 or a cup of hipster coffee a month -- really helps us validate our business.

     

    Thank you!

    Enterprise Access

    For large teams

    Starting at $1000 per month

    Premium support from founders

    Response time SLA

    Consulting to validate and implement custom use-cases

  • Footnotes

     

    * Reality always casts a shadow. There is always the thing and the shadow of that thing (Plato's allegory of the cave). Status quo AI is trained exclusively on the shadows (words, images, videos) -- no wonder it hallucinates!

     

    ** Measures of AI accuracy presuppose an objective measure of success, and so are associated with the deep pre-trained AI systems we are endeavoring to extend. Measuring the accuracy of an Agent is possible given a context; some applications ought to be more subjective, like device discovery per network or recommendations per user.

     
