Mystery Project - Please Stand By

Mystery project sponsored by a top-secret entity so classified that it doesn't have a three-letter abbreviation. See the book for details.

Right now this is a "mystery project," which might seem to imply some kind of anti-gravity, thought control, world domination, teleportation, or perhaps the recreation of lost species such as the saber-toothed tiger or the dodo from archaeological DNA. Or else it is just a placeholder, so that I can start an entry for the wild card challenge before the deadline, which will still allow time for me to copy some material over from my previous projects on this site. Stay tuned! Perhaps some kind of "book" about DSP and AI technologies for microcontrollers might seem more likely. Whether you achieve any of the other aforementioned goals is up to you. Enjoy!

For whatever it is worth, if I add up the word count for every project that I have ever submitted to Hackaday, I don't quite know whether I have hit the 50,000 or even the 100,000-word mark yet.  The 100,000-word mark certainly seems doable, or even twice that, if I were to simply "crank out" some documentation for the source code in order to get an "impressive page count."  Oops, not supposed to say "crank!"  In any case, there is easily enough material that could be reformatted into an actual "book," with sections on such things as compiler design, microcontroller interfacing for AI and DSP applications, robotics control theory, music transcription theory, and the like.  Writing a book certainly seems like a good idea, especially in this day and age, when it is possible to go someplace like FedEx, or perhaps elsewhere, and order single copies of an actual book, printed on demand.

Even if nobody buys books anymore.

Yet there is also the newly emerging field of content creation for use with Large Language Models, which is a wide-ranging and dynamic frontier.  Contemporary reports suggest that Meta has an AI that can pass the national medical boards, trained with only eight billion parameters, resulting in a roughly 3 GB executable that can run standalone as an app, without needing to connect to the cloud for processing.  I haven't checked whether there is a download for my iPhone or my Samsung Galaxy tablet yet, but it seems like they are more on the right track than either OpenAI or Google Bard, in at least one respect.

Yet if it indeed turns out that you can model a neural network in less than a thousand lines of code, and if eventually everyone ends up running the same code, then it seems that LLMs, in one form or another, are in some ways going to become the "new BASIC," based on their potential to revolutionize computing, just as Microsoft BASIC, Apple II BASIC, and Commodore BASIC did in the '70s.  Even if BASIC was actually invented by someone else, somewhere else, in the '60s, as we all know.
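To make the "less than a thousand lines" claim concrete, here is a minimal sketch of a one-hidden-layer network trained by plain backpropagation in pure Python. Everything here is an illustrative choice, not anyone's shipping code: the layer sizes, learning rate, epoch count, and the XOR toy problem are all arbitrary assumptions for the sake of the example.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A one-hidden-layer network trained with plain backpropagation."""

    def __init__(self, n_in, n_hidden):
        # weights include a trailing bias term on each layer
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0])))
                  for row in self.w1]
        return sigmoid(sum(w * v for w, v in zip(self.w2, self.h + [1.0])))

    def train(self, x, target, lr=0.5):
        o = self.forward(x)
        d_o = (o - target) * o * (1.0 - o)        # output-layer delta
        for j, h in enumerate(self.h):            # hidden-layer deltas
            d_h = d_o * self.w2[j] * h * (1.0 - h)
            for i, v in enumerate(x + [1.0]):
                self.w1[j][i] -= lr * d_h * v
        for j, v in enumerate(self.h + [1.0]):
            self.w2[j] -= lr * d_o * v
        return (o - target) ** 2                  # squared error before update

XOR = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
       ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

net = TinyNet(2, 8)
first = sum(net.train(x, t) for x, t in XOR)
for _ in range(5000):
    for x, t in XOR:
        net.train(x, t)
last = sum((net.forward(x) - t) ** 2 for x, t in XOR)
print(f"total squared error: {first:.3f} then {last:.3f}")
```

Whether or not this counts as a "real" neural network is beside the point; the point is that the entire mechanism fits in a few dozen lines, which is what makes the "new BASIC" comparison tempting.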

So meet the new bot, same as the old bot?

This will not be without some controversy, that is, if we really dig into the history of AI and look at some of the things that others tried to accomplish, and what therefore might be accomplished today: what if the original checkers program were run on modern hardware, or just what was the original Eliza really capable of, and so on.  Not that these old applications won't require modifications to take advantage of larger memory, faster CPUs, and parallelism.  Yet therein lies another murky detail, in that many older computer programs, as well as video games, were copyrighted; and every now and then even a long-thought-dead company like Atari seems to reappear, since it seems there just might still be some interest in a platform like the 2600.  Yet will it be hackable?  Will there be an SDK?  Will someone make a ChatGPT plugin cartridge that provides a connection to the Internet over WiFi, but pretends to be an otherwise normal 2600 or 7800 game with LOTS of bank-selected ROM, so that it can do any additional "processing" either on the card or in the cloud?  Just because it would be fun to do, and it would be cheap!

Of course, I haven't seen the inside of the new 2600 yet, but if it were up to me, I would be using a Parallax Propeller P2 as a cycle-accurate (if possible), or at least cycle-counting, 6502 emulator running on at least one cog, while supporting sprite generation, audio generation, and full HDMI output, with or without "upscaling" to 480, 720, or even 1080 modes, even when running in classic mode, while providing a platform for connecting things like joysticks, VR headsets, or whatever...


The Money Bomb 0419.pdf

This is the most current copy of the "documentation" for the project "The Money Bomb," written in a style similar to what you might find in a peer-reviewed journal.

Adobe Portable Document Format - 156.34 kB - 09/12/2023 at 20:31



Index file for "Alice in Wonderland," with word counts, sorted by frequency of appearance, then alphabetically. Generated from the Project Gutenberg version by the "Algernon" chat engine during "training."

idx - 62.27 kB - 09/12/2023 at 20:27



Index file for "Treasure Island," with word counts, sorted by frequency of appearance, then alphabetically. Generated from the Project Gutenberg version by the "Algernon" chat engine during "training."

idx - 120.06 kB - 09/12/2023 at 20:26



Notes from a previous project on converting the UCSD Pascal compiler to C++, while creating a new type of compiler for AI and neural applications by giving them digital DNA.

Adobe Portable Document Format - 225.17 kB - 09/12/2023 at 20:25


Modelling Neuronal Spike Codes.pdf

Project description, details, and log files for the project "Modelling Neuronal Spike Codes," as of June 10, 2023, at 4:34 AM, in PDF form.

Adobe Portable Document Format - 2.08 MB - 09/12/2023 at 20:22



  • 1 × Parallax Propeller P2 Evaluation board (recommended) or P2 Edge dev kit
  • 1 × C++ compiler supporting C99 or later, to cross-compile the Pascal compiler
  • 1 × Software components comprising the SDK, available on GitHub
  • 1 × Prototyping supplies, such as perf board, IC sockets, DB-9 and DB-15 connectors, or compatible headers

  • An Infinite Variety of Digressions

    glgorman, 09/13/2023 at 03:28

    Looking at word counts, it appears that the PDF version of “Modelling Neuronal Spike Codes” comes in at 9,013 words in total, whereas “Mystery Project” has 14,098 words, now that “Notes on Simulating the Universe” and the material from “Using AI to Create a Hollywood Script” have been added.

    • Mystery Project           14,098
    • Yang-Mills Conjecture      6,558
    • Modeling Spike Codes       9,013
    • Prometheus                 8,575
    • The Money Bomb             3,473
    • Rubidium                   8,893
    • Motivator                  6,513

    While I haven’t made a PDF yet of Tachyonic Quasi-Crystals, Tiny-CAD, or Project Seven, the materials listed above appear to have a total word count of around 57,123 words. I asked OK Google for the length of the average book, and the suggestion is that a typical novel runs around 80,000 words, but ranges between 60,000 and 100,000; whereas most academic books range between 70,000 and 110,000 words, “with little flexibility for anything not in between.”
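For what it is worth, the per-project figures above really do sum to the stated total; a quick sanity check (counts copied from the list):

```python
# word counts per project, as listed above
counts = {
    "Mystery Project": 14098,
    "Yang-Mills Conjecture": 6558,
    "Modeling Spike Codes": 9013,
    "Prometheus": 8575,
    "The Money Bomb": 3473,
    "Rubidium": 8893,
    "Motivator": 6513,
}
total = sum(counts.values())
print(total)  # 57123
```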

    So I suppose that quite a bit more editing will be in store, but I might be in the right range if I include the Altair stuff (Project Seven) and the Tiny-CAD stuff, with a lot of editing to make things look more like a book and less like a collection of log entries, of course.  Likewise, there are plenty of things to draw chapter and section titles from, all without having to dig up old materials from my college days, which would probably take a decade to scan, categorize, revise, and edit.  But who knows?  Maybe not such a bad idea.  From what I have read, GPT-4 is still having trouble with AP Physics, although it has been getting better.

    Then again, I think I might have about 25,000 or so words' worth of material in the form of chat logs from chatting with MegaHAL, and I could grab a whole bunch of “interesting stuff” out of those files, even if I delete the gibberish portions; and if I add that material back into the training set, maybe I can get to the 100,000-word mark sooner than even I might otherwise imagine.

    Since this project is now evolving toward a goal of trying to achieve something that hints at Artificial General Intelligence, it might seem worthwhile to contemplate one very important and potentially useful digression, with respect to at least one application of the theory that AGI will depend heavily on some kind of geometrization of space-time.  Thus, this might be a good time to discuss the concept of "tiny houses."  Tiny houses are all the rage. So why are there still so many homeless people? Unfortunately, as popular as the "tiny" house meme is, the bitter truth is that the construction costs can often rival those of their full-size cousins, and what might seem to start out as a five- to ten-thousand-dollar project can easily turn into a fifty- to one-hundred-thousand-dollar money pit, once the costs of permits (when required), materials, labor (if you aren't able to DIY), and so on are added in. Then there is the cost of CAD software. Now, if you are doing work that requires a permit, you might not have any choice but to use AutoCAD or Vectorworks, but even then you might...


  • Such a Sensitive Child - So Unruly and Wild ....

    glgorman, 09/12/2023 at 20:47

    So I tried writing a paper that more or less claims that certain aliasing vs. anti-aliasing properties of Fourier transforms might give rise to the Yang-Mills mass gap, and then I added about 7,000 words on that topic, along with a bunch of other stream-of-consciousness material, to MegaHAL's training set.  I also added about 4,000 words of additional material from the "How to Lose Your Shirt in the Restaurant Business" log entry, as well as the material from "Return of the Art Officials," so that the newly added material comes in at around 15,000 words on top of the previous total.  So let's head off to the races, shall we?

    For readability, here is an excerpt from my "paper":

    I have discovered a remarkable theorem for generalizing the problem of constructing polyphase filter trees, which makes use of an algorithm that eliminates the apparent need for recursion when developing this type of filter topology. Of course, besides their utility in audio processing applications, Fourier transform methods can also be relevant to such tasks as solving for the eigenstates of the Laplacian operator acting upon a lattice. Yet when we also contemplate the situation for a spin-zero particle, or other bosonic entity, where computing the transport function should also be trivial, we might infer that if each Higgs particle operates over a realm, as we can demonstrate, then it is just as easy to postulate that the transport theorem could operate as if acted upon by a Hadamard-gate-based matrix formulation. Hence, from the geometrization perspective, this could perhaps look something like a diamond lattice. Yet it can also be shown that any finite theory will have to admit one or more band gaps in the compartmentation model, owing to the aliasing vs. anti-aliasing properties associated with transformations that operate upon a lattice. In fact, many possible lattice formulations for a Yang-Mills theory can be deduced, such as an auto-regressive formulation that derives its scale parameters from the properties of the Riemann zeta function, or a “magic kaleidoscope” model based upon the idea that the universe is actually just data, that we are living in a simulation, and that the simulation must therefore be running on a two-dimensional variant of a Universal Turing Machine, in turn based upon Conway’s Game of Life.
Through various algorithmic manipulations, such as simply “wadding up a two-dimensional sheet into a ball,” we can make our simulated universe space-filling in an informal sense; or else a more rigorous approach could be taken, which maps the Hilbert curve onto the theory of the unimodular lattice in any dimension, and which also has applications to the generation of Gray codes.

    Whether this or any other "unsolved problem in physics" can be solved with the help of AI certainly seems like a worthwhile adventure.  Thus, one approach that seems worthy of exploration is the possibility that artificial intelligence could be used to search for at least an outline of a proposed solution. Henceforth, having conversations with a chatbot that has been programmed to discuss at least some of the more salient aspects of nuclear physics ought also to bear some fruits worthy of further cultivation. The problem of consciousness is another direction that I am contemplating venturing into, with respect to some of the meanderings and digressions that have been discussed elsewhere, yet which are here also ripe for further development.

    So while chatting with an AI that mostly spouts gibberish might not seem all that productive, other than as a creative adjunct, the results might in fact turn out to be quite useful in the long run. Suppose that every time the AI produces gibberish, I simply give it the benefit of the doubt and gently correct it, by responding with a more carefully thought-out...


  • How to Lose Your Shirt in the Restaurant Business

    glgorman, 09/12/2023 at 20:41

    Alright, I asked Google Bard to write an outline for a book entitled "How to Lose Your Shirt in the Restaurant Business."  Maybe I should also try asking it for an outline for "How to Get Taken to the Cleaners in the Fashion Industry," or perhaps there is another take on "How the Rigged Economy Will Eat You Alive."  Somehow, I suspect that no matter what I ask, I will get something similar to what you see here.  So let's begin to tear this apart, line by line, shall we?  Why, you ask?  Well, why not?  Writing "something" from some kind of template is a time-honored hack of sorts, especially popular in college when you know your term paper is due the next morning and you have been partying all semester.  There was a time when many a graduate student could pay their way all the way to an MBA or law degree, or whatever, by offering some kind of term-paper-writing service, that is, for those who were desperate enough, and who had the $25 to $50 per hour to pay the aforementioned grad student (or upperclassman) to help get a vulnerable, pathetic freshman or sophomore out of a jam.  Some term paper writers are more ethical than others: do you have your outline on 3" x 5" index cards?  Did your instructor already approve your outline and abstract?  If the answer is "yes," then if you have the cash in hand we can proceed; otherwise, sorry, I can't help you on this one.  Now, let's get back to bashing Google, so that we can have some kind of idea of what should go into an outline, or not.

    First, Google Bard says this, which is pretty hard to screw up, no matter how hard they try.

    Sure, here is an outline for a book entitled "How to lose your shirt in the restaurant business":

    So they continue now with the suggestion that we have some type of "Introduction," and then they tell us that "The restaurant business is a notoriously risky one."  Likewise, "Many restaurants fail within their first few years of operation."  They then suggest that "This book will outline some of the most common mistakes that restaurant owners make."

    Now let's rewrite what I just said in Perl. (Warning: my Perl might be a bit rusty, no pun intended.)

    # Note: $1 through $5 are read-only regex capture variables in Perl,
    # so ordinary scalars are used instead; "." (not "+") concatenates.
    my $biz = "restaurant";
    my $s2  = "Introduction";
    my $s3  = "The $biz business is a notoriously risky one";
    my $s4  = "Many " . $biz . "s fail within their first few years of operation";
    my $s5  = "This book will outline some of the most common mistakes that $biz owners make";
    print "So they continue now with the suggestion that we have some type of $s2, ";
    print "and then they tell us that $s3. ";
    print "Likewise, $s4. ";
    print "They then suggest that $s5.\n";
    O.K. For whatever it is worth, now we have some kind of "template" for re-hashing someone else's material while we wait for the deluxe pizza to arrive.  Just plug some relevant stuff into the template variables, and before you know it, my CGI will call your CGI, and maybe they will hook up, or something like that.  Maybe that is what is supposed to happen.  Otherwise, at least this is something that can be programmed on a microcontroller, possibly in MicroPython, for example, instead of Perl, since I know that MicroPython exists on the Parallax Propeller P2, as well as on some of the better Arduino platforms.  So let's toil on, shall we?

    If we are willing to state that pretty much any business can be a risky one, we can play with our analysis script a little further, just in case "The DJ business is a notoriously risky one" also, or the "wedding photography" business, and so on.  It is also a fact that most small businesses fail within the first five years or thereabouts, that is, according to pretty much every book ever written on the subject, as well as the IRS and the SBA, and quite likely your own local Chamber of Commerce.  So we could perhaps add more Perl or Python code to help create the data for the template strings.  Now let's get on with the task of trashing the first...
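The same template trick carries straight over to Python, which is closer to what would run under MicroPython on a microcontroller. This is just a sketch; the business names in the list are placeholder examples of mine, not anything Bard actually produced:

```python
# A template with one slot, filled in for several "risky" businesses.
TEMPLATE = ("The {biz} business is a notoriously risky one. "
            "Many {biz}s fail within their first few years of operation. "
            "This book will outline some of the most common mistakes "
            "that {biz} owners make.")

for biz in ("restaurant", "DJ", "wedding photography"):
    print(TEMPLATE.format(biz=biz))
```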


  • Now They Say "The Script Will Just Write Itself"

    glgorman, 09/12/2023 at 20:36

    So I got an e-mail from Grammarly, which stated that last week I was more productive than 99% of users, more accurate than 99% of users, and used more unique words than 99% of users.  Then they want me to upgrade to "premium."  Why?  Silly Rabbits!?!  Even if I came nowhere near my goal of creating 3,000 words per day of new content.  Somehow they think that I "reviewed" 2,313,130 words, which might mean that I spell-checked the same 46,000-word document something like 50 times, or thereabouts; so I am not sure that that is a meaningful number, since it doesn't really reflect the amount of new content that I actually created, whether I uploaded it to this site, or any other site, or not.

    Yet the claim that I personally used 4,487 "unique words" is kind of interesting, since I just so happened to be commenting in an earlier post about the idea of counting the total number of words in a document, along with the number of times that each word is used, so as to be able to base LLM-like attentional systems on any of a number of metrics, such as categorizing words as "frequent, regular, or unique," where by "unique" I meant "seen only once," as opposed to "distinct," which might have a similar meaning.  Still, 4,487 words doesn't seem like a lot, especially if I was working on a "theory of the geometrization of space-time."  On the other hand, I somehow seem to remember seeing somewhere that, based on widely accepted statistics, that number also comes pretty close to the typical vocabulary of the average five-year-old.
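The kind of index described above, and generated for the .idx files in the downloads (word counts sorted by frequency of appearance, then alphabetically, with a "frequent / regular / unique" split), takes only a few lines of Python. This is a sketch of the idea, not the actual Algernon code; the cutoff for "frequent" is an arbitrary choice for illustration:

```python
import re
from collections import Counter

def build_index(text):
    """Word counts, sorted by descending frequency, then alphabetically."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted(Counter(words).items(), key=lambda kv: (-kv[1], kv[0]))

def categorize(count, frequent_at=10):
    # "unique" means seen only once; the threshold for "frequent" is arbitrary
    if count == 1:
        return "unique"
    return "frequent" if count >= frequent_at else "regular"

sample = "the cat sat on the mat and the cat slept"
for word, n in build_index(sample):
    print(f"{n:4d}  {word}  ({categorize(n)})")
```

Running the same function over a Project Gutenberg text instead of the toy sentence would reproduce something in the spirit of the "Alice in Wonderland" and "Treasure Island" index files.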

    Meanwhile, in other news, there is a headline that I saw the other day about how "the script will just write itself."  Uh-huh, sure.  Maybe it will; maybe I could just chat it up with a dense late-'90s vintage chatbot, as I have been doing, and will "just somehow" get some dialog worthy of a network sitcom.  But creating the training set that makes that possible is another matter altogether.  Funny, now I am thinking out loud about what might happen if I register this "project" with the WGA, and then submit a letter of interest to one of the A.I. companies that claim they will be willing to pay up to $900,000 for a "prompt engineer," or whatever, if anyone actually believes that those jobs aren't going to get filled with people from India or the Philippines who will "work" for 85 cents per hour.  Or else maybe I could write to Disney, and see if they get back to me about how I trained an AI on the Yang-Mills conjecture, and see if they say "too complicated," too many words, just like someone once said to Mozart about "too many notes," regarding a certain piece.  Oh, what fun, even if the only union that I was ever a member of was "United Teachers of Richmond, CA," back in the late '90s.

  • Using A.I. To Create a Hollywood Script?

    glgorman, 09/12/2023 at 20:34

    Now that I have your attention (clickbait warning), I think it is only fair to point out that what I have just suggested is actually quite preposterous. Yet as either Tweedledee or Tweedledum, or someone else, said, it is always important to try to imagine at least six impossible things before breakfast. So not only will we attempt to create a "potentially marketable script" using AI, but we will demonstrate how it can be done with at least some help from an Arduino, R-Pi, or Propeller. Now obviously, there will be a lot of questions about what the actual role of A.I. will be in such an application. For example, one role is one where an A.I. can be a character in an otherwise traditional storyline, regardless of whether that story is perceived as science fiction, like "The Matrix," or more reality-based, like "WarGames." The choice of genre will therefore obviously have a major impact on the roles that an A.I. could have, or not have!

    It's hard to believe that sometime back around 2019, 2020, or even 2021, I was tinkering with a Parallax Propeller 2 Eval board, even though my main use case, at least for now, has been along the lines of developing a standalone PC-based graphical user interface for applications running on the P2 chip.  So I went to work writing an oscilloscope application, and an interface that lets me access the built-in FORTH interpreter, only to decide, of course, that I actually hate FORTH; so that, in turn, I realized that what I really needed to do was write my own compiler, starting with a language that I might actually be willing to use, and maybe one that could make use of the FORTH interpreter, instead of, let's say, p-System Pascal.

    Then, as I got further into the work of compiler writing, I realized that I actually NEEDED an AI to debug my now mostly functional, but still broken for practical purposes, Pascal compiler.  So now I needed an A.I. to help me finish all of my other unfinished projects.  So I wrote one, by creating a training set using the log entries for all of my previous projects on this site as source material, and VERY LITTLE ELSE, by the way.  And thus "Modelling Neuronal Spike Codes" was born, based on something I was actually working on in 2019.  And I created a chatbot by building upon the classic bot MegaHAL, and got it to compile, at least, on the Parallax Propeller P2, even if it crashes right away because I need to deal with memory management issues, and so on.

    So yes, sugar friends, hackers, slackers, and all the rest: MegaHAL does run on real P2 hardware, pending resolution of the aforementioned memory management issues.  Yet I would actually prefer to get my own multi-model ALGERNON bot engine up and running, with a more modern compiler, that is, because the ALGERNON codebase also includes a "make"-like system for creating models and so on.  Getting multiple models to link up and interoperate is going to have huge significance, as I will explain later, such as when "named-pipes" and "inter-process" transformer models are developed.

    Else, from a different perspective, we can examine some of the other things that were actually accomplished.  For example, I did manage to get hexadecimal data from four of the P2 chip's A/D converters to stream over a USB connection to a debug window while displaying some sheet music in the debug terminal app, even if I haven't included any actual beat and pitch detection algorithms in the online repository quite yet, in any case.  Yet the code to do THIS does exist nonetheless.

    Now here it is 2023, and I created an AI based on all of the project logs that I wrote for the previous projects, as discussed; and of course, it turned out to be all too easy to jailbreak that very same A.I., and get it to want to talk about sex, or have it perhaps acquire the desire to edit its own source code, so that we can perhaps meet our new robot overlords all that much sooner, and so on. ...


  • Notes on Simulating the Universe

    glgorman, 09/06/2023 at 19:33

    Simulating the universe is a popular theme, with many variations.  One interpretation is that we are all actually, in fact, living in some kind of virtual reality, whether our brains are actually in jars, or perhaps we still have our bodies, but they are in turn being hosted in some type of life-support pods, as in the movie The Matrix, which is just one of the many variations on this theme.  The overall idea is nothing new.  Plato's allegory of the cave is also well known.  And let's not forget the classic Star Trek episode "For the World Is Hollow and I Have Touched the Sky," even if that one strictly did not involve the more convoluted type of dream within a dream, challenging ideas about the ultimate nature of reality like some of the variations that more recent, and thus contemporary, writers have crafted.

    Since this isn't an actual hardware hack, YET, I have decided to create this page, even though I might at some point try to train an AI that works similarly to Eliza, MegaHAL, or a modern variation like GPT-2 or GPT-3 on a text such as this one.  So there is the possibility that this will turn into some kind of project, in some form or another.  Perhaps this approach will turn out to be helpful as far as getting a real AI to help solve climate change is concerned, or to create a department of PRE-CRIME, which would certainly not be without controversy.  More likely, I will end up writing yet another snooty chatbot.  Spoiler alert!

    So, the approach that I am contemplating right now is the idea of eliminating most of the universe altogether, as a universe that is more than 99.9999999% empty space (and that is nowhere NEAR enough nines) seems like a pretty big waste of space. Thus, some work needs to be done, perhaps with some improved compression algorithms.  Let us at least for now assume, therefore, that we are alone in the universe, and that all of those other stars and galaxies that we think we see are just figments of our collective imaginations.  Now let us also imagine that we don't really need the Sun, or the Earth, or the Moon either; we just need to find a way to put something like eight billion brains in one gigantic jar and get them all hooked up to some kind of life-support system, which, along with the right psychoactive substances, or fiber-optic interconnects, or whatever, would allow the illusion that the universe exists to continue, at least for now.

    So now we can do some math.  If an average human brain weighs in at around three pounds, or just under 1,336 grams for an adult male and 1,198 grams for an adult female, then the total brain mass of all humans currently living would be no more than 10.14 × 10^12 grams, or about 10.14 million metric tons.   Now, if we assume a density equal to that of water, that mass would fill a sphere 268.5 meters in diameter, if I did my pocket calculator arithmetic correctly.
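The pocket calculator arithmetic above does check out. Here is a quick verification, using the male/female average of the two brain masses and the water-density assumption, both taken from the text:

```python
import math

brains = 8e9                               # roughly eight billion people
avg_brain_g = (1336 + 1198) / 2            # grams, male/female average
total_g = brains * avg_brain_g
total_tonnes = total_g / 1e6               # 1 metric ton = 1e6 grams

volume_m3 = total_g / 1e6                  # water: 1 g/cm^3, i.e. 1 tonne/m^3
radius_m = (3 * volume_m3 / (4 * math.pi)) ** (1 / 3)
print(f"{total_tonnes / 1e6:.2f} million metric tons, "
      f"sphere {2 * radius_m:.1f} m in diameter")
```

This reproduces both figures in the paragraph: about 10.14 million metric tons, in a sphere roughly 268.5 meters across.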

    Then we could, in principle, get rid of most of the rest of the universe, and nobody would ever be the wiser; that is, if we could build such a device, provide power to it, etc., and then somehow transfer all human consciousness into it.  Of course, nothing says that such a device would need to be organic in nature; that is to say, if consciousness is simply some function of having enough qubits, then maybe some kind of quantum computing technology could do the job, one which has not yet been invented, but which could work, at least in theory, without rewriting the laws of physics as we presently understand them.

    How about using boron-nitride-doped diamond as a semiconductor, for example?  Then we could build a giant machine that runs Conway's Game of Life, which we know is Turing complete.  Now, just such an approach, if we could figure out how to power it and how to cool it, should at least in principle solve some of the other issues associated with organic systems....



  • 1
    Instructions: Or - Let's Get the Party Started!

    Ideally, from a hardware point of view, you are probably going to want to obtain some kind of robot chassis for development and experimentation.  Since Elon's robots aren't shipping yet, and I am not aware of an affordable variant of the really cool stuff that Boston Dynamics is doing, we will probably have to settle for a much simpler platform, such as an upgraded Parallax Boe-Bot, which can be converted to work, for example, with an ATmega 2560 board and a Parallax prototyping board, as seen in the pictures.   Thus, from a hardware point of view, this is a pretty simple project, since you can just as easily get rid of the robot altogether and just use any Arduino or Parallax prototyping system capable of "blinking" one LED or another when we give it commands like "KILL TROLL WITH SWORD!" or "FIND CRYSTAL PYRAMID IN MAZE."  Eventually, that is the sort of thing that we should hope to accomplish, whether in the real world or in some simulated universe.  Yet for now, because this project is tinkering at the forefront of AI, it should be understood that this is going to be a VERY hard project from the software standpoint, yet the effort spent will hopefully be well worth it.

  • 2
    Getting out of the Sandbox.

    Thus, one approach might be to obtain the latest C++ version of MegaHAL, which I have put in the GitHub repositories, along with some additional tools, which can be used to create a "makefile-like" system for creating training sets for pretty much any AI that can be trained on standard text input.  Likewise, there is a port of the UCSD Pascal compiler that is "mostly complete," which is also in the GitHub repositories, and which will eventually turn out to be useful not only for creating new build tools for microcontrollers, but also for improving how AI deals with "logical problems" in general, i.e., handling problems of the "All men are mortal" variety and so on, by allowing the AI to handle class hierarchies and reflection at runtime.  The Propeller Debug Terminal source, in turn, provides a method for "getting out of the manufacturer's sandbox," as it were, so that you can interact with your bespoke applications in a more highly customizable way than the usual vendor-provided tools allow.

  • 3
    Implementing an Efficient AI that is Ready for the Real World.

    As can be seen from the project logs, I have achieved some "interesting" results by training an AI on some principles of compiler design and music transcription theory; and then I managed to get it to develop an interest in religion, sex, and, of course, a "claim to have an outline for the Yang-Mills conjecture" of its own, that is, when it is not possibly "streaming to the galaxy," or whatever it actually said.  As the size of the training set increases, the AI should eventually be able to handle "Zork-like" commands on a real physical robot, as discussed.  Even though that is going to be a VERY hard project, it is easy to see, based on what has been learned about LLMs in recent times, that this is very doable, even with modest hardware.  So if you have ever wanted to build a real-life SHRDLU system, with an actual robot hand that can pick up things and do things in response to typed or spoken commands, then this project is the "missing piece" of the puzzle that will help you bring all of that other hardware fun together.

