Our journey to build an A.I. with Perl.


We love Perl at Supervene LLC! So we have documented our failures and successes while developing an A.I. using the Perl language.



A.I. is a hot-button topic all over the internet with the release of ChatGPT. If you haven't tried it yet, I would highly recommend testing its abilities. However, the exact methods used to develop such a system are somewhat obfuscated by layers of different software, programming languages, and frameworks that have to be configured in such a way that it all just works together. While we don't yet know the full working set of ChatGPT (GPT-4), we do know what makes GPT-3.5 work, and here is the list:

Python, PyTorch, C++, CUDA, OpenCL, jQuery, Node.js, TensorFlow, WebAssembly, JavaScript, CSS, HTML, and probably others.

We don't really know what else is being used without downloading and diving deep into the code, but the point is, it is bloated. This is why we set out to build an A.I. solution that is simple and uses only standard "off the shelf" components. The only exception here is the CUDA libraries, but those are only necessary for the scaling process later on. So let's dive into the process and see what happens! How does it all work? This is the question we started with, so we will break it down without all the "lingo" and try to explain it in plain old English.

In order to create an A.I. similar to ChatGPT, you need to set up a neural network. This is just a fancy way of creating a system of checks that provide weights to saved values. A weight is a numeric assignment that determines how important or relevant an item is, so each word, number, or symbol has a weight. The weights are constantly shifted based on user input; for example, if you ask an A.I. which dog food is best, the weights are shifted for categories regarding dogs, food, and best. These shifts in the weights are called biases. So user input affects the output by creating biases in the saved data, and the rest of the weights are there to determine which words are used to form a complete sentence.

This is only a partial answer, because it doesn't explain HOW an A.I. knows what to use to form a proper sentence. So what's going on? Training. The A.I. must digest millions of lines of human sentences, paragraphs, articles, conversations, stories, and more in order to learn the proper syntax. A well-known method is N-grams. N-grams are short sequences of words (pairs, in the 2-gram case) that get evaluated to determine the weight of the leading word. The "N" in N-gram is a placeholder for a value, so it can be a 2-gram, 3-gram, 4-gram, or more if you wish. But N-grams larger than 4 or 5 begin to lose practical value, as the weights shift less and less between trainings. For instance, if we were to use a 2-gram training set of the phrase "This is a sentence for training", the 2-grams would be tokenized and appear as:

[This, is] [is, a] [a, sentence] [sentence, for] [for, training]

The idea behind n-grams is to see how often the leading word is followed by the next word in the sequence. To find out which word is most relevant, we assign weights. So now our example looks like this:

[This, is, 0] [is, a, 0] [a, sentence, 0] [sentence, for, 0] [for, training, 0]

This is a complete 2-gram data set that is untrained, so each weight is zero. In order to make this data useful, we need more data to create more finely tuned values. For this we will create a new test sentence: "This is a second sentence for training the AI". And here is the 2-gram data again:

  • [This, is, 0]
  • [is, a, 0]
  • [a, second, 0]
  • [second, sentence, 0]
  • [sentence, for, 0]
  • [for, training, 0]
  • [training, the, 0]
  • [the, ai, 0]

Combining the two 2-gram sets, you end up with the dataset below. We do not need to duplicate the lines that are the same; we can just throw out the duplicates and increase the weights, like so:

  • [This, is, 2]
  • [is, a, 2]
  • [a, second, 1]
  • [a, sentence, 1]
  • [second, sentence, 1]
  • [sentence, for, 2]
  • [for, training, 2]
  • [training, the, 1]
  • [the, ai, 1]

Now we can see how some word elements begin to have more "weight" or preference than others when forming a sentence. If these words could also be associated with granular topics, we could isolate them based on input biases, so that phrases ranked highly regarding A.I. do not appear when we ask about dog food. But before we can ask about dog food, we need to train our network on millions of lines of text so our A.I. can form a decent understanding of human sentence structure. For this we will need large amounts of text, or "corpus data".
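To make this concrete, here is a small stand-alone Perl sketch (an illustration, not part of our bot) that tokenizes the two example sentences above into 2-grams and merges the duplicates by increasing the weights, reproducing the combined list:

# Count 2-grams across the two example sentences
use strict;
use warnings;

my @sentences = (
    "This is a sentence for training",
    "This is a second sentence for training the AI",
);

my %counts;
foreach my $sentence (@sentences) {
    my @words = split /\s+/, lc $sentence;          # simple whitespace tokenizer, lowercased
    for my $i (0 .. $#words - 1) {
        my $ngram = join ' ', @words[$i .. $i + 1]; # the 2-gram key, e.g. "this is"
        $counts{$ngram}++;                          # duplicates just bump the weight
    }
}

print "[$_, $counts{$_}]\n" for sort keys %counts;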

Below is a list of the data files we have used for training our A.I. so far. Like most corpus data, cleanup is required to remove any abnormal symbols, misspellings, slang, abbreviations, or other unusual text (a small cleanup sketch follows the list). You can choose to leave them in if you wish, but the additional time spent creating n-grams and calculating weights is precious and can cost DAYS when training!


This collection is still rather small, but using 2-, 3-, and 4-gram configurations, we are able to create models with over 500 million parameters. We have noticed, though, that the 4-gram sets are not as helpful: their diversity becomes too great, so the weights do not shift as much as we initially thought. This could change as we move into larger models, and I assume it very much will once we begin scaling to a much larger system with multiple graphics cards for parallel processing. But for now, we are only using a standard workstation to test our training data builds.

Text Sources:
https://wortschatz.uni-leipzig.de/en/download/English
Corpus Text Used:
2019 News 1 Million Sentences
2018 News 1 Million Sentences
2017 News 1 Million Sentences
2016 News 1 Million Sentences
2014 News 1 Million Sentences
2013 News 1 Million Sentences
2010 News 1 Million Sentences
2009 News 1 Million Sentences
2005 News 1 Million Sentences
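As mentioned above, corpus cleanup is worth doing before training. The exact rules are up to you; the sketch below (illustrative only, not our full routine) lowercases each line and strips anything that is not a plain word character or basic punctuation:

# Rough per-line corpus cleanup (illustrative; tune the rules for your data)
sub clean_line {
    my ($line) = @_;
    $line = lc $line;                   # normalize case
    $line =~ s/[^a-z0-9 .,'?!-]/ /g;    # replace abnormal symbols with spaces
    $line =~ s/\s+/ /g;                 # collapse repeated whitespace
    $line =~ s/^\s+|\s+$//g;            # trim leading/trailing spaces
    return $line;
}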

We decided on a few basic requirements to command/control the A.I. and have basic interactions with it for testing. Rather than a web interface, our tests would run in a command-line interface (CLI). This lets us focus on the development rather than on how pretty the interface looks. The core commands of our CLI are:

count - Display the number of n-grams the program has loaded.
sample N - Display N random samples from the n-grams.
train - Train the data using all corpus text in the predefined directory.
save X - Save the trained data as n-grams in a "brain" file, where X is the name of the saved file.
load X - Load a previously saved "brain" file, where X is the file name, so we do not have to retrain between reloads.
brains - Display a list of saved "brain" files in the predefined model directory.
corpus - List all corpus data files in the predefined training directory.
help - List the commands if we forget what we are doing, because it happens when you keep revising.
clear - Purge the screen of all previous data.
exit - Close the program.

Here is a sample of code that runs a while loop and accepts user input to control the bot:

# Process user input
while (1) {
    print "> ";
    my $input = <STDIN>;
    chomp $input;

    next if $input =~ /^\s*$/;

    # Train the model with files
    if ($input =~ /^train/i) { train_model(); }

    # Print parameter counts for each ngram and total
    elsif ($input =~ /^count/i) { count_model(); }

    # List training files
    elsif ($input =~ /^corpus/i) { list_corpus(); }

    # Print the model data
    elsif ($input =~ /^sample (.+)/i) { sample_model($1); }

    # Save the trained model to a file
    elsif ($input =~ /^save (.+)/i) { save_model($1); print "Model saved.\n"; }

    # Load a trained model from a file
    elsif ($input =~ /^load (.+)/i) { load_model($1); }

    # List brain files
    elsif ($input =~ /^brains/i) { list_brains(); }

    # Clear the screen, move cursor to top
    elsif ($input =~ /^clear/i) { print "\e[2J"; print "\e[H"; }

    # Display the help text
    elsif ($input =~ /^help/i) { load_help(); }

    # Exit the program
    elsif ($input =~ /^exit/i) { last; }

    # Anything else is treated as a prompt for the bot
    else {
        my $output = generate_response($input);
        print "$output\n";
    }
}
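The while loop above also calls generate_response, which is not listed in this section. Purely to illustrate how the stored 2-gram weights could drive a reply (this is a simplified assumption, not our actual routine), a minimal version might take the last word of the input and keep picking the heaviest matching 2-gram:

# Simplified sketch of a weight-driven response generator
# (illustration only; assumes the global %weights built by train_model
#  and a corpus that was lowercased during cleanup)
sub generate_response_sketch {
    my ($input) = @_;
    my @input_words = split /\s+/, lc $input;
    my $current = $input_words[-1];
    my @reply;

    for (1 .. 12) {                                  # cap the reply length
        my ($best, $best_weight) = (undef, -1);
        foreach my $ngram (keys %weights) {
            next unless $weights{$ngram}{n} == 2;    # only look at 2-grams
            my ($first, $second) = split /\s+/, $ngram;
            next unless defined $second && $first eq $current;
            if ($weights{$ngram}{weight} > $best_weight) {
                ($best, $best_weight) = ($second, $weights{$ngram}{weight});
            }
        }
        last unless defined $best;                   # no continuation found
        push @reply, $best;
        $current = $best;
    }

    return @reply ? join(' ', @reply) : "I do not know enough about that yet.";
}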
						

The count command counts the total number of N-grams produced by training and displays it as the reference number for training parameters, along with a breakdown of the totals per N-gram size.

# display current model params
sub count_model {
    my $count_2grams = 0;
    my $count_3grams = 0;
    my $count_4grams = 0;
    my $count_all = 0;

    foreach my $ngram (keys %weights) {
        my $n = $weights{$ngram}->{n};
        if ($n == 2) {
            $count_2grams++;
        } elsif ($n == 3) {
            $count_3grams++;
        } elsif ($n == 4) {
            $count_4grams++;
        }
        $count_all++;
    }

    print "2-grams: $count_2grams\n";
    print "3-grams: $count_3grams\n";
    print "4-grams: $count_4grams\n";
    print "Total ngrams: $count_all\n";
}
					

The sample command provides samples from each "N" category of word-grams. For example, if we create a training set with 2-gram and 3-gram data and then run "sample 2", we get two randomly selected N-grams from each of the 2-gram and 3-gram datasets, for a total of 4 samples (two of each).

# display random samples from the current model, grouped by n-gram size
sub sample_model {
    my ($num_samples) = @_;

    # Group the stored n-grams by their size (2, 3, 4, ...)
    my %by_n;
    foreach my $ngram (keys %weights) {
        push @{ $by_n{ $weights{$ngram}{n} } }, $ngram;
    }

    # Print $num_samples random n-grams from each size group
    foreach my $n (sort keys %by_n) {
        my @ngrams = @{ $by_n{$n} };
        for (1 .. $num_samples) {
            my $ngram = $ngrams[ int(rand(scalar @ngrams)) ];
            print "$ngram: $weights{$ngram}{weight}\n";
        }
    }
}

The train command simply tells the bot to look at the predefined directory containing corpus data and generate N-grams based on the user's preference.

# Train model from corpus data in $dir
sub train_model {
    my $corpus = "";
    opendir(my $dh, $dir) || die "Can't open directory: $!";
    my $num_files = scalar(grep { !/^\.\.?$/ && !-d "$dir/$_" } readdir($dh));
    my $processed_files = 0;
    rewinddir($dh);

    # Read every corpus file into one long string
    while (my $file = readdir $dh) {
        next if ($file =~ /^\.\.?$/);
        next if (-d "$dir/$file");
        $processed_files++;
        print "Processing file $processed_files/$num_files: $file\n";

        open(my $fh, "<", "$dir/$file") || die "Can't open file: $!";
        while (my $line = <$fh>) {
            chomp $line;
            $corpus .= " $line";
        }
        close $fh;
    }
    close($dh);

    my @words = split(/\s+/, $corpus);

    # Build the n-grams: count each occurrence and record its size and prefix
    for my $n (2) {    # add 3, 4 here for larger n-gram sets
        for (my $i = 0; $i < scalar(@words) - $n + 1; $i++) {
            my $ngram = join(' ', @words[$i .. $i + $n - 1]);
            $weights{$ngram}{weight}++;
            $weights{$ngram}{n}      = $n;
            $weights{$ngram}{prefix} = join(' ', @words[$i .. $i + $n - 2]);
        }
    }

    # Normalise each raw count by the count of its prefix (defaulting to 1)
    my $num_ngrams = scalar(keys %weights);
    my $processed_ngrams = 0;
    foreach my $ngram (keys %weights) {
        $processed_ngrams++;
        my $prefix = $weights{$ngram}->{prefix};
        # Avoid autovivifying missing prefixes; fall back to a denominator of 1
        my $denominator = exists $weights{$prefix} ? ($weights{$prefix}{weight} || 1) : 1;
        $weights{$ngram}->{weight} = $weights{$ngram}->{weight} / $denominator;

        my $progress = int($processed_ngrams/$num_ngrams * 100);
        print "Processing ngram $processed_ngrams/$num_ngrams ($progress%)\r";
    }

    print "\nTraining Complete.\n";
    return \%weights;
}

The save command stores the current training set as "X", where X is the file name you specify.

# Save weights to file
sub save_model {
    my ($filename) = @_;
    $filename = "Model-" . $filename . ".txt";

    open(my $fh, '>', $filename) or die "Could not open file '$filename': $!";

    # Write one "ngram:weight" pair per line
    foreach my $ngram (keys %weights) {
        print $fh "$ngram:$weights{$ngram}->{weight}\n";
    }

    close($fh);
}
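For reference, each line of the brain file is just the n-gram key and its weight separated by a colon, so a saved file (hypothetical name and values shown here) contains lines like:

this is:2
is a:2
a second:1
sentence for:2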
					

The load command loads a previously saved training file, where X specifies the model name entered by the user, so we do not have to repeat the training process.

# Load weights from file
sub load_model {
    my ($filename) = @_;
    $filename = "Model-" . $filename . ".txt";

    open(my $fh, '<', $filename) or die "Can't open file $filename for reading: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($ngram, $weight) = split(/:/, $line);

        # Rebuild the n-gram size and prefix from the key itself
        my @ngram_words = split(/\s+/, $ngram);
        my $n = scalar @ngram_words;
        $weights{$ngram} = {
            weight => $weight,
            n      => $n,
            prefix => join(' ', @ngram_words[0 .. $n - 2]),
        };
    }
    close($fh);

    print "$filename has been loaded.\n";
    return \%weights;
}

Here are the basic requirements to command and control the A.I.


 These are the minimum requirements we used to train, save, load, and interact with our A.I.



Scaling A.I.