Strong AI in 2020? No

Strong AI is a scientific term for an AI capable of fully substituting for a human, the kind often seen in Hollywood movies where machines defeat humans. In 2020, we took a solid step towards Strong AI, but it is still not here.


Wednesday, November 11, 2020

As you probably know, there are two main types of AI: Strong AI and Weak AI. Strong AI is the type often seen in Hollywood movies, where intelligent machines act like humans, solving an unrestricted range of simple and complex tasks: from chatting and dancing to conquering the universe.

Weak AI refers both to machine learning and to the other algorithms that stand behind most of the intelligent tools we use daily: search engines, chatbots, route navigators or cybersecurity solutions.

So far, it has been evident that Strong AI isn’t coming any time soon: it is an extremely ambitious goal, and it is not even known for certain whether it can be achieved at all.

The arrival of a super-algorithm in 2020?

But wait. Here is a list of cool things that one algorithm, developed in 2020, is capable of doing:

1. It can be a [junior] front-end programmer

It can build an HTML layout and even a simple web application from a description in natural language. It can use Figma to create a whole site and produce React code, yes, based only on a verbal description from its boss.

2. It can be a [junior] system administrator and IT manager

Yes, it can write SQL and program in Python. System administration is not a problem for this AI either.

You don’t have to take our word that it can do all of the above: you can interview it before hiring. It won’t be upset if it fails: it can recreate itself into a [probably] smarter self.

3. It may replace your [junior] data analyst

You can ask it questions in natural language and get answers with links to Wikipedia. Yes, Google can also do this, but can it design sites? OK, answering questions probably isn’t something to be excited about, but what if it can add information to a table? No, you don’t have to tell it what kind of information is needed: it can figure that out by itself.

Once the data is in, you can ask it to draw a chart or a graph and update it whenever you need.

And finally, if you cannot remember the title of a film, you can ask it simply by describing the basics of the plot.

4. It can be your attorney, secretary or even any of the Big Five personalities

Of course, it can translate (no surprise, yeah?). But it can also act as an attorney and decipher legal language for you. It can write letters based on key points and make your rants more polite.

Want to absorb some wisdom from books you don’t have time to read? The algorithm can summarize them. Or you can even make it rewrite texts in the style of any of the Big Five personalities.

Be careful when you select a lawyer, however, as cutting costs here can trigger disastrous consequences and long-lasting legal ramifications.

5. It can be a columnist for The Guardian

Yes, here is an article written by AI that turned out to be worth publishing in one of the major British newspapers. So you can imagine that writing Google ads and other marketing materials shouldn’t be a problem for this AI either. Even memes.

By the way, do you know how to recruit board members? No? Ask AI.

6. It can teach you something and draw conclusions

For example, it can tell you what coding is and what it means to code well (the AI generated a presentation on this). Or it can test students in different subjects. AI can provide advice to doctors and elaborate on physics or math problems. Both are examples of reasoning, which is difficult to implement, and difficult even for some humans. It is possible to fool the AI with stupid questions, but once it knows that not every question deserves an answer, it can cope with them. And, of course, it can play chess pretty well.

Of course, one may say that some (if not all) of these functions have been implemented by other algorithms before. However, you should not forget that all of the above is done by a single model, built by a single algorithm. Apparently, it can do a lot, it can even draw pictures!

So, isn’t it the all-mighty Strong AI? No.

Introduction to GPT-3 and how it works

The name of the algorithm is GPT-3 (Generative Pre-trained Transformer 3), created by OpenAI. Some people even deny that it belongs to AI, as it is simply a language model. A gigantic language model: it cost 355 GPU-years and $4.6m to train.

But what is a language model? It is a class of algorithms that predict the next token (e.g. a word) given some preceding context. For example, all of us can predict the next word in this line: “To be, or not to be, that is the ...”. Simple? Yes. And there exist a lot of such models. The main difference of GPT-3 is that it is truly gigantic: it was trained on 570 GB of text from the Internet, and the resulting model ‘weighs’ 700 GB. The model has 175bn parameters (compare with the second-biggest model, by Microsoft, which has ‘only’ 17bn parameters). The ‘context window’ is 2048 tokens, which means it predicts the next word taking into account up to 2048 preceding tokens.
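
To make the idea concrete, here is a minimal sketch of a language model: a toy bigram model in Python that predicts the next word from a one-word context. The corpus and counting scheme are purely illustrative and have nothing to do with how GPT-3 is built, apart from the shared goal of predicting the next token.

from collections import Counter, defaultdict

# Toy training text; real language models are trained on hundreds of gigabytes.
corpus = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer").split()

# Count which word follows which: a bigram model, i.e. a context window of
# 1 token (GPT-3's context window is 2048 tokens).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # Return the continuation most frequently seen after `word` in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # 'question' (ties resolve by first occurrence)
print(predict_next("to"))   # 'be'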

Language models have been around for many years, and the approach is rather trivial. What makes GPT-3 exceptional is its size, which, as shown above, matters.

And it matters a lot, as it seems that the growth in size is causing a paradigm shift in machine learning.

I would like to cite Gwern Branwen (a writer and independent researcher):

“The GPT-3 neural network is so large a model in terms of power and dataset that it exhibits qualitatively different behavior: you do not apply it to a fixed set of tasks which were in the training dataset, requiring retraining on additional data if one wants to handle a new task (as one would have to retrain GPT-2); instead, you interact with it, expressing any task in terms of natural language descriptions, requests, and examples, tweaking the prompt until it “understands” & it meta-learns the new task based on the high-level abstractions it learned from the pretraining. This is a rather different way of using a DL model, and it’s better to think of it as a new kind of programming, where the prompt is now a “program” which programs GPT-3 to do new things. “Prompt programming” is less like regular programming than it is like coaching a super-intelligent cat into learning a new trick: you can ask it, and it will do the trick perfectly sometimes, which makes it all the more frustrating when it rolls over to lick its butt instead—you know the problem is not that it can’t but that it won’t.”
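
To illustrate what “prompt programming” can look like in practice, here is a minimal sketch using the Python client OpenAI offered at the time; the engine name, prompt wording and parameters are our own illustrative assumptions, not taken from GPT-3’s documentation or from the quote above.

# A minimal sketch of prompt programming: the prompt itself is the "program".
# Assumes the 2020-era openai Python client and a valid API key; the engine
# name, prompt and parameters below are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A few-shot prompt that "programs" the model to rewrite rants politely,
# with no retraining or fine-tuning involved.
prompt = (
    "Rewrite each message politely.\n"
    "Message: This report is garbage, redo it.\n"
    "Polite: Could you please revise the report? A few sections need more work.\n"
    "Message: Stop emailing me about the meeting.\n"
    "Polite:"
)

response = openai.Completion.create(
    engine="davinci",   # one of the original GPT-3 engines
    prompt=prompt,
    max_tokens=40,
    temperature=0.3,
    stop=["\n"],
)
print(response.choices[0].text.strip())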

GPT is evidence of the effectiveness of Transformer-based algorithms, a family of ML architectures that constitutes an important milestone in machine learning and is rapidly replacing previous approaches, especially recurrent neural networks.

In its recent research, Emerging Technologies and Trends Impact Radar: Artificial Intelligence, Gartner noted the importance of transformer-based language models, as they “deliver substantial improvements in advanced text analytics and all the related applications such as conversational user interfaces, intelligent virtual assistants and automated text generation”.

The first milestone in recent ANN-based AI was the introduction of distributed vector representations of words learned from context, most notably word2vec by Tomáš Mikolov and colleagues, building on earlier neural language models by Yoshua Bengio. This unsupervised method learns semantic vectors of words from plain text and has served as a starting point for many machine learning algorithms.
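
As an illustration, here is a minimal sketch of training word vectors on a tiny corpus with the gensim library (gensim and the toy sentences are our own assumptions; the original word2vec implementation and its training corpora were different and far larger).

# A minimal word2vec sketch using gensim (4.x API); the toy corpus is
# illustrative only; real embeddings are trained on billions of tokens.
from gensim.models import Word2Vec

sentences = [
    ["the", "attacker", "exploited", "a", "web", "vulnerability"],
    ["the", "pentester", "found", "a", "web", "vulnerability"],
    ["the", "analyst", "reviewed", "the", "security", "report"],
]

# Unsupervised training: each word gets a dense vector based on the contexts
# it appears in.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=200)

# Words that share contexts ("attacker"/"pentester") end up with similar vectors.
print(model.wv.most_similar("attacker", topn=3))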

The second significant breakthrough is transformer-based algorithms, which stand behind GPT-X, BERT, BART and several other implementations.

So why isn’t it a Strong AI yet?

As its name suggests, Artificial Intelligence is supposed to model natural intelligence. Of course, it is not necessary to mimic human physiology (‘neurons’ in neural networks are not models of human neurons, they are just a metaphor), but it needs to be capable of working similarly to natural (human) intelligence.

But GPT-3 has essentially memorized a huge amount of textual data: it does not understand it, it does not truly infer knowledge from it, it does not think. Of course, it is much more sophisticated than a simple text database. It ‘understands’ the meaning of words by utilizing contextualized representations of words (‘embeddings’) and does not need an exact match of the context to be able to predict the next word. It needs similar words in similar contexts to make a good guess at what comes next.
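
To make the point about embeddings concrete, here is a minimal sketch with hand-made toy vectors (the numbers are invented for illustration): words used in similar contexts get nearby vectors, so the model can generalize from contexts it has seen for one word to a similar word it has never seen in exactly that context.

import numpy as np

# Hand-made, purely illustrative 3-dimensional 'embeddings'.
embeddings = {
    "king":   np.array([0.90, 0.80, 0.10]),
    "queen":  np.array([0.85, 0.82, 0.15]),
    "banana": np.array([0.10, 0.05, 0.90]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, near 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))   # close to 1.0
print(cosine(embeddings["king"], embeddings["banana"]))  # much lower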

In practice, this means that despite being very ‘realistic’, it lacks truly intelligent capabilities. For example, in some text generation tests it produced nicely coherent text but was very inaccurate about facts and sometimes repeated itself. Yes, it can generate nice abstract essays, but one needs to do a lot of fact-checking to make sure the text is worth reading.

How we use AI at ImmuniWeb

At ImmuniWeb, we follow the latest and most innovative research in AI and machine learning, and we experiment a lot with new algorithms. We ensure that our existing and newly deployed algorithms solve practical problems by learning work patterns from our penetration testers and security analysts.

We use a wide spectrum of smaller algorithms to train different interrelated ML models that gradually capture the knowledge and experience of our cybersecurity experts for intelligent automation of laborious tasks: from fuzzing and WAF bypass to data analysis on the Dark Web and risk scoring.

We mostly use Deep Learning, paired with classical machine learning algorithms, to automate tasks and processes of medium and high complexity, while leaving humans to handle the most sophisticated tasks that require human ingenuity and cognition. In sum, we bring the best of both worlds, human and machine, to our customers, delivering the best value for money paired with technical excellence.

